Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense
Accept (poster)
Summary: This paper proposes a denoising pre-training framework, NIM, similar to MIM, to improve adversarial defense ability. In particular, the input image is a noisy version of the original. The pre-training goal is to do denoising. Strengths: This paper is easy to follow. Weaknesses: * a) Novelty is limited. The idea proposed in this work is very similar to CIM [1]. Compared to MIM, this work changes the masking operation to adding noise. * b) Performance is incremental with limited applicability. The proposed De3, as a pure self-supervised pre-training approach, does not outperform MIM baselines (MAE and SimMIM). For downstream adversarial robustness tests, De3 shows improvement over MAE and SimMIM, which is not surprising since the pre-training goal is to denoise. * c) Datasets and backbones used in this work to validate performance are scarce. It is suggested that more datasets, downstream tasks, and backbones be evaluated. **References** [1] Fang, Yuxin, et al. "Corrupted image modeling for self-supervised visual pre-training." ICLR 2023. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We'd like to address your concerns: > Novelty is limited. The idea proposed in this work is very similar to CIM [1]. There are several major differences between our work and [1], including 1) Purpose: Our goal is to explore how the generative pretraining paradigm can provide adversarial robustness beyond pretrained visual features, while the focus of [1] is to learn better pretrained visual representations. 2) Method: We propose a novel method that utilizes the strong denoising ability of NIM models to remove the adversarial perturbation, while [1] proposes to adopt an additional generator with a small trainable BEiT to degrade the input image. 3) Conclusions: We show that NIM is able to achieve a strong and tunable accuracy-robustness trade-off that MIM models are unable to match, indicating the superiority of NIM over MIM in terms of adversarial robustness. In contrast, [1]'s authors demonstrate that both ViTs and CNNs can learn rich visual representations using a unified framework. Overall, the core novelty of our work lies in the proposed methods that effectively leverage pretrained NIM models to provide adversarial defense to downstream models, which is recognized by the other reviewers. > Performance is incremental with limited applicability. We respectfully disagree that our method's performance is incremental. As explicitly stated in our title, this work aims to go beyond pretrained features and see if pretrained models can provide adversarial robustness to downstream models. We show that our proposed method achieves a significant improvement in robustness against the PGD attack of 34.36%, with a marginal clean accuracy drop of 4.37%. Even without the $De^3$ defense, the NIM model's robust accuracy against the FGSM attack is 3.37% higher than MAE's, while the clean accuracy of NIM-MAE is only slightly lower, by 0.47%. 
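To make the flooding idea concrete, here is a minimal numpy sketch of the $De^3$ inference pipeline; `denoiser` and `classifier` are hypothetical stand-ins for the pretrained NIM encoder-decoder and the fine-tuned downstream classifier, not the authors' actual implementation:

```python
import numpy as np

def de3_defend(x_adv, denoiser, classifier, sigma=70.0, rng=None):
    # Flood the small, bounded adversarial perturbation with much larger
    # Gaussian noise, then let the pretrained denoiser remove both
    # together before the downstream classifier sees the image.
    rng = rng or np.random.default_rng()
    noisy = np.clip(x_adv + rng.normal(0.0, sigma, size=x_adv.shape), 0.0, 255.0)
    return classifier(denoiser(noisy))

# Toy stand-ins: an identity "denoiser" and a mean-pooling "classifier".
x_adv = np.full((32, 32, 3), 128.0)  # a flat gray image, pixel range [0, 255]
score = de3_defend(x_adv, denoiser=lambda z: z,
                   classifier=lambda z: float(z.mean()),
                   rng=np.random.default_rng(0))
```

Because $\sigma$ is chosen freely at inference time, the same pretrained denoiser yields the tunable accuracy-robustness trade-off described above.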
> De3 shows improvement over MAE and SimMIM, which is not surprising since the pre-training goal is to denoise. First, we would like to argue that setting the target improvement as the training goal is a natural and reasonable idea in machine learning research, which does not diminish the work's value. For example, adversarial training [2] incorporates adversarial examples in training. While it might seem "unsurprising" that adversarially trained models are robust to adversarial examples, the importance of this work has been universally acknowledged. In the context of our work, we believe what matters is that NIM's improvement over MIM is obtained under fair settings: from pretraining and fine-tuning to evaluation, we ensure everything is the same and the only variable is the degradation method. On the other hand, we believe our results are interesting in that our training goal is not exactly the target: we train the model to denoise *Gaussian noise*, instead of specific *adversarial noise*. Therefore, at inference time, we propose to add Gaussian noise whose magnitude is much larger than the imperceptible adversarial noise, so the latter can be flooded and removed along with it. From this perspective, the idea is interesting and instructive, as appreciated by fellow reviewers. > Datasets and backbones used in this work to validate performance are scarce. Following the suggestion, we conducted another experiment on CIFAR-10, based on the implementation of [3]. We set the attack radius $\epsilon = 8/255$, and other settings remain the same as in Table 1 of our main paper. 
The results are as follows:

| Model | MAE | MAE | NIM-MAE | NIM-MAE | NIM-MAE |
| --- | --- | --- | --- | --- | --- |
| $De^3$ | None | $\gamma$=0.75 | None | $\sigma$=40 | $\sigma$=70 |
| Clean | **89.88** | 78.36 | 88.31 | 81.09 | 76.30 |
| FGSM | 17.51 | **65.13** | 21.66 | 43.26 | 48.86 |
| PGD-10 | 0.01 | 22.50 | 2.77 | 32.24 | **40.26** |
| AA | 0.00 | 6.220 | 0.00 | 31.49 | **41.20** |

Interestingly, we observe that unlike on ImageNet, MAE provides a stronger defense against FGSM on CIFAR-10. This is likely because 1) for low-resolution images, MAE can generate good reconstructions from masked images; 2) when using MAE in $De^3$, 75% of adversarial perturbations will be masked, and if the attack is weak like FGSM, the rest of the perturbation would be too weak to break the model. However, for stronger attacks, NIM still provides better defense than MAE. Regarding more downstream tasks, we would like to remind the reviewers that implementing adversarial attacks for tasks other than classification is not trivial [4, 5] and is beyond the scope of this paper. To the best of our knowledge, researchers normally conduct experiments only on image classification to show the effectiveness of an adversarial defense [6, 7, 8]. Regarding the backbones, we would like to draw attention to Figure 1(a) in the supplementary material, where we have demonstrated that our method is scalable and effective with larger backbones. --- [1] Fang, et al. Corrupted image modeling for self-supervised visual pre-training. ICLR 2023. [2] Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR 2018. [3] https://github.com/IcarusWizard/MAE. [4] Croce, et al. Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models. 2023. [5] Agnihotri, et al. CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. 2023. [6] Rebuffi, et al. Revisiting adapters with adversarial training. ICLR 2023. [7] Mo, et al. 
When adversarial training meets vision transformers: Recipes from training to architecture. NeurIPS 2022. [8] Yoon, et al. Adversarial Purification with Score-based Generative Models. ICML 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. However, I still have the following concerns: * CIM is a very representative work in the MIM domain. Since it is very similar to the method proposed by the authors, at least it should be cited. Moreover, different research purpose does not distinguish the two works. For instance, applying the idea of MAE to audio pre-training is an application but not an innovation. * The pre-training objective of CIM is to denoise the corrupted input image, which is exactly the same idea proposed by the authors in this work. However, although the architectural designs and corruption approaches might be different, the underlying idea is the same. * Adding Gaussian noise to the input image and asking the model to denoise during pre-training improves the model's robustness towards noise in general. * More results on datasets with various scales and types are required to draw solid conclusions. For instance, IN-21k, iNaturalist, COCO etc. With these concerns, I will stand by my rating. --- Reply to Comment 1.1.1: Title: Further discussion with Reviewer qarJ (Part 1/2) Comment: > CIM is a very representative work in the MIM domain. Since it is very similar to the method proposed by the authors, at least it should be cited. While we acknowledge that CIM is a representative MIM variant, we would like to emphasize again that it has little similarity with our proposed method. In CIM, the authors proposed to improve MIM by using an auxiliary trainable BEiT to degrade the images. In contrast, our paper not only alters the degradation technique but also presents a novel framework that leverages the pretrained model to defend against adversarial attacks beyond merely using pretrained representations. 
In Section 2.1 (lines 78-85), we've already distinguished our work from other research employing different degradation methods and discussed their inspiration for our exploration into NIM. In our final paper, we will cite CIM and include the discussion here for a more comprehensive related works section. > Moreover, different research purpose does not distinguish the two works. For instance, applying the idea of MAE to audio pre-training is an application but not an innovation. First, we would like to argue that different research purposes usually do distinguish two works. Taking the example of Vision Transformer [1], the authors claimed that they “have explored the direct application of Transformers to image recognition” (see Conclusions in [1]). Yet the paper has become one of the most important works in machine vision. Another good example is “a conceptually simple extension of Masked Autoencoders (MAE) to spatiotemporal representation learning from videos” [2], which was accepted by this venue last year and became influential in its domain. Moreover, our paper is clearly not a simple application of CIM (or any other MIM variants) to the field of adversarial robustness. Regarding the example proposed by the reviewer, it is imaginable that a completely direct application of MAE to audio could result in unexciting work, but in the case of our work, the vast disparity between the domains of adversarial robustness and image recognition makes it unlikely to directly apply CIM or its concepts to adversarial robustness. In light of these factors, we insist that our work is very different from CIM. > The pre-training objective of CIM is to denoise the corrupted input image, which is exactly the same idea proposed by the authors in this work. However, although the architectural designs and corruption approaches might be different, the underlying idea is the same. A closer look at CIM reveals that its objective diverges from ours. 
CIM studied two pre-training objectives, one of which is to recover the images corrupted by a small trainable BEiT generator where a pre-trained frozen image tokenizer encoder and decoder are involved, and the other is a discriminative one. Notably, terms like "denoise" or "noise" aren't explicitly mentioned within the CIM paper. In sum, the objective of CIM is quite different from “denoising the corrupted input image”. It's worth noting that even if NIM shares some similarities with CIM, equating the two as having the "same idea" is an oversimplification. If one were to perceive CIM and NIM as identical in conception, then by extension, many MIM variants, like MFM [4] and even CIM, would merely be replicas of the original MIM concept and, therefore, lack novelty. Even MIM could be seen as not novel because the idea is the “same” as MLM. However, this isn't the prevailing perspective in our scientific community, and rightfully so. As Professor Michael Black aptly articulated [3], “Taking an existing network and replacing one thing is better science than concocting a whole new network just to make it look more complex.” Furthermore, it's imperative to highlight that our work isn't solely confined to NIM. We further propose $De^3$, a novel framework that effectively leverages pretrained NIM models to provide adversarial defense to downstream models. This is a significant contribution of our work, recognized by other reviewers as a novel and interesting method. --- [1] Dosovitskiy et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR 2021. [2] Feichtenhofer et al. Masked Autoencoders As Spatiotemporal Learners. NeurIPS 2022. [3] Black. Novelty in Science: A guide for reviewers. https://perceiving-systems.blog/en/post/novelty-in-science. 
--- Reply to Comment 1.1.2: Title: Further discussion with Reviewer qarJ (Part 2/2) Comment: > Adding Gaussian noise to the input image and asking the model to denoise during pre-training improves the model's robustness towards noise in general. We'd like to reference Michael's blog [3] again, where he talks about the relationship between novelty and surprise: “The novelty, however, must be evaluated *before* the idea existed… If it is easy to explain and obvious in hindsight, this in no way diminishes the creativity (and novelty) of the idea.” To the best of our knowledge, the idea of using ‘adding-noise-then-denoising’ pretraining to enhance the model's robustness against adversarial noise has not been proposed before. Therefore, we argue that the idea of our work is novel even if it may seem obvious in hindsight. > More results on datasets with various scales and types are required to draw solid conclusions. For instance, IN-21k, iNaturalist, COCO etc. Following your suggestion, we have shown in the rebuttal that our method is effective on CIFAR-10 besides ImageNet. We humbly disagree that it is necessary to experiment on more datasets to prove the solidity of a method. Previously published studies often substantiate their claims based on results from a handful of datasets. For example, the recent work [5] we compared in Section 5.3 shows empirical results on CIFAR-10 and Imagenette (a subset of 10 classes from ImageNet-1K). Another related work [6] conducts experiments solely on ImageNet. Similarly, an earlier work on adversarial robustness [7] also only used ImageNet to show empirical results. Given these precedents, we believe that our choice of ImageNet and CIFAR-10 - both of which are well-regarded and commonly used datasets in the community - provides a sufficient foundation to support our conclusions. --- [3] Black. Novelty in Science: A guide for reviewers. https://perceiving-systems.blog/en/post/novelty-in-science. [4] Xie et al. 
Masked Frequency Modeling for Self-Supervised Visual Pre-Training. ICLR 2023. [5] Mo et al. When adversarial training meets vision transformers: Recipes from training to architecture. NeurIPS 2022. [6] Kong and Zhang. Understanding Masked Image Modeling via Learning Occlusion Invariant Feature. CVPR 2023. [7] Xie et al. Feature Denoising for Improving Adversarial Robustness. CVPR 2019. --- Rebuttal 2: Title: Further Discussion with Reviewer qarJ Comment: Dear Reviewer qarJ, We genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights. Please let us know if any aspect of our work remains unclear or if you have additional feedback. Thank you. Warm regards, Authors --- Rebuttal 3: Title: Please take a look at authors' responses and other reviewers' comments Comment: Dear Reviewer, Please take a look at the authors' responses and other reviewers' comments. Thank you very much. BTW, for a solid review, it would be better to give more details to make a decision. The current review seems to be short.
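As a concrete illustration of the "adding-noise-then-denoising" pretraining objective discussed in this thread, here is a schematic numpy sketch of one NIM pretraining step; `model` is a hypothetical stand-in for the ViT encoder-decoder, and the per-image noise level follows the $\Gamma(25, 3)$ sampling mentioned in the rebuttal:

```python
import numpy as np

def nim_pretraining_step(images, model, rng):
    # Draw one noise level per image from Gamma(shape=25, scale=3)
    # (mean sigma = 75), corrupt the clean images with Gaussian noise,
    # and score the model's reconstruction against the clean target (MSE).
    sigmas = rng.gamma(25.0, 3.0, size=len(images))
    noisy = np.stack([img + rng.normal(0.0, s, size=img.shape)
                      for img, s in zip(images, sigmas)])
    recon = model(noisy)
    return float(np.mean((recon - images) ** 2))

# With an oracle "model" that returns the clean (all-zero) images,
# the denoising loss is exactly zero.
imgs = np.zeros((4, 8, 8, 3))
loss = nim_pretraining_step(imgs, model=lambda z: np.zeros_like(z),
                            rng=np.random.default_rng(0))
# loss == 0.0
```

Unlike MIM's masking, the corruption here preserves every pixel location but buries it in noise, which is what later makes the decoder usable as a purifier at inference time.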
Summary: The authors introduce noisy image modeling (NIM) as a self-supervised pretext task and demonstrate that their encoder-decoder architecture decreases the success of adversarial attacks by adding noise to a perturbed image and then denoising it. Their method is called $De^3$. Furthermore, the authors conduct a major comparison with masked image modeling (MIM) regarding capabilities against adversarial attacks. Strengths: - This paper is well written and easy to understand. - "Flooding out" adversarial attacks is an interesting idea. - The method utilizes the usually unused decoder for "cleaning" adversarial images. - $De^3$, as an adversarial prevention method, can be applied without generating much computation overhead during inference or training. - NIM with $De^3$ can use a dynamic trade-off between clean accuracy and the vulnerability to adversarial images. - On-par results with MIM while doing adversarial training. Weaknesses: 1. Table 1/Figure 3: - The "flooding out effect" for very large sigma (140) seems not reasonable, since it corrupts the whole image. How strong is the flooding in comparison to the adversarial perturbation? I think it is necessary to show the scale difference of (i) the adversarial attacks and (ii) the added noise. - I am a little bit surprised by the denoising performance with high noise levels (see Figure 3 (f) - sigma 140): the reconstruction seems "too good to be true", where it even recovers the smallest features. This seems odd despite other efforts [1]. I think it needs further extensive investigation: (i) when does the model break in terms of reconstruction quality? What happens when you use an even larger value for sigma? (ii) when does the adversarial defense fail when a high noise level (sigma 100, 150, 200) is used? 2. Figure 2 (a) needs captions for the image rows/columns to understand it better. 3. Table 1: Inconsistent use of bold highlighting? 4. 
Table 1: I am not sure if this is a fair comparison, since NIM utilizes its built-in adversarial prevention while MAE/SimMIM (MIM) has none (gamma is not implicitly designed for dealing with adversarial images). 5. line 264-268: From the reader's perspective, it seems that the authors tried to achieve state-of-the-art with $De^3$ but failed, so they redirect their writing to fit a different goal. I suggest, the authors omit the sentences about not wanting to achieve SOTA. 6. Figure 5: I suggest to use more symbols for different sigmas. Differentiate between fixed/random sigma and MAE baseline with triangle and then use square/diamond/circle. 7. Figure 5: Ordering of the entries in the legend can be improved. References: [1] Mahdaoui, Assia El, Abdeldjalil Ouahabi, and Mohamed Said Moulay. "Image denoising using a compressive sensing approach based on regularization constraints." Sensors 22.6 (2022): 2199. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Do you think that there is a way to construct strong adversarial examples if the attacker knows that the NIM architecture is used, despite the noise randomness? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Here are our responses: > The "flooding out effect" for very large sigma (140) seems not reasonable, since it corrupts the whole image. > The reconstruction seems "too good to be true". This seems odd despite other efforts [1]. Given that noisy images with large sigma become completely meaningless to humans, it is understandable that one may think the reconstruction is “too good to be true”. However, the experimental results show that ViTs are indeed capable of removing intense Gaussian noise. Note that we are denoising artificial Gaussian noise, which is much easier than the real-world unknown noise in [1]. Table 2 (c) in [2] shows a similar result: Top-1 accuracy for $\sigma=100$ is only 0.1% lower than that for the best $\sigma=75$. We think the main reason that highly noisy images are meaningless to humans is that human eyes have limited capabilities, rather than that the information is completely corrupted. > How strong is the flooding in comparison to the adversarial perturbation? I think it is necessary to show the scale difference. In Table 1, when the images are not normalized (the range is [0, 255]), the Gaussian noise's $\sigma$ is 70 (or 140), and the budget of adversarial perturbations is 4. In other words, the magnitude of the Gaussian noise we added is about 70/4=17.5 (or 35) times the adversarial noise. Thank you for this valuable advice, and we will add this scale difference to our final paper. > When does the model break in terms of reconstruction quality? What happens when you use an even larger value for sigma? When does the adversarial defense fail when a high noise level (sigma 100, 150, 200) is used? When using larger values for sigma, the clean accuracy will decrease, and the robust accuracy will also decrease when it gets too large. For example, in Figure 4, accuracy against FGSM gets lower when the $\sigma$ is larger than 100. 
When $\sigma$ gets too high (e.g., 200 or 250), the reconstruction quality will be bad, leading to low clean and robust accuracy. Here are the results when using very large $\sigma$ in $De^3$:

| $De^3$ | $\sigma=150$ | $\sigma=200$ | $\sigma=250$ |
| --- | --- | --- | --- |
| Clean | 69.09 | 61.58 | 36.84 |
| FGSM | 52.19 | 47.24 | 28.28 |
| PGD-10 | 39.61 | 37.37 | 23.63 |

> Figure 2 (a) needs captions for the image rows/columns to understand it better. Thank you for the advice. In the top row, the four images are noisy images of $\sigma$=0 (original image), 50, 75, and 100, respectively. The middle row shows four images reconstructed from the corresponding noisy images in the top row by the NIM model whose $\sigma$ in pretraining is globally set to 75, and the bottom row shows four images reconstructed from the corresponding noisy images in the top row by the NIM model whose $\sigma$ in pretraining is randomly sampled from $\Gamma(25, 3)$. We will make it clearer in the final version. > Table 1: Inconsistent use of bold highlighting? We highlight the highest performance for each setting among variants of the same MIM models. In other words, **83.05** is highlighted because it is the highest clean accuracy achieved by all *MAE* or *NIM-MAE* methods, and **51.84** is highlighted because it is the highest accuracy against FGSM achieved by all *SimMIM* or *NIM-SimMIM* methods. > Table 1: I am not sure if this is a fair comparison, since NIM utilizes its built-in adversarial prevention while MAE/SimMIM (MIM) has none. We'd like to clarify that NIM doesn't have any "built-in adversarial prevention", either, because it is trained to denoise *Gaussian noise*, instead of specific *adversarial noise*. The comparison is fair because, from pretraining and fine-tuning to evaluation, we ensure everything is the same and the only variable is the degradation method. > line 264-268: I suggest, the authors omit the sentences about not wanting to achieve SOTA. 
Our intention in writing these sentences was to prevent readers from inferring, from the comparison with adversarial training methods, that our purpose was to chase the SOTA. We included the comparison to provide readers with a comprehensive understanding of our method's effectiveness. As stated in the Introduction, our primary goal is to explore how the generative pretraining paradigm can provide adversarial robustness beyond pretrained visual features, and our main takeaway for the community is that NIM can achieve a strong and tunable accuracy-robustness trade-off that MIM models are unable to match, indicating the superiority of NIM over MIM in terms of adversarial robustness. However, taking your feedback into account, we'll consider omitting or rephrasing the sentences to prevent any potential misunderstandings. > Figure 5: I suggest to use more symbols for different sigmas. > Figure 5: Ordering of the entries in the legend can be improved. We genuinely appreciate your constructive feedback on the representation in Figure 5. We will incorporate these modifications in our final paper. > Do you think that there is a way to construct strong adversarial examples if the attacker knows that the NIM architecture is used, despite the noise randomness? First, we'd like to clarify that our experiments are conducted under this very assumption: that the attacker has full access to both the classifier and the defense model, only without knowing the sampling result of the random noise. To answer the question, we do believe there may be ways to break the defense provided by NIM and $De^3$. For example, as stated in the Limitation section, we only prove that our method is effective against noise-based attacks, but other forms of attacks, like adversarial patches, might present challenges to our defense mechanism. > Limitations are not discussed in the paper. Please refer to Section B of our supplementary material. --- [2] Xie et al. 
Masked Frequency Modeling for Self-Supervised Visual Pre-Training. 2022. --- Rebuttal 2: Title: Further Discussion with Reviewer XF5y Comment: Dear Reviewer XF5y, We genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights. Please let us know if any aspect of our work remains unclear or if you have additional feedback. Thank you. Warm regards, Authors --- Rebuttal Comment 2.1: Comment: Thanks for your responses and the clarifications. My concerns were addressed in an appropriate way. Overall, NIM with De^3 seems interesting, novel, and competitive with existing related methods. My main issue was the reconstruction quality for high sigmas, which apparently works. Hence, I am increasing my rating to 7.
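The scale difference discussed in this thread (unnormalized pixels in [0, 255], flooding noise $\sigma$ versus the $l_\infty$ attack budget $\epsilon = 4$) reduces to a simple ratio:

```python
epsilon = 4.0  # l_inf attack budget on unnormalized pixels in [0, 255]
for sigma in (70.0, 140.0):
    ratio = sigma / epsilon  # 17.5x and 35x the adversarial perturbation
    print(f"sigma={sigma:.0f}: flooding noise is {ratio:.1f}x the attack budget")
```

This is the sense in which the adversarial perturbation is "flooded": the injected Gaussian noise dwarfs it by more than an order of magnitude.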
Summary: The paper presents a straightforward approach for defending against adversarial attacks while incorporating pre-trained feature learning through the utilization of noisy images. Inspired by Masked Image Modeling (MIM), the paper replaces the masking pretext task by introducing a substantial amount of noise into the image. Subsequently, a transformer-based encoder/decoder is trained to reconstruct the original image, and the encoder is fine-tuned to acquire features that are invariant to noise by comparing the encoded features of the original image input with those of the reconstructed images. The defense mechanism involves retaining the decoder for the downstream task, employing an [encoder → decoder → encoder] mechanism to obtain features that can be further fine-tuned for the downstream task. The authors perform experiments on ImageNet-1K using the ViT-Base backbone and evaluate the proposed method against several existing $l_{\infty}$-bounded adversarial attacks: FGSM, PGD, and AutoAttack (AA). The results demonstrate that the proposed method exhibits greater robustness compared to baseline pre-trained methods like MAE or SimMIM. Strengths: The paper's finding is interesting: it suggests that adversarial attacks present as a kind of noise, so it makes sense that denoising works to some extent. This aspect might open up possibilities for combining existing works to further enhance downstream accuracy and adversarial defense. The paper is easy to follow, and the proposed method is simple and comprehensive. Weaknesses: 1. It would be beneficial to have a more extensive comparison by including well-known self-supervised features like SimCLR, MOCO, DINO, and others, to assess the robustness of the features learned by these methods in comparison to the proposed method. 2. 
To enhance the paper, it would be valuable if the authors conducted experiments on another dataset and included more baseline methods to gain insights into the generalizability of the proposed method. 3. It would be valuable to compare the proposed method with existing adversarial defense approaches, such as GAN-based methods. 4. Despite some improvements, the defense results are not particularly impressive and do not appear to completely eliminate the attacks. This suggests that the attacks might involve more than just noise. 5. It should be noted that using additional decoders could be a drawback depending on the applications, as it significantly increases memory usage during inference. The approach seems novel, and I find the current version of the paper is fine unless there are serious concerns / flaws from other reviewers' feedback. However, I would recommend conducting additional experiments to further showcase the generalizability of the proposed method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I wonder if the authors have considered keeping the original MAE approach and combining it with the noisy modelling technique to explore any potential further improvements? See questions above also. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
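For reference, the FGSM attack named in the evaluations above is a single signed-gradient step within an $l_\infty$ ball; a minimal numpy sketch, with `grad_loss` as a hypothetical placeholder for the gradient of the classification loss w.r.t. the input:

```python
import numpy as np

def fgsm(x, grad_loss, epsilon=8/255):
    # One signed-gradient ascent step on the loss, clipped back to the
    # valid pixel range [0, 1]. epsilon = 8/255 matches the CIFAR-10
    # attack radius used in the rebuttal experiments.
    return np.clip(x + epsilon * np.sign(grad_loss(x)), 0.0, 1.0)

x = np.full((8, 8), 0.5)
x_adv = fgsm(x, grad_loss=lambda z: np.ones_like(z))  # every pixel pushed up by epsilon
```

Multi-step variants such as PGD iterate this step with projection back into the $\epsilon$-ball, which is why they are the stronger attacks discussed in the thread.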
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions! Our answers are as follows: > It would be beneficial to have a more extensive comparison by including well-known self-supervised features like SimCLR, MOCO, DINO, and others. While we concur that the comparison would be a great direction to explore, we would like to highlight that it is beyond the scope of this paper. The primary goal of this work is to explore how the generative pretraining paradigm can provide adversarial robustness beyond visual features. Therefore, other pretraining frameworks like contrastive learning (CL) are not considered as they are not applicable to our proposed $De^3$ defense. However, we believe this could be a great research direction for future work. Existing works have compared ViTs and CNNs' adversarial robustness [1], and also compared the pretrained visual features of MIM and CL [2], but the comparison between MIM and CL from the perspective of adversarial robustness has not been explored. This research gap could be a good starting point for a valuable research project. > To enhance the paper, it would be valuable if the authors conducted experiments on another dataset and included more baseline methods. Thanks for this valuable suggestion. We conducted another experiment on CIFAR-10, based on the implementation of [3]. We set the attack radius $\epsilon = 8/255$, and other settings remain the same as in Table 1. The results are as follows:

| Model | MAE | MAE | NIM-MAE | NIM-MAE | NIM-MAE |
| --- | --- | --- | --- | --- | --- |
| $De^3$ | None | $\gamma$=0.75 | None | $\sigma$=40 | $\sigma$=70 |
| Clean | **89.88** | 78.36 | 88.31 | 81.09 | 76.30 |
| FGSM | 17.51 | **65.13** | 21.66 | 43.26 | 48.86 |
| PGD-10 | 0.01 | 22.50 | 2.77 | 32.24 | **40.26** |
| AA | 0.00 | 6.220 | 0.00 | 31.49 | **41.20** |

Interestingly, we observe that unlike on ImageNet, MAE provides a stronger defense against FGSM on CIFAR-10. 
The reasons are probably that 1) for low-resolution images, MAE can generate good reconstructions from masked images; 2) when using MAE in $De^3$, 75% of adversarial perturbations will be masked, and if the attack is weak like FGSM, the rest of the perturbation would be too weak to break the model. However, for stronger attacks, NIM still provides better defense. Regarding the baseline methods, we adopted two representative MIM methods, MAE and SimMIM, as baselines. It is observed that NIM provides consistently better adversarial defense than MIM, as shown in Table 1 of our main paper. > It would be valuable to compare the proposed method with existing adversarial defense approaches. In Section 5.3, we show the comparison of our method and a recent adversarial training approach where ViTs are also adopted as the backbone. Until recently, most adversarial defense methods have been based on CNNs, and the comparison with them would be out of this work's scope. More importantly, we would like to emphasize that the main point of this work is not to compete with other adversarial defense methods, but to show that NIM can achieve a strong and tunable accuracy-robustness trade-off and to demonstrate to the community that NIM can serve as a promising and advantageous self-supervised learning paradigm. > Despite some improvements, the defense results are not particularly impressive and do not appear to completely eliminate the attacks. So far, defending against adversarial attacks remains a very challenging task, and no method is completely immune to attacks. We show that our proposed method can achieve an improvement in robustness against the PGD attack of 34.36%, with a marginal clean accuracy drop of 4.37%. Given that our model is trained to denoise *Gaussian noise*, instead of *adversarial noise*, we think the improvement is significant and interesting. > This suggests that the attacks might involve more than just noise.
Indeed, even when $\sigma$ = 250 (62.5 times the adversarial perturbation budget), the clean accuracy (36.84%) is still higher than the robust accuracy against PGD attack (23.63%). This result shows that even when adding very large Gaussian noise, the adversarial noise would not be fully flooded and removed, indicating that adversarial perturbation may be meaningful to the neural networks, rather than just noise. > It should be noted that using additional decoders could be a drawback depending on the applications, as it significantly increases memory usage during inference. Yes, incorporating additional decoders can indeed increase memory requirements, which could be a concern, especially for applications with tight memory constraints. As suggested by Reviewer judd, distillation could be a potential solution. By training a smaller network to mimic the behavior of the larger network, we could obtain a more efficient model that retains much of the adversarial robustness of the original approach. We acknowledge this limitation and appreciate the insightful suggestion, and we're keen to investigate this avenue in future research. > I wonder if the authors have considered keeping the original MAE approach and combining it with the noisy modelling technique? Thank you for providing this great idea! We have not considered this direction, but we believe it has great potential. It reminds us of the findings presented in [4], where the combination of zooming-in and masking outperformed using only masking. Therefore, it is reasonable to expect that combining masking and adding noise would bring further improvements. We will certainly consider investigating this direction in subsequent studies. Thanks again for the advice. --- [1] Bai et al. Are Transformers More Robust Than CNNs? NeurIPS 2021. [2] Wei et al. Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation. 2022. [3] https://github.com/IcarusWizard/MAE. [4] Tian, et al.
Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers. 2022. --- Rebuttal 2: Title: Further Discussion with Reviewer ovhG Comment: Dear Reviewer ovhG, We genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights. Please let us know if any aspect of our work remains unclear or if you have additional feedback. Thank you. Warm regards, Authors
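For readers following the defense discussion in this thread, the $De^3$ pipeline (flood a possibly adversarial input with strong Gaussian noise, then let the NIM model reconstruct a clean image before classification) can be sketched as follows. This is a minimal plain-Python illustration; the `identity_denoiser` stub is a hypothetical stand-in for the pretrained NIM denoiser, not the authors' implementation.

```python
import random

def de3_defense(x_adv, denoiser, sigma=70.0, seed=0):
    """Sketch of the De^3 defense: flood adversarial perturbations with
    Gaussian noise of std sigma (on the 0-255 pixel scale), then let the
    NIM model reconstruct a clean image before classification."""
    rng = random.Random(seed)
    x_noisy = [min(1.0, max(0.0, px + rng.gauss(0.0, sigma / 255.0)))
               for px in x_adv]
    return denoiser(x_noisy)

# Hypothetical stand-in: a real NIM model would reconstruct the image.
identity_denoiser = lambda x: x

x = [0.5] * 16  # toy "image" with pixels in [0, 1]
out = de3_defense(x, identity_denoiser, sigma=70.0)
assert len(out) == len(x) and all(0.0 <= p <= 1.0 for p in out)
```

The tunable accuracy-robustness trade-off discussed in the rebuttal corresponds to the `sigma` argument: larger noise floods more of the adversarial perturbation but makes reconstruction harder.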
Summary: This paper proposes a novel adversarial defense method $De^3$ to utilize the strong denoising ability of NIM models. The proposed method first adds some Gaussian noise to the adversarial samples and then tries to reconstruct the original images. Experiments show the advantage of NIM over MIM in terms of adversarial robustness. Strengths: 1. The idea of NIM is interesting. The proposed method provides adversarial defense as well as pretrained features by a simple yet effective modification of masked image modeling. 2. The idea of De3 is instructive, enhancing adversarial defense by reconstructing clean images from intensely noisy images. 3. The experiments demonstrate that the proposed method can achieve defense performance comparable with adversarial training. 4. This paper is well-written and easy to follow. Weaknesses: 1. Although De3 can enhance adversarial defense, it increases computational cost during inference time. Note that there exist two encoders and one decoder in De3; can we distill another encoder from them and use only it during inference time? 2. This paper compares NIM against MIM in terms of accuracy and adversarial robustness. It seems that MIM achieves better accuracy on clean images. I wonder whether we can combine them and obtain a better trade-off between accuracy and robustness? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I wonder about the difference in robustness against other types of attacks, like black-box ones. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are adequately discussed in the supplemental materials.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive affirmation of our work and your constructive suggestions. Here are our responses. > Although De3 can enhance adversarial defense, it increases computational cost during inference time. Note that there exist two encoders and one decoder in De3; can we distill another encoder from them and use only it during inference time? > We appreciate this valuable suggestion. As we acknowledged in our Limitations section, the computational cost introduced by the defense process is a major drawback of our method. We have not considered using distillation for resolving the issue, but we believe it could be a direction with great potential. By training a smaller network to mimic the behavior of the original network, we could obtain a more efficient model that retains much of the adversarial robustness while lowering the computational costs. Moreover, given that one encoder essentially stems from the initialization of the other before fine-tuning, it's plausible that there's room for optimization by either merging or compressing them. We're genuinely grateful for this constructive feedback and will look into this possibility in our subsequent research. > This paper compares NIM against MIM in terms of accuracy and adversarial robustness. It seems that MIM achieves better accuracy on clean images. I wonder whether we can combine them and obtain a better trade-off between accuracy and robustness? > Thank you for another insightful idea. Indeed, since MIM exhibits better accuracy on clean images and NIM shows stronger robustness, there could be potential in combining them to achieve a better accuracy-robustness trade-off. Previous work [1] shows that when combining zooming-in and masking, the pretraining performance would be higher than using only masking. Therefore, it is reasonable to expect that combining masking and adding noise would bring further improvements.
We will certainly delve into this promising avenue in our future work. > I wonder about the difference in robustness against other types of attacks, like black-box ones. > We adopt Square [2] as an example of black-box attacks, and set the perturbation budget $\epsilon$=16/255 and a budget of 1,000 queries. The results, compared with the white-box PGD-10 attack, are as follows:

| Model | MAE | MAE | NIM-MAE | NIM-MAE | NIM-MAE |
| --- | --- | --- | --- | --- | --- |
| $De^3$ | None | $\gamma$=0.75 | None | $\sigma$=70 | $\sigma$=140 |
| Clean | 83.05 | 44.96 | 82.58 | 78.68 | 70.69 |
| PGD-10 (WB) | 0.25 | 11.58 | 0.31 | 34.61 | 39.82 |
| Square (BB) | 1.17 | 33.59 | 2.44 | 73.14 | 67.29 |

It is shown that our $De^3$ with NIM is very effective for defending against this black-box attack. --- [1] Tian, et al. Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers. 2022. [2] Andriushchenko et al. Square attack: a query-efficient black-box adversarial attack via random search. ECCV 2020. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for dealing with my concerns and including more experiments. The additional results make me more convinced.
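As a rough illustration of the query-based attack model behind these numbers, here is a toy random-search attack in the spirit of Square [2]. It is not the actual Square algorithm (which perturbs localized square regions of an image); the quadratic `loss` below is a hypothetical stand-in for a model's loss, used only to make the sketch self-contained.

```python
import random

def random_search_attack(x, loss_fn, eps=16/255, queries=1000, seed=0):
    """Toy sketch of a query-based black-box attack: propose random
    sign perturbations of the input and keep any candidate that
    increases the loss, using only function queries (no gradients)."""
    rng = random.Random(seed)
    best, best_loss = list(x), loss_fn(x)
    for _ in range(queries):
        cand = [min(1.0, max(0.0, xi + eps * rng.choice((-1.0, 1.0))))
                for xi in x]
        c_loss = loss_fn(cand)
        if c_loss > best_loss:
            best, best_loss = cand, c_loss
    return best, best_loss

# Hypothetical loss surface standing in for a classifier's loss.
loss = lambda v: sum((vi - 0.5) ** 2 for vi in v)
x = [0.5] * 8
adv, adv_loss = random_search_attack(x, loss)
assert adv_loss >= loss(x)                                   # never worse
assert all(abs(a - b) <= 16/255 + 1e-9 for a, b in zip(adv, x))  # budget kept
```

The defender never sees gradients here, which is why a strong denoising step like $De^3$ can blunt such attacks so effectively in the table above.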
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a method called Noisy Image Modeling (NIM), a self-supervised learning approach to improve the adversarial robustness of pretrained features. NIM uses denoising as a pretext task and is effective in reconstructing noisy images for representation learning. In addition, the authors propose a defense technique called De^3 that leverages the denoising capability of NIM to enhance robustness against adversarial attacks. Experimental results show that NIM with De3 defense outperforms Masked Image Modeling (MIM) in terms of adversarial robustness while maintaining competitive performance on clean data. The paper concludes by highlighting the potential of NIM and other variants of MIM for generative visual pretraining. Strengths: 1. Different from the popular MIM, NIM improves the pretrained features by resisting adversarial attacks. The experiments verify the effectiveness of the proposed method. 2. In addition to the representation learning method, the paper proposes a defense technique called De3 that utilizes the denoising capability of NIM to enhance robustness against adversarial attacks. 3. The paper compares the performance of NIM with the proposed De3 defense to Masked Image Modeling (MIM) and other adversarial training methods. Experimental results show that NIM with De3 defense outperforms MIM in terms of adversarial robustness while maintaining competitive performance on clean data. 4. The paper introduces a modification to NIM that allows for a tunable trade-off between accuracy and robustness. Weaknesses: 1. The paper does not thoroughly analyze the computational cost of the proposed method. Adversarial training methods, such as the one proposed by Wang et al. [34], are known to be computationally expensive. It would have been valuable to compare the computational cost of NIM with other defense techniques. 2. The paper does not extensively explore the impact of hyperparameters on the performance of the proposed method.
For example, the authors mention that the trade-off between clean accuracy and robustness can be adjusted by varying the noise level hyperparameter, but they do not provide a detailed analysis of the optimal values or the sensitivity of the method to different hyperparameter settings. 3. The paper does not analyze the transferability of adversarial examples between different models. It would have been valuable to investigate whether the robustness achieved by NIM with De^3 defense can generalize to other models or if it is specific to the pretrained model used in the experiments [1]. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Computational cost. 2. Parameter Sensitivity Analysis. 3. The generalization of the proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your approval and recognition of this paper. Our answers are listed below. > The paper does not thoroughly analyze the computational cost of the proposed method. Adversarial training methods, such as the one proposed by Wang et al. [34], are known to be computationally expensive. It would have been valuable to compare the computational cost of NIM with other defense techniques. Thank you for this valuable suggestion. Indeed, our paper could have benefitted from a comparison of the computational costs between the proposed NIM approach and existing adversarial training methods. We evaluated the throughput of the adversarial training method [21] (used in the comparisons of Section 5.3) and our proposed NIM approach, using an identical training framework implemented on 8 A100 GPUs. The result is as follows:

| Method | Throughput (images/sec) | Clean | PGD-10 |
| :---: | :---: | :---: | :---: |
| AT-ViT [21] | 488.0 | 69.28 | 39.97 |
| Ours | 2009.4 | 70.69 | 39.82 |

Note that the adversarial training method is trained for 20 epochs and ours is trained for 100 epochs. Therefore, adjusted for epochs, our method is (488.0/20)/(2009.4/100)=1.2 times faster than the adversarial training method. > The paper does not extensively explore the impact of hyperparameters on the performance of the proposed method. For example, the authors mention that the trade-off between clean accuracy and robustness can be adjusted by varying the noise level hyperparameter, but they do not provide a detailed analysis of the optimal values or the sensitivity of the method to different hyperparameter settings. > Sorry for the confusion. Regarding the trade-off that can be adjusted by the noise level hyperparameter $\sigma$, we would like to refer to Figure 4 and the description in lines 247-261, where the variance in clean and robust accuracies with respect to $\sigma$ is explicitly presented.
It is shown that by increasing the $\sigma$ in defense, the clean accuracy would decrease as the reconstruction gets poorer, but the robust accuracy would be enhanced because adversarial perturbations are flooded by stronger noise. In addition, we present how different random distributions of $\sigma$ in pretraining influence the fine-tuned models' performance in Section 5.4 and we show that $\sigma \sim \Gamma(25,3)$ achieves the best performance among all models. It is noteworthy that due to the limitation of computational resources, it was challenging to conduct a comprehensive grid search for hyperparameter optimization. However, we would like to highlight that the main point of this work is to show that NIM can achieve a stronger and tunable accuracy-robustness trade-off compared to MIM, instead of achieving the highest possible performance with optimal hyperparameters. > The paper does not analyze the transferability of adversarial examples between different models. It would have been valuable to investigate whether the robustness achieved by NIM with De^3 defense can generalize to other models or if it is specific to the pretrained model used in the experiments [1]. > Thanks for this insightful feedback. Indeed, exploring the transferability of the proposed adversarial defense method would be very helpful for a more comprehensive understanding of our method and its applicability. Given that our method does not train the model to be robust to some specific adversarial noise but removes the perturbations along with strong Gaussian noise, we are optimistic about the generalizability of our defense. Due to limited time and computational resources, we haven't provided empirical evidence to confirm this claim, but we will work on it once we are able to. --- [21] Mo, et al. When adversarial training meets vision transformers: Recipes from training to architecture. NeurIPS 2022.
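The epoch-adjusted ratio quoted in the throughput comparison above can be reproduced with a one-line calculation (figures taken directly from the rebuttal's table):

```python
# Figures from the rebuttal's throughput comparison.
at_vit_throughput, at_vit_epochs = 488.0, 20   # AT-ViT [21], 20 epochs
nim_throughput, nim_epochs = 2009.4, 100       # NIM (ours), 100 epochs

# The rebuttal computes (488.0/20) / (2009.4/100) and reports ~1.2.
ratio = (at_vit_throughput / at_vit_epochs) / (nim_throughput / nim_epochs)
assert round(ratio, 1) == 1.2
```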
Post-processing Private Synthetic Data for Improving Utility on Selected Measures
Accept (poster)
Summary: The paper considers how to obtain improved utility from synthetic data under differential privacy. Prior work has tried to achieve utility by incorporating utility directly into the generation of the synthetic data, e.g., by making use of workload data. Here, the proposal is to perform post-hoc reweighting of the synthetic data, based on measuring a difference in values between the final data under a set of queries with known answers. This measurement/reweighting step is also done under privacy. The paper shows that the optimization can be done using standard solvers, and so can be performed effectively via linear or quadratic programming (for pure or approximate DP). A specific use case based on aligning the correlation matrix is used to demonstrate that the sensitivity calculations can be quite simple. Numerical experiments demonstrate that the approach can be quite effective in terms of absolute error ratio and F1 score. Strengths: Synthetic data is widely promoted as a practical way to gain from private data, without running into issues around exhaustion of privacy budget. Ensuring that synthetic data is usable for the target queries is therefore an important challenge, particularly if the workload may not be known at data generation time. This approach shows a promising way to overcome these issues, provided we still have access to the data to perform the private optimization step. The approach is very clean, and shows good promise in terms of utility and performance. Weaknesses: Evaluation is on four benchmark data sets that may be considered "easy". It would be interesting to test this out on additional data, such as that used in recent synthetic data challenges by NIST. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Apart from the correlation matrix, can you give more examples of applicable query families that have bounded sensitivity? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No need to address societal impact. Technical limitations are outlined in Section 5 in broad strokes, but it would be interesting to hear more specific suggestions about how this approach could be extended to handle more diverse data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful read of our paper and constructive comments! --------- **Q1. Evaluation is on four benchmark data sets that may be considered "easy". It would be interesting to test this out on additional data, such as that used in recent synthetic data challenges by NIST.** A1. To clarify, we have also applied our algorithm to the home-credit dataset [MOK18], in addition to the four benchmark datasets. The home-credit dataset is a large-scale dataset, which has 307,511 rows and 104 features. The execution of our method, which includes computing the utility measures from real data, denoising the noisy answers, and computing optimal resampling weights, on this large-scale dataset took approximately 3 mins on a single NVIDIA GeForce RTX 3090 GPU and the results are shown in Table 2 and Figure 1 in the paper. --------- **Q2. Apart from correlation matrix, can you generate more examples of query families to apply that have bounded sensitivity?** A2. Indeed, our proposed method can be expanded for use in many other applications, not limited to correlation alignment. For instance, it can be applied to mitigate data biases while ensuring the DP guarantees of resampled synthetic data. During the generation of private synthetic data, outliers may end up receiving a higher level of noise. This could disproportionately affect minority groups. As a result, ML models, when trained on this DP-synthetic data by downstream users, may exhibit disparate impact when implemented on real data. Specifically, consider a situation where each data record, represented by $x$, includes a sensitive attribute, such as gender or ethnicity, indicated by $s \in \mathcal{S}$. This record also includes an outcome variable, denoted as $y \in \mathcal{Y}$. 
To adjust the probability distribution $P_{S,Y}$ of the synthetic data to match that of the real data, our post-processing technique can be applied by selecting the following utility measures: $\{q_{i,j}(x) = \mathbb{I}[s = i, y = j]\}_{i\in \mathcal{S}, j \in \mathcal{Y}}$, where $\mathbb{I}$ is the indicator function. This method provides an extension to the current data pre-processing techniques found in fair ML literature [e.g., Kamiran and Calders, 2012] by implementing DP guarantees within the pre-processing pipeline. Broadly, our technique can be applied to enhance any utility measures that can be represented as bounded queries. — Kamiran, F. and Calders, T., 2012. Data preprocessing techniques for classification without discrimination. --------- **Q3. Technical limitations are outlined in Section 5 in broad strokes, but it would be interesting to hear more specific suggestions about how this approach could be extended to handle more diverse data.** A3. Thank you for raising this important point! There are two additional applications where our approach can be effectively applied. First, in cases where the data exhibit a temporal structure, our approach can be used for post-processing synthetic data. This allows for the alignment of the transition matrix between adjacent time-steps with that of real data. The second application involves aligning the distribution of synthetic data in its tail with real data, as DP mechanisms tend to add higher noise to these regions. This aspect of preserving the tail distribution is particularly crucial in financial data, such as in fraud detection. --- Rebuttal Comment 1.1: Comment: Thank you for these thorough responses to the questions and comments in the review. Applications and evaluation in the context of fairness seem like an interesting direction to pursue. --- Reply to Comment 1.1.1: Title: Thank you for your prompt response! Comment: Thank you for your prompt response!
We are pleased to hear that you find the new applications and evaluations we presented in our response to be of interest. We will make sure to incorporate them into the revised paper. Finally, we would like to express our gratitude once more for the insightful and constructive comments you provided.
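The fairness application described in the reply above reduces to bounded indicator queries $q_{i,j}(x) = \mathbb{I}[s = i, y = j]$, whose answers on a dataset are joint probabilities. A minimal sketch follows (toy data; the helper name is our own illustration, not the paper's API):

```python
from collections import Counter

def joint_query_answers(records, groups, outcomes):
    """Evaluate the bounded indicator queries q_{i,j}(x) = 1[s = i, y = j]
    from the rebuttal: the answer on a dataset is the empirical joint
    probability P(S = i, Y = j). Records are (s, y) pairs."""
    n = len(records)
    counts = Counter(records)
    return {(i, j): counts[(i, j)] / n for i in groups for j in outcomes}

# Toy data: sensitive attribute s in {0, 1}, outcome y in {0, 1}.
real = [(0, 1), (0, 0), (1, 1), (1, 1)]
synthetic = [(0, 1), (0, 1), (1, 0), (1, 1)]

real_p = joint_query_answers(real, (0, 1), (0, 1))
synth_p = joint_query_answers(synthetic, (0, 1), (0, 1))
# Post-processing would reweight the synthetic records so that these
# synthetic answers (measured with DP noise) match the real ones.
gap = max(abs(real_p[k] - synth_p[k]) for k in real_p)
assert abs(sum(real_p.values()) - 1.0) < 1e-9
```

Because each query is bounded in $[0, 1]$, its sensitivity is easy to control, which is exactly the property the rebuttal highlights.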
Summary: The paper proposes a DP post-processing method for synthetic data that weights the synthetic data to match user-selected utility measures on the real dataset. The utility measures on real data are measured with noise to make this post-processing DP. To find the synthetic data weights, the paper derives a closed-form expression, and develops an optimisation algorithm to find optimal dual variables required to evaluate the closed-form optimum. To evaluate the method, the paper conducts experiments on 5 real datasets, testing how much the method is able to improve the synthetic data produced by 5 existing methods of generating DP synthetic data. Strengths: The paper is written well, and the main points are easy to understand. The idea behind the proposed method is interesting, and it should, at least in principle, be applicable to any kind of synthetic data and utility measure. Weaknesses: Some important experimental details are missing. See my questions. The Private PGB method of Neunhoeffer et al. (2021) is fairly similar to the proposed method, and their differences should be discussed. Currently the paper is not even cited. Minor comments: - Slater's condition should be introduced before the proof of Theorem 1. - Seeing the raw utility and F1 scores from the experiment in Table 1 would be useful, as it would show whether the large improvement from post-processing on the GANs is just caused by the GANs generating poor synthetic data - References [App17], [Dat23], [DG17], [MOK18], [Sma23] should have URLs Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Important experimental details: - Is the plain synthetic data baseline $\epsilon$ or $(\epsilon + \epsilon')$-DP? It seems to be the former, which is not a fair comparison, as the generation + post-processing is $(\epsilon + \epsilon')$-DP. - Are the categorical columns also converted to numerical features for AIM and MST? 
They should not be, as both methods handle categorical values natively. - What is the workload given to AIM? Does it contain the marginal queries with the variables the utility measures are looking at? Minor questions: - How is the real data split into training and test data? - Why were MST and AIM not run on the home-credit dataset? The original paper on Private-PGM (McKenna et al. 2019), on which both methods are based, experiments on a dataset with a similar number of features. - Is there a difference between $\lambda^*$ in (4) and $\lambda$ in (5)? References: - Neunhoeffer et al. "Private Post-GAN Boosting" ICLR 2021 - McKenna et al. "Graphical-model based estimation and inference for differential privacy" ICML 2019 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper mentions that they assume the real data is normalised so each feature lies in $[0, 1]$. This is a fairly large limitation, as normalising data under DP while retaining the possibility of undoing the normalisation is not trivial, and should be discussed further. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and for appreciating the value of the work! --- **Q1. Some important experimental details are missing.** A1. We appreciate your constructive feedback regarding our experimental results! Please find the detailed responses to your questions below, where we hope to adequately address all your concerns. --- **Q2. Private PGB method of Neunhoeffer et al. (2021).** A2. We thank the reviewer for pointing out the missing reference. We will ensure it is included in the revised paper. The key difference between our work and [Neunhoeffer et al., 2021] is that their method is tailored to GAN-based privacy mechanisms while our approach is model-agnostic. In other words, it can be applied to improve the utility of synthetic data generated by *any* privacy mechanisms. This versatility is crucial, given that marginal-based and workload-based mechanisms often yield higher quality synthetic data, as evidenced by benchmark experiments in [Tao et al., 2021], the NIST competition rankings [McKenna et al., 2021], and Table 1, 2 in the PDF submitted in our global response. Our experiments indicate that our method consistently improves the utility of synthetic data produced by all kinds of privacy mechanisms, even when the initial synthetic data is of high quality. — Tao, Y., McKenna, R., Hay, M., Machanavajjhala, A. and Miklau, G., 2021. Benchmarking differentially private synthetic data generation algorithms. — McKenna, R., Miklau, G. and Sheldon, D., 2021. Winning the NIST Contest: A scalable and general approach to differentially private synthetic data. --- **Q3. Slater's condition should be introduced before the proof of Theorem 1.** A3. We thank the reviewer for their suggestion and will include Slater's condition before the proof of Theorem 1. --- **Q4.
Seeing the raw utility and F1 scores from the experiment in Table 1 would be useful, as it would show whether the large improvement from post-processing on the GANs is just caused by the GANs generating poor synthetic data** A4. Absolutely, that's an excellent suggestion! Please refer to the PDF we included in our global response for the raw F1 scores of both the original synthetic data and the post-processed data. --- **Q5. Some references should have URLs** A5. We appreciate the reviewer's suggestion and will incorporate the URLs into the revised paper. --- **Q6. Is the plain synthetic data baseline $\epsilon$ or $(\epsilon + \epsilon')$-DP? It seems to be the former, which is not a fair comparison.** A6. In response to your concerns, we have conducted *an additional experiment* under the setup you suggested. Please refer to the PDF we submitted in the global response for details. In short, our observations are in line with prior experiments, confirming that our algorithm consistently enhances the utility of synthetic data on selected measures. The reasoning behind our original setup stems from the observation that increasing the privacy budget in synthetic data generation mechanisms *does not* necessarily enhance the quality of the synthetic data, even when averaged over multiple trials. We conjecture this incongruity arises from the random noise injected into the data generation process to ensure DP guarantees. Therefore, even without employing our post-processing algorithm, synthetic data generated from a lower privacy budget may surpass data produced with a higher privacy budget in terms of their overall quality. Our initial experimental setup enabled us to eliminate this potential issue. As a result, we concluded that our post-processing algorithm can consistently enhance the utility of the synthetic data on selected measures, despite requiring a small privacy budget (see lines 45–46 and 53–55). --- **Q7.
Are the categorical columns also converted to numerical features for AIM and MST?** A7. To clarify, we used ordinal encoding to pre-process the categorical columns, which were then fed into the AIM and MST mechanisms. --- **Q8. Workload given to AIM** A8. The workload given to AIM consists of all one-way and two-way marginals. The utility measures chosen for our study are first-order and second-order moments, which are functions of the workload. --- **Q9. How is the real data split into training and test data?** A9. We split the real data, using 80% for generating synthetic data and setting aside 20% to evaluate the performance of predictive models trained on the synthetic data. --- **Q10. Why were MST and AIM not run on the home-credit dataset?** A10. The primary reason was the lack of a functional GPU-accelerated implementation for these algorithms at the time of our experimentation. Our attempts to generate synthetic copies of the home-credit dataset, which comprises 100+ columns, using CPU-based computations proved to be exceedingly time-consuming and resource-intensive. Despite our best efforts, these attempts repeatedly led to kernel crashes and hindered our ability to carry out comprehensive experiments. --- **Q11. Difference between $\lambda^{*}$ in (4) and $\lambda$ in (5)?** A11. $\lambda$ represents the variables involved in the optimization problem in Eq. (5), while $\lambda^*$ denotes the optimal solution. We will clarify this notation in the updated version of our paper. --- **Q12. Data normalization.** A12. To clarify, our proposed method can be expanded to a broader context where we do not necessarily assume that each feature is in $[0,1]$. Instead, we only require utility measures that are represented by bounded queries. In the case of aligning the correlation matrix as an application, the only requirement is for all features to have a bounded domain, ensuring that the first and second-order moment queries are also bounded. 
Users can either specify the lower and upper bounds of each feature or estimate them from real data using a DP mechanism (see, e.g., the OpenDP API snsynth.transform.minmax). --- Rebuttal Comment 1.1: Comment: Thank you for the response. You addressed my biggest concerns with the experimental setup very well, so I'm moving to recommend acceptance. --- Reply to Comment 1.1.1: Title: Thank you for your prompt response! Comment: Thank you for your prompt response! We are glad to hear that our response has addressed your biggest concerns. We will ensure that the responses provided above, along with the promised changes, are integrated into the revised paper. Finally, we would like to express our gratitude once more for the insightful and constructive comments you provided.
Summary: The paper introduces a post-processing technique that enhances the utility of synthetic DP data with respect to selected measures. The proposed technique involves resampling from the synthetic data to filter out samples that do not meet the selected utility measures, using a stochastic first-order algorithm to find optimal resampling weights. Strengths: - The post-processing technique discussed appears to be novel and significant. - The method is model-agnostic. This makes the technique highly versatile. - The authors provide a good set of numerical experiments to validate their approach. These results are strong and show improvements across multiple benchmark datasets and synthetic data generation algorithms. - The paper is well-structured and the methodology is clearly explained. Weaknesses: - The paper could benefit from a more detailed discussion on the trade-offs involved in using the proposed post-processing technique. For instance, it would be helpful to understand the impact of the technique on the computational complexity of the data synthesis process. - The authors could provide more examples or case studies to illustrate the practical applications and benefits of their proposed technique, besides the illustrated correlation example. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Could you elaborate on the computational complexity of the proposed post-processing technique? How does it compare to the complexity of existing synthetic data generation algorithms? - How does the proposed technique handle high-dimensional data? Are there any limitations or challenges in this regard? - Could you provide additional examples to illustrate the practical applications and benefits of the proposed technique? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See my questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the kind comments and the encouragement! --------- **Q1. More detailed discussion on the trade-offs involved in using the proposed post-processing technique and the impact of the technique on the computational complexity of the data synthesis process.** A1. That's a great suggestion! Indeed, scalability is a key feature of our proposed technique. To be precise, our Algorithm 1 requires a computational cost of $O(bKT)$ where $b$ denotes the mini-batch size, $K$ denotes the number of utility measures of interest, and $T$ denotes the number of iterations. Note that this computational cost is independent of both the number of features and the number of synthetic samples. In contrast, some existing approaches do not scale to high-dimensional data. For example, they need to solve an integer program multiple times [Vietri et al., 2020] or need to solve a large-scale optimization problem whose complexity depends on the number of synthetic samples to generate [Aydore et al., 2021]. Please also refer to our response to your Q4 for further details on how our algorithm scales on a large-scale real-world dataset. —Vietri, G., Tian, G., Bun, M., Steinke, T. and Wu, S., 2020. New oracle-efficient algorithms for private synthetic data release. —Aydore, S., Brown, W., Kearns, M., Kenthapadi, K., Melis, L., Roth, A. and Siva, A.A., 2021. Differentially private query release through adaptive projection. --------- **Q2. Practical applications and benefits of their proposed technique, besides the correlation example illustrated.** A2. Thank you for your valuable suggestion. Yes, our proposed method can be expanded for use in many other applications, not limited to correlation alignment. For instance, it can be applied to mitigate data biases while ensuring the DP guarantees of resampled synthetic data. During the generation of private synthetic data, outliers may end up receiving a higher level of noise. 
This could disproportionately affect minority groups. As a result, ML models, when trained on this DP-synthetic data by downstream users, may exhibit disparate impact when implemented on real data. Specifically, consider a situation where each data record, represented by $x$, includes a sensitive attribute, such as gender or ethnicity, indicated by $s \in \mathcal{S}$. This record also includes an outcome variable, denoted as $y \in \mathcal{Y}$. To adjust the probability distribution $P_{S,Y}$ of the synthetic data to match that of the real data, our post-processing technique can be applied by selecting the following utility measures: $\{q_{i,j}(x) = \mathbb{I}[s = i, y = j]\}_{i\in \mathcal{S}, j \in \mathcal{Y}}$ where $\mathbb{I}$ is the indicator function. This method provides an extension to the current data pre-processing techniques found in fair ML literature [e.g., Kamiran and Calders, 2012] by implementing DP guarantees within the pre-processing pipeline. Broadly, our technique can be applied to enhance any utility measures that can be represented as bounded queries. —Kamiran, F. and Calders, T., 2012. Data preprocessing techniques for classification without discrimination. --------- **Q3. The computational complexity of the proposed post-processing technique.** A3. Please refer to our response to your Q1. --------- **Q4. How does the proposed technique handle high-dimensional data? Are there any limitations or challenges in this regard?** A4. Indeed, a key characteristic of our proposed technique is its scalability, particularly with high-dimensional data. This scalability has been achieved by transforming the original optimization problem (Eq. 3), where the number of variables exponentially increases with the number of features, into a dual problem. 
Moreover, we have introduced a stochastic compositional proximal gradient algorithm (Algorithm 1) for solving the dual problem, enhancing computational efficiency significantly through the use of mini-batches for parameter updates. In addition, we have provided a PyTorch implementation of our algorithm which is optimized for GPU-based computations (please refer to the submitted code). As a practical demonstration, we have applied our proposed technique to the home-credit dataset [MOK18]. This dataset has 307,511 rows and 104 features. The execution of our method, which includes computing the utility measures from real data, denoising the noisy answers, and computing optimal resampling weights, on this large-scale dataset took approximately 3 mins on a single NVIDIA GeForce RTX 3090 GPU. --------- **Q5. Additional examples to illustrate the practical applications and benefits of the proposed technique.** A5. Please refer to our response to your Q2. --- Rebuttal Comment 1.1: Title: Re: rebuttal Comment: Thank you for your responses to my questions! Including a discussion of the computational complexity of the method would indeed be helpful. --- Reply to Comment 1.1.1: Title: Thank you for your prompt response! Comment: Thank you for your prompt response! Yes, we will make sure to include a detailed discussion about the computational complexity of our algorithm in the revised paper, along with a comparison to existing algorithms. Finally, we would like to express our gratitude once more for the insightful and constructive comments you provided.
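Since the reweighting and scalability questions recur across the reviews, here is a minimal, self-contained sketch of the dual reweighting idea discussed in A1 and A4. This is our simplification, not the paper's Algorithm 1: it runs full-batch dual gradient ascent with zero violation tolerance and entropy-projection weights, and all data, targets, and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "synthetic" dataset: n samples with 2 features in [0, 1].
n = 5000
X = rng.random((n, 2))

# K bounded utility measures: first- and second-order moments.
Q = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])  # shape (n, K)

# (Privately estimated) answers on the real data that we want to match.
target = np.array([0.6, 0.4, 0.3])

# Dual gradient ascent: the KL-regularized primal admits the closed form
# w_i proportional to exp(q(x_i) . lam), so only K dual variables are
# optimized, independent of the number of features or samples.
lam = np.zeros(Q.shape[1])
for _ in range(4000):
    logits = Q @ lam
    w = np.exp(logits - logits.max())
    w /= w.sum()                 # resampling weights over the samples
    lam += target - Q.T @ w      # gradient of the concave dual

# The reweighted query answers now approximate the targets.
```

The full method additionally uses mini-batches (hence the $O(bKT)$ cost) and a nonzero violation tolerance, which this sketch omits.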
Summary: The paper under review introduces a technique aimed at enhancing the quality of differentially private synthetic data. Specifically, this approach is applicable when a private synthetic data set, generated by any available privacy-preserving mechanism, does not align with the original data set on certain key measures or queries. The proposed solution involves adjusting the weights of synthetic data samples to ensure their alignment with the original data based on the specified objective. The paper specifically applies this technique to post-process synthetic data such that the resultant synthetic data aligns with the true data's correlation matrix. The proposed algorithm commences by estimating the empirical correlation matrix using the Gaussian mechanism with privacy parameters epsilon=1 and epsilon=3. This involves the addition of independent Gaussian noise to each entry of the correlation matrix. Subsequently, the algorithm resolves a convex optimization problem using a first-order method to determine the optimal sample weights that best conform to the noisy correlation matrix. The research findings demonstrate that the post-processed synthetic data offers a more accurate approximation of the correlation matrix compared to the original synthetic data (prior to the reweighting operation), thereby underscoring the effectiveness of the proposed method. Strengths: The idea of post-processing synthetic data presented in this paper is a compelling approach that boasts potential applicability to a multitude of problems. While the primary objective here is the alignment of pair-wise correlations within the data, theoretically, the objective could be a more intricate query that existing synthetic data methods are not designed to tackle. Weaknesses: In terms of originality, the proposed solution bears similarity to the Private Entropy Projection (PEP) mechanism referenced in [LVW21]. 
The PEP mechanism functions over the entirety of the data domain, assumed to be sufficiently small for its computational limitations. Nevertheless, it could, in theory, operate over any support, including samples from synthetic datasets, as suggested in this paper. Thus, it would be advantageous to explain the distinctions between these methods. Further, if they indeed differ, the paper should clarify why the PEP method is ill-suited for addressing the problem at hand. Regarding the experimental setup, it appears to have some potential flaws. The paper compares the utility of a synthetic dataset D_syn, presumably generated using an original privacy budget (termed epsilon_original), with the utility of a post-processed dataset D_post, created with an additional privacy budget of epsilon=1. Consequently, the post-processed dataset D_post possesses less privacy than the pre-processed D_syn. Hence, it remains ambiguous whether the superiority of D_post over D_syn can be attributed to the proposed resampling method or merely to the fact that a larger privacy budget was utilized to generate D_post. A more transparent comparison would involve assessing the utility of D_post against a synthetic dataset generated using a budget of (epsilon_original + 1). Lastly, while the post-processing operation enhances the synthetic data in relation to the correlation matrix, it remains uncertain whether this operation compromises other essential data properties. For instance, if the original synthetic data was trained on 3-way marginals, the post-processing step could cause a deviation from these queries, thus leading to a poorer approximation. This possibility could account for the machine learning results in Figure 1, where the post-processed dataset exhibits a lower F1 score under certain conditions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Why was the algorithm GEM only used for the home-credit dataset ? 
What is the effect of the post-processed dataset on the measures that the original data was optimized for? That is, does the utility degrade? How does the quality of the original synthetic data affect the absolute performance of the post-processed synthetic data? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
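For concreteness, the noisy estimation step described in the summary above (independent Gaussian noise added to each correlation entry) can be sketched as follows. The function name, sensitivity value, and privacy parameters here are our illustrative assumptions, not the paper's exact calibration.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-DP via the classic Gaussian
    mechanism: sigma = sensitivity * sqrt(2 ln(1.25 / delta)) / epsilon."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatize the upper-triangular entries of a correlation matrix.
rng = np.random.default_rng(0)
corr = np.corrcoef(rng.random((100, 4)), rowvar=False)
iu = np.triu_indices(4, k=1)
noisy_entries = gaussian_mechanism(corr[iu], sensitivity=1.0,
                                   epsilon=1.0, delta=1e-5, rng=rng)
```

Note that the true per-entry sensitivity of a correlation statistic depends on the data bounds and sample size; `sensitivity=1.0` above is a placeholder.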
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review and for appreciating the merits of the work! --------- **Q1. Comparison with PEP in [LVW21].** A1. Thank you for highlighting this comparison. In contrast to PEP in [LVW21], our proposed solution offers several enhancements. First, it is important to note that their Algorithm 4, an implementation of PEP outlined in the appendix, only applies when $\gamma=0$. The extension of this algorithm to accommodate a general non-negative violation tolerance, i.e., $\gamma \geq 0$, is not straightforward. In contrast, we introduced a stochastic compositional proximal gradient algorithm (Algorithm 1 in our paper) that is applicable to any general non-negative violation tolerance. From our numerical experiments, we have observed that setting $\gamma$ to a small positive number (for example, $\gamma = 10^{-5}$ in all our setups) consistently outperforms the choice $\gamma = 0$. Second, note that PEP is designed to find an optimal solution to a regularized constrained optimization problem (Eq. 8 in their appendix). However, due to the Laplace/Gaussian noise introduced to the measurements, this problem may prove to be infeasible. Despite the authors' claim in Appendix C.2 that their algorithm can still be executed even if Equation 8 is infeasible, solving the dual optimization under this circumstance may not yield a satisfactory primal solution. In contrast, our approach is to first denoise the noisy measurements prior to running our algorithm. By doing this, we can consistently guarantee the feasibility of the constrained optimization problem (Eq. 3 in our paper). Furthermore, this denoising step could augment the accuracy of our proposed solution since it can be perceived as identifying the maximum likelihood estimate of the utility measures of interest. 
Finally, we wish to highlight the originality of both the challenge we tackled in this study (incorporating end-user requirements into the data generation pipeline) and our proposed solution (a post-processing pipeline to enhance the utility of synthetic data). Neither has been explored in previous research. --------- **Q2. Experimental setup appears to have some potential flaws.** A2. In response to your comments, we have conducted an additional experiment under the setup you suggested: comparing the utility of synthetic data produced by privacy mechanisms with a privacy budget of $\epsilon_{original} + 1$, against the post-processed synthetic data that was generated by applying our post-processing technique (with a privacy budget of $1$) to synthetic data generated from privacy mechanisms with a privacy budget of $\epsilon_{original}$. Please refer to the PDF we submitted in the global response for more details. In short, our observations are in line with prior experiments, confirming that our algorithm consistently enhances the utility of synthetic data on selected measures. The reasoning behind our initial experimental setup stems from the observation that increasing the privacy budget in synthetic data generation mechanisms *does not* necessarily enhance the quality of the synthetic data, even when averaged over multiple trials. The metrics we considered in this quality assessment include utility measures of interest, downstream performance, and statistical measures. We conjecture this incongruity arises from the random noise injected into the data generation process to ensure DP guarantees. Therefore, even without employing our post-processing algorithm, synthetic data generated from a lower privacy budget may surpass data produced with a higher privacy budget in terms of their overall quality. Our initial experimental setup enabled us to eliminate this potential issue. 
As a result, we concluded that our post-processing algorithm can consistently enhance the utility of the synthetic data on selected measures, despite requiring a small privacy budget (see lines 45–46 and 53–55). --------- **Q3. Whether post-processing compromises other essential data properties.** A3. Indeed, this is a great point. In response to your concerns, we have incorporated two statistical metrics (average inverse of KL-divergence and Jensen-Shannon distance) along with the F1 score to evaluate the quality of synthetic data. These metrics are implemented using synthcity, a Python package for generating and evaluating synthetic tabular data. As shown, our post-processing algorithm consistently enhances the selected utility measures without compromising either statistical parameters or downstream performance measures. --------- **Q4. GEM model and other experimental results** A4. Due to GEM not being included in the OpenDP package, we omitted it from Table 1 out of concern that comparing GEM with other mechanisms, processed under different data pre-processing pipelines, may lead to misleading conclusions. In response to your other comments, note that the workloads of AIM include all one-way and two-way marginals. The utility measures presented in Table 1 are based on first-order and second-order moments, which are functions of the workloads in AIM, and our post-processing algorithm continues to yield utility improvements across all datasets. We observed that the poorer the quality of the synthetic data, the greater the enhancement of quality through our post-processing algorithms. More specifically, we have seen that marginal-based mechanisms typically outperform DP-SGD-based mechanisms (please refer to the raw F1 scores reported in Tables 1 and 2 in our global response). As shown in these two tables, our post-processing algorithms tend to yield more substantial utility enhancements for DP-SGD-based mechanisms. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response. You have addressed my major concerns and I will change my score to accept. --- Reply to Comment 1.1.1: Comment: Thank you so much for your response. We are glad to know that we have addressed your major concerns. We will make sure to include the promised changes in the revision (both in the main text and appendix).
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for taking the time and effort to review our paper! We are delighted to learn that our paper was positively received, and the reviewers found that the idea of post-processing synthetic data presented in this paper is a compelling approach that boasts potential applicability to a multitude of problems (Reviewer SorM); the method is model-agnostic, which makes the technique highly versatile (Reviewer zgZz); the idea behind the proposed method is interesting, and it should, at least in principle, be applicable to any kind of synthetic data and utility measure (Reviewer 5bmq); and the approach is very clean, and shows good promise in terms of utility and performance (Reviewer rPbc). We also recognize that the reviewers are busy handling multiple papers, so their thoughtful feedback is even more appreciated. --- We have included a PDF in this global response that contains additional experiments as suggested by the reviewers. The main differences compared with Table 1 in our submission are: [New setup]. We compare the utility of synthetic data produced by privacy mechanisms with a privacy budget of $\epsilon + 1$, against the post-processed synthetic data that are generated by applying our post-processing technique (with a privacy budget of $1$) to synthetic data generated from privacy mechanisms with a privacy budget of $\epsilon$. [Report raw F1 scores]. We report the F1 scores, both with and without our post-processing, instead of just the F1 score improvement. For your reference, the F1 scores of training on 80% real data and testing on 20% real data are: 0.61 on adult; 0.95 on mushroom; 0.54 on shopping; and 0.47 on bank. [Include two statistical measures]. We include two statistical measures (Jensen-Shannon distance and average inverse of the KL-divergence) for evaluating statistical properties of synthetic data. 
In short, our observations are in line with prior experiments, confirming that our algorithm consistently improves the utility of the synthetic data across all datasets and all DP mechanisms without degrading the performance of downstream models or statistical metrics. --- Below, we address the concerns and questions raised by each reviewer and detail our plans for updating the paper. We will add the changes in the final version (both in the main text and appendix). We welcome any additional feedback or suggestions that can further strengthen our paper and would be glad to hear from the reviewers. Thanks! Pdf: /pdf/ba8a29f8df6f363c17bb434df6f960d23d357c63.pdf
NeurIPS_2023_submissions_huggingface
2023
Symbolic Discovery of Optimization Algorithms
Accept (poster)
Summary: The paper introduces Lion, a deep learning optimizer discovered via an automatic search strategy. The paper claims that the resulting optimizer is more compute efficient than the baselines and achieves better accuracy on a variety of tasks from vision and language and model architectures. The search uses an evolutionary algorithm initialized with AdamW that gradually mutates a population of optimizers and contains several additional strategies to increase search efficiency, e.g., the removal of redundant instructions and hash-based caching, as well as funnel selection to mitigate overfitting on the proxy task. The standout discovery of the Lion optimizer is the use of a sign function in combination with a specific momentum strategy. Throughout their experiments, the authors demonstrate significant improvements, especially on larger architectures with large datasets. Strengths: - The evaluation is very thorough. The authors include several vision and language models, contrastive learning as well as diffusion models. Throughout the experiments on Transformer models, I found the results to be a significant improvement over AdamW, especially since there has not really been a significant upgrade to AdamW in recent years. I am not fully convinced by the results on ResNets and I also think that it would have been better to have a larger selection of convolutional neural networks. But given the importance of transformers in ML, I think this is still a good result. - In general, I find the idea of the paper to automatically search for an optimizer to be relevant. - The presentation is clear overall. Weaknesses: - I think the contribution is mostly technical and seems like an engineering effort, as the actual search strategy is relatively simple. For me, the results are still interesting enough for acceptance. 
- There is no code available and a lot of experiments are done on Google internal datasets such as JFT and the Imagen dataset which makes it impossible to reproduce a lot of the results in the paper. I could also not find any search hyperparameters in the appendix, e.g., the learning rate used during the search. - The search method still produces quite a lot of unnecessary statements that need to be manually removed to arrive at the final program. While I understand the complexity of the task, I would still prefer a method that produces cleaner code automatically. Being able to produce simplified optimizers during training might even improve results; for example, what would happen if one restarted the search from the Lion optimizer instead of AdamW? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Program 4 is an automatic simplification of Program 8. However, the authors further do manual simplifications to arrive at the final optimizer in program 1. I understand that the sign simplification (from the code in red) might be hard to automatically find, however, the method also includes a step to remove statements with minimal effect. Why does this removal step not remove the arcsin and clip functions if the authors manually found that their removal does not cause a quality drop? - The authors discuss that Lion needs a smaller learning rate due to the sign operation compared to AdamW. How does this lr compare to the one used during the search? Did you use a small learning rate during the search which might bias the search towards the sign operation, or does the search actually find an optimizer that is optimal for a learning rate that is orders of magnitude smaller than the one used during the search process? In the second case, do you think that searching with a smaller lr might influence the results? 
- Would it be possible to do a similar search for a second-order method, e.g., initialized with Newton's method, or do you think the additional complexity of the Hessian integration would make the search space too complex? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: On some tasks, the improvements are rather minimal. Also, the set of instructions is rather limited and does not allow for complex structures like for-loops. All significant limitations are addressed by the authors in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
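The "remove statements with minimal effect" step that the first question refers to can be illustrated with a toy greedy pruner. This is our own simplification: the paper's search operates on a register-based program representation, not on Python callables as below.

```python
def prune_redundant(program, inputs, tol=1e-12):
    """Greedily drop statements whose removal leaves the program's output
    unchanged (up to `tol`) on every probe input."""
    def run(stmts, x):
        for stmt in stmts:
            x = stmt(x)
        return x

    kept = list(program)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if all(abs(run(trial, x) - run(kept, x)) <= tol for x in inputs):
            kept = trial            # statement had no measurable effect
        else:
            i += 1
    return kept

# Example: the no-op "+ 0.0" statement is pruned; several probe inputs
# guard against a statement that is only a no-op at a single point.
program = [lambda v: v * 2.0, lambda v: v + 0.0, lambda v: v - 1.0]
pruned = prune_redundant(program, inputs=[0.0, 1.0, -3.5, 7.25])
```

As the rebuttal below notes, a statement can change behavior on the proxy task yet have no effect on larger target tasks; a pruner probing only proxy-task behavior cannot detect that, which is one reason such statements survive.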
Rebuttal 1: Rebuttal: Thank you for your constructive and insightful comments! > Program 4 is an automatic simplification of Program 8. However, the authors further do manual simplifications to arrive at the final optimizer in program 1. I understand that the sign simplification (from the code in red) might be hard to automatically find, however, the method also includes a step to remove statements with minimal effect. Why does this removal step not remove the arcsin and clip functions if the authors manually found that their removal does not cause a quality drop? This is a great question. We found three reasons why such statements remain in the program. (1) Some of the statements are added by evolution because they were helpful in some period of the search; for example, the clip function could help when the optimizers in the population are unstable and introduce very large gradients, but it becomes less helpful when the optimizer becomes inherently stable. (2) Some statements are added because they help on proxy tasks but make no difference on larger target tasks, which can be seen as a type of meta-overfitting as discussed in section 2.3 and Figure 11. (3) Some statements make no difference in quality but are still preserved due to noise in evolution; since they do not make the quality worse, there is no pressure for evolution to remove them. > The authors discuss that Lion needs a smaller learning rate due to the sign operation compared to AdamW. How does this lr compare to the one used during the search? Did you use a small learning rate during the search which might bias the search towards the sign operation or does the search actually find an optimizer that is optimal for a learning rate that is orders of magnitude smaller than the one used during the search process? In the second case, do you think that searching with a smaller lr might influence the results? 
The learning rate is represented as a searchable constant in the program and also mutated during the evolutionary search. The initial value for the learning rate is selected based on the initial program (AdamW) so there should be no bias to push it towards the sign operation. In fact, we are quite surprised by the use of the sign operation in the discovered program since prior works using sign operation usually achieve worse results than Adam. Note that, as a baseline, we also performed a hyperparameter search for AdamW on the proxy tasks with more compute and the final result is still worse than the discovered optimizer. > Would it be possible to do a similar search for a second-order method, e.g. initialized with Newton's method or do you think the additional complexity of the hessian integration would make the search space too complex? Thanks for the suggestions! Since we use programs as the representation, it is definitely possible and quite natural to expand the search space to include more operations that can compose second order methods. Since this is our first attempt in this direction, we decided to keep it simple and only include operations for first order optimizers. One potential challenge with searching second order methods would be that introducing hessian matrices or operations like outer products can make the programs more expensive to evaluate, thus increasing the compute cost.
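For readers following this exchange, the discovered update (Program 1 in the paper) is compact enough to sketch directly. The NumPy transcription below is our own, omits learning-rate schedules and framework details, and uses the commonly cited default hyperparameters; treat it as an illustration rather than the reference implementation.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: interpolate momentum and gradient, take the sign,
    add decoupled weight decay; only the momentum m is carried as state."""
    update = np.sign(beta1 * m + (1 - beta1) * g) + wd * w
    w = w - lr * update
    m = beta2 * m + (1 - beta2) * g
    return w, m

# Toy usage on f(w) = w^2: the sign makes every step the same size lr,
# which is why Lion typically needs a smaller learning rate than AdamW.
w, m = 1.0, 0.0
for _ in range(5):
    w, m = lion_step(w, 2.0 * w, m, lr=0.01)
```

The single state buffer `m` is also what underlies the memory-efficiency discussion elsewhere in the reviews.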
Summary: This paper proposes to formulate algorithm discovery as program search and discover optimization algorithms for deep neural network training. To bridge the large generalization gap between proxy and target tasks, the authors introduce program selection and simplification strategies. The proposed method discovers a simple and effective optimization algorithm called Lion, which is more memory-efficient than Adam as it only keeps track of the momentum. Lion is compared with widely used optimizers, such as Adam and Adafactor, for training a variety of models on different tasks. Empirical results demonstrate that Lion significantly outperforms these optimizers. The authors also examine the limitations of Lion and identify scenarios where its improvements are small or not statistically significant. Strengths: - The proposed method is clear and easy to understand - Empirical results are sufficient to demonstrate the superiority of the proposed method Weaknesses: - The computational resources required by the proposed search method are unaffordable for most research groups - Theoretical analysis is missing in this work Technical Quality: 3 good Clarity: 3 good Questions for Authors: Despite its sound empirical performance, I would also like to see some theoretical analysis on the searched Lion optimizer. How can it converge, and under which conditions? The authors may consider referring to [7] and see if similar convergence results can be obtained. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I suppose the key limitation of this work is its huge demand for computational power. 
For most research groups, the value of this work is solely the Lion optimizer (which is rather simple) rather than the search method, though it is certainly good to see new optimizers no matter how they are found (by humans or by machines). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Thank you for your constructive and insightful comments!

> The computation resources required by the proposed search method is unaffordable for most research groups

We acknowledge that the search cost of Lion (3K TPU V2 days) is still significant, and improving its efficiency is important future work. However, the search process is a one-time task, while the benefits of Lion are persistent and sustainable. In our evaluation on image classification, we pre-train ViT-G/14 and CoAtNet-7 on JFT, each requiring 512 TPU V4 chips for one week, equivalent to 3K TPU V4 days, or approximately 12K TPU V2 days. Lion's ability to save over 2x compute on this task compared to Adafactor already offsets the search cost. As more researchers use Lion on various tasks, these compute savings will continue to grow. Furthermore, Lion can also benefit production models, such as ads models, which require frequent training and operate at a large scale. In these cases, the advantages of Lion can be even greater and outweigh the one-time search cost by a significant margin.

> Theoretical analysis is missing in this work. Despite its sound empirical performances, I would also like to see some theoretical analysis on the searched Lion optimizer. How it can converge under which conditions? The authors may consider referring to [7] and see if similar convergence results can be obtained.

We would like to emphasize that Lion is discovered through symbolic program search without any prior favoring the sign operation or the decoupled momentum. While we acknowledge the importance of theoretical analysis, it is out of the scope of the current paper, which focuses on the automatic discovery of the algorithm. We therefore leave it for future research.
Summary: In this paper, a new optimizer is designed via program search. The search process overcomes the problem of distribution differences between small proxy tasks and large target tasks. The new optimizer converges faster and, at the same time, generalizes strongly across a wide range of models and tasks.
Strengths:
1. Applies the program-search idea of AutoML-Zero to the field of optimizer design, yielding efficient and highly general optimizers.
2. Solves the problem of a complex and sparse search space via abstract-execution pruning, warm-starting, and proxy tasks.
3. Solves the problem of the distribution difference between proxy tasks and larger tasks via funnel selection.
4. Sufficient experiments, including the validation of extensive models on different tasks.
Weaknesses:
1. The raw algorithm obtained from the search is still rather complex. It requires expert knowledge and manual effort to simplify; see the simplification of Program 4 into Program 1 for details.
2. There are no specific validation experiments or theoretical analysis of the memory efficiency of Lion compared to Adam; it is only outlined at L233-L239. The authors could use metrics, such as memory peaks, to compare the memory footprint of different optimizers.
3. The design of the search space is heuristic. In some places, the authors simply state the approach without justification, which makes it difficult to understand. For example, regarding the last of the three forms of mutation: why is the hyperparameter in the statement modified instead of replacing the statement itself?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
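Short of profiling memory peaks, the optimizer-state comparison the reviewer asks about can at least be made concrete with a back-of-the-envelope count of persistent per-parameter buffers. The buffer counts below (one for SGD with momentum and Lion, two for Adam) are standard; exact footprints depend on dtype and implementation, so this is only a sketch.

```python
def optimizer_state_bytes(n_params, optimizer, bytes_per_float=4):
    """Approximate optimizer-state memory, counting only persistent
    per-parameter buffers (excludes params, grads, and activations)."""
    buffers = {
        "sgd_momentum": 1,  # momentum buffer only
        "lion": 1,          # momentum buffer only
        "adam": 2,          # momentum + second-moment buffers
    }
    return buffers[optimizer] * n_params * bytes_per_float

# Example: a 1B-parameter model with fp32 optimizer state.
n = 1_000_000_000
print(optimizer_state_bytes(n, "lion") / 2**30)  # ≈ 3.73 GiB
print(optimizer_state_bytes(n, "adam") / 2**30)  # ≈ 7.45 GiB
```

On this count Lion halves the optimizer state relative to Adam, independent of hardware; measured peaks would additionally reflect allocator behavior and temporaries.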
Rebuttal 1: Thank you for your constructive and insightful comments!

> The original algorithm obtained from the search is still more complex. It needs expert knowledge and labor cost for simplification. See the simplification of Program 4 to Program 1 for details.

We would like to point out that a large part of the simplification procedure is already automated. For example, Program 4 (12 statements) is the result of automatic simplification of Program 8 (21 statements). This automatic simplification is done through pruning with abstract execution (more details in Section 2.2 and Appendix I). Note that we only perform the manual simplification step on the best algorithms from the funnel selection, and such simplification follows naturally from our analysis of the programs to understand them better. We agree with the reviewer that the simplification could be further automated by introducing heuristics for mathematically equivalent transformations and ablation studies, and we leave this for future work.

> No specific validation experiments or theoretical analysis of the high memory efficiency of Lion compared to adam. It is only outlined at L233-L239. The authors can use some metrics, such as memory peaks, to compare the memory footprint of different optimizers.

Lion keeps track of only the momentum, while Adam keeps track of both the momentum and the second moments, so we believe the memory efficiency is evident: there are fewer floats to keep in memory. We agree with the reviewer that metrics such as memory peaks can be helpful, but they depend on specific implementations and hardware, so we leave a comprehensive comparison across different codebases and hardware platforms for future work.

> The design for the search space is heuristic. In some places, the authors just make statements about the approach, which makes it difficult to understand. For example, the last of the three forms of mutation. Why is the hyperparameter in the statement modified instead of replacing the statement itself?

The third type of mutation (modifying a statement by randomly changing its arguments) is introduced to make it easier to apply small changes. Compared to replacing the existing statement with a randomly generated one, i.e., a replacement mutation, it preserves most of the structure and allows "fine-tuning" the arguments. For example, consider a statement where most parts are good but one argument, a float constant, needs to be tuned. In this case, the modification mutation makes it easy to mutate the value of the constant based on its current value, whereas a full replacement would make such small changes harder. Additionally, a replacement mutation can already be composed from the first two types of mutations (insertion and deletion), so we did not introduce it as a separate type. We will update the draft to make the motivation clearer.
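The three mutation types discussed above (insert a random statement, delete a statement, modify one argument of a statement) can be illustrated on a toy program representation. The `(op, args)` tuple encoding and the `random_statement`/`perturb_arg` helpers below are invented for illustration; they are not the paper's actual search-space implementation.

```python
import random

def mutate(program, rng, random_statement, perturb_arg):
    """Apply one of three mutations to a program (a list of statements).
    Each statement is an (op, args) tuple; random_statement and
    perturb_arg are supplied by the search-space definition."""
    kind = rng.choice(["insert", "delete", "modify"])
    program = list(program)
    if kind == "insert":
        program.insert(rng.randrange(len(program) + 1), random_statement(rng))
    elif kind == "delete" and program:
        program.pop(rng.randrange(len(program)))
    elif kind == "modify" and program:
        # Keep the statement's structure; perturb only one argument,
        # which allows "fine-tuning" constants as the rebuttal explains.
        i = rng.randrange(len(program))
        op, args = program[i]
        j = rng.randrange(len(args))
        args = list(args)
        args[j] = perturb_arg(args[j], rng)
        program[i] = (op, tuple(args))
    return program

# Toy usage with hypothetical statement generators.
rng = random.Random(0)
program = [("interp", (0.9, "g", "m")), ("sign", ("v",))]
child = mutate(program, rng,
               lambda r: ("mul", (r.random(), "g")),
               lambda a, r: a + r.gauss(0, 0.1) if isinstance(a, float) else a)
```

The "modify" branch is the one the reviewer asks about: it changes a single argument in place rather than discarding the whole statement, so a nearly-correct statement survives the mutation.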
Summary: The authors' work stands as an exceptional contribution to the field, with achievements that have the potential to significantly influence the landscape of optimization and beyond. Their dedication and innovative approach have yielded results that are both impressive and promising. However, alongside the many commendable aspects of the study, certain primary concerns warrant attention. In the detailed comments below, I highlight specific areas that require closer scrutiny and offer constructive feedback to strengthen the integrity and comprehensiveness of the research.
Strengths: I am pleased to see that the efficiency of the authors' method surpasses that of the renowned Adam optimizer across various tasks, which highlights the ingenuity of the approach. The impact of the work is vividly demonstrated by the results in Table 1 and Figure 1. In particular, the discovered optimizer, aptly named "Lion" and shown in Program 1, is both simple and elegant. The authors' decision to simplify Program 4 into the more efficient Program 1 is logical and reinforces the practicality of the approach.
To demonstrate the sparsity of high-performance optimizers in the search space, the authors undertook a remarkably extensive experiment, executing and evaluating 2 million randomly selected programs. Notably, none of these 2 million programs outperformed AdamW. This finding underscores the strength of AdamW as a high-performance optimizer and testifies to the authors' scientific rigor; the sheer volume of data analyzed is a valuable resource for the community and contributes to a deeper understanding of optimizer performance.
The use of meta-validation to select programs with superior generalization ability is reasonable and effective. Especially noteworthy is the technique of progressively increasing the complexity of the meta-validation task, aptly termed "funnel selection," which enhances the efficiency of the selection process. The incorporation of sign updates and regularization is also well justified, especially in the context of adversarial attacks, where techniques such as L_inf PGD attacks rely on sign-based update mechanisms; sign updates have also been employed as optimizers in prior works. The analysis in Section 3.2 is captivating, and its findings are robust and trustworthy. One fascinating observation is the correlation between model size and Lion's validation accuracy: the plausible explanation that the sign update increases uncertainty and ultimately improves the model's generalization ability opens exciting avenues for future research.
Weaknesses: One concern is the potentially handcrafted nature of the simplification and derivation process for the programs. A more elegant solution would automate this process as an inherent aspect of defining the search space, further strengthening the methodology. Sign updates have also been previously employed as optimizers in prior works. It is also worth noting that when the proposed method is combined with more potent data augmentations, the relative performance gains appear less pronounced, which highlights the need for further exploration under varying experimental conditions.
I am genuinely interested in the application of the Lion algorithm to the domain of Stable Diffusion, and on this point I feel somewhat disappointed. Training stable diffusion models is extremely resource-intensive (approximately 1 million dollars for Stable Diffusion v1), so the potential of Lion in this setting holds tremendous appeal: if the algorithm proves compatible with stable diffusion models, it could mitigate the significant training expenses and open new avenues for research and applications. I eagerly look forward to any insights or findings on the feasibility and practicality of applying Lion to stable diffusion, as this would be a valuable contribution to the community.
A further concern, common in the field of AutoML, is the efficacy of the search process: specifically, the correlation between the performance of programs on the small proxy during the search phase and their subsequent performance on the final task. In Neural Architecture Search, for example, there is well-known unease about architecture ranking: how well an architecture performs on small proxy tasks is not always indicative of its performance on larger, state-of-the-art tasks. The authors have diligently employed a comprehensive set of measures to select algorithms that generalize from small proxy tasks to larger real-world tasks. However, they do not provide a quantitative analysis of the correlation between performance on the small proxy tasks and performance on the final tasks. This omission is regrettable, as such analysis is vital for confidence in the effectiveness and reliability of the proposed AutoML techniques; bridging this analytical gap would enhance the overall trustworthiness of the methodology.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No. See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
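The review's observation that L_inf PGD attacks rely on sign-based updates, the same primitive Lion uses, can be made concrete with a two-line sketch of a PGD ascent step; the step size and perturbation budget below are illustrative placeholders.

```python
import numpy as np

def linf_pgd_step(x, grad_wrt_x, x_orig, step=0.01, eps=0.03):
    """One L_inf PGD ascent step: move along the sign of the input
    gradient, then project back into the eps-ball around x_orig."""
    x_new = x + step * np.sign(grad_wrt_x)
    return np.clip(x_new, x_orig - eps, x_orig + eps)
```

As with Lion's parameter update, taking the sign makes every coordinate move by the same magnitude, regardless of the gradient's scale.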
Rebuttal 1: Thank you for your constructive and insightful comments!

> A more elegant solution would involve automating this process as an inherent aspect of defining the search space, further enhancing the overall methodology and augmenting its potential impact.

We would like to point out that a large part of the simplification procedure is already automated. For example, Program 4 (12 statements) is the result of automatic simplification of Program 8 (21 statements). This automatic simplification is done through pruning with abstract execution (more details in Section 2.2 and Appendix I). Note that we only perform the manual simplification step on the best algorithms from the funnel selection, and such simplification follows naturally from our analysis of the programs to understand them better. We agree with the reviewer that the simplification could be further automated by introducing heuristics for mathematically equivalent transformations and ablation studies, and we leave this for future work.

> Sign updates have also been previously employed as optimizers in prior works.

We have discussed the previous works that share similarities with the discovered algorithms, such as the use of the sign operation (Section 3.2 and Appendix K). Note that none of the prior works that use sign updates showed superior results compared to Adam, and the ablation study in Appendix L demonstrates the benefit of the two linear interpolations used in Lion.

> As such, I eagerly look forward to any insights or findings that may shed light on the feasibility and practicality of applying the Lion algorithm to the challenging task of stable diffusion, as this would undoubtedly represent a valuable and impactful contribution to the scientific community.

In Section 4.3, we evaluate Lion's performance in training diffusion models on ImageNet across three resolutions: 64, 128, and 256. While this setting differs from latent diffusion, it offers encouraging evidence of Lion's performance for stable diffusion.

> More precisely, a central aspect of this concern is rooted in the correlation between the performance exhibited by programs during the search phase, specifically on the small proxy, and their subsequent performance on the final task. ... Efforts to bridge this analytical gap would undoubtedly enhance the overall understanding and trustworthiness of the AutoML methodologies employed.

We agree with the reviewer that the generalization gap between the proxy tasks and the final (target) tasks poses a great challenge. We discuss this challenge in Section 2.3. Specifically, Figure 11 in the Appendix demonstrates the meta-overfitting issue caused by this gap: programs that perform well on proxy tasks do not necessarily perform well on larger tasks. We alleviate this challenge through the funnel selection process described in Section 2.3, which uses progressively larger tasks to select the programs that are more likely to work on the final task.
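The funnel-selection process referred to throughout these exchanges can be sketched generically: candidates are filtered through a sequence of progressively larger meta-validation tasks. The scoring function and the fixed survivor fraction below are illustrative placeholders, not the paper's actual settings.

```python
def funnel_select(candidates, tasks, evaluate, keep_fraction=0.5):
    """Filter candidates through progressively larger tasks.
    tasks is ordered from cheapest proxy to largest meta-validation
    task; only the top fraction of survivors advances to the next stage."""
    survivors = list(candidates)
    for task in tasks:
        scored = sorted(survivors, key=lambda c: evaluate(c, task), reverse=True)
        n_keep = max(1, int(len(scored) * keep_fraction))
        survivors = scored[:n_keep]
    return survivors
```

Because most candidates are eliminated on the cheap early tasks, the expensive later tasks are only ever run on a small shortlist, which is what makes this shape of selection affordable while reducing meta-overfitting to any single proxy.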
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Title: Learning Curves for Deep Structured Gaussian Feature Models
Paper Decision: Accept (poster)
Summary: This work focuses on the generalization performance of models utilizing multi-layered Gaussian random features. The study evaluates the impact of feature anisotropy, which is often overlooked due to the common assumption that features are generated using independent, identically distributed Gaussian weights. The findings demonstrate that correlations within the first layer of features can enhance generalization, but any structure beyond the initial layer proves generally detrimental. These insights provide valuable perspective on how weight structure affects generalization in random feature models with linear activations.
Strengths:
1. The paper is well written, and the theorems are constructed with solid mathematical rigor.
2. The idea is interesting: how correlation between the rows of the first layer of features can improve generalization is a new theoretical result, which could be very interesting to the community.
3. Visualizations and numerical experiments accompany the theoretical results. For instance, the experiments in Figure 2 nicely summarize the theoretical findings, implying the bounds are generally non-vacuous.
Weaknesses:
1. The discussion of previous work is limited. There are many recent RFM works; although they may focus on different perspectives, such as inductive biases or behavior under SGD, it would still be valuable to briefly discuss the relationship to these works.
2. At first glance, it is hard to parse how structure can improve generalization in some of the theorems or corollaries. In particular, there are many constants, and it is non-trivial to interpret their numerical properties. Adding a simple sentence after each theorem to briefly summarize it would help readers parse the results.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors:
1. Would Figure 2 vary a lot if the number of data points were modified? In particular, would the theory behave differently in the over-parameterized versus under-parameterized regime?
2. The theorems analyze the asymptotic learning curve, whereas the numerical experiments are conducted with a relatively small number of datapoints. Will this cause a gap between the theory and the experimental results?
3. The constraint on the norm of the teacher vector (line 142) is not very intuitive. Can you briefly explain it?
4. Does the result approximately hold for settings without a closed form, such as Lasso regression?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did discuss the limitations, and there is no obvious negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: We thank the reviewer for their careful assessment of our work, and are gratified by their favorable comments.

### Weaknesses:

1. *The discussion to previous works are limited. There are many RFM works recently. Although they may focus on a different perspective, such as inductive biases or behavior under SGD, but it would still be valuable to briefly discuss the relationship to these works.*

Thank you for this suggestion. As mentioned in our common response to referees, we will add a more extensive discussion of related works on RFMs to the Introduction.

2. *At the first glimpse, it is hard to parse how the structure can improve the generalization in some of the theorem or corollary. In particular, there are many constants and it is non-trivial to interpret their numerical property. Adding a simple sentence after the theorem to briefly summarize it could help reader better parse the results.*

Thank you for this helpful suggestion. We will defer most proofs to the Appendix, and will add expository sentences to summarize each.

### Questions:

1. *Would figure 2 vary a lot if we modified the number of data points? In particular, would the theory behaves differently under over-parameterized or under-parameterized regime?*

Thank you for this question. Figure 2 shows precisely a sweep over the number of datapoints: $1/\alpha_{0}$ is the ratio of the number of datapoints to the input dimensionality. Therefore, as the abscissa of each of the panels of Figure 2 increases, the number of datapoints increases. These figures, and also the general form of our results, include the transition from the over-parameterized to the under-parameterized regime.

2. *The theorems analyze the asymptotic learning curve, where the numerical experiments are conducted under a relatively small number of datapoints. Will this cause a gap between the theory and the experimental results?*

Finite-size experiments should certainly deviate by some amount from the asymptotic approximation. As illustrated by Figure 2, the rate of convergence is sufficiently fast that the agreement is good even at these sizes.

3. *The constraint on the norm of the teacher vector (line 142) is not very intuitive. Can you briefly explain this?*

The constraint on the norm of the teacher weight vector is a matter of convenience, and amounts to a choice of units for the label noise and generalization error. The important assumption is that $\Vert \tilde{\mathbf{w}}\_{\ast} \Vert^2 / n\_{0}$ tends to a constant as $n_{0} \to \infty$. If this constant is $C$ rather than 1, the generalization error for isotropic spectra (as in Corollary 3.3) would be the same as that for $C = 1$ if we re-normalize the noise strength as $\eta^2/C$ and the generalization error as $\epsilon/C$. Thus, this convention is a matter of convenience.

4. *Does the result approximately hold for setting without closed form such as Lasso-regression?*

Thank you for this question. We have not tested other settings, but we certainly agree that extending these results to other convex regularizers would be an interesting topic for future work.

---

Rebuttal Comment 1.1: Response

Thank you for your explanation, which partially addresses the raised concerns. Overall, I think the proposed ideas are interesting, and I will keep my score.
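For readers who want to probe the finite-size agreement discussed above, a minimal single-trial experiment in the spirit of Figure 2 can be sketched with NumPy. The dimensions, depth, and noise level below are illustrative placeholders, and the isotropic-weight case shown here omits the anisotropy that is the paper's actual focus.

```python
import numpy as np

def deep_rf_test_error(n0=100, widths=(120, 120), p=80, p_test=1000,
                       eta=0.1, seed=0):
    """Test error of the minimum-norm (ridgeless) interpolator on
    depth-L linear Gaussian random features, for one random draw."""
    rng = np.random.default_rng(seed)
    # Isotropic Gaussian weight factors; structured (anisotropic)
    # weights would insert covariance factors here.
    Ws, d_in = [], n0
    for d_out in widths:
        Ws.append(rng.normal(size=(d_out, d_in)) / np.sqrt(d_in))
        d_in = d_out

    def features(X):
        F = X
        for W in Ws:
            F = F @ W.T
        return F

    w_star = rng.normal(size=n0)                # teacher, ||w*||^2 ≈ n0
    X = rng.normal(size=(p, n0))
    y = X @ w_star / np.sqrt(n0) + eta * rng.normal(size=p)
    w_hat = np.linalg.pinv(features(X)) @ y     # min-norm interpolator
    X_te = rng.normal(size=(p_test, n0))
    y_te = X_te @ w_star / np.sqrt(n0)
    return float(np.mean((features(X_te) @ w_hat - y_te) ** 2))
```

Sweeping `p` at fixed `n0` traces out one finite-size learning curve, which could then be averaged over seeds and compared against the asymptotic prediction.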
Summary: The authors investigate the exact asymptotic characterization of deep Gaussian feature models. They analyze depth-L linear random feature models in which the feature matrix is built as a product of L factors, each drawn from a matrix Gaussian distribution. Using the replica method, they compute the generalization-error learning curves for the ridge regression estimator. They study the influence of the weight structure on the test performance, presenting numerical experiments that back up the theoretical claims.
Strengths: The authors consider the interesting case of deep RFMs with weight anisotropy. They compute the performance of both the ERM and Bayes estimators in the high-dimensional proportional regime. They present numerical simulations and release the code to reproduce the main experiments.
Weaknesses: The primary weakness of this paper is the clarity of the presentation. More precisely, I believe more effort should be put into guiding a non-expert reader with a more detailed introduction and into creating connections between the different main results presented. See below for a more detailed discussion.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors:
1) The abstract is very concise. I would expand this section to help guide the reader, e.g., mention that you focus on interpolators, describe what exactly solvable model for anisotropic spectra you considered (power law), and briefly explain that the proposed formalism also covers the analysis of the Bayesian setting.
2) On a similar note, the introduction is too short. There is little explanation of why RFMs are interesting; e.g., their relation to the limiting kernel is not mentioned explicitly. The analysis of power-law spectrum decay is standard in the kernel literature and goes under the name of source-capacity conditions [1]. There is no mention of the technique used for the computation (the replica trick). Please mention why the interpolating (ridgeless) regime is interesting, with an associated reference, e.g., ref [5] in the main text. There is no citation to previous work on the Bayesian setting; the reference [42] mentioned in the conclusion could be introduced there as well. A justification should be given for the Gaussian data assumption, e.g., by referring to the extensive line of work on the Gaussian universality property, see [2].
3) Before introducing the preliminaries and the setting on page 2, please include a summary of your main contributions.
4) Why is it intractable to study the $\lambda > 0$ case? A numerical investigation of the behavior at finite regularization would, I believe, enhance the manuscript, if the problem is not intractable. If it is, please mention why in the main text.
5) The results are nice, but I believe some work is needed to glue them together. The authors might consider moving some proofs to the appendix and substituting explanations in plain words to help the reader build intuition, as the notation is quite heavy.
6) In Sec. 4, remind the reader in which theorem $(\alpha_{min},\mu)$ are defined.
7) Can the authors explain in more detail why the exponents $(\omega_1,\dots,\omega_L)$ do not affect the scaling laws if they enter only through their sum (Sec. 5)?
[1] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm, 2007.
[2] Montanari, A. and Saeed, B. N. Universality of empirical risk minimization, 2022.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The limitations are addressed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
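As one way to make the anisotropic power-law setting discussed in this review concrete, a Gaussian weight factor with a power-law covariance spectrum might be generated as below. The normalization and the diagonal-covariance convention are illustrative assumptions for this sketch, not necessarily the conventions used in the paper.

```python
import numpy as np

def powerlaw_gaussian_weights(d_out, d_in, omega, rng):
    """Gaussian weight matrix whose row covariance has eigenvalues
    decaying as a power law, lambda_k proportional to k^{-omega},
    normalized here so the average eigenvalue is 1."""
    k = np.arange(1, d_in + 1)
    spectrum = k.astype(float) ** (-omega)
    spectrum *= d_in / spectrum.sum()
    # A diagonal covariance is used for simplicity in this sketch.
    return rng.normal(size=(d_out, d_in)) * np.sqrt(spectrum)
```

Drawing each of the L factors this way, with exponents omega_1, ..., omega_L, would give a small-scale testbed for the question raised in point 7 about how the exponents enter the scaling laws.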
Rebuttal 1: Rebuttal: We sincerely appreciate the referee’s feedback on the clarity of our submitted manuscript, and will revise our manuscript in accordance with their valuable suggestions. ### Questions: 1. *The abstract is very concise. I would expand this section to help guide the reader, e.g., mention that you focus on interpolators, describe what exactly solvable model for anisotropic spectra you considered (power law), briefly explain that the proposed formalism allows as well to cover analysis of Bayesian setting. * As mentioned in our common response, we will revise the Abstract following your suggestions. 2. *On a similar note to the first point, the introduction section is too short. There is little explanation of why RFMs are interesting, e.g., it is not mentioned explicitly their relation with the limiting kernel. The analysis of power-law spectrum decay is standard in the kernel literature and goes under the name of source-capacity conditions [1]. There is no mention of the technique which is going to be used for the computation (the replica trick). Please mention why the interpolating (ridgeless) regime is interesting, with associated reference, e.g. ref [5] in the main text. There is no citation to previous work on the Bayesian settings, the reference [42] mentioned in the conclusion could be introduced as well there. A justification should be given for the Gaussian data assumption, e.g., by referring to the extensive line of work on the Gaussian Universality property, see [2].* Thank you for these suggestions. As mentioned in our common response, we will expand the Introduction to address each of these points in detail. We particularly appreciate the referee’s reference suggestions, which we unfortunately missed in the submitted manuscript. 3. 
*Please before introducing the preliminaries and the setting on page 2, include a summary of your main contributions.* As mentioned in our common response, we will add a bulleted list of our primary contributions. In short, this list is: - Using the replica method from statistical mechanics, we compute the asymptotic generalization error of deep linear random feature models with weights drawn from general matrix Gaussian distributions. - We show that, in the ridgeless limit, structure in the weights beyond the first layer is detrimental for generalization. - Focusing on the approximately solvable special case of power-law spectra in the weights and in the data, we show that the weight spectrum power laws do not affect the scaling laws of generalization. - We show how our results can be extended from the ridge regression estimator to the Bayesian Gibbs estimator. For sufficiently large prior variance, structure can be beneficial for generalization with this estimator. 4. *Why is it untractable to study the $\lambda > 0$ case? A numerical investigation of the behavior at finite regularization I believe would enhance the manuscript, if the problem is not untractable. If this is the case, please mention why in the main text.* We wholeheartedly agree that a careful investigation of the $\lambda > 0$ case would be interesting. Please see our common response regarding this concern. We reiterate from there that to the best of our knowledge a detailed numerical investigation at large depths ($L>2$) would be challenging, based on analogy with work on product random matrices. 5. *The results are nice but I believe some work is needed to glue them together. The authors may think to move in the appendix some proof and substitute them with explanations in plain words to help the reader build intuition, as the notation is quite heavy.* Thank you for this suggestion. We will move the proofs to the appendix, and use the space thus freed to add expanded discussions of each result. 
Please see our common response to comments for a detailed plan of revisions. 6. *In Sec. 4 remind the reader in which theorem are defined ($\alpha_{\mathrm{min}}, \mu$).* Thank you, we will fix this. 7. *Can the authors explain in more detail why the exponents $(\omega_{1}, \ldots, \omega_{L})$ do not affect the scaling laws if they enter only with the sum (Sec. 5)?* Thank you for this question. Our reasoning here is that the weight exponents do not affect the scaling when $\alpha_{0} \gg 1$, as in equation (32). Here, we are using the convention from Maloney et al. (2022) that the scaling law is determined asymptotically. Therefore, we discard all sub-leading contributions and also neglect the exponent-dependent constant multiplying $\alpha_{0}^{\omega_{0}}$. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I sincerely thank the authors for the rebuttal. I think that the main weakness is the clarity in the presentation and that the promised changes will strongly help the readability of the text. I would like to keep my score as in the original review.
Summary: The authors introduce correlations to weights of linear RFMs and study the generalization error. This is done under the assumption that the data follows a Gaussian distribution and a linear model. The authors provide a general expression for the limiting generalization error that recovers results from previous works as special cases. Strengths: The results presented appear to be very general and well linked to previous research. The introduction and preliminaries sections are relatively clear. Weaknesses: The proofs could be included in the supplementary material, which would allow for more text interpreting the results and providing the intuition for their importance. The authors should explain the solutions (15) and (16). The authors should explain in more detail their result, i.e. (18). The authors should be more explicit about the notation, e.g. carefully check the use of \tilde or $\kappa_0$. In my opinion, the clarity of the paper could be improved. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Do solutions (15) and (16) always exist? Can you elaborate on their form? Do you have any intuition how quickly convergence to (13) happens? Do you have any intuition about the limits when p is fixed? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful assessment of our work; we hope that our revisions to the Introduction further enhance its clarity. ### Weaknesses: *The proofs could be included in the supplementary material which would allow for more text interpreting the results and providing the intuition for their importance.* Thank you for this suggestion. We will defer all proofs to the Appendix, and expand the discussion around each result accordingly. *The authors should explain the solutions (15) and (16). The authors should explain in more detail their result, i.e. (18).* Thank you for these suggestions regarding the need to elaborate on our finite-ridge results. As mentioned in our common response, we will add more discussion on this point. Also, with regard to the interpretation of (15), we point out in Lines 108-114 of the submitted manuscript that this solution is precisely the self-consistent equation for the limiting spectral moment generating function of the kernel matrix. This object is discussed in detail in previous works on the limiting spectra of product random matrices, e.g. ref. [28] in the submitted manuscript and references therein. *The authors should be more explicit about the notation, e.g. carefully check the use of \tilde or $\kappa_{0}$.* We will carefully proofread the manuscript to correct any notational issues. *In my opinion, the clarity of the paper could be improved.* Thank you for these suggestions regarding the clarity and explicitness of our manuscript. We hope the changes mentioned in the global response and elsewhere will help address your concerns; the extra page allowed for the camera-ready version will allow us to better unpack our results. ### Questions: *Do solutions (15) and (16) always exist? Can you elaborate on their form?* Solutions to these equations should exist for all covariance matrices with well-behaved limiting spectral densities, but they may not be writable in closed form. 
In the isotropic case, (16) is easily solvable; see the proof of Corollary 3.2. More generally, it must be solved numerically, as in previous works on ridge regression. Solving (15) is more challenging still; see the work of Penson and Zyczkowski referenced in our common response, and also ref. [28] in the submitted manuscript. *Do you have any intuition how quickly convergence to (13) happen?* To our knowledge, the best error estimates in the case (13) are known in the shallow setting from work by Cheng and Montanari (https://arxiv.org/abs/2210.08571), which shows convergence of generalization error with a multiplicative error that is roughly of order $1/n^{1-\epsilon}$ for some small positive $\epsilon$. We would conjecture, but have not attempted to prove, that a similar statement would hold in our setting. *Do you have any intuition about the limits when p is fixed?* Thank you for this question. If one considers a limit in which the widths of the hidden layers $n_{1}, \ldots, n_{L}$ are taken to infinity for a fixed number of training examples $p$, then the generalization error of the deep linear model should tend simply to that of the shallow model. This is shown by the large-width expansion in Corollary 3.5. --- Rebuttal Comment 1.1: Comment: Thank you for the answers, I maintain my score.
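To illustrate the kind of numerical solution alluded to above, here is a sketch (our own illustration, not part of the paper or rebuttal) that solves the classical Marchenko–Pastur self-consistent equation by fixed-point iteration, as a simple stand-in for equations of the form (15)–(16); the function names and the choice of equation are ours:

```python
import math

def mp_fixed_point(z, gamma, iters=500):
    """Solve the Marchenko-Pastur self-consistent equation
    m = 1 / (1 - gamma - z - gamma*z*m) by fixed-point iteration,
    for real z < 0 outside the spectrum (gamma is the aspect ratio)."""
    m = -1.0 / z  # initialize from the large-|z| asymptotics m(z) ~ -1/z
    for _ in range(iters):
        m = 1.0 / (1.0 - gamma - z - gamma * z * m)
    return m

def mp_closed_form(z, gamma):
    # Root of the equivalent quadratic gamma*z*m^2 + (z + gamma - 1)*m + 1 = 0,
    # taking the branch that behaves like -1/z at infinity.
    disc = math.sqrt((1.0 - gamma - z) ** 2 - 4.0 * gamma * z)
    return ((1.0 - gamma - z) - disc) / (2.0 * gamma * z)

m_fp = mp_fixed_point(-1.0, 0.5)
m_cf = mp_closed_form(-1.0, 0.5)  # the two agree to machine precision
```

In this special case a closed form exists and the iteration can be checked against it; for the product-matrix equations discussed in the rebuttal, only the iterative route would generally be available.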
Summary: This work studies the asymptotic risk of deep linear random features models (RFMs), with a high-dimensional analysis based on the replica method. It extends previous work on this topic by relaxing the standard i.i.d. assumption for the Gaussian weights in each layer. Several consequences of their analysis are discussed: (i) several known results in linear regression, isotropic models, and infinite-width RFMs are recovered as special cases; (ii) it is shown that feature anisotropy is detrimental, in the sense that the risk of the isotropic model lower-bounds the risk of the general model; (iii) it is shown that feature anisotropy does not affect the scaling laws of the risk; (iv) going beyond ridge regression, the analysis allows for the derivation of the risk of the Gibbs estimator, where feature anisotropy is shown to be generally beneficial (resp. detrimental) for large (resp. small) prior variance. Strengths: * Sound piece of theoretical work. * Relaxing the standard feature isotropy assumption seems to be a natural extension of the recent line of work on deep RFMs. Since the learned features of deep learning models often exhibit complex correlations, it could also provide insights on the effects of feature learning of deep networks in a controlled setting. * The paper is very clearly written and enjoyable to read. Weaknesses: My main reservation regarding this paper is related to its scope -- and the significance of the results. * While the analysis presented in the paper is novel and fills a gap in the literature by relaxing a standard assumption, the technical innovations appear to be somewhat limited and incremental compared to previous work, such as Reference [28]. I believe some related references were missed, see e.g., Mel & Pennington (ICLR 2022) -- which also investigates the effect of feature anisotropy in random feature regression. 
I understand the main technical difference is that they work with shallow models with a Gaussian feature matrix, whereas the current paper works with Gaussian products (one may argue that the first setting is sufficient to capture the effects of anisotropy in RFMs). * The insights gained from this analysis also seem to have certain limitations. For instance, considering that in setting (13), the risk is studied in expectation over rotation invariant Gaussian matrices $Z_\ell$ at each layer, the result of Section 4 on the optimality of the isotropic case appears rather unsurprising to me. Moreover, I believe that the studied setting may not allow for significant insights into representation learning (not that the authors claim otherwise, but this is one of the motivations for the work, in my opinion). For example, the assumption of Gaussians requires layerwise independence, while in feature learning scenarios, one would expect learned features in different layers to be correlated with each other -- and with the underlying data structure. So from that point of view, I feel the assumptions underlying this work are still quite restrictive. On a minor related note, the general-sounding statement found in Section 1, "these results are consistent with the intuition that representation learning at only the first layer of a deep linear model is sufficient to achieve optimal performance" is a bit puzzling, as it seems to contradict known results in the topic, e.g. those on the implicit sparsity bias in deep linear networks (which requires representation learning in multiple layers, see e.g., Woodworth et al., 2020). **References** Mel & Pennington (ICLR 2022). Anisotropic Random Feature Regression in High Dimensions. https://openreview.net/forum?id=JfaWawZ8BmX. Woodworth et al., 2020. Kernel and rich regimes in overparametrized models. https://arxiv.org/abs/2002.09277. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * As the authors mention in the conclusion, Gaussian equivalence theorems could potentially extend these results to more general activations (and losses). Including this extension would considerably increase the paper's scope. Could the authors provide a bit more detail about the extent of the technical gaps to be filled to achieve this? * Could the authors comment on the comparisons with the results of Gerace et al., which seem to address the case of general RFMs with any (fixed) feature matrix F satisfying the balance condition (1.7)? **References** Gerace et al., 2020. Generalisation error in learning with random features and the hidden manifold model. https://arxiv.org/pdf/2002.09339.pdf Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Adequately acknowledged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for their careful and favorable assessment of our work. We hope that our revisions will strengthen the paper. ### Weaknesses: *My main reservation regarding this paper is related to its scope [...]* Thank you for this comment. As we acknowledge, our work is a direct application of the formalism discussed in Ref. 28, which reflects the fundamental relationship between the generalization error of kernel ridge regression and the spectral statistics of the kernel matrix. However, our solutions in the ridgeless limit and our detailed study of large-width properties are novel, even in the context of the spectral properties themselves. *I believe some related references were missed, see e.g., Mel & Pennington (ICLR 2022) [...]* We regret that we missed this reference in the submitted manuscript, and will add it to the revised version of our manuscript as part of an extended discussion of prior work on RFMs. To the best of our understanding, the analysis of Mel and Pennington allows for anisotropic input correlations while still assuming uncorrelated weights, as defined in the prose between equations (2) and (3) of their ICLR paper. Moreover, they assume isotropic regularization in the definition of the regularized kernel matrix between equations (3) and (4). Therefore, if one has even a single layer of random features with anisotropic row and column correlations, our setting is more general. *The insights gained from this analysis also seem to have certain limitations [...] the result of Section 4 on the optimality of the isotropic case appears rather unsurprising to me.* We agree that rotation-invariance makes the optimality of isotropic weight spectra intuitive; this rotation-invariance also makes it intuitive why the dependence of the generalization error on the spectra should decouple across layers. 
We propose to add the following comment to Section 4 under Lemma 4.1 to note that “The fact that we study generalization in expectation over matrix-Gaussian random features makes the optimality of isotropic spectra intuitive. Since we can represent each structured weight matrix as a product of fixed covariance matrices multiplying an unstructured, rotation invariant Gaussian matrix, it makes sense that there should be no preferred directions along which variance can be concentrated to reduce error.” We hope that this addition will make this limitation of our analysis more explicit. *Moreover I believe that the studied setting may not allow for significant insights into representation learning [...].* We agree that the setting of our work is restricted, which facilitates our analytical progress. Moreover, we agree wholeheartedly that it would be interesting to address the case of correlated weights across layers. As a first step, this could perhaps be addressed within a model in which the weights are jointly Gaussian across layers. However, even if the weights within layers are uncorrelated, this is not directly addressable within the formalism used here, which relies heavily on the assumption of layer-wise independence. This limitation is shared by other physics-style approaches to product random matrix problems. We would therefore suggest that detailed study of such a model could be an interesting topic for future work, with the results presented here as a first step towards understanding RFMs with maximally general weight correlations within and across layers. *On a minor related note, the general-sounding statement found in Section 1 [...] seems to contradict known results in the topic [...].* We acknowledge that this statement, as written, is not sufficiently precise. 
We will revise it to read, “these results are consistent with the intuition that representation learning at only the first layer of a deep linear model is sufficient to recover a single teacher weight vector.” We would appreciate the referee’s feedback on whether this revision addresses their concern, and will also cite Woodworth et al. in our updated manuscript. We note also that Woodworth et al. focus on optimization with gradient flow. ### Questions: *[...] Gaussian equivalence theorems could potentially extend these results to more general activations (and losses). [...]* Existing Gaussian equivalence theorems for deep nonlinear random feature models from Schröder et al. and Bosch et al. (both from 2023) depend on certain concentration and approximate orthogonality assumptions for feature Gram matrices (see eq. 11 of Schröder et al, https://arxiv.org/pdf/2302.00401). When the weights are independent across features, these properties can be verified. For weights drawn from a general matrix Gaussian, this does not obviously follow. To prove Gaussian equivalence for nontrivial covariance matrices, one would need to determine the class of matrices and of nonlinearities for which the required orthogonality conditions can be established. We would be happy to add a comment on this point to the discussion. *Could the authors comment on the comparisons with the results of Gerace et al [...]?* Thank you for this question. The results of Gerace et al depend on the solution to the set of self-consistent equations (2.4), which in turn depend on the limiting Stieltjes transform of the feature matrix. Thus, to apply their results to our setting, one would need to verify that the balance condition holds for Gaussian product matrices with sufficiently high probability, and then compute the Stieltjes transform. The computation of the Stieltjes transform is a bottleneck: Gerace et al. focus on settings in which it is known. 
In our case, the Stieltjes transform of the random feature matrix would be determined by the solution to a non-trivial self-consistent equation of the form (15). Therefore, our replica theory approach in principle offers an alternative route to obtain what should be the same result. We will make sure to cite Gerace et al. in our updated manuscript, and to comment on this point. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I've also read other reviews and their rebuttal. I am raising my score.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful assessment of our work. A commonly-expressed concern was that the Abstract and Introduction were too terse to provide adequate guidance and context to the reader, and that the paper would be clearer if we provided more discussion of each result. For our revised manuscript, we will take advantage of the additional space freed up by deferring some of the proofs to the Appendix and the additional content page allowed for the camera-ready version to address these concerns. In brief, we will make the following changes: - We will expand the Abstract to explicitly mention the replica formalism and its applicability to the Bayesian setting, as well as our focus on interpolators and the power-law spectrum model. (Reviewer nhrG) - We will add a more detailed description of previous work on random feature models to the Introduction, including the additional references suggested by Reviewers hUXg, nhrG, and VEZW. In particular, we will discuss the relevance of minimum-norm interpolation (i.e., the ridgeless limit $\lambda \downarrow 0$) with RFMs for questions of generalization in overparameterized models more generally, including deep networks. We will also expand our discussion of Gaussian equivalence results to better motivate our setup. - We will add a discussion of the replica method and its applications to computing the generalization error of regression models to the Introduction. (Reviewer nhrG) - We will convert our summary of contributions into a more readable bulleted list. - In Section 3, when we introduce the general form for the generalization error that results from the replica computation (Proposition 3.1), we will expand our discussion of why we focus on the ridgeless limit. See below for a detailed discussion of this point. We would like to elaborate on a common concern: our focus on ridgeless interpolators with $\lambda \downarrow 0$. 
Following the suggestions of Reviewer nhrG, we will discuss the relevance of minimum-norm interpolation with RFMs in the Introduction. In brief, recent interest in deep learning has focused on when interpolating regressors display benign overfitting properties, i.e., when they generalize well despite interpolating the training data. Moreover, we will add a discussion of the finite-$\lambda$ setting to Section 3 and to the Discussion. In brief, we agree with the referees (particularly Reviewer bdDi) that it is likely that the optimal generalization could be achieved at some non-zero value of $\lambda$. It is of course well known from previous works that the optimal ridge for shallow ridge regression, or for an RFM with a single hidden layer, is often finite and can even be negative; see for instance Wu and Xu, NeurIPS 2020, or Kobak et al., JMLR 2020. We will add a detailed discussion of this point to the updated manuscript. However, how to obtain a clean form for the optimal ridge analytically is less clear in the general-depth case. If one considers differentiating the expression for the generalization error in Proposition 3.1 with respect to the explicit ridge, one would have terms corresponding to the derivatives of each of the inverse moment generating functions with respect to the self-consistently determined variable $\zeta$, and then the implicitly determined derivative of $\zeta$ itself from equation (15). It is not immediately clear to us whether a useful simplification presents itself; we are therefore inclined to leave this question to more detailed investigation in future work. We remark also that the issue of solving (15) is related to a long line of previous work on finding the spectra of product random matrices. Here, even in the case in which all factors are isotropic, exact solutions are known only in the special case in which all factors are square (see Penson and Zyczkowski, Phys. Rev. E 2011). 
Even numerically, few detailed results are known for depths $L$ greater than 2 or 3. We therefore propose to leave a detailed numerical investigation of the optimal ridge to future work.
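The well-known shallow-setting phenomenon referenced above (cf. Wu and Xu; Kobak et al.) can be seen in a toy simulation; the setup below is our own illustration, not an experiment from the paper, with all names and parameter values purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 100, 80, 1.0                  # samples, dimension, noise level
w = rng.standard_normal(d) / np.sqrt(d)     # teacher weight vector
X = rng.standard_normal((n, d))
y = X @ w + sigma * rng.standard_normal(n)  # noisy linear teacher

def excess_risk(lam):
    # Ridge estimator w_hat = (X^T X + lam I)^{-1} X^T y; with isotropic
    # Gaussian inputs the expected excess test risk equals ||w_hat - w||^2.
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return float(np.sum((w_hat - w) ** 2))

lams = [1e-3, 1e-1, 1e1, 1e3]
risks = [excess_risk(l) for l in lams]
# With noise and n close to d, near-ridgeless regression overfits: an
# intermediate lambda achieves a strictly smaller risk than the smallest one.
```

This only illustrates the shallow case at a single draw of the data; as the response notes, characterizing the optimal ridge analytically at general depth is a separate, harder problem.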
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper derives the learning curves (generalization error vs number of samples) for random linear deep networks with Gaussian weights with non-trivial correlations. The correlations at the first layer control the performance of those networks, while all other layers (possibly of different widths) are equivalent to a single layer of minimal width among them. Strengths: * Quality: the main result that in the noiseless case, correlations at the first layer improve performance, while correlations in the other layers degrade it (Lemmas 4.1 and 4.2) is interesting (and is not fully captured by the summary “representation learning at only the first layer of a deep linear model is sufficient to achieve optimal performance”). The additional result for generalization error of scale-free correlations, which converges with previously known results for uncorrelated weights, is very nice. Weaknesses: * Quality: it is not justified why the authors focus on the “ridgeless limit”; can’t the minimal generalization error be achieved at a finite value of lambda? Why not? The model studied seems to collapse for large alpha0 (Figure 1a vs 1b), but the phenomenon is not explained. * Clarity: as a theory-heavy paper, the authors could have done a better job in keeping the notation clear. p was not properly defined; I assumed it was the number of samples. The spectral moments generating function from eq 15,17 is introduced only at eq 19. The functions (or scalars) phi and phi bar (evaluated at k0) are not introduced. The important “expectation with respect to the limiting spectral distribution” is not properly defined (“is defined in eq X of the SM” would have been fine as well). * Significance: the results of Sections 4 and 5 seem to suggest the depth of the network does not contribute anything, as only alpha_min enters the results. This is probably a limitation of the linear model, and thus the discussion on the behaviour of deep networks seems empty. 
Also, the focus on the noiseless case makes it hard to extrapolate which (if any) of the results hold under noise at an optimal finite lambda. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Can the minimal generalization error be achieved at a finite lambda value? * Is there any effect of depth when alpha_min is kept fixed? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors are clear about the limitations of their work, where only the behaviour for the ridgeless case is studied (and hence the behaviour under noise is not well understood), and only linear networks are studied (and hence the depth does not play an effect), with random initialization (and hence there is no training). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Strengths: *Quality: the main result that in the noiseless case, correlations at the first layer improve performance, while correlations in the other layers degrade it (Lemmas 4.1 and 4.2) is interesting (and is not fully captured by the summary “representation learning at only the first layer of a deep linear model is sufficient to achieve optimal performance”). The additional result for generalization error of scale-free correlations, which converge with previously known results for uncorrelated weights, is very nice.* Thank you for your careful reading of our paper, and for your positive assessment of its results. We will revise the summary sentence in the Introduction to: “These results are consistent with the intuition that representation learning at only the first layer of a deep linear model is sufficient to recover a single teacher weight vector.” We hope the referee agrees that this more clearly describes our result. ### Weaknesses: - *Quality: it is not justified why the authors focus on the “ridgeless limit”; can’t the minimal generalization error be achieved at a finite value of lambda? Why not? The model studied seems to collapse for large alpha0 (figure 1 a vs b), but the phenomena is not explained.* Thank you for this question. Please see our detailed discussion of this point in the common response to referees. The ‘collapse’ observed by the reviewer in Figure 1b reflects the fact that the generalization error of the model with power-law features diverges as $\alpha_{\ell} \downarrow 0$, which results from the fact that these spectra have diverging mean. - *Clarity: as a theory-heavy paper, the authors could have done a better job in keeping the notation clear. p was not properly defined; I assumed it was the number of samples. The spectral moments generating function from eq 15,17 is introduced only at eq 19. The functions (or scalars) phi and phi bar (evaluated at k0) are not introduced. 
The important “expectation with respect to the limiting spectral distribution” is not properly defined (“is defined in eq X of the SM” would have been fine as well).* Thank you for this suggestion. 1. We note that $p$ was defined in Line 50 of the submitted manuscript (“...with $p$ i.i.d. training examples…”). We recognize that there are varying conventions for the number of training samples in the literature, with $p$ being the common convention in physics and $n$ the usual choice for statisticians. We will add a footnote to emphasize this convention. 2. The spectral generating functions are introduced in Equation 14, under Assumption 3.1 (Lines 89-92). 3. The function $\psi$ is defined in (14) as part of Assumption 3.1. Can the reviewer point us to where we have failed to define $\phi$? We will replace Lines 98-99 of the submitted manuscript with the following clarification: “...where $\mathbb{E}\_{\tilde{\sigma}\_{\ell}}[h(\tilde{\sigma}\_{\ell})] = \lim\_{n\_{\ell} \to \infty} \frac{1}{n\_{\ell}} \sum\_{j=1}^{n\_{\ell}} h(\tilde{\sigma}\_{\ell,j})$ denotes expectation of a function $h$ with respect to the limiting spectral distribution of $\tilde{\mathbf{\Sigma}}_{\ell}$, for $\{\tilde{\sigma}\_{\ell,j}\}$ its eigenvalues.” We hope these clarifications address your concerns. We will carefully go over the paper to address further points of clarification. - *Significance: the results of sections 4 and 5 seem to suggest the depth of the network does not contribute anything, as only alpha_min enters the results. This is probably a limitation of the linear model, and thus the discussion on the behaviour of deep networks seems empty. Also, the focus on the noiseless case makes it hard to extrapolate what (if any) of the results hold under noise at an optimal finite lambda.* Thank you for this comment. It is not correct to say that only $\alpha_{\mathrm{min}}$ enters the generalization error. 
This can be seen from the formula for the generalization error given in Proposition 3.2, equation (23): in the overparameterized regime, the relative widths $\alpha_{\ell}$ for the other layers enter into the functions $\mu_{\ell}$ defined in equation (21). Most simply, consider the isotropic case given in Corollary 3.2, equation (25): in the overparameterized regime $\alpha_{0}, \alpha_{\mathrm{min}} > 1$, the generalization error has explicit, obvious dependence on $\alpha_{\ell}$. This is also true of the subsequent corollaries in Sections 4 and 5. It is true that only the narrowest layer determines whether the model is overparameterized or bottlenecked, but the other layers have a definitely non-trivial effect. We elaborate on this under your subsequent question. Regarding finite regularization, please see our common response. ### Questions: - *Can the minimal generalization error be achieved at a finite lambda value?* Please see our common response for a discussion of finite-$\lambda$ properties. In short, for the deep models we consider here it is challenging to analytically determine the optimal ridge. - *Is there any effect of depth when alpha_min is kept fixed?* Thank you for this question. As mentioned above, there is most definitely an effect of depth even if $\alpha_{\mathrm{min}}$ is held fixed. To potentially see this more clearly, consider the case in which all layers have the same width $\alpha_{1} = \alpha_{2} = \cdots = \alpha_{L}$. Then, in the overparameterized regime $\alpha_{0}, \alpha_{1} > 1$, we have $$\epsilon = \left(1 + \frac{L}{\alpha_{1} - 1}\right) \left(1 - \frac{1}{\alpha_{0}} \right) + \left(\frac{1}{\alpha_{0} - 1} + \frac{L}{\alpha_{1} -1}\right) \eta^2$$ Therefore, if one fixes $\alpha_{\mathrm{min}} = \alpha_{1}$ and increases the depth $L$, the generalization error will in this case increase linearly. 
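A minimal numerical sketch of this equal-width closed form (the helper name and parameter values below are our own illustration) confirms the linear growth in depth:

```python
def eps(L, a0, a1, eta):
    # Equal-width overparameterized generalization error from the formula above,
    # with a0 = alpha_0, a1 = alpha_1 = ... = alpha_L, and eta the noise level.
    assert a0 > 1 and a1 > 1, "valid in the overparameterized regime"
    return (1 + L / (a1 - 1)) * (1 - 1 / a0) + (1 / (a0 - 1) + L / (a1 - 1)) * eta ** 2

errs = [eps(L, a0=2.0, a1=3.0, eta=0.5) for L in (1, 2, 3)]
gaps = [errs[i + 1] - errs[i] for i in range(2)]
# the successive gaps are equal: error grows linearly in L at fixed alpha_min
```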
Conceptually, this is due to the fact that increasing depth increases the variance of the random feature kernel. We will elaborate on this point in our updated manuscript. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for your clarifications and improved presentation. I would raise my score accordingly.
null
null
null
null
null
null
Generating Behaviorally Diverse Policies with Latent Diffusion Models
Accept (poster)
Summary: The paper presents a diffusion-based approach to compressing a policy archive discovered by a Quality-Diversity RL algorithm. The diffusion model operates in the latent space of a VAE and achieves high levels of compression together with a reasonable level of reconstruction. The algorithm is evaluated on a collection of 4 Brax environments. Strengths: - Clearly written paper and description of the approach - Promising results showing high levels of compression on some Brax environments, together with good levels of reconstruction and coverage - Strong visualizations of behavior during training and different synthesized behaviors Weaknesses: - The approach studied in the paper is limited to compression of the original archive. Equally, the sequential behavior composition experiments only reproduce what was possible with the original archive. An interesting next step would be understanding if the diffusion model can generalize to novel measure vectors or language instructions. - The paper assumes very small 2-layer, 128-width MLPs trained by the QD algorithm, it is unclear if the algorithm can scale to larger and more representative networks - High loss in diversity particularly on the ant environment, in Table 1. - Line 8 in the Abstract, should clarify exactly what environments the authors see the compression ratio/coverage on Minor: - Line 42: typo in ‘uspample’ - Scale is hard to see in Figure 3, axes should also be described - The idea of compressing policies into a single diffusion model is related to [1] which compresses offline RL datasets into a single diffusion model, [1] also achieves around 13x compression. - It would be helpful to also indicate the level of compression in Table 1. [1] Synthetic Experience Replay. Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The ablations on network capacity in Table 2 are only run on humanoid which works well, it would be interesting to understand if the large loss in diversity on the ant environment can be alleviated by higher network capacity or better representation learned by the VAE. - Can the diffusion model generalize to novel conditions and improve on the original archive? - Section 3: The convolutional layers used to encode the weights and biases of the network could be less appropriate than equivariant layers such as those used in [1]. [1] Permutation Equivariant Neural Functionals. Allan Zhou, Kaien Yang, Kaylee Burns, Yiding Jiang, Samuel Sokota, J. Zico Kolter, Chelsea Finn. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the method are discussed well in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The approach studied in the paper is limited to compression of the original archive...** We agree that generalization to novel language instructions is an interesting direction for future work. However, generalizing to novel measure dimensions is not possible with the environments available. The measure functions in our policy datasets function similarly to class categories in computer vision tasks, i.e., it is not possible to generalize to novel measures, just as generative models trained on CIFAR10 cannot generalize to the CelebA dataset unless the datasets were combined a priori. However, similar to how vision models can generate images with different categories of objects drawn in the same image in different locations, we attempt to show with sequential behavior composition that different parts of a fixed-length trajectory can be filled in with arbitrarily different behaviors without catastrophic failure. Prior methods show the difficulty of this task and solve it by explicitly optimizing for sequential composition of different policies during training [1, 2]. [1] Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14. [2] Krishna, Lokesh, and Quan Nguyen. "Learning Multimodal Bipedal Locomotion and Implicit Transitions: A Versatile Policy Approach." arXiv preprint arXiv:2303.05711 (2023). **The paper assumes very small 2-layer, 128-width MLPs ...** To address larger and different types of policy networks, we believe differences may arise in the encoding and decoding of these policies. For decoding, we believe that graph hypernetworks can scale well to different types of networks. It was shown in [1] that hypernetworks can estimate the weights of larger and more representative networks. [2] further shows that these hypernetworks can estimate weights of larger MLP policies as well. 
For encoding, an alternative to CNN-based encoding is the neural functional networks described in [3]. These permutation equivariant neural functional networks seem to learn functions of CNN weights very well. [1] Knyazev, Boris, Michal Drozdzal, Graham W. Taylor, and Adriana Romero Soriano. "Parameter prediction for unseen deep architectures." Advances in Neural Information Processing Systems 34 (2021): 29433-29448. [2] Hegde, Shashank, and Gaurav S. Sukhatme. "Efficiently Learning Small Policies for Locomotion and Manipulation." In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 5909-5915. IEEE, 2023. [3] Zhou, Allan, Kaien Yang, Kaylee Burns, Yiding Jiang, Samuel Sokota, J. Zico Kolter, and Chelsea Finn. "Permutation equivariant neural functionals." arXiv preprint arXiv:2302.14040 (2023). **High loss in diversity particularly on the ant environment, in Table 1.** We believe that the loss in diversity on the ant environment is a result of there being a large number of poorly performing policies that do not share parameters with other high-performing policies in the dataset, and that this is not an issue with the proposed method itself. From the CDF plots in Figure 4, we see that the diffusion model recovers most, if not all, of the higher-performing policies, and loses diversity only where lower-performing policies are concerned. If the proposed method struggled specifically on ant for some reason, e.g., the higher-dimensional measure space, we would expect an equivalent drop in diversity along all levels of policy performance. From our GHN size ablation, we see that coverage on Ant correlates strongly with model capacity. Thus, we expect to be able to recover most of the original archive’s coverage with larger models. **Line 8 in the Abstract,..** Thank you for pointing this out. We will clarify this point in the revised manuscript. 
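The CDF reading referenced above can be made concrete with a small sketch (the reward values below are made up, not the paper's data): the plots report, for each threshold R, the fraction of policies achieving episodic reward at least R.

```python
# Minimal sketch with hypothetical rewards (not the paper's data): a reward CDF
# reports, for each threshold R, the fraction of policies with reward >= R.
def fraction_at_least(rewards, threshold):
    return sum(r >= threshold for r in rewards) / len(rewards)

archive_rewards   = [200, 450, 700, 900, 950]   # hypothetical original archive
distilled_rewards = [195, 300, 690, 910, 940]   # hypothetical distilled policies

# Near-optimal policies (say R = 850) are recovered; losses concentrate lower down.
high_orig = fraction_at_least(archive_rewards, 850)
high_dist = fraction_at_least(distilled_rewards, 850)
```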
### Questions: **The ablations on network capacity in Table 2 are only run on humanoid...** Thank you for your suggestion. We have conducted this ablation and have added it to the attached file. Scaling up the GHN size in the decoder does indeed alleviate the performance drop of our model on the Ant environment. Further, we see that the compression ratio is still reasonable at larger GHN sizes. **Can the diffusion model generalize to novel conditions and improve on the original archive?** We believe that our model can interpolate well within the bounds of the measure space described in the global response above. Unfortunately, with the current specified measures, it is not possible to generalize outside these bounds because it is not physically possible. For example, a measure of 1.2 implies 120% foot contact time with the ground. Measure functions in QD behave as discrete categorizations of behavior, similar to how image categories function in computer vision tasks. However, an interesting research direction would be to add new measure functions online to the archive, filling it in with new policies while jointly using these new policies to train the diffusion model, in order to increase its expressivity and dimensionality of the behavior space. **Section 3: The convolutional layers used to encode the weights and biases of the network could be less appropriate than equivariant layers such as those used in [1].** Thank you for this suggestion! We were not aware of this paper prior to submission and believe this would be a promising direction to improve our method’s representation capacity in the future. ### Minor weaknesses: The corrected figure has been added to the attached additional figures page. **The idea of compressing policies into a single diffusion model is related to [1] ..** Thank you for bringing this recent work to our attention. 
We agree that compressing offline RL datasets and compressing policy datasets is related, so we will add a citation to this work. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the responses to the review. I will maintain my current score.
Summary: Quality-Diversity Reinforcement Learning generates a set of policies (here called the archive) that are learned to produce varied behaviors in the environment. These archives can be large, and this paper aims to compress a previously learned archive into a single model by leveraging a conditional diffusion model. Each of the policies in the archive is first represented in a latent space by using a variational auto-encoder that reconstructs the weights and biases of the policy's neural network. Once the encoder and decoder of the policies have been trained, a diffusion model is then fit to the encoded latents, and conditioning information is used to help guide sampling. The paper explores conditioning based on the measure of a policy (where the measure is a set of functions used to split the policies into different regions of behavior space), or text descriptions of the policies. The paper shows that the learned diffusion model can generate policies that return similar rewards as the original archive in aggregate, and also have high overlap with the conditioning measure when the sampled policy is executed again in the environment. Strengths: This paper proposes an interesting application of powerful generative models to fit a Quality-Diversity archive of policies. The ability to reconstruct the full archive from a single generative model increases the practicality of QD approaches to skill discovery in reinforcement learning. The paper does a good job highlighting this contribution, and it is indeed an intriguing direction. Weaknesses: - Evaluation of the model is thorough, in that ablations and several domains are explored, however it is difficult to assess the quality of the approach given that no alternative approaches are attempted. In line 210, the paper argues that other approaches to archive distillation are not comparable because the underlying archive is different. 
I disagree: since the main contribution of this paper is a distillation method, it should be possible to compare it to other distillation methods when the archive is held fixed. - I find the metrics used to evaluate the model difficult to interpret, perhaps related to the lack of baseline. The Mean-Error in Measure (MEM) metric described by the paper is a reasonable one, but the paper does not describe what measures are used in the various tasks, so interpreting the scale of MEM is difficult. QD score is similarly difficult to interpret. Is a decrease of $0.6 \times 10^7$ QD score a meaningful one? - The experiment on sequential behavior composition is similarly difficult to interpret. Is 80% success of 4 consecutive behaviors good? Perhaps including the success rate of the original archive would be a good start. - Graphs in Figure 3 are not immediately interpretable without prior exposure to that form of QD visualization. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - I find the design of the network encoder to be strange. Why are convolutional filters applied to the weight matrix of an MLP when encoding? Is there some spatial structure there to be captured? - What dependence does the diffusion model have on the underlying archive to be compressed? - Where do text descriptions of behaviors come from? Are policies labeled after learning, or are the labels pre-specified with the desired measures? Is it easy to create this conditioning data? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the method's limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Evaluation of the model is thorough,...** There are two archive distillation methods most similar to ours, DCG-Map-Elites [1] and the QD-Transformer [2]. However, both of those methods are concurrent with ours and the code was not available for either of them at the time of submission. There are also design differences that make direct comparison difficult. The algorithm proposed in [1] jointly fills the archive and distills the policies into a descriptor-conditioned actor policy. Replacing the descriptor-conditioned actor with our diffusion model would fundamentally be a different algorithm and produce different results. We understand the reviewer’s concerns about baselines. However, to the best of our knowledge, archive distillation is a relatively novel idea that is only starting to be investigated. We hope to be a baseline for future methods. [1] Faldor, M., Chalumeau, F., Flageat, M., & Cully, A. (2023). MAP-Elites with Descriptor-Conditioned Gradients and Archive Distillation into a Single Policy. arXiv preprint arXiv:2303.03832. [2] Macé, Valentin, Raphaël Boige, Felix Chalumeau, Thomas Pierrot, Guillaume Richard, and Nicolas Perrin-Gilbert. "The Quality-Diversity Transformer: Generating Behavior-Conditioned Trajectories with Decision Transformers." arXiv preprint arXiv:2303.16207 (2023). **I find the metrics used to evaluate the model difficult to interpret,..** Please see the answer in the global response at the top. **The experiment on sequential behavior composition....** We have performed sequential behavior composition evaluations on the original archive, and obtain the same results as those reported in the paper for the generative model. There are a number of prior works [1,2] where the primary or secondary goal was to compose behaviors with explicit optimization on the distilled model. 
Therefore, we believe that 80% is an impressive success rate given that our method was not explicitly designed or optimized to compose policies sequentially. [1] Peng, Xue Bin, et al. "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions On Graphics (TOG) 37.4 (2018): 1-14. [2] Krishna, Lokesh, and Quan Nguyen. "Learning Multimodal Bipedal Locomotion and Implicit Transitions: A Versatile Policy Approach." arXiv preprint arXiv:2303.05711 (2023). **Graphs in Figure 3 are not immediately interpretable...** Please see the answer in the global response at the top. ### Questions **I find the design of the network encoder to be strange...** Deconvolutional layers are typically used for parameter generation in hypernetworks [1][2]. For symmetry, we chose to use convolutional layers for encoding. Our experiments show that convolutional layers are sufficient to perform this encoding task, which indicates that useful policy encoding does not depend on the interactions of parameters that are distant from each other in the weight matrices, a type of loose spatial structure. The use of convolutional layers has the advantage of reducing the total parameter count when compared to an MLP-based encoder. The encoder is not used during policy generation, and consequently has not been a major focus in this work. In future work, encoding may be further improved by incorporating advancements from recent work in [3]. [1] Knyazev, Boris, Michal Drozdzal, Graham W. Taylor, and Adriana Romero Soriano. "Parameter prediction for unseen deep architectures." Advances in Neural Information Processing Systems 34 (2021): 29433-29448. [2] Hegde, Shashank, and Gaurav S. Sukhatme. "Efficiently Learning Small Policies for Locomotion and Manipulation." In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 5909-5915. IEEE, 2023. [3] Zhou, Allan, Kaien Yang, Kaylee Burns, Yiding Jiang, Samuel Sokota, J. 
Zico Kolter, and Chelsea Finn. "Permutation equivariant neural functionals." arXiv preprint arXiv:2302.14040 (2023). **What dependence does the diffusion model have on ..** We believe that in order to generalize and smoothly interpolate between behaviors as shown in the sequential behavior composition task, the diffusion model requires archives with higher resolutions, on the order of thousands of policies. The “Covariance Matrix Adaptation” line of work, e.g., CMA-ME [1] and CMA-MAEGA [2], tends to favor larger archives, since having more cells results in higher-quality gradient estimates in the evolutionary adaptation component of these algorithms. Methods such as PGA-ME [3] tend to favor lower-resolution archives because they jointly optimize the entire archive and do not maintain a local search distribution that optimizes for where in the archive to explore next. We use PPGA [4] for our work, which under the hood runs CMA-MAEGA, thus favoring large archives, and has also produced SOTA results on the locomotion tasks, making it the ideal choice for use with diffusion models. [1] Fontaine, Matthew C., Julian Togelius, Stefanos Nikolaidis, and Amy K. Hoover. "Covariance matrix adaptation for the rapid illumination of behavior space." In Proceedings of the 2020 genetic and evolutionary computation conference, pp. 94-102. 2020. [2] Fontaine, Matthew, and Stefanos Nikolaidis. "Differentiable quality diversity." Advances in Neural Information Processing Systems 34 (2021): 10040-10052. [3] Lim, Bryan, Manon Flageat, and Antoine Cully. "Understanding the Synergies between Quality-Diversity and Deep Reinforcement Learning." arXiv preprint arXiv:2303.06164 (2023). [4] Batra, Sumeet, Bryon Tjanaka, Matthew C. Fontaine, Aleksei Petrenko, Stefanos Nikolaidis, and Gaurav Sukhatme. "Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning." arXiv preprint arXiv:2305.13795 (2023). 
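Returning to the reviewer's question about the convolutional encoder: a back-of-the-envelope parameter count (with hypothetical layer sizes, not the paper's exact architecture) illustrates why sliding a small kernel over a 128x128 policy weight matrix is far cheaper than flattening it into a dense encoder layer, since the kernel size is independent of the matrix dimensions.

```python
# Hypothetical sizes, for illustration only: compare the parameter cost of a
# conv layer over a policy weight matrix with a dense layer over its flattening.
h, w = 128, 128                  # one weight matrix of a 128-wide policy MLP
k, c_out = 3, 16                 # assumed conv kernel size and output channels
hidden = 256                     # assumed width of a dense alternative

conv_params = (k * k * 1) * c_out + c_out     # kernel weights + biases
dense_params = (h * w) * hidden + hidden      # flatten -> dense layer

ratio = dense_params // conv_params           # dense costs tens of thousands more
```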
**Where do text descriptions of behaviors come from?..** Please see the answer in the global response at the top. --- Rebuttal Comment 1.1: Comment: Thank you, authors, for your clarifications. My major concerns about baselines have been addressed by the clarification that other similar work is concurrent. Additional clarifications on the metrics and figures are also helpful.
Summary: This paper tried to solve the high space complexity in Quality Diversity and proposes a method that uses diffusion model to distill the archive into a single generative model based on policy parameters. Strengths: * This paper leverages the generation power of diffusion model and condenses one model instead of thousands of policies. * This paper is well structured. Weaknesses: * Quality Diversity is not well-known. It's better to include a background section instead of including basic knowledge in the related work. * It's confusing no baselines can be directly compared. The metrics can not only be rewards but also space efficiency. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Need more explanation and comparison in experiment * There are some environments for quality diversity like QD-GYM. What's the performance of the method in QD-GYM? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Quality Diversity is not well-known. It's better to include a background section instead of including basic knowledge in the related work.** We have included a brief background section in our related work section. Please see the global response for a detailed explanation of the measures we used in Quality Diversity. We would be happy to split it into its own section and extend it where space permits. **It's confusing no baselines can be directly compared. The metrics can not only be rewards but also space efficiency.** Unfortunately, possible baselines are very recent work (within 3 months of the submission), and code is not available to easily reproduce them. We show space efficiency in Table 2. We would be happy to add a note to Table 1 to point to Table 2 for details on space efficiency. ### Questions: **Need more explanation and comparison in experiment** Thank you for your feedback. Please see the global response for more details. We will add additional analyses to the experiments section, space permitting, and hope this resolves any additional questions or points of confusion the reviewer may have. **There are some environments for quality diversity like QD-GYM. What's the performance of the method in QD-GYM?** We use QDax[1], which re-implements all of the tasks available in QD-Gym in a GPU-accelerated environment. If our method were to be run on QD-Gym the results should be identical, but the experiments would take significantly longer to run. [1] Lim, Bryan, Maxime Allard, Luca Grillotti, and Antoine Cully. "Accelerated Quality-Diversity through Massive Parallelism." arXiv preprint arXiv:2202.01258 (2022).
Summary: This work presents a novel framework using latent diffusion models to distill the archive of policies into a single generative model over policy parameters. The latent diffusion model with a VAE backbone compresses the high-dimensional neural network (NN) parameters into a compact latent representation, making it possible to reconstruct policies parameterized by NNs. Further, the conditioning mechanism of diffusion models is used to flexibly generate policies with different behaviors. Strengths: 1. The author(s) use a latent diffusion model to compress the high-dimensional neural network (NN) parameters into a compact latent representation, making it possible to directly generate policies parameterized by NNs. 2. The proposed framework recovers 98% of the original rewards and 89% of the original coverage while achieving a compression ratio of 13x. Weaknesses: 1. The performance and accuracy of the proposed method show significant discrepancies when applied under text conditions. Experiments show that the success rate of the method is influenced by the selection of text labels. This suggests that the model possesses limited understanding and generation capabilities in terms of language descriptions. 2. The proposed method shows poor performance on tasks with high-dimensional measure vector, such as Ant, suggesting its limited modeling capability in high-dimensional measure spaces. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How many policies are used as the training dataset when conducting archive reconstruction experiments? 2. Do the dimensions of the measures used in each environment have explicit semantic information? If so, can the author(s) further demonstrate the model's generalization ability on it? 
(For instance, assuming that a particular dimension of the metrics represents the robot's movement speed, if the value of that dimension is set to 1.2 in the condition of the diffusion model, will the generated strategies be able to achieve speeds beyond those observed in the training dataset?) If not, how could the users employ this model to generate policies of their desired behaviors? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The author(s) only demonstrated the model's ability to reconstruct the original dataset and did not conduct further experimental demonstrations regarding its generalization capability. However, the generalization ability should be an important consideration when evaluating generative models. 2. At present, it appears that text conditioning does not show a sufficiently favorable influence within the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The performance and accuracy of the proposed method show significant discrepancies when applied under text conditions. Experiments show that the success rate of the method is influenced by the selection of text labels. This suggests that the model possesses limited understanding and generation capabilities in terms of language descriptions.** Thank you for your insights; we too found that the language-based generation capabilities are limited compared to measure-conditioned policy generation. Our hypothesis for this discrepancy lies in the fine-granularity of measure conditioning compared to the coarse-granularity of language conditioning. In order to produce large enough training data for the diffusion model, each measure is finely discretized into 100 equally spaced bins i.e. a 2D archive for a bipedal robot contains 10k cells and thus as many as 10k policies with unique measures. In practice, however, there are much fewer actual gaits than the 10,000 “unique” policies that can exist in the archive. Thus, a single language condition describing a specific high-level gait corresponds to a region of the archive, rather than a specific policy with a specific measure, as is the case with measure conditioning. This means that there is less training data for the language encoder to work with when compared with measure conditioning. Nonetheless, language conditioning is still useful since it is more intuitive than using measures, and it’s likely the case that many real world applications will not require measure-level granularity, instead opting for gait-level granularity. Improving on language-level conditioning, however, is an interesting direction of research we intend to pursue in the future. 
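The bin-count arithmetic above can be checked directly (a 2D archive with 100 bins per dimension; for comparison, a 4D measure space with 10 bins per dimension, as used for ant elsewhere in this rebuttal, yields the same maximum cell count):

```python
# Arithmetic check of the archive sizes discussed above: the maximum number of
# cells (and hence of unique policies) is bins_per_dim ** n_dims.
def max_cells(bins_per_dim, n_dims):
    return bins_per_dim ** n_dims

cells_2d = max_cells(100, 2)   # 2D archive for a bipedal robot: 10k cells
cells_4d = max_cells(10, 4)    # 4D archive with coarser bins: also 10k cells
```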
**The proposed method shows poor performance on tasks with high-dimensional measure vector, such as Ant, suggesting its limited modeling capability in high-dimensional measure spaces.** While we agree that the performance on ant is lacking compared to the other tasks, we do not believe that the dimensionality of ant’s measure space is affecting performance, since the measure dimensionality of ant (4) is not significantly higher than that of the other tasks (2). In addition, the ant measures are discretized into only 10 bins as opposed to the 100 bins in the 2D archives of the other tasks, making the total maximum policy count in the datasets the same for all tasks. The CDF plots in Figure 4 show that the diffusion model recovers most, if not all, of the higher performing policies, and loses diversity where lower performing policies are concerned. If the proposed method struggled with the higher dimensionality of the measure space, we would expect an equivalent drop in diversity along all levels of policy performance. This suggests that the ant archive dataset itself contains many different policies that have parameter variations that are difficult to distill but achieve low reward. The MEM is also not normalized by the dimensionality of the measure space. By dividing the MEM by the measure dimensionality, we can compute a MEM per dimension for Ant that has a similar value to the other environments. Finally, an ablation on GHN size for the Ant environment, provided in the added document, shows that performance can be improved with larger decoders. ### Questions: **How many policies are used as training dataset when conducting archive reconstruction experiments?** 7470 policies are used for humanoid, 9815 for half-cheetah, 8192 for walker2d, and 6180 for ant. **Do the dimensions of the measures used in each environment have explicit semantic information? If so, can the author(s) further demonstrate the model's generalization ability on it? 
(For instance, assuming that a particular dimension of the metrics represents the robot's movement speed, if the value of that dimension is set to 1.2 in the condition of the diffusion model, will the generated strategies be able to achieve speeds beyond those observed in the training dataset?) If not, how could the users employ this model to generate policies of their desired behaviors?** The dimensions do not have explicit semantic information. In our tasks, the measures are defined as the proportion of foot contact time for each leg of the robot over the entire trajectory. This inherently bounds the measure space to [0.0, 1.0] for each measure dimension. Consequently, there is no physical interpretation of a measure value of 1.2. The goal of each policy is the common RL objective for locomotion tasks, which is to move forward as fast as possible while minimizing energy consumption and avoiding termination conditions such as falling over. Thus, we cannot modulate the speed of the generated policies since the dataset contains policies that were trained to move as fast as possible. It would, however, be possible to modulate velocity, if forward progress was added as a measure function instead of as the objective. While it would be interesting to add forward progress as a measure function and experiment with out of distribution generalization in a separate QD paper, our focus in this work was archive distillation. Here, we achieve desired behaviors using measure or language-conditioning describing the behavior, where the behaviors in question correspond to different gaits. --- Rebuttal Comment 1.1: Title: Thanks detailed response Comment: I truly appreciate the author's detailed response. Thanks to that, all of my questions and concerns have been thoroughly clarified.
Rebuttal 1: Rebuttal: We thank all the reviewers for their in-depth feedback. Attached here is a pdf document with additional tables and figures. **Measures in Quality Diversity:** Thank you for the reviewers' feedback. We see that our explanation of measures is lacking. Below is a brief explanation of what measures are used. The same edits will be made to the revised manuscript. The measure functions for all tasks in our experiments are the same – each measure function measures the proportion of foot contact time for each foot on the robot. The measure functions are thus bounded to the [0, 1] interval, where 0 indicates the foot never touched the ground, and 1 indicates the foot never left the ground. We agree that QD-scores are difficult to interpret, and also have the weakness that they are sensitive to archive resolution and the scale of the objective function. However, we feel it is necessary to include the QD-score in our evaluation to allow our results to be compared with prior works in QD that include it in their evaluations. A more interpretable description of our method’s performance can be found in the CDF plots in Figure 4. CDF plots show the percentage of policies that achieve an episodic reward of R or higher, for all possible R values on the x-axis. Thus, they capture aspects of both performance and diversity, as well as how the policies are distributed with respect to performance – data which is not well represented by a scalar value. We will revise the description of Table 1 to emphasize that Figure 4 provides more interpretable evaluation results. **Visualization of the archive:** Figure 3 visualizes an implicit archive over the course of training for the diffusion model. The axes of the figure correspond to the measures described above. As the diffusion model learns to better represent the policy distribution w.r.t. 
performance and behavior, it is capable of filling more cells (where each cell corresponds to a policy with that behavior) with high performing solutions. Near-optimal policies are represented in yellow. We appreciate the feedback and will update the caption and axes of Figure 3 to better explain this. The updated figure is in the attached additional figures pdf file. **Text Labeling policies** The training text labels were generated by manually labeling policies in the original archive after the archive is generated but before training the diffusion model. We labeled policies by repeatedly inspecting a rollout for the policy farthest in parameter space from all previously labeled policies. We continued this process until we had labeled 128 of the policies, and labeled the remaining policies using nearest neighbors in parameter space. Labeling took approximately 3-5 minutes per policy on average, for a rough total of 10 hours. Most of that time was spent searching for the next policy or waiting for policies to be rendered, which could be avoided with better labeling tools. Pdf: /pdf/2a297c71650c791720953d27c9f1bdbbe037bf4a.pdf
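The CDF plots referred to above report, for every threshold R, the percentage of policies achieving an episodic reward of R or higher. A minimal sketch of that computation (illustrative names, not the paper's code):

```python
import numpy as np

def reward_ccdf(rewards, thresholds):
    """Fraction of policies achieving episodic reward >= R, for each R.

    rewards: (n_policies,) episodic rewards of the distilled policies.
    thresholds: (m,) the R values on the x-axis of the plot.
    """
    rewards = np.asarray(rewards)[:, None]        # (n, 1)
    thresholds = np.asarray(thresholds)[None, :]  # (1, m)
    return (rewards >= thresholds).mean(axis=0)   # (m,)

rewards = np.array([1.0, 2.0, 2.0, 4.0])
# All 4 policies reach R=0; 3 of 4 reach R=1.5; 1 of 4 reaches R=3; none reach R=5.
print(reward_ccdf(rewards, [0.0, 1.5, 3.0, 5.0]))
```

Plotting this curve over a dense grid of thresholds yields the CDF plot, capturing how the policies are distributed with respect to performance rather than a single scalar summary.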
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Uncertainty Quantification via Neural Posterior Principal Components
Accept (poster)
Summary: This work proposes to model the full covariance matrix of aleatoric uncertainty for per-pixel image restoration tasks using a low-rank approximation. The low-rank approximation is estimated by a separate network trained end-to-end with the original predictor. The PCs, which are usually computed sequentially, are constructed in parallel using stop_gradient operations. Strengths: The approach is fast and, as far as I know, architecture-agnostic. Weaknesses: - The approach requires training a separate model. - It is unclear how good parallel end-to-end training of the PCs is compared to sequential training. I could imagine that this type of approach yields pretty unstable training dynamics. - The covariance measured by this approach is over the entire image. I could imagine that for pixel-wise tasks there exists at least some location invariance. - Some important related work on low-rank approximations for pixel-wise aleatoric uncertainty [1] and efficient uncertainty estimation is missing (see works discussed in [2]). - Minor: in eq (2) w_i is normalized to unit length whereas e_i is not. Is this intended? [1] Monteiro, M., Le Folgoc, L., Coelho de Castro, D., Pawlowski, N., Marques, B., Kamnitsas, K., van der Wilk, M. and Glocker, B., 2020. Stochastic segmentation networks: Modelling spatially correlated aleatoric uncertainty. Advances in Neural Information Processing Systems, 33, pp.12756-12767. [2] Postels, J., Segu, M., Sun, T., Sieber, L.D., Van Gool, L., Yu, F. and Tombari, F., 2022. On the practicality of deterministic epistemic uncertainty. In Proceedings of the 39th International Conference on Machine Learning (Vol. 162, pp. 17870-17909). PMLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What speaks against directly parameterizing the conditional output distribution using a multivariate Gaussian with low-rank approximated covariance? As far as I know, you would also never have to explicitly evaluate the cov. 
- Could the authors comment on the training dynamics of the approach? How does the result compare to sequential training? - Could the authors comment on location invariance (see weaknesses)? - In l 160 you mention that the error prediction model requires the right biases to work well. Can the authors elaborate on this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
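The parallel PC construction noted in the summary (a Gram-Schmidt pass in which gradients are blocked through previously built directions, so each direction trains against frozen earlier ones) can be sketched as follows. This is a generic illustration under assumed shapes, not the authors' implementation; in plain NumPy the stop-gradient is just the identity, standing in for e.g. `detach` in an autodiff framework:

```python
import numpy as np

def stop_gradient(x):
    # Placeholder for the autodiff stop-gradient op (e.g. tensor.detach()
    # in PyTorch); with plain NumPy it is simply the identity.
    return x

def orthonormalize(raw):
    """Gram-Schmidt over K raw network outputs, with gradients blocked
    through earlier directions so all K can be optimized together.

    raw: (K, d) unnormalized direction outputs of the network.
    Returns: (K, d) matrix with orthonormal rows.
    """
    K, _ = raw.shape
    dirs = []
    for k in range(K):
        w = raw[k]
        for prev in dirs:
            p = stop_gradient(prev)       # no gradient flows into earlier PCs
            w = w - (w @ p) * p           # remove the component along p
        dirs.append(w / np.linalg.norm(w))
    return np.stack(dirs)

W = orthonormalize(np.random.default_rng(0).normal(size=(3, 8)))
print(np.round(W @ W.T, 6))  # close to the 3x3 identity: rows are orthonormal
```

Because the projections use stopped gradients, the loss for direction k does not backpropagate through directions 1..k-1, which is what allows training them jointly rather than one after another.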
Rebuttal 1: Rebuttal: **Method proposition** Our work proposes to output the top $K$ covariance eigenvectors/PCs directly **without** assuming a low-rank (LR) structure. Although we can use the resulting PCs $\mathbf{W}$ to construct an LR covariance approx., at no point do we use this assumption. Our work is inspired by PCA which is a non-probabilistic technique, unlike factor analysis (FA)/probabilistic PCA (PPCA) which explicitly assumes the posterior is Gaussian and approximates the full covariance matrix with $\boldsymbol{\Sigma}\approx \boldsymbol{\Psi} + \mathbf{W}\mathbf{W}^{\top}$ where $\boldsymbol{\Psi}$ is diagonal. In general, the two methods are distinct; the subspaces of PCA and FA are different unless $\boldsymbol{\Psi}=\sigma^2\mathbf{I}$ as in PPCA. **Missing related work** Thanks for bringing these refs to our attention. Ref [1*] is indeed related to Gaussian covariance approximations and we'll cite it in our final paper. Please note however that we did cite [25] that came out a year later in NeurIPS 21. This paper employs the same covariance approx. for image denoising rather than segmentation (a task more relevant to our work). As for ref [2*], this paper is focused on epistemic and not aleatoric uncertainty which is the focus of our work. The works dealing with aleatoric uncertainty in [2] are either referenced in our paper or focused on classification. **Why not parameterize the output using a Multivariate Gaussian (MVG) with low-rank approximated covariance?** The short answer is training stability. More specifically, the only work we are aware of using the LR MVG in image restoration is ref [25]. In principle, as you suggest, one can use this approx. in conjunction with a log-likelihood loss to impose an LR MVG on the output. In this case, the loss can be indeed calculated efficiently without explicitly evaluating the full covariance thanks to the Woodbury matrix identity and the matrix determinant lemma. 
However, MVGs with both unknown mean and covariance are known to be notoriously unstable to train (*e.g.* see https://arxiv.org/abs/1906.03260). In fact, as we noted in Appendix A (l 66-74, Figure A3), we found it numerically unstable to even train a per-pixel Gaussian distribution using the standard loss (eq. (A2)) and consequently resorted to eq. (A1) (similar to ref [2]). Please note that also in your ref [1*] the authors reported stability issues (paragraph “caveat” on pg. 5), and resorted to early stopping before overflow errors occurred. In addition, they also encountered infinite covariance issues in background areas and addressed it using masking. Therefore, this approach is limited by training stability, especially in the case of model misspecification. In contrast, NPPC outputs the top PCs of the covariance without imposing any probabilistic assumptions on the output distribution. **The approach requires training a separate model.** Please note that our approach enables training both the image prediction (*i.e.* the posterior mean) and the PCs jointly using a **single** model (see sections 3.4-3.5, eq. (7)), without breaking the task into two parts. However, as acknowledged in Appendix A (l 39-57, Figure A2), jointly learning the mean and the PCs led to less stable training and required more parameters+time to converge. Hence, for our CelebA-HQ/biological experiments, we focused on a two-step setting where the mean is pre-trained, and afterward, the result is wrapped around with NPPC predicting only the PCs and the variances. Please note that this limitation is not unique to NPPC, and is a standard strategy to stabilize variance prediction networks also for MVGs, *e.g.* as done in ref [6] (see also https://arxiv.org/abs/1906.03260). In fact, a similar strategy was adopted in ref [1*] that you pointed out, where even for the toy problem the authors reported stability issues and resorted to mean pre-training. 
**Parallel vs sequential training, and stability of training dynamics** In our experiments, when pre-training the mean we did not experience training instability. As for joint vs sequential training, this is a very good question that warrants empirical validation. To realize sequential training without significantly altering the number of parameters used to predict the PCs, we trained multiple models with an increasing number of PCs $K=1,2,\dots,5$, and compared the PCs backward across different models. The results for the task of image colorization on CelebA-HQ confirm that end-to-end training leads to approximately the same PCs as sequential training (See Figs. R1/R2 in the rebuttal PDF). Thanks for pointing this out, we will include these results in the final version. **Location invariance** Indeed, for certain tasks, there might be some location invariance. However, please note that even if the covariance structure is local (*e.g.* a Toeplitz matrix), the PCs are not necessarily localized. In our experiments, we used a fully conv. U-net, and hence our predicted PCs were equivariant to shifts in the input (*i.e.* shifted inputs -> shifted PCs). In fact, as we explained in Appendix A (l 23-27), for the biological data we learned the PCs on $64\times 64$ patches cropped from the full images, as cell information tends to be local. At test time, we tested on $2\times$ bigger patches as shown in Fig. 5 and Appendix C. Ultimately, in such cases NPPC should be either applied patch-wise or the number of PCs $K$ should be increased. We will include this point in our discussion. **In eq (2) $\mathbf{w}_i$ is normalized to unit length whereas $\mathbf{e}_i$ is not. Is this intended?** As explained in Appendix A (l 58-65), we divide by $\|\mathbf{e}_i\|_2^2$ to standardize the loss values across tasks and save expensive hyperparameter tuning. We will clarify this in the text. 
**Biases required for NPPC to work well (l 160)** The “biases” we referred to in the text are the standard inductive/implicit biases underlying common architectures (*e.g.* convs. for images, etc). Thanks, we will clarify it.
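The efficient log-likelihood evaluation mentioned above (Woodbury identity for the quadratic term, matrix determinant lemma for the log-determinant, for an LR MVG with covariance $\Sigma=\mathrm{diag}(\boldsymbol{\psi})+\mathbf{W}\mathbf{W}^{\top}$) can be sketched as follows. This is a generic NumPy illustration, not code from the paper or from ref [25]:

```python
import numpy as np

def lowrank_gauss_logpdf(x, mu, W, psi):
    """log N(x; mu, Sigma) for Sigma = diag(psi) + W W^T, without ever
    forming the d x d covariance: the quadratic term uses the Woodbury
    identity and the log-determinant the matrix determinant lemma.
    W: (d, k) low-rank factor, psi: (d,) positive diagonal entries."""
    d, k = W.shape
    e = x - mu
    Wp = W / psi[:, None]                 # Psi^{-1} W, shape (d, k)
    cap = np.eye(k) + W.T @ Wp            # k x k capacitance matrix
    # Woodbury: e^T Sigma^{-1} e = e^T Psi^{-1} e - (W^T Psi^{-1} e)^T cap^{-1} (W^T Psi^{-1} e)
    quad = e @ (e / psi) - (e @ Wp) @ np.linalg.solve(cap, Wp.T @ e)
    # Determinant lemma: log|Sigma| = log|cap| + sum(log psi)
    logdet = np.linalg.slogdet(cap)[1] + np.log(psi).sum()
    return -0.5 * (quad + logdet + d * np.log(2 * np.pi))

# Sanity check against the dense d x d computation.
rng = np.random.default_rng(0)
d, k = 6, 2
W = rng.normal(size=(d, k))
psi = rng.uniform(0.5, 2.0, size=d)
x, mu = rng.normal(size=d), np.zeros(d)

Sigma = np.diag(psi) + W @ W.T
e = x - mu
dense = -0.5 * (e @ np.linalg.solve(Sigma, e)
                + np.linalg.slogdet(Sigma)[1] + d * np.log(2 * np.pi))
fast = lowrank_gauss_logpdf(x, mu, W, psi)
print(abs(fast - dense) < 1e-8)  # -> True
```

Only $k \times k$ systems are solved, so the cost per evaluation is $O(dk^2)$ rather than $O(d^3)$ for images with $d$ pixels.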
Summary: For uncertainty quantification in image recovery problems, the authors propose a way to train a neural network to produce estimates of the principal components of the posterior covariance matrix. Their approach starts with a neural network that produces the posterior mean and trains a new network (or a new prediction head on the posterior-mean network) to learn the principal components and their corresponding eigenvalues. They demonstrate their method on denoising, inpainting, super-resolution, and biological image-to-image translation problems. They show that their method produces principal component estimates of a similar quality to those of diffusion models but thousands of times faster. Strengths: 1. For image recovery, the idea of visualizing posterior uncertainty using principal components is a good one and, to my knowledge, has not been adequately explored in the literature. The most common way to present image uncertainty information is to plot a pixel-wise variance map, but this does not show pixel dependencies, which are critical to understanding the structure of the uncertainty. And trying to visually interpret uncertainty structure from dozens of posterior image samples is tedious and heuristic. 2. The proposed method is fast at inference, since it uses only a single pass through a neural network. This stands in contrast to Langevin/score/diffusion methods, which require generating many posterior samples, each of which can require thousands of passes through a neural network. 3. The proposed method is widely applicable because it can be built on top of existing conditional-mean (i.e., MMSE) estimation networks, which are widely available. This stands in contrast to, say, conditional normalizing flows, whose architectural constraints make them difficult to apply/tune on new applications. 4. The numerical results are impressive. 
The authors have tested their method on a wide range of applications, some of which involve considerably large images (CelebA-HQ). Also, their experiments suggest that their method gives similar RMSE and residual error magnitude to recent diffusion models. 5. The paper is very clearly written. Weaknesses: 1. The authors focus on a comparison to diffusion methods, which are very slow. But modern conditional GANs and conditional normalizing flows (CNFs) can quickly generate posterior samples with performance that meets or exceeds that of recent diffusion models. For example, a 2022 CVPR super-resolution contest (https://arxiv.org/abs/2205.05675) showed CNFs dominating other methods. Compared to modern CNFs (e.g., https://arxiv.org/abs/2006.14200) or CGANs (e.g., https://arxiv.org/abs/2210.13389), it’s not clear that the proposed method has any speed or performance advantages. 2. The proposed method is a one-trick pony, in that it generates only posterior principal components, and not high-quality image recoveries like posterior sampling methods do (e.g., diffusion, CGAN, CNF, etc.). For example, the generated "x+w" face images are very blurry. This weakness is acknowledged by the authors in Section 5. 3. The proposed method seems practical for recovering only a few principal components (e.g., tens at most), whereas in some applications the uncertainty is not well described by only a few principal components. In other words, sometimes the eigenvalues of the posterior covariance decay very slowly. The authors acknowledge this issue in their face experiment. The reviewer has observed, in their own work, that slow eigenvalue decay is the rule rather than the exception. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The paper was very clear and so I have no questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Advantages of the proposed method compared to fast posterior samplers such as conditional GANs/Normalizing Flows.** Thanks, this is an important point. Please note that even in the case of a fast conditional generative model capable of sampling from the posterior using a single neural function evaluation (NFE), we still need 100 samples to faithfully perform PCA (*i.e.* at least 100 NFEs). This is as opposed to our method, which requires only a single NFE to output the PCs directly. Following your comment, to ensure we highlight this point, we will include in the final manuscript another comparison with MAT (https://arxiv.org/abs/2203.15270) on image inpainting. To the best of our knowledge, MAT is the current SotA in image inpainting on CelebA-HQ (see https://paperswithcode.com/sota/image-inpainting-on-celeba-hq). Our method achieved superior results with RMSE($\downarrow$)/Residual Error Magnitude($\downarrow$) of $10.71/9.42$ for "eyes" and $12.86/11.21$ for "mouth" compared to $11.56/10.66$ and $13.73/12.53$ for MAT, all while being $100\times$ faster. **The proposed method only provides posterior principal components and does not generate high-quality posterior samples.** Indeed, as mentioned in the discussion (Section 5), the premise of this work was uncertainty quantification rather than posterior sampling. As you correctly noted, trying to visually interpret the uncertainty structure from dozens of posterior image samples is tedious and heuristic. Therefore, people usually resort to summarizing the samples either to per-pixel variance maps or alternatively to a few principal directions of variation. With this final goal in mind, we designed our method to output the PCs directly. In principle, this weakness could be tackled by employing NPPC in the latent space of a powerful encoder; however, this is beyond the scope of this current work. 
**The proposed method is only practical for a small number of principal components which might be insufficient in some applications.** Indeed, as we acknowledged in the discussion (Section 5), for severely ill-posed inverse problems, a linear subspace with a small number of PCs captures very little of the error. As you correctly point out, we did notice this to be the case for face images on the tasks of super-resolution and inpainting. Please note, however, that on the task of image colorization presented in the supplementary (also for facial images), a small number of PCs ($K=5$) were actually able to recover large portions of the error. Hence, the practicality of predicting a few PCs to convey posterior uncertainty is eventually dataset and **task** dependent. Nonetheless, beyond a certain number of PCs, navigating the different components may become just as tedious as navigating the original posterior samples. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I think we are in agreement on all points except for one. I don't feel that it's appropriate to claim that your results are "superior" to MAT because it's not appropriate to judge the quality of inpainting by RMSE. It's well known that the RMSE metric rewards blurry reconstructions, not sharp/realistic ones, and your "x+w" reconstructions are noticeably blurry. As for the other reviews, I believe their scores are low because they didn't understand some aspects of the research problem (e.g., the lack of ground-truth on which to validate the PCs and variances). That said, in an effort to find a middle ground, I will slightly reduce my overall score from 8 to 7. --- Reply to Comment 1.1.1: Comment: Thanks for the positive score. We'd like to clarify a point regarding the RMSE of the prediction. Please note that this RMSE is computed between the GT and the mean prediction of MAT which is also blurry (the average of 100 samples). 
Of course, we completely agree that RMSE is not a good measure for inpainting. Our goal is only to compare the residual error magnitude, which is the measure that quantifies the accuracy of the estimated PCs. However, it would be unfair to report that number alone, without reporting the RMSE of the prediction, because the error is computed with respect to the prediction. Note that we could report the results of a different experiment, where we train our network to output posterior directions with respect to MAT's mean prediction, rather than with respect to our mean prediction. In this case, we seemingly don't have to report the RMSE of the prediction (as both methods are computed with respect to the same mean). However, in this case our residual error magnitude turns out to be much smaller than that achieved by computing PCA on MAT's predictions. This is because our network has a lot of error to cut from (our first PC captures some of the error that corresponds to the inaccurate mean). We would gladly report these results, but we felt this is a bit deceiving and unfair. This is the reason we chose to report the residual error of each algorithm with respect to its own mean, in which case it seems natural to also report the RMSE of the mean.
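The residual error magnitude discussed above, the measure that quantifies the accuracy of the estimated PCs, is simply the norm of the error component outside the span of the predicted PCs. A minimal sketch (assuming the PCs are stored as orthonormal columns; names are illustrative, not the authors' code):

```python
import numpy as np

def residual_error_magnitude(e, W):
    """Norm of the error left after projecting onto the K predicted PCs.

    e: (d,) error x - x_hat;  W: (d, K) with orthonormal columns.
    Returns ||e - W W^T e||_2; 0 means the PCs fully capture the error.
    """
    return np.linalg.norm(e - W @ (W.T @ e))

# If the error lies entirely in the PC subspace the residual is 0;
# any component orthogonal to the subspace survives in full.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # first two axes of R^3
print(residual_error_magnitude(np.array([3.0, 4.0, 0.0]), W))  # -> 0.0
print(residual_error_magnitude(np.array([0.0, 0.0, 2.0]), W))  # -> 2.0
```

Because the quantity depends on the error with respect to each method's own mean prediction, reporting it alone (without the prediction RMSE) would be misleading, as the rebuttal notes.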
Summary: This paper proposes using a deep neural network to predict the principal components (PCs), and the associated uncertainties, of the output directly, instead of just the most likely output. This is done by proposing a PC loss that ensures PCs are unit vectors and are orthogonal. The experimental results indicate that the proposed method performs comparably to or better than other methods on super-resolution and inpainting tasks, while using much less compute when predicting for new examples. Strengths: The paper proposes, what is to my knowledge, a novel PC-based method that leads to a more computationally efficient method for super-resolution and inpainting tasks that performs comparably to, or better than, competing techniques. The proposed technique also estimates the data (i.e. aleatoric) uncertainty by learning the variances associated with each PC. Knowing the uncertainty of the predictions could be helpful in understanding the outputs of the proposed method, which is necessary for many scientific applications. Weaknesses: The main weakness of the paper is its evaluation. The proposed method was only compared to two other methods on one dataset (CelebA-HQ). This comparison was not done on MNIST or the biological image-to-image translation dataset. The predicted PCs were not compared to the ground-truth PCs. Also, the quality of the uncertainty quantification, a key claim of the paper, was not evaluated well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could the predicted principal components and variances be compared to the ground truths for a dataset such as MNIST? It was done for an extremely simple toy dataset in the supplementary material. - Is there another way to verify the improvement in uncertainty quantification that the proposed method brings? - Could the proposed method be compared with other image-to-image translation methods on the biological imaging dataset? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation of no guaranteed generalization performance is given. Is there a potential limitation related to choosing the number of PCs to predict? Societal impact was not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The proposed method was only benchmarked on CelebA-HQ and not on MNIST/biological dataset. Could there be further comparisons on the biological dataset?** The reason we did not compare to other techniques on the MNIST/biological datasets is that to the best of our knowledge, there exist no pre-trained posterior samplers for these datasets. In all our experiments, we intentionally used only pre-trained models optimized by the respective authors to avoid introducing implementation bias. Specifically, note that for the biological dataset, the authors of [44] used a cGAN based on pix2pix in their implementation, which is known to suffer from mode collapse. Hence, this provided single point estimates for every measurement, preventing us from comparing the PCs. Having said that, in the final manuscript, we will also include further comparisons with MAT (https://arxiv.org/abs/2203.15270) on image inpainting. To the best of our knowledge, MAT is the current SotA in image inpainting on CelebA-HQ (See paperswithcode). Our method achieved superior results with RMSE($\downarrow$)/Residual Error Magnitude($\downarrow$) of $10.71/9.42$ for “eyes” and $12.86/11.21$ for “mouth” compared to $11.56/10.66$ and $13.73/12.53$ for MAT, all while being $100\times$ faster. **The predicted PCs were not compared to the ground truth (GT) PCs. Could the predicted PCs and variances be compared to the GT on MNIST?** Thanks for this important question. Please note that there exists no GT uncertainty in image restoration datasets. This is because each element in the dataset is comprised of a **single** posterior sample $\mathbf{x}_i\sim P_{X\lvert Y}(\mathbf{x}\lvert\mathbf{y}=\mathbf{y}_i)$ for each observed image $\mathbf{y}_i$. 
Therefore, as we stated in the paragraph "The challenge in posterior PCA" (l 153-166), directly computing the posterior covariance $\mathbb{E}\left[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{x}-\hat{\mathbf{x}})^\top\lvert\mathbf{y}\right]$ (or properties of the covariance such as the top $K$ eigenvectors) is practically impossible. The key implicit assumption underlying our approach (and empirical risk minimization in general) is that the posterior mean $\mu(\mathbf{y})=\mathbb{E}[\mathbf{x}\lvert\mathbf{y}]$ and the posterior covariance $\Sigma(\mathbf{y})=\mathbb{E}[(\mathbf{x}-\mu(\mathbf{y}))(\mathbf{x}-\mu(\mathbf{y}))^\top\lvert\mathbf{y}]$ vary smoothly with $\mathbf{y}$. Hence, a neural network trained on a dataset $\mathcal{D}=\left\{\left(\mathbf{x}_i,\mathbf{y}_i\right)\right\}_{i=1}^{N_d}$ will be able to capitalize on inter-sample dependencies and learn a smooth approximation of the top $K$ posterior PCs as a function of $\mathbf{y}$. Having said that, we agree that further verifying the quality of the predicted PCs may be good for reassuring the readers that our method is valid. For this purpose, following common practice (*e.g.* as done in refs [6,25]), we will include a controlled experiment by designing a toy model with a known posterior distribution $P_{X\lvert Y}$ (*i.e.* where the PCs are known ground-truth functions of $\mathbf{y}$). We'll use our approach to train a network to predict the PCs from a fixed dataset of single posterior samples $\left(\mathbf{x}_i \sim P_{X\lvert Y}(\mathbf{x}\lvert\mathbf{y}=\mathbf{y}_i), \mathbf{y}_i\right)$, and compare the result to the GT PCs over a test set of $\mathbf{y}$'s. **The predicted PCs and variances were not evaluated well. Could the predicted variances be further verified?** As mentioned above, the quality of the PCs is difficult to ascertain **directly** as there is no GT available. 
Similarly, for the $k^{\text{th}}$ predicted variance $\sigma_k^2$, we only have the norm of a single projected error $\lvert\mathbf{w}_k^{\top} \mathbf{e}_i\rvert$ per measurement $\mathbf{y}_i$, and hence no GT either. Please note that, unlike per-pixel methods, in our case we cannot compare the aleatoric uncertainty and the test error directly (*e.g.* RMSE vs. fraction of pixels above an uncertainty threshold), because our method does not assume pixels are independent (an incorrect assumption in images). Additionally, please note that we did evaluate the resulting PCs indirectly by measuring the Residual Error Magnitude $\|\mathbf{e}-\mathbf{W}\mathbf{W}^{\top}\mathbf{e}\|_2$ which is a function of the PCs subspace. Nonetheless, in addition to this result, we further verified the predicted variances by comparing the projected test error $\mathbf{w}_k^{\top}\mathbf{e}_i$ to the predicted variance $\sigma_k^2$ for every test point $\mathbf{y}_i$ (See Fig.R1/R2 in the rebuttal PDF). The results indicate that NPPC estimates the standard deviation with high accuracy (mean estimate of 0.96 vs GT of 1). We will add this quantitative validation to our final manuscript. **Is there a potential limitation related to choosing the number of PCs to predict?** Thanks, an important point. Theoretically, different measurements may require a different number of PCs $K$ depending on the posterior's complexity. In NPPC, $K$ is a hyper-parameter hard-coded within the network's architecture (it is the number of network outputs) and needs to be set in advance prior to training, acting as an upper bound on the number of possible PCs. This could be computationally inefficient, however, please note that we also predict the standard deviations along each of the $K$ PCs for each input $\mathbf{y}_i$. Therefore, if for a certain observation the standard deviations of some of the PCs are small, then the user can simply choose to ignore those PCs. 
We will touch on this point in our final manuscript. **Societal impact was not addressed** We'll comment on this in the final version. In terms of broader impact, proper uncertainty quantification is crucial for trustworthy interpretable systems, particularly in healthcare applications (*e.g.* biological data).
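The variance verification described above (comparing projected test errors $\mathbf{w}_k^{\top}\mathbf{e}_i$ to the predicted $\sigma_k$) can be illustrated on synthetic, perfectly calibrated data; all names and shapes below are assumptions for illustration, not the paper's evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 3
true_std = np.array([2.0, 0.5, 1.0])
errors = rng.normal(size=(n, d)) * true_std         # synthetic residuals e_i

# "Predicted" PCs: the first two coordinate axes for every test input,
# with the matching predicted standard deviations (a calibrated model).
pcs = np.broadcast_to(np.eye(d)[:2], (n, 2, d))     # (n, K, d), K = 2
sigmas = np.broadcast_to(true_std[:2], (n, 2))      # (n, K)

proj = np.einsum('nkd,nd->nk', pcs, errors)         # w_k^T e_i per sample
z_std = (proj / sigmas).std(axis=0)                 # standardized projections
print(z_std)  # close to [1, 1] when the sigmas are well calibrated
```

An empirical standard deviation near 1 along each PC corresponds to the "mean estimate of 0.96 vs GT of 1" style of check reported in the rebuttal.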
Summary: This work proposes to do image recovery inference using posterior principal components. The posterior principal components are often regarded as a function of the observed image, but estimating them from a single image is impossible. Therefore, this work proposes to learn the principal components directly by training a neural net on the triplets of data (observed image, ground truth and posterior mean). It learns all the principal components together using shared weights, and at the same time manages to preserve orthogonality. The experimental results show its usefulness in various tasks. Strengths: 1. The paper is very well written and organized 2. The task of the paper is very important and useful 3. Designing and training a neural net that outputs all principal components at once and maintains orthogonality is nontrivial, and the paper solved the problem quite nicely. 4. The experimental results are encouraging, as the proposed method outperforms the state-of-the-art posterior samplers Overall, I think it is a solid paper, and its idea is simple and effective Weaknesses: There is no quality control on the predicted principal component. It is not necessarily a weakness, but it seems to be a common issue for most deep-learning-based inference methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Given different observations, the posterior distribution could be quite different. In this case, do you still select the same number of principal components for each observation? 2. If you train two NNs, where the first NN outputs 5 components, and the second NN outputs 10 components. Do the top 5 components of the two NNs match each other? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **There is no quality control on the predicted principal components** Thanks for this important comment. Please note that as correctly mentioned in your summary, there exists no ground truth uncertainty in image restoration datasets. This is because each element in the dataset is comprised of a **single** posterior sample $\mathbf{x}_i\sim P_{X\lvert Y}(\mathbf{x}\lvert\mathbf{y}=\mathbf{y}_i)$ for each observed image $\mathbf{y}_i$. Therefore, as we stated in the paragraph "The challenge in posterior PCA" (l 153-166), directly computing the posterior covariance $\mathbb{E}\left[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{x}-\hat{\mathbf{x}})^\top\lvert\mathbf{y}\right]$ (or properties of the covariance such as the top $K$ eigenvectors) is practically impossible. The key implicit assumption underlying our approach (and empirical risk minimization in general) is that the posterior mean $\mu(\mathbf{y})=\mathbb{E}[\mathbf{x}\lvert\mathbf{y}]$ and the posterior covariance $\Sigma(\mathbf{y})=\mathbb{E}[(\mathbf{x}-\mu(\mathbf{y}))(\mathbf{x}-\mu(\mathbf{y}))^\top\lvert\mathbf{y}]$ vary smoothly with $\mathbf{y}$. Hence, a neural network trained on a dataset $\mathcal{D}=\left\{\left(\mathbf{x}_i,\mathbf{y}_i\right)\right\}_{i=1}^{N_d}$ will be able to capitalize on inter-sample dependencies and learn a smooth approximation of the top $K$ posterior PCs as a function of $\mathbf{y}$. Having said that, we agree that further verifying the quality of the predicted PCs may be good for reassuring the readers that our method is valid. For this purpose, following common practice (*e.g.* as done in refs [6] and [25]), we will include a controlled experiment by designing a toy model with a known posterior distribution $P_{X\lvert Y}$ (*i.e.* where the PCs are known ground-truth functions of $\mathbf{y}$). 
We'll use our approach to train a network to predict the PCs from a fixed dataset of single posterior samples $\\left(\\mathbf{x}\_i \\sim P\_{X\\lvert Y}(\\mathbf{x}\\lvert\\mathbf{y}=\\mathbf{y}\_i), \\mathbf{y}\_i\\right)$, and compare the result to the ground-truth PCs over a test set of $\\mathbf{y}$'s. **The complexity of the posterior distribution may vary across observations. Is the number of PCs fixed for every observation?** Thanks for raising this point. The short answer is yes - *i.e.* the number of predicted PCs $K$ is fixed for different observations. This is because in our approach $K$ is hard-coded within the network's architecture (it is the number of network outputs). We treat $K$ as a hyper-parameter that needs to be set in advance prior to training, acting as an upper bound on the number of possible directions. However, please note that NPPC also predicts the standard deviations along each of the $K$ PCs for each input $\\bf{y}\_i$. Therefore, if for a certain observation the standard deviations of some of the PCs are small, then the user can simply choose to ignore those PCs. We will touch on this point in our discussion. **If we train two NNs, one outputting $K=5$ PCs and one outputting $K=10$ PCs, do the top $K=5$ PCs of both models match?** That's a great experiment, thanks. We tested this on the CelebA-HQ dataset for the task of image colorization. The resulting first 5 PCs when training two NNs with 5/10 components were very similar (up to a flipped sign), with an average cosine similarity of 0.9 across the first 3 PCs, and 0.83 overall (please see Figures R1/R2 in the rebuttal PDF). For later PCs (*e.g.* the 4th and 5th) the similarity slightly drops, as the error variance along multiple PCs is roughly the same, and hence the ordering of the PCs becomes less distinct and prone to optimization errors. 
To reinforce our results, we also tested the consistency across models outputting $K=1,2,3,4,5$ PCs and found they were also backward consistent. These results further validate that NPPC consistently outputs the PCs in the correct order. We will add this important finding to the appendix and refer to it in our final manuscript. --- Rebuttal 2: Comment: I thank authors for the detailed response. It addressed all my concerns and questions. I raised my score to weak accept.
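The sign-invariant cosine-similarity comparison described in the rebuttal above (PCs match only up to a flipped sign) can be sketched as follows; this is an illustrative snippet, not the authors' code, and `pc_similarity` is a name we introduce:

```python
import numpy as np

def pc_similarity(pcs_a, pcs_b):
    """Average sign-invariant cosine similarity between two models'
    matched principal components (rows of shape-(K, d) arrays).
    PCs are only defined up to a flipped sign, so we take |cos|."""
    sims = [abs(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(pcs_a, pcs_b)]
    return float(np.mean(sims))
```

Averaging over a test set of observations then yields summary numbers like the 0.9 / 0.83 figures quoted above.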
Rebuttal 1: Rebuttal: The PDF includes 2 figures containing the results of experiments proposed by the reviewers. Pdf: /pdf/7e1f7ab4bc0ee5bb8f2c9b7bef83a2fe63205fde.pdf
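The controlled experiment proposed in the rebuttal (a toy model with a known posterior) has a standard instantiation in the linear-Gaussian case, where the posterior covariance, and hence the ground-truth PCs, are available in closed form. The snippet below is a hedged sketch of that setup, not the authors' experiment; all dimensions and names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 8, 0.5

# Toy linear-Gaussian model: x ~ N(0, Sx), y = x + n with n ~ N(0, sigma2*I).
A = rng.standard_normal((d, d))
Sx = A @ A.T + np.eye(d)  # a valid (positive-definite) prior covariance

# In this model the posterior covariance has a closed form
# (and happens to be independent of y): Sp = (Sx^{-1} + I/sigma2)^{-1}.
Sp = np.linalg.inv(np.linalg.inv(Sx) + np.eye(d) / sigma2)

# Ground-truth posterior PCs: top eigenvectors of Sp, by descending eigenvalue.
eigvals, eigvecs = np.linalg.eigh(Sp)  # eigh returns ascending order
top_pcs = eigvecs[:, ::-1][:, :3]
```

A network trained on single posterior samples from such a model can then be scored directly against `top_pcs`.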
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Regret Matching+: (In)Stability and Fast Convergence in Games
Accept (spotlight)
Summary: The paper is the first to push RM+-based algorithms beyond the $O(\sqrt T)$ regret barrier theoretically. Specifically, the authors proposed smooth RM+, conceptual RM+ and extragradient RM+ with theoretical regret guarantees. Strengths: To the best of my knowledge, this paper is the first one to establish an RM+-based algorithm with a better theoretical guarantee than $O(\sqrt T)$. Previously, although people observed $O(1)$ regret of RM+ in *most* games (there exist counterexamples that make it around $O(T^{0.7})$), there was no variant of RM+ with a better-than-$O(\sqrt T)$ regret bound. However, this paper proposes really simple and straightforward versions of RM+ and proves better regret bounds. This may partially reveal why RM+ and CFR+ have good performance in many games. Weaknesses: - The main text of the paper does not reveal much about the proof intuition. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm a bit confused about the CFR part. Does CFR equipped with the new RM+ have constant social regret? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors discussed limitations in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Please see below for our response. - _The proof intuition in the main text:_ We will make sure to provide more intuition for the proofs in the final version when an additional page is available. - _Does CFR equipped with the new RM+ have constant social regret?_ Yes, as we mentioned at the end of Section 5, the Clairvoyant CFR algorithm computes a sequence of iterates with regret at most $\epsilon$ in $O(1/\epsilon)$ iterations using $O(\log(1/\epsilon)/\epsilon)$ gradient computations. Proofs for extensive-form games are in Appendix J. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thank you for answering my questions!
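For readers unfamiliar with the base algorithm under discussion, vanilla RM+ admits a very short sketch. This is our illustrative code (stated with losses rather than payoffs), not the authors' implementation:

```python
import numpy as np

def rm_plus_strategy(R):
    """Map the clipped aggregate regret vector to a mixed strategy."""
    s = R.sum()
    return R / s if s > 0 else np.full_like(R, 1.0 / len(R))

def rm_plus_update(R, loss):
    """One RM+ step on a loss vector: accumulate instantaneous regrets
    (loss of the mixed play minus each action's loss), clip at zero."""
    x = rm_plus_strategy(R)
    return np.maximum(R + (x @ loss - loss), 0.0)
```

In self-play on a matrix game, each player feeds in the loss vector induced by the opponent's current strategy; the paper's variants then differ in how the aggregate regret vector $R$ is kept away from the origin.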
Summary: The authors study $\text{RM}^+$. They show that there are instances of loss sequences that make variants of $\text{RM}^+$ unstable. The authors point out that the decisions of $\text{RM}^+$-based algorithms are performed by normalizing an aggregate payoff vector. Hence, if one inputs an instance that is "close" to the origin, two consecutive aggregate payoffs can point in different directions in spite of being relatively close, which could make the algorithm cycle between two strategies. The authors exhibit an example establishing that the potential instability mentioned does indeed occur. The authors then present two methods to address the instability discussed, both dealing with two consecutive aggregate payoffs being too close to the origin. In one, they simply re-initialize the regret vector to some non-zero amount. The other solution is simply to ignore an area of the space near the origin by projecting onto the remaining space. Not surprisingly, the second method seems to have a better theoretical guarantee but it's more expensive to implement. Strengths: The paper is very well written. It was a pleasure to read. I had some very minor comments about the draft in regards to exposition and writing. I also like how natural their solutions are to the issue of instability. The proofs seem to be correct and everything feels very natural, which is probably due to the authors' writing. The experiments also seem reasonable and establish that their methods perform well with random synthetic data. Weaknesses: Would the authors please put their contribution in context? How "important" is it to the overall community? They have already addressed some of this in the draft, but I was hoping for a longer explanation in the rebuttal period. Some small suggestions: 1. Generally, I think citations are not appropriate for abstracts since abstracts are often read independently of the full paper, so readers may not have access to the cited sources at that stage. 
If you want to include a specific reference or highlight previous related work in the abstract, I recommend paraphrasing the information or briefly mentioning the authors' names and year of publication without the specific citation. This maintains the abstract's clarity and conciseness while still giving credit to the relevant work. I realize adding the references simply as numbers might have been done intentionally to hide the identity of the authors. 2. I'd recommend combining the sentences on lines 45 to 48 "However, in a game setting..." and "Indeed, we identify..." to avoid using the word "this" which is a bit ambiguous. 3. Would it be possible to list the references in increasing order? 4. For the sake of completeness, it would be useful if T were defined as the horizon or something like that before being used. 5. Notation-wise, I think $\Delta(3)$ should probably be $\Delta^3$ on line 169. 6. The definition of $\mathbf{z}^t$ in Theorem 4.1 was a bit unclear to me until I saw it defined in Algorithm 1. Maybe declare it before, like you did for Theorem 5.1 in expression (2). 7. On line 346, when you say "$\text{RM}^+$-based algorithms" I would add a parenthetical to clarify which algorithms are meant (either by number or name). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: For the smooth predictive RM^+, would anything be essentially different if one uses a different threshold for $||R_1||$? Now, 1 is being used, which seems very reasonable. However, how ``far away'' from 0 does one need to be to help stabilization? Like if I were to change 1 for a constant C, how does this affect the theoretical guarantees? Q2: I must be missing something, but I don't fully understand why clairvoyant CFR doesn't seem to perform as well with EFGs as it does with matrix games. If you were to use the strategic game equivalent of the EFGs tested, what is the performance? I apologize because there is probably something I'm missing. 
Q3: In your experiments, it seems the data was generated randomly. This is good to know. However, did you try to create data that would be more "difficult" like maybe a space that is an annulus? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors make an effort of what they have consider and the limitations of their work in terms of experimental results obtained for EFGs I also would like to see how the algorithm performs in "harder" spaces than simply random ones. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and valuable comments. We will revise the citations, notations, and definitions based on your suggestions. Below we answer your questions. - _How important is it to the overall community?_ Understanding how CFR+ works well in game solving is an important open question in the community. Because of the widespread use of these algorithms (for one, it was used in recent poker AI milestones, where poker AIs beat human poker players [1,2,3,4]), it has become an important open problem to explain their strong empirical behaviors, e.g. as mentioned in [5]. Apart from the regret guarantees in $O(\sqrt{T})$, prior to our work virtually nothing was known that could provide a sound theoretical argument explaining RM+ and CFR+. Although we don't fully answer this question, in this paper we point out that (in)stability plays a crucial role in their empirical performances. Lacking stability can ruin CFR+ in some simple examples. On the other hand, from the theoretical perspective, we are the first to show fast convergence of RM-based algorithms by stabilizing the algorithms. This not only greatly shrinks the discrepancy between theory (usually OMD-based) and practice (usually RM-based), but also sets a cornerstone for developing better and more robust variants of CFR+. - _Different threshold for $\|R_1\|$:_ Changing the threshold for $\| R \|_{1}$ has an impact on the Lipschitz constant of the function $g$ as defined in Proposition 1, line 159 of our submission. In particular, in our algorithms, there are 2 tunable parameters: the threshold you mentioned and the step size $\eta$. It suffices to fix the threshold and tune $\eta$ only. The theorems still hold when changing the threshold to another constant $C>0$ by adjusting $\eta$ accordingly. We will provide more details on this in the final version of our paper. 
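The threshold/restarting discussion above can be illustrated with a small sketch. The specific choice of lifting $R$ back onto the threshold sphere is ours for illustration and may differ from the paper's exact re-initialization rule:

```python
import numpy as np

def restart_if_small(R, threshold=1.0):
    """Keep the aggregate regret vector away from the origin, where the
    normalization x = R / ||R||_1 is most sensitive to perturbations.
    Rescaling onto the threshold sphere is one illustrative restart."""
    s = R.sum()  # l1-norm, since R is entrywise nonnegative after clipping
    if s >= threshold:
        return R
    if s > 0:
        return R * (threshold / s)
    return np.full_like(R, threshold / len(R))  # degenerate case: R = 0
```

Per the rebuttal, changing the constant `threshold` from 1 to some $C > 0$ only rescales the relevant Lipschitz constant, and the guarantees survive after adjusting the step size $\eta$.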
- _Clairvoyant CFR doesn't seem to perform as well with EFG as it does with matrix games:_ You are correct that Clairvoyant CFR appears to have weaker performances (Figure 3) than its counterpart ExRM+ on matrix games (Figure 2), as we also point out in the paper. We do not currently have an explanation for this. However, note that it is consistent with prior experience with some of the other $O(1/T)$ convergence-rate methods, e.g. mirror prox and optimistic mirror descent/FTRL. For those algorithms, it has been observed that they can sometimes beat RM+ on matrix games, but it is much harder to beat CFR+ on EFGs. We view this as a general open problem regarding the "hard" structure of EFGs that makes it more difficult to attain fast convergence rates than for matrix games. See e.g. [6,7] below for past observations of this phenomenon. - _Converting into strategic game equivalent:_ The equivalent strategic-form representation of an EFG is exponentially large in the EFG size, and thus not practical for most of the games that we consider (see appendix K.2 line 696 for the description of the EFGs used in the experiments). - _Did you try to create data that would be more "difficult" like maybe a space that is an annulus?_ We did not try this, but it is an interesting idea and we will attempt it for the final version. We remark that unlike extensive form games—for which established benchmark games exist—there are no clearly established normal-form benchmark games, and it seems common practice in the literature to experiment on random matrix games. We hope that we have answered your questions and addressed your concerns about the significance of our results. In light of our responses, we would like to invite you to reconsider the importance of our contributions. [1] M. Bowling, N. Burch, M. Johanson, and O. Tammelin. Heads-up limit hold’em poker is solved. Science, 347(6218):145–149, 2015. [2] N. Brown and T. Sandholm. 
Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418–424, 2018. [3] N. Brown and T. Sandholm. Superhuman AI for multiplayer poker. Science, 365(6456):885–890, 2019 [4] M. Moravcık, M. Schmid, N. Burch, V. Lisy, D. Morrill, N. Bard, T. Davis, K. Waugh, M. Johanson, and M. Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508– 513, 2017. [5] Farina, G., Kroer, C., and Sandholm, T. (2021, May). Faster game solving via predictive Blackwell approachability: Connecting regret matching and mirror descent. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5363-5371). [6] Gao, Yuan, Christian Kroer, and Donald Goldfarb. "Increasing iterate averaging for solving saddle-point problems." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. [7] Farina, Gabriele, Christian Kroer, and Tuomas Sandholm. "Optimistic regret minimization for extensive-form games via dilated distance-generating functions." Advances in neural information processing systems 32 (2019). --- Rebuttal Comment 1.1: Title: I have read the author(s)'s rebuttal comments Comment: Thank you for your comments. I have read them and I have changed my review accordingly.
Summary: The paper studies variants of the RM+ algorithm for learning in games. They show that RM+ does not satisfy stability, which is an important property for proving good regret bounds, if all players play according to the same algorithm. The paper "fixes" the instability of RM+ by considering restarting and chopping variants which stabilise RM+, and then proving O(T^{1/4}) and O(1) individual and social regret bounds respectively. Strengths: For the most part the paper is well written and easy to follow. The paper points to the importance of stability for proving sub-sqrt regret bounds, which I find interesting. Weaknesses: In my opinion Sec 5 is too dense and thus it lacks clarity. I would suggest putting only the main technical result there, or giving a skimmed version in which the more technical assumptions are deferred to the appendix. I think that 4 statements with little discussion cannot convey clarity. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It seems that for the restarting trick you cannot prove O(1) social regret. Is this because restarting may happen at different times for different players? If so I think you should include a larger discussion on this. Moreover in the abstract it seems like both restarting and chopping would give O(1) social regret. I think you should state explicitly in the abstract that this only holds for the chopping variant. 2. Do you conjecture that O(1) social regret is only hard to prove for the restarting method, or that it is not reached? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper. We now answer your question and remarks on the manuscript. - Clarity: in the final version we will use the additional page to provide more details on the results in Section 5 (Conceptual RM+/ExRM+). This will improve the overall clarity of the paper. - Social regret for restarting (theory): we believe that it is hard to prove that restarting achieves $O(1)$ social regret; this is precisely for the reason that you give: restarting may happen asynchronously across players. We are not ready to make a conjecture about whether this is true or not. We will provide more details on this in the revised version. We will also clarify the abstract. - Social regret for restarting (experiments): in our numerical experiments, we observed that restarting yields $O(1)$ social regret on the hard $3 \times 3$ matrix game instance (see Figure 7 in Appendix K). The numerical performances from Figure 2 also suggest that this is the case for random matrix game instances. However we did not search extensively over matrix game instances to try to disprove this statement, so we do not make any claims on this matter in the paper. Overall, we would like to emphasize that our paper is the first to provide some explanations on the very strong performances of RM+ and CFR+, which usually converge much faster than their $O(\sqrt{T})$ theoretical guarantees and routinely outperform theoretically stronger algorithms. Because of the widespread use of these algorithms (it was used in recent poker AI milestones, where poker AIs beat human poker players [1,2,3,4]), it has become an important open problem to explain their strong empirical behaviors, e.g. as mentioned in [5]. Apart from the regret guarantees in $O(\sqrt{T})$, prior to our work virtually nothing was known that could provide a sound theoretical argument explaining RM+ and CFR+. 
In light of this, we would like to invite you to reconsider the importance of our contributions, which identify the instability of RM+ as a key issue for proving stronger convergence rates, and to which we propose two practical solutions, restarting and clipping, leading to state-of-the-art convergence rates and strong practical performances for matrix games. [1] M. Bowling, N. Burch, M. Johanson, and O. Tammelin. Heads-up limit hold’em poker is solved. Science, 347(6218):145–149, 2015. [2] N. Brown and T. Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418–424, 2018. [3] N. Brown and T. Sandholm. Superhuman AI for multiplayer poker. Science, 365(6456):885–890, 2019 [4] M. Moravcık, M. Schmid, N. Burch, V. Lisy, D. Morrill, N. Bard, T. Davis, K. Waugh, M. Johanson, and M. Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508– 513, 2017. [5] Farina, G., Kroer, C., and Sandholm, T. (2021, May). Faster game solving via predictive Blackwell approachability: Connecting regret matching and mirror descent. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5363-5371). --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns! I'll raise my score accordingly
Summary: This paper studies the RM+ algorithm and its variants in the context of learning in games. - They first show that RM+ and predictive RM+ can be unstable in certain environments. Although the unstable player benefits from instability, the regret of other players may blow up because they are forced to make predictions in an unstable environment. - Then they provide several approaches to stabilize RM+. - The first approach, called **stable predictive RM+**, requires each player to (asynchronously) restart whenever $\mathbf{R}^t_i$ becomes small. Although this stabilizes the algorithm and guarantees $O(T^{1/4})$ regret for each player, it's hard to bound the social regret by $O(1)$. - The second approach is called **smooth predictive RM+**. It chops off the area of decision space that is too close to the origin by adding projection steps. Besides individual regret bounds, this approach also guarantees $O(1)$ social regret. - They also propose **Conceptual RM+** (and the one-step fixed-point approximation of it) that achieves $O(1)$ individual regret and **Extragradient RM+** that achieves $O(1)$ social regret. - The authors conduct experiments on matrix games and extensive-form games. Strengths: - The paper contributes to the theoretical understanding of RM+ and its variants. It is the first one to show the instability of RM+ and predictive RM+ by concretely constructing a hard case that exhibits the $T^{-0.5}$ convergence rate. - The paper proposes several variants of RM+ that not only have appealing theoretical guarantees on both individual regret and social regret, but also perform well in experiments. - The paper is very well-written. The authors clearly explain the intuition behind the theoretical results. Weaknesses: - In section 3, the authors construct a hard case for RM+ and predictive RM+ where the empirical convergence rate is on the order of $T^{-0.5}$. However, the residuals of the linear fit in Figure 1 are quite large. 
Providing a theoretical proof that the asymptotic convergence rate is indeed $T^{-0.5}$ would strengthen the result. - Given the connection between RM+ and OMD in [12] and the fast convergence properties of OMD in games, the performance guarantee for the stabilized variants of RM+ do not seem surprising. Nonetheless, as mentioned by the authors, RM+-based algorithms are not inherently stable, so they need extra considerations. - The experiments show the strong performances of the proposed algorithms compared to RM+ and PRM+. It would be better if the authors also compared their performance with the optimistic/predictive FTRL/OMD algorithms that enjoy fast convergence rates in theory. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the proof of Theorem 3.1, the authors constructed an example where the precision of the losses increases with $T$ (the minimum value is exponentially small), a scenario that may not practically appear. - Is this phenomenon of increasing precision the fundamental reason of instability, or is it merely an artifact of the proof? - Does this same phenomenon appear in the subsequent experiments involving the $3\times3$ matrix game? - If a bounded precision assumption were introduced, could this potentially stabilize RM+ or PRM+? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
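The $T^{-0.5}$ empirical rate questioned above is the kind of estimate obtained from a least-squares line in log-log space; a minimal sketch follows, with a synthetic curve standing in for the measured duality gap (illustrative only, not the paper's data):

```python
import numpy as np

def empirical_rate(T, gap):
    """Estimate alpha in gap ~ C * T**(-alpha) by least squares in
    log-log space; returns the fitted exponent alpha."""
    slope, _intercept = np.polyfit(np.log(T), np.log(gap), 1)
    return -slope

# Illustrative check on a synthetic curve with a known rate of 0.5:
T = np.arange(10, 1000, dtype=float)
gap = 2.0 * T ** -0.5
```

The residuals of such a fit (obtainable via `np.polyfit(..., full=True)`) quantify how well a single power law describes the curve, which is exactly the reviewer's concern.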
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper. - We have run additional numerical experiments to compare Optimistic OMD (OOMD) and Optimistic FTRL (OFTRL) with ExRM+, Stable and Smooth PRM+. We ran these algorithms for the $3 \times 3$ matrix game instance and for $10$ random matrix game instances of size $15 \times 15$, with a step size of $\eta = 0.1$ for all algorithms. We present the empirical results in Figure 1 in the PDF attached to our response common to all reviewers. We found that ExRM+, Smooth and Stable PRM+ perform on par with Optimistic FTRL and Optimistic OMD, two methods that achieve the theoretical state-of-the-art guarantees for solving two-player zero-sum games. - _Question about the diminishing norm of the loss:_ We believe that this is simply an artifact of the proof, as we do not observe this phenomenon for our $3 \times 3$ matrix game instance. We have verified this with additional numerical experiments, by computing the $\ell_{2}$-norm of the losses faced by each player at every iteration and the minimum absolute value of the coefficients of the losses faced by each player at every iteration. We refer to Figure 2 in the attached PDF document (in the responses common to all reviewers). As you can note, we found that these two quantities become on the order of $10^{-1}$ after $10^3$ iterations but do not converge to $0$. Therefore we do not believe that a "bounded precision assumption" would stabilize RM+ or PRM+. - Finally, you say "_Given the connection between RM+ and OMD in [12] and the fast convergence properties of OMD in games, the performance guarantee for the stabilized variants of RM+ do not seem surprising._" We disagree with this framing of our paper. While it is true that [12] shows a connection to OMD, it has been an open problem since [12] whether (predictive) RM+ can achieve fast convergence as with optimistic OMD. 
Whether our result is "surprising" or not is of course debatable (and perhaps not a good way to evaluate it one way or the other), but we do believe that this was a significant problem that had been open for a few years. Reducing it to "not surprising due to OMD" is too dismissive, in our view. Moreover note that, in a sense, we resolve the PRM+ open problem in the *negative*, since our counterexamples show that in fact PRM+ does not achieve a faster rate. This is in spite of the fact that PRM+ is also equivalent to optimistic OMD (on a different feasible set than our stabilized variants). Only through our stabilizing ideas is a faster rate possible. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for running the additional experiments. My concerns have been adequately addressed, and I will retain my original score of 7.
Rebuttal 1: Rebuttal: We thank all the reviewers. We responded to each of your questions in individual comments. Here we upload a PDF of two figures to support those responses. Specifically, in the PDF, we plotted: - Figure 1: Comparison of Optimistic OMD, Optimistic FTRL, ExRM+, Stable and Smooth PRM+ on random matrix instances and our hard $3 \times 3$ matrix game instance. - Figure 2: Minimum absolute value of the losses of each player and $\ell_{2}$-norm for the loss of each player, for the $3 \times 3$ hard matrix game instance. Pdf: /pdf/15d23b3e72244b6f97555ee4e36373c68f6ed8da.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper focuses on investigating the stability of the Regret Matching+ based algorithm. It demonstrates that both RM+ and predictive RM+ algorithms exhibit instability, leading to significant regret for other players within a game setting. To address this issue, the authors propose two methods: restarting and chopping off. By incorporating these methods, the authors introduce several variants of RM+-based algorithms, including stable/smooth Predictive RM+, conceptual RM+, and Extragradient RM+. The paper also provides theoretical regret bounds for these proposed algorithms, further establishing their efficacy in mitigating instability and reducing regret. Strengths: One of the key contributions of this paper is the theoretical speedup provided for the RM+ based algorithm, which is crucial for achieving fast convergence in games. By offering improved convergence guarantees, the proposed advancements in the algorithm can significantly enhance its practical utility. The writing in the paper is well-organized and clear, effectively conveying the technical details. The strength of the paper's technical aspects further adds to its overall quality, making it more accessible and understandable for readers. The comprehensive experimental results presented in the paper further enhance its credibility. Notably, the authors also consider the application of the algorithm in extensive form games, which is important for real-world scenarios. This consideration highlights the practical relevance and versatility of the proposed techniques. Overall, the paper provides valuable theoretical insights, demonstrates technical proficiency, and showcases the practical applicability of the proposed algorithm, making it a strong contribution to the field. Weaknesses: Just a small point about presentation. It would be beneficial for the authors to consider organizing the algorithms discussed in the paper by creating a table. 
This table can help provide a clear overview of the different algorithms, highlighting their similarities and differences. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The theorem needs $\eta$ to be small. However, in the matrix game experiment, $\eta$ is chosen as $10$. Is $\eta = 10$ too big in this setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper. In the final version of the paper, we will add a table to summarize the different regret guarantees for the algorithms introduced in this paper. Regarding your question: you are right that the stepsizes we use in the experiments are generally larger than what the theory requires. This is a commonly-found phenomenon, that the theoretical stepsizes are too conservative, see e.g. the related literature in [10,25]. We will make sure to emphasize this more in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response!
null
null
null
null
null
null
Mitigating Source Bias for Fairer Weak Supervision
Accept (poster)
Summary: This work found that unfair LFs in programmatic weak supervision could introduce bias to the resultant training labels, and proposed to address the bias via source bias mitigation with theoretical guarantees. Experimental results show the effectiveness of the approaches on both synthetic and real datasets. Strengths: 1. This work studies an important yet overlooked problem in programmatic weak supervision: the biases induced by unfair LFs. 2. The proposed method, compatible with traditional fair ML methods, could mitigate biases and improve the performance at the same time. 3. The theoretical results are convincing, which show that the LF bias could be arbitrary yet can be fixed by the proposed method under some conditions. Weaknesses: I am not aware of any major weakness except that the proposed model, if I understand it correctly, is a new label model built on an existing one, which means users have to use the proposed label model if they want to mitigate the biases. It is unclear how biases should be mitigated if users prefer another choice of label model. In terms of label models that incorporate feature vectors, one related work is missing: "Leveraging instance features for label aggregation in programmatic weak supervision" Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: It is not intuitive how the improvement of fairness and performance can be achieved at the same time. Usually, there is a trade-off between performance and fairness in ML models; could you explain why and how this is not the case in this work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: see weakness above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the thoughtful comments and reference. We will add the suggested paper in our updated draft. * **On the choice of label models**: A remarkable property of our label model is that **it is only used for one step in our overall pipeline: mitigating source bias**. Once this step is complete, any other label model can be used for aggregating sources into pseudolabels. Our WRENCH experiment (Table 5) is conducted with the newly-introduced Hyper Label Model [1] after applying SBM. * **On the trade-off between performance and fairness in ML models**: We discuss this question in our common response and provide more detail here. The trade-off between performance and fairness in machine learning models is usually caused by inherent fairness violations in datasets. Models trained on such datasets are unfair, and so fairness techniques seek to mitigate this, but ultimately face constraints. On the other hand, **our method tackles bias induced by creating labeled datasets via weak supervision**. Weak supervision allows for the efficient creation of labeled datasets by using multiple noisy sources of signal for labels. In addition to being noisy, these sources are often far more biased than hand-labeled datasets. As a result, a dataset created through weak supervision may prove to be highly unfair even if its counterpart with ground-truth labels was perfectly fair. This problem is also an opportunity. Our approach is to model where bias is induced within weak supervision sources, remove this bias, and as a result, improve both fairness and performance, while keeping the advantage of using weak supervision---exploiting cheap label sources omnivorously. [1] Wu, Renzhi, et al. "Learning Hyper Label Model for Programmatic Weak Supervision.", ICLR 2022. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: I read the authors' rebuttal and other reviews. The authors nicely addressed my questions. 
I would vote for accept and keep my score.
Summary: * The paper studies the problem of unfairness introduced by weak supervision methods through noisy data augmentation. It proposes a mitigation strategy using counterfactuals that would create a more balanced/unbiased dataset. * The paper shows that labeling functions can be arbitrarily biased by preferring examples away from the center of the distribution, and shows theoretically that there is no change in the sample complexity required to achieve the gains in accuracy and fairness, under strict assumptions. * Empirical results show that by further augmenting data that counterfactually transforms examples across groups, both accuracy and fairness metrics can be improved - with results on synthetic and popular benchmark fairness and weak supervision datasets. Strengths: * Strong empirical results on synthetic and real datasets * New problem formulation that extends fairness methods to the weak supervision case Weaknesses: * Theoretical results are under strong distributional assumptions; how they can be relaxed should be better articulated * Standard data augmentation techniques such as Autolabel [1] are missing from the evaluation setup. https://ieeexplore.ieee.org/abstract/document/10136178 Technical Quality: 3 good Clarity: 3 good Questions for Authors: (see weaknesses) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is missing, and should be included Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions and noting the strengths of our work. We have added the suggested reference. * **On modeling assumptions**. Indeed, our label model _may appear_, at first glance, to require strong assumptions. However, in fact it **has weaker assumptions compared to previous weak supervision models.** We describe the differences versus two popular weak supervision models next. Common models use one of the following assumptions: **i) Ising model assumption [1, 2, 3]**. Here, the model is given by $$P(\lambda^1, \ldots, \lambda^m,y) = \frac{1}{Z}\exp(\theta_y y + \sum_{j=1}^m \theta_j \lambda^j y)$$ Here, the $\lambda$'s are labeling functions, $y$ is the latent true label, $Z$ is a partition function (a normalizing constant), the $\theta$'s are the model parameters. This label model is based on the Ising pairwise interactions model. In this model, parameters are estimated globally, which means that **the label model requires uniform accuracy in the entire feature space---a strong assumption.** In other words, regardless of the point $x \in \mathcal{X}$ whose label is being estimated, the assumption states that the labeling function accuracies are identical. Naturally, this is not a realistic assumption. Indeed, in practice, it has been observed that LF accuracy strongly depends on the input features, as certain parts of the feature space are invariably more challenging to label than others. **ii) Strong smoothness leading to a partitionable feature space [4]** $$P_x(\lambda^1, \ldots, \lambda^m, y) = \frac{1}{Z}\exp(\theta_y y + \sum_{j=1}^m \theta_{j,x} \lambda^j(x) y)$$ This model assumes that accuracy depends on the input feature $x$. In fact, in its most basic form, each $x$ has a separate model (with parameters given by $\theta_x$). However, since we can observe only one sample for each $x$, recovering parameters $\theta_{j, x}$ is impossible. 
Thus the authors in [4] instead assume strong smoothness: that is, **that the feature space can be partitioned and the accuracy in each partition is uniform.** This is still a strong assumption. While this model incorporates the input feature space, accuracy nevertheless tends to drop as data points get far from high-accuracy centers---even within parts. This observation led to our newly-proposed model. **iii) Proposed model** $$P(\lambda^1(z), \ldots, \lambda^m(z), y) = \frac{1}{Z}\exp\left(\theta_y y + \sum_{j=1}^m \frac{\theta_{j}}{1+d(x^{\text{center}_j}, g_k(z))} \lambda^j(g_k(z)) y\right)$$ Our label model captures the phenomenon in Figure 2 of our draft, which shows that the accuracy of each LF drops as it moves away from its center. This means that our model neither requires universal uniformity, as (i) does, nor per-part uniformity, as (ii) does. It makes the much weaker assumption of the existence of a center point where LFs are most accurate---which indeed matches practical LF scenarios, especially for LFs that are programs expressing heuristics. As a result, **our model not only has less restrictive assumptions, but also provides a framework that interprets unfairness as drift away from centers as an outcome of a group transformation.** [1] Ratner, Alexander, et al. "Snorkel: Rapid training data creation with weak supervision." VLDB 2017. [2] Fu, Daniel, et al. "Fast and three-rious: Speeding up weak supervision with triplet methods." ICML 2020. [3] Ratner, Alexander, et al. "Training complex models with multi-task weak supervision." AAAI 2019. [4] Chen, Mayee F., et al. "Shoring up the foundations: Fusing model embeddings and weak supervision.", UAI 2022.
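To make the distance-decayed weighting of model (iii) concrete, here is a toy sketch of how each LF's vote weight $\theta_j / (1 + d(x^{\text{center}_j}, x))$ could be computed and aggregated. The function names and the simple weighted-majority aggregation are our illustrative assumptions, not the paper's actual estimator:

```python
import numpy as np

def lf_weights(thetas, centers, x):
    """Each LF's vote weight decays with distance from its high-accuracy center,
    mirroring theta_j / (1 + d(center_j, x))."""
    x = np.asarray(x, dtype=float)
    return np.array([t / (1.0 + np.linalg.norm(np.asarray(c, dtype=float) - x))
                     for t, c in zip(thetas, centers)])

def weighted_vote(lf_outputs, weights):
    """Aggregate LF outputs in {-1, +1} by a distance-weighted majority vote."""
    return 1 if float(np.dot(weights, lf_outputs)) >= 0 else -1
```

In this toy setting, an LF whose center is close to the query point dominates the vote, while a distant LF contributes little, which is exactly the accuracy-drop phenomenon the model encodes.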
Summary: This paper proposes a novel bias mitigation technique to address the fairness issues in weak supervision settings. The core idea is to use a counterfactual fairness-based correction method. The authors theoretically show that the proposed method can improve both accuracy and fairness, and this is also supported by evaluations on both synthetic and real datasets. Overall, this paper calls attention to bias and fairness studies in the weak supervision context, which has never been addressed specifically, and provides an effective and theoretically sound method. To the best of my knowledge, the method considered in this paper is novel and is a valuable contribution to the community. Based on the above factors, I recommend acceptance. Strengths: This paper is well-motivated and well-written. The proposed method is intuitively simple yet very effective and theoretically sound. I appreciate the comprehensive evaluation both with synthetic data and with WRENCH. Weaknesses: This paper focuses on a novel area of weak supervision studies. Though the method proposed is simple, I do not believe it has any major weaknesses. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I welcome the authors to discuss the potential drawbacks of formulating the fairness argument behind counterfactual fairness, particularly in the programmatic weak supervision settings. Given diverse sources, wouldn't the biases be smoothed out? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Overall, this paper is well organized and I have not identified any apparent limitations in the current content. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for noting the strengths of our work and providing useful questions and comments. * **On the potential drawbacks of the proposed method in the programmatic WS settings**: A possible drawback of the proposed method is that it requires a high-quality estimate of the Monge mapping (or any counterfactual mapping). Fortunately, as we described in the general response, in typical weak supervision settings we have access to massive unlabeled datasets, which enable reliable estimation in these procedures. Better yet, we have found that our method works well even with relatively small datasets ($n \leq 10^4$, e.g. the Wrench Mushroom dataset ($n=6499$)). If the available unlabeled data is genuinely insufficient, our technique may struggle. To address this, we note that there are ways to overcome this limitation as well. For example, there are techniques that can dramatically reduce the amount of data that is needed in optimal transport. Such techniques involve the use of _keypoints_ (pairs of matched points) [1]. Domain experts could craft such pairs, enabling optimal transport, and ultimately our entire pipeline, to operate in the very low data regime. * **On the possibility that diverse label sources resolve unfairness naturally**: As mentioned in our common response, this appears to be a plausible approach towards mitigating unfairness. Unfortunately, diverse sources may not in fact behave in this way---further motivating the use of our approach. For example, our synthetic experiment in A.1. shows that diverse label sources can make fairness _worse_. In this setting, label sources that have high accuracy centers around the major clusters have higher weights in voting as an outcome of label model learning in weak supervision. Thus, increasing the number of LFs (diversifying) can be beneficial in finding such centers, and yields improved performance. 
However, it makes disadvantaged groups perform worse, since they are dislocated from high accuracy centers, leading to unfairness. SBM not only resolves such issues, but also further improves the performance gain by bringing back the dislocated data points around the high accuracy centers. [1] Gu, Xiang, et al. "Keypoint-guided optimal transport with applications in heterogeneous domain adaptation.", NeurIPS 2022.
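For intuition on the counterfactual (Monge) mapping estimation discussed in this response, here is a minimal numpy sketch of a linear optimal-transport map between two groups under a Gaussian approximation. This closed-form Gaussian construction is only in the spirit of a linear OT estimator; the function name, the regularization, and the Gaussian assumption are our illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_monge_map(X_src, X_tgt, eps=1e-6):
    """Closed-form Monge map between Gaussian approximations of two groups:
    T(x) = m2 + A (x - m1), with A = C1^{-1/2} (C1^{1/2} C2 C1^{1/2})^{1/2} C1^{-1/2}."""
    m1, m2 = X_src.mean(0), X_tgt.mean(0)
    d = X_src.shape[1]
    # Regularize covariances slightly so the matrix roots/inverses are well-defined.
    C1 = np.cov(X_src, rowvar=False) + eps * np.eye(d)
    C2 = np.cov(X_tgt, rowvar=False) + eps * np.eye(d)
    C1h = np.real(sqrtm(C1))
    C1h_inv = np.linalg.inv(C1h)
    A = C1h_inv @ np.real(sqrtm(C1h @ C2 @ C1h)) @ C1h_inv
    return lambda x: m2 + (x - m1) @ A.T
```

Applying the returned map to source-group points pushes their empirical mean and covariance onto those of the target group, which is the kind of group-to-group alignment a counterfactual mapping provides.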
Summary: The paper focuses on the bias issue in weak supervision. The paper shows that the labeling functions of weak supervision may produce biased pseudo-labels, which is first empirically shown in the paper. Also, they theoretically show that even though a dataset has fair ground-truth labels, the weak supervision process using labeling functions may produce biased weak labels for the dataset. To mitigate this issue, the paper proposes an optimal transport-based algorithm that modifies the weak labels to be fairer. In experiments, the paper evaluates the proposed algorithm on tabular, NLP, and computer vision datasets. Strengths: S1. The paper considers an important research problem, the fairness issue in the weak supervision pipeline. It seems this paper is the first work to handle fairness in weak supervision. S2. The paper theoretically shows the labeling functions may produce unfair pseudo-labels even though the underlying label distribution is fair. S3. The proposed method can be used together with the existing fair in-processing algorithms, which is another good aspect of the paper. Weaknesses: W1. The explanations of the labeling functions themselves are limited. - For example, can the number of labeling functions affect the fairness performance? The original Snorkel paper [1] mentioned that the number of labeling functions highly affects labeling performance, but the current paper does not explain how such details may make a difference in the fairness scenario. - Also, as shown in the vision dataset experiments, using fairer labeling functions can reduce the effectiveness of the proposed algorithm. Then, why is the proposed transport algorithm better than making the labeling functions themselves fairer? Currently, the paper seems to use very simple labeling functions (described in the appendix), and it would be helpful if the paper could provide any comparison between the proposed algorithm and another possible direction of making fairer labeling functions. 
[1] Ratner et al., Snorkel: Rapid Training Data Creation with Weak Supervision, VLDB’18 W2. Experiments show some questionable results. - As several scenarios show large F1 score drops, it is unclear whether it is okay to use the proposed algorithm. I understand the F1 score can be affected by the imbalanced label classes, but the F1 score degradation is still severe. For example, SBM (OT-S) + LIFT case in the Bank Marketing dataset and all SBM results in the Civil dataset show large F1 score drops. It would be helpful if the paper could provide at least some possible way to prevent such F1 score drops. - In the experiments, no error range is provided, which makes the empirical results less convincing. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Although the paper provides several meaningful discussions and insights, there are some remaining concerns, especially regarding the labeling function and empirical results. The details are in the above weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The paper did not discuss the limitations and potential negative social impact of the work. A possible limitation of this work can be other prominent fairness definitions that this work cannot handle. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their kind words, constructive feedback, and useful suggestions. * **On the number of labeling functions**: We discussed the relationship between the number of LFs, fairness, and performance in our common response. The result shows that our method can resolve unfairness induced by weak supervision and improve performance, while existing weak supervision baselines lose fairness as they gain performance by increasing the number of LFs. * **On improving individual LF fairness**: Indeed, if we could write fair labeling functions directly, the resulting datasets would be fair as well. Unfortunately, this is extremely difficult. Most labeling functions are small programs written by domain experts to express heuristic ideas. Ensuring that these programs are themselves fair requires substantial research progress in other fields. Similarly, other sources for labeling functions are pre-existing external knowledge bases, crowdworkers, pretrained models, and more. Here as well, ensuring fairness directly would be very hard. As a concrete example, labeling functions in our vision dataset experiments are generated by models pretrained on other datasets. In such situations, where we do not have access to the original dataset, it may not even be possible to directly modify LFs to be fair. Instead, **our method provides a generic and cheap approach to deal with the bias of label sources in weak supervision.** Another idea is to simply demand that each labeling function used is created to be fair in the first place, throwing away any type of source of LFs that may not satisfy this property (including heuristics, pretrained models, etc.) Doing so would ensure fairness as well---but removes all of the benefits of weak supervision. 
In fact, crafting only fair labeling functions that lead to a satisfactory dataset may be more difficult than hand-labeling in the first place, which is the pain point weak supervision was designed to solve. The **value of our approach is that it is best-of-all-worlds**: we can continue to omnivorously use weak sources of signal, while handling bias with the additional technique we proposed. * **On the F1 score in Bank Marketing and CivilComments datasets**: As mentioned, the Bank Marketing and CivilComments datasets have severe class imbalance (proportion of the positive class: 11.8% and 11.3%, respectively), which may lead to more imbalance in the transport step, increasing the odds of flipping noisy labels towards the dominant class. A simple remedy is performing balanced sampling. We can selectively use the target group data points so that their noisy labels have a balanced class ratio. The table below shows the result with balanced transport on the Bank Marketing data with LIFT. We observe that this simple idea can improve the F1 score while maintaining accuracy and fairness scores. In general, _any_ counterfactual mapping estimation method that considers class imbalance can be plugged in. 
| Cond | Acc | F1 | DP Gap | EO Gap |
| --- | --- | --- | --- | --- |
| FS | $0.912 \pm 0.000$ | $0.518 \pm 0.000$ | $0.128 \pm 0.000$ | $0.117 \pm 0.000$ |
| WS (Baseline) | $0.674 \pm 0.000$ | $0.258 \pm 0.000$ | $0.543 \pm 0.000$ | $0.450 \pm 0.000$ |
| SBM (w/o OT) + LIFT | $0.698 \pm 0.000$ | $0.255 \pm 0.000$ | $0.088 \pm 0.000$ | $0.137 \pm 0.000$ |
| SBM (OT-L) + LIFT | $0.892 \pm 0.015$ | $0.305 \pm 0.015$ | $0.104 \pm 0.001$ | $0.121 \pm 0.019$ |
| SBM (OT-S) + LIFT | $0.698 \pm 0.011$ | $0.080 \pm 0.006$ | $0.109 \pm 0.017$ | $0.072 \pm 0.014$ |
| SBM (OT-S) + LIFT + Balanced sampling | $0.827 \pm 0.002$ | $0.498 \pm 0.003$ | $0.133 \pm 0.002$ | $0.077 \pm 0.002$ |

* **On error ranges**: Our proposed method is fairly simple and the effect of randomness is very limited. In repeated experiments with different seeds, **we do not observe any significant deviations, but will provide error ranges from repeated runs in the updated draft.** We include several examples of these updated results in the following:
  * Synthetic data experiments (Section 5.2, Figure 4): In rebuttal supplementary A.2, we show 95% confidence intervals, which are from 10 repetitions with different seeds. Unsurprisingly, the deviation rapidly drops as $n$ increases.
  * Real data experiments (Section 5.1): We reported the mean of 5 repeated experiment results in Section 5.1. We will include error ranges in our updated manuscript. 
Examples include:

#### Adult dataset

| Cond | Acc | F1 | DP Gap | EO Gap |
| --- | --- | --- | --- | --- |
| FS | $0.824 \pm 0.000$ | $0.564 \pm 0.000$ | $0.216 \pm 0.000$ | $0.331 \pm 0.000$ |
| WS (Baseline) | $0.717 \pm 0.000$ | $0.587 \pm 0.000$ | $0.475 \pm 0.000$ | $0.325 \pm 0.000$ |
| SBM (w/o OT) | $0.720 \pm 0.000$ | $0.592 \pm 0.000$ | $0.439 \pm 0.000$ | $0.273 \pm 0.000$ |
| SBM (OT-L) | $0.560 \pm 0.000$ | $0.472 \pm 0.000$ | $0.893 \pm 0.000$ | $0.980 \pm 0.000$ |
| SBM (OT-S) | $0.723 \pm 0.003$ | $0.590 \pm 0.003$ | $0.429 \pm 0.010$ | $0.261 \pm 0.005$ |
| SBM (w/o OT) + LIFT | $0.704 \pm 0.000$ | $0.366 \pm 0.000$ | $0.032 \pm 0.000$ | $0.192 \pm 0.000$ |
| SBM (OT-L) + LIFT | $0.700 \pm 0.017$ | $0.520 \pm 0.005$ | $0.015 \pm 0.015$ | $0.138 \pm 0.020$ |
| SBM (OT-S) + LIFT | $0.782 \pm 0.043$ | $0.448 \pm 0.015$ | $0.000 \pm 0.000$ | $0.178 \pm 0.036$ |
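The balanced sampling remedy described in this response, selecting target-group points so their noisy labels have a balanced class ratio before estimating the transport, can be sketched as follows. The function and variable names are our illustrative choices, and any class-imbalance-aware mapping estimator could be substituted:

```python
import numpy as np

def balanced_subsample(X, y_noisy, seed=0):
    """Subsample so each noisy-label class is equally represented,
    before estimating the transport map on the subsampled points."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y_noisy, return_counts=True)
    n = counts.min()  # size of the smallest class
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y_noisy == c), size=n, replace=False)
        for c in classes
    ])
    return X[idx], y_noisy[idx]
```

With a heavily imbalanced noisy-label vector, the returned subset contains an equal number of points per class, reducing the odds that transport flips labels towards the dominant class.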
Rebuttal 1: Rebuttal: We thank all of the reviewers for their kind comments and feedback. Reviewers recognized the strengths of our paper, which we briefly reiterate before we dive into in-depth responses. * **This is the first study of fairness in weak supervision** (Reviewers wmSM, UYjh, GLUc, 6e6W). * Our approach offers **theoretical results** showing that (1) even when a dataset with ground-truth labels is fair, **a weakly-supervised counterpart can be arbitrarily biased** and (2) a finite-sample **recovery result for the correction algorithm** (Reviewers wmSM, UYjh, GLUc, 6e6W). * **Strong empirical results on synthetic and real datasets** (Reviewers UYjh, GLUc). * The proposed method is **compatible with the existing fair machine learning algorithms** (Reviewers wmSM, 6e6W). We address two common questions before proceeding to individual responses. * **On advantages and limitations of our formulation** (Reviewers UYjh, 6e6W): First, we highlight how **bias in weak supervision differs** from typical supervised learning settings. In supervised learning, datasets have inherent fairness violations and fairness-aware methods try to train a fair model from unfair data. Such methods typically use constrained optimization, which tries to fit the training data under a fairness constraint. This usually entails tradeoffs between fairness and performance. The type of unfairness we tackle is **different**. It is an artifact of the WS process (i.e., how the noisy labels are generated) and may not be inherent in datasets (Theorem 1 in our work characterizes a range of such scenarios). As a result, it is possible to **i) maintain the advantages of using WS (cheap labeling), ii) improve fairness, and iii) improve performance**. Our method uses a simple counterfactual mapping to seek to achieve all of these benefits simultaneously. One potential drawback appears to be that high-quality estimation of the counterfactual mapping may be difficult. 
Fortunately, scenarios where this is the case are not common in weak supervision, since the standard setting involves using large unlabeled input datasets. This allows for small estimation error in the counterfactual mapping. As a result, we do not observe this limitation in any of the practical settings of interest. Even when this limitation is operative (i.e., when using small unlabeled datasets), it is still possible to overcome it by applying recent techniques in optimal transport [1,2] that enable improving the estimation of the counterfactual mapping. This is done, for example, by exploiting a handful of matched _keypoints_. Such methods can be easily applied if practitioners have some intuition about group matchings (e.g. toxic comment matching in different languages)---which is often the case in weak supervision scenarios. * **On the relationship between the fairness and the number of labeling functions** (Reviewers UYjh, wmSM): Several reviewers hypothesize that diversifying label sources naturally resolves unfairness. Indeed, we had a similar belief early on in our study. Unfortunately, we ultimately observed the opposite scenario: diversification can increase unfairness in practice. To demonstrate this idea, we conducted experiments varying the number of labeling functions and their parameters. We include one such case for synthetic data, depicting it in the figure in A.1. It shows that **in standard WS, increasing the number of LFs increases bias** as the accuracy improves. Meanwhile, **our approach improves fairness _and_ performance** when increasing the number of labeling functions. [1] Gu, Xiang, et al. "Keypoint-guided optimal transport with applications in heterogeneous domain adaptation.", NeurIPS 2022. [2] Panda, Nishant, et al. "Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation." , arxiv 2023. Pdf: /pdf/09ec86f7759e54a792f1256566046c0ad510db30.pdf
NeurIPS_2023_submissions_huggingface
2023
Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Accept (poster)
Summary: The paper focuses on enabling federated learning (FL) among clients that are bounded by memory capacity. To this end, the work proposes successive layer training (SLT), a technique that sustains the memory usage below a given memory budget throughout the FL process. SLT partitions the model into three parts: *i)* a first part with frozen weights, *ii)* an intermediate part where all weights are trained, and *iii)* a final part (which can stop before the end of the architecture) in which only a fraction of the weights is trained, controlled by a parameter *s* which selects the number of channels to be trained. At training time, the FL process is broken into *N* stages. At each stage, the layers that are included in each of the three parts are changed. Progressively, the final part is moved towards the last layers of the network, until the whole model has been trained. The proposed scheme is applied to CNNs and compared with a number of baselines that employ nested submodels. Strengths: 1) The proposed SLT technique is interesting. Its parametrisation with respect to the indices of the three-part network partitioning and the number of configuration steps provides a high level of flexibility and can thus be adapted to various settings, *e.g.* different memory constraints and clients with significant heterogeneity in their memory capacity. 2) The paper is well written, the proposed method is clearly described and positioned within the existing FL approaches. 3) The evaluation includes appropriate baselines and adequately covers different setups. Weaknesses: 1) Although SLT seems to outperform the evaluated baselines, the potential reasons behind this performance gain are not discussed. For example, in the Heterogeneous Memory Constraints setup (Section 3.4), despite being the most realistic, the paper does not discuss why SLT performs better than HeteroFL and FjORD, *i.e.* the two strongest baselines for this setup. 
The *Heterogeneity results* paragraph describes the results of Table 3 in textual form, but does not attempt to explain why they are as they are. 2) The paper neither discusses the selection of parameter *N*, which determines the number of distinct configurations to be used during the FL process, nor experimentally evaluates its impact. Similarly, there is no investigation of how these configuration steps are distributed across the global rounds. This is briefly touched upon in the last paragraph of Section 3.2, but this is not sufficient. More thorough investigation is needed for SLT to be useful. The key question is how to train using SLT, *i.e.* how many configuration steps to use and how much time to spend using each configuration. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) Please specify the number of configuration steps *N* and the duration of each step for the presented experiments. 2) See the two points in Weaknesses. 3) As the work focuses only on memory constraints and not computational capabilities, it would be better to modify the title to reflect that, *e.g.* "Aggregating Capacity in FL through Successive Layer Training for Memory-Constrained Devices" instead of "Computationally Constrained". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper adequately covers its limitations and broader impact. All the main points are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the fair and constructive feedback on our manuscript. We will consider changing the paper's title. # Performance of HeteroFL and FjORD We think the performance of HeteroFL and FjORD (Section 3.4) is limited due to the same effect that is present in FedRolex and FD: the limited co-adaptation that is possible within a layer, and calculating gradients of a subset of the layer without considering the remaining layer weights that reside on the server model. We show in Appendix A that the more distinct subsets are used, the lower the accuracy gets, and that even with 4-5 subsets, the accuracy on CIFAR10 can drop from $80\%$ to $60\%$. HeteroFL and FjORD use in principle the same subset mechanism as FedRolex and FD. Specifically, to lower the resource requirements, constrained devices train a fixed smaller subset that is merged on the server with the remaining parameters. For example, in HeteroFL, increasing the number of heterogeneity levels from three ($[0.125, 0.25, 0.5]$) to four ($[0.125, 0.25, 0.5, 1.0]$) reduces the accuracy from $24.1$p.p. to $23.3$p.p., despite $\frac{1}{4}$ of devices training the full NN in the latter case. In SLT, however, we avoid having devices train different subsets of a layer. We ensure that each device (independent of its resources) trains the same head $F_H$, while allowing stronger devices to freeze fewer layers. # Number of steps $N$ for different NN models and memory constraints The number of steps $N$ depends on the choice of $K_F$ and $K_T$. For the full argument of why we select $K_T = K_F + 1$, please refer to the answer to reviewer tzDU. As a consequence of this design choice, only a single layer gets *filled up* per step. Therefore, in general, we require as many steps as there are layers in an NN. However, if the memory limit allows applying $s=1$ within a step, all remaining layers can be trained in full width, thus no further step is needed. 
Consequently, the number of steps also depends on the given memory constraint. The following table gives the number of steps for each NN architecture and the evaluated constraints.

| constraint $s$ | ResNet20 | ResNet32 | DenseNet40 |
|---|---|---|---|
| 0.66 | - | - | 10 |
| 0.5 | 8 | 14 | - |
| 0.33 | - | - | 15 |
| 0.25 | 12 | 26 | - |
| 0.125 | 16 | 14 | - |

# Distribution of $N$ steps to $R$ rounds

The following describes our methodology to distribute $N$ steps to $R$ rounds. The mapping of $N$ to $R$ for four of our experiments is visualized in **Figure R.1** in the rebuttal pdf. Generally, a layer that is trained should receive a sufficient amount of training to extract useful features for the downstream layers, but at the same time, it should not overfit in the current configuration. Hence, there is a tradeoff between too little and too much training within a step. We distribute all steps $N$ over the total rounds $R$ by calculating the share of parameters that are added to the training when transitioning from step $n$ to $n+1$. We calculate the number of trained parameters in the NN, based on $K_F$, $K_T$, and $s$, using $Q$, s.t. $Q(K_F, K_T, s) = \bigg( \sum_{k \in \\{k:K_F < k \leq K_T\\}} P_k M_k \bigg) + P_{K_T+1}\lfloor sM_{K_T+1}\rfloor + \sum_{k \in \\{k:K_T+1 < k \leq K\\}}\lfloor sP_k\rfloor \lfloor sM_k \rfloor$ Using $Q$, we calculate the number of rounds per step $R_n$: $R_n = R \frac{Q(K_F^{(n)},K_T^{(n)},s^{(n)})}{Q(0,0,1)}$ in case $n = 0$, $R_n = R \frac{Q(K_F^{(n)},K_T^{(n)},s^{(n)}) - Q(K_F^{(n-1)},K_T^{(n-1)},s^{(n-1)})}{Q(0,0,1)}$ else, and lastly, the switching points $r_n$ by using $r_n = R_n + \sum_{i=0}^{n-1} R_i$. We observe that distributing the rounds based on the number of added parameters produces robust results throughout all experiments with fast convergence and high accuracy. 
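The parameter-counting rule $Q$ and the parameter-proportional round allocation described in this rebuttal can be sketched in code. The layer-size lists `P`, `M` (1-indexed via `k - 1`) and the step tuples `(K_F, K_T, s)` are our illustrative framing of the formulas, not the authors' implementation:

```python
import math

def Q(P, M, K_F, K_T, s):
    """Trained-parameter count for a step: fully trained layers K_F < k <= K_T,
    a partially widened layer K_T + 1, and width-s slices of the remaining layers."""
    K = len(P)
    total = sum(P[k - 1] * M[k - 1] for k in range(K_F + 1, K_T + 1))
    if K_T + 1 <= K:
        total += P[K_T] * math.floor(s * M[K_T])
    total += sum(math.floor(s * P[k - 1]) * math.floor(s * M[k - 1])
                 for k in range(K_T + 2, K + 1))
    return total

def rounds_per_step(P, M, steps, R):
    """Give each step a share of the R total rounds proportional to the
    parameters it newly adds to training, normalized by the full model Q(0,0,1)."""
    full = Q(P, M, 0, 0, 1.0)
    qs = [Q(P, M, kf, kt, s) for kf, kt, s in steps]
    shares = [qs[0]] + [qs[i] - qs[i - 1] for i in range(1, len(qs))]
    return [R * sh / full for sh in shares]
```

For a toy 3-layer network with `P = M = [4, 4, 4]`, `Q(P, M, 0, 0, 1.0)` recovers the full parameter count, and later steps receive rounds in proportion to the parameters they add.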
We have also evaluated other empirical approaches, such as distributing the rounds equally among the steps, which resulted in slower convergence and lower final accuracy (e.g., $-3\text{p.p.}$ for CIFAR10). Additionally, we considered dynamic approaches that switch based on the test accuracy/loss on the server (a similar technique has been used by Kundu and Jaja [1]). However, such approaches require dataset-specific hyperparameter tuning, and our results show that they do not produce necessarily better results, as shown in **Figure R.4** in the rebuttal pdf.

[1] Amit Kumar Kundu and Joseph Jaja. "FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing." International Conference on Artificial Neural Networks. Cham: Springer Nature Switzerland, 2022.

---

Rebuttal Comment 1.1: Comment: Thanks for the precise answers. Please make sure to integrate the responses into the main part of the paper.
Summary: The paper proposes an FL training methodology to reduce the memory footprint for constrained devices. Specifically, the layers inside the original model are divided into three categories, i.e., frozen, fully trained, and partially trained. The frozen layers only participate in the forward pass and do not require storing activations or computing gradients for the backward pass. The partially trained layers train only a fraction of the parameters, thus reducing the memory overhead. The authors further provide heuristics for determining the configuration of layers within these three categories throughout training and the number of training steps allocated to each configuration.

Strengths:
- The paper is well-written, easy to follow, and the ideas are clearly presented.
- The experiments investigate the method from different aspects, like iid versus non-iid data distribution as well as iid versus non-iid compute devices, and show benefits over several baselines.

Weaknesses:
- While the proposed method works well on the evaluated benchmarks, it appears to be a straightforward combination of existing ideas (using an $s$ ratio of parameters to partially train layers + progressive freezing). As such, the reviewer finds the originality of the ideas rather limited.
- Can the authors provide some intuition or theoretical justification for acceptable bounds on $s$ and the general configuration of $F_F$, $F_T$, and $F_H$? Specifically, what is the bound on these design hyperparameters beyond which convergence is impeded?
- There seems to be a large gap between the performance of prior work (FedRolex) and the performance presented in the original paper. Can the authors clarify any difference in the experimental setup that may have led to this?
- It would be beneficial to present the corresponding training memory footprint (activations + parameters + gradients) for each benchmark, to better clarify which constraints each benchmarked $s_{fd}$ ratio corresponds to and to see, e.g., how aggressive the 0.125 ratio is.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss the potential negative societal impact of the work in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the fair and constructive feedback on our manuscript.

# SLT design choices ($K_T, K_F, s$)

Our general choice of setting $K_T = K_F + 1$ while maximizing $s$ throughout all configurations $N$ is motivated by preliminary ablation studies we conducted (please refer to **Figure R.3** in the rebuttal pdf). Specifically, we investigated whether, given a specific memory budget, $K_T$ or $s$ should be maximized, as increasing either increases the memory consumption in training. We evaluated the accuracy in a centralized training setup (no freezing applied, i.e., $K_F=0$). The results suggest that maximizing $s$ in favor of $K_T$ clearly results in higher accuracies; therefore, we only *fill up* and train a single layer per step (i.e., the minimal possible number of layers). $F_H$ is given by the design choice of $K_T$.

# SLT memory limit / minimal $s$

The memory reduction limit of SLT is in general affected by two things:
* Firstly, the limit is constrained by how much memory it requires to *fill up* and train a single layer of the NN (relative to full training). Note that the memory cost of training a single layer, compared to training the full NN, decreases as the NN has more layers.
* Secondly, it is influenced by how low $s$ can be set. Note that the subset $w$ of a layer of the head is defined by $s$ ($w \in \mathbb R^{\lfloor sP \rfloor \times \lfloor sM \rfloor}$). Hence, the minimum of $s$ is determined by $P$ and $M$, since at least a single filter per layer has to be trained. Consequently, if a layer only has 6 output filters, the minimum applicable $s$ is $\frac{1}{6}$.

SLT allows for memory reductions down to $0.125\times$ for ResNet20, $0.09\times$ for ResNet32, and $0.08\times$ for ResNet56.
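The lower bound on $s$ from the second point can be computed directly. `min_width_factor` is a hypothetical helper name used only for illustration:

```python
def min_width_factor(widths):
    """Smallest width factor s that keeps floor(s * M) >= 1 for every
    layer output width M, i.e. at least one trainable filter per layer."""
    return 1.0 / min(widths)
```

For example, a network whose narrowest layer has 6 output filters bounds $s$ from below at $\frac{1}{6}$, regardless of how wide the other layers are.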
# Accuracy difference between our implementation of FedRolex and the FedRolex paper

We are fairly confident that our implementation of FedRolex reflects what is described in the original paper, since we used the paper and the authors' public source code for the implementation. We additionally ran our implementation against theirs and confirmed that throughout the rounds the same subset of parameters gets modified on the FL device. We think the differences from the FedRolex paper (Figure 4) are caused by the following aspects:
* In FedRolex, the authors use a significantly larger NN model to evaluate CIFAR10 (generic ResNet18, $11$M parameters), while we use the version specifically adapted for CIFAR10 (ResNet20, $0.27$M parameters).
* FedRolex scales down the NN with $\gamma=\{2,4,8,16\}$, where $\gamma$ refers to the **size** (parameter) reduction of the global model, while we evaluate the reduction of memory, which is mainly dominated by the activations. As a consequence, when scaling down the NN with $s$, the size of the NN reduces quadratically, while memory only reduces linearly. To demonstrate this, we put $\gamma$ and the memory scaling into a table, comparing the accuracy results from Figure 3 of the FedRolex paper and our results from Figure 1.

| $\gamma$ | $1$ | $2$ | $4$ | $8$ | $16$ |
|---|---|---|---|---|---|
| memory reduction | $1$ | $1.41$ | $2$ | $2.82$ | $4$ |
| FedRolex (paper, Figure 4) | $84.5\\%$ | $68\\%$ | $58\\%$ | $58\\%$ | $58\\%$ |
| FedRolex (ours, Figure 1) | $87.6\\%$ | - | $80\\%$ | - | $61\\%$ |

In our experiment (Figure 1), we evaluate with IID data, while FedRolex evaluates "low heterogeneity" of data. The remaining differences are the number of rounds (2000 vs. 2500), the learning rate, and the learning rate decay schedules. We use a sinusoidal decay, while FedRolex uses fixed steps to lower the learning rate. When comparing full training (i.e., $1\times$), this already creates an accuracy improvement of ~$3$ p.p. for FedRolex.
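The quadratic-vs-linear relationship can be made explicit with a short sketch, under the simplifying assumption that every layer is scaled by the same width factor:

```python
import math

def memory_reduction(gamma):
    """Activation-memory reduction implied by a gamma-fold SIZE reduction:
    a width factor s shrinks both in- and out-channels, so parameter count
    scales ~ s^2 while activation maps shrink only ~ s. A gamma-fold size
    reduction therefore yields only a sqrt(gamma)-fold memory reduction."""
    return math.sqrt(gamma)
```

This reproduces the "memory reduction" row of the table above: $\gamma=2$ gives $\approx 1.41\times$ and $\gamma=16$ gives $4\times$.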
So in practice, our setup allows for (slightly) higher final accuracies. Apart from these, the setup is fairly similar, using 100 devices with 10 active per round. We therefore think the differences in accuracy mostly come down to the choice of the NN model and the distribution of data.

# Memory footprint of models and experiments

Generally, the maximum amount of memory for training is dominated by the activations. When scaling down the number of filters using $s$, the activation maps reduce linearly, while the number of parameters and the respective gradients reduce quadratically. The following tables show the maximum memory consumption during training and the respective contributions of activations, weights, and gradients, for end-to-end training of the full NN and for the scaled-down models (small model).

| **Full NN training** | ResNet20 | ResNet32 | DenseNet40 |
|---|---|---|---|
| activations (MB) | $96.0$ | $853.8$ | $408.7$ |
| weights (MB) | $1.2$ | $2.8$ | $0.7$ |
| gradients (MB) | $1.2$ | $2.8$ | $0.7$ |

| **Reduction factor $s=0.125$** | ResNet20 | ResNet32 | DenseNet40 |
|---|---|---|---|
| activations (MB) | $12.0$ | $106.7$ | $50.3$ |
| weights (MB) | $0.02$ | $0.05$ | $0.01$ |
| gradients (MB) | $0.02$ | $0.05$ | $0.01$ |

---

Rebuttal Comment 1.1: Title: Post-rebuttal feedback Comment: Thank you for your rebuttal. I have adjusted my score accordingly.
Summary: This paper introduces an approach known as Successive Layer Training (SLT) for federated learning (FL) on edge devices with limited resources. The core concept of SLT involves training a portion of the neural network (NN) parameters on the devices while keeping the remaining parameters fixed. Subsequently, the training process progressively expands to include larger subsets until all parameters have been trained. The authors assert that SLT offers several advantages, including reduced memory requirements for training, enhanced accuracy and convergence speed in FL, and compatibility with heterogeneous devices that possess varying memory constraints. To validate their claims, the authors conduct evaluations of SLT on multiple datasets and NN architectures. Additionally, they compare SLT against state-of-the-art techniques such as Federated Dropout (FD), FedRolex, HeteroFL, and FjORD.

Strengths:
1. This paper is well-motivated. Memory footprint is a key constraint in training models on edge devices. The proposed approach allows for co-adaptation between parameters and reduces the memory footprint.
2. The paper is well-written, offering clarity and organization.
3. This paper conducts a comprehensive experimental analysis on four datasets (CIFAR10, CIFAR100, FEMNIST, and TinyImageNet) and three NN architectures (ResNet20, ResNet32, and DenseNet40). Comparisons are made with several baselines, and the results demonstrate that SLT achieves higher accuracy and faster convergence in both independent and identically distributed (iid) and non-iid settings. Additionally, the performance of SLT is investigated in heterogeneous environments, where devices have different memory constraints.

Weaknesses: The paper primarily focuses on addressing memory constraints in federated learning (FL) by introducing the Successive Layer Training (SLT) technique.
However, it does not thoroughly consider other types of resource constraints, such as computation or communication, which are also crucial aspects of FL on devices. The paper does not provide insights into how SLT impacts computation time or communication costs in the FL process. It is important to acknowledge that SLT may introduce certain overheads or inefficiencies in these areas, potentially influencing its practical applicability and overall performance. Further investigation and analysis of the computational and communication implications of SLT would provide a more comprehensive understanding of its practicality and effectiveness in resource-constrained scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the evaluation, the used model sizes are relatively large and can be over-parameterized for the dataset. As a result, the method Small Model, which is still sufficient for the task, can achieve better performance than the SOTA methods. If the model size goes smaller (i.e., LeNet with 1x, 2x, or 4x channels), do the proposed method and the method Small Model still outperform the SOTA baselines? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the fair and constructive feedback on our manuscript.

# Model capacity

## Model capacity in conducted experiments

The observation is correct that in some experiments the NN architecture is over-parametrized for the given dataset. We observe this especially for the FEMNIST dataset from the Leaf benchmark. Here, using a memory constraint of $0.5\times$ only reduces the accuracy from $87.6\\%$ to $86.9\\%$. However, we observe that if the dataset is more complex and capacity is not sufficient, SLT shows larger gains over a small model (e.g., when using TinyImageNet and ResNet32).

## Effectiveness of SLT and baselines with small models

In general, we observe that SLT can reduce the memory requirement (relative to the full NN) more effectively if the NN has more layers, as the minimum memory SLT requires for training is mainly dominated by the memory cost of *filling up* a single layer. Hence, when having only a few layers, as is the case with LeNet, SLT is not as effective w.r.t. memory reduction, and we do not expect SLT to perform much better than a small model. Additionally, we confirmed that the trend of the gap between a small model and FedRolex still exists, even when the capacity is considerably lower. We evaluated a minimal ResNet structure with only 9 layers and scaled the memory down to $0.125\times$. We observe that the small model reaches an accuracy of $57.7\\%$, while FedRolex only reaches $25.6\\%$.

# Communication and computation in SLT

We differentiate between memory constraints and constraints on communication and computation (FLOPs), as the latter are less restrictive constraints (e.g., referred to as soft constraints in [1]). Specifically, insufficient available memory excludes a device from the training (as is done with Google GBoard training [2]), while constraints on communication and FLOPs can slow down the convergence (e.g., w.r.t. time or energy).
Hence, when dealing with communication and FLOPs resources, one should opt for using them efficiently.

### Communication efficiency of SLT

We want to point out that we already consider communication efficiency in our evaluation (Figure 4). We show (Figure 4: the x-axis is the volume of data uploaded to the server, and hence the communication overhead) that we require significantly less communication (GBytes) to reach the same accuracy as the state of the art. Also, with respect to the communication overhead, we converge as fast as a small model but enable higher final accuracies. This is in part due to the freezing of layers: the parameters of frozen layers do not change and hence do not need to be uploaded to the server, increasing the communication efficiency.

### Computation (FLOPs) efficiency of SLT

We performed a similar evaluation to show the FLOPs overhead in federated training for SLT and the related baselines. We observe similar trends: 1) we require significantly fewer FLOPs to reach a certain level of accuracy, compared with the state of the art; and 2) the number of FLOPs required by SLT to reach a certain accuracy is similar to the small model, but SLT enables higher final accuracies. The results are shown in **Figure R.2** in the rebuttal pdf. We would add these results to a revised version of the manuscript.

[1] K. Pfeiffer, M. Rapp, R. Khalili, J. Henkel, "Federated Learning for Computationally-Constrained Heterogeneous Devices: A Survey", ACM Computing Surveys, Volume 55, Issue 14, 2023.

[2] Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. Applied federated learning: Improving Google keyboard query suggestions. arXiv preprint arXiv:1812.02903, 2018.
Summary: In this paper, the authors study the case where the edge device is not capable of holding the training of the whole network due to limited memory. The authors find that it is necessary to train a whole network rather than train a submodule as previous methods do. Motivated by this observation, the authors propose to first scale down the intermediate size of the network and then expand it as its preceding layers become fully trained. The authors provide experimental results, and those results successfully prove that SLT outperforms other algorithms in this scenario.

Strengths: The observation that it is necessary to train a full network, rather than a submodule, sounds reasonable to me. Moreover, the SLT method proposed looks new to me. This idea could be helpful for future studies in memory-constrained FL training.

Weaknesses: The paper is not clearly written in some parts, such as Section 2.2. My major concerns come from two parts:
1) Why can SLT achieve almost the same result w.r.t. the ''small model'' method when the edge device can accommodate the model, as shown in Figure 4? To me, the shallower layers are only trained while the deeper layers are partially forwarded in the early stage, and they are kept frozen when training the deeper layers. This means the shallower layers are not fully trained with the correct gradient that comes from the whole model, which means SLT is not equivalent to the full training algorithm and should get worse performance than the ''small model'' method. Therefore, I doubt why it can get a matched accuracy as shown in Figure 4.
2) Actually, the footprint of activations can be alleviated by checkpointing or offloading; please compare SLT with these methods.

Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to my questions in the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Authors have clarified their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the fair and constructive feedback on our manuscript.

# SLT's improvements over a "small model" baseline

The observation is correct that parts of the model are never trained end-to-end in SLT. However, SLT pretrains a subset of all layers end-to-end in step $0$ (equal to an affordable small model), where no layer is frozen ($K_T=0, K_F=0$). After this pretraining phase, gradually, step by step, a layer is *filled up* while shallower layers get frozen. This way, shallower layers are trained to still produce useful features for the downstream head. Thereby, SLT reaches higher accuracies than a small model that has the same memory footprint in training. This is especially the case if the small model does not have sufficient capacity for the given dataset; SLT allows training models with larger capacities. Despite the error introduced by our training scheme not being end-to-end, the gain in capacity outweighs this error, as can be seen in Figure 4 and Table 1. At the same time, SLT is not able to reach the accuracy of the full NN, and hence cannot compete against an end-to-end trained full NN.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have adjusted my score accordingly.
Rebuttal 1: Rebuttal: We attached a pdf with four figures (R.1, R.2, R.3, R.4) to support our answers to the reviewers' questions. Pdf: /pdf/002383a2b13315dd71d11ced6e185a68f2fa7976.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposed a new training method for neural networks in the federated setting, called successive layer training. This method tries to train the NN layer by layer, by freezing the trained layers and scaling down the NN head. The authors present this method clearly and provide detailed experiments to show its performance.

Strengths: The experimental results show good performance when compared with other algorithms for this memory-constrained setting. It is exciting to see that such an (approximate) layer-by-layer training strategy can have acceptable accuracy. The configuration for this setting is also flexible when adapting to environments with different resource constraints.

Weaknesses: The method itself is somewhat straightforward, and the paper gives no explanation of why this strategy can work. Meanwhile, the experiments were only done on some toy neural networks and thus were not very representative. At least testing ResNet50 on ImageNet is necessary.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: Although this strategy is proposed under the background of FL, it is a general approach suitable for any distributed environment. So, is it effective to apply the method in standard distributed training, where communication is the bottleneck? Meanwhile, is it possible to apply this strategy to attain the accuracy of full training? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some suggestions are as follows:
1. Give some explanation for this method
2. Do adequate experiments on standard nets and datasets
3.
Study this training strategy as a general approach first and then apply it to FL.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the fair and constructive feedback on our manuscript.

# Scope of the work

Our work is motivated by the observation that the current state of the art in memory-aware FL through scaling of the NN does not work well. We think that the bad performance is caused by the fact that gradients of the subset of a layer are calculated (for a local epoch) without consideration of the parameters that reside on the full server NN (as also discussed in Appendix A). We think this problem is tightly coupled to the federated averaging mechanism which is used in federated learning. Hence, as the state of the art does, we target federated learning environments. We show in Figure 4 in the paper that SLT requires significantly less communication to reach the same level of accuracy as the state of the art, so SLT could also be effective in distributed training settings where the communication is the bottleneck. However, this should be studied in more detail and is thus delegated to future work.

# ResNet50/ImageNet

We think that currently, for devices in federated learning such as smartphones, IoT devices, and sensors, it is not feasible to train large models like ResNet50 on ImageNet (full image resolution). We want to add that we also lack the capabilities hardware-wise to simulate ResNet50/ImageNet in a federated environment, as a simulation of FL takes considerably more time than centralized training. Through our study, we observed that SLT benefits from two factors: a) a larger capacity gap between a small model ($0.125\times$) and the full model ($1.0\times$); and b) deeper NNs, as the memory cost of *filling up* a single layer reduces, thus allowing for larger values of $s$ throughout the training. We believe that this effect would also transfer to larger NNs and more complex problems like ResNet50 and ImageNet.
To provide further evidence for that, we evaluated a downscaled version of ImageNet (64 by 64 pixels) with ResNet56 (one of the cheaper CIFAR10-adapted ResNet models), where with 500 devices SLT reaches $22.0\\%$ Top-1 accuracy, while the small model baseline only reaches $8.2\\%$ Top-1. It can be seen that, compared to TinyImageNet, this is already a larger gap. We are confident that FedRolex and FjORD would perform worse than a small model, as is the case in all other experiments.

---

Rebuttal 2: Title: Does the proposed approach admit theoretical guarantees in convergence and accuracy Comment: Wondering if the proposed approach admits theoretical guarantees in convergence and accuracy. It looks like an application of coordinate descent under the stochastic setting.

---

Rebuttal 3: Title: Reply to AC (1KKn) comment Comment: We want to thank the area chair for this comment and the interesting observation. We agree that parts of our technique can be interpreted as (block) coordinate descent, as we freeze layers (i.e., fixing these variables) while optimizing the remaining ones. However, we also see some differences to coordinate descent: For the sake of simplicity, we ignore the stochastic part. Also, let $w$ be the set of full parameters, $w_x$, $w_y$ be subsets of $w$, $l(w)$ be the loss w.r.t. $w$, and $g_w$ be the gradients w.r.t. $w$. In (block) coordinate descent (in a cyclic step), a gradient for a subset of parameters $g_{w_x}$ is calculated based on the loss surface of all parameters $l(w)$. This is different from our approach: in SLT, we first calculate gradients for a subset of the parameters ($g_{w_x}$) based on the loss surface of the subset, $l(w_x)$. We then freeze parameters (i.e., early layers) while adding the remaining parameters to the head, followed by essentially coordinate descent, i.e., we calculate and apply $g_{w_y}$ based on $l(w_x \cup w_y)$.
Throughout SLT's training steps, the process of adding parameters and freezing layers is repeated. In essence: in coordinate descent, the optimization problem stays the same throughout the training, while in our case the loss surface $l(\cdot)$ changes as more parameters are added. We therefore think that existing convergence guarantees for coordinate descent cannot be applied to our technique. Throughout our empirical evaluation, we also made the following observations w.r.t. accuracy:
* SLT reaches higher accuracy than FedRolex and FD, which both, in order to eventually train all parameters $w$, essentially cycle between calculating and applying gradients $g_{w_x}$ based on $l(w_x)$ and calculating and applying $g_{w_y}$ based on $l(w_y)$, hence changing the loss surface constantly. In SLT, however, it is ensured that when calculating $g_{w_y}$, the loss surface of all *already trained* parameters, $l(w_x \cup w_y)$, is considered.
* FedRolex and FD perform worse than only applying $g_{w_x}$ based on $l(w_x)$ throughout the training, as is the case with a small model baseline.
* Given a constraint on memory, SLT cannot reach the same accuracy as end-to-end training without any constraints.
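A toy sketch of this two-phase schedule, with a hypothetical separable loss $l(x,y) = x^2 + (y-1)^2$ (not from the paper): the subset variable $x$ is first minimized on its reduced loss $l(x)=x^2$ alone, then frozen while $y$ is added and optimized on the enlarged surface. For this separable toy, both phases happen to reach the joint optimum; in an NN, the subset loss surface generally differs from the restriction of the full loss, which is the source of the error discussed above.

```python
def grad_step(v, grad, lr=0.1):
    """One plain gradient-descent step on a scalar variable."""
    return v - lr * grad(v)

def slt_style_training(steps=200):
    x = 1.0
    for _ in range(steps):                    # phase 0: subset-only loss l(x) = x^2
        x = grad_step(x, lambda x: 2 * x)
    y = 0.0                                   # phase 1: x frozen, y added,
    for _ in range(steps):                    # enlarged loss l(x, y) = x^2 + (y - 1)^2
        y = grad_step(y, lambda y: 2 * (y - 1))
    return x, y
```

Block coordinate descent would instead alternate between the two variables while always evaluating the full loss $l(x, y)$, i.e., the optimization problem never changes.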
FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
Accept (poster)
Summary: This paper presents FedGame, a game-theory-based dynamic defense mechanism against backdoor attacks in Federated Learning (FL). FedGame adapts to attackers using a minimax game model. The authors theoretically prove that a model trained with FedGame is close to one trained without attacks, implying robustness against malicious disruptions. Empirical evaluations on benchmark datasets compare FedGame with six existing defenses and show that it reduces the attack success rate on MNIST and CIFAR10.

Strengths:
1. Backdoor defense in federated learning is an important problem.
2. The paper provides theoretical analysis of the attack and defense as a two-player game.

Weaknesses:
1. The authors build an auxiliary global model to reverse engineer the trigger, as mentioned in Sec 4.2, where the key challenge in solving Equation 5 is to construct an auxiliary global model. Do the authors make any assumptions on the auxiliary global model $\theta^{t}_{a}$ compared to the global model? If so, is there any analysis of the auxiliary global model in the minimax game framework? It is unclear how the assumption is made, and it would be beneficial to understand the implications if the assumed conditions do not hold true.
2. The determination of the 'genuine score', $p_{i}^{t}$, is another part that needs clarification. The paper does not provide theoretical estimations or robust definitions for such an important score metric for the defense, which raises questions about potential inaccuracies in the estimation and their impact on the defense framework.
3. The evaluation methodology could be further enhanced. If the genuine score is an empirical number, it would be useful to have it studied in the evaluation section. Moreover, comparisons with state-of-the-art baselines like FLIP [52] would provide a more comprehensive analysis of the proposed solution's performance.

Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments.

**C1: Any assumption on the auxiliary global model and global model.** Thanks for the insightful question. Our auxiliary global model is obtained by updating the global model of the previous FL round with the average local model updates of the current FL round. Our assumption is that the auxiliary global model obtained in this way would be backdoored if a local model from a compromised client in the current FL round is backdoored, enabling us to reverse engineer the trigger to protect the current global model. In other words, if the auxiliary global model is not backdoored, our current global model would be very similar to the auxiliary global model and thus is also not backdoored. We will add related discussion and analysis in our revision.

**C2: Potential inaccuracies in the estimation of the genuine score.** We note that our framework does not require a very accurate estimation of the genuine scores for clients, as long as the genuine scores are different for malicious clients (e.g., small genuine scores) and benign clients (e.g., large genuine scores). In Figure 2 of the Appendix, we show that the average genuine score for compromised clients is much smaller than that of benign clients, which validates the feasibility of the optimization problem. We will add related discussion in our revision.

**C3: Genuine score should be studied in the evaluation section.** Thanks for the suggestion. In Figure 2 of the Appendix, we show the average genuine scores for compromised clients and benign clients. We will make this clearer in our revision.

**C4: Comparison with baselines like FLIP.** Thanks for the suggestion. Following it, we compare our FedGame with FLIP under the default setting. We find that FLIP can only reduce the ASR to 65.62\%, while our approach can reduce it to 9.72\%. The reason is that their method can only tolerate a small fraction of compromised clients.
In our default setting, the fraction of malicious clients is 60\%. We will add the related discussion and results in our revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have raised my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the insightful suggestions and positive feedback on our response!
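The mechanics described in C1 and C2 could be sketched as follows; the flat-list model representation and the genuine-score-weighted averaging rule are illustrative assumptions for this sketch, not the paper's exact aggregation:

```python
def aggregate(global_prev, local_updates, genuine_scores):
    """One FL round, sketched: build the auxiliary model from the plain
    average of local updates (the model used to reverse engineer the
    trigger), then form the actual global update with each client's
    contribution weighted by its genuine score."""
    # auxiliary model: previous global model + unweighted average update
    avg = [sum(us) / len(us) for us in zip(*local_updates)]
    auxiliary = [w + d for w, d in zip(global_prev, avg)]
    # genuine-score-weighted update of the actual global model
    total = sum(genuine_scores) or 1.0  # guard against all-zero scores
    weighted = [
        sum(p * u for p, u in zip(genuine_scores, us)) / total
        for us in zip(*local_updates)
    ]
    global_new = [w + d for w, d in zip(global_prev, weighted)]
    return auxiliary, global_new
```

With two clients whose updates are `[1.0, 1.0]` and `[3.0, 3.0]` and genuine scores `1.0` and `0.0`, the auxiliary model absorbs both updates equally, while the second (suspicious) client is excluded from the actual global update.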
Summary: Existing defenses consider a static attack model in which an attacker sticks to a fixed strategy and does not adapt its attack strategies, which makes them less effective under adaptive attacks. Hence, the authors propose FedGame to bridge this gap. The main idea of FedGame is to model single- or multi-stage strategic interactions between the defender in FL and dynamic attackers as a min-max game. It mainly consists of three steps: 1) building an auxiliary global model; 2) exploiting it to reverse engineer a backdoor trigger and target class; 3) testing whether the local model will predict an input embedded with the reverse-engineered trigger as the target class, to compute a genuine score. Strengths: 1. The idea of modeling strategic interactions between the defender and attackers as a min-max game is novel and less studied in previous works. The proposed FedGame seems to be competitive with related methods. 2. The paper is well-organized in general, well-written, and mostly easy to follow. 3. This paper addresses a pertinent problem in federated learning. The extensive experiments show that the FedGame method works even if more than half of the clients are malicious. Weaknesses: 1. The authors use the genuine score $p_i^t$ to quantify the extent to which a client is benign; however, the value of $p_i^t$ depends heavily on the quality of the reversed trigger generated by NC or other tools, implying that the limitations of FedGame are the same as those of NC, e.g., it may be invalid for reversing large-size triggers (such as Blend, Reflection, …) and for all-to-all attacks, etc. 2. FedGame seems to have high time complexity. On the server side, the defender needs to run the NC process to generate the reversed trigger (when there are many categories, the time consumption of NC is enormous). On the client side, the attacker needs to find an optimal $r_i^t$ using grid search (every search for $r_i^t$ requires retraining the local model).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper points out that existing defenses become less effective against dynamic attackers who strategically adapt their attack strategies. However, the definition of dynamic attacks is unclear. Further, the dynamic attack simulated in this paper seems to involve only changes to the poisoning rate $r_i^t$. Does this mean that the proposed method has certain limitations? There are many other factors that may affect the attack performance, such as the selection of poisoned neurons [1] and the design of the loss function [2]. 2. In practice, the defender cannot manipulate the attacker's malicious behavior. Hence, it seems the defense method proposed in this paper only needs to achieve the goal of $\min (\sum_{i \in S_a} p_i + \sum_{j \in S-S_a} p_j)$. If so, FedGame is more likely a combination of the NC method with limited impact from the min-max game. If not, does it mean that FedGame needs benign clients to act as attackers to achieve better defense effects? Would FedGame be invalid if the targets of the real attackers and the fake attackers are different? 3. In Section 4.3, when searching for an optimal $r_i^t$, the goal of attackers is $\max (p_i^t+\lambda r_i^t)$. However, increasing $p_i^t$ and increasing $r_i^t$ are contradictory, because a large $p_i^t$ means a small ASR, while a large $r_i^t$ means a high ASR. It seems that the attack constructed in this paper is not competitive enough, which raises some doubts about the necessity of the attack stage. [1]. PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. [2]. Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see the questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. **C1: Large-size trigger.** We note that our framework is very general and is compatible with any trigger reverse engineering method such as NC and FeatureRE. For instance, FeatureRE [42] (published in NeurIPS’22) shows that it is effective for large-size triggers (even for triggers that occupy the entire image), which addresses the limitations of NC. We will make this clearer and add related discussion in our revision. **C2: Time complexity.** We agree with the reviewer that our defense incurs extra computation costs for the server. However, we note that the server (e.g., servers in the FL applications deployed by Google and Apple) is usually very powerful in practice. Moreover, the reverse-engineered triggers could be used for all clients at once. As we propose a defense framework, we consider a strong attacker, where a compromised client could use grid search to find an optimal $r_i^t$ to maximize the attack's effectiveness. The attack would be less effective if the compromised client cannot optimize $r_i^t$. We will add the related discussion. **C3: The definition of dynamic attacks is unclear.** Sorry for the confusion. In our dynamic attack, we consider an attacker who could optimize both the poisoning rate and the trigger pattern. In Appendix D.6, our experimental results show our defense is still effective when an attacker can optimize the entire trigger pattern. We will clarify it in our revision. Moreover, we will discuss other factors such as the selection of poisoned neurons [1] and the design of the loss function [2], as suggested. In addition, following the suggestions, we have added an experiment that uses the loss function in [2]. We observe that our method can reduce the ASR from 98.63\% to 11.32\%, indicating the effectiveness of our method.
**C4: Defense goal.** Yes, the goal of the defender is to minimize (or maximize) the genuine scores for compromised (or benign) clients, i.e., $\min (\sum_{i \in S_a} p_i - \sum_{j \in S - S_a} p_j)$. Since benign clients do not exhibit attack behavior, the genuine scores obtained by our framework for benign clients would be high, enabling us to distinguish benign and compromised clients. Note that our major contribution is to propose a general game-theory-based framework, which is compatible with any trigger reverse engineering method such as NC and FeatureRE. Our results show our framework is also effective with FeatureRE. We will add the related discussion. **C5: Optimization of $r_i^t$ and $p_i^t$.** We note that the local model of a compromised client is more likely to predict a trigger-embedded input as the target class when the poisoning rate $r_i^t$ is larger. As a result, the genuine score computed by the server for the local model of the compromised client would be smaller, making the attack less effective. In other words, a larger $r_i^t$ makes the attack more effective without defense but less stealthy (e.g., the genuine score is smaller under our defense framework), which in turn makes the attack less effective under defense; this tension is the principle our defense builds on. Thus, we aim to find an optimal $r_i^t$ that trades off the poisoning rate against the genuine score to maximize the attack's effectiveness under our defense. We will add related discussion to make this clearer in our revision.
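The attacker-side grid search over the poisoning rate discussed in C5 can be illustrated with a small sketch. The quadratic stand-in for the genuine score, the function names, and all numbers here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def attacker_best_rate(genuine_score_of, lam, grid):
    """Grid search for the poisoning rate r maximizing
    genuine_score(r) + lam * r, mirroring the attacker
    objective max(p_i^t + lambda * r_i^t)."""
    best_r, best_val = None, -np.inf
    for r in grid:
        val = genuine_score_of(r) + lam * r
        if val > best_val:
            best_r, best_val = r, val
    return best_r

# toy stand-in: genuine score decays quadratically with the rate
p = lambda r: 1.0 - r ** 2
r_star = attacker_best_rate(p, lam=0.5, grid=np.linspace(0.0, 1.0, 101))
# maximizing 1 - r^2 + 0.5 * r over the grid gives r_star = 0.25
```

The interior optimum captures the tradeoff described above: a higher rate strengthens the backdoor but depresses the genuine score, so the best strategic attacker picks an intermediate rate.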
Summary: Federated learning (FL) enables a distributed training paradigm, where multiple clients can jointly train a global model without needing to share their local data. However, recent studies have shown that federated learning provides an additional surface for backdoor attacks. For instance, an attacker can compromise a subset of clients and thus corrupt the global model to mispredict an input with a backdoor trigger as the adversarial target. Existing defenses for federated learning against backdoor attacks usually detect and exclude the corrupted information from the compromised clients based on a static attacker model. Such defenses, however, are not adequate against dynamic attackers who strategically adapt their attack strategies. To bridge this gap in defense, we model single or multi-stage strategic interactions between the defender in FL and dynamic attackers as a minimax game. Based on the analysis of our model, we design an interactive defense mechanism FedGame. We also prove that under mild assumptions, the global FL model trained with FedGame under backdoor attacks is close to that trained without attacks. Empirically, we perform extensive evaluations on benchmark datasets and compare FedGame with multiple state-of-the-art baselines. Our experimental results show that FedGame can effectively defend against strategic attackers and achieves significantly higher robustness than baselines. For instance, FedGame reduces the attack success rate by 82% on CIFAR10 compared with six state-of-the-art defense baselines under the Scaling attack. Strengths: The article is well written, with smooth grammar and a clear purpose. It summarizes and clarifies the concepts and definitions of backdoor attacks and federated learning well. Expressing FedGame as a minimax game between defenders and attackers is novel and innovative. The mathematical derivation is relatively detailed.
Weaknesses: There may be biases in the selection and collection of data samples, which affect the accuracy and reliability of the results. In terms of research methods, there is a lack of sufficient description of the control group and experimental group, making it difficult to determine causal relationships. The literature review section lacks comprehensive and in-depth academic research, resulting in a lack of clarity in the research background of the article. The results and discussion section is not clear and convincing enough, and requires more detailed explanation and support. The improper use of statistical analysis methods or hypothesis testing has affected the credibility of the results. There are some loose and incoherent issues in the structure and organization of the paper, which require better logic and framework. The summary in the conclusion section may be too brief to fully convey the significance and contributions of the research. The writing of the paper could be improved for better description and clarification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Have you cited the latest and most relevant research results? 2. The experimental content is insufficient. 3. The experimental datasets are all small; the models should also be tested on large-scale datasets. 4. The figures/icons are not clear enough. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper shows good innovation and rigorous writing logic, but the experiments are not sufficient and need to be further expanded to obtain more universal conclusions.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. **C1: Have you cited the latest and relevant research results?** Thanks for the question. To the best of our knowledge, we have cited those research results. We will do a comprehensive literature survey to add and discuss more recent research results (e.g., those published after the NeurIPS’23 deadline). Thanks for the kind reminder! **C2: Insufficient experimental content.** Following the suggestions, we will add more discussion and explanation in our experimental results sections. We will also polish the structure and organization of the paper. Moreover, we will expand the conclusion section and polish the writing of the paper to make it clear in the description. **C3: The experimental datasets are all small datasets.** In Appendix D.6, we conduct experiments on the Tiny-ImageNet dataset. Our experimental results show our defense is still effective on the large dataset. We will make it more clear in our revision. **C4: Icons are not clear enough.** Thanks for the suggestions. We will polish the figures in the paper to make them clear. --- Rebuttal Comment 1.1: Comment: Dear authors: I extend my gratitude for the comprehensive responses and the inclusion of additional experiments. Overall, I find myself without any further inquiries concerning this manuscript. --- Reply to Comment 1.1.1: Comment: We really appreciate the reviewer for the time and effort in reviewing the paper and reading our rebuttal.
Summary: The authors introduce a game-theoretical defense mechanism named "FedGame" designed to safeguard against backdoor attacks in the Federated Learning (FL) environment. They demonstrate through theoretical analysis that training the global FL model using FedGame under backdoor attacks yields results closely resembling those obtained without any attacks. The researchers conduct extensive evaluations using benchmark datasets and compare FedGame's performance with various state-of-the-art baselines. The experimental findings indicate that FedGame successfully thwarts strategic attackers and significantly outperforms the baseline approaches, displaying remarkable robustness against adversarial threats. Strengths: 1. It is interesting to incorporate game theory into federated learning. 2. The authors show that using FedGame to train the global FL model under backdoor attacks yields similar results to those without attacks. 3. The authors extensively evaluate FedGame on benchmark datasets, showcasing its effectiveness in defending against strategic attackers and outperforming baselines with significantly higher robustness. Weaknesses: From a game perspective, 1. Ideally, we expected to see a game model incorporating FL's online nature (e.g., a Markov game). It is unclear to me why it is a multi-stage strategic game (a sequential decision problem), since each FL round's optimization problem is solved independently. 2. The term min-max game is ambiguous. It should be made clear which solution concept (Stackelberg or Nash equilibrium, zero-sum or general-sum utility) is considered. Otherwise, it is hard to claim the existence of the solution and the convergence of the algorithm. 3. The game's information design is unclear (a game with complete or incomplete information). It should be modeled clearly (probably mathematically?) which player has what information that is known/unknown to some other player.
Informational concerns play a central role in players’ decision making in such strategic environments. From a security perspective, 1. Why can the reverse engineering methods be directly applied to the FL setting, considering that they normally require a large amount of unbiased clean data (FL clients may hold heterogeneous data)? Will the aggregation influence the reversed pattern, since the aggregated model is updated by a combination of clean and poisoned data? 2. The action spaces for both attacker and defender are not general enough. From the attack side, the poison ratio is not the only parameter the attacker can manipulate (e.g., scaling factor, batch size, number of local training epochs, etc.). In general, the attacker can control the model updates sent to the server. From the defense side, the authors only consider the client-wise aggregation-based training-stage defense. Other defenses (e.g., coordinate-wise or post-training-stage defenses) could be considered to extend the action space. 3. In FL, the server is not only unaware of the attacker, but also lacks environment information, e.g., the total number of clients, the non-iid level, subsampling, etc. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Instead of utilizing reverse engineering, why not consider other approaches closer to game concepts, such as modeling a Bayesian game and updating the defender's prior belief about the attacker during the FL process? 2. Does the "dynamic backdoor attack" mean the attacker can change the backdoor trigger and targeted label during the FL process? If that is the case, the attacker should consider a long-term goal instead of a one-shot myopic objective. Otherwise, it could mitigate its own attack effects even when no defense exists. 3. It is unclear to me, in both Section 4.3 and the Appendix, how the attacker uses stochastic gradient descent to optimize the trigger $\delta$.
In the conducted DBA experiments, the trigger(s) take the form of fixed-sized square(s) located at specific position(s) and may involve certain numbers of squares. These characteristics represent potential parameters that can be incorporated into the optimization problem. However, it is worth noting that for achieving the most versatile trigger, it is worthwhile to consider every pixel in the image. In recent works [1] [2], researchers have explored generating backdoor triggers that span the entire image. 4. I wonder what happens if the reverse engineering cannot recover the correct trigger and/or targeted label. There should be either a theoretical analysis or ablation experiments to further examine this influence. [1] Salem, Ahmed, et al. "Dynamic backdoor attacks against machine learning models." 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). IEEE, 2022. [2] Doan, Khoa D., Yingjie Lao, and Ping Li. "Marksman backdoor: Backdoor attacks with arbitrary target class." Advances in Neural Information Processing Systems 35 (2022): 38260-38273. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: 1. The post-training-stage defenses play a vital role in countering backdoor attacks. Even within the context of FL, certain techniques such as Neuron Clipping [3] and Pruning [4] have demonstrated their effectiveness in detecting and mitigating the impact of backdoor attacks. Consequently, I am curious to know how the proposed FedGame performs compared to these post-training-stage defenses. Note that some of them do not require knowledge of the backdoor trigger or target label. 2. I wonder about the efficiency of the FedGame algorithm.
It seems that for each FL epoch, it needs to run the reverse engineering method (costly according to the original papers) and calculate an equilibrium (also costly in most settings, such as a general-sum Nash equilibrium). Is there any assumption or simplification I missed here? [3] Wang, Hang, et al. "Universal post-training backdoor detection." arXiv preprint arXiv:2205.06900 (2022). [4] Wu, Chen, et al. "Mitigating backdoor attacks in federated learning." arXiv preprint arXiv:2011.01767 (2020). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable suggestions. **C1: Why the game is multi-stage.** We are sorry for the confusion. In the abstract, we meant to say a multi-stage interaction between the attacker and defender over different FL rounds instead of a multi-stage game. **C2: Min-max term not clear.** We are sorry for the confusion. Our game is a zero-sum Stackelberg game. We will add a discussion of the existence of the Stackelberg equilibrium following the existing literature. **C3: Game’s information.** The defender knows the local models but does not have 1) information on compromised clients, or 2) the poisoning rate $r_i^t$, trigger $\delta$, and target class $y^{tc}$ of the attacker. The attacker knows the global model $\Theta^t$ in each FL round and the trigger reverse engineering method (to consider a strong attacker). We will add more details. **C4: How does reverse engineering in FL work?** We would like to kindly mention that we do not heavily rely on clean data resources. FedGame does not require reverse-engineering the exact trigger, as long as the reverse-engineered trigger can be used to distinguish benign and compromised clients. Thus, compared with [38, 42], we have a lower requirement for the reconstructed trigger. In our experiments, we only use a moderate number of clean samples (10\% of the size of the training data in the default setting) for reverse engineering. The aggregation step will not affect the effectiveness of reverse engineering, since these FL attacks are designed to poison the global model with only a few local models poisoned. **C5: Action space.** We really appreciate the suggestion. We hope to provide a general framework. We conduct thorough experiments considering the poisoning ratio as the action space of players, since it is critical in controlling the effectiveness and cost of attackers. Moreover, we also consider optimizing other variables, such as the trigger pattern.
We agree that it is interesting future work to extend the action space. We will discuss this. **C6: Environment information.** We agree. Note that we don’t assume the server is aware of the environment information, to ensure the generalizability of FedGame. For instance, without an attacker, FedGame would automatically assign similar genuine scores to clients, reducing FedGame to FedAvg, which often gives the best utility. With an attacker, FedGame still does not need to know the total number of clients, the non-iid level, or the subsampling; e.g., our results show FedGame is effective with 1) different total numbers of clients, and 2) a subset of clients selected in each FL round. We will make this clear. **C7: Other game formulations.** Here we aim to propose the first game-theory-based framework. We believe it is an interesting future direction to consider other game formulations, e.g., Bayesian games. We will add a discussion. **C8: Dynamic backdoor attacks.** Yes. We consider and allow a dynamic attacker who could change the backdoor trigger and targeted label during the FL process. Currently, we only consider the worst-case attacker in each round by optimizing their attack objective. We agree that an attacker could consider a long-term goal, leading to more sophisticated attacks. We leave the attack development (developing such an attack is still an open challenge) and the examination of FedGame under that attack as future work. We will add related discussion and future work. **C9: Trigger optimization.** We are sincerely sorry for the confusion. We optimize the trigger according to different attacks. For instance, we follow the setting of DBA, and optimize the position of each local trigger while preserving its pattern. For the scaling attack, we optimize every pixel of the trigger.
Concretely, we randomly sample a batch of samples from $\mathcal{D}_i$, then compute the gradient of the loss function in Line 219 with respect to $\delta^*$, and finally update $\delta^*$ based on the gradient. Note that we clip the gradient such that each entry of $\delta^*$ is a valid pixel value. We will add detailed illustrations. As suggested, we added an experiment for the triggers in [1]. FedGame reduces the ASR from 99.43\% to 10.83\% under our default setting, meaning FedGame is effective for diverse triggers. We will add the results and discussion. [1] "Dynamic backdoor attacks against machine learning models." **C10: Reverse-engineered trigger/targeted label could be incorrect.** FedGame would still be effective if the reconstructed triggers were incorrect, as long as they can distinguish benign and compromised clients. We added an experiment to validate this. We use 20 iterations (originally, it’s 100) when reverse engineering the trigger under the default setting to obtain low-quality and incorrect triggers (the trigger is not visually similar to the original one in this case). We find that the genuine scores for malicious clients are still low. The final ASR is 13.23\%, which is slightly higher than the original ASR. We will add a discussion. **C11: Post-training defenses.** FedGame could be combined with those defenses to form a defense-in-depth. To maintain utility, in general, [3] needs some clean samples that have the same distribution as the training dataset, while FedGame is effective even if the server's samples have a different distribution from the FL task. [4] prunes filters based on pruning sequences collected from clients. Thus, [4] could be less effective under a large fraction of compromised clients, while FedGame is effective under 80\% of compromised clients. We will add the discussion on [3, 4] and a comparison. **C12: Efficiency.** We agree that FedGame incurs extra computation costs for the server.
We would like to kindly note that in practice, the server (e.g., deployed by Google) is usually powerful. Given the triggers, it is very efficient to compute genuine scores for clients, as shown in Appendix D.6. Moreover, we could compute them in parallel. We will add these discussions in our revision. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and informative clarification. After reading the rebuttal and other reviewers' comments, most of my concerns have been addressed. I will change my rating. Thank you. --- Reply to Comment 1.1.1: Comment: We really appreciate the reviewer for the positive feedback on our response and constructive suggestions!
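The trigger update described in C9 (a gradient step on $\delta^*$, with clipping so that every entry stays a valid pixel value) can be sketched as follows. This is a toy illustration under our reading of the rebuttal; the actual loss and gradient computation are model-specific and omitted, and all names and numbers here are assumptions:

```python
import numpy as np

def update_trigger(delta, grad, lr=0.1, lo=0.0, hi=1.0):
    """One gradient-descent step on the trigger, then clip each
    entry back to the valid pixel range [lo, hi]."""
    stepped = delta - lr * grad
    return np.clip(stepped, lo, hi)

# toy 3-pixel trigger and a hypothetical gradient of the attack loss
delta = np.array([0.05, 0.5, 0.95])
grad = np.array([1.0, -1.0, -1.0])
new_delta = update_trigger(delta, grad)
# -> approximately [0.0, 0.6, 1.0] after the step and clipping
```

Repeating this step over sampled batches yields a projected-gradient-style optimization of the trigger within the valid pixel range.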
Rebuttal 1: Rebuttal: We really appreciate the reviewers for the constructive feedback and suggestions. We are glad that the reviewers find our work novel, effective, and well-organized. In summary, we have added the following additional experiments following the reviewers’ suggestions: 1. We add an experiment to validate the effectiveness of FedGame for triggers that encompass the entire image. 2. We add an ablation experiment to show our FedGame is still effective when the reverse-engineered trigger/targeted label is incorrect. 3. We add an experiment to validate the effectiveness of FedGame when an attacker could design a different loss function. 4. We add a comparison with FLIP (a defense baseline). In addition, we have added the following clarifications/explanations in our revision: 1. We clarify the min-max game of our defense. 2. We clarify the information design in our game. 3. We add an explanation on trigger reverse engineering in FL. 4. We clarify the environment setup. 5. We clarify the dynamic backdoor attack strategies. 6. We add the details of the trigger optimization. 7. We clarify and discuss the post-training defenses. 8. We discuss the efficiency of our framework. 9. We clarify the relevant research and experiment content. 10. We add an explanation of our framework for large triggers. 11. We add an explanation on the optimization of $r_i^t$ and $p_i^t$. 12. We clarify the auxiliary global model and global model. 13. We add an explanation on genuine score estimation. 14. We clarify the evaluation steps of the genuine score.
NeurIPS_2023_submissions_huggingface
2023
RegBN: Batch Normalization of Multimodal Data with Regularization
Accept (poster)
Summary: This paper focuses on mitigating the influence of unwanted variability and bias in multimodal data. To this end, the authors propose a novel non-parametric regularization technique, RegBN. The proposed method utilizes the Frobenius norm to constrain cross-modality consistency, enabling the model to discern underlying patterns. Comprehensive experiments show the usefulness of the proposed approach on various datasets from different research areas. Strengths: 1. The paper is well written and follows a good structure. 2. A comprehensive performance evaluation is presented, involving various recent multimodal methods and seven datasets, showing that the proposed approach works in practice. 3. Mitigating the unwanted variability and bias of multimodal data is an important topic, and few methods have touched this field. The proposed method is interesting. Improving the reliability of multimodal learning by normalizing multimodal features is a new perspective. Weaknesses: Major - Method: The proposed method mitigates the influence of unwanted variability and bias by minimizing the linear relationship between multimodal layers. In Figure 1, the authors claim that "by leveraging RegBN, these data sources can be rendered independent, thereby enabling a multimodal network to discern underlying patterns and optimize its performance". To the best of my knowledge, exploring consistency and complementarity [1] may be the two main underlying reasons why multimodal models outperform unimodal ones. Does the proposed method neglect to explore cross-modal consistency? - Analysis: In line 108, the authors claim that "The goal is to find potential similarities (mainly caused by confounding factors and dependencies during data collection)". In my view, this is an essential assumption of this paper. I think the authors should provide more discussion about why potential confounding factors and dependencies cause these similarities. It would be necessary to give some explanation and citations.
- Comparisons: In line 33, "For instance, the race or gender of speakers in audio classification, backgrounds in video parsing, or the education level of patients in dementia diagnosis is commonly recognized as confounders." This statement seems very close to the motivation of DRO [2]. It would be appreciated if the authors gave some discussion of the relationship between RegBN and DRO. Minor - Ambiguous notations: In lines 101 and 102, what is the meaning of N and M? [1] Deep Partial Multi-View Learning, Changqing Zhang, Yajie Cui, Zongbo Han, Joey Tianyi Zhou, Huazhu Fu and Qinghua Hu, IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE T-PAMI) [2] Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In International Conference on Machine Learning, pages 6781–6792. PMLR, 2021. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address the issues raised in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Please address the issues raised in the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for the comments and feedback. We have addressed each comment carefully and provided a point-by-point response, as outlined below. *Question #1: Method: The proposed method mitigates the influence of unwanted variability and bias by minimizing the linear relationship between multimodal layers. In Figure 1,... To the best of my knowledge, exploring consistency and complementarity [1] may be the two main underlying reasons why multimodal models outperform unimodal ones. Does the proposed method neglect to explore cross-modal consistency?* **Authors' Response**: In [1], the authors aim to improve performance with missing data. When “views” or data sources are missing for a specific instance, the missing data is imputed. The consistency is used as a constraint for the generation of the missing information. In our work, we are not considering missing data, which could be the subject of future work. In the case of complete observations, having the same information multiple times in the network does not benefit the classification and rather makes the network more inefficient (a phenomenon also referred to as modality imbalance), as demonstrated in our experiments. RegBN aims at making the different modalities' data independent by removing confounding effects. As demonstrated by the quantitative and qualitative experimental results, ensuring independence can improve the reliability of analyses and predictions by leveraging the synergies between different types of information while minimizing confounding impacts between modalities. Thanks to this comment, we have included a concise discussion on consistency, complementarity, and multimodal normalization in the Supplementary’s Q&A section, Q1. *Comment #2: Analysis: In line 108, the authors claim that "The goal is ..." It could be necessary to give some explanation and citations.* **Authors' Response**: We apologize if the text is not clear about the confounders.
By definition, a confounder is a common cause of treatment and outcome. Hence, there are similarities between confounder and treatment. We want to remove those similarities (dependencies) to avoid biased results. As an example, the imaging device may impact the characteristics of images, as is the case for medical images. If the task is to predict diagnosis, and the diagnoses are not balanced across scanners, the model can learn to differentiate scanners instead of diseases. Hence, we want to remove the dependence of the images on the scanner. We will enhance the clarity of the Introduction Section by incorporating the following sentences: “Confounding variables pertain to external factors that introduce bias (either positive or negative) in the relationship between the variables being studied [1*,2*]. The complexity of confounders emerges from their potential pervasiveness across diverse data modalities. For instance, in image analysis, confounders might encompass lighting variations, while in audio classification, speaker attributes like race or gender can be confounding factors. In video parsing, backgrounds play a role, and in dementia diagnosis, the education level of patients can be a confounder. Furthermore, positive or negative correlations can exist among heterogeneous data that impact the distributions of the learned features [2*, 3*].” [1*] Elwood JM, editor. Causal Relationships in Medicine. Oxford: Oxford University Press; 1988. p. 332. [2*] Soleymani et al. A survey of multimodal sentiment analysis. Image and Vision Computing, 65, pp. 3-14, 2017. [3*] Manuscript: ref 28 *Comment #3: Comparisons: In line 33 "For instance,...". This statement seems very close to the motivation of DRO [2]. It is appreciated to give some discussion about the relationship between RegBN and DRO.* **Authors' Response**: The paper in [2] introduces JTT and compares it to group DRO in the experiments. 
JTT addresses the problem that classifiers may have an overall high accuracy, but certain groups may have low accuracy. JTT first learns a classifier on the input data and then assigns higher weights to training samples that were misclassified. Subsequently, training a second model with the weighted training set pays more attention to those misclassified samples. This can help with the worst group’s accuracy. In RegBN, we address the normalization of multimodal data, which is a very different problem from JTT. We consider multiple input modalities. For instance, consider the example that we would have audio recordings and the gender of the speaker as input. JTT would first train a model, and perhaps the samples with the worst accuracy would mainly contain men. By putting more emphasis on the worst samples in the second training, the performance on men's audio recordings may increase. For RegBN, we would have the audio data and gender as inputs. The RegBN layer would now create a latent representation of audio that is independent of the gender variable. In the next step, the classification layer takes normalized audio and gender as its inputs, wherein the audio features are gender-neutral. *Minor Comment: Ambiguous notations: In lines 101 and 102, what is the meaning of N and M?* **Authors' Response**: We thank the reviewer for highlighting this point. In this study, 'f' and 'g' are N- and M-dimensional feature maps, respectively. We will make the necessary revision to the text to accurately reflect this. We sincerely appreciate the reviewer's insightful suggestions, and we are committed to integrating them into the final version of the manuscript. We are hopeful that our detailed responses have effectively attended to all of the reviewer's concerns. In the event that any additional questions or points of clarification arise, we kindly invite the reviewer to raise those during the forthcoming reviewer-author discussion period. 
Your comment is greatly appreciated, and we look forward to engaging in further discussions during the mentioned timeframe. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response; my main concerns about the motivation and the underlying reason for its effectiveness have been addressed. Taking other reviewers' comments into account, I tend to increase my score to 5/10.
Summary: This paper introduces an approach for the normalization of multimodal data called RegBN. RegBN aligns multimodal features by a learnable projection matrix with regularization in the form of the Frobenius norm. The regularization strength is updated with the L-BFGS optimization algorithm. Extensive experiments on various modalities are conducted to verify the method. Strengths: - The paper is well-written and easy to follow. - Aligning multimodal data from the perspective of normalization can be promising. - Extensive experimental results show the effectiveness of the proposed RegBN. Weaknesses: - The method is not very novel to the reviewer. Essentially, RegBN is an l2 regression with regularization in the form of the Frobenius norm on the weights. Moreover, L-BFGS has been widely adopted to update the regularization strength. - From the extensive experiments, RegBN is actually a multimodal feature fusion module. Other than baselines of normalization methods such as MDN and PMDN, more comparisons with other multimodal feature fusion modules such as [1] and [2] should be provided. - Why is the method called RegBN? It is not clear how RegBN is connected with normalization methods. [1] Yikai Wang et al. Learning Deep Multimodal Feature Representation with Asymmetric Multi-layer Fusion. 2021 [2] Yikai Wang et al. Deep Multimodal Fusion by Channel Exchanging. 2020 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. Overall, this paper proposes a simple but effective scheme to better fuse multimodal features. The technical contribution is limited. The experimental results are comprehensive and convincing. It would be better to provide more comparative experiments as suggested. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the comments and feedback provided by the reviewer. We would like to address the concerns raised regarding the presentation of our method in the manuscript. We reordered the comments to ensure an efficient and comprehensive response, taking into account overlapping concerns. *Comments #1&2: C1) From the extensive experiments, RegBN is actually a multimodal feature fusion module. Other than baselines of normalization methods such as MDN and PMDN, more comparisons with other multimodal feature fusion modules such as [1] and [2] should be provided. C2) Why is the method called RegBN? It is not clear how RegBN is connected with normalization methods.* **Authors' Response**: We have already addressed this concern meticulously in Supplementary-Section A: Multimodal Normalization and Fusion. In that section, we provided a clear discussion of fusion versus normalization. In broad terms, fusion and normalization are distinct subjects with unique tasks. RegBN functions by normalizing input X in relation to input Y, resulting in a normalized X with the same dimensions as the original input X. This stands in contrast to fusion, where inputs X and Y are combined to generate one or more outputs with distinct content and dimensions. Hence, RegBN, as a normalization method, can be used in the structure of any fusion or deep learning method. We are well informed about the latest advancements in the fusion domain, and it is noteworthy that the paper [2] recommended by the reviewer had been previously cited as ref [51] in the manuscript. Our manuscript comprehensively covers the nuances of early, layer, and late fusion, as depicted and elaborated in Figure I.1 in the Supplementary. We have incorporated a segment of this discussion into Supplementary-Section G, addressing Question 3 for more clarification. *Comment #3: The method is not very novel to the reviewer. 
Essentially, RegBN is an l2 regression with regularization in the form of the Frobenius norm on weight. Moreover, L-BFGS has been widely adopted to update the regularization strength.* **Authors' Response**: Based on the previous comment, there may have been confusion by the reviewer about the actual role of RegBN (as a fusion or a normalization layer) and, therefore, also the confusion about its novelty. The other respected reviewers clearly stated the high novelty of RegBN. The novelty of RegBN lies in addressing challenges for multimodal data normalization. Kindly take into account that we do not assert novelty in our use of L-BFGS. We chose it as a practical tool for implementing our method. However, our approach and the objective of this study remain adaptable to any future minimization methodologies. We trust that the provided responses address the reviewer’s concerns regarding RegBN. We have taken diligent measures to ensure comprehensive coverage of RegBN details within both the manuscript and Supplementary. As the reviewer-author discussion period is scheduled from August 10 to August 16, we eagerly await any forthcoming questions and will be delighted to offer further clarification during this time. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. The rebuttal addresses my concerns properly. I would like to increase my score to 5.
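To make the mechanism under discussion concrete, the following is a minimal numpy sketch of an l2 regression with a Frobenius-norm penalty on the weights, where the regression residual serves as the normalized feature. The variable names (f, g, W, lam) and the toy mini-batch are ours, not the authors'; the closed-form ridge solution stands in for the mini-batch L-BFGS solver the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mini-batch of "feature maps" from two modalities:
# g is the conditioning modality; f partially depends on g.
B, M, N = 256, 8, 6                      # batch size, dim of g, dim of f
g = rng.standard_normal((B, M))
f = g @ rng.standard_normal((M, N)) + 0.5 * rng.standard_normal((B, N))

lam = 1e-2                               # Frobenius-norm regularization strength
# Ridge solution of  min_W ||f - g W||_F^2 + lam * ||W||_F^2
# (closed form here; the paper solves this per mini-batch with L-BFGS).
W = np.linalg.solve(g.T @ g + lam * np.eye(M), g.T @ f)

f_r = f - g @ W                          # residual = "normalized" f, same shape as f

# The residual's cross-covariance with g collapses toward zero,
# i.e. the normalized features carry no linear trace of g.
before = np.abs(g.T @ f).max()
after = np.abs(g.T @ f_r).max()          # several orders of magnitude smaller
```

As lam tends to zero the residual becomes exactly orthogonal to g (the least-squares normal equations); a small positive lam keeps the Frobenius norm of W bounded at the cost of a tiny residual correlation, which is the regularization role referred to in the exchange above.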
Summary: The paper introduces a novel approach called RegBN for normalizing multimodal data before fusion in neural networks. The integration of heterogeneous multimodal data poses challenges due to confounding effects and dependencies among different data sources, which can introduce variability and bias. RegBN addresses these issues by using the Frobenius norm as a regularizer term. The method generalizes well across multiple modalities and eliminates the need for learnable parameters, simplifying training and inference. The effectiveness of RegBN is validated on eight databases from various research areas, covering diverse modalities. The proposed method shows broad applicability across different neural network architectures, enabling effective normalization of both low- and high-level features in multimodal models. Overall, RegBN offers a promising solution for improving the performance of multimodal models by addressing normalization challenges. Strengths: The paper presents a novel idea regarding multi-modal fusion. The proposed method is model-agnostic, i.e., the authors provide details about how to apply RegBN within different frameworks, including late fusion, layer (intermediate) fusion, and early fusion. This method does not require significant modification of the original framework and can be plugged in regardless of the encoder's architecture. Extensive experiments are conducted, including eight datasets covering applications in multimedia, affective computing, healthcare diagnosis, and robotics. On all datasets, the proposed method outperformed the considered baselines: the vanilla model, or the model with normalization applied in another manner, such as BatchNorm or PMDN. This work provides a relatively unique exploration of the topic of multi-modal learning which I believe is worth presenting to the community. Weaknesses: Authors conducted experiments with/without RegBN and also compared with other normalization methods. 
It would be interesting to see how RegBN helps with the problem of imbalanced modality utilization, as pointed out in recent papers [1, 2, 3]. [1] Wang, Weiyao et al. “What Makes Training Multi-Modal Classification Networks Hard?” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020: 12692-12702. [2] Huang, Yu et al. “What Makes Multimodal Learning Better than Single (Provably).” Neural Information Processing Systems (2021). [3] Wu, Nan et al. “Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks.” ICML 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Typo in the caption of Figure 3. Subfigures are indexed incorrectly. - The authors should remember to correct the misstatement in the main paper about SMIL for MM-IMDb. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors mentioned that the current algorithm works only on a single GPU. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our gratitude for the reviewer's comments and insightful suggestions. Below, we have provided a point-by-point response to each comment, encompassing all concerns, for your careful consideration. *Comment: Authors conducted experiments with/without RegBN and also compared with other normalization methods. It would be interesting to see how RegBN helps with the problem of imbalanced modality utilization, as pointed out in recent papers [1, 2, 3]. ([1] Wang, Weiyao et al. “What Makes Training Multi-Modal Classification Networks Hard?” CVPR-2020. [2] Huang, Yu et al. “What Makes Multimodal Learning Better than Single (Provably).” NeurIPS-2021. [3] Wu, Nan et al. “Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks.” ICML-2022.)* **Authors' Response**: We thank the reviewer for the insightful comment. Modality imbalance is a common occurrence in multimodal learning. Fortunately, RegBN can present a valuable solution to address the issues arising from modality imbalance. We conducted an experiment utilizing Prototypical Modality Rebalance (PMR) [1*], one of the latest techniques tailored for mitigating modality imbalance within multimodal learning. Our experiment was conducted on the Colored-and-gray MNIST dataset [2*]. Similar to the MNIST dataset, Colored-and-gray MNIST contains 60,000 training samples and 10,000 test samples. Each sample comprises a gray-scale image (as the first modality) and a monochromatic image (as the second modality). The latter exhibits a strong color correlation with its corresponding digit label. As noted in [1*], the monochromatic images within the training set are robustly color-correlated with their respective digit labels. In contrast, the validation set consists of 10,000 samples, wherein the color correlation of monochromatic images with their labels is weaker. The RegBN in this experiment was applied to raw input images. 
The results on the test set are as follows: PMR: [acc: 32.97%, acc_gray: 19.57%, acc_color: 16.26%]; PMR with RegBN: [acc: 37.04%, acc_gray: 20.61%, acc_color: 21.44%]. We have also attached the training loss and validation accuracy as Figure 1. As shown in the attached plots, the performance of PMR in both training loss and validation score is boosted in the presence of RegBN. RegBN provides PMR with an enhanced capability to be trained deeper (lower loss) by enforcing independence between the two modalities. The optimization mechanism of RegBN, detailed in Sections 3.1 & 3.2, does not allow the multimodal model to fall into local minima. This increased freedom contributes significantly to an improvement in PMR performance. We have condensed a portion of this discussion into Question 2 of the Supplementary's Q&A section (Section G). [1*] Fan et al. PMR: Prototypical Modal Rebalance for Multimodal Learning. CVPR, 2023. [2*] Kim et al. Learning not to learn: Training deep neural networks with biased data. CVPR, 2019. *Questions: 1. Typo in the caption of Figure 3. Subfigures are indexed incorrectly. 2. The authors should remember to correct the misstatement in the main paper about SMIL for MM-IMDb.* **Authors' Response**: We sincerely thank the reviewer for pointing out the typos. We apologize for the errors, and we will correct them in the final version. *Limitation: Authors mentioned that the current algorithm works only on a single GPU.* **Authors' Response**: The initial release of RegBN on GitHub, which will also be referenced in the manuscript's final version, features support for a single GPU. Our ongoing efforts involve the development of a multiple-GPU implementation of L-BFGS, aiming to optimize RegBN for training with large-scale datasets. As progress is made, we will update and release this enhanced version on the GitHub repository. Once again, we thank the reviewer for every comment. Our responses aim to address all concerns. 
If more questions or suggestions arise, we invite the reviewer to discuss them during the Aug 10-16 reviewer-author period. --- Rebuttal Comment 1.1: Comment: I especially appreciate the authors extending the experiments, and I am very excited to see the results presented. Given my expertise and experience in doing very relevant research on the same topic, I highly recommend this paper be accepted.
Summary: This paper proposes RegBN for the normalization of multimodal data. RegBN uses the Frobenius norm as a regularizer term to address the side effects of confounders and underlying dependencies among different sources of data. The proposed method generalizes well across multiple modalities and eliminates the need for learnable parameters, simplifying training and inference. Experiments on eight databases from five research areas demonstrate the effectiveness of the proposed method. Strengths: - This paper proposes a novel RegBN method for the normalization of multimodal data. - Extensive experiments demonstrate the effectiveness of the proposed method. Weaknesses: - My main concern is that the proposed method lacks comparison with other competitive normalization methods such as layer norm and group norm. - Using eight databases from five research areas is not necessary. The authors should focus on two or three datasets. - A synthetic multimodal dataset should be provided to thoroughly evaluate the effectiveness of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude for the reviewer's comments and feedback on our study. We have taken into account all the comments and kindly request the reviewer to consider our responses, which we have addressed in a meticulous manner. *Comment #1: My main concern is that the proposed method lacks comparison with other competitive normalization methods such as layer norm and group norm.* **Authors' Response**: Thank you for your insights. Traditional normalization techniques such as batch normalization (BN), layer normalization (LN), and group normalization (GN) are not explicitly designed for multimodal data and confounder removal; they yielded unsatisfactory results in our experiments and were therefore omitted from the main manuscript. Our manuscript prioritizes the presentation of techniques demonstrating acceptable performance or contextual relevance. To ensure transparency and comprehensiveness, we have already included the results of such techniques in specific experiments, notably the synthetic experiment (Section E.6) and the healthcare domain experiment (Section E.4). This allows for a holistic understanding of the methodology's performance across varying scenarios. We have updated Supplementary-Sections E.4 and E.6 (Tables I.7 and I.9) to include LN results (please find the tables in the attachment). As anticipated, the LN outcomes in both experiments are comparable to those of traditional normalization techniques such as BN and GN. *Comment #2: Using eight databases from five research areas is not necessary. The authors should focus on two or three datasets.* **Authors' Response**: Our motivation to assess and validate the proposed method across diverse modalities was driven by the abundance of multimodal data such as video, 2D/3D image, text, tabular, audio, etc. 
The wide range of experiments presented in this study has provided us with a robust platform to showcase the versatility of RegBN—a facet that has been warmly embraced by the other reviewers. We also opted for publicly available and community-accepted benchmark datasets for reproducibility. To ensure readability, we presented a condensed result overview in the manuscript while reserving the detailed results in the Supplementary file for clarity. Our hope is that RegBN gains widespread adoption as a normalization method in multimodal learning, and we are keen to observe the performance of RegBN on multimodal data from other sensors. *Comment #3: A synthetic multimodal dataset should be provided to thoroughly evaluate the effectiveness of the proposed method.* **Authors' Response**: Sections 4.5 & E.6 provide the results for synthetic datasets with a single channel. To address the synthetic multimodal dataset, we conducted an experiment utilizing Prototypical Modality Rebalance (PMR) [1*], one of the latest techniques tailored for mitigating modality imbalance within multimodal learning. Our experiment was conducted on the Colored-and-gray MNIST dataset [2*]. Similar to the MNIST dataset, Colored-and-gray MNIST contains 60,000 training samples and 10,000 test samples. Each sample comprises a gray-scale image (as the first modality) and a monochromatic image (as the second modality). The latter exhibits a strong color correlation with its corresponding digit label. As noted in [1*], the monochromatic images within the training set are robustly color-correlated with their respective digit labels. In contrast, the validation set consists of 10,000 samples, wherein the color correlation of monochromatic images with their labels is weaker. The RegBN in this experiment was applied to raw input images. 
The results on the test set are as follows: PMR: [acc: 32.97%, acc_gray: 19.57%, acc_color: 16.26%]; PMR with RegBN: [acc: 37.04%, acc_gray: 20.61%, acc_color: 21.44%]. We have also attached the training loss and validation accuracy. As shown in the attached plots, the performance of PMR in both training loss and validation score is boosted in the presence of RegBN. RegBN provides PMR with an enhanced capability to be trained deeper (lower loss) by enforcing independence between the two modalities. The optimization mechanism of RegBN, detailed in Sections 3.1 & 3.2, does not allow the multimodal model to fall into local minima. This increased freedom contributes significantly to an improvement in PMR performance. We have condensed a portion of this discussion into Question 2 of the Supplementary's Q&A section (Section G). It is worth noting that the Colored-and-gray MNIST dataset should not be considered a multimodal synthetic dataset. However, it does hold some relevance to Comment #3, as the input samples encompass images with varying channel numbers. [1*] Fan et al. PMR: Prototypical Modal Rebalance for Multimodal Learning. CVPR, 2023. [2*] Kim et al. Learning not to learn: Training deep neural networks with biased data. CVPR, 2019. Once more, we extend our gratitude for your valuable comments and feedback on this study. Should any additional questions or clarifications arise, we enthusiastically encourage you to raise them during the upcoming reviewer-author discussion period scheduled from August 10 to August 16. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for providing the response. I acknowledge that this response has been read and will be fully considered. Regards, AC
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful and constructive feedback. We have incorporated their suggestions and responded comprehensively to their comments. Since the reviewer comments do not share common themes, we have addressed them individually. We have incorporated a Q&A section in the Supplementary file that concisely summarizes the discussion related to specific questions raised by the reviewers. This section, provided on one page with 4 questions, is intended to proactively address potential inquiries that readers might have about the same matter. We hope that our study will be well-received as a valuable contribution to NeurIPS' focus on theory & application. We remain available for any further discussions or inquiries the reviewers may have. It is important to mention that the enclosed file includes figures and tables that have been prepared to address some of the comments from two respected reviewers with the IDs eB1v and saMu. Best regards, Authors Pdf: /pdf/e42a3fb5aa7c579ab37a49c831f85b9367286716.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a novel regularization approach tailored for multimodal/heterogeneous data fusion, which can be applied off-the-shelf to modern neural network architectures. The approach, namely RegBN, targets the reduction/removal of confounding factors and partial dependencies that are usually present among features extracted from different modalities. To do so, RegBN is implemented as a regularization technique that promotes feature independence across data modalities. The regularization is iteratively optimized over mini-batches using a separate solver not relying on back propagation and gradient descent. This separation is critical as otherwise the network could simply learn to use the confounding factors instead of removing them. Experiments are performed on several multimodal datasets based on different modalities and spanning across different applications, such as multimedia, affective computing, healthcare diagnosis, and robotics. Moreover, a synthetic dataset is designed to further highlight the impact of RegBN. The results showcase a positive impact in all the test cases. Strengths: 1. The approach is novel and grounded with theoretical motivation 2. The method can be plugged off-the-shelf into most multimodal architectures, possibly leading to widespread adoption 3. Wide breadth of experiments that explore several multimodal scenarios 4. Positive results in all scenarios Weaknesses: 1. No integration with recent widely adopted multimodal foundation models, such as CLIP, ImageBind, and many others. While I understand that RegBN's purpose contrasts with the multimodal alignment objective driving those models, I believe it would be valuable to bring them into the discussion. It would also be beneficial for the positioning of the paper. 2. The boost is sometimes minor, e.g., see Tables 2, 5, and I.3, and in some settings even slightly negative, e.g., see Tables 5 and I.4 (video fusion rows). 
Also, the large difference between the AV-MNIST results reported in the main paper (~99% accuracy) and the ones reported in Table I.3 (~71% accuracy) is unclear. Furthermore, why are training metrics reported in Table 2 for the experiments on LLP, while validation metrics are in the supplementary? 3. As RegBN requires a separate optimization, its integration affects the time complexity of the model training. The Authors provide multiple examples of how RegBN could/should be integrated in existing architectures, see Figure I.1; however, in most cases a single RegBN layer is added to the network. It is unclear whether this choice is suboptimal for performance in favor of efficiency, or whether it is useless to have multiple RegBN layers as in I.1c. 4. Minor: 1. The writing is for sure enjoyable; however, there are some parts that could have been better explained / could be less ambiguous. For instance, in section 3: _"Ideally, the residual layer ... RegBN minimizes the linear relationship between these layers via: equation (2)"_, but then equation (2) only contains $f$ and $g$, and thus the key insight on why solving equation (2) would provide $f_r$ orthogonal to $g$ remains completely tacit, and it is not made explicit that $f_r$ corresponds to the normalization of $f$. Moreover, Figure 1 provides a general idea of the impact of RegBN, but it could have been more aligned with the formulation, e.g., by adding $f$, $f_r$, and $g$; and in general the caption could be expanded to better describe it — what does the grey band represent? Finally, the positioning could have been clearer by stating why recent multimodal foundation models like CLIP are out of the scope. 2. “multimodAl” often mistyped as “multimodEl”; Table I.5, 3rd column, video fusion, (70.6) vs (81.4) seems a bit suspicious; color gradient is not explained in Figure I.9; wrong x axis label in Figure I.10; Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. 
Do the Authors envision any use/application of RegBN for large foundation models, e.g., is it meant for whenever they are employed for downstream tasks? 2. What could be the reason why RegBN negatively affects performance? Is it due to the fact that, when evaluating some dataset in-distribution, learning spurious correlations might provide a boost? Please comment also on the last two sentences of weakness #2. 3. Is it enough to apply RegBN once as shown in I.1a,b,d or might it be worth applying it to several layers? 4. Consider slightly modifying sec. 3 and Figure 1, as well as adding somewhere (introduction or related work) a few positioning sentences wrt foundation models. Fix other typos/mistakes. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Societal impact is not discussed. I believe that RegBN could provide a benefit in reducing stereotypical biases that currently affect multimodal models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere appreciation to the reviewer for providing valuable comments and suggestions. We have provided a point-by-point response, as outlined below. *C#1 & Q#1: C) No integration with recent widely adopted multimodal foundation models,... Q) Do the Authors envision any use/application of RegBN for large foundation models...?* **Authors' Response**: We highly value the reviewer's insightful suggestion. RegBN is capable of being employed within the architecture of any multimodal technique. In a preliminary experiment involving CLIP, we performed fine-tuning on a pre-trained CLIP model trained on MS-COCO captions [1*]. The results on CIFAR10 are as follows: Top-1 Accuracy: CLIP without RegBN: 15.07%, CLIP with RegBN: 16.02%. Top-5 Accuracy: CLIP without RegBN: 66.8%, CLIP with RegBN: 67.05%. For larger-scale datasets like MS-COCO, layer fusion seems promising to unlock RegBN's full potential (compared to late fusion), and we will prioritize exploring this avenue as you kindly recommended. In accordance with both this comment and the one provided in Question #3, we are motivated to expand the content of the **fourth** paragraph in the Introduction section. *C#2 & Q#2: C) The boost is sometimes minor, ... Q) What could be the reason why RegBN negatively affects performance? ...* **Authors' Response**: We extend our gratitude for the detailed points. To provide detailed responses, we have categorized and re-ordered the comments into three parts and have addressed each segment as outlined below: Part i) AV-MNIST vs. Small AV-MNIST: For enhanced visual representation, we opted to employ small AV-MNIST, consisting of 1,500 raw audio recordings from three speakers and 1,500 MNIST images (manuscript: lines 195-199). Table I.3 presents the outcomes for AV-MNIST. We acknowledge that AV-MNIST is indeed more challenging (compared to small AV-MNIST) due to a preprocessing step that is elaborated in Supplementary-Section C.3. 
This involves a 75% reduction of visual modality energy via PCA, which results in a noticeable drop in classification performance on the solo visual modality from around 99% on MNIST to 64.3%. Part ii) Minor Boost and Fusion Strategy: We recognize the intrinsic challenge in defining and quantifying confounding factors within real-world data as well as in modality imbalance. Different data acquisition settings introduce varying ranges of confounders, which might intensify due to diverse conditions. RegBN's performance is dependent on data specifications. For instance, compression on AV-MNIST exerts a considerable impact on classification results, while a dataset without compression might yield more accurate outcomes (footnote 1). Moreover, the fusion strategy adopted holds a substantial influence on results. Part iii) Experiments on LLP: There must have been confusion about the results in Table 2, which are not from the training set but from the test set. We will clarify it in the manuscript. We reported validation results in Supplementary-Section E.2 to showcase RegBN's impact on training from an accuracy perspective. Footnote 1: In the case of the original AV-MNIST, we reached out to the authors for the dataset, but regrettably, only the compressed version was made available to us. *C#3 & Q#3: Comment: As RegBN requires a separate optimization, ... whether it is useless to have multiple RegBN as in I.1c. Q: Is it enough to apply RegBN once ... or might be worth applying it to several layers?* **Authors' Response**: We are grateful for the insightful comment. From a technical standpoint, RegBN establishes mutual independence between the data of two input modalities. Hence, when applying the RegBN layer successively to the same pair of inputs, the resulting outputs are anticipated to remain independent. Therefore, it is advisable not to utilize RegBN repeatedly over a particular pair of modalities unless there is a shift in the processing context. 
For instance, modalities X and Y can achieve mutual independence via RegBN, and subsequently, their output can be rendered mutually independent with other modalities as well. As a preliminary experiment, we incorporated RegBN into the MLP network outlined in Section D.6. This integration involved utilizing RegBN K times within the synthetic experiment (Section E.6). For Experiment I, the results for different values of K ($K \in \{1, 2, \ldots, 5\}$) fall within the range of [87.19, 87.43]. Given the inherent randomness of the synthetic dataset, the observed changes in the results (~0.25%) are relatively minor and opting for `K=1` would be the most suitable choice. In the context of layer fusion (Fig. I1.c), where RegBN is employed multiple times, the input feature maps at each instance differ from one another. RegBN can be employed as long as its inputs are not mutually independent. Part of this response is also covered in Q4 of the Suppl. Q&A section. *Minor Cs and Q #4: C) The writing is for sure enjoyable, however ... Q: Consider slightly modifying sec. 3 and ... Fix other typos/mistakes.* **Authors' Response**: We acknowledge and apologize for the errors highlighted, all of which will be duly rectified. In line with the reviewer's observations, the accurate value for video fusion in the 3rd column of Table I.5 is indeed 80.6. We have also planned to revise Section 3 regarding the foundation models and modality imbalance. We would like to emphasize our deep appreciation for the reviewer's vigilance in pointing out the inaccuracies and oversights. *Limitation: Societal impact is not discussed...* **Authors' Response**: We have planned to enrich the manuscript with a thorough exploration of the societal implications associated with the use of RegBN. Once again, we deeply appreciate all the comments and suggestions. We hope we have addressed every comment satisfactorily. 
We kindly invite the reviewer to raise any additional questions or points during the forthcoming reviewer-author discussion period. --- Rebuttal Comment 1.1: Comment: Thanks to the Authors for the additional clarifications. I confirm my score
null
null
null
null
null
null
Robust Matrix Sensing in the Semi-Random Model
Accept (poster)
Summary: This paper studies the problem of low-rank matrix recovery for semi-random measurement matrices. This model allows for a mix of RIP matrices and arbitrarily chosen matrices. Using an iterative re-weighted approach, the authors demonstrate provable recovery using concepts from sparse recovery. Strengths: The paper does a great job of connecting the sparse recovery framework to the setting studied. Although this indeed is not a new observation, it is discussed and explained well here. The paper is overall well written, and the results are solid and robust. The theoretical guarantees are clearly stated and give precise recovery guarantees in this framework. Weaknesses: The semi-random model is interesting mathematically but could be motivated better practically within the paper. The paper would be stronger if it included experimental results that supported the theory. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Do the authors believe their results can apply to combinations of sparse and low-rank matrices? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Future work is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and positive feedback. On “combinations of sparse and low-rank matrices”: If we understand correctly, the reviewer is referring to the problem of sensing a matrix M in the semi-random model, in which the underlying matrix M can be decomposed as M = A + B where A is low-rank (and incoherent) and B is sparse. One possible approach is to maintain low-rank matrices A_t and B_t in each iteration, run alternating weighted gradient descent using the weight oracles proposed in [KLL+23] and in our paper, and show that A_t and B_t converge to the ground truth matrices. While we have not explored this setting on a technical level, this is an exciting avenue for future work and we thank the reviewer for pointing us in this direction. [KLL+23] Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian. Semi-Random Sparse Recovery in Nearly-Linear Time. COLT 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your responses.
Summary: This paper studies the matrix sensing problem (or low-rank matrix recovery) in the so-called semi-random model. The semi-random model stems from the fact that an unknown set of sensing matrices satisfies the Restricted Isometry Property (RIP) while the remaining set is chosen adversarially. This model makes it possible to design methods that are robust to noise or adversarial examples. The authors design a descent-style algorithm that iteratively reweights the input to recover the ground truth matrix. They extend a previous work that tackles the sparse vector recovery problem in the semi-random model to the low-rank matrix estimation problem. Strengths: - The work tackles a very important theoretical problem in machine learning that has direct implications for designing practical algorithms for robust recovery of low-rank signals. - The paper generalizes existing work [KL22+] from a theoretical standpoint. Weaknesses: - Lack of empirical evaluation of the main results on at least synthetic data - Some of the theoretical claims have not been proved in the paper: 1- The authors claim that their work improves correctness and accuracy of existing non-convex relaxations [BNS16] 2- On the [BNS16] method, there are bad local minima when considering the semi-random model - The authors say that one of the drawbacks of convex solutions is their prohibitive time complexity. But their results show that they achieve comparable run-time to existing convex solutions Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1- Some of the theoretical claims have not been proved in the paper: 1- The authors claim that their work improves correctness and accuracy of existing non-convex relaxations [BNS16] Could the authors please provide a proof of the statement in the appendix? 2- On the [BNS16] method, there are bad local minima when considering the semi-random model Could the authors please provide a proof of the statement in the appendix? 
Q2- The authors say that one of the drawbacks of convex solutions is their prohibitive time complexity. But their results show that they achieve comparable run-time to existing convex solutions. Could the authors please elaborate more on this? Q3- Could the authors please provide a small section of empirical evidence of their results, at least on synthetic data for a low-rank matrix recovery problem under adversarial additions to their input, showing the gain of their method (time vs accuracy) over existing approaches? Q4- Combining the current analysis with the work [KL22+], to what extent is the analysis of estimating a SPARSE and LOW-RANK matrix in the semi-random model doable? Q5- Could you please add the Limitations of the paper, as required in the guidelines? I would raise my score if the questions Q1-Q3 above are properly addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have not provided the limitations of their work in the manuscript. Could they please add the limitations in the rebuttal? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and positive feedback. Regarding bad local optima in the semi-random model (Q1, Part 2): Section 5 of [BNS16] proved that bad local minima exist if the sensing matrices do not satisfy RIP. We mentioned this in Lines 74-77 of our paper. More precisely in our setting, a semi-random adversary can add an arbitrary number of sensing matrices (provided that the linear measurements are correct), so we can have 1% of the sensing matrices satisfying RIP and use the remaining 99% to implement the counter-examples in [BNS16]. We will discuss this in more detail, and we will include the counter-examples in the Appendix to be self-contained. On comparison with existing non-convex approaches such as [BNS16] (Q1, Part 1): We intended to say that existing non-convex approaches can get stuck in bad local optima in the semi-random model, while our algorithm is guaranteed to converge to a global optimum. We agree with the reviewer that the phrase “improves the correctness and accuracy” should be made more precise. We will clarify this. Regarding our runtime being comparable to convex approaches (Q2): We focused on nonconvex approaches because they have many advantages in practice. For example, gradient descent is conceptually much simpler than state-of-the-art semidefinite program (SDP) solvers, and it can converge in far fewer iterations than predicted by theory, while the runtimes of cutting-plane and interior-point methods are often close to their theoretical bounds. More importantly, despite having comparable asymptotic runtime to existing convex approaches, we believe our work will serve as a starting point for the discovery of more practical and provably robust algorithms for semi-random matrix sensing. Regarding experiments (Q3): We believe that our results are substantial even without experiments, and we hope that our theoretical results are judged based on their merits. 
Our main contribution is to pose and study the problem of semi-random matrix sensing, and to provide descent-style algorithms with good asymptotic runtimes. We did not include experiments because our focus is not to obtain the fastest possible algorithm for semi-random matrix sensing, and experiments sometimes divert readers' attention toward such metrics. That said, we acknowledge that experimental evaluation is important and a fruitful direction for future work. On recovering sparse and low-rank matrices (Q4): If the ground-truth matrix M is sparse, we can view M as a vector and use [KLL+23] to recover M in the semi-random model. Consequently, when M is sparse *and* low-rank, one can choose between our algorithm and [KLL+23]. The two algorithms have different sample complexities and running times, and the choice should be made depending on the rank and sparsity parameters. A potentially better approach is to integrate the two algorithms, e.g., by running weighted gradient descent and projecting onto the set of matrices with small nuclear norm (as in our paper) and small entrywise L1 norm (as in [KLL+23]), which is an intriguing direction for future work. On a separate Limitations section (Q5): We would like to bring to the reviewer’s attention that a separate "Limitations" section is not mandatory according to this year’s CFP and Paper Checklist. We stated all assumptions very precisely (e.g., Theorem 3.1) and our algorithm will succeed under these assumptions. [BNS16] Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro. Global Optimality of Local Search for Low Rank Matrix Recovery. NeurIPS 2016. [KLL+23] Jonathan A. Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian. Semi-Random Sparse Recovery in Nearly-Linear Time. COLT 2023. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for clarifications on Q1-Q2. As for the modifications promised by the authors, it would be better to put them in color so they are easier to follow up. 
I still believe that some experiments on synthetic data would be good for completeness of the paper, although the focus of the paper is theoretical. Or at least have a small section describing practical benefits of the approach.
Summary: This paper studies the matrix sensing problem in the semi-random model, where a subset of the matrices satisfies an RIP property and the rest are adversarially chosen so as to make the RIP not hold. The algorithm iteratively updates weights on the sensing matrices throughout the iterations to search for the set on which the RIP holds. Through this updating, the approximation error geometrically decreases. Strengths: - This work makes an important step towards provable methods when an RIP does not hold. It forms a natural and nontrivial extension of recent work on sparse recovery to the matrix sensing setting. - The work introduces novel weighted and relaxed RIP conditions, under which it is still possible to efficiently identify and recover low-rank matrices. Weaknesses: - Only the noiseless setting is studied. - The work is more expensive than nonconvex methods in the random model where the rank is known, which have complexity $O(ndr)$, whereas the proposed method has complexity $O(nd^{\omega}r)$. - There are no experiments on data. Furthermore, it is unclear in what real/practical settings wRIP/dRIP might hold. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - My main question is about how well the algorithm performs in practice. Can the authors give some experiments showing how well it actually runs? - What challenges arise in the analysis that did not happen in the case of sparse vectors? What I'm asking is how does the analysis differ from the past work [KLL+22] to make it sufficiently novel/non-incremental, and not just a translation to the matrix recovery setting? - The statement of Algorithm 1 is not useful - there is only one step within the loop that calls another function, and everything else is just parameter initialization. - The work might read better if vectors and matrices are distinguished from parameters somehow (for example, by using bold symbols). 
- Can the authors give more intuition/explanation for the dRIP condition and how it is used? They mention that it would be explained more in Section 4 but I do not see it, other than it makes the oracle work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: - The work contains no experiments. - The computational complexity of the method is not able to adapt to the rank of the underlying matrix $X^*$. - The novelty of the theoretical methodology may be limited in how different it is from [KLL+22], but I have not checked that reference. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and positive feedback. On the challenges in the matrix case: While the generalization from vectors to matrices (by considering the singular values as a “vector”) may appear intuitive, such generalization should not be taken for granted. To illustrate, consider a simple equation $e^{a+b} = e^a e^b$, which continues to hold in the vector case $\sum_i e^{a_i + b_i} = \sum_i e^{a_i} e^{b_i}$, but its generalization to matrices is the well-known Golden–Thompson inequality $tr(e^{A+B}) \le tr(e^A e^B)$ which requires a non-trivial proof. We refer the readers to Section 2 of [ALO16], which pointed out several wrong matrix inequalities that found their way into published papers when people tried to generalize proofs to the matrix case. Concretely in our paper, we ran into some difficulties in proving our Lemma 5.3 (in Appendix B), and its proof required techniques that are distinct from the vector case. On another note, the algorithm in [KLL+23] runs in nearly-linear time, but despite our best efforts, we were not able to obtain the same runtime due to several technical obstacles. Regarding experiments: We believe that our results are substantial even without experiments, and we hope that our theoretical results are judged based on their merits. Our main contribution is to pose and study the problem of semi-random matrix sensing, and to provide descent-style algorithms with good asymptotic runtimes. We did not include experiments because our focus is not to obtain the fastest possible algorithm for semi-random matrix sensing, and experiments sometimes divert readers' attention toward such metrics. That said, we acknowledge that experimental evaluation is important and a fruitful direction for future work. On the remaining weaknesses and questions: - We chose to study the noiseless case because it is the most basic setting and our focus is the semi-random model (vs. the random setting with RIP). 
While our algorithm is simple, our analysis is already fairly involved, so we decided to leave the noisy setting to future work. - While it is true that our algorithm runs slower than the fastest non-robust matrix sensing algorithm (i.e., with RIP condition), our setting is inherently more challenging because we allow the adversary to add any number of arbitrary sensing matrices. Our goal is to initiate a line of research toward designing robust matrix sensing algorithms that are as efficient as their non-robust counterparts (or show that it is impossible to do so). - We believe Algorithm 1 helps improve the clarity of our paper. While Algorithm 1 appears trivial, it allows Algorithm 2 to focus on obtaining a matrix $X$ that is closer to the ground truth $X^\star$ by a constant factor. The alternative would be to put all its technical details into Algorithm 2 (including the for loop, the number of iterations, the contracting radius, and the union bound in failure probability), which would introduce unnecessary complexity in Algorithm 2. - Throughout the paper, we tried to use lower-case letters for vectors and upper-case letters for matrices. We notice that there are exceptions to this rule (e.g., L being a scalar) and we will try our best to address this. - Regarding more explanation on the dRIP condition and the weight oracle: In the proof of Lemma 4.2, we provided more intuition for the progress and decomposition conditions and explained how these conditions are used to prove the correctness of Algorithm 2. We apologize for any confusion caused by directing readers to Section 4, as we moved the proof of Lemma 4.2 to Appendix A in this submission. We will fix this. More details about the dRIP condition can be found in Appendix B. [ALO16] Zeyuan Allen-Zhu, Yin Tat Lee, Lorenzo Orecchia. Using Optimization to Obtain a Width-Independent, Parallel, Simpler, and Faster Positive SDP Solver. SODA 2016. [KLL+23] Jonathan A. 
Kelner, Jerry Li, Allen Liu, Aaron Sidford, Kevin Tian. Semi-Random Sparse Recovery in Nearly-Linear Time. COLT 2023.
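The Golden–Thompson inequality mentioned in the rebuttal above illustrates how scalar identities fail to lift to the matrix case. A quick numerical sketch of this (standard NumPy/SciPy, an illustration rather than anything from the submission):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
# Two random symmetric (Hermitian) matrices; generically they do not commute.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

lhs = np.trace(expm(A + B))        # tr(e^{A+B})
rhs = np.trace(expm(A) @ expm(B))  # tr(e^A e^B)

# Golden–Thompson: tr(e^{A+B}) <= tr(e^A e^B) for Hermitian A, B,
# even though the scalar identity e^{a+b} = e^a e^b does NOT lift to
# expm(A + B) == expm(A) @ expm(B) for non-commuting matrices.
print(lhs <= rhs)  # True
```

The same script shows that `expm(A + B)` and `expm(A) @ expm(B)` differ as matrices, which is the kind of pitfall the rebuttal warns about when generalizing vector proofs.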
Summary: The paper introduces a new descent algorithm for low rank matrix recovery in the setting of the semi-random model (where a subset of the measurements satisfies the RIP). The algorithm is based on a search for the best reweighting of the measurements (in order to navigate the non-convex landscape) and a minimization of the data fidelity term. Strengths: There is clearly some novelty in the paper. Although as indicated it applies the ideas from Kelner et al 2022. It is sound and well written. Weaknesses: General comments It is a good extension of the idea in Kelner et al 2022. The introduction of the algorithm could be improved though. The algorithm is clearly and neatly stated but a clear intuition on the main steps would be a clear addition to the paper. E.g. the relation between the weight oracle and lemma 4.2 is not completely clear. The weight oracle is key and some additional intuition on why it enables a contraction of the distance to the ground truth would be helpful (see detailed comments below). I.e. as you indicate, it is hard to reverse the action of the adversary, so what makes the algorithm efficient? I.e. it seems to me you are leveraging the diversity of the measurements to escape local minima. Why does the weighted objective enable you to escape the local minima introduced by the non-RIP part of the matrix? Your gradient is a simple LS gradient so the weights are really key. This is my main criticism of the paper. See my comments below on the progress and decomposition guarantees. If I understand correctly, the weight oracle gives you a sufficiently large “escape” direction. Yet this alone does not imply convergence. Convergence comes from key lemma 4.2, which if I understand correctly implies that the existence of such a sufficiently large escape direction will lead to a gradient step that will decrease the distance to the global solution. Unfortunately there is almost no intuition provided for this lemma. 
Finally, I think you could better summarize your contribution with one sentence or two of the form “The algorithm relies on a combined optimisation of the weights (in order to escape non-convexity) and minimization of the cost function (in order to get closer to the true minimizer)”. You don’t have to explicitly use the above but some clear intuition on why the algorithm is working is definitely missing. In particular, you need to provide (some more) intuition on lemma 4.2. Detailed comments page 2 - lines 33-38, you should directly detail how you generate the non-RIP matrices. Also, if your measurements are generated as the union of the RIP and non-RIP matrices, then I don’t see why having additional non-RIP measurements will be a problem. You just add redundant constraints. If you use a subset of the union then you should say it clearly - Is finding a submatrix that satisfies RIP hard? If so, you should say it as well because otherwise, you could just retrieve the RIP part of the matrix and solve the problem from there. page 3 - You should improve the statement of Theorem 1.1. Recall the meaning of d, n and omega. Do you have convergence regardless of the number of adversarial measurements? This should also be clarified in the statement - Ok so as I indicated above, the first paragraph in section 1.2 (where you explain that it is NP hard to extract a RIP matrix) should appear earlier (or at least you should mention the hardness of the RIP submatrix detection problem in one sentence in the abstract or/and introduction) - lines 112-122, you mention the “progress” and “decomposition” guarantees. It would help to have a decomposition of the paragraph with two clear items providing a short explanation on each guarantee page 5 - When you introduce the distinction between the RIP and dRIP conditions, it is not clear why reweighting the matrices will be useful. Perhaps you could add a sentence explaining why the reweighting is important/how it will affect recovery. 
- Generally speaking, it is good to indicate that the isometry condition in the wRIP is a relaxation of the isometry condition in the RIP; it could be good to indicate that the wRIP can be understood as requiring that only a subset of the measurements satisfy the RIP page 6 - The halveError subroutine in Algorithm 1 should be introduced earlier. In fact I’m not sure Algorithm 1 is essential here - I know you mention it on line 184 but the sample complexity is important for the matrix sensing and completion problems and it should appear in the statement of Theorem 3.1. - It is a detail but the oracle which is named Oracle in Algorithm 3 is named O in Algorithm 2 - When you introduce the Progress and decomposition guarantees which are key to the algorithm, you indicate that “the purpose of those” will be further discussed in section 4. Yet section 4 barely adds to the definition of the properties, merely indicating that success of the oracle corresponds to satisfaction of the progress guarantee which in turn leads to a reduction of the distance to the ground truth. We would like more intuition on why the weights make such a difference in the recovery. - Generally speaking, when you introduce the “progress” and “decomposition” guarantees, those are equivalent to finding a weight so that there is a large gradient. page 8 - lines 303-306 “which means with high probability the weight oracle produces output satisfying the progress and decomposition guarantees then each iteration decreases the distance” —> Why is that clear? 
typos: - page 3: lines 108-109, “Ideally, the property should” —> “Ideally the properties should be (1) … (2)…” - page 3 line 109 “ensure the weighted gradient step makes ” —> “ensure that the gradient step …” would be more clear - same page, line 109 again: “secondly, can …” —> “secondly ensure that those steps can …” - page 8, line 304 “the weight oracle produces output ..” —> “the weight oracle produces an output ..” Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and positive feedback. Regarding how the non-RIP matrices are generated and why they may cause issues: The non-RIP matrices can be arbitrary. In fact, a computationally-unbounded adversary can generate (any number of) additional sensing matrices $A_i$ after examining the ground-truth matrix, the good (RIP) sensing matrices, and our algorithm. The only constraint is that the corresponding linear measurements $b_i = \langle A_i, X^\star \rangle$ must be accurate, where $X^\star$ is the ground-truth matrix to be recovered. These additional sensing matrices indeed only add redundant constraints for convex approaches (e.g., nuclear norm minimization), but they pose a serious risk for non-convex approaches, because they can destroy the *landscape* of many non-convex objectives. More specifically, all local optima are globally optimal with RIP, but now these extra sensing matrices may introduce bad local optima. Regarding the purpose of the progress and decomposition guarantees (the reviewer’s 4th comment for Page 6): In the proof of Lemma 4.2, we provided more intuition for the progress and decomposition conditions and their role in proving the correctness of Algorithm 2. We apologize for any confusion caused by directing readers to Section 4, as we moved the proof of Lemma 4.2 to Appendix A in this submission. We will fix this. On the correctness of Lemma 4.2 (the reviewer’s comments for Page 8): This follows from Lemma 5.3. We will make this clear. We appreciate the reviewer providing many detailed suggestions on how we could improve the presentation of our paper: - Define $d$, $n$, and $\omega$ in Theorem 1.1, and highlight that the adversary can add any number of arbitrary sensing matrices. - Add an earlier statement that checking the RIP condition is NP-hard. - Provide more intuition on the progress and decomposition guarantees and how they affect the recovery of the ground-truth matrix. 
- Emphasize that the wRIP condition is equivalent to requiring that only a subset of the sensing matrices satisfy RIP. - Discuss the sample complexity (i.e., the number of random matrices needed to satisfy RIP with high probability) in or around Theorem 3.1. - Name the weight oracle consistently in Algorithms 2 and 3. We will address all of them. We thank the reviewer for pointing out the typos in our paper. We will fix them.
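To make the semi-random measurement model discussed in this thread concrete, here is a small NumPy sketch (toy dimensions and the particular "adversarial" matrices are illustrative choices, not the paper's construction): the adversary may append arbitrary sensing matrices, but every measurement $b_i = \langle A_i, X^\star \rangle$ stays exact, so the extra constraints are redundant for convex feasibility while they may distort non-convex landscapes.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 8, 2

# Rank-r ground truth X* (symmetric PSD for simplicity).
U = rng.standard_normal((d, r))
X_star = U @ U.T

# "Good" i.i.d. Gaussian sensing matrices (the kind that satisfy RIP
# with high probability once there are enough of them).
good = [rng.standard_normal((d, d)) for _ in range(50)]

# Adversarial matrices: arbitrary (here, wildly scaled rank-1 spikes);
# the adversary chooses them freely, only the measurements must stay consistent.
bad = []
for _ in range(200):
    v = rng.standard_normal(d)
    bad.append(100.0 * np.outer(v, v))

A_all = good + bad
b = np.array([np.vdot(A, X_star) for A in A_all])  # b_i = <A_i, X*>, exact

# Every constraint is satisfied by X*: the adversary adds only redundant
# (but potentially landscape-distorting) equations.
residuals = np.array([np.vdot(A, X_star) for A in A_all]) - b
print(np.max(np.abs(residuals)))  # 0.0
```

Since X* satisfies all constraints exactly, any convex formulation remains feasible at the ground truth, which matches the rebuttal's point that only non-convex objectives are endangered by the extra matrices.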
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Shape Non-rigid Kinematics (SNK): A Zero-Shot Method for Non-Rigid Shape Matching via Unsupervised Functional Map Regularized Reconstruction
Accept (poster)
Summary: The paper proposes a zero-shot method for non-rigid shape matching based on unsupervised functional map regularisation and non-rigid shape deformation regularisation. The proposed method combines both intrinsic information from the functional map regularisation and the extrinsic information from shape deformation regularisation (i.e. PriMo energy) leading to a better matching performance compared to methods solely based on either intrinsic or extrinsic information from the 3D shapes. In the standard shape matching benchmarks, the proposed method demonstrates better matching performance compared to axiomatic approaches and competitive results in comparison to both supervised and unsupervised learning-based methods. Strengths: 1. The paper is well-written and easy to follow. Both the motivation and the proposed method are well explained in the main paper. The technical details are also well illustrated in the supplementary material. 2. Unlike most existing shape matching methods solely based on functional map regularisation, the proposed method also considers spatial alignment and adds the shape deformation regularisation to incorporate also the extrinsic/spatial information leading to a better matching performance. 3. Instead of using common deformation regularisation such as ARAP, Laplacian, the paper utilises the PriMo energy leading to a more smooth and natural shape deformation, as demonstrated in the supplementary material. Weaknesses: 1. The paper claims that the method is a zero-shot method for non-rigid shape matching. But the main difference between the proposed method and the previous unsupervised methods is not well discussed. In theory, all previous unsupervised methods can also be used as a zero-shot method to optimise only for a single shape pair. More explanation and experiments should be provided. 2. 
In the supplementary material, it is shown that the average runtime on the FAUST dataset is about 70s, which is much slower compared to existing learning-based methods. Prior unsupervised works (e.g. Cao et al. Unsupervised Deep Multi-Shape Matching, ECCV 2022) typically train on the training data and utilise test-time adaptation for a few iterations to reduce runtime. More discussion and experiments about the runtime are expected to be provided. 3. Unlike purely intrinsic/spectral methods, the proposed method also aims to spatially align two shapes. Therefore, more insight into the rotation robustness of the proposed method should be given. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Even though the limitations section states that the proposed method is specifically tailored for complete shape matching, it is also recommended to include some (potential failure) matching results of partial shapes in the supplementary material. This will benefit follow-up works addressing partial matching. 2. What is the number of LBO eigenfunctions used for functional map computation? Prior methods like Smooth Shells/Deep Shells optimise the matching with a gradually increasing number of LBO eigenfunctions - is the proposed method also robust to the choice of the number of LBO eigenfunctions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: 1. The proposed method is specifically tailored for complete shape matching, so more adaptations should be taken for partial shape matching. 2. The runtime of the proposed method takes 70s for shapes with ~5000 vertices, which is much slower compared to existing learning-based methods. 3. 
The proposed method is tailored to shapes represented as triangle meshes, so the extension to other data representations like point clouds could be future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful comments and suggestions. We invite the reviewer to consider the general feedback that we provided to all reviewers in the "Author Rebuttal" answer. Below are our answers to the reviewers’ questions. **Q1: Difference between the proposed method and other unsupervised methods** Thank you for pointing this out. In the introduction (L29-33), we highlighted that while unsupervised techniques can work without labeled data, their best performance often requires training on numerous shape examples for extended periods [35, 36]. Such methods, once trained, can exhibit biases towards their training datasets, as seen with DeepShells' strong results on Faust and Scape but poorer performance on Shrec. Unlike them, our zero-shot approach requires no prior extensive training, allowing immediate use on new shape pairs and avoiding dataset-specific biases. On using unsupervised methods as zero-shot techniques, we agree. Our ablation study in the supplementary material's Table 4 (specifically row 5) shows that while possible (training an unsupervised approach individually on each shape pair), these techniques don't rival the efficacy of our method, underlining our approach's uniqueness and benefits. **Q2: Runtime Comparison Between the Proposed Method and Some Unsupervised Methods** We appreciate your observation. While our method may seem slower at test time compared to some unsupervised techniques, a direct comparison might be skewed. Our method is zero-shot, meaning it handles new shape pairs without prior training. A more apt comparison might be with axiomatic methods, which also work without training. This is the comparison we have shown in Table 3 of the supplementary material. On the other hand, unsupervised methods require significant datasets and long training times for optimal performance. 
Comparing them to our approach without accounting for their training context might not give a full perspective. **Q3: Rotation robustness of the proposed method** Thank you for highlighting this aspect. As referenced in L262, both our approach and unsupervised methods utilize datasets aligned in accordance with previous studies [29,35,62,36,53]. To delve deeper into rotation robustness, we conducted an experiment. We randomly selected 10 shape pairs from the Faust dataset and computed the point-to-point maps with our method. Concurrently, we progressively rotated one shape around the up-axis from 0 to 45°. The outcomes, depicted in Figure 9 of the "Author Rebuttal" answer, indicate a slight performance drop as rotation increases. It's pivotal to note that combining intrinsic and extrinsic properties is what empowers our approach to deliver robust results. We'll clarify this further in our revised manuscript. **Q4: Matching results on partial shapes** Thank you for your input. In Figure 10 of the "Author Rebuttal" answer, we showcase our method's performance on three partial pairs from the Shrec 16' Cuts and Holes subsets. Our method performs satisfactorily when dealing with moderate partiality. **Q5: Robustness to the number of eigenfunctions** Thanks for highlighting this. As noted in the supplementary material (L12), we use 30 eigenfunctions, aligning with many recent studies using deep functional maps, such as [25,27,26,43,29,62,75]. We also tested with 50 eigenfunctions, yielding comparable results (1.9 geodesic error on the Faust dataset). This will be clarified in the revised version. **Q6: The proposed method is tailored towards complete shapes** We acknowledge your point. While our method is primarily built for complete shapes, adapting it for partial shape-matching is a viable next step. 
We would consider employing strategies like those in DPFM [27], refining functional map estimation for partial contexts and predicting a pointwise mask to highlight aligning regions. This mask could then refine the MSE loss. However, such adaptations are non-trivial and present an exciting direction for future work. **Q7: The proposed method is tailored towards triangular meshes** Thank you for noting this. Our method is primarily designed for triangular meshes. Adapting it to point clouds would entail using feature extractors like DGCNN [85] or PointMLP [86] tailored for such data. We see this as a promising direction for future exploration and plan to delve into it in subsequent research. --- Rebuttal Comment 1.1: Comment: Thanks for the clear explanation of my questions/concerns. I acknowledge that the proposed method achieves better zero-shot matching performance by combining both spectral and spatial alignment. However, I would still like to know how well the proposed method would perform if it were trained in a standard unsupervised manner instead of zero-shot, especially on the test data. --- Reply to Comment 1.1.1: Title: Clarifications on Zero-Shot Versus Standard Unsupervised Training Performance Comment: Thank you for the insightful question. To address your query regarding our method's performance under standard unsupervised training, we offer the following clarifications: Firstly, it's important to emphasize that our primary focus in this paper has been the zero-shot setting. Our approach is particularly geared towards scenarios in which there are no extensive datasets or computational resources available for training. Moreover, as illustrated in our paper, methods that rely on training can sometimes become biased toward the training data. This can lead to potential failures when encountering new data, as exemplified by the performance of models like DeepShells. 
In contrast, our SNK method evaluates each shape pair individually, making it unaffected by prior data or remeshing. Remarkably, even under these constraints, SNK demonstrates competitive results, matching or even surpassing methods that have undergone extensive training, achieving state-of-the-art results on the SHREC'19 dataset (please see Table 1 in the main paper.). Nevertheless, following your suggestion, we did run an experiment on the FAUST dataset for additional comparisons. In this test, we trained our method using the training subset of FAUST using our unsupervised losses and then tested on its corresponding test set, as done in previous works in this domain. Preliminary results showed a score of 1.6 for the trained method, while our zero-shot approach recorded a score of 1.9. We'd like to point out that these experiments were conducted under limited time constraints, so we did not embark on any extensive optimization or hyperparameter tuning. Therefore, there's potential for further enhancement of these results. In conclusion, while these additional results provide interesting insights, they do not shift the primary contribution of our work, which remains the zero-shot approach. We trust that this response addresses your question adequately, and we remain available for any further discussions.
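For context on the scores quoted in this thread (e.g., 1.9 vs. 1.6 on FAUST): the geodesic-error metric standard in this literature is the mean geodesic distance between predicted and ground-truth matches, normalized by the square root of the target's surface area and typically reported ×100 (the Princeton protocol). A minimal sketch under that convention, assuming a precomputed geodesic distance matrix (all names illustrative, not the paper's code):

```python
import numpy as np

def mean_geodesic_error(D, pred, gt, area):
    """Mean geodesic error of a point-to-point map (Princeton protocol).

    D: n x n geodesic distance matrix on the target shape;
    pred, gt: predicted and ground-truth target vertex indices, one per
    source vertex; area: total surface area of the target, used for
    normalization. Returned value is scaled by 100, as commonly reported.
    """
    return 100.0 * D[pred, gt].mean() / np.sqrt(area)
```

The exact normalization (square-root area, ×100 scaling) varies slightly between papers; the sketch uses the most common convention.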
Summary: A zero shot method for computing correspondences between meshes is proposed. The core idea is to use DiffusionNet to produce features which are then used to produce a functional map from which a p2p mapping is produced. Using it, a deformation module using Primo is employed to deform the meshes to one another. This deformed shape is then used to reevaluate the functional maps and the process continues, training an autoencoder to predict the features that drive the correspondence and deformation. Strengths: Zero-shot correspondence computation is an important problem to solve in cases where no object categories are known. The combination of an extrinsic deformation module with an intrinsic functional maps approach is appealing and novel as far as I can tell. The paper is written in a clear manner for the most part (see below). Weaknesses: I have several concerns: * In this context, "zero shot" essentially means "no use of semantic information" (as opposed to, e.g., zero shot stemming from using some existing module that has semantic knowledge and applying it to the correspondence problem without further training). Thus, the term used in the paper for "axiomatic" methods simply means ones using geometric priors rather than semantic ones. As such, I am uncertain why the word "zero shot" is used at all -- this is simply an optimization algorithm for shape correspondence. As such, it lies within a different research area than learning, one which encompasses many other works. Two recent ones are ENIGMA and Neural Surface Maps. The advantage of this approach over theirs is not immediately apparent and they should be discussed at least, if not compared to. * It is unclear to me what is the exact prior being enforced here. At first I thought it's an isometric-distortion minimization prior, but the authors show results on non-isometric datasets, so it is unclear to me what is the prior here and what is the justification for it. 
Without a justified prior or semantics, it is concerning that the method relies solely on empirical success. It is further unclear what specific class of deformations PriMo is regularizing for. It does seem to be for reducing isometric distortion here, thus I do not fully understand the claim of working on non-isometric cases. I suspect that the method is still working on rather-isometric cases - SMAL's 4-legged animals are still rather isometric. Again in this context, ENIGMA shows success and seems like a good comparison. * Additionally, the method is currently limited to matching meshes that are homeomorphic and are made up of one connected component - this is not mentioned in the paper and is somewhat limiting. * In terms of novelty, the method is simply a mix of several techniques - DiffusionNet, Functional Maps, Primo - which indeed work well together, but there does not seem to be a great novel insight beyond combining them to have a method to predict correspondences and then update a deformation prediction, which is a very classic idea (in essence, a non-rigid ICP approach where the closest point is replaced with a functional map prediction). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How is the use of the low eigenspectrum of the Laplacian justified without assuming near-isometry? Why is the autoencoder needed? Why couldn't the functional maps and deformation module be optimized directly in a block-descent manner? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
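For context, the functional-map-to-point-to-point conversion described in the review's summary is a standard step in this literature: a nearest-neighbor search between the two shapes' spectral embeddings, aligned by the functional map. A minimal sketch (variable names illustrative, not the paper's code), assuming truncated Laplace-Beltrami eigenbases `Phi1`, `Phi2`:

```python
import numpy as np
from scipy.spatial import cKDTree

def fmap_to_p2p(C, Phi1, Phi2):
    """Convert a k x k functional map C (shape 1 -> shape 2) into a
    point-to-point map, following the standard recipe of Ovsjanikov et al.

    Phi1: n1 x k eigenbasis of shape 1; Phi2: n2 x k eigenbasis of shape 2.
    For each vertex of shape 2, find the nearest aligned spectral
    embedding among shape 1's vertices.
    """
    tree = cKDTree(Phi1 @ C.T)  # rows live in the same k-dim space as Phi2
    _, p2p = tree.query(Phi2)   # one query per target vertex
    return p2p                  # p2p[j] = matched vertex index on shape 1
```

Methods such as SNK iterate this conversion as features and maps are refined; the sketch shows only a single conversion.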
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful comments and suggestions. We invite the reviewer to consider the general feedback that we provided to all reviewers in the "Author Rebuttal" answer. Below are our answers to the reviewer's questions. **Q1: The Use of "Zero-Shot"** Thank you for your remark on our terminology. We use "Zero-Shot" for two main reasons: (1) Instead of relying on fixed input features like HKS or SHOT, we use a deep neural network for feature computation, and (2) Our use of a neural network provides an important prior for our method, compared to purely geometric `axiomatic methods` (please see also our response to Q3 below). Nevertheless, we recognize the potential ambiguity and will articulate this more clearly in our revised manuscript. **Q2: ENIGMA & NSM** Thank you for referencing ENIGMA and Neural Surface Maps (NSM). In our comparison, we focused on the most recent SOTA techniques that evaluate their results on standard dense non-rigid benchmarks, such as FAUST, SCAPE, SHREC'19, etc. Nevertheless, we agree that these are relevant prior methods and will include a discussion in the final version. Briefly, 1. ENIGMA: While ENIGMA is indeed a relevant baseline, its code isn't publicly accessible, making direct comparison non-trivial. We have requested the implementation from the authors and will update accordingly. 2. NSM: NSM's main objective is 3D mesh representation rather than shape matching. When applied to shape-matching, NSM necessitates input keypoints, making it semi-supervised. Furthermore, NSM's reported correspondence computation time for a shape pair ranges from 6-10 hours. **Q3: Underlying Priors** We're grateful for the insightful queries. Here's a concise overview of our method's key priors: 1. Elastic Energy as a Prior: Being an elastic energy, PriMo penalizes non-linear stretching and bending. This directs our optimization towards smooth and realistic deformations. 2. 
Cyclic Bijectivity via the cycle consistency loss. 3. Near-isometry: Our method *weakly* enforces near-isometry in fmap predictions through Laplacian commutativity in a *reduced* (k=30) functional basis. It should be noted that using elastic energy and weakly promoting isometries with low-frequency functional maps for non-isometric matching has precedents in works like [81,82,53] and ENIGMA. 4. Neural Networks: Finally, a subtle yet *essential* prior in our method is our use of a neural network for feature extraction. Our approach follows studies like Deep Image Prior [83] and Neural Correspondence Prior [84], which show that neural networks can act as strong regularizers, generating features that enable plausible mappings even without pre-training. To demonstrate this empirically, we conducted two additional experiments: a. HKS-based Features: We replaced our feature extractor with the more traditional HKS features. b. Free Variables: We treated F1 and F2 as free variables, optimized via gradient descent *without using a neural network*. Our findings (see Table 5 in the "Author Rebuttal") show that without the neural network regularization, the free variable model fails to converge. The HKS-based approach works, but performs significantly worse than ours. This suggests neural networks offer extra regularization, surpassing traditional handcrafted features. Overall, our method's priors are synergistic and versatile, making our approach applicable to a broad range of shapes. We'll highlight this analysis in our revised manuscript. **Q4: Novelty Concerns** Thank you for your feedback on our method. While building on prior work, our contributions are distinct: 1. Primo Energy Adaptation: We've reimagined Primo Energy's usage. Contrary to the original paper's user-constrained approach, ours is fully automatic, in a *deep learning setting*, marking a departure from its initial intent. 
Moreover, our deformation model uses a face-based representation, with strong regularization, which distinguishes our method from most previous works in this domain that use direct vertex-based deformations. 2. Novel approach and loss integration: (a) Our use of a neural network for feature extraction in the Zero Shot setting, as well as (b) our combination of extrinsic and intrinsic losses in a deep unsupervised setting, are both novel. Furthermore, we believe that demonstrating that the resulting approach can perform on par or even better than data-driven methods, but in a zero-shot setting, constitutes an important contribution to the domain. In essence, our novelty stems from the innovative integration and adaptation of existing components, bringing significant progress to shape matching. **Q5: Isometry, Topology, & Multiple Components** * Isometry: The SMAL dataset is diverse, including animals like lions, cats, and deer in different poses. Our method's successful matches here demonstrate its robustness to non-isometric challenges. * Homeomorphism: Our approach is not restricted to homeomorphic shapes. The SHREC dataset, which tests, among other things, *topological* changes, demonstrates our method's adaptability. Please see Figure 8 for visual examples of our method from SMAL and SHREC. * Multiple Components: We recognize DiffusionNet's limitation here. Considering alternative feature extractors or viewing shapes as point clouds might help. However, this challenge isn't unique to us; other recent SOTA methods using DiffusionNet [27,83,25,62,26] share it. **Q6: Sequential Optimization** The deformation module relies on the fmap module's outputs, so training it first isn't viable. Our ablation study (Table 4) shows an 8.7 error rate on Faust when only training the fmap, versus 1.9 with our full method. This quality wouldn't effectively train the deformation module, as evidenced by a 9.2 error when we tried. 
The synergy between intrinsic and extrinsic modules is vital for optimal results. **Q7: Autoencoder Necessity** Due to length limits, refer to our response to Q2 from reviewer **osbV**.
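For context on Q3's "Laplacian commutativity in a *reduced* (k=30) functional basis": in the standard formulation used by many deep functional map works, this regularizer decouples over the rows of the functional map and admits a closed-form least-squares solution. A minimal sketch under that standard formulation (illustrative, not the paper's exact solver):

```python
import numpy as np

def regularized_fmap(A, B, evals1, evals2, mu=1e-3):
    """Solve min_C ||C A - B||_F^2 + mu * sum_ij C_ij^2 (l1_j - l2_i)^2.

    A, B: k x d spectral coefficients of corresponding descriptors on
    shapes 1 and 2; evals1, evals2: length-k Laplacian eigenvalues.
    The commutativity penalty decouples row-wise, so each row of C is
    found by solving a k x k linear system.
    """
    k = A.shape[0]
    AAt = A @ A.T
    C = np.zeros((k, k))
    for i in range(k):
        # Row-specific diagonal penalty from the eigenvalue mismatch.
        D = np.diag((evals1 - evals2[i]) ** 2)
        C[i] = np.linalg.solve(AAt + mu * D, A @ B[i])
    return C
```

With `mu = 0` this reduces to a plain least-squares fit of the functional map to the descriptor constraints; the commutativity term is what "weakly" biases the map towards near-isometries, as the rebuttal describes.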
Summary: The paper proposes a novel zero-shot method for non-rigid shape matching that eliminates the need for extensive training or ground truth data. The proposed method, Shape Non-rigid Kinematics (SNK), operates on a single pair of shapes and employs a reconstruction-based strategy using an encoder-decoder architecture, which deforms the source shape to match the target shape closely. SNK demonstrates competitive results on traditional benchmarks, simplifying the shape-matching process without compromising accuracy. Strengths: The paper introduces a novel method for non-rigid shape matching that does not require extensive training data, making it more practical for many applications. The proposed method also combines the benefits of axiomatic and learning-based approaches and addresses their limitations. Moreover, the paper introduces a new decoder architecture that facilitates the generation of smooth and natural-looking deformations. The proposed method is thoroughly evaluated and achieves state-of-the-art results on the SHREC dataset. Weaknesses: While the paper highlights the strengths of the proposed method, it lacks a thorough comparison with closely related work. For instance, although several learning-based methods are presented, there is no detailed comparison, such as a visual comparison or an evaluation on other datasets (I recommend moving some results to the main paper). Also, the limitations of the proposed method are mentioned but not adequately discussed. In terms of performance, the proposed method cannot achieve the best average, especially compared with some unsupervised methods. Compared with the SOTA method Deep Shells, why does the proposed method have similar performance on the Faust and Scape datasets but not on Shrec? Could some analysis and discussion of the results be provided? The description of the Prism decoder is not clear, and I cannot find the corresponding components in Figure 3. 
I suggest the authors re-illustrate the figure and add more details for easier understanding. Some mathematical notations are used inconsistently: > F_i and F_j in Eq. 3, F_1 and F_2 at line 1130 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. For the shape reconstruction module, why not directly use the vertex features to predict the deformation on the vertices rather than using the features on the faces to predict the rotation of the faces? BTW, I think the optimization of Eq. 5 is not trivial and needs more time to reach the optimum. 2. The proposed method is currently tailored towards complete shape correspondence. Have you thought of adapting it for partial shape-matching scenarios? 3. In the limitations section, you mentioned exploring ways to further expedite the training process. Could you provide more details on what you mean? 4. In the loss function, how do you determine the weight for each term and evaluate the importance of each term for the target? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper adequately addresses the limitations of the proposed method but could benefit from a detailed discussion of the potential negative societal impact. Besides, I think the running time should be evaluated. I think the optimization of Eq. 5 is too slow and requires extra steps. The paper also mentions it in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful comments and suggestions. We invite the reviewer to consider the general feedback that we provided to all reviewers in the "Author Rebuttal" answer. Below are our answers to the reviewer's questions. **Q1: Enhancing Comparisons with Related Work** We value your feedback. A visual comparison with three leading methods is in Figure 4 of the supplementary material, and quantitative comparisons across several well-known datasets [53, 25, 62, 35, 29, 77, 46] are provided in Tables 1 and 2 of our paper. We will be happy to relocate the visual comparison to the main content and add results from any other dataset you suggest, either during discussions or in the finalized manuscript. **Q2: Performance Comparison with Unsupervised Methods** We appreciate your observation. While our method may not top the average scores for Faust and Scape against certain unsupervised methods, these comparisons might not be apples-to-apples. The unsupervised methods train on extensive datasets for extended periods, sometimes for days [35, 36]. Our approach, in contrast, processes each shape pair without prior training, highlighting its zero-shot nature. Nevertheless, it still remains competitive with both supervised and unsupervised techniques and even achieves SOTA performance on the Shrec dataset. **Q3: Variability in DeepShells Performance Across Datasets** Thanks for noting this. DeepShells requires extensive training to exhibit proficient performance, as seen in its results on Faust and Scape datasets. However, Shrec is smaller, with 44 shapes just for evaluation. Usually, the best model from Faust or Scape is used for Shrec evaluation, as mentioned in several studies [26,70,50] and addressed in L263 of our paper. It's also crucial to recognize that DeepShells uses SHOT descriptors, which might be affected by remeshing changes [25, 70], a feature of the Shrec dataset. 
In contrast, our method evaluates each shape pair independently and is unaffected by prior data or remeshing. **Q4: Clarifying the Prism Decoder Illustration and Notation** Thanks for highlighting the ambiguities around the Prism decoder and repetitions in notation. In our revision, we will improve the Prism decoder illustration, capturing all details from Section 4.2, and include visuals for the latent code and the MLP's output. Additionally, we will address all repetitive notations for consistency across the paper. **Q5: Vertex vs. Face Features for Deformation Prediction** We appreciate your question about deformation prediction. We did a test using only DiffusionNet's output for direct vertex deformations, as detailed in the Supplementary Material's Table 4 (see the `DiffusionNet` row). Our results show that using the Prism decoder, which predicts each face's rotation and translation, offers about seven times better performance. **Q6: Addressing Concerns on the Optimization of Equation 5** We appreciate your feedback on Equation 5. To clarify, this equation is solved in closed form, as mentioned in L27 of the Supplementary Material. The matrix R is obtained from the SVD decomposition of a 3x3 matrix, which is very efficient. **Q7: Extending the Method to Handle Partial Shapes** We value your insight on partial shape correspondence. As mentioned, our method is geared towards complete shape correspondence. We believe that adapting to partial shape-matching is possible in principle, and for this, we would consider a strategy similar to DPFM [27]. This would require using the partial functional map estimation approach and potentially predicting a pointwise mask, indicating source shape regions aligning with the target, to adjust the MSE loss. This adaptation isn't straightforward, but a promising direction for future work. **Q8: Strategies to Accelerate the Optimization Process** We appreciate your query on optimization speed. 
As outlined in our limitations, we're actively seeking ways to expedite this: - **Convergence Speed**: Adapting our learning rate or using a more advanced gradient descent method could hasten convergence. We used the Adam optimizer, but newer techniques like Adan optimizer [79] offer quicker convergence. - **Network Implementation**: With our networks (DiffusionNet and Prism-decoder's MLP) being MLP-based, a "fully fused" implementation [80] could significantly boost speed. We're optimistic that these changes can improve our optimization speed and plan to explore them in future work. **Q9: Determining and Evaluating the Weights of Loss Terms** Thanks for your question on loss term weights. As mentioned in L34 of the supplementary materials, we've uniformly weighted all loss components across datasets. Optimizing these weights could indeed enhance results. **Q10: Run Time Evaluation Concerns** We appreciate your input on run time evaluation. Please refer to Table 3 in the supplementary materials, where we've provided a detailed run-time assessment for our method. **Q11: Societal impact** Shape matching and analysis are crucial in fields like medical imaging, archaeology, and computational biology. The proposed methodology in this paper is highly applicable across these sectors, especially where data annotation is challenging or infeasible, and training datasets are scarce. Our approach reduces the need for extensive data labeling, aiding smaller research groups and leading to cost-efficient research. Moreover, it offers increased accuracy vital in areas like biology, where intricate shape analyses inform health conditions. While our method addresses a core issue in computer graphics and vision, and we anticipate mostly positive outcomes, it's essential to note the potential misuse in areas like surveillance. We strongly discourage such applications. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the authors' detailed responses and clarifications. 
After reading the rebuttal and other reviews, most of the concerns have been fully addressed. I suggest all of the revisions should be presented in the revised paper. Besides that, I am happy to see some visual comparison with other baselines and the visual/numerical results on the diverse datasets.
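For context on Q6 in the rebuttal above, where the rotation R of Equation 5 is said to be obtained in closed form from the SVD of a 3x3 matrix: this is the standard orthogonal Procrustes / Kabsch construction. A minimal sketch of that closed-form step (generic, not the paper's exact energy; names are illustrative):

```python
import numpy as np

def best_rotation(P, Q):
    """Rotation R minimizing ||R P - Q||_F for 3 x n point sets P, Q
    (assumed already centered), via SVD of the 3x3 cross-covariance
    matrix (Kabsch / orthogonal Procrustes).
    """
    U, _, Vt = np.linalg.svd(Q @ P.T)  # SVD of a 3x3 matrix: very cheap
    # Correct the sign of the last singular direction to exclude
    # reflections, guaranteeing det(R) = +1.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

Since only a 3x3 SVD is involved, this step costs a constant amount per prism/face, which matches the rebuttal's efficiency claim.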
Summary: This paper works on learning a point-to-point mapping between two sets of deformable mesh vertices in a self-supervised manner. To this end, the authors extract features of two meshes from a DiffusionNet, solve for functional maps by an optimization problem, and finally convert the functional maps to a point-to-point map iteratively. In the meantime, a per-face rigid transformation is generated from a shape decoder and is used to transform the source shape to the target shape. To demonstrate the effectiveness of the proposed method, the authors conduct experiments on both near-isometric and non-isometric datasets, and achieve reasonable performance as a self-supervised method. Strengths: - The paper is well organized and written. The problem is well defined in Sec.1, while the previous works and preliminary knowledge are also well introduced in Sec.2 and Sec.3. - The two-stream (implicit shape transformation decoder and explicit functional map) pipeline is a reasonable design: - The low-rank functional map estimation is not only efficient but also facilitates the learning of feature extraction. - The MLP shape decoder and the PriMo energy regularize the predicted transformation by both network structure and training losses. - As a self-supervised method, the performance is strong on both near-isometric and non-isometric data. Weaknesses: I do not see major weaknesses except for some details: - The feature extractor should have more discussion, since it is key for the functional map prediction. Specifically, the authors could either visualize or quantitatively measure the consistency of the extracted features. In addition, it would be better to explicitly indicate that the DiffusionNet is not pre-trained (in spite of lines 36~37) to avoid confusion with existing models. - The pipeline figures could be better illustrated; for example, the module blocks could be colored based on whether they are learned or have an explicit formulation. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The shape encoder might be redundant; similar to DeepSDF, the authors could directly learn the latent code jointly with the other module parameters. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful comments and suggestions. We invite the reviewer to consider the general feedback that we provided to all reviewers in the "Author Rebuttal" answer. Below are our answers to the reviewer's questions. **Q1: Further Insights on the Feature Extractor** Thank you for this question. Indeed, the features computed by our feature extractor are generally consistent with respect to the underlying maps. To illustrate this, we've included a visualization that showcases the consistency of the features generated by the feature extractor across a pair of shapes (See Figure 7 in the "Author Rebuttal" answer). In this figure, we reduced the dimension of the extracted features with PCA and then visualized them as RGB values. From this visualization, it's clear that the features from both shapes match well and emphasize the same areas. We will be happy to provide other similar illustrations in the final version. **Q2: Direct Latent Code Learning vs. Using an Encoder** We appreciate the reviewer's insightful observation. Indeed, we considered the choice of directly learning the latent code in a manner akin to DeepSDF. However, our empirical results indicated a superior performance when employing the encoder architecture. Specifically, for the Faust dataset, using a latent code gave a geodesic error of 4.0 versus our encoder's 1.9. For the Scape dataset, the errors were 9.9 and 4.7 respectively. A crucial distinction between our method and DeepSDF is the context of the optimization. Our SNK optimizes over a single shape pair in an **unsupervised** manner, whereas DeepSDF is trained on a substantial dataset using a supervised paradigm. This stronger training signal in DeepSDF can effectively assist in deriving robust latent codes. 
Moreover, our encoder offers the benefit of harnessing the input shape as an additional source of information, further enriched by the intrinsic regularization introduced by the network architecture (as discussed in our response to reviewer **j61z**, Q3). We recognize the importance of this comparison and will ensure to integrate this ablation study in the revised version of the paper for clarity. **Q3: Exposition / Enhancements to the Pipeline Figure** We thank the reviewer for finding the paper "well organized and written". In our revised version, we will improve the visualization by color-coding the pipeline blocks to differentiate between optimized components, like the feature extractor, shape encoder, and parts of the decoder, compared to those with explicit formulation, such as the FM solver and the rotation and translation predictions in the decoder. Furthermore, to prevent any confusion, we will explicitly indicate that our approach *does not* rely on any pre-training, including the DiffusionNet feature extractor. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Dear authors: Thank you for the response. The rebuttal has addressed all my questions.
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments. We find the suggestions to be beneficial for improving the quality of our work, making it clearer and more convincing. Before responding to individual concerns, we stress the following contributions of our work: * We have devised a new decoder architecture rooted in the PriMo energy concept [**osbV**, **nxSf**]. This architecture facilitates the production of deformations that appear smooth and natural [**osbV**, **E6YV**, **nxSf**]. * We have shown that a loss function, which imposes penalties on both spatial and spectral quantities [**osbV**, **j61z**, **nxSf**], is adequate for deriving matches on a single pair of shapes without any prerequisite training [**E6YV**]. * We have developed a method for zero-shot shape matching that attains state-of-the-art results among methods that operate on individual pairs [**E6YV**, **nxSf**]. Furthermore, it competes with, and frequently outperforms, several supervised training-based approaches [**osbV**, **E6YV**]. We believe that all of the suggested changes can be easily done within a minor revision and we will make sure to address all of the comments and concerns in the final version. We will also release our code and data for full reproducibility of our results and to facilitate future work in this area. --- In response to the reviewers' queries, we've added new references. For clarity, we'll enumerate them below. [79] Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models. X. Xingyu et al. 2022. [80] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. T. Muller et al. TOG 2022. [81] Interactive curve-constrained functional maps. A. Gehre et al. CGF 2018. [82] Elastic Correspondence between Triangle Meshes. D. Ezuz et al. CGF 2019. [83] Deep Image Prior. D. Ulyanov et al. CVPR 2018. [84] NCP: Neural Correspondence Prior. S. Attaiki et al. 
NeurIPS 2022. [85] Dynamic Graph CNN for Learning on Point Clouds. Y. Wang et al. TOG 2019. [86] Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework. X. Ma et al. ICLR 2022. Pdf: /pdf/a3555584dcafc9fcc97b16a65443e49ed0dfec44.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Differentiable Clustering with Perturbed Spanning Forests
Accept (poster)
Summary: This paper proposes a differentiable clustering method based on a stochastic perturbation of minimum-weight spanning forests. They demonstrate its performance on several datasets in supervised and semi-supervised settings. Strengths: This paper introduces a differentiable clustering method based on linear programs for minimum-weight spanning forests. Their method starts by formulating clustering as a spanning forest problem, which admits an LP (linear programming) formulation. Then, they make the parametrization differentiable using stochastic perturbation. They also propose a novel partial Fenchel-Young loss, which allows for learning through clustering in weakly supervised problems. The technical part of this paper is strong. The idea of the partial Fenchel-Young loss is novel. The theoretical part is well written and not hard to follow. Weaknesses: For the experiment part, - it might be better to elaborate more on the settings and what and how $M_\Omega$ is given in these settings so readers can better understand the tasks. - Comparison with other methods is lacking: maybe some SOTA nondifferentiable clustering methods. - Since the goal is to learn $M$, an $n\times n$ matrix, could there be computational efficiency/scalability issues with the proposed method when $n$ is large? Other methods might have a more condensed parametrization than the proposed framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our work, as well as the interesting points that you raise. We have taken this opportunity to address them in the following manner. ### *Elaborating on the setting* We have added a remark clarifying how $M_\Omega$ is given, from label information. To summarize, for all pairs of elements $(i, j)$ that both have label information, $M_{\Omega, ij}$ is equal to $1$ if they have the same label (must-link) and $0$ if they have a different one (must-not-link). The coefficient $M_{\Omega, ij}$ is ‘$*$’ (no information) if one of the two labels is not known. ### *Comparison with nondifferentiable clustering methods* We have added a quantitative comparison for different clustering methods (k-means, Gaussian Mixture Model), both on the standard synthetic data showcased in Figure 3 of the submission, as well as on the CIFAR-10 dataset, using pre-trained features. The results on the standard synthetic data showcased in Figure 3 have been added to our manuscript - see Figure 1.(a) in the attached document. The results for CIFAR-10 are as follows; they have been added to our manuscript: +++++ CIFAR-10 (ResNet50 for embeddings) [metric: L2 distance in coincidence matrix]: Differentiable clustering: 0.097 (ours) Classification embeddings + k-means: 0.104 (non-differentiable) Classification embeddings + GMM: 0.107 (non-differentiable) +++++ This allows us to highlight that our method not only performs differentiable clustering, but also learns good representations of the data. ### *Parametrization of our method* Our method indeed scales quadratically in $n$, similarly to attention mechanisms. However, as our method can use mini-batches, it is suitable for training deep learning models on large data sets. For example, we use a batch size of 64 (similar to that for standard classification models) for all experiments detailed in our submission. 
Other methods that focus on cluster assignment, rather than pairwise clustering, often have an $n \times k$ parametrization, which can be smaller or larger, depending on the comparison between $n$ (batch size) and $k$ (number of clusters). We have added a remark to clarify both of these points. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Dear Authors, Thank you for taking the time to address my concerns, and sorry for the late reply. As for $M_{\Omega}$, I understood that the setting of must-link and must-not-link labels corresponds to 1 and 0 entries in $M_\Omega$. I guess my question is more about how this information is given from the datasets --- is it provided by the datasets, is it manually determined, or is it randomly generated? For example, for the datasets used in the experiments, like the standard synthetic data, as well as the CIFAR or MNIST/Fashion-MNIST data. I am good with other points. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you for your reply. Regarding your question about $M_\Omega$: In our experiments, this is done in two parts - First, artificially removing the label information for some of the instances. This is done randomly ahead of any training. The set of elements with no label information is fixed and stays the same. This is meant to simulate a dataset where some information is indeed missing. In practice, this is not necessary, but it is done in our experiments to evaluate the impact of the number of labeled elements/classes. - Then, at every batch, this information is used to create $M_\Omega$, going from same-class / different-class information to must-link / must-not-link / no information, as described in our reply above. We will add a remark to clarify this in our manuscript, thank you for the opportunity to do so. Other kinds of partial clustering information can of course be used if they are available: our method can tackle any $M_\Omega$ that is consistent with some true cluster information.
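The must-link / must-not-link construction described in this exchange can be sketched as follows. This is our own illustrative code, not the authors' implementation; the ‘$*$’ (no information) entries are encoded here as -1, an arbitrary choice:

```python
import numpy as np

def build_partial_coincidence(labels):
    """Build a partial coincidence matrix M_Omega from per-element labels.

    labels[i] is an integer class label, or None when the label is unknown.
    M_Omega[i, j] = 1  if both labels are known and equal (must-link),
                    0  if both are known and differ     (must-not-link),
                    -1 as a stand-in for '*' (no information) otherwise.
    """
    n = len(labels)
    M = np.full((n, n), -1, dtype=int)  # default: '*' (no information)
    for i in range(n):
        for j in range(n):
            if labels[i] is not None and labels[j] is not None:
                M[i, j] = int(labels[i] == labels[j])
    return M
```

In the semi-supervised experiments described above, some labels would first be randomly replaced by `None` ahead of training, and this matrix would then be rebuilt for each mini-batch.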
Summary: The paper presents a novel method for differentiable clustering in combination with a novel loss derived from Fenchel-Young losses. Strengths: The paper proposes a new concept for differentiable clustering. Differentiable clustering has been a topic with only few applicable methods so far, so it is great to see a new and at the same time strong method being proposed in this space. The method is sound and the utility of the method is experimentally demonstrated. Weaknesses: It would have been great to see a stronger experiment. Maybe CIFAR-10 / -100? Or for self-supervised learning on STL or ImageNet? I would increase my score for a stronger experiment. The main aspect of being able to transmit gradients backward through a clustering operation, a good representation of data points, as computed e.g. by some neural network, can be learned. That is, the original data, which might be difficult to cluster, maybe even due to its dimensionality, is not clustered directly, but a learnable representation of it is clustered, and this representation is informed by the gradients transmitted backwards through the clustering algorithm. I believe this should be emphasized more in the introduction. The clustering function \pi is usually described by a k x n matrix U that is called "partition matrix". The matrix that is called a "cluster membership matrix" here (not a good name, because it does not state which data points belong to which clusters -- that's the partition matrix) is usually known as "coincidence matrix" or "cluster connectivity matrix" (e.g. in the context of relative cluster evaluation measures -- it states for each set of data points whether they are assigned to the same cluster or not). I would recommend using this standard terminology. The entries of a coincidence matrix M can be computed from the entries of a partition matrix U as M_jl = \sum_{i=1}^k U_ij U_il. Note that this works also in the fuzzy or membership degree case (i.e. U_ij \in [0,1]). 
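The relation the reviewer states, M_jl = \sum_{i=1}^k U_ij U_il, amounts to M = UᵀU. A minimal sketch of this conversion from a partition matrix to a coincidence matrix (our own illustration, with names chosen for clarity):

```python
import numpy as np

def coincidence_from_partition(U):
    """Coincidence matrix M from a k x n partition matrix U.

    M[j, l] = sum_i U[i, j] * U[i, l], i.e. M = U^T U.
    For hard assignments (columns of U one-hot), M[j, l] is 1 iff points
    j and l share a cluster; fuzzy memberships in [0, 1] also work.
    """
    U = np.asarray(U, dtype=float)
    return U.T @ U
```

For example, with two clusters over three points, U = [[1, 1, 0], [0, 0, 1]] yields a 3x3 coincidence matrix linking the first two points.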
In this sense, Def. 4 does not specify membership constraints, but rather coincidence constraints or connectivity constraints. The terms "must link" and "must not link" that occur in the caption of Figure 1 could have been introduced after Def. 4, as "must link": M_\Omega,ij = 1, "must not link": M_\Omega,i,j = 0. In this sense M is a "cluster connectivity matrix" (see above). The connection to single linkage hierarchical agglomerative clustering with cluster merging carried out until k clusters remain could have been pointed out. 126: a n x n -> an n x n [A] seems to be a relevant but missing reference. [A] Struminsky, K., Gadetsky, A., Rakitin, D., Karpushkin, D., and Vetrov, D. P. Leveraging recursive gumbel-max trick for approximate inference in combinatorial spaces. Advances in Neural Information Processing Systems, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The experimental evaluation is limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our work, as well as the interesting points that you raise. We have taken this opportunity to address them in the following manner. ### *Scaling to more complicated datasets* Thank you very much for your suggestion, and the indication that you would raise your score. As suggested, we have added experiments on the CIFAR-10 dataset for both the fully supervised and semi-supervised settings (as well as a downstream classification transfer learning task, as in Figure 4 of the submission). For supervised clustering, we obtain a clustering error of 6.7% on the test set, in line with our results on MNIST and Fashion-MNIST; we also provide in Figure 1.(b) the t-SNE of the embeddings obtained in this manner in the feature space. For semi-supervised clustering, the results are in Figure 2 in the attached document and have been included in the manuscript (analogous to Figure 4 on MNIST in our original submission). We observe analogous trends for clustering error to those detailed in Sections 4.2 (& 10.2) of the submission, with our model outperforming the baseline. The model attained a classification error via a linear head (downstream transfer learning) comparable to that of a baseline classification model for k=0, and outperformed the baseline for k=3 and k=6. ### *On learning representations as the main aspect of being able to transmit gradients backward through a clustering operation* Thank you very much for bringing this up. Indeed, this is a very interesting way to present our work, and an important point that we had not clearly expressed. We have added a remark highlighting this point in the introduction. ### *Notation and presentation* We have made all of the suggested terminology and presentation changes. ### *Suggested reference* Thank you for this very interesting reference; we have added a citation to it, as well as a discussion of this work. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response and for addressing all of my concerns. Accordingly, I raise my score. In the vein of learning representations with differentiable grouping, [B] could be a relevant recent reference. [B] Nina Shvetsova et al. "Learning by Sorting: Self-supervised Learning with Group Ordering Constraints", ICCV 2023 --- Reply to Comment 1.1.1: Comment: Thank you for your positive response to our rebuttal, and for the relevant reference on representation learning via grouping (ICCV 2023). We have added a citation to it, as well as a discussion of the work.
Summary: This paper introduces a differentiable clustering method based on minimum-weight spanning forests. This method can be integrated into end-to-end trainable pipelines and handle datasets with high noise and challenging geometries. The key idea is to smooth the combinatorial clustering operation by taking expectations over stochastic perturbations of the input similarity matrix. By taking expectations over these perturbations, gradients can be computed and backpropagated through the clustering process, facilitating end-to-end training. Experiments on both supervised and semi-supervised tasks demonstrate the effectiveness of the proposed method. Strengths: 1. The authors propose an effective method to address a challenging problem: learning through clustering. The proposed method is well-motivated and reasonable, and it has the potential to be applied in a wide range of applications. This paper addresses the key challenges of incorporating combinatorial operations like clustering into neural network pipelines that can be optimized through gradient descent. The perturbed proxy for discrete operators addresses the major issue of piecewise-constant mappings by attaining useful properties like smoothness and computational tractability. 2. The paper is well-written and easy to follow. Detailed remarks are provided for readers to better understand the definitions. Weaknesses: My major concern is about the evaluation. 1. The proposed method is only evaluated on small-scale datasets such as MNIST. It remains unclear whether the proposed method can be scaled to large datasets. 2. The authors did not compare the proposed method with other differentiable clustering methods. Therefore, it remains unclear whether the proposed method is more efficient than other baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the weaknesses. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors claim that the proposed method can be used to handle datasets with high noise, but this claim is not verified in the experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
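The perturbation-based smoothing summarized in this review can be illustrated with a generic Monte-Carlo sketch. This is our own simplified illustration, not the paper's implementation; the toy `solver` stands in for the discrete spanning-forest solver, and all names are ours:

```python
import numpy as np

def perturbed_expectation(theta, solver, n_samples=100, eps=0.5, rng=None):
    """Monte-Carlo estimate of E[solver(theta + eps * Z)], Z ~ N(0, I).

    `solver` maps a (noisy) similarity/score matrix to a discrete output,
    e.g. the coincidence matrix of a minimum-weight spanning forest.
    Averaging over perturbations yields a smooth proxy for the
    piecewise-constant solver, through which gradients can be estimated.
    """
    rng = np.random.default_rng(rng)
    outs = [solver(theta + eps * rng.standard_normal(theta.shape))
            for _ in range(n_samples)]
    return np.mean(outs, axis=0)
```

With a clear-cut input, the averaged output coincides with the unperturbed solver's; near decision boundaries, it interpolates smoothly between the discrete outcomes.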
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our work, as well as the interesting points that you raise. We have taken this opportunity to address them in the following manner. ### *Scaling to more complicated datasets* As our method can use mini-batches, it is suitable for training deep learning models on large datasets (we use a batch size of 64 for all experiments detailed in our submission). We have added experiments on the CIFAR-10 dataset for both the fully supervised and semi-supervised settings (as well as a downstream classification transfer learning task, as in Figure 4 of the submission). For supervised clustering, we obtain a clustering error of 6.7% on the test set, in line with our results on MNIST and Fashion-MNIST; we also provide in Figure 1.(b) the t-SNE of the embeddings obtained in this manner in the feature space. For semi-supervised clustering, the results are in Figure 2 in the attached document and have been included in the manuscript (analogous to Figure 4 on MNIST in our original submission). We observe analogous trends for clustering error to those detailed in Sections 4.2 (& 10.2) of the submission, with our model outperforming the baseline. The model attained a classification error via a linear head (downstream transfer learning) comparable to that of a baseline classification model for k=0, and outperformed the baseline for k=3 and k=6. ### *Comparison with other differentiable clustering methods* We have added a comparison between our method’s results on MNIST and CIFAR-10 and those reported by [Yang et al. 2017] and [Genevay et al. 2019], because they were the most comparable to our own experimental evaluations. As a reminder, using our methodology, we obtain a clustering error of 1.0% on MNIST and 6.7% on CIFAR-10 (and 4.0% on Fashion-MNIST). A remark describing this comparison has been included in the manuscript. + [Yang et al. 
2017] - Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering, ICML 2017. Uses a bi-level optimization procedure (alternating between optimizing model weights and centroid clusters). They reported attaining 83% label-wise clustering accuracy on MNIST using a fully-connected deep network. + [Genevay et al. 2019] - Differentiable Deep Clustering with Cluster Size Constraints, 2019. Casts k-means as an optimal transport problem, and uses entropic regularization for smoothing. Reported an 85% accuracy on MNIST and 25% accuracy on CIFAR-10 with a CNN. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concern by including experiments on the CIFAR-10 dataset. Although CIFAR-10 is still a small dataset, it is more complicated than MNIST and Fashion-MNIST. I prefer to increase my score to 6. It would be greatly appreciated if the authors could include the new results in the revised manuscript. --- Reply to Comment 1.1.1: Comment: Thank you very much for your comment, and for increasing your score, we are glad that we were able to address your concerns. We have added these results to our work, and they will be in the revised manuscript.
Summary: The paper presents a differentiable clustering approach based on k-spanning forests. It formulates the clustering assignment as an NxN membership matrix while introducing partial Fenchel-Young losses to optimize the fit between the cluster constraints and the similarity/affinity matrix. The study demonstrates the applicability of the proposed approach to both supervised and semi-supervised learning for representation learning through clustering. The results indicate the following: (1) On several synthetic datasets, the proposed method generated qualitatively better clusters compared to baseline clustering methods including k-means, MeanShift, and the Gaussian mixture model. (2) In supervised/semi-supervised representation learning experiments, the proposed method effectively learns a robust representation based on the labels (cluster constraints), outperforming the classification approach on the MNIST/Fashion-MNIST datasets. Strengths: - The proposed approach, based on k-spanning forests and the partial Fenchel-Young losses for clustering, is technically sound. - The proposed approach enables end-to-end optimization for representation learning through clustering, which is a novel and interesting direction. Experimental results on MNIST also show that, interestingly, the method was able to outperform a classification loss on semi-supervised tasks, especially in the low-data regime. Weaknesses: - Experimental results are limited to toy datasets (synthesized data, MNIST); it would be interesting to see how the method extends to real-world data such as ImageNet. - Comparison with traditional clustering methods is also limited to qualitative comparison; it would be good to have more quantitative comparisons between different clustering methods to demonstrate the benefits of the proposed clustering method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the proposed clustering method compare to traditional clustering algorithms (e.g. 
k-means) on real-world data? The paper only presents qualitative experiments on toy datasets; it would be more interesting to know how it compares quantitatively on real-world datasets. - How does the proposed method perform on more complicated datasets such as ImageNet compared to a traditional classification approach? It would make the paper stronger if the authors could show how the proposed method can be incorporated into an ML pipeline to demonstrate benefits on more complicated datasets. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have listed the possible broader impact and limitations (e.g. batch size and current implementation efficiency) in the supplemental materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our work, as well as the interesting points that you raise. We have taken this opportunity to address them in the following manner. ### *Comparison with other clustering algorithms* We have added a quantitative comparison for different clustering methods (k-means, Gaussian Mixture Model), both on the standard synthetic data showcased in Figure 3 of the submission, as well as on the CIFAR-10 dataset, using pre-trained features. The results on the standard synthetic data showcased in Figure 3 of our original submission have been added to our manuscript - see Figure 1.(a) in the attached document. The results for CIFAR-10 are as follows; they have been added to our manuscript: +++++ CIFAR-10 (ResNet50 for embeddings) [metric: L2 distance in coincidence matrix]: Differentiable clustering: 0.097 (ours) Classification embeddings + k-means: 0.104 (non-differentiable) Classification embeddings + GMM: 0.107 (non-differentiable) +++++ This allows us to highlight that our method not only performs differentiable clustering, but also learns good representations of the data (see the following point). ### *Performance on more complicated datasets* We have added experiments on the CIFAR-10 dataset for both the fully supervised and semi-supervised settings (as well as a downstream classification transfer learning task, as in Figure 4 of the submission). For supervised clustering, we obtain a clustering error of 6.7% on the test set, in line with our results on MNIST and Fashion-MNIST; we also provide in Figure 1.(b) the t-SNE of the embeddings obtained in this manner in the feature space. For semi-supervised clustering, the results are in Figure 2 in the attached document and have been included in the manuscript (analogous to Figure 4 on MNIST in our original submission). 
We observe analogous trends for clustering error to those detailed in Sections 4.2 (& 10.2) of the submission, with our model outperforming the baseline. The model attained a classification error via a linear head (downstream transfer learning) comparable to that of a baseline classification model for k=0, and outperformed the baseline for k=3 and k=6.
Rebuttal 1: Rebuttal: We would like to thank the reviewers and the area chair for their overall positive evaluation of our work. We are thankful for the high-quality, informative reviews and their constructive suggestions. We have made the following changes, further described in individual replies to reviewers: - We have added an additional clustering experiment on the CIFAR-10 dataset, in both the fully-supervised and semi-supervised settings. The results are in the attached document (Figure 2) and have been included in our manuscript. - We have added a quantitative comparison for the clustering methods contained in Figure 3 of the submission, as well as an evaluation of different clustering methods on pre-trained CIFAR-10 features. We detail these results in replies below and they have been included in our manuscript. - We have made the suggested changes: clarifying some points, and improving our terminology and presentation (see replies below for details). Pdf: /pdf/9d7c6990f13529d5d2f84475d43984c0907ecb60.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Interpretability at Scale: Identifying Causal Mechanisms in Alpaca
Accept (poster)
Summary: This paper introduces an advancement of Distributed Alignment Search (DAS), termed Boundless DAS, where the brute-force search steps are replaced with learned parameters. This enhancement enables efficient exploration of interpretable causal structures in large language models and therefore results in a scalable interpretability method. Strengths: 1) The increasing significance of LLMs necessitates an evaluation of their interpretability. Developing a scalable method to achieve this goal is undeniably crucial and intriguing. 2) Current interpretability methods are inadequate for LLMs due to scalability issues, rendering them impractical. In contrast, the proposed method offers a solution that enables the interpretation of real-world LLMs, filling this crucial gap. Weaknesses: 1) My primary concern lies with the novelty of the paper, which appears to be exceedingly limited. The paper seems to be primarily an extension of DAS and a direct amalgamation of two existing works, namely DAS and neural PDE [58], without addressing any specific challenges or introducing significant innovative elements. 2) The paper heavily relies on the DAS method as its foundation, and it is imperative to provide a comprehensive explanation of DAS, including its limitations, which are currently absent from the paper. A thorough understanding of DAS and its shortcomings is crucial to fully comprehend the context and motivation of the proposed research. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: The authors have discussed the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Thanks* for your insightful comments; they **lead us to picture more clearly the improvements we have made compared to DAS [Geiger et al., 2023] and why they are important**. We further clarify our main contributions besides simply proposing a new method for finding alignments. Here, we address all concerns with point-by-point responses. > **Q1:** "The paper seems to be primarily an extension of DAS without addressing any specific challenges" **A1:** **Boundless DAS addresses the challenge of scaling causal explanation methods** to the scale of Alpaca (and beyond, assuming one has the compute budget for the work!). **This is significant** because prior methods do not scale to this level, which means that they do not apply to the most relevant present-day models. Boundless DAS achieves this scalability by removing essentially all aspects of manual search that limited previous methods. The relevant parameters are now learned directly. The resulting optimization problem is straightforward, which we regard as a virtue. This is also **the first paper (as far as we know) to offer a causally grounded explanation of instruction-following LLMs, with numerous robustness checks.** > **Q2:** "A thorough understanding of DAS and its shortcomings is crucial" **A2:** Thanks for the suggestions. Given an additional page in the next revision, **we will provide a detailed background introduction to DAS [Geiger et al., 2023]**. In addition, we will add a thorough discussion of DAS in the appendix in our next revision, and we will provide a comparison of time complexity between the two methods. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation of the DAS. It is definitely valuable in enhancing comprehension of the proposed approach. Regarding your comment on the novelty, I'm struggling to identify the distinctive technical contribution of the work. 
It seems to be a straightforward extension of DAS, incorporating an already existing technique (neural PDS [58]). Additionally, as highlighted by fellow reviewers, this extension hasn't been thoroughly assessed across various tasks, with outcomes predominantly confined to Alpaca. Due to these reasons, I would like to keep my score unchanged.
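The core move discussed in this thread — replacing brute-force search over rotations with directly learned parameters — relies on the fact that an orthogonal rotation can be parametrized by unconstrained numbers and optimized by gradient descent. A generic sketch of one such parametrization, the matrix exponential of a skew-symmetric matrix (our own illustration, not the authors' code):

```python
import numpy as np
from scipy.linalg import expm

def rotation_from_params(params, d):
    """Build a d x d orthogonal rotation from unconstrained parameters.

    A skew-symmetric matrix S is formed from `params`, which has length
    d*(d-1)/2, and R = exp(S) is orthogonal by construction. The rotation
    can therefore be learned by gradient descent over `params` instead of
    searched over by brute force.
    """
    S = np.zeros((d, d))
    S[np.triu_indices(d, k=1)] = params  # fill strict upper triangle
    S = S - S.T                          # make skew-symmetric
    return expm(S)
```

In practice, deep learning frameworks provide equivalent orthogonal parametrizations, and the rotated coordinates can then be intervened on or masked with further learned parameters.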
Summary: Finding alignments between variables in a user-hypothesized causal model for how a task could (or should) be performed and representations in neural networks, termed "causal abstraction", is a mechanistic interpretability technique for understanding how neural networks develop predictions at a granular level by using causal interventions. Prior work has recently proposed the Distributed Alignment Search algorithm for efficiently and robustly finding this alignment, but it still relies on brute-force search over the dimensionality of neural representations. This paper proposes to circumvent the brute force search by *learning* parameters of an orthogonal rotation matrix to maximize the alignment between a neural network representation's linear subspace and one of the variables in the high-level causal model. They test their method on one rather narrowly scoped task, which is determining if a price is between two numbers, and use the Alpaca (instruction-tuned LLaMA) 7B decoder-only language model. The authors also test whether the found alignments generalize across settings (both input prompts and output token choices for labels), and find that they do. Edit: I read the authors' rebuttal and the other reviews & discussions. I did not change my score as only one of the weaknesses (writing/presentation) was alleviated. I still think the paper should be accepted though. Strengths: Originality: - The application of causal abstraction (or even other variants, like causal mediation analysis/causal tracing) to large instruction-tuned LMs has not been done; this is highly original to the best of my knowledge. Quality & Clarity: - The experiments seem sound. The analysis in sections 4.3 - 4.6 is insightful and interesting. The overall findings of the paper are useful to a larger community for understanding math processing in LMs. - The writing is clear overall. - Related works section is comprehensive. 
Significance: - Causal analysis of language models at the representation level is a very important goal with potentially far-reaching impacts for the broader NLP community. Using instruction-tuned models that are in widespread use today, such as Alpaca 7B, bridges the gap between theory and practice and makes this work more applicable to a wider audience. Weaknesses: - The paper does not read independently (particularly section 3.1) to a reader unfamiliar with prior work. I had to refer extensively to [32] to grasp what the various terms and variables used meant, and to understand what components of the proposed algorithm are novel w.r.t. prior work. If there is not space in the main paper, the appendix should be used to thoroughly explain these concepts in a camera-ready version. The math notation could be improved; see "questions" section. - The method involves pre-registering some possible symbolic models for performing a task; it cannot discover novel symbolic models that are not specified by the user. - The method is tested on one narrowly-scoped task (determining if a price is between two numbers) and one model. There is no way to "validate" a mechanistic interpretability method beyond principle or axioms as we don't have the ground truth, so some of the findings are more suggestive than conclusive. However, this is a common problem in the subfield and I think this paper is convincing overall. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - On finding strong alignments between Alpaca’s representations and two of the proposed causal models ("left boundary", "left and right boundary")— if the method reveals a faithful abstraction, wouldn't that imply that only one abstraction can have strong alignments with model weights, rather than multiple? Can you explain?
- On writing clarification in Section 3.1-- it would be useful to establish early on that "variables" refers to hidden representations in a NN and "algorithm" to a high-level user-specified symbolic model. Relatedly, the terms "input variables" and "target variables" are a bit confusing as you also have input sequences ("base input", "source inputs") and target model outputs, which are different. - I’m not sure I would use the term “largest” to describe a 7B model. - You use the phrase “human-interpretable” to describe the explanations that DAS produces; it would be good to provide a definition or at least some citations of what this concretely means. - Lines 170-171 and x-axis in Figure 4: these refer to the same thing; it would be good to give a concrete in-line example in the text as you have done for the core instruction. - Will you open-source your code? Some other mathematical notation could be improved. For example, - what do $j$ and $k$ represent? (I'm assuming $k$ is the number of dataset instances and $j$ the instance being indexed, but this is unstated). $k$ appears doubly-defined in Eqn. 3. - Set notation is mixed between uppercase variables and brackets - $\mathbf{s}$ and $\mathbf{b}$ are not vectors but sequences of inputs (i.e., tokens), which is a bit nontraditional notation; introduction of $\textbf{Inputs}$ seems as though it should have been introduced earlier. - both $\mathcal{N}$ and $\mathcal{M}$ represent a neural network model, both "algorithm $\mathcal{A}$" and "model $\mathcal{C}$" (which is never used after defining) represent a high-level causal model, if I understand correctly. - no definition of $F$, $F^*$, $\Pi$, $L$ - I did not follow the connection between target variables $X$, $\mathbf{Z}$, and $\mathbf{N}$. - $y_p$ and $y$ are more traditionally notated as $\hat y$ and $y^*$; and should keep indexing consistent as $\hat y_i$ and $y^*_i$ Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Yes- sufficiently detailed & comprehensive limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Thanks* for the useful suggestions. **They lead us to define more clearly how to interpret our results and the existing limitations** of Boundless DAS. > **Q1:** "The paper does not read independently (particularly section 3.1) to a reader unfamiliar with prior work.” **A1:** We plan to use the extra page to add background on DAS and to add a detailed appendix on DAS. > **Q2:** "The method involves pre-registering some possible symbolic models for performing a task" **A2:** Yes, the framework verifies whether a causal model can be aligned with representations. Future work may look at how to automatically create hypotheses for Boundless DAS evaluation. > **Q3:** "The method is tested on one narrowly-scoped task (determining if a price is between two numbers) and one model." **A3:** Thanks for the comments. While we only work with Alpaca-7B and a single task, we provide clear evidence that **Boundless DAS has the potential to scale to many other tasks and LLMs**. At the time of this project, the main bottleneck was the model’s ability to solve simple reasoning tasks, as described in Appendix A.2. We agree that the causal mechanism uncovered with Boundless DAS may not be the most fine-grained explanation. However, **the found mechanism, grounded by our IIA metrics,** can be used to faithfully steer a model's behavior at inference time and generalizes well in our tests. > **Q4:** "if the method reveals a faithful abstraction, wouldn't that imply that only one abstraction can have strong alignments?" **A4:** **Not necessarily.** In our case, aligning “left and right boundary” entails aligning “left boundary”, since the latter is one of the subspaces of the former. This also means our model implements both instead of one. In a case where we could only align the “left boundary” but not both, we would then observe lower IIA for "left and right boundary".
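The subspace-nesting point in A4 can be illustrated with a toy distributed interchange intervention: rotate both activations into a learned basis, copy the first $k$ rotated dimensions from the source into the base, and rotate back. This is a hedged sketch under assumed shapes, not the actual Boundless DAS code; the rotation here is a random stand-in for the learned one.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 3  # hidden size; width of the intervened subspace (k dims)

# Stand-in orthogonal basis change; in Boundless DAS this rotation is learned.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

def interchange(h_base, h_source, R, k):
    """Swap the first k rotated dimensions of the base activation with
    those of the source activation, then rotate back to the original basis."""
    z = R @ h_base
    z[:k] = (R @ h_source)[:k]  # the aligned subspace comes from the source
    return R.T @ z

h_base, h_source = rng.normal(size=d), rng.normal(size=d)
h_new = interchange(h_base, h_source, R, k)

# Nesting: a narrower subspace (e.g., dims [:k1] with k1 < k, "left boundary")
# sits inside the wider k-dim one ("left and right boundary").
print(np.allclose(interchange(h_base, h_source, R, d), h_source))  # → True
```

With `k = d` the intervention copies the source activation entirely, and with `k = 0` it is a no-op, which bounds the two extremes of the learned boundary width.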
> **Q5:** "terms like "input variables" and "target variables" are confusing" **A5:** Thanks! Here is a list of terms we will clarify in the next revision. - *Input variables:* These are essentially the input settings, or counterfactual example pairs, that we use to train Boundless DAS. - *Target variables:* These are the hidden variables within the neural network for which we try to find alignments. > **Q6:** "I’m not sure I would use the term “largest” to describe a 7B model.” **A6:** We agree it is not the largest LLM. **We will revise the sentence** to be clearer. We do, though, think that the method scales to our largest models, provided one has the hardware and compute budget (and access) to run those models! > **Q7:** "You use the phrase “human-interpretable” to describe the explanations that DAS produces; it would be good to provide a definition or at least some citations” **A7:** We will do this, thanks! We can draw more extensively on papers we cite: Lipton 2018, Geiger et al. 2020, Feder et al. 2021, and others. > **Q8:** "Lines 170-171 and x-axis in Figure 4: these refer to the same thing; it would be good to give a concrete in-line example in the text as you have done for the core instruction." **A8:** Thanks! We will provide a concrete example. > **Q9:** "Will you open-source your code?" **A9:** Yes! > **Q10:** "Some other mathematical notation could be improved. For example, what do $j$ and $k$ represent?” **A10:** You are right! We overload the notation of $j$ and $k$ to represent training instances and aligning variables. We will clarify this in our methods section. > **Q11:** "Set notation is mixed between uppercase variables and brackets" **A11:** Thanks– we will adjust the notation in the next revision. > **Q12:** "$s$ and $b$ are not vectors but sequences of inputs (i.e., tokens), which is a bit nontraditional notation; introduction of Inputs seems as though it should have been introduced earlier." **A12:** Correct.
We will make it clearer and use different symbols for sampled source and base inputs. > **Q13:** "Both $N$ and $M$ represent a neural network model, both 'algorithm' and 'model' (which is never used after defining) represent a high-level causal model, if I understand correctly.” **A13:** Right. $A$ is our high-level causal model as defined in Line 90. We use $A$ to train Boundless DAS as in Eqn. 2, where the second part is the counterfactual output as if we were intervening on $A$. We will clarify this further. > **Q14:** "no definition of $F$, $F^*$, $\Pi$, $L$" **A14:** $F_N$ represents the causal mechanism of causal variable $N$ before the distributed intervention, whereas $F_N^*$ represents the mechanism after the distributed intervention. For instance, $F_{N}(v)$ means calling the “rest” of the forward function (i.e., the causal mechanism of $F_N$) by setting $N$ to activations $v$. We will clarify $F_N$ and $F_N^*$ in our next revision. $\Pi$ can be removed from Line 99; we will correct this to use only $\tau$ to describe our alignments. > **Q15:** "I did not follow the connection between target variables $X$, $Z$ and $N$" **A15:** $X$ is the input, $N$ represents the intervened representations in the original basis, $Y$ is the representation after the basis change (e.g., via the rotation matrix), and $Z$ is a subspace of $Y$, i.e., the actual neurons, defined by our trainable boundaries, that we intervene on in the $Y$ space. > **Q16:** "$Y_p$ and $y$ are more traditionally notated as $\hat{y}$ and $y*$" **A16:** We use $Y$ to denote a different thing: $Y$ represents our intervening subspace in the neural representation. We will use a different symbol in the next revision to clarify. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for answering my questions and clarifying notational issues.
Based on the authors' responses, I am reasonably confident the authors will make the writing in Section 3 clearer and provide more adequate background on the DAS method in the updated version, though I strongly urge the authors to make these changes, as it seems I was not the only reviewer who raised these points. I think that the main claim of the rebuttal that Boundless DAS is the "only method in this class of causal explanation methods that can even be applied at the scale of Alpaca or above" is a bit strong; causal tracing/causal mediation analysis is another method that is likely applicable at this scale, so I would be careful making such a claim without empirical support (e.g., runtimes). Additionally, the authors make claims about scaling beyond the Alpaca model used in the paper here, but I think the paper could benefit from more theoretical or empirical justification (or maybe just clearer writing) about why this is the case, such as done in your response to reviewer iGjk. Regarding my first question and the authors' general response "the network may be abstracted by many other causal models as well. Di
Summary: This paper builds on Distributed Alignment Search by introducing a more efficient learned model to search for a given circuit in a model. They test this method on a specific task: asking the model whether a given numeral falls between two other given numerals. They have a single task, and find an alignment with 2 possible circuits for accomplishing this task and fail to find a good alignment to 2 other handcrafted circuits, demonstrating that this method may be capable of distinguishing between correct and incorrect circuits. EDITED to raise soundness score. Strengths: The approach of boundless DAS itself seems to be a worthwhile one, and I believe that it could work. The approach scales well, up to Alpaca, and therefore could be very useful in modern models. I'm impressed by the approach of carefully constructing possible circuits to test, and it seems to work surprisingly well in LLM settings. The observation that the IIA patterns are more structured for the seemingly correct circuits is an interesting one, though I have some reservations about this as a verification method (in questions). Weaknesses: An obvious criticism of the method itself is that we lose a degree of interpretability by introducing learned parameters into the interpretation method. How many learned parameters does it take before you’ve just introduced another black box as your lens? If the problem is that you don’t trust learned parameters to behave interpretably, how can you ensure that the method is behaving interpretably? In particular, it doesn’t seem to get away from the complaint that the authors themselves introduce: failing to find an alignment does not mean that the alignment doesn’t exist. This is a mild criticism, and I still consider the method to be potentially useful. More significantly, the experiments are not convincing. There is only a single task evaluated. They consider only a single model, Alpaca. There are only four possible circuits considered.
All of these factors add up to a fairly unconvincing set of experiments with limited statistical power. Outside of the main experiments, these issues become more apparent: section 4.5 tests generalization from a fixed input to a different fixed input, with only a single sample considered for each case. A test of generalization with inserted context only considers two different possible contexts. Overall, these experiments are too constrained, with too few samples, to be convincing; while I don't think that this is cherry-picked, results that comprise a case study of only a couple of samples could easily lead to publication bias. Furthermore, for the price task, the effectiveness of the alignment is measured by IIA, but the baseline for IIA is based on an assumption of complete randomness that I do not think is valid. After all, we have a method that has already found an alignment that maximizes the match with the circuit. I would trust this more if you had, e.g., tested it on random weights or weights trained for a completely different structured task that wasn't actually language modeling. On another note, 4.5 appears to be the method that they should be using as the main experiment. I don't trust results that train the boundary parameters and alignments on the same inputs that they test on, which appears to be the approach taken in 4.3. The math could be more carefully explained, with more precise definitions for the variables involved and more detailed intuitions for the method. One way to help is to turn Figure 1 into a step-by-step algorithm diagram. Minor: - "Boundary" is an overloaded term (used in both the method and the task). You should pick different wording. - "We find that Alpaca achieves near-perfect task performance because it implements a simple algorithm with interpretable variables." *Because* is a *very* strong claim. Do they actually support the idea that the simple algorithm is the reason for the task performance?
Is performance worse for tasks that don’t have simple algorithms? They do not show either piece of evidence. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Why would baseline IIA be 50%? You are actively aligning a circuit to the model, so shouldn't you be able to find an alignment that performs above 50% even in a randomly weighted model? What is a target variable $Z_j$? What is $F_N$? Please explain the boundary index variable in more detail. When you restrict $b_j$ to be "a multiple of $b_{j-1}$", do you mean that $b_j$ = 2*$b_{j-1}$ or that it could be any multiple as long as it is also even? Why would you use the notation in this way?
Rebuttal 1: Rebuttal: We *greatly appreciate* the reviewer’s in-depth feedback. > **Q1:** "how can you ensure that Boundless DAS is behaving interpretably? ... failing to find an alignment does not mean that the alignment doesn’t exist." **A1:** These are **incisive questions that are worthy of broader discussion**. A few thoughts: It seems we have to accept that neural models might store information in a *highly distributed fashion*, as *Smolensky and others* anticipated long ago. Our method is a sort of minimal response, as it allows only a limited kind of representation. **The issue of false negatives is important.** With our method, we can at least search vastly more hypotheses than any prior method, reducing the risks here. > **Q2:** "Overall, these experiments are not convincing." **A2:** **While we only evaluate with Alpaca-7B and on a single type of task, our method scales to larger open-source LLMs.** Moreover, the original DAS paper evaluated DAS on a set of tasks with different models and scales; Boundless DAS is strictly more scalable and expressive. **We ran new experiments with 20 random contexts generated by GPT-4@Aug05-2023** to validate one of our generalization tests with inserted context in Section 4.5. **The overall correlation of the mean IIA with random contexts inserted, against our vanilla “left and right boundary” causal model, is 0.99.** These results greatly strengthen our claims about alignment generalization. We want to emphasize that it was **unexpected** to find alignments that generalize, given that we freeze the rotation matrix and boundary indices at test time. > **Q3:** "tested it on random weights or weights trained for a completely different structured task" **A3:** **Finding alignments on a model that could not perform the task would be uninformative**: if the LM head is randomly initialized, there is little chance of getting the output token space correct (e.g., {True, False} or {Yes, No}).
**Our new results suggest good IIAs do not come for free.** As shown in **Figure 1 of the attached PDF**, all positions drop significantly with a random rotation matrix. These results calibrate IIAs in the case of unbalanced counterfactual labels (e.g., each control causal model can reach about **0.60** IIA with a random rotation matrix). > **Q4:** "I don't trust results that train the boundary parameters and alignments on the same inputs that they test on." **A4:** Alignments and boundary parameters **are tested with unseen counterfactual pairs in all sections**. Section 4.5 is more extreme: the inputs are not only unseen but also exhibit distribution shifts. > **Q5:** "I think that the math could be more carefully explained", "'Boundary' is an overloaded term" **A5:** We will provide a clearer description. > **Q6:** "'We find that Alpaca achieves near-perfect task performance because it implements a simple algorithm with interpretable variables.' *Because* is a very strong claim" **A6:** We agree that **the sentence should be clarified** to remove any ambiguity. The Boundless DAS analysis is a causal explanation for the model's success, and that is why we used "because". However, we need to clarify that there may be other explanations that highlight different factors in the model's success, as we saw in our analysis of alternative causal models. > **Q7:** "shouldn't you be able to find an alignment that performs above 50% even in a randomly weighted model?" **A7:** If a neural model performs at random with respect to a given high-level model which solves the task, we can know a priori that there is not a causal abstraction relation between the neural model and the high-level model. > **Q8:** "What is a target variable $Z_j$?” **A8:** The target variables $Z_j$ are sets of variables and the $X_j$ are variables. > **Q9:** "What is $F_N$?"
**A9:** $F_{N}$ represents the causal mechanism of causal variable $N$ before the distributed intervention, whereas $F_{N}^*$ represents the mechanism after the distributed intervention. > **Q10:** "Please explain the boundary index variable in more detail." **A10:** The boundary index variable marks how many neurons the intervention will use in the end. We penalize larger indices since we want to push the intervention site to be smaller. > **Q11:** "do you mean that $b_j$ = 2*$b_{j-1}$ or that it could be any multiple as long as it is also even? Why would you use the notation in this way?" **A11:** We restrict $b_j$ to be a multiple of $b_{j-1}$ with **a factor of 2 for simplicity**. > **Q12:** "Is it likely that we could have a very high IIA score without having more structured patterns?" **A12:** **We observed this but would not draw any conclusion about the relation between high IIA scores and structured patterns** in Figure 4. > **Q13:** "What is an intermediate boolean value?" **A13:** In our case study, it is a latent boolean value that is abstracted to check whether the input price is above the lower bracket or below the higher bracket. > **Q14:** "How large is the task dataset used to learn the alignment? Is it different from the set used to test IIA for the alignment?" **A14:** We randomly sample 20K counterfactual pairs to train the alignment. **The test set is disjoint from the train set.** > **Q15:** "'We sample 100 experiment runs from our experiment pool and create two groups' Do the runs differ only by the input used? Or by some random seed?" **A15:** **These runs are sampled from our experiment pool with different inputs and different random seeds.** > **Q16:** "In section 4.6, what does it mean to look at learned boundary width? What does a higher width indicate?” "Are the training steps (Figure 5 x axis) training steps for learning the alignment?" **A16:** Boundary width maps to how many neurons are needed in the rotated space to represent the causal variable we are aligning.
Higher width means we need more neurons to represent the variable. The Fig. 5 x-axis represents the training steps for learning the alignment. --- Rebuttal Comment 1.1: Title: stress tests Comment: Thank you, you've strengthened your results by checking a wider range of contexts. I think the method is potentially very valuable, and would eventually like to see this work published when it is ready. However, IIA itself doesn't seem to have any calibration for the user and I have no idea what a low value would entail. I believe that to evaluate your method, you need to compare the IIA to settings where you know that it should be low. To really be convinced that the value computed with your evaluation indicates a legitimate aligned circuit, I would want to see: - IIA after searching a random model (rather than comparing against the correct answer, you can compare against the answer that the model gives without intervention, so it's not true that it would be uninformative) - IIA for a known incorrect circuit of similar complexity to the ones found. - IIA on circuits for a task that is not consistently solved, in order to check whether intervening on the circuit actually corrects the model's output. Can you correct the answer by intervening on this circuit for the small number of cases where Alpaca is otherwise incorrect? If not, it seems that this circuit is inadequate to explain the model's behavior even on the constrained task given. > A7: If a neural model performs at random with respect to a given high-level model which solves the task, we can know a priori that there is not a causal abstraction relation between the neural model and the high-level model. The problem is that I want to know the actual evaluation, not your a priori judgment of what the real circuit is.
It is very easy to trick yourself into thinking that there is a meaningful circuit when there is not, and I don't think that I can accept this paper without knowing, at a minimum, whether you could trick yourself into believing that the model had an interpretable circuit when it was in fact random. > A8,9: The target variables ... are sets of variables and the ... are variables. .... represents the causal mechanism of causal variable N before the distributed intervention, whereas ... This really doesn't answer my question very well, sorry. I think that your notation needs some rewriting, and also if one of these is supposed to be a set of variables and the other is supposed to be a set of sets of variables, you should change the notation generally to reflect the different types. Generally, you need to work on your notation and define it in more detail so that the math is more readable. > A10: The boundary index variable marks how many neurons the intervention will use at the end. We penalize larger indexes since we want to push the intervention site to be smaller. So you have already chosen how many neurons the intervention actually uses? > A11: We restrict ... to be "a multiple of ... with a factor of 2 for simplicity." This really doesn't clarify anything. My point of confusion was this exact phrase that I already read in the paper. What makes it simpler? Can you actually define this variable? Is it learned or is it preselected? > A13: In our case study, it is a latent boolean value that is abstracted to check whether the input price is above the lower bracket or below the higher bracket. If you know the answer, how is it a latent value? Overall, I am not satisfied as to how to assess a value for IIA, or how reliable your method is.
I have described several experiments that could be run to calibrate the metric discovered by your method, but as is, you have no actual evaluation of the method itself, instead using it straightforwardly on a very limited setting without any confirmation of what any given score indicates. I like the method, and I like the fact that you are trying to solve the problem of finding circuits at large scales in this way, but I would want to see the metric it computes be better calibrated through more experiments. To be clear, these problems exist in normal alignment search. However, your addition of more learned parameters exacerbates the possibility of finding a coincidentally high-performing setting and fooling yourself into believing that the coincidence is meaningful. This is why I would like to see you try things like searching for irrelevant circuits. As a last comment, in your rebuttal you refer to whether this generalizes outside of the case of Alpaca. I'm actually much less concerned about whether you generalize to other language models, which I would accept you can, than whether you generalize to other tasks and circuits. You are advocating for this method on the basis of what is essentially a case study of searching a single set of parameters for behavior on a single very constrained task. --- Reply to Comment 1.1.1: Title: Evidence that IIA is Well Calibrated Comment: We appreciate the critical assessment of IIA, but we believe that **there are already results in the paper where we have low IIA, showing that the method is calibrated**. According to our randomized alignment baseline, 50% IIA is the floor for performance on the boundary tasks and 60% IIA is the floor for performance on the midpoint and bracket identity tasks. IIA is an accuracy metric, and so a gradient of success should not be surprising; we consider values below the Alpaca model performance of 85% to be a failure to identify the causal variable.
Small differences in accuracy can hide crucial aspects of the task! In Figure 4 "Left Boundary", of the many locations where we attempt to identify a variable, there are fewer than ten where we successfully identified the causal variable. **Of the remaining values, many are at random chance and others show some signal, but are below the threshold of strong evidence**. There are also low IIA values for two algorithms: in Figure 4, "midpoint" and "bracket identity" show no strong evidence of these variables being represented. **All of the IIA values are low enough to be considered failures to confirm the hypotheses, and most are at random chance**. **The lower IIA values in our main results provide calibration for the user** and show that this method cannot simply optimize to achieve perfect IIA wherever there is a representation that impacts the output. When compute is available, we will run a baseline with a randomly initialized model to add another point of calibration. Compute is scarce for us right now, though. --- Rebuttal 2: Title: general comment Comment: I think that the approach here is potentially very valuable, but this paper remains a case study of a single task, and the evidence is extremely limited for the reasons I've given. It provides a single very limited example without any further validation of the method, e.g., by showing that correcting the circuit found can fix an incorrect judgment. --- Rebuttal Comment 2.1: Title: New Baseline with Randomly Initialized LLM Comment: Thank you for your continued engagement. We were able to pursue a version of the random-LM experiment you suggested, and this proved fruitful. We randomly initialized a model, which resulted in near 0% task performance on the price-checking task, as expected.
After running Boundless DAS on the first layer of the random LLM, the alignments to each token's residual stream ranged from 0% to 69% IIA, which is comparable to a most-frequent-label dummy model (66%). For the representations with near-chance IIA, this means that we were able to find a distributed neural representation that shifts the probability mass to "yes" or "no" but does not differentiate between the two. We also found that we could find a distributed neural representation that shifts the probability mass to the tokens "dog" and "give" instead of "yes" and "no", so this really is random causal structure that emerges from the massive number of parameters in the randomly initialized LLM. It's quite a neat result that we plan to include in the paper! **Crucially**, we found that IIA drops to 0 for this run on all of our robustness checks from the paper. In addition, it drops to 0 if we use it on a different random LM. These results nicely complement the ones we reported earlier for random rotation matrices on the highly structured, pretrained Alpaca model. These results will help calibrate people on the metric!
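For readers trying to calibrate the metric under discussion: IIA is an accuracy over counterfactual (base, source) pairs, i.e., the fraction of pairs on which the intervened network's output matches the counterfactual output dictated by the hypothesized causal model. A toy sketch, purely illustrative (`causal_cf` and `model_cf` are made-up stand-ins, not the paper's code):

```python
def iia(pairs, model_cf, causal_cf):
    """Interchange intervention accuracy: fraction of (base, source) pairs
    on which the intervened network's output matches the hypothesized
    causal model's counterfactual prediction."""
    hits = sum(model_cf(b, s) == causal_cf(b, s) for b, s in pairs)
    return hits / len(pairs)

# Toy task "is x between lo and hi?" with inputs (lo, hi, x). The hypothesized
# causal model has a "left boundary" variable (x >= lo); the interchange
# intervention takes that variable's value from the source input.
def causal_cf(base, source):
    lo, hi, x = base
    s_lo, _, s_x = source
    return (s_x >= s_lo) and (x <= hi)

# A perfectly aligned "network" for illustration only.
model_cf = causal_cf

pairs = [((2, 9, 5), (4, 9, 3)), ((1, 6, 7), (2, 8, 5))]
print(iia(pairs, model_cf, causal_cf))  # → 1.0
```

A random or misaligned intervention would make `model_cf` disagree with `causal_cf` on many pairs, driving the score toward the chance floor set by the label distribution, which is the calibration question raised in this thread.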
Summary: This work proposes an extension to Distributed Alignment Search (DAS) called "Boundless DAS" that replaces the brute-force search with learned parameters in order to scale the method to large language models (LLMs). The method is evaluated on the Alpaca 7B model, tasked to output "yes" or "no" to a simple numerical problem such as "is X between Y and Z?". Experimental analysis showed that the method is able to scale and finds an alignment with two potential causal mechanisms. Further experiments also show that the method is robust to slight modifications of the model's prompt (an additional sentence in the input, a change of output format). Strengths: This work proposes a new explainability method that is scalable to LLMs, which is an important research question as the field quickly progresses towards larger models that become part of real-life products. Hence, research to understand the inner mechanisms of LLMs is very important. The proposed method is well-tested on various input/output perturbations of the target task, showing the robustness of Boundless DAS. Weaknesses: - the technical aspect of the paper is challenging to understand for someone who is not familiar with the field of causality - the scope of the task used to evaluate the method is very narrow. Some discussion on how this can be applied to more critical scenarios where interpretability is required (such as the medical, financial, or legal domains) could improve the paper. - It is mentioned that a good causal model could also explain the errors of the system. However, the current method does not explain why the model fails on certain examples. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Out of four possible causal mechanisms, two seem more plausible, but it is not clear which one is used by the model: "left boundary" or "left and right boundary"?
Can we identify which causal mechanism is most likely being used, or is it the case that the model can use one or the other depending on the input? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Some limitations are mentioned. However, one limitation of this work that should be mentioned is the relatively narrow scope of the target task being evaluated. This may be ok if the authors discuss how to transfer their method to more "realistic" scenarios in which interpretability is required (such as the medical, legal, or financial fields). Another possibility is that it is challenging or impractical to apply the same type of analysis in more complex scenarios. Either way, the paper could benefit from this type of discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We *thank* the reviewer for the *thoughtful comments*. We **address the major concern around the applicability of Boundless DAS as well as how to interpret alignment results** where multiple causal models reach good interchange intervention accuracy (IIA). Here, we address all concerns with point-by-point responses. > **Q1:** "the technical aspect of the paper is challenging to understand for someone that is not familiar with the field of causality" **A1:** Thanks for your comment! Given the page limit, we did not include a detailed introduction to DAS [Geiger et al., 2023]. **In the next revision, we will provide a full detailed discussion of DAS** in the appendix to ground Boundless DAS, and we will **sharpen the notation in Section 3**. We also want to note that we provide a full pseudo-code implementation of Boundless DAS in Appendix A.5; this may provide additional clarification. > **Q2:** "the scope of the task used to evaluate the method is very narrow." **A2:** While it is true that we only analyze the Alpaca model with a very specific task, **the goal of this paper is to present an alignment search method that works well with LLMs such as Alpaca-7B and beyond**. We see no obstacle to applying the method to additional models and tasks, and we will release our codebase, which scales to any decoder-only model. The main bottleneck for us to try other tasks with Alpaca is that it is hard to find reasoning tasks that Alpaca does well. We outline our task-finding procedure in Appendix A.2. For domain experts working with stronger models, this kind of exploration might be easier. > **Q3:** "the current method does not explain why the model fails in certain examples." **A3:** **This is a good point and suggests great next steps!** At present, errors could be explored by seeing how their computations do not conform to the high-level model under investigation. 
One could also hypothesize high-level causal models of the errors and assess them as explanations for the underlying problem. > **Q4:** "Out of four possible causal mechanisms, two seem more plausible, but it is not clear which one is the one used by the model: "left boundary" or "left and right boundary"?" **A4:** In this case, aligning the “left and right boundary” model actually **entails** aligning the “left boundary” model, since the latter's variable occupies one of the subspaces (i.e., the subspace for the left boundary) of the former causal model. In other words, **the “left boundary” causal model can be seen as an abstraction of the “left and right boundary” causal model**, but not the other way around. This also means our model implements both checks rather than just one. In a case where we could align only the “left boundary” but not both, we would observe lower IIA for "left and right boundary". --- Rebuttal Comment 1.1: Title: response to authors Comment: Thank you very much for your clarifications; the contributions are now clearer to me. I would still like the authors to discuss in their revised paper the limiting factor for this method, which is (if I understand correctly from "_The main bottleneck for us to try other tasks with Alpaca is that it is hard to find reasoning tasks that Alpaca does well._") that in order to find a causal alignment, the model must first be very good at solving the task on its own. I think this is very important to highlight for transparency reasons in the revised paper. The revised paper would benefit from some discussion around this, such as why this is the case, and what future work could help mitigate this limitation. Thank you for your time and effort in this important research direction. --- Reply to Comment 1.1.1: Title: Thank you for your comments. We will revise our paper. Comment: Yes, we will add discussion of this – thanks! It's really primarily an issue in the phase where we are seeking to motivate Boundless DAS. 
For that, we need to try to be sure that there is a systematic solution to identify. If we accept that Boundless DAS is a useful tool, then we can apply it to an under-performing model. It could be used to show that the model conforms to a very strange and problematic high-level causal model, or to accumulate some indirect evidence that the model doesn't implement any reasonable causal models. But one does need to trust the method at this stage, whereas we are currently seeking to build an argument for the method itself.
Rebuttal 1: Rebuttal: We thank the reviewers for their incisive comments and questions. In this comment, we summarize our overall response to major points, emphasizing new experiments. This is followed by point-by-point responses. 1. **On the question of novelty and utility**, we note that Boundless DAS is the only method in this class of causal explanation methods that can even be applied at the scale of Alpaca or above. This is a phase change in terms of what is possible -- after all, models this size and larger are the most relevant to the field right now. 1. We also want to take this opportunity to clarify **what Boundless DAS or DAS [Geiger et al., 2023] could explain.** Having a good alignment means that the high-level model is a causal abstraction of the neural network, in terms of both factual and counterfactual behaviors. The two models can have different structures, and the network may be abstracted by many other causal models as well. Different causal abstractions highlight different explanatory aspects of model behavior. 1. **How should the results be interpreted when both the "left boundary" and "left and right boundary" causal models find strong evidence in Section 4.3?** The second model strictly extends the first, but the network may be implementing both solutions. The strong IIA scores show that one can reason about either high-level model and not be misled. We elaborate on this point in our responses to *Reviewer SQSS* and *Reviewer AVCz*. 1. **Reviewers note that we report experiments only for Alpaca.** We do try to broaden this in Appendix A.2 to other models in this class, but we find that publicly accessible models often fail to solve these reasoning tasks robustly. We argue that our experiments achieve our primary goal of showing that causal explanation methods are effective at this scale. We have a limited compute budget (and limited space in the paper), but we've shown that Boundless DAS can easily scale beyond Alpaca when budget allows. 
We will also release our code, which is compatible with any decoder-only model. 1. **Building on the above: the original DAS paper [Geiger et al., 2023] reports on a number of other experiments, and prior work on causal abstraction includes even more diverse experiments.** Boundless DAS is a more scalable and general version of all these methods, and so we think all this accumulated evidence ultimately supports the applicability of Boundless DAS for causal explanations across many models and tasks. 1. **New experiments have been added to validate our main results in Section 4.3.** The reviewers asked for more baselines to more fully contextualize our results. To this end, we report IIAs under the same settings with random rotation matrices in the attached pdf. (We are not able to explore randomly initialized LMs because IIA is ill-defined if the model is not able to perform the task.) As shown in **Figure 1 of the attached pdf**, the IIA score drops from **0.83** to **0.53** at layer 10, token position 75 for our “left and right boundary” causal model. Other positions drop significantly as well. These results also help us to calibrate IIAs in the case of unbalanced labels (e.g., our two control causal models reach **0.60** and **0.61** IIA at the same location with a random rotation matrix). These results suggest that good IIAs for our first two causal models do not come for free. We will include these baseline IIAs in the next revision to better contextualize our findings. 1. **New experiments have been added to validate our alignment generalization ability in Section 4.5.** Reviewer ExBe asked about tightening up the series of experiment-based arguments we offer in Section 4.5. To this end, we greatly expanded the range of inputs we consider, using **20 random contexts generated by GPT-4@Aug05-2023** with our “left and right boundary” causal model. Mean IIAs along with the prompts we used are included in the attached pdf. 
As shown in **Figure 2 of the attached pdf**, the averaged IIA at layer 10, 6-th token position (relative, since we have prefixes of different lengths) is **0.83** with a standard deviation of 0.02. This aligns with the two prompts we reported in the paper (**0.80** for the first and **0.84** for the second prompt, as shown in Figure 9 of the Appendix). The overall correlation of IIAs with our vanilla “left and right boundary” causal model is **0.99**. These results *greatly strengthen* our claims about alignment generalization. We will update the current results with these new prompts in the next revision. 1. **We focus on success cases for Alpaca, but we could use the method to understand errors.** This would proceed in the same mode as in the paper: one would hypothesize a source of the errors and use Boundless DAS to assess the idea. Reviewer SQSS suggested this, and we are eager to explore it in future work. 1. **On writing and notation clarifications and suggestions:** The reviewers offered numerous suggestions for **improving terminology and notation** and improving the paper overall. We are grateful for all this input and will make use of it in our next revision. Pdf: /pdf/a596467a26541475568ad322b1cfe9d0db2df63e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a method called Boundless DAS, an algorithm that can automatically test the "alignment" between human-specified symbolic algorithms and the computational structure found in the weights of a neural network. The authors apply Boundless DAS to a real-world 7B parameter language model called Alpaca and find evidence of the implementation of a particular symbolic algorithm that computes whether some number $q$ is in the range $[a, b]$. Strengths: Really cool paper! * I like the idea of using gradient descent to create a correspondence between a symbolic algorithm and a set of neural network features. Continuous optimization feels a little easier to tackle (initially) than discrete combinatorial stuff. * It's impressive that this works on a real-world, billion-parameter model. * While the tricks to go from DAS to Boundless DAS seem reasonable and slick, my favorite part is the empirical section — inspecting the alignments that the algorithm discovers, looking for generalization, etc. Well done. Weaknesses: Not much here. If I had to say anything, the paper is quite dense and hard to follow. For newcomers, it requires somewhat extensive background reading on causal abstraction and the previous DAS work(s). Otherwise, the writing is fairly clear, ideas are well-motivated, claims are well-supported and analyzed, etc. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * What kinds of tasks have you tried DAS or Boundless DAS on? Do you ever find gradient descent struggling to find a faithful solution? Do you have intuitions for what kinds of tasks this works well on, and what it doesn't? How about algorithms with lots of input variables, or complex ones requiring lots of intermediate ones? * I'm trying to get a sense for how far we can push Boundless DAS; what do you think? For any challenges you foresee, what do you think the right next steps are? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We *thank* the reviewer for the *inspiring comments about future work* that can be built on top of Boundless DAS. We **provide more discussion of possible future extensions**, and we will revise our paper to reflect these additional thoughts. Here, we address all concerns with point-by-point responses. > **Q1:** "it requires somewhat extensive background reading on causal abstraction and the previous DAS work(s)." **A1:** Thanks for the comment! We plan to **use the additional page to add an introduction to DAS** [Geiger et al., 2023], and to add **a full detailed appendix on DAS** to give readers more context about the background and how Boundless DAS improves upon it. > **Q2:** "What kinds of tasks have you tried DAS or Boundless DAS on? Do you ever find gradient descent struggling to find a faithful solution? Do you have intuitions for what kinds of tasks this works well on, and what it doesn't? How about algorithms with lots of input variables, or complex ones requiring lots of intermediate ones?" **A2:** **The original DAS paper [Geiger et al., 2023] provides a set of examples, including a synthetic logical inference task as well as a natural language inference (NLI) task with smaller models.** Earlier causal abstraction ideas have been applied to other NLI problems, grounded language understanding tasks, and structured image tasks. We haven't tried Boundless DAS on most of these, but Boundless DAS is a more scalable and more general approach, so such applications should yield results at least as good. **For the current work, we wanted to focus on tasks that LLMs solve well with no fine-tuning.** The bottleneck in trying other tasks is that, given our compute and time limits, we struggled to find a publicly available instruction-following model that solved interesting reasoning tasks in a zero-shot fashion (Appendix A.2). We did find failure cases, such as the control conditions mentioned in the paper! 
These control models are interpretable models, but we fail to find clear evidence of alignment for them. It has so far not been the case that gradient descent gets stuck in local optima, though this is certainly possible. **On aligning more complex causal graphs: yes! We are working on it right now.** In this paper, we align only individual causal variables, or a high-level model with a single layer of abstraction. The next step is to align a full, multilayer causal graph with a neural model, and to evaluate the faithfulness of the alignment more rigorously. > **Q3:** "I'm trying to get a sense for how far we can push Boundless DAS, …, what do you think the right next steps are?" **A3:** **The biggest obstacle we see is the need to hypothesize a high-level causal model.** All explainability methods require a hypothesis of some sort, but often the burden seems smaller and so is less of a blocker to running the analysis (e.g., for probing, one picks a supervised learning task; for feature attribution, one picks a set of neurons to analyze). We would like to automate the process of proposing high-level models, combined with Boundless DAS, to generate a faithfulness hypothesis or circuit that is realized by the model. Another interesting challenge would be **applying mechanistic interpretability tools to controlled language generation.** Thus far, we have only applied Boundless DAS to single-token or fixed-length language generation tasks. It would be interesting to see how a token-level intervention through Boundless DAS causally affects the iterative decoding process where the generation length is unbounded. On the application side, a good counterfactual alignment (even though it is not a structural alignment) helps us steer the model in a robust way. One future step we are considering is to align, for instance, story genre variables in the LLM and **steer the model at inference time** to generate different genres given a prompt starter. 
--- Rebuttal Comment 1.1: Comment: Thanks for the careful and detailed replies! I appreciate the inclusion of further DAS background. I think this is an interesting & potentially-valuable direction that deserves to be investigated further and discussed in the community.
Summary: This paper proposes a method for interpreting neural models by learning an alignment of their representations to a specified interpretable causal model. The method extends prior work, namely Distributed Alignment Search (DAS), by addressing a key limitation that prevents the scaling of DAS. The limitation is overcome by learning a set of masks that selects subsets of neural representations to align to different parts of the interpretable causal model. The method is evaluated experimentally by aligning Alpaca representations to a set of correct and incorrect hand-written causal models. Strengths: * The problem of interpreting large models is important. * The approach is reasonable and scales a previous approach. * The experimental results were convincing. Weaknesses: * Section 3 was hard to read. See the questions below. * Section 3 did not feel self-contained. The source of intractability in DAS did not jump out to me. * Giving the computational complexity for DAS and boundless DAS would make the efficiency argument stronger, unless they are the same. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. What is the value of introducing variable $Z$? It looks like only $X$ are used after section 3.1 2. Is $\Pi$ on line 99 used anywhere else? 3. In line 115, '$b_j$ is restricted to be a multiple of $b_{j-1}$ with a factor of 2'. However, $b_0 = 0$. What does this mean? 4. Why are target variables $Z_j$ bold but high-level variables $X_j$ not? 5. In Eq. (1), what is the difference between $F_N^*$ and $F_N$? Is $F_N$ defined? 6. What is the problem that was brute-forced in DAS, that $M$ learns to solve? Section 3.1 would be the best place to highlight intractability. 7. Line 37 would be another nice place to highlight limitations in scaling DAS. What is the largest dimensional representation it has been applied to? What is the runtime improvement due to the method? 8. 
Line 216: "expectation" Clarifying the shortcoming of DAS, section 3, and giving the computational complexity would convince me. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The limitations were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We *thank* the reviewer for the *great suggestions on detailing our improvements* over DAS and for **highlighting the scalability that our method brings**. We will use the additional page to clarify our improvements and provide a detailed introduction to the DAS method [Geiger et al., 2023]. Here, we address all concerns with point-by-point responses. > **Q1:** "Section 3 did not feel self-contained, ..., Giving the computational complexity for DAS and boundless DAS would make the efficiency argument stronger." **A1:** We will revise our draft to make the improvements to DAS clearer, focusing on how the changes enable us to scale. In short, DAS learns a rotation matrix but requires manual search to determine how many neurons are needed to represent the aligning causal variable. Boundless DAS learns the boundaries automatically (i.e., how many neurons are needed is determined by the "soft" boundary index via boundary mask learning). For instance, given a representation with a dimension of 1024, DAS should in principle be run for all lengths k from 1 to 1024. In practice, this would be infeasible, so some subset of the lengths needs to be chosen heuristically, which risks missing genuine structure. Boundless DAS turns this search process into a mask learning process. See our comments below for a more detailed comparison in terms of time complexity. > **Q2:** "What is the value of introducing the variable $Z$? It looks like only $X$ are used after section 3.1" **A2:** The variable $Z$ is only used to define the interchange intervention operator and the projection operator, and plays no "contentful" role in the definitions; $X$ is the original representation space. We will make this clearer! > **Q3:** "Is $\Pi$ on line 99 used anywhere else?" **A3:** Thanks for catching this notational error. It should be just $\tau$, representing the alignment mapping between a high-level variable and neural representations. 
> **Q4:** "In line 115, '$b_j$ is restricted to be a multiple of $b_{j-1}$ with a factor of 2'. However, $b_0 = 0$. What does this mean?" **A4:** Since the $b_j$ mark the boundaries here, $b_0$ is equal to 0 as the starting index of the first boundary. Essentially, Eqn. 3 generates a high-pass filter ($Mask_{s}$) between [$b_{j-1}$, $b_{j}$]; the gating factor is approximately 1.0 within the bounds and approximately 0.0 outside them. We then apply this filter to the source representations, and the inverse filter (1.0 - $Mask_{s}$) to the base representations, before adding them up to get our "soft-intervened" representations. We will revise this section to be clearer in the next revision. > **Q5:** "Why are target variables $Z_j$ bold but high-level variables $X_j$ not?" **A5:** The target variables $Z_j$ are sets of variables, whereas the $X_j$ are individual variables. We wanted to indicate this type difference with the bold typeface. > **Q6:** "In Eq. (1), what is the difference between $F_{N}^{*}$ and $F_N$? Is $F_N$ defined?" **A6:** $F_{N}$ represents the causal mechanism of causal variable $N$ before the distributed intervention, whereas $F_{N}^*$ represents the mechanism after the distributed intervention. For instance, $F_{N}(v)$ means calling the “rest” of the forward function (i.e., the causal mechanism of $F_N$) after setting $N$ to activations $v$. We will clarify $F_N$ and $F_{N}^{*}$ in our next revision. > **Q7:** "What is the problem that was brute-forced in DAS, that M learns to solve? Section 3.1 would be the best place to highlight intractability." **A7:** Thanks again for raising this issue. This helps us to better ground our work and highlight our improvements. As clarified above, Boundless DAS is approximately **O(N*m)** quicker, where N is the number of representation dimensions and m is the number of causal variables we are aligning. In the case of LLMs, this scales DAS to models with billions of parameters like Alpaca-7B and beyond. 
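As an aside for readers, the soft intervention described in A4 above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the rebuttal's description, not the paper's actual code: `soft_boundary_mask` and `soft_interchange` are hypothetical names, the boundaries are fixed here rather than learned, and the learned rotation into and out of the aligned basis is omitted.

```python
import numpy as np

def soft_boundary_mask(d, left, right, temperature=0.1):
    # Differentiable "high-pass filter": approximately 1.0 for indices
    # inside [left, right), approximately 0.0 outside. Built from two
    # sigmoids so the boundary positions could be learned by gradients.
    idx = np.arange(d)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sigmoid((idx - left) / temperature) * sigmoid((right - idx) / temperature)

def soft_interchange(base, source, left, right):
    # Swap the (soft) subspace [left, right) of `source` into `base`:
    # mask * source + (1 - mask) * base.
    mask = soft_boundary_mask(base.shape[-1], left, right)
    return mask * source + (1.0 - mask) * base

# A boundary spanning indices 2..4 of an 8-dim representation:
m = soft_boundary_mask(8, 2, 5)
out = soft_interchange(np.zeros(8), np.ones(8), 2, 5)  # ~1 inside, ~0 outside
```

In the actual method, the boundary parameters (`left`/`right` here) would be optimized jointly with the rotation by gradient descent, with the temperature annealed so that the soft mask approaches a hard selection of neurons.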
> **Q8:** "Line 37 would be another nice place to highlight limitations in scaling DAS. What is the largest dimensional representation it has been applied to?" **A8:** The largest was BERT, with a hidden dimension of 768 and two variables, in DAS [Geiger et al., 2023]. Moreover, the result in the original paper is not guaranteed to be optimal, since only a limited number of dimension settings were tried. In this paper, each aligning representation has a dimension of 4096, with two variables aligned simultaneously. > **Q9:** "What is the runtime improvement due to the method?" **A9:** It is **O(N*m)** quicker, where N is the number of total dimensions and m is the number of causal variables we are aligning. Essentially, to brute-force search over intervention dimensionalities for a token representation of size 4096, we would need to scan through {1,2,3,...,4096} for each aligning causal variable, with each choice determining how many neurons we swap values for. With Boundless DAS, we only need to run the procedure once. > **Q10:** "Line 216: 'expectation'" **A10:** Thanks! We will correct the typo in the next revision. > **Q11:** "Clarifying the shortcoming of DAS, section 3, and giving the computational complexity would convince me." **A11:** Thanks for the great suggestions. We believe these concerns are addressed in the responses above! We will expand the discussion of the complexity improvements in the next revision. --- Rebuttal Comment 1.1: Comment: Could you also provide the full computational complexity of DAS? I believe my concerns are addressed in the rebuttal. As all the concerns were presentation-related, I can only increase my score to weak accept without seeing the next version.
Multiclass Boosting: Simple and Intuitive Weak Learning Criteria
Accept (poster)
Summary: This paper tackles the problem of multi-class boosting by proposing a novel definition of weak learning based on list boosting. The authors define a weak-learning condition for multi-class learning based on the simple principle of doing slightly better than random guessing. This definition is then used as a stepping stone to filter the potential classes for a sample recursively, thus reaching a stage where either the binary weak-learning condition can be applied, or the weak-learning condition BRG leads to boostability. The authors propose several theoretical justifications and definitions for the results presented in the paper. Strengths: # Motivation Multi-class boosting has received a lot of attention since the introduction of binary boosting, in particular since the extension of the weak-learning condition from the binary to the multi-class setting is tricky. As such, the novel definition of weak learning introduced in this paper is quite interesting, especially since its simplicity mirrors the binary one. # Theoretical results The strongest point of this paper. The authors propose several theoretical justifications throughout the paper, and the generalization result in Theorem 2 is quite promising, as is the relation between weak PAC learning and list PAC learning in Theorem 3. Weaknesses: # Existing frameworks There are several existing frameworks for multi-class boosting based on specific weak-learning conditions (some of which are cited in the paper). I strongly think that an actual comparison between the proposed framework and the existing ones should have been included, particularly for methods such as Adaboost.MM and Adaboost.MR and their WL conditions, and the WL framework introduced in "Multiclass boosting and the cost of weak learning." [6]. # Experimental results I'm not entirely sure why no experimental results are proposed in the main paper. 
There are several multi-class boosting methods that have been successfully used in practice, despite their WL conditions; as such, it is important for new/novel methods to be compared to state-of-the-art approaches, in particular to show how the proposed method fares against state-of-the-art ones both in accuracy (or other multi-class performance measures) and in runtime. The recursive nature of Algorithm 1 and its dependence on 3 hyper-parameters might lead to prohibitive runtimes, even though several optimizations are mentioned in the main paper. # Conclusion and future works I strongly think that a section on future work and possibilities would've been more appropriate than the proof of Theorem 2. More globally, the paper could benefit from a better global organization. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * How does the current framework compare to existing ones? Are the WL conditions introduced in [6] and [16] stronger than BRG? * Any experimental results available? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations have not been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
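To make the discussion of weak-learning conditions concrete for readers, the binary condition the review alludes to ("slightly better than random guessing") and its naive k-class analogue can be written as follows. This is a standard textbook-style sketch with edge parameter $\gamma$, not the paper's exact BRG definition:

```latex
% Binary weak learning: for every distribution D over examples,
% the weak learner returns h whose error beats a fair coin flip:
\Pr_{(x,y)\sim D}\big[h(x)\neq y\big] \;\le\; \tfrac{1}{2}-\gamma .
% Naive k-class analogue (slightly better than uniform random guessing):
\Pr_{(x,y)\sim D}\big[h(x)\neq y\big] \;\le\; \tfrac{k-1}{k}-\gamma .
```

Part of the reviewer's question is precisely how the paper's condition relates to such formulations and to the stronger WL conditions of [6] and [16].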
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and suggestions. - “comparison between … existing ones” - The reviewer implies that we have not included a comparison to prior works, yet all of the works they mention are explicitly discussed in the paper. Adaboost.MR is mentioned in line 130 and is a rather old extension to multiclass boosting (other works such as [16] have generalized it). Adaboost.MM is given in [16], a framework we compare with both in the introduction and in the related work section. Similarly, [6] builds off of the framework of [16] and is discussed in our paper as well. To briefly explain the comparison to these results: first, we note that the conditions of previous works, in particular [6,16], assume the PAC setting and realizability w.r.t. a hypothesis class, whereas here these restrictions are removed. Moreover, these prior works require the WL to be agnostic and to learn under any re-labeling of the data. Thus, these methods operate in the rather narrow setting of assuming realizability of the data while also having access to an agnostic learner. However, it may be interesting to examine, when restricting to the PAC setting, how these conditions are related. - “lack of experiments” - We emphasize that the main contribution of this work is the theoretical framework extending classical boosting theory to the multiclass setting; the algorithmic contribution is not the main focus of the paper. The question of finding the simplest and most minimal assumptions on weak learnability, as in the binary case, has been studied in various works over the years, and in this work we provide the first intuitive and simple condition that leads to a simple boosting procedure. We leave the questions of obtaining an optimal running time and performing empirical evaluations as interesting future work. We certainly hope this will be further explored in a following line of work. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. 
I'd like to further expand upon my previous questions, as it seems both have been misunderstood. Concerning the comparison to previous methods: while I do agree that both settings have been cited in the paper, I was mostly looking for a theoretical comparison, i.e., casting the WL conditions of [6] and [16] (or other frameworks, however ancient they might be) in the present framework and studying how they fare, similarly to how [16] cast Adaboost.MR in their framework and showed that the WL condition is too strong for boostability. Concerning the empirical results: unless I'm missing something, both Algorithms 1 and 3 are straightforward to implement. As such, it's puzzling why no empirical result is provided in the paper, at least as far as error rates are concerned. I do agree, though, that runtime optimization is not in the scope of the paper. Also, the paper is still lacking a Conclusion/Discussion section. --- Reply to Comment 1.1.1: Comment: Thank you for your response. The reviewer has requested that we cast the WL conditions of previous works, e.g. [16], in the present framework, similarly to how [16] cast Adaboost.MR in their framework. Notice that this comparison is not well defined. The reason is that all these previous works place strong assumptions on the data. For example, all past works that we are familiar with (e.g., [6], [16], Adaboost.MR, etc.) assume there is a *known* hypothesis class that the data is realizable with respect to. In this work we entirely remove that assumption. To clarify: is your question how these two frameworks compare if we are forced to assume realizability w.r.t. a known class? We emphasize that our goal was to provide a framework that generalizes the binary setting, removing restrictive assumptions about the data while relying only on weak learnability. Our result is the first multiclass boosting framework without these assumptions, building on binary boosting theory.
Summary: Boosting is an important fundamental question in SLT, starting with the celebrated AdaBoost algorithm. While the original results consider the binary setting and the condition on the weak learner is straightforward, it is not trivial to formulate the same result for classification with more than two classes. I did not have the chance to go through the proofs carefully due to time constraints. I do not see where there could be a potential mistake that cannot be fixed. I did not read through section 4 and I am not familiar with list learning. pros: - interesting fundamental problem - the paper is well written - a new condition for multiclass boosting - relatively simple proof - connection to list learning cons: - the only disadvantage I can see is that this paper would better fit other venues minor comments and questions: - should Theorem 1 be stated only for the realizable case, since you are relying on the compression scheme generalization bound? - it is better to refer to Theorem 30.2 (than to Corollary 30.3) in Shalev-Shwartz and Ben-David since your "outer" risk is equal to zero only with probability close to one - perhaps this is obvious or I have missed it, but how does your condition compare to conditions of existing results for multi-class boosting? Strengths: - Weaknesses: - Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and suggestions. - “Theorem 1” - Notice that in our framework (in contrast to previous works on multiclass boosting) the realizability assumption is not needed, and the empirical BRG assumption suffices to get the result. The reason is that by using compression bounds we only need to show that the boosting algorithm’s output is consistent with the training data, which is shown in the proof of Theorem 2. - “Theorem 30.2” - We agree with the reviewer that this would be a better reference. We remark that in our current version of the paper we use a more accurate (and more general) statement of a compression bound that we prove in the paper. We will incorporate this in the final version of the paper as well. - “compare to conditions of existing results” - First we note that the conditions of previous works, in particular [6,16], assume the PAC setting and realizability w.r.t. a hypothesis class, whereas here these restrictions are removed. Moreover, these prior works require the WL to be agnostic and learn for any re-labeling of the data. Thus, these methods operate in the rather narrow setting of assuming realizability of the data, while also having access to an agnostic learner. However, it may be interesting to examine, when restricting to the PAC setting, how these conditions are related. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I am trying to understand why it is possible to have Theorem 2 for non-deterministic labels. It seems the empirical BRG condition is not concerned with the underlying distribution at all, but the statement "for any $\mathcal{D}$ over $(X \times Y)$" in Theorem 2 does not add up. If we can guarantee $\Pr[H(x) \neq y] \leq \varepsilon$ for any $\varepsilon$, then we could not possibly have a noisy label. The risk value should be at least $1 - \mathbb{E} \max_y \Pr(y \mid x)$. 
--- Reply to Comment 1.1.1: Comment: This is similar to binary boosting, where one can drop any explicit assumptions about the data, and instead only assume weak learnability holds (see discussion in Section 2.3.3 in the book "Boosting: Foundations and Algorithms" [19]). In essence, the weak learnability assumption implicitly asserts that the labels are not entirely noisy, and is in some sense a relaxed notion of the realizability assumption. For example, it removes the assumption that there is a *known* hypothesis class that generates the data. A similar idea is used here, where it suffices to only assume the BRG condition holds. Lastly, we note that indeed, the empirical BRG condition itself is not concerned with the underlying distribution. However, it is assumed in our theorem to hold for samples drawn from the underlying distribution.
Summary: The paper proposes a novel extension of traditional (non-gradient) boosting to multiclass prediction problems. In particular, it shows that a novel relaxation of the weak learning criterion allows the definition of a boosting algorithm with the usual success guarantee of reducing the empirical misclassification rate on the given data sample arbitrarily with high probability. This relaxation requires the base learner to also accept as a “hint” a subset of labels and, for each possible subset size k, to return a hypothesis that achieves an error rate better than 1-1/k with high probability. The proposed boosting algorithm makes use of this property by recursively eliminating candidate labels from the hints of each training example based on the hypotheses produced by previous iterations. Moreover, the paper shows a connection of the proposed theory to list-PAC learning and in particular gives a new characterisation of list-PAC learnability. Strengths: - The paper develops a beautifully simple extension of boosting to the multiclass case that, in contrast to previous approaches, does actually work to define a successful strong learner - The connection to list learnability is very interesting - The paper is well written and accessible Weaknesses: - Papers on classical boosting, in contrast to the additive ensembles produced by gradient boosting, feel rather narrow at this point. In particular, generalisations from misclassification rate to proper statistical loss functions are unclear. - There is no empirical performance investigation of the proposed recursive boosting algorithm Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In footnote 3 the authors state the assumption that m0 depends only poly-logarithmically on m. It is unclear to me how strong an assumption that is and in fact also how sensitive the feasibility of the method is to violations of that assumption. 
- My other question is how to address the potential limitation listed below. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: One potential limitation might be the definition of weak learners that satisfy the weak learning assumption. This is because it seems to require the restriction to different subsets of classes for different inputs. It is not immediately obvious to me how one would incorporate this, e.g., with a decision tree (if it depends on the concrete model at all). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and suggestions. - “Papers on classical boosting … narrow” - Gradient boosting methods are valuable algorithmic tools but lack a theoretical framework for studying the task of interest: multiclass classification, with respect to classification loss. Unlike binary boosting theory, gradient boosting methods cannot characterize sample complexity needed to reach a desired level of accuracy or reveal fundamental limitations. In contrast, our approach directly generalizes the theoretical boosting framework for multiclass classification that can answer these questions for general multiclass learning settings. Moreover, it does so through the classification loss itself, rather than proxy losses, as is the case for the gradient-based approach. - “empirical performance” - We note that, as the reviewer points out, our generalization provides a “beautifully simple extension of boosting to the multiclass case”. As such, and as discussed above, we view the main contribution of this work to be the generalization of boosting theory. The algorithmic contribution is not the main focus of the paper. We leave the question of obtaining an optimal running time and performing empirical evaluations as an interesting future work. We certainly hope this will be further explored in a following line of work. - Footnote 3 - In the context of PAC learning, m_0 is only dependent on gamma, the edge of the WL. The comment says that the results hold even if m_0 is sublinear in m. We now see that this footnote is confusing and we will re-phrase it in the final version. - “weak learners that satisfy the weak learning assumption” - observe that when given a list of, let’s say the top-5 choices of labels y per example x, we can “re-name” the labels of each example to be each position in the list! Therefore, we have effectively reduced the original label space to contain only 5 labels. 
That is, because the label names themselves don’t matter, we can do this re-naming and apply any standard learning algorithm as a WL (e.g., decision trees). We hope this answers the question, and will clarify this point in the paper as well. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. My evaluation of the paper remains positive, but I still was not able to understand the practical construction of a weak learner that satisfies the required condition. I encourage the authors to clarify this even more in the final version. Further, as a complete side note, I do not fully agree with the characterisation of gradient boosting given in the rebuttal. In particular, note that the loss functions used in gradient boosting are not surrogate losses but correspond to log likelihoods of probabilistic response models. For classification, this is in a way more meaningful than just considering the zero/one loss because the loss values take the uncertainty of the Bernoulli response into account, which is fundamental if the model is supposed to be used for any practical purpose. Moreover, depending on the applied restrictions to the weight space, uniform convergence results should hold similar to those for 0/1 loss. Again all of this is a side note. I mentioned this "weakness" mainly to indicate that there would potentially be an even bigger interest for future results for gradient boosting. It takes nothing from the merit of the given excellent work. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We agree that log likelihoods and related loss functions are important to study, as they provide valuable additional insights useful in practice. We also agree it would be an interesting future work to extend our current results to apply more broadly as the reviewer suggested.
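The label re-naming reduction described in this rebuttal thread can be sketched in a few lines of Python. This is purely an illustrative sketch with our own variable names, not code from the paper: each example's true label is mapped to its *position* in that example's hint list, turning the original problem into a k-class problem over positions.

```python
# Illustrative sketch of the "re-naming" reduction from the rebuttal:
# given a size-k hint list of candidate labels per example, relabel each
# example by the position of its true label in its own list, so any
# standard k-class learner (e.g., a decision tree) can play the weak
# learner. Names here are ours, not the paper's.

def rename_labels(hints, labels):
    """Map each example's label to its index in its hint list.

    hints  : list of length-k candidate-label lists, one per example
    labels : the true labels
    Returns (kept_indices, positional_labels); examples whose true
    label is absent from their hint list are dropped, since no
    hypothesis restricted to the hint can be correct on them.
    """
    kept, positions = [], []
    for i, (hint, y) in enumerate(zip(hints, labels)):
        if y in hint:
            kept.append(i)
            positions.append(hint.index(y))
    return kept, positions

# Toy example with k = 3 hints over a 10-label problem.
hints = [[4, 7, 1], [0, 2, 9], [3, 5, 8]]
labels = [7, 9, 6]  # the third example's label is not in its hint list
kept, pos = rename_labels(hints, labels)
# kept == [0, 1]; pos == [1, 2]  -> a 3-class problem over positions
```

After this relabeling, any off-the-shelf k-class learner trained on `pos` serves as the weak learner, since the label names themselves carry no meaning.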
Summary: This paper proposes a generalization of boosting to the multiclass setting. To this end, the authors introduce a new weak learning assumption for the multiclass setting based on a ‘hint’, which takes the form of a list of $k$ labels, named ‘Better-than-Random Guess’ (BRG). The authors present a main boosting method and provide a theoretical analysis with PAC guarantees. Finally, the authors demonstrate applications based on the framework of List PAC learning. Strengths: - This paper proposes a new weak learning assumption, which encompasses both the binary case (i.e., $k=2$) as well as the cases with $k>2$. - The main method is well-written. - This paper provides abundant theoretical analyses. Weaknesses: - Although this paper is based on theory, it lacks experiments, including synthetic ones, and there is no conclusion. - Due to the lack of experiments, there is insufficient comparison with previous works, with only half a page dedicated to it in section 1.2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could you provide some experiments, including synthetic ones? It would be very helpful for understanding this paper. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and suggestions. - “lack of experiments” - we emphasize that the main contribution of this work is the theoretical framework extending classical boosting theory to the multiclass setting. The algorithmic contribution is not the main focus of the paper. The question of finding the simplest and minimal assumptions on weak learnability as in the binary case has been studied in various works over the years, and in this work we provide the first intuitive and simple condition that leads to a simple boosting procedure. We leave the question of obtaining an optimal running time and performing empirical evaluations as an interesting future work. We certainly hope this will be further explored in a following line of work. - “comparison with previous work” - we refer the reviewer to the introduction, where we compare the current paper to various lines of work in the context of multiclass boosting. Moreover, the related work section covers other works in the broader context of boosting, starting from early extensions from the binary case and up to the most recent work in the area. The most related works, and most recent results on multiclass boosting theory, are [16] and [6], which are discussed in both sections.
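As a small numerical illustration of the baseline behind the ‘Better-than-Random Guess’ condition discussed in these reviews: a learner that guesses uniformly at random from a size-k hint list errs with probability exactly 1 - 1/k, which is the bar a BRG weak learner must beat. The following toy simulation is our own sketch, not from the paper:

```python
# A learner guessing uniformly from a size-k hint list errs with
# probability 1 - 1/k; the BRG condition asks a weak learner to beat
# exactly this random-guess baseline. Toy Monte Carlo check, with names
# of our own choosing.
import random

def random_guess_error(k, trials=200_000, seed=1):
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        hint = list(range(k))      # k candidate labels; truth is one of them
        truth = rng.choice(hint)
        guess = rng.choice(hint)   # uniform guess from the hint list
        wrong += guess != truth
    return wrong / trials

for k in (2, 3, 5):
    err = random_guess_error(k)
    baseline = 1 - 1 / k
    assert abs(err - baseline) < 0.01  # empirically matches 1 - 1/k
```

For k = 2 this recovers the familiar binary baseline of error 1/2, so the condition is a direct generalization of the classical weak learning edge.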
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Eliminating Domain Bias for Federated Learning in Representation Space
Accept (poster)
Summary: This paper addresses the significant challenge of statistical heterogeneity among clients in federated learning. To tackle this issue, the authors propose personalized federated learning, where each client can learn and apply its specific model parameters. Specifically, the authors introduce DBE (Domain Bias Eliminator), which comprises two crucial modules. The first module is a client-specific embedding designed to model the domain bias, while the second module is a regularization term incorporated into the objective function to encourage consensus in global representation learning. The effectiveness of these proposed modules is analyzed and demonstrated through theoretical results from domain generalization/adaptation. Moreover, extensive experiments are conducted to verify DBE's efficacy in reducing domain bias and its complementarity with other personalized federated learning methods. Strengths: 1. The paper is well-written, effectively conveying the core idea in a clear and understandable manner. 2. The authors address the persistent issue of non-iidness in federated learning, which remains a significant challenge. The visualization of embeddings pre- versus post-local updates provides valuable motivation for studying and understanding this problem. 3. The proposed DBE (Domain Bias Eliminator) approach is both simple and effective, demonstrating strong performance from both theoretical and empirical perspectives. 4. The paper's exploration of the complementary nature of DBE with existing personalized federated learning methods is particularly interesting. This aspect enhances the overall impact and relevance of the paper. Weaknesses: 1. The introduction of the memory module requires improvement. While the module itself is relatively simple, the presentation of its implementation and functioning could be made clearer. It is challenging to understand the process and how to implement it based on the initial explanation. 2. 
The analysis of generalization risk, although provided, may have limited practical utility when it comes to the practical application of DBE. Further clarification or practical implications would enhance the relevance of this analysis. 3. The experimental evaluation appears to lack diversity in the considered tasks. Including evaluations on tasks such as graph learning would provide additional evidence and make the results more compelling. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What's the difference between the numbers of layers that have memorized domain bias? 2. Is it possible to show that, in the iid setting, the memory embedding is zero? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and constructive comments. We respond to your concerns as follows in the form of "**[A weakness or question]** Our responses". **[Improve the presentation of the memory module]** Thank you once again for your valuable suggestions. We will further refine the presentation of the memory module to enhance clarity. **[Practical application of $\texttt{DBE}$]** For the practical application, we apply our proposed $\texttt{DBE}$ to the IoT scenario on a popular Human Activity Recognition (HAR) dataset[1] with the HAR-CNN[2] model. HAR contains the sensor signal data collected from 30 users who perform six activities (WALKING, WALKING\_UPSTAIRS, WALKING\_DOWNSTAIRS, SITTING, STANDING, LAYING) wearing a smartphone on the waist. We show the results in R-xd4W-Table 1, where FedAvg+$\texttt{DBE}$ still achieves superior performance.

R-xd4W-Table 1: The test accuracy (\%) on the HAR dataset.

| | Accuracy |
|:-|:-:|
| FedAvg | 87.20±0.27 |
| SCAFFOLD | 91.34±0.43 |
| FedProx | 88.34±0.24 |
| MOON | 89.86±0.18 |
| FedGen | 90.82±0.21 |
| Per-FedAvg | 77.12±0.17 |
| pFedMe | 91.57±0.12 |
| Ditto | 91.53±0.09 |
| FedPer | 75.58±0.13 |
| FedRep | 80.44±0.42 |
| FedRoD | 89.91±0.23 |
| FedBABU | 87.12±0.31 |
| APFL | 92.18±0.51 |
| FedFomo | 63.39±0.48 |
| APPLE | 86.46±0.35 |
| FedAvg+$\texttt{DBE}$ | **94.53±0.26** |

[1] Anguita D, Ghio A, Oneto L, et al. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. Ambient Assisted Living and Home Care: 4th International Workshop, IWAAL 2012.
[2] Zeng M, Nguyen L T, Yu B, et al. Convolutional neural networks for human activity recognition using mobile sensors. 6th International Conference on Mobile Computing, Applications and Services. IEEE, 2014: 197-205. 
**[Experimental evaluation lacks diversity]** In the FL field, most of the existing methods (e.g., SCAFFOLD, MOON, FedGen, Per-FedAvg, pFedMe, FedPer, FedRoD, FedBABU, APFL, FedFomo, and APPLE) primarily consider classification tasks within CV field, while we consider both CV and NLP fields. Graph learning is not a widely explored task in the traditional FL and pFL methods we have considered and compared. Nevertheless, our $\texttt{DBE}$ can eliminate domain bias in representation space for heterogeneous graphs on clients in theory, as the concept of representation is pervasive in deep learning tasks[3]. [3] Bengio Y, Courville A, Vincent P. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 2013, 35(8): 1798-1828. **[What's the difference between the number of layers that have memorized domain bias?]** Our Personalized Representation Bias Memory ($\texttt{PRBM}$) module only contains a trainable vector $\bar{z}^p _i$ without any additional layers. If you are inquiring about the influence of the number of layers in the feature extractors under different model splitting methods, as discussed in *Section 5.1.1 How to split the model*, it is important to note that feature extractors with different numbers of layers possess varying feature extraction capabilities. According to previous studies [4,5], it has been observed that representations extracted by higher layers, which are farther away from the input, tend to exhibit a higher degree of bias compared to the ones extracted by lower layers. [4] Yosinski J, Clune J, Bengio Y, et al. How transferable are features in deep neural networks?. Advances in neural information processing systems, 2014. [5] Luo M, Chen F, Hu D, et al. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Advances in Neural Information Processing Systems, 2021. 
**[Is it possible to show that, in the iid setting, the memory embedding is zero?]** While the impact of our $\texttt{PRBM}$ may be diminished in IID settings, its parameter vector $\bar{z}^p _i$ cannot be zero as long as there exist variations among clients in the representation space. Even in ideal IID settings, where each client trains models with the same dataset, the extracted representations of the same input on different clients often differ. This discrepancy arises from the stochastic optimization process employed by deep neural networks, such as stochastic gradient descent (SGD). Moreover, the use of different random seeds on different machines can further contribute to the variation in the extracted representations during each communication iteration. In practical FL scenarios, the data distribution among clients is often non-IID and statistically heterogeneous [6]. [6] Li Q, Diao Y, Chen Q, et al. Federated learning on non-iid data silos: An experimental study. 2022 IEEE 38th International Conference on Data Engineering (ICDE). 2022. --- Rebuttal Comment 1.1: Title: Discussions Comment: Thanks for your response! Most of my concerns are resolved. I also read the discussions with other reviewers. It seems that all reviewers are fond of this simple yet effective method. I will keep my score unchanged to support this paper. --- Reply to Comment 1.1.1: Title: Thank You! Comment: Thank you once again for your timely feedback. We are grateful to have your support for our paper.
Summary: This paper tackles the representation bias phenomenon caused by biased domains and proposes Domain Bias Eliminator (DBE), a framework that reduces the domain discrepancy between server and client in representation space via Personalized Representation Bias Memory (PRBM) and Mean Regularization (MR). Experimental results show that DBE outperforms ten state-of-the-art methods in both generalization and personalization abilities. Strengths: - Introduce the PRBM and MR to address the representation bias issue, and the effectiveness of these modules is validated by experimental results. - Provide theoretical analysis on generalization bounds of the global and local feature extractors. - This method has good compatibility and can be combined with other models. Weaknesses: - It would be nice to evaluate these methods on real-world distribution shifts. - The introduction of MR increases the computational burden. - Some parts of the writing are not clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I'm confused about the form of $\bar{z}_i^p$: is it a trainable embedding vector (since it’s the personalized representation of the client) or is it a model parameter (as Algorithm 1 claims)? Please elaborate on how $\bar{z}_i^p$ is derived. 2. $\bar{z}^g$ is a consensus obtained during the initialization period; why is $\bar{z}^g$ not updated during the training process? 3. What’s the dimension of the global $z_i^g$ and personalized $\bar{z}_i^p$ implemented in experiments? 4. As mentioned in Line 249 “We run three trials for all methods until empirical convergence on each task”, does this mean that all models are trained until they converge and the communication rounds are different for each model (which is rare in federated learning settings)? And what are the specific conditions for convergence? 5. Figure 1 in the Appendix only contains the results of FedAvg+DBE; how about other methods? 
It would be better to compare the results with other methods to judge the convergence rate. 6. The experimental results of the local model are missing from the paper. 7. In Section E.3, the authors evaluated the accuracy on 20 new clients. How is the data of these clients divided, and do the classes of these data appear in the 80 old clients? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation of the proposed method has been discussed before. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and insightful feedback. We respond to your concerns as follows in the form of "**[A weakness or question]** Our responses". **[Evaluate methods on real-world distribution shifts]** Our $\texttt{DBE}$ is also effective in real-world scenarios. Here, we evaluate FedAvg+$\texttt{DBE}$ on the popular Human Activity Recognition (HAR) dataset[1] that contains the sensor signal data collected from 30 users who perform six activities wearing a smartphone on the waist. On HAR, following previous work[2], we use HAR-CNN as the model. We show the results in R-FLKZ-Table 1, where FedAvg+$\texttt{DBE}$ still achieves superior performance.

R-FLKZ-Table 1: The test accuracy (\%) on the HAR dataset.

| | Accuracy |
|:-|:-:|
| FedAvg | 87.20±0.27 |
| SCAFFOLD | 91.34±0.43 |
| FedProx | 88.34±0.24 |
| MOON | 89.86±0.18 |
| FedGen | 90.82±0.21 |
| Per-FedAvg | 77.12±0.17 |
| pFedMe | 91.57±0.12 |
| Ditto | 91.53±0.09 |
| FedPer | 75.58±0.13 |
| FedRep | 80.44±0.42 |
| FedRoD | 89.91±0.23 |
| FedBABU | 87.12±0.31 |
| APFL | 92.18±0.51 |
| FedFomo | 63.39±0.48 |
| APPLE | 86.46±0.35 |
| FedAvg+$\texttt{DBE}$ | **94.53±0.26** |

[1] Anguita D, Ghio A, Oneto L, et al. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. Ambient Assisted Living and Home Care: 4th International Workshop, IWAAL 2012.
[2] Zeng M, Nguyen L T, Yu B, et al. Convolutional neural networks for human activity recognition using mobile sensors. 6th International Conference on Mobile Computing, Applications and Services. IEEE, 2014: 197-205. **[Computational burden brought by $\texttt{MR}$]** We discuss the additional computation overhead of our proposed $\texttt{MR}$ in *Section 4.5 Negligible Additional Communication and Computation Overhead*. The parameterless $\texttt{MR}$ computes the MSE loss for two representations of dimension $K$, whose computational burden is negligible compared to model inference or backpropagation. 
**[Is the $\bar{z}^p _i$ a model parameter?]** Yes, as Algorithm 1 claims, $\bar{z}^p _i$ is a trainable model parameter in the client model when using $\texttt{DBE}$. As mentioned in *line 154*, $\bar{z}^p _i$ is the parameter of our personalized module $\texttt{PRBM}$, which is updated simultaneously with other parts of the client model. According to Figure 2, Equation (4), *line 155*, and Equation (8), we obtain representation ${z} _i$ via ${z} _i = {z}^g _i + \bar{z}^p _i$ during forward pass. For the backward pass, one can easily obtain the gradients of ${z} _i$, then the gradient of $\bar{z}^p _i$ can be derived by chain rule $\frac{\partial \mathcal{L} _{\hat{\mathcal{D}} _i}}{\partial \bar{z}^p _i} = \frac{\partial \mathcal{L} _{\hat{\mathcal{D}} _i}}{\partial {z} _i} \frac{\partial {z} _i}{\partial \bar{z}^p _i} = \frac{\partial \mathcal{L} _{\hat{\mathcal{D}} _i}}{\partial {z} _i}$. **[Why is ${z}^g$ not updated during the training process?]** We utilize ${z}^g$ to ensure consistent guidance for extracting client-invariant representation information in our $\texttt{MR}$. The $\texttt{MR}$ and $\texttt{PRBM}$ form a complementary pair. The updating of ${z}^g$ during the training process introduces dynamics to the guidance in $\texttt{MR}$, which can lead to a mismatch between the previously learned translation $\texttt{PRBM}$ and the current state of $\texttt{MR}$. Updating ${z}^g$ causes the test accuracy (\%) of FedAvg+$\texttt{DBE}$ to drop from 43.32 (Table 4, TINY, practical setting) to 41.55. **[The dimension of ${z}^g$ and $\bar{z}^p _i$ implemented in experiments]** The dimension of ${z}^g$ and $\bar{z}^p _i$ equals the dimension of feature representation space $\mathcal{Z}$, which depends on the model architectures and model splitting methods. Per *line 265-266*, we choose the last FC layer as the classifier, following existing methods, for a fair comparison. 
The dimensions $K$ are set to 512, 512, and 32 for the 4-layer CNN, ResNet-18, and fastText models, respectively, as indicated in our code. **[Specific conditions for convergence?]** If an FL method (algorithm) converges at the 100th round, running 100 rounds is equivalent to running 1000 rounds in terms of performance. Running all methods until empirical convergence is equivalent to running each method for a maximum required number of rounds, such as 1000 rounds, to ensure the empirical convergence of all methods. This approach is commonly used in FL experiments. **[Convergence rate of other methods]** We present Figure 1 in the Appendix to show the convergence of FedAvg+$\texttt{DBE}$, rather than to show its superiority in convergence. Please note that we focus on improving the MDL and accuracy rather than emphasizing the superiority of the convergence rate. To compare the convergence rate, please refer to Table 5 (Overhead) to calculate the minimum communication iterations for convergence by dividing the values in the "Total time" column by the values in the "Time/iteration" column, i.e., Per-FedAvg: 34, pFedMe: 113, Ditto: 27, FedPer: 43, FedRep: 115, FedRoD: 50, FedBABU: 513, APFL: 57, FedFomo: 71, APPLE: 45, FedAvg: 230, FedAvg+$\texttt{DBE}$: 107. Using our $\texttt{DBE}$ reduces communication iterations for FedAvg. We will further improve this in the revised version. **[Results of the local model]** As stated in *line 244*, we adhere to the renowned pFedMe approach to present the results of the global model and personalized models for traditional FL and pFL methods, respectively. In pFL, the local model corresponds to the personalized model. **[The data distribution for *Section E.3* in Appendix]** We utilize the data distribution established in Table 4 (Cifar100†) consisting of 100 clients, as depicted in Figure 5 in the Appendix. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and the additional experimental results. 
Most of the questions I raised have been addressed comprehensively. As you mentioned in the rebuttal, I hope to see a more detailed analysis of model convergence in the revised version. I will raise the score to 6 to support this work. --- Reply to Comment 1.1.1: Title: Thank You! Comment: We are grateful for your valuable suggestions. Your support for our paper is greatly appreciated, and we will include a comprehensive analysis of model convergence in the revised version.
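The PRBM forward pass and MR regularizer discussed in this rebuttal thread can be sketched numerically. This is a minimal numpy illustration under our own naming, not the authors' implementation: it shows the additive forward pass $z_i = z_i^g + \bar{z}_i^p$, the MSE-between-two-$K$-dim-vectors form of MR, and the chain-rule fact from the rebuttal that the gradient w.r.t. the PRBM vector equals the (batch-summed) gradient w.r.t. $z_i$.

```python
import numpy as np

# zg_batch: client-invariant representations from the shared feature
# extractor; zp: the client's trainable PRBM vector; zg_consensus: the
# frozen global consensus mean used by MR. All names are ours.
K = 4
rng = np.random.default_rng(0)
zg_batch = rng.normal(size=(8, K))   # batch of global representations
zp = np.zeros(K)                     # personalized bias vector (trainable)
zg_consensus = np.zeros(K)           # consensus mean, fixed during training

# PRBM forward pass: z_i = z_i^g + z̄^p (elementwise addition).
z = zg_batch + zp

# MR: MSE between the batch mean of z^g and the consensus -- two K-dim
# vectors, so its cost is negligible next to a forward/backward pass.
mr_loss = np.mean((zg_batch.mean(axis=0) - zg_consensus) ** 2)

# Chain rule from the rebuttal: dz/dz̄^p is the identity, so the gradient
# of any loss w.r.t. z̄^p is the batch-summed gradient w.r.t. z. With the
# toy loss L = 0.5 * ||z - t||^2 we have dL/dz = z - t, hence:
t = rng.normal(size=(8, K))
grad_z = z - t
grad_zp = grad_z.sum(axis=0)
assert np.allclose(grad_zp, (z - t).sum(axis=0))
```

In a real training loop `zp` would receive this gradient and be updated alongside the other model parameters, exactly as Algorithm 1 in the rebuttal describes.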
Summary: To address the performance drop caused by heterogeneous data distributions, the proposed method - Domain Bias Eliminator (DBE) - decomposes image features into unbiased (global) and biased (personalized) representations. To guide the feature extractor toward the unbiased representation, Mean Regularization (MR) is proposed: a regularization term for local objective functions that enforces reducing the gap between the mean of local-data representations and the mean of representations across all clients. Meanwhile, each client has a learnable biased representation, which is added to the unbiased representation from the feature extractor and then fed to the local classifier. Unlike previous works, by considering both unbiased and biased representations, the proposed method can improve bi-directional (local and global) knowledge transfer and mitigate the performance drop caused by heterogeneity. Strengths: - The proposed method handles an important problem of non-IID FL. - The paper is clearly written. - The paper provides theoretical guarantees and comprehensive experimental results that show the efficiency of the proposed method for the given problem. - The proposed method is novel in terms of decomposing the representation generated by the feature extractor into unbiased and biased representations, and efficient since it requires little additional computational cost. Weaknesses: - The paper does not discuss privacy issues that could emerge from the collection of client-specific means over local data (line 2,3 in Algorithm 1, Supple). Since the client-specific mean contains representations from all local data, it could potentially be exploited to reconstruct identifiable information about the local data, or even the data itself. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Collecting averaged features extracted from all local data seems to be exposed to privacy attacks.
It would be helpful to give more evidence (previous works or experiments) that the ‘client-specific mean’ collection is privacy-preserving. - Figure 3 shows the role of client-variant (biased) representations and client-invariant (unbiased) representations. However, the representation is not separated according to labels. Are they distinguishable by label for representations extracted by the global feature extractor? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors well describe the limitation of the proposed framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful comments. We respond to your concerns as follows in the form of "**[A weakness or question]** Our responses". **[Privacy issues for client-specific mean collection (*line 2,3* in Algorithm 1, Supplementary)]** Please note that the client-specific mean $\{\bar{{z}}^g _1, \ldots, \bar{{z}}^g _N\}$ we collect during the initialization period before FL is a single $K$-dimensional vector per client, rather than a set "containing representations from all local data". Hence, the client-specific mean solely captures the magnitude of the mean value for each feature dimension within the context of the given datasets and models. Sharing this kind of information has recently become *popular* in the federated learning domain. FedPAC[1], FedProto[2], and FedPCL[3] share client-specific and class-wise means (multiple $K$-dimensional vectors per client). Compared to them, the privacy-preserving ability of FedAvg+$\texttt{DBE}$ is superior, as they share both client-level and class-level information in each iteration, while no class-level information is shared in FedAvg+$\texttt{DBE}$ and *we share the client-specific mean only once*. Furthermore, it is convenient to add privacy-preserving techniques (e.g., adding noise) to the client-specific mean during the initialization period without influencing the performance of our $\texttt{DBE}$, since the magnitude of the client-specific mean hardly changes with noise. Following FedPCL, we add Gaussian noise to the client-specific mean with controllable parameters scale ($s$) and perturbation coefficient ($p$). As shown in R-8bi2-Table 1, using the privacy-preserving technique can also bring a slight improvement for our $\texttt{DBE}$. In the revised version of our paper, we will provide a more comprehensive privacy analysis to further strengthen it. R-8bi2-Table 1: The test accuracy (\%) on TINY in the practical setting.
| | Original | $s=0.05$, $p=0.1$ | $s=0.05$, $p=0.2$ | |:-|:-:|:-:|:-:| | FedAvg+$\texttt{DBE}$ | 43.32±0.12 | 43.81±0.15 | **44.10±0.10** | [1] Xu J, Tong X, Huang S L. Personalized Federated Learning with Feature Alignment and Classifier Collaboration. The Eleventh International Conference on Learning Representations. 2022. [2] Tan Y, Long G, Liu L, et al. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence. 2022. [3] Tan Y, Long G, Liu L, et al. Fedproto: Federated prototype learning across heterogeneous clients. Proceedings of the AAAI Conference on Artificial Intelligence. 2022. **[Are the representations extracted by the global feature extractor distinguishable by the label?]** Yes. As our primary goal is to demonstrate the elimination of representation bias rather than improving discrimination in Figure 3, we present the t-SNE visualization for our largest dataset in experiments, Tiny-ImageNet (200 labels). Given that the 200 labels are distributed around the chromatic circle, adjacent labels are assigned similar colors, resulting in Figure 3 being indistinguishable by the label. Using a dataset AG News with only four labels for t-SNE visualization can clearly show that the representations extracted by the global feature extractor are distinguishable in R-8bi2-Figure 1 (***please refer to the PDF in the global response field***). --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response and for providing additional representation visualization through t-SNE. Most of my concerns, including privacy issues, have been adequately addressed. I am particularly appreciative of their additional experiment using noisy representation (i.e. client-specific mean). It is intriguing that adding noise to client-specific mean achieves even better accuracy. 
This is noteworthy, especially considering that introducing noise to gradients typically causes a performance drop. Could the authors provide more insights into the phenomenon? I am wondering if it might be tied to the generalization ability, though I'm uncertain. --- Reply to Comment 1.1.1: Title: Thanks for New Comments Comment: We appreciate your timely feedback and suggestions. First of all, we upload client-specific means and average them to generate the consensus vector, which is used to provide the unbiased feature information and preserve the feature magnitude for our $\texttt{MR}$. By sampling the noise from the same Gaussian distribution for all clients, the addition of moderate noise does not impact the magnitude of the consensus vector. Instead, it can be seen as incorporating additional unbiased information, which is beneficial for representation bias elimination and can further improve performance to some extent. Besides, it is not surprising that adding too much noise can also decrease accuracy, as shown in R-8bi2-Table 2 and R-8bi2-Table 3. "w/o" is short for "without". However, setting $s=0.05$ and $p=0.2$ is sufficient to ensure privacy protection according to FedPCL[3] (we apologize for the mis-citation in our previous response). R-8bi2-Table 2: The test accuracy (\%) on TINY in the practical setting with $s=0.05$ and larger $p$. | | w/o noise | $p=0.1$ | $p=0.2$ | $p=0.5$ | $p=0.8$ | $p=0.9$ | |:-|:-:|:-:|:-:|:-:|:-:|:-:| | FedAvg+$\texttt{DBE}$ | 43.32±0.12 | 43.81±0.15 | 44.10±0.10 | 44.45±0.13 | 43.30±0.21 | 41.75±0.24 | R-8bi2-Table 3: The test accuracy (\%) on TINY in the practical setting with larger $s$ and $p=0.2$. | | w/o noise | $s=0.05$ | $s=0.5$ | $s=1$ | $s=5$ | |:-|:-:|:-:|:-:|:-:|:-:| | FedAvg+$\texttt{DBE}$ | 43.32±0.12 | 44.10±0.10 | 44.15±0.18 | 43.78±0.14 | 36.27±0.35 | It is important to note that we upload the client-specific mean from clients to the server ***only once* before the FL process**.
Our approach significantly differs from previous methods that upload averaged representations (such as class-specific prototypes) or model parameters ***per iteration* during the FL process**. Specifically, we add noise only once, while previous methods continuously add noise throughout the FL process. Therefore, they are significantly impacted by the noise. [3] Tan Y, Long G, Ma J, et al. Federated learning from pre-trained models: A contrastive learning approach. Advances in Neural Information Processing Systems, 2022.
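For concreteness, the noise-perturbation step discussed in this thread could look like the following NumPy sketch. The exact mixing rule (adding `p`-scaled Gaussian noise of scale `s` to the $K$-dimensional mean) and the helper names are our assumptions for illustration; FedPCL's precise formulation may differ.

```python
import numpy as np

def perturb_client_mean(client_mean, s=0.05, p=0.2, rng=None):
    """Perturb a K-dimensional client-specific mean with Gaussian noise of
    scale s, weighted by perturbation coefficient p, before its one-time
    upload to the server (hypothetical helper; names are ours)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(loc=0.0, scale=s, size=np.shape(client_mean))
    return np.asarray(client_mean) + p * noise

def consensus_vector(client_means, **kwargs):
    """The server averages the (noisy) client means once into the
    consensus vector used by MR."""
    return np.mean([perturb_client_mean(m, **kwargs) for m in client_means], axis=0)
```

Since the noise is zero-mean and shared across clients' distributions, averaging many perturbed means leaves the consensus vector's magnitude essentially unchanged, which matches the observation above that moderate noise does not hurt (and can slightly help) accuracy.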
Summary: The paper introduces Domain Bias Eliminator (DBE) to address the representation bias and representation degeneration issues commonly observed in federated learning under statistically heterogeneous scenarios. DBE utilizes a Personalized Representation Bias Memory (PRBM) to preserve representation bias and mean regularization to guide local feature extractors toward producing representations with a consensual global mean. The authors provided a thorough theoretical analysis and empirical studies to validate their method. The results indicated that DBE can enhance the performance of existing FL methods. Strengths: 1. This paper has solid theoretical analysis. 2. The proposed method is well-motivated. 3. Comparison with existing methods is sufficient. Weaknesses: 1. The explanation of how DBE reduces the domain discrepancy between server and client in the representation space could be more detailed, as it forms a core part of the technique. 2. The authors could also consider exploring DBE's scalability to larger models, to understand the full potential of the method. The models used for experiments are all small, with only million-level parameters. 3. There are two main hyperparameters in the proposed method, and it seems that the recommended choice differs considerably across datasets and models. This makes the hyperparameters difficult to tune in practical scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are there any criteria or empirical methods to choose hyperparameters in this work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and insightful comments. We respond to your concerns as follows in the form of "**[A weakness or question]** Our responses". **[More detailed explanation on how $\texttt{DBE}$ reduces the domain discrepancy]** As mentioned in *Section 4.4 Improved Bi-directional Knowledge Transfer*, our $\texttt{DBE}$ reduces the $\mathcal{H}$-divergence (which measures domain discrepancy[1]) for both levels of representations. Here we show how $\texttt{DBE}$ reduces the domain discrepancy based on the widely used maximum mean discrepancy (MMD) metric[2]: $MMD[\Phi, P, Q] := \sup _{\phi \in \Phi}(\mathbb{E} _{p\sim P}[\phi(p)] - \mathbb{E} _{q\sim Q}[\phi(q)])$, where domains $P$ and $Q$ belong to a space $\mathcal{R}$ and $\Phi$ is a class of given functions $\phi: \mathcal{R} \rightarrow \mathbb{R}$. Our $\texttt{MR}$ is originally designed to reduce the mean discrepancy, while our proposed translation $\texttt{PRBM}$ preserves the reduced mean discrepancy. If we consider $P$ and $Q$ as the local representation domain and the virtual global representation domain, respectively, and regard $\Phi$ as a class containing the identity summation function $\phi$, then the MMD value is reduced along with the reduced mean discrepancy between $P$ and $Q$. We will continue to refine our paper in the revised version, aiming to enhance its impact and improve its clarity. [1] Ben-David S, Blitzer J, Crammer K, et al. A theory of learning from different domains. Machine learning, 2010, 79: 151-175. [2] Gretton A, Borgwardt K, Rasch M, et al. A kernel method for the two-sample-problem. Advances in neural information processing systems, 2006. **[Small models with million-level parameters]** We follow prior approaches in our choice of models for a fair comparison. The majority of existing FL approaches employ models with million (M)-level parameters.
Specifically, shallow CNNs (around 5.6M parameters, used in FedAvg, FedGen, MOON, Ditto, FedRep, FedRoD, APFL, FedFomo, and APPLE) and ResNet-18 (around 11.2M parameters, used in FedBABU, FedALA[3], TCT[4], and ProgFed[5]) are popular models in the FL field. [3] Zhang J, Hua Y, Wang H, et al. Fedala: Adaptive local aggregation for personalized federated learning. Proceedings of the AAAI Conference on Artificial Intelligence. 2023. [4] Yu Y, Wei A, Karimireddy S P, et al. TCT: Convexifying federated learning using bootstrapped neural tangent kernels. Advances in Neural Information Processing Systems, 2022. [5] Wang H P, Stich S, He Y, et al. Progfed: effective, communication, and computation efficient federated learning by progressive training. International Conference on Machine Learning. 2022. **[Hyperparameter settings]** Please note that we **only** set different values for the hyperparameters $\kappa$ and $\mu$ on different model architectures but use identical settings for one architecture on all datasets. Different models exhibit diverse capabilities in both feature extraction and classification. Given that our proposed $\texttt{DBE}$ operates by integrating itself into a specific model, it is crucial to tune the parameters $\kappa$ and $\mu$ to adapt to the feature extraction and classification abilities of different models. As for the **criteria or empirical methods for hyperparameter tuning**, $\kappa$ and $\mu$ require different tuning methods according to their functions. Specifically, *$\mu$ is a momentum* introduced along with the widely used moving-average technique in approximating statistics, so for the model architectures that originally contain statistics collection operations (e.g., the batch normalization layers in ResNet-18) one can set a relatively small value by tuning $\mu$ from 0 to 1 with a reasonable step size. For other model architectures, one can set a relatively large value for $\mu$ by tuning it from 1 to 0.
The parameter *$\kappa$ is utilized to regulate the magnitude of the MSE loss in our $\texttt{MR}$ (Equation (8))*. However, different architectures generate feature representations with varying magnitudes, leading to differences in the magnitude of the MSE loss. Thus, we tune $\kappa$ by aligning the magnitude of the MSE loss with the other loss term, i.e., $\frac{1}{n _i} \sum^{n _i} _{j=1} \ell(h(\texttt{PRBM}(f({x} _{ij}; {\theta}^f); \bar{z}^p _i); {\theta}^h), y _{ij})$.
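As a concrete illustration of the magnitude-matching heuristic for $\kappa$ described above, one could calibrate it on a warm-up batch as in the toy sketch below. The helper name and the example numbers are ours, not from the paper:

```python
def match_kappa(task_loss, mse_loss):
    """Choose kappa so that the MR term kappa * mse_loss has roughly the
    same magnitude as the supervised task loss on a warm-up batch."""
    if mse_loss <= 0:
        raise ValueError("MSE loss must be positive to calibrate kappa")
    return task_loss / mse_loss

# Example: with a cross-entropy term of ~2.3 and an MR MSE term of ~0.046
# on a warm-up batch, kappa = 2.3 / 0.046 = 50 aligns the two magnitudes.
```

In practice one would round the resulting value to a convenient grid point and then fine-tune it with the step-size search described for $\mu$.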
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable feedback on our manuscript. We have provided detailed responses in the rebuttal field following each reviewer’s comments. In this global response field, we provide a PDF file that includes a figure named R-8bi2-Figure 1. Pdf: /pdf/6f49535dd5726415a8e0837091316d2d50132f3b.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Memory Efficient Optimizers with 4-bit States
Accept (spotlight)
Summary: This paper concentrates on quantizing the optimizer states into low-bit to reduce training memory consumption. By thoroughly analyzing the first and second moments, the authors quantize the optimizer states to 4-bit via the use of a smaller block size and both row-wise and column-wise information. They perform extensive experiments across various tasks. Strengths: (1) The authors thoroughly analyze the outliers of the moments, which make the existing block-wise method work poorly. (2) They perform extensive experiments across various tasks. Weaknesses: Compressing the optimizer states is essential, especially for training large language models, but the paper only deals with how much memory is saved when fine-tuning LLaMA-7B on Alpaca. It would be necessary to evaluate LLaMA-7B fine-tuned via 4-bit AdamW on common sense reasoning tasks (to check whether common sense reasoning could be preserved even after fine-tuning via 4-bit AdamW) and/or MMLU (to check whether models fine-tuned via 4-bit AdamW can possess the instruction-following ability as a few-shot learner). In addition, it would be much more convincing if the experimental results of LLaMA-13B, 33B, and/or 65B were provided. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer f2sV for the constructive comments. With respect to your questions: **Weakness: Common sense reasoning and MMLU evaluations on large models.** We have fine-tuned LLaMA-7B and LLaMA-13B with 32-bit AdamW and 4-bit AdamW on Alpaca and evaluated them on common sense reasoning tasks and MMLU. Results show that 4-bit AdamW will not destroy the capability of pretrained models while enabling them to obtain instruction-following ability. In MMLU and HellaSwag tasks, 4-bit AdamW fine-tuning outperforms 32-bit AdamW both on LLaMA-7B and LLaMA-13B. In other tasks, 4-bit AdamW is comparable with 32-bit AdamW. 4-bit AdamW does not get worse than 32-bit AdamW when the model size grows. | LLaMA-7B | MMLU(5-shot) | HellaSwag | ARC-e | ARC-c | OBQA | |-------------|-------------:|----------:|------:|------:|-----:| | Original | 33.1| 73.0| 52.4| 40.9| 42.4| | FT w. 32-bit AdamW| 38.7| 74.6| 61.5| 45.1| 43.4| | FT w. 4-bit AdamW | 38.9| 74.7| 61.2| 44.4| 43.0| | LLaMA-13B| MMLU(5-shot) | HellaSwag | ARC-e | ARC-c | OBQA | |-------------|-------------:|----------:|------:|------:|-----:| | Original | 47.4| 76.2| 59.8| 44.5| 42.0| | FT w. 32-bit AdamW| 46.5| 78.8| 63.6| 48.3| 45.2| | FT w. 4-bit AdamW | 47.4| 79.0| 64.1| 48.0| 45.2| --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. It would be better if the experimental results for LLaMA-33B or LLaMA-65B were given, but most of my concerns are well addressed. However, I have a question. For both LLaMA-7B and LLaMA-13B, FT w. 4-bit AdamW is always better than FT w. 32-bit AdamW on MMLU (5-shot). Could you provide insight into such unexpected results? --- Reply to Comment 1.1.1: Title: Second Response to Reviewer f2sV Comment: Thank you again for your comments. Regarding your concern: **Experiments on LLaMA-33B** We fine-tuned LLaMA-33B with both 32-bit AdamW and 4-bit AdamW on Alpaca. The hyperparameters used are consistent with those of LLaMA-13B.
Notably, both the 32-bit AdamW and 4-bit AdamW achieve comparable training losses of 0.057 and 0.059, respectively. Furthermore, their convergence curves align closely. The results of MMLU and common sense reasoning are reported in the following table. Unlike LLaMA-7B and LLaMA-13B, on LLaMA-33B, the performance of 4-bit AdamW is lower than 32-bit AdamW on the MMLU task. Therefore, 4-bit AdamW is not always better than 32-bit AdamW on the MMLU task. However, it's noteworthy that 4-bit AdamW exhibits lower MMLU loss on LLaMA-33B. Additionally, note that for the 33B model, instruction tuning does not improve performance much, so the implication of the performance might be limited. | LLaMA-33B| MMLU loss (5-shot) |MMLU (5-shot) | HellaSwag | ARC-e | ARC-c | OBQA | |-------------|-------------:|--------:|------:|------:|------:|-----:| | Original | 2.67| 54.9| 79.3| 58.9| 45.1| 42.2| | FT w. 32-bit AdamW| 0.98| 56.4| 79.2| 62.6| 47.1| 43.8| | FT w. 4-bit AdamW | 0.95| 54.9| 79.2| 61.6| 46.6| 45.4|
Summary: Today’s machine learning pipeline places a heavy load on GPU memory during training, due to the large number of parameters the optimizers have to maintain during training. This paper proposes a method to reduce the internal states of optimizers from their full-width counterparts to 4-bit numbers, while maintaining nearly the same performance across a range of tasks, such as vision and language models. Strengths: This paper presented some intriguing observations, such as the 'zero-point problem' discussed in relation to second-order momentum, and the first-order term appears to be robust under quantisation. Additionally, this paper examined how quantisation might be combined with factorisation. The results presented covered a variety of language tasks as well as a vision task with relatively advanced models. Weaknesses: The figures are too small, making them hard to read. It is also unclear to me how the block size was chosen and whether there are theories to support this decision. Additionally, it would be beneficial to demonstrate a wider range of vision tasks to show the breadth of the proposed approach. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How does one generally pick the block size when we have an unseen model? Do we have theories for backing up the block size picking? - Are there any other ways to demonstrate the convergence (from proof) rather than showing empirical results? - Regarding the empirical results, more results might be needed on 1) generative models (eg. diffusion) and 2) more vision tasks (eg. segmentation or object detection). The core idea is to demonstrate an efficient optimizer, so the capability of this optimizer should then be tested on a range of different tasks. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: I do not think this is applicable to this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer d65g for the careful review. With respect to the figures, we will use larger ones in a later revision. With respect to your questions: **Weakness + Question 1: How to pick the block size.** There are some rules to choose the block size: - Firstly, quantizing with a smaller block size approximates the tensors better than a larger block size due to the finer granularity of quantization. - Secondly, the block size is limited by the memory. A smaller block size would lead to larger memory overhead (e.g., with fp32 scaling constants, a block size of 32 causes an extra overhead of 1-bit per parameter). Empirically, we have shown the quantization error of different block sizes via histograms in Appendix C.3 (Effectiveness of Block Size in Block-wise Normalization), which shows that a block size of 128 reconstructs tensors more faithfully than a block size of 2048. In general, we consistently choose a small block size of 128 to reduce quantization error and maximize accuracy, while keeping memory overhead under control. Actually, the granularity of quantization affected by block size is independent of the model size, and we suspect that a block size of 128 works well for unseen (large) models. **Question 2: Theoretical analysis about convergence of low-bit optimizers.** The convergence of low-bit optimizers can be guaranteed if their fp32 counterparts converge. Here we provide a theorem about the convergence of quantized SGD with momentum (Algorithm 2 in Appendix F) under some assumptions. We believe that the convergence of low-bit AdamW could be inferred from the convergence of AdamW. We use $f(\theta)$ to denote the objective function, $\beta$ to denote the momentum, $Q$ to denote the quantizer on the first moment, and $\alpha$ to denote the learning rate. With assumptions: - (Convexity) The objective function is convex and has a unique global minimum $f(\theta^*)$.
- (Smoothness) The objective $f(\theta)$ is continuously differentiable and $L$-smooth; - (Unbiasedness and Bounded variance) $\mathbb{E}[\nabla\hat{f}(\theta)] = \nabla f(\theta)$ and $ \mathbb{E}\left[\left\lVert\nabla\hat{f}(\theta) - \nabla f(\theta)\right\rVert^2\right] < \sigma^2$, $\forall \theta \in \mathbb{R}^d$, where $\nabla\hat{f}$ is the stochastic gradient. - (Unbiased quantizer) $\forall x\in \mathbb R^d$, $\mathbb{E}\left[Q(x)\right]=x$. - (Bounded quantization variance) $\forall x\in \mathbb R^d$, $\mathbb{E}\left[\left\lVert Q(x)-x\right\rVert^2\right]\le \sigma_m^2$. We have the following theorem: **Theorem 1.** Suppose the above assumptions hold. Consider the case where quantized SGD with momentum is performed on the objective function $f$, with momentum parameter $\beta$. Let $\alpha \in (0, \frac{1-\beta}{3L}]$, then for all $T > 0$ we have $$ \mathbb{E}[f(\bar{\theta_T}) - f_*] \le \frac{1}{2T}\left( \frac{L\beta}{1-\beta} + \frac{1-\beta}{\alpha } \right) \left\lVert\theta_0 - \theta_*\right\rVert^2 + \frac{\alpha \sigma^2}{(1-\beta)} + \frac{\alpha\sigma_m^2}{2(1-\beta)} $$ where $\bar{\theta_T} = \frac{1}{T}\sum_{i=0}^{T-1}{\theta}_{i}$. The theorem indicates that when $\alpha\rightarrow 0$ and $\alpha T\rightarrow \infty$, the optimizer converges to a stationary point. Note that the unbiased quantizer assumption is only for technical simplification. Our proof mainly follows the prior analysis of SGD with momentum [1, 2]. [1] Ghadimi, Euhanna, Hamid Reza Feyzmahdavian, and Mikael Johansson. "Global convergence of the heavy-ball method for convex optimization." 2015 European control conference (ECC). IEEE, 2015. [2] Liu, Yanli, Yuan Gao, and Wotao Yin. "An improved analysis of stochastic gradient descent with momentum." Advances in Neural Information Processing Systems 33 (2020): 18261-18271. **Weakness + Question 3: Results on generative modeling and more vision tasks.** We are running our optimizers on generative modeling and more vision tasks.
We will post updates on these tasks within the discussion period. --- Rebuttal Comment 1.1: Title: Response to Reviewer d65g (cont.) Comment: **Results on generative modeling and more vision tasks.** We tested our optimizers on a generative modeling task and additional vision tasks. For vision tasks, we conducted experiments on image classification with ResNet-50 on ImageNet, detection with Faster R-CNN (with ResNet-50 backbone) on COCO, and segmentation with Mask R-CNN (with ResNet-50 backbone) on COCO. The optimizers are 4-bit SGDM / 6-bit SGDM, with the first moment compressed to lower precision using our method. These tasks require 6-bit SGDM to achieve lossless performance compared with 32-bit SGDM. Note that the memory consumption of 6-bit SGDM is still lower than that of 4-bit Adam, as the former only retains the first moment. For the generative modeling task, we trained a DDPM++ model on class-conditional CIFAR-10. The results indicate that it is still challenging to pretrain diffusion models with compressed optimizers. We suspect this is due to the high variance of the denoising score matching objective. Nevertheless, our proposed optimizer outperforms the existing compressed optimizers proposed by Dettmers et al. Specifically, while Dettmers et al.'s 8-bit Adam diverges on this task, our 8-bit Adam successfully converges and generates meaningful images. | Method | ResNet-50 CLS| Faster R-CNN| Mask R-CNN | |-------------------|-------------:|----------:|------:| | 32-bit SGDM | 77.2| 37.2| 34.3| | 8-bit SGDM (Dettmers et al., 2021) | 77.2| N/A | N/A | | 4-bit SGDM (ours) | 76.7| 33.8| 31.8| | 6-bit SGDM (ours) | 77.3| 37.2| 34.2| | Method | FID | |-------------------|-------------:| | 32-bit Adam | 1.89| | 8-bit Adam (Dettmers et al., 2021) | 655.12| | 4-bit Adam (ours)| 402.94| | 8-bit Adam (ours)| 53.56| [1] Dettmers, Tim, et al. "8-bit Optimizers via Block-wise Quantization." International Conference on Learning Representations. 2021.
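To make the block-wise scheme in this discussion concrete, here is a minimal NumPy sketch of block-wise absmax normalization with a plain signed linear b-bit grid. This is a simplification we wrote for illustration; the paper's actual 4-bit quantization map differs.

```python
import numpy as np

def blockwise_quantize(x, block=128, bits=4):
    """Split a flat tensor into blocks, normalize each block by its absmax,
    and round to a signed linear grid with 2**(bits-1) - 1 levels per side."""
    flat = np.ravel(x).astype(np.float64)
    pad = (-flat.size) % block                    # pad so length divides evenly
    blocks = np.pad(flat, (0, pad)).reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                     # avoid division by zero
    levels = 2 ** (bits - 1) - 1                  # 7 for 4-bit signed
    q = np.round(blocks / scales * levels).astype(np.int8)
    return q, scales, flat.size

def blockwise_dequantize(q, scales, n, bits=4):
    """Invert blockwise_quantize up to rounding error."""
    levels = 2 ** (bits - 1) - 1
    return (q.astype(np.float64) / levels * scales).ravel()[:n]
```

With fp32 per-block scales, a block size of 32 costs 32/32 = 1 extra bit per parameter, while a block size of 128 costs only 0.25 bits, which is the overhead/accuracy tradeoff discussed earlier in this thread.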
--- Rebuttal Comment 1.2: Comment: Thanks for your reply. I may have to clarify my questions on the theory side. It seems like the quantization variance is associated with your block size B, and I am interested in understanding how or whether this would affect the convergence speed. If I pick different B values, surely the variance bound provided should be different? I look forward to seeing your results on other generative tasks. --- Reply to Comment 1.2.1: Title: Second Response to Reviewer d65g Comment: Thank you again for your comments. Regarding the questions: **Results on the generative modeling task.** We trained a StyleGAN2-ADA with both 32-bit Adam and our 4-bit Adam on the conditional CIFAR-10 dataset. We used the same hyperparameters as the official implementation, except for a larger batch size that facilitated quicker training. The following table shows the best achieved FID score throughout the training process. During training, several quantities, including generator loss, discriminator loss, fake image scores, and real image scores, closely align for both optimizers. When evaluated in terms of FID, the two models trained with different optimizers exhibit a slight gap in performance. Furthermore, in the earlier conducted diffusion model task, we observed a close alignment in the loss dynamics between the 32-bit Adam and our 8-bit Adam optimizers. | Method | FID | |-------------------|-------------:| | 32-bit Adam | 2.40| | 4-bit Adam (ours)| 2.89| **Theory about block size.** In our analysis, the quantization variance only impacts the magnitude of the stationary point, i.e., the case when $T \to \infty$, and it does not relate to the convergence speed. This is similar to the variance from the stochastic gradient. Next, we aim to characterize the influence of block-wise normalization on quantization variance.
When employing a linear quantizer with an interval of $\delta$ (without block-wise normalization), the quantization variance is bounded by $$ \sigma_m^2 = \mathbb{E}\left[||Q_{\delta}(x) - x||^2\right] \le \frac{\delta^2 d}{4}, \forall x\in \mathbb R^d. $$ The value of $\delta$ is determined by the maximum absolute value (aka absmax) within the vector and the number of representable values, i.e., the number of bits used. Upon incorporating block-wise normalization with a block size of $B$, assuming there are $N$ blocks in total, and denoting the interval within each block as $\delta_i$, the quantization variance is bounded by $$ \sigma_m^{2} = \mathbb{E}\left[||Q_{\delta}(x) - x||^2\right] \le \sum_{i=1}^N \frac{\delta_i^2 B}{4}, \forall x\in \mathbb R^d. $$ However, the extent of improvement in quantization variance through blocking heavily relies on the tensor's structure. Here, we qualitatively analyze the impact of block size in various scenarios: - In an extreme case, if outliers comparable to, or the same as, the absmax value occur within each block, block-wise quantization fails to yield any improvement. This instance finds support in our investigations, where we have empirically observed a regular distribution of outliers within moment tensors along rows and/or columns. This pattern suggests the advantages of employing smaller block sizes. - When the first moments are i.i.d. from $N(0, 1)$, we have $\delta_i \propto \sqrt{\log(B)}$ thanks to the nice property of the Gaussian distribution. Consequently, if two distinct block sizes, $B_1$ and $B_2$, are employed in quantization, the quantization variance ratio is given by $$ \frac{\sigma_{m, B_1}^{2}}{\sigma_{m, B_2}^{2}} = \frac{\log(B_1)}{\log(B_2)}. $$ For neural network training, considering $B_1 = 128$ and $B_2$ as the size of a single tensor, significant reductions in quantization variance are achieved.
Furthermore, with $B_1 = 128$ and $B_2 = 2048$, the quantization variance ratio is $\log(128)/\log(2048) = 7/11$.
- In practical scenarios, the distribution of moments is difficult to characterize analytically, which makes it hard to determine the impact of block size on quantization variance. We leave the theoretical results for future work and instead present empirical analyses of block size versus quantization variance. We apply block-wise normalization with different block sizes to quantize the first moment tensors of a GPT-2 Medium model and report both the mean and maximum relative quantization errors across tensors. The relative quantization error is defined as $||Q(x) - x||/||x||$. In this specific setting, the results show a rough log-correlation between relative error and block size.

| Block size | 128 | 256 | 512 | 1024 | 2048|
|---------------|----:|----:|----:|-----:|-----:|
| first moment (mean) | 0.158|0.170|0.183 | 0.194|0.205 |
| first moment (max) | 0.173|0.190|0.208 | 0.222|0.235 |

However, it is important to note that the actual performance metric (e.g., accuracy) has a complex relationship with the quantization variance, and even with the training loss, which exceeds the scope of our block size analysis.
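As an illustration of the measurement described above, here is a minimal numpy sketch (my own, not the authors' code) of block-wise absmax quantization with a signed 4-bit linear quantizer, evaluated on a synthetic Gaussian tensor standing in for a real first-moment tensor:

```python
import numpy as np

def blockwise_quantize(x, block_size, bits=4):
    """Quantize-dequantize a flat tensor with per-block absmax
    normalization and a uniform (linear) signed quantizer."""
    blocks = x.reshape(-1, block_size)
    scale = np.abs(blocks).max(axis=1, keepdims=True)  # per-block absmax
    scale = np.where(scale == 0, 1.0, scale)           # guard all-zero blocks
    levels = 2 ** (bits - 1) - 1                       # 7 for 4 bits
    q = np.round(blocks / scale * levels)              # integer codes
    return (q / levels * scale).reshape(x.shape)       # dequantized tensor

def relative_error(x, block_size, bits=4):
    """||Q(x) - x|| / ||x||, as defined in the response above."""
    return np.linalg.norm(blockwise_quantize(x, block_size, bits) - x) / np.linalg.norm(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)  # synthetic stand-in for a moment tensor
errors = {B: relative_error(x, B) for B in (128, 256, 512, 1024, 2048)}
```

Smaller blocks give a smaller per-block absmax and hence a smaller interval $\delta_i$, so the relative error grows slowly (roughly logarithmically) with block size, matching the trend in the table.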
Summary: The paper develops 4-bit optimizers by using a smaller block size for the first moment and by analyzing and finding solutions to outlier patterns in the second moment. In particular, it is found that quantization to the zero point is problematic for the second moment. The analysis is extensive and robust, leading to novel and valuable insights into how optimizer states work in the low-bit regime. Recommendation: This is a high-quality paper with extensive and valuable analysis and new methods with very robust evaluations. I am happy to fight for acceptance of this paper. I recommend that this paper be highlighted at the conference. Strengths: - The analysis in this paper is very robust, both theoretically and empirically. In particular, the identification of the zero-point problem is important. - Rank-1 normalization for quantization is ingenious and a valuable solution that goes beyond the blocking/grouping done in many other quantization papers. - This work will have a strong impact on the general quantization literature, way beyond quantized optimizers. Weaknesses: - This is outstanding work. I cannot find any weakness. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How do you store the Rank-1 normalization constants to perform the dequantization? As I understand it, you use min(absmax row, absmax col) as the normalization constant for each x_{i, j}, but in this case, you need to store which constant is valid for each index (i, j), is that correct? Suggestion: You often write 2nd momentum. I believe the correct terminology would be first and second moment (look for central moment on Wikipedia). It is correct that the first moment is equivalent to the momentum term in Adam, but I think the second moment is usually not referred to as second momentum (rather an RMSProp term). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are fully and honestly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer qs7T for the valuable and careful review. Consider a weight matrix $x$ of size $m \times n$. In Rank-1 normalization, we store the absmax of each row and the absmax of each column during the quantization stage, totaling $m + n$ elements. In the dequantization stage, we calculate the normalization constant for each entry in the same manner as in the quantization stage, i.e., min(absmax row_i, absmax col_j) for entry $x_{i, j}$. The subtle difference is that we do not need to recompute the absmax of the $i$-th row (and the $j$-th column) but use the pre-stored values instead. Additionally, thank you for pointing out the typos concerning the first/second moment. We will correct them in a later revision.
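A minimal numpy sketch of this storage scheme (a simplified illustration of the idea described above, not the paper's implementation): quantization stores only the $m + n$ absmax values, and dequantization recomputes min(absmax row_i, absmax col_j) per entry from those vectors, so no per-entry index bookkeeping is needed.

```python
import numpy as np

LEVELS = 2 ** (4 - 1) - 1  # signed 4-bit linear quantizer

def rank1_quantize(x):
    """Rank-1 normalization: entry x[i, j] is scaled by
    min(row_absmax[i], col_absmax[j]); only m + n floats are stored."""
    row_absmax = np.abs(x).max(axis=1)                # m values
    col_absmax = np.abs(x).max(axis=0)                # n values
    scale = np.minimum.outer(row_absmax, col_absmax)  # >= |x[i, j]| entrywise
    scale = np.where(scale == 0, 1.0, scale)
    q = np.round(x / scale * LEVELS).astype(np.int8)
    return q, row_absmax, col_absmax

def rank1_dequantize(q, row_absmax, col_absmax):
    # The normalization constant is recomputed, not looked up per entry.
    scale = np.minimum.outer(row_absmax, col_absmax)
    scale = np.where(scale == 0, 1.0, scale)
    return q.astype(np.float64) / LEVELS * scale
```

Since |x[i, j]| is bounded by both its row absmax and its column absmax, the min of the two is still a valid normalization constant, and a tighter one than a single tensor-wide absmax.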
Summary: This paper proposes a 4-bit optimizer to save memory during model training. The authors use a smaller block size and propose to utilize both row-wise and column-wise information for better quantization. They identify a zero-point problem in quantizing the second moment and solve it with a linear quantizer that excludes the zero point. Strengths: 1. aims at solving a key problem to reduce the memory occupation of model training 2. conducts experiments on vision and language tasks Weaknesses: 1. lacks experiments on large models like 65B or 175B 2. The accuracy achieved by the 4-bit optimizer is still lower than the fp16 or int8 versions. It is unclear whether it can behave stably under various settings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What about the effect on large models? 2. Comparison with other optimization methods for large models (e.g., https://arxiv.org/abs/2306.09782) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This method cannot achieve lossless accuracy. The stability is unclear and not sufficiently validated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer rPDD for the constructive review. With respect to your questions: **Weakness 1 + Question 1: Experiments on large models.** We have fine-tuned LLaMA-7B and LLaMA-13B with 32-bit AdamW and 4-bit AdamW on Alpaca and evaluated them on common sense reasoning tasks and MMLU. Results show that 4-bit AdamW does not destroy the capability of pretrained models while enabling them to obtain instruction-following ability. On the MMLU and HellaSwag tasks, 4-bit AdamW fine-tuning outperforms 32-bit AdamW on both LLaMA-7B and LLaMA-13B. On the other tasks, 4-bit AdamW is comparable with 32-bit AdamW. 4-bit AdamW does not fall behind 32-bit AdamW as the model size grows. Also, we are trying to fine-tune LLaMA-30B and will post updates when available. However, the amount of computation required for LLaMA-65B exceeds our accessible resources.

| LLaMA-7B | MMLU (5-shot) | HellaSwag | ARC-e | ARC-c | OBQA |
|-------------|-------------:|----------:|------:|------:|-----:|
| Original | 33.1| 73.0| 52.4| 40.9| 42.4|
| FT w. 32-bit AdamW| 38.7| 74.6| 61.5| 45.1| 43.4|
| FT w. 4-bit AdamW | 38.9| 74.7| 61.2| 44.4| 43.0|

| LLaMA-13B| MMLU (5-shot) | HellaSwag | ARC-e | ARC-c | OBQA |
|-------------|-------------:|----------:|------:|------:|-----:|
| Original | 47.4| 76.2| 59.8| 44.5| 42.0|
| FT w. 32-bit AdamW| 46.5| 78.8| 63.6| 48.3| 45.2|
| FT w. 4-bit AdamW | 47.4| 79.0| 64.1| 48.0| 45.2|

**Weakness 2: About lossless accuracy and results on more settings.** It is true that our 4-bit optimizer does not converge losslessly on all tasks. However, it achieves lossless results on all language model fine-tuning tasks, including fine-tuning RoBERTa-Large on GLUE and SQuAD, fine-tuning GPT-2 Medium on E2E-NLG, and fine-tuning LLaMA-7B and LLaMA-13B on Alpaca. This already demonstrates its practical applicability. We are also running our optimizers on other tasks and will post updates on them within the discussion period. 
**Question 2: Comparison with other optimization methods for large models.** Recently, there have been many works focusing on optimization for LLMs. Sophia [1] is a second-order optimizer designed for language model pretraining. Sophia converges faster than the broadly used AdamW on the GPT-2 pretraining task, *but it is not concerned with memory efficiency and has the same memory cost as AdamW*. LOMO [2] is a memory-efficient optimizer for fine-tuning language models. LOMO is essentially a vanilla SGD optimizer without momentum, and thus it naturally removes the memory consumption of optimizer states. LOMO then fuses backpropagation and the optimizer update into one step to further reduce the memory consumption of gradients and full-precision copies of weights. *However, as LOMO is just a vanilla SGD optimizer, its performance may be worse than Adam on some tasks.* Instead, our 4-bit optimizers focus on reducing the memory usage of the first and second moments in stateful optimizers (e.g., Adam) and have a wider range of applicability. The fusion technique in LOMO and our 4-bit optimizers complement each other and could together yield an even more memory-efficient Adam while maintaining performance. MeZO [3] is a memory-efficient zeroth-order optimizer for fine-tuning language models. MeZO adapts the ZO-SGD algorithm by using in-place operations and resetting the random seed. It only requires the same memory as inference (i.e., only forward parameters). *However, as MeZO does not utilize gradients, its performance is worse than that of gradient-based optimizers.* Compared with MeZO, our 4-bit AdamW achieves better results on language model fine-tuning (including fine-tuning RoBERTa-Large on GLUE and SQuAD, GPT-2 Medium on E2E-NLG, and LLaMA-7B and LLaMA-13B on Alpaca). On the other hand, MeZO has performance gaps on some tasks compared with full-parameter fine-tuning with Adam, and it needs more steps to achieve strong performance. 
In summary, LOMO and MeZO are designed for fine-tuning language models with strong simplifications of the optimizer (vanilla SGD / zeroth-order optimization), and *they may not be applicable to pretraining.* In contrast, our 4-bit AdamW is essentially an AdamW and should have a range of applicability similar to AdamW's. Finally, we note that [1-3] are all contemporaneous work with our submission.

[1] Liu, Hong, et al. "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training." arXiv preprint arXiv:2305.14342 (2023).
[2] Lv, Kai, et al. "Full Parameter Fine-tuning for Large Language Models with Limited Resources." arXiv preprint arXiv:2306.09782 (2023).
[3] Malladi, Sadhika, et al. "Fine-Tuning Language Models with Just Forward Passes." arXiv preprint arXiv:2305.17333 (2023).

--- Rebuttal Comment 1.1: Title: Response to Reviewer rPDD (cont.) Comment: **Results on more settings** We tested our optimizers on a generative modeling task and more vision tasks. For vision tasks, we conducted experiments on image classification with ResNet-50 on ImageNet, detection with Faster R-CNN (with a ResNet-50 backbone) on COCO, and segmentation with Mask R-CNN (with a ResNet-50 backbone) on COCO. The optimizers are 4-bit SGDM / 6-bit SGDM, with the first moment compressed to lower precision using our method. These tasks require 6-bit SGDM to achieve lossless performance compared with 32-bit SGDM. Note that the memory consumption of 6-bit SGDM is still lower than that of 4-bit Adam, as the former only retains the first moment. For the generative modeling task, we trained a DDPM++ model on class-conditional CIFAR-10. The results indicate that it is still challenging to pretrain diffusion models with compressed optimizers. We suspect this is because of the high variance of the denoising score matching objective. Nevertheless, our proposed optimizer outperforms the existing compressed optimizers proposed by Dettmers et al. 
Specifically, while Dettmers et al.'s 8-bit Adam diverges on this task, our 8-bit Adam successfully converges and generates meaningful images.

| Method | ResNet-50 CLS| Faster R-CNN| Mask R-CNN |
|-------------------|-------------:|----------:|------:|
| 32-bit SGDM | 77.2| 37.2| 34.3|
| 8-bit SGDM (Dettmers et al., 2021) | 77.2| N/A | N/A |
| 4-bit SGDM (ours) | 76.7| 33.8| 31.8|
| 6-bit SGDM (ours) | 77.3| 37.2| 34.2|

| Method | FID |
|-------------------|-------------:|
| 32-bit Adam | 1.89|
| 8-bit Adam (Dettmers et al., 2021) | 655.12|
| 4-bit Adam (ours)| 402.94|
| 8-bit Adam (ours)| 53.56|

[1] Dettmers, Tim, et al. "8-bit Optimizers via Block-wise Quantization." International Conference on Learning Representations. 2021.

**Experiments on LLaMA-30B** We fine-tuned LLaMA-30B with 32-bit AdamW and 4-bit AdamW on Alpaca and evaluated them on common sense reasoning tasks and MMLU. The hyperparameters used are consistent with those of LLaMA-13B. The 32-bit AdamW and 4-bit AdamW achieve similar training losses of 0.057 and 0.059, respectively, and their convergence curves closely align. The few-shot / zero-shot performance is reported in the following table. In the case of MMLU, while the accuracy of 4-bit AdamW is lower compared with 32-bit AdamW, it does exhibit a lower loss value. Note that for the 30B model, instruction tuning does not improve performance much, so the implications of the performance might be limited.

| LLaMA-30B| MMLU loss (5-shot) |MMLU (5-shot) | HellaSwag | ARC-e | ARC-c | OBQA |
|-------------|-------------:|------------:|------:|------:|-----:|-----:|
| Original | 2.67|54.9| 79.3| 58.9| 45.1| 42.2|
| FT w. 32-bit AdamW| 0.98| 56.4| 79.2| 62.6| 47.1| 43.8|
| FT w. 4-bit AdamW | 0.95| 54.9| 79.2| 61.6| 46.6| 45.4|

--- Rebuttal Comment 1.2: Comment: Thanks for your response. In Table 3, you give the time comparison of 32-bit AdamW, 8-bit AdamW, and 4-bit AdamW. 
It would be better to give a detailed presentation of the extra time overhead brought by the compression and decompression operations. Further, the time comparison may also become different when the weight size is larger, as the elementwise compression might be heavier. --- Reply to Comment 1.2.1: Title: Second Response to Reviewer rPDD Comment: Thanks again for the comments. Regarding your concern: **Extra time overhead brought by compression.** We tested the time overhead introduced by the quantization in our 4-bit AdamW on all tasks listed in Table 3. The results are displayed in the following table. Dettmers et al.'s 8-bit optimizers fuse quantization and the optimizer update into a single operator, making it difficult to measure the separate time usage of quantization. We also offer a fused implementation of our 4-bit AdamW and report its time and memory usage. - For 4-bit AdamW without operator fusion, RoBERTa-Large and GPT-2 Medium, which have similar sizes, exhibit similar ratios of quantization time overhead to total time, at 32% and 26%, respectively. For LLaMA-7B, the quantization time overhead is exceedingly low compared to the total time. This is due to our use of a large gradient accumulation step of 32, and the fact that the communication overhead is more significant than the computation. More generally, in FSDP or ZeRO settings, the time overhead resulting from quantization is determined by the size of the parameter slice on a single node, as 4-bit AdamW quantizes the optimizer state slices independently and in parallel. The speedup of 4-bit AdamW compared to 32-bit AdamW may stem from larger memory buffers/caches due to the memory savings on the optimizer states. - 4-bit AdamW (fused) is always faster than 32-bit AdamW. On RoBERTa-Large and GPT-2 Medium, the speedup of 4-bit AdamW (fused) is due to the faster optimizer operation. 
On LLaMA-7B, the speedup of 4-bit AdamW (fused) is due to the reduced global memory footprint (low-bit read/write and/or large memory cache).

| Task | Optimizer | Total Time| Quantization Time Overhead | Total Mem. |
|-----------------|-------------|----------:|---:|---:|
| LLaMA-7B | 32-bit AdamW | 3.35 h | N/A | 75.40 GB |
| LLaMA-7B | 4-bit AdamW | 3.07 h | 14 s| 31.87 GB |
| LLaMA-7B | 4-bit AdamW (fused)| 3.11 h| N/A| 31.88 GB |
| RoBERTa-Large | 32-bit AdamW | 3.93 min | N/A | 5.31 GB |
| RoBERTa-Large | 4-bit AdamW| 5.59 min | 1.80 min| 3.02 GB |
| RoBERTa-Large | 4-bit AdamW (fused)| 3.17 min| N/A| 3.00 GB |
| GPT-2 Medium | 32-bit AdamW | 2.13 h | N/A | 6.89 GB |
| GPT-2 Medium | 4-bit AdamW | 2.43 h | 0.63 h| 4.62 GB |
| GPT-2 Medium | 4-bit AdamW (fused)| 2.11 h| N/A | 4.62 GB |
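For intuition about where the per-step quantization cost arises, the discussion above can be sketched as a generic dequantize-update-requantize loop for Adam's first moment (a toy illustration with a single tensor-wide absmax scale, not the authors' block-wise fused kernel):

```python
import numpy as np

LEVELS = 2 ** (4 - 1) - 1  # signed 4-bit linear quantizer

def quantize(x):
    scale = float(np.abs(x).max())
    scale = scale if scale > 0 else 1.0  # guard the all-zero tensor
    return np.round(x / scale * LEVELS).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float64) / LEVELS * scale

def first_moment_step(q_m, scale, grad, beta1=0.9):
    """One optimizer step for a 4-bit-stored first moment:
    dequantize -> EMA update in full precision -> requantize.
    The (de)quantization here is the extra work measured above."""
    m = dequantize(q_m, scale)
    m = beta1 * m + (1 - beta1) * grad
    return quantize(m)

rng = np.random.default_rng(0)
grad = rng.standard_normal(1024)
q_m, s = quantize(np.zeros(1024))
for _ in range(10):
    q_m, s = first_moment_step(q_m, s, grad)
```

Only the int8 codes and one scale persist between steps, which is where the memory saving comes from; the quantize/dequantize pair runs every step, which is where the time overhead comes from.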
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks
Accept (poster)
Summary: This paper proposes a novel pre-training framework (GPST) for ST prediction. It mainly integrates an ST parameter personalization scheme and a region-wise semantic association mechanism into the pre-training model, and conducts the training in an unsupervised manner. Besides, the proposed model GPST adopts a hierarchical hypergraph structure to capture the semantic-level association of multiple regions from a global perspective. Extensive experiments are conducted. Strengths: 1. This paper is well-presented and well-organized. 2. This paper introduces a spatio-temporal pre-training framework that can be easily integrated into downstream baselines and improve their performance. 3. Detailed experimental results are provided to validate the model. Weaknesses: 1. The model's performance seems to rely heavily on the pre-training stage. If the pre-training is not done correctly or if the pre-training data is not representative of the test data, the model's performance could suffer; how can this case be handled? 2. Why not include POIs, which should benefit the prediction performance? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Response to Reviewer 83m7*** **Comment 1:** Further discussion on the design of the pre-training stage and data distribution. **Response:** Thank you for your feedback. This work primarily introduces a spatio-temporal pre-training framework to enhance downstream baselines, where the effectiveness of the enhancement is highly dependent on the design of the pre-training stage. If the pre-training model or mechanisms are not appropriately designed, the embeddings generated by the model may lack superior representational capacity, consequently compromising the performance of downstream baselines. To address this issue, we examined the limitations of prior research in spatio-temporal modeling and comprehensively considered these concerns when constructing the pre-training model. In terms of the pre-training mechanism, an adaptive mask strategy is proposed to guide the model to learn robust spatio-temporal representations. The performance of most models suffers significantly if the training data fails to represent the test data, which remains a major challenge in the field of machine learning. Following prior works (such as GWN [1] and STEP [2]), we assume that the training (or pre-training) and test data distributions are approximately similar, and this work does not address the scenario where there exists a significant distribution discrepancy between the training and test data. There is a research line dedicated to exploring this issue (such as domain adaptation, domain generalization, and out-of-distribution (OOD) learning), which is an intriguing research direction. In future work, we will continue to investigate the related issues concerning the generalization ability of spatio-temporal models (such as different distributions between test and training data, data distribution drift, and so on). [1] Graph wavenet for deep spatial-temporal graph modeling. IJCAI 2019. 
[2] Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting, KDD 2022. **Comment 2:** Explanation for the exclusion of POIs. **Response:** Thank you for your constructive comment. Intuitively, incorporating POIs information can enhance the predictive performance of the model. However, we did not utilize such information primarily based on the following two considerations. i) Difficulties in obtaining POIs data. Due to the absence of POIs in the utilized dataset, their inclusion would require manual collection. However, collecting and determining POIs data in real-world urban scenarios can be extremely challenging. This is primarily attributed to the dataset's extensive coverage of numerous urban regions with significant geographical spans, demanding substantial human resources and time commitment to accomplish this task. ii) Fairness of experiments. Previous works did not incorporate POIs when evaluating on the utilized datasets. To present a fair comparison and showcase the improvement achieved by our model, we refrained from introducing additional artificially created data during the pre-training stage. --- Rebuttal Comment 1.1: Title: Thanks for rebuttal Comment: The responses from the authors have clearly addressed the problems pointed out previously. Thanks for all responses. --- Reply to Comment 1.1.1: Comment: Thank you very much. If you have any further questions regarding this work, please feel free to engage in discussions with us.
Summary: This paper introduces a pre-training task for spatio-temporal learning. The algorithm design includes a spatio-temporal knowledge extractor and a pre-training mechanism. The pre-training method is designed to resolve two limitations of existing spatio-temporal learning: (1) lack of comprehensive personalization and (2) insufficient consideration of semantic modelling. Strengths: This paper studies an important problem: GNN pre-training. Weaknesses: 1. The presentation wasn't very clear; please refer to the questions. 2. Missing many recent baselines, e.g., TGAT, JODIE, TGN. The authors can compare with those works using https://github.com/amazon-science/tgl. Besides, more recent methods [1-3] need to be included in the discussion. [1] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks [2] Provably expressive temporal graph networks [3] Do We Really Need Complicated Model Architectures For Temporal Networks? 3. The experiment datasets are all relatively small. I am not sure from the experiments that this method can scale to very large graphs. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Could you please explicitly summarize how the SSL pre-training method differs from methods on static graphs? My impression is that the main contribution is due to the spatial-temporal vs. static graph setting. The mask and predict 2. The hyper-edge definition is not very clear to me. At line 116, "each of which connects multiple vertices", could you please explain how an edge can connect more than 2 nodes? 3. How are intra-class patterns and inter-class relations defined (line 138)? Do they come from the dataset, or do the authors define them based on human knowledge per dataset? 4. Is the pre-training task a link prediction task? Do you explicitly consider the potential information leakage issue during pre-training? For example, the positive edges for pre-training might be sampled as a neighbor when computing the node representation. 5. 
Will GNN size affect the pre-training efficiency? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Response to Reviewer 8n4n*** **Comment 1:** The difference between our pre-training method and the static graph approach. **Response:** Thank you for your suggestion. Compared to static graphs, spatio-temporal (ST) graphs focus more on uncovering the temporal evolution patterns of node features and the dynamic correlations between nodes. The differences between these two approaches primarily lie in the design of the models and the pre-training strategies. (1) Model design. Static graph models typically rely on time-agnostic GNNs for capturing fixed graph structures without time information. However, in ST graph models, it becomes crucial to capture node-specific temporal information and time-evolving spatial dependencies. To tackle these challenges, our GPST employs a temporal encoder and personalized parameter learners to capture the dynamics of node features. Additionally, GPST utilizes a hierarchical hypergraph network to capture the evolving patterns of region-wise spatial relations. As a result, GPST can effectively encode complex ST relations that are not adequately addressed by static graph models. (2) Pre-training strategy: Pre-training in static graph methods typically involves masked autoencoding for features and structures. However, in ST modeling, handcrafted structure information is often noisy and incomplete. This limits the effectiveness of structure reconstruction for pretraining in ST graphs. To address this, we propose an adaptive mask strategy based on ST data for feature autoencoding. This strategy enables the reconstruction of valuable knowledge in an easy-to-hard manner, facilitating high-quality ST pre-training. **Comment 2:** Further explanation of hyperedge. **Response:** Thank you for your feedback. In contrast to conventional edges that transfer information between two nodes, hyperedges expand the notion of edges to act as intermediate information hubs among multiple nodes [1-2]. 
Specifically, the hyperedge connections are denoted as $\textbf{H}\in\mathbb{R}^{N\times H}$, where $N$ and $H$ are the number of nodes and the number of hyperedges, respectively. $\textbf{H}$ captures the connectivity between nodes and hyperedges: messages are first propagated from all nodes to each hyperedge, and the aggregated messages are then transmitted back to each node. [1] Hypergraph Neural Networks [2] Dynamic Hypergraph Neural Networks **Comment 3:** Further explanation of intra-class patterns and inter-class relations. **Response:** Thank you for your comment. The relevant concepts are mentioned in the introduction (starting from line 44). In urban settings, similar regions exhibit comparable ST patterns (e.g., residential areas sharing similar traffic patterns), while there are also associations between different categories of regions (e.g., people moving from residential areas to office areas). These patterns are neither directly obtained from the dataset nor artificially defined. They constitute latent knowledge that we aim to extract during the training process, achieved by the iterative clustering effect of capsule networks [3] integrated into the hypergraph network. [3] Dynamic routing between capsules **Comment 4:** Further clarification of the pre-training task. **Response:** Thank you for your feedback. The pre-training task is to mask and reconstruct node features, which is meticulously defined and explained in the Preliminaries (Section 3, starting from line 121). Upon thorough verification, we have confirmed the absence of information leakage in this work. Additionally, since our pre-training task does not involve link prediction, the mentioned sampling-induced information leakage scenario does not occur. **Comment 5:** Further discussion on the impact of GNN size on the proposed model. **Response:** Thank you for your suggestion. 
The complexity of our intra- & inter-class spatial pattern encoding module (spatial encoder) is proportional to the number of time slots $T$ and the number of regions $R$, similar to a vanilla GCN. Hence, our model exhibits scalability to large datasets, akin to other graph architectures. **Comment 6:** Clarification of recent methods for temporal graph networks. **Response:** Thank you for your suggestion. We observe that the mentioned methods predominantly involve works related to temporal graph networks (TGNs), rather than the field of ST prediction that we are concerned with, and we will elaborate from the following two perspectives. (1) Why did we not consider the mentioned methods as baselines and include them in the discussion of this paper? i) There is a gap between the tasks addressed by TGNs and ST prediction. ST prediction primarily deals with forecasting future values based on historical data in a spatial and temporal context, while temporal graph networks focus on dynamic reasoning over graph structures (e.g., link prediction). Different tasks require tailored model architectures to meet their specific requirements. ii) We followed the experimental setup used in existing research on ST prediction [4-5], and baselines designed specifically for ST prediction tasks may be more representative. [4] Spatial-temporal fusion graph neural networks for traffic flow forecasting [5] MSDR: Multi-step dependency relation networks for spatial temporal forecasting (2) We adapted the mentioned baselines to suit the ST prediction task and discussed the works related to TGNs. i) Additional comparative experiments with JODIE and TGN. Results are presented in the following table (in terms of MAE).

| model/dataset | PEMS08 | METR-LA | NYC TAXI | NYC Citi Bike |
|:--:|:--:|:--:|:--:|:--:|
| JODIE | 18.53 | 3.45 | 6.92 | 2.09 |
| w/ GPST | 16.59 | 3.11 | 6.19 | 1.90 |
| TGN | 19.33 | 3.61 | 7.58 | 2.09 |
| w/ GPST | 16.11 | 3.16 | 5.94 | 1.89 |

ii) Related works. 
Thank you for your suggestion. Due to character limitations, we include the discussion of related works in the global response. --- Rebuttal Comment 1.1: Comment: Q1: I think I didn't make myself clear in the previous feedback. Let me rephrase in another way. Let's say the time-encoding in Eq. 3 is a contribution, as mentioned by the authors. But if we look at Eq. 3, it's just applying an MLP on the input node features, and here the input node features are named the day feature $z^{(d)}$ and the week feature $z^{(w)}$. By replacing $z^{(d)}, z^{(w)}$ with any node features $x$, it looks exactly like a static graph algorithm. The same holds for the adaptive mask strategy; I can use it exactly as-is for a static graph. Q2: I am still confused. You mean a hyper-edge is actually like another node? For example, if a hyper-edge connects node 1 with nodes 2+3, then it's like (1, hyper-edge), (hyper-edge, 2), (hyper-edge, 3)? Q3: Sorry, I know what intra-class patterns and inter-class relations are. My original question was: "Do the intra-class patterns and inter-class relations come from the dataset, or do the authors define them based on human knowledge per dataset?" How do you know this exists? Any evidence? Q4: I was asking this because most graph learning models use link prediction to explicitly learn the graph structure. This is often the case for most temporal graph network methods: pre-train on link prediction, then use it for node classification. Q5: Most pre-training methods work well on large models, e.g., ResNets in vision tasks and BERT models in language tasks, but pre-training often doesn't work well on GNNs. For example, if we take a look at the Open Graph Benchmark, none of those methods use pre-training. One hypothesis is that GNNs are small and thus not suitable for pre-training. Therefore, many graph transformers have been proposed, and they show that pre-training on those large GNN models indeed helps. My question originally meant to ask whether you have the same observation. I should have used "effectiveness" instead of "efficiency". 
Q6: Temporal graph networks can also handle node classification tasks, if I recall correctly. Could you please elaborate on how GPST is integrated with JODIE and TGN?

---

Reply to Comment 1.1.1:

Comment: Thank you very much for your response. We will address the issues you have raised.

**Response to Q1:** Thank you for your feedback. The graph structure varies across different time steps in our method, distinguishing it from static graphs that remain unchanged over time. Specifically, Eq. (3) demonstrates the generation of temporal features based on actual time information. These features can be integrated with the hypergraph structure to produce the dynamic hypergraph structure. For example, the hypergraph connectivity $\textbf{H}'_t$ is customized to the $t$-th time slot using another set of initial temporal embeddings $\textbf{D}_t^{'}$ by $\textbf{H}'_t=\text{softmax}(\textbf{D}_t^{'\top} \cdot \bar{\textbf{H}}')$, where $\bar{\textbf{H}}'\in\mathbb{R}^{d'\times H_S\times R}$ contains embeddings for each hypergraph connection (Section 4.3.1, starting from line 205). Here, $\textbf{H}'_t\in\mathbb{R}^{H_S\times R}$ (also $\textbf{H}'\in\mathbb{R}^{T \times H_S\times R}$) represents the generated dynamic hypergraph structure. This structure is employed for personalized message aggregation on node features at different time steps. For static graphs, however, the aforementioned process does not exist. The proposed adaptive mask strategy exhibits spatio-temporal awareness, as it is based on the aforementioned dynamic hypergraph structure.

**Response to Q2:** Thank you for your response. The concept of a hyperedge shares some similarities with the additional type of node you mention. It is akin to a virtual node at another hierarchical level that associates multiple actual nodes. The distinction lies in that hyperedges do not represent concrete entities (e.g., regions, roads, intersections) in the physical world, whereas nodes are usually used to represent entities in the observed data.
Furthermore, related works [1-2] have indicated the effectiveness of hypergraphs in clustering and latent factor mining.

[1] Inhomogeneous Hypergraph Clustering with Applications, NIPS 2017
[2] Hypergraph Clustering Based on PageRank, KDD 2020

**Response to Q3:** Thank you for your comment. In brief, these patterns are not labeled in the dataset. Instead, we expect the model to learn these latent relations. Our case study serves as evidence of GPST's capability to discover such patterns. Specifically, the employed datasets record urban ST data (e.g., traffic flows). Similar regions within the city exhibit analogous traffic patterns (e.g., high traffic during holidays in commercial districts), while dissimilar regions show interdependencies (e.g., people moving from residential to office areas during peak hours), representing normal human activities. Such patterns are unlabeled in the dataset. Thus, we empower GPST with the capacity to capture these patterns by constructing a learnable hierarchical hypergraph network. We confirm this through the case study (Section 5.3). In Figures 5(a) and 5(b), GPST groups three regions with similar traffic patterns into a single category. The latitude and longitude data further affirm their common classification, e.g., all being commercial districts. Figure 5(c) illustrates the potential traffic migration between regions of different categories captured by GPST.

**Response to Q4:** Thank you for your response. If you have any questions about pre-training tasks, please feel free to discuss them with us.

**Response to Q5:** Thank you for your valuable feedback. We fully recognize the significance of having sufficient and high-quality training data for achieving successful pre-training. In the domain of ST prediction tasks, although the dataset may not contain a large number of nodes, each individual node encompasses a substantial amount of temporal features.
For instance, the METR-LA dataset records traffic data over a four-month period with a sampling frequency of 5 minutes. This rich historical data serves as a solid basis for the pre-training process of GPST, and its ability to effectively enhance downstream baselines further substantiates this observation.

**Response to Q6:** Thank you for your reply. Similar to other baselines, we utilize the representations generated by GPST as inputs to enhance the performance of JODIE and TGN (Appendix A.1.3). The implementation details for JODIE and TGN are as follows.

(1) JODIE: We assigned static embeddings to each node (or region) and used RNNs to handle the dynamic embedding updates of node features, where GNNs replaced the interactions between users and items. A time encoding layer combined with embedding mapping operations was used to update the dynamic embeddings of nodes to accommodate predictions at different time steps.

(2) TGN: MLPs and RNNs were employed as the message and memory functions for updating node features. A multi-head attention mechanism was utilized to aggregate messages from neighbors (constructed based on distance).

Thank you again. We welcome further discussion and are available to address any questions you may have.
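As a concrete companion to the response to Q1 above, the dynamic hypergraph generation $\textbf{H}'_t=\text{softmax}(\textbf{D}_t^{'\top}\cdot\bar{\textbf{H}}')$ can be sketched in plain Python. All shapes, toy values, function names, and the choice to normalize over the region axis are illustrative assumptions, not the authors' actual implementation:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dynamic_hypergraph(d_t, h_bar):
    """Contract the temporal embedding d_t (length d') with the learnable
    tensor h_bar (d' x H_S x R), then normalize each hyperedge's scores
    over regions, yielding a time-specific connectivity H'_t (H_S x R)."""
    d_prime, H_S, R = len(d_t), len(h_bar[0]), len(h_bar[0][0])
    scores = [[sum(d_t[k] * h_bar[k][e][r] for k in range(d_prime))
               for r in range(R)]
              for e in range(H_S)]
    return [softmax(row) for row in scores]

# Toy sizes: d' = 2 temporal dims, H_S = 2 hyperedges, R = 3 regions.
d_t = [1.0, 0.5]
h_bar = [
    [[0.1, 0.2, 0.3], [0.0, 0.1, 0.0]],  # slice for temporal dim 0
    [[0.2, 0.0, 0.1], [0.3, 0.3, 0.3]],  # slice for temporal dim 1
]
H_t = dynamic_hypergraph(d_t, h_bar)
# Each row of H_t is a distribution over regions for one hyperedge.
print(all(abs(sum(row) - 1.0) < 1e-9 for row in H_t))  # True
```

Because $\textbf{D}_t^{'}$ changes with $t$, running the same contraction at each time slot yields a different connectivity matrix, which is what distinguishes this construction from a static graph.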
Summary: The authors introduce a spatio-temporal pre-training framework that can be easily integrated into downstream baselines. The framework comprises personalized parameter learners and hierarchical hypergraph networks. The former enables the acquisition of spatio-temporal personalized representations, and the latter captures the intra-class and inter-class correlations among regions. Besides, the authors design an adaptive mask strategy to guide the model in learning diverse relationships across regions. Experiments are conducted on representative benchmarks over four real-world datasets to evaluate the proposed framework.

Strengths:

1. The authors propose a novel pre-training framework for spatio-temporal prediction which can be easily applied to existing advanced ST neural networks.

2. The authors devise an adaptive mask strategy that guides the model in a progressive manner, facilitating the acquisition of useful knowledge from other categories to recover unknown categories. The comparisons with other mask strategies demonstrate the superiority of the designed mechanism.

3. The authors evaluate both the original performance and the enhanced performance with the proposed framework for different baselines on four datasets. The results indicate that the proposed model significantly improves the downstream baselines' prediction performance, confirming the effectiveness of the framework.

Weaknesses:

1. The presentation of this paper should be improved:

(1) The explanations of some concepts are not clear. For instance, the meaning of the symbol $M_{r,t}$ in Section 4.2 is unclear. Does it represent the mask operation? Furthermore, the first-level hyperedges mentioned in Section 4.3.2 lack a clear explanation, as it is the first occurrence of the term "first level" in the paper.

(2) Equation (4) is referenced multiple times in the paper to illustrate the process of generating embeddings.
However, it is necessary to provide a more detailed explanation of how the equation is actually implemented. For instance, in Line 173 of Section 4.2, it would be helpful to provide specific equations for the computations involved. The inclusion of concrete examples would greatly enhance the understanding of Equation (4) and its functionality.

(3) In Section 4.2, there are multiple subscripts used, including (r,t), r, and t. It would be better to provide a more detailed explanation to clarify the differences between these notations.

(4) In Section 5.1.1, the paper mentions the use of the absolute error loss function to optimize the parameters. However, there is a lack of detailed description regarding how this loss function is precisely employed. It would be beneficial to provide a more thorough explanation of how the optimization process utilizes the absolute error loss function, including any specific computations.

2. The authors adopt the hierarchical hypergraph structure to capture the semantic-level association of multiple regions from a global perspective. However, the construction of the hypergraph requires a more detailed description. Specifically, the definitions of hyperedges and capsules in Section 4.3 need to be clarified. The paper mentions that hyperedges are treated as high-level capsules, but what exactly does a capsule consist of? Furthermore, it would be helpful to explain the type of hypergraph connection that a hyperedge represents.

3. Some related work with similar ideas is not discussed. For example, the paper [1] also builds a pre-training model on urban data and builds a personalized model in a specific area. Though the idea is not the same, the authors should at least discuss the differences in their related work.

4. It would be beneficial to include a brief discussion on future work in the paper.

[1] A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection, ICDE 2023.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What does "another initial" in Line 206 of Section 4.3.1 mean? 2. Is the aforementioned c in Section 4.4 from Section 4.3.1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have not adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Response to Reviewer fFpq***

**Comment 1:** Clarifications of notations and formulas.

**Response:** We sincerely apologize for any confusion that may have arisen regarding the notations and equations, and we greatly appreciate your feedback. In order to enhance the clarity of our work, we provide detailed explanations for each of these issues below and will supplement and update them in subsequent versions.

(1) $\textbf{M}_{r,t}$ represents the mask operation for the $r$-th region in the $t$-th time slot. The first-level hyperedges denote the final embeddings $\bar{\textbf{s}}\in\mathbb{R}^{H_S\times T\times d}$ obtained from Section 4.3.1. In detail, the designed hierarchical hypergraph neural architecture contains the hypergraph capsule clustering network (Section 4.3.1) and the classes-aware hypergraph network (Section 4.3.2). We consider the former the first level and the latter the second level (also the high level).

(2) For the time-dynamic encoding, GPST generates time-dynamic feature extraction parameters $\textbf{W}_t\in\mathbb{R}^{d\times d}, \textbf{b}_t\in\mathbb{R}^d$ using the temporal features $\textbf{D}_t\in\mathbb{R}^{d'}$. Formally, the customization is conducted by $\textbf{W}_t = \textbf{D}_t^\top \cdot \bar{\textbf{W}},~\textbf{b}_t = \textbf{D}_t^\top \cdot \bar{\textbf{b}}$, where $\bar{\textbf{W}}\in\mathbb{R}^{d'\times d\times d}, \bar{\textbf{b}}\in\mathbb{R}^{d'\times d}$ are learnable parameters for transformations and bias vectors.

(3) $r$ and $t$ are the indices over the $R$ regions and $T$ time slots. For example, $\textbf{E}_{r,t}\in\mathbb{R}^d$ is the representation of the ST data for the $r$-th region in the $t$-th time slot; $\textbf{E}_t\in\mathbb{R}^{R\times d}$ denotes the embedding matrix for all $R$ regions in the $t$-th time slot; $\textbf{E}_r\in\mathbb{R}^{T\times d}$ denotes the embedding matrix for all $T$ time slots in the $r$-th region.
(4) The details of the loss function are presented in the supplementary material (Section A.1.3) for reference. Due to space constraints, we refrain from presenting it here again.

(5) Another initial temporal embedding $\textbf{D}_t^{'}$ is a new temporal embedding generated based on Equation (3). The aforementioned $\bar{c}$ in Section 4.4 is from Section 4.3.1.

**Comment 2:** Further explanation of hypergraphs and capsule networks.

**Response:** (1) We provide an explanation of the definition and construction of hypergraphs in Section 3 (starting from line 114), as below: A hypergraph $\mathcal{H} = \{\mathcal{V}, \mathcal{E}, \textbf{H}\}$ is composed of three parts: i) Vertices $\mathcal{V}=\{v_{r,t}: r\in R, t\in T\}$, each of which represents a region $r$ in a specific time slot $t$. ii) $H$ hyperedges $\mathcal{E}=\{e_1,...,e_H\}$, each of which connects multiple vertices to reflect the multipartite region-wise relations. iii) The vertex-hyperedge connections $\textbf{H}\in\mathbb{R}^{N\times H}$, where $N$ denotes the number of vertices. To fully excavate the potential of hypergraphs in region-wise relation learning, we adopt a learnable hypergraph scheme where $\textbf{H}$ is derived from trainable parameters.

(2) According to [1], a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity, such as an object or an object part. The length of the activity vector represents the probability that the entity exists, and its orientation represents the instantiation parameters. This characteristic empowers GPST with the ability to express region features, such as the functionality of a region. In contrast to low-level capsules, high-level capsules can represent the characteristics of more advanced entities, such as the common features of a certain category of regions.
By computing the agreement between low-level capsules and high-level capsules using a dynamic routing algorithm, the model can better capture the dynamic semantic information of regions.

(3) According to the definition of hypergraphs, the hyperedges are not predefined but derived from trainable parameters. Therefore, each hyperedge in the hypergraph structure can represent a certain underlying semantic correlation. For example, in Section 4.3.1, a hyperedge (also a high-level capsule) represents the probability of a region belonging to a certain category.

[1] Dynamic Routing Between Capsules, NIPS 2017.

**Comment 3:** The inclusion of the suggested work CMSF.

**Response:** Thank you very much for your suggestion. We will incorporate it into the discussion of related work. Due to character limitations, we have included the discussion of all related works mentioned by the reviewers in the global response. Please refer to the author rebuttal to access it.

**Comment 4:** Discussion of our future work for further investigation.

**Response:** Thank you for your suggestions. We briefly mention future work in the supplementary material (Section A.4), and we will expand on it based on the latest developments.

Future work: We will continue to explore more generalizable and versatile spatio-temporal pre-training frameworks. For example, one interesting research direction is addressing the issue of disparate distributions between training and testing data to enhance the practicality of pre-training models. Additionally, lightweight algorithms are also worth investigating, considering the high demand for computational efficiency in real-world urban scenarios.

---

Rebuttal Comment 1.1:

Title: Thanks for your response

Comment: Dear authors; Thanks for your effort in responding to my questions. I have no further points arising from the discussion. Regards

---

Reply to Comment 1.1.1:

Comment: We extend our sincere appreciation for your response.
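Closing out this thread, the time-dynamic parameter customization $\textbf{W}_t = \textbf{D}_t^\top \cdot \bar{\textbf{W}}$, $\textbf{b}_t = \textbf{D}_t^\top \cdot \bar{\textbf{b}}$ described in the response to Comment 1 above amounts to a tensor contraction over the temporal dimension $d'$. A minimal plain-Python sketch (toy shapes, values, and the function name are illustrative assumptions, not the authors' code):

```python
def customize_params(d_t, w_bar, b_bar):
    """Contract the temporal embedding d_t (length d') with shared learnable
    tensors w_bar (d' x d x d) and b_bar (d' x d) to obtain time-specific
    feature-extraction parameters W_t (d x d) and b_t (d)."""
    d_prime = len(d_t)
    d = len(b_bar[0])
    W_t = [[sum(d_t[k] * w_bar[k][i][j] for k in range(d_prime))
            for j in range(d)]
           for i in range(d)]
    b_t = [sum(d_t[k] * b_bar[k][i] for k in range(d_prime)) for i in range(d)]
    return W_t, b_t

# Toy sizes: d' = 2 temporal dims, d = 2 feature dims.
d_t = [1.0, 2.0]
w_bar = [[[1.0, 0.0], [0.0, 1.0]],   # slice for temporal dim 0
         [[0.0, 1.0], [1.0, 0.0]]]   # slice for temporal dim 1
b_bar = [[0.1, 0.2], [0.3, 0.4]]
W_t, b_t = customize_params(d_t, w_bar, b_bar)
print(W_t)  # [[1.0, 2.0], [2.0, 1.0]]
```

Since $\textbf{D}_t$ differs per time slot, each slot receives its own transformation matrix and bias, which is what makes the resulting feature extraction "personalized" over time.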
Summary: This paper introduces a spatio-temporal pre-training framework called GPST (Generative Pre-training framework for Spatial Temporal prediction) to enhance the performance of existing models in traffic management and travel planning. The goal is to address the challenges of integrating and extending refined models while achieving better predictive performance. The framework consists of two main components: (1) a spatio-temporal knowledge extractor that utilizes personalized parameter learners and hierarchical hypergraph networks, modules designed to model personalized representations and semantic relationships between regions, which have been overlooked in previous work; (2) an adaptive mask strategy proposed to guide the knowledge extractor in learning robust spatio-temporal representations. The paper conducts extensive experiments on representative benchmarks and demonstrates the effectiveness of the proposed GPST method.

Strengths:

1. The two research questions are interesting. Question (1), on personalized information extraction, is well studied in STGNNs. Question (2) is relatively novel, since region functions could make a difference but have not been considered much in the literature.

2. The experimental results look promising.

Weaknesses:

1. Related works should include SSL works from both graphs and multivariate time series. I think the current version has some missing references, especially in graph SSL, which could be related to the strategy developed in this paper (this needs further justification).

2. Figure 3 is really too small to understand clearly. The visual illustration would be better decoupled into a text description.

3. Section 4 has too many details that could be moved to the appendix, since these detailed designs are not quite novel, though important. I'd suggest making this section more compact and highlighted.

4. I have some concerns about the improvement from using GPST. It is clear in Table 1 that some methods benefit a lot and some not that much.
I expect a more detailed analysis of this. Is some design of GPST redundant in GWN, MTGNN, etc.? What is the core part of GPST that makes performance better? Or, is GPST dependent on certain architecture designs? How are these two factors (architecture and SSL strategy) decoupled? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Thanks for clarifying the above-mentioned weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Response to Reviewer H5wT***

**Comment 1:** The inclusion of the suggested related work.

**Response:** Thank you for your suggestion. The self-supervised learning methods for graph networks and multivariate time series forecasting will be incorporated into the related work in our revised version. Due to character limitations, we have included the discussion of all related works mentioned by the reviewers in the global response. Please refer to the author rebuttal to access it.

**Comment 2:** Explanation of figure size and corresponding text description.

**Response:** We sincerely apologize for reducing the size of Figure 3 due to the page limit. We will adjust it to its regular size in our revision. In addition, all textual descriptions for the figures will be added and updated. For instance, for Figure 1: the upper-left figure shows that the traffic patterns of the same region in different time periods behave differently, and this situation also occurs across different regions during the same period (lower left); the middle figures show that the relationships between regions change dynamically over time (lower), while most existing works only construct static graphs (upper). The right figure shows the migration relationship between regions of different categories.

**Comment 3:** Further discussion of the organization of Section 4.

**Response:** Thank you for your valuable feedback. We appreciate your suggestions and will make the necessary adjustments to the structure and description of Section 4 to highlight the key points of our model. Specifically, to emphasize the key elements in each section, we will simplify the personalized parameter operations mentioned in Sections 4.2-4.4 by referencing their standard forms and moving their specific implementations to the supplementary material.
Additionally, the specific details of the embedding layer mentioned in Section 4.2, which involve conventional operations, will also be included in the supplementary material. In Section 4.3, we will highlight the concept of Intra- & Inter-class Spatial Patterns Encoding and focus on describing the key module for modeling semantic correlations among multiple regions. The fundamental operations related to capsule networks will also be streamlined (e.g., Equation (6)).

**Comment 4:** Further analysis of the performance improvement of GPST over the baselines.

**Response:** (1) Explanation of the performance improvement of GPST on different baselines. For the mentioned GWN and MTGNN, GPST shares some similar insights with these baselines, as all approaches adopt learnable modules to model spatial correlations. In GWN and MTGNN, learnable node embeddings are utilized to generate graph structures and learn node associations, while GPST employs a learnable hypergraph structure combined with capsule networks to capture region associations. In particular, GPST considers the spatio-temporal semantic correlations in urban scenes, providing effective compensatory signals for GWN and MTGNN. Furthermore, experimental results demonstrate that with the inclusion of GPST, GWN and MTGNN exhibit faster loss reduction during training, as shown in the table below.

| model/epoch | 5 | 10 | 20 |
|:-----:|:-----:|:-----:|:-----:|
| GWN | 19.79 | 18.67 | 17.14 |
| w/ GPST | 19.77 | 17.61 | 15.94 |
| MTGNN | 17.84 | 16.75 | 16.21 |
| w/ GPST | 16.94 | 15.93 | 15.05 |

(2) Does GPST rely solely on a specific module? From the ablation study (Section 5.3), we observed that the personalized parameter learner, the Intra- & Inter-class spatial patterns encoding, and the adaptive mask strategy are all crucial modules in GPST, and they collectively contribute to its performance improvement.
The injection of discriminative parameters can help the model better capture the spatio-temporal relationships across different periods and regions. Intra- & Inter-class spatial patterns encoding empowers the model with the ability to aggregate information from regions with similar functionalities from a global perspective and to discover inter-class correlation patterns. The adaptive mask further promotes intra- & inter-class association learning. These three components complement each other, contributing to the optimal performance achieved by GPST.

(3) GPST exhibits both decoupling and correlation between model architecture and training strategy. On one hand, the model architecture and training strategy are distinct components with different responsibilities. The training strategy serves as a guide for the model architecture by employing masked signals to direct the learning process. The model architecture, acting as a feature generator, encodes complex spatio-temporal features driven by the training task. On the other hand, their common objective is to generate effective spatio-temporal representations. The proposed adaptive masking strategy is designed based on the model architecture, guiding the model to learn intra-class and inter-class associations in an easy-to-hard manner, thereby generating higher-quality representations.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for the reply. I don't have further questions.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your response.
Rebuttal 1: Rebuttal: ***Responses Regarding Literature Review***

We extend our heartfelt gratitude to the reviewers for their valuable time and effort dedicated to evaluating our work. Due to constraints on character count, we address the discussion of all relevant works mentioned by the reviewers in this section.

***Response to Reviewer H5wT***

**Comment 1:** The inclusion of the suggested related work.

**Response:** Thank you for your suggestion. The self-supervised learning methods for graph networks and multivariate time series forecasting will be incorporated into the related work in our revised version.

SSL methods for graphs and MTS: In recent years, self-supervised learning methods for graphs have received significant attention. GNNs based on contrastive learning generate different views through data augmentation. Following that, a loss function is employed to maximize the consistency of positive sample pairs while minimizing the consistency of negative sample pairs across views. For instance, GraphCL [1] generates two views of the original graph by applying node dropping and edge shuffling, and performs contrastive learning between them. Furthermore, JOAO [2] proposes an approach that automatically selects graph augmentations to facilitate contrastive learning. Another research direction involves generative graph neural networks, where the graph data itself is leveraged as a natural supervisory signal and representation learning is achieved through reconstruction. GPT-GNN [3] pre-trains by reconstructing graph features and edges, while GraphMAE [4] employs node feature masking in both the graph encoder and decoder to reconstruct features and learn graph representations. In multivariate time series prediction, STEP [5] integrates long-term temporal features to generate temporal representations using an MAE-based pre-training method.
CoST [6] argues that disentangling temporal features is beneficial for time series prediction and proposes a contrastive learning approach to decouple trend and seasonal representations. However, spatio-temporal forecasting requires considering complex temporal evolution patterns and spatial correlation mechanisms simultaneously, and the pre-training paradigm for such tasks is still an area of exploration.

[1] Graph Contrastive Learning with Augmentations, NIPS 2020.
[2] JOAO: Graph Contrastive Learning Automated, ICML 2021.
[3] GPT-GNN: Generative Pre-Training of Graph Neural Networks, KDD 2020.
[4] GraphMAE: Self-Supervised Masked Graph Autoencoders, KDD 2022.
[5] Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting, KDD 2022.
[6] CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting, ICLR 2022.

***Response to Reviewer 8n4n***

**Comment 6:** Discussion of recent methods for temporal graph networks.

Thank you for your suggestion. In response to this valuable feedback, we will incorporate a dedicated paragraph in our revised version to provide a detailed analysis and discussion of the mentioned methods and their relevance to spatio-temporal prediction scenarios. Temporal graph networks form a similar research line aimed at reasoning about dynamic graph structures. CAWs [7] proposed a causal anonymous walks approach to inductively represent dynamic graph networks, specifically addressing interaction prediction for newly appearing nodes. PINT [8] introduced injective temporal message passing and relative positional features to achieve more expressive temporal graph networks. Additionally, GraphMixer [9] argued that complex model structures may not necessarily be suitable for temporal networks and presented a conceptually and technically simple MLP-based architecture for link prediction.
[7] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks, ICLR 2021.
[8] Provably Expressive Temporal Graph Networks, NIPS 2022.
[9] Do We Really Need Complicated Model Architectures for Temporal Networks?, ICLR 2023.

***Response to Reviewer fFpq***

**Comment 3:** The inclusion of the suggested work CMSF.

**Response:** Thank you very much for your suggestion. We will incorporate it into the discussion of related work. CMSF [10] proposes a master-slave framework for identifying urban villages. The master model generates region representations in a pre-trained manner, while the slave model fine-tunes specific regions for accurate identification. The proposed model can be distinguished from CMSF in the following three aspects.

(1) Task. CMSF focuses on identifying urban villages in the city, which falls under the classification task. In contrast, our model aims to predict future spatio-temporal conditions given the historical records, thus belonging to a regression task.

(2) Technical approach. While CMSF employs techniques like GNNs to establish semantic correlations in static scenes, our task involves temporal dynamics. To address this, we have designed a hierarchical hypergraph framework with personalized parameter learners to model time-aware semantic correlations among regions. In addition, an adaptive masking strategy is proposed to further enhance the model's performance.

(3) Generalizability. After pre-training, CMSF requires specific configurations for the slave model to adapt to the master model's predictions, limiting its applicability for enhancing other similar models. Recognizing that different downstream baselines have distinct advantages and expressive capabilities when dealing with different data distributions, we have designed a universal pre-training framework that can enhance existing baselines to improve their performance.
[10] A Contextual Master-Slave Framework on Urban Region Graph for Urban Village Detection, ICDE 2023.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Provably Efficient Algorithm for Nonstationary Low-Rank MDPs
Accept (poster)
Summary: The paper presents a new way to perform off-policy RL coupled with representation learning over non-stationary low-rank MDPs, both in a parameter-based and a parameter-free fashion. The paper proposes some interesting new algorithmic techniques to tackle this (off-policy exploration, data-transfer model learning, and target policy update with periodic restart). Strengths: # originality The work seems quite original in both the analysis and the contribution. # quality The quality of this work is satisfactory and the analysis is thoughtful. # clarity The exposition is clear and neat, I would suggest presenting the algorithms differently (See weaknesses) # significance The work seems to provide a significant contribution to the field, due to the several techniques it introduces. Weaknesses: My only comment is about the exposition of the algorithm. I would suggest not citing equations but reporting them, and providing the output of the subroutines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I didn't find any section where the limitations of the proposed method were addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the helpful review! Below we address the reviewer’s comments. Q: My only comment is about the exposition of the algorithm. I would suggest not citing equations but reporting them, and providing the output of the subroutines. A: Thanks for the suggestions. We will revise the presentation of the algorithm as the reviewer suggested. Finally, we thank the reviewer again for the helpful comments and suggestions for our work. We are more than happy to answer any further questions that you may have during the discussion period.
Summary: This paper presents an algorithm for solving non-stationary low-rank MDPs. It also provides a theoretical analysis of the algorithm. Strengths: This is an interesting question. The presentation is clear. Weaknesses: I sketched through the results and the proof, and I suspect that the proof might be wrong. On page 14, inequality (i) requires the authors to provide an upper bound on E_{P^*} f_h^k - E_{\hat P}f_h^k. The paper says that inequality (i) holds by Lemma A.13 and Lemma A.15. However, neither lemma is related to the term I mentioned above. Therefore, I am worried that the proof might be wrong. Moreover, the writing of the proof can be improved, and the clarity of the paper has room for improvement. For example, it would be helpful to provide the definition of $\lambda$ in the main part. It would also be helpful to mention that $W\leq K$ in the main part. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: See the 'weakness' section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: See the 'weakness' section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the helpful review! Below we address the reviewer’s comments. Q1: I sketched through the results and the proof, and I suspect that the proof might be wrong. On page 14, inequality (i) requires the authors to provide an upper bound on $E_{P^*,{\pi^{\star,k}}} f_h^k - E_{\hat P,{\pi^{\star,k}}}f_h^k$. The paper says that inequality (i) holds by Lemma A.13 and Lemma A.15. However, neither lemma is related to the term I mentioned above. Therefore, I am worried that the proof might be wrong. A1: **We confirm that the proof is correct!** The term $E_{P^*,{\pi^{\star,k}}} f_h^k - E_{\hat P,{\pi^{\star,k}}}f_h^k$ can indeed be upper bounded by using Lemma A.15. To see this, Lemma A.15 provides upper bounds on the value function difference between estimated models and true models for any generic reward. Now consider a specific reward function $r^k$, which takes the value $f_h^k$ at step $h$ and the value 0 at all other steps. For such a reward, we have $ V_{P^{\star,k},r^k}^{\pi^{\star,k}}-V_{\hat{P}^k,r^k}^{\pi^{\star,k}}=E_{P^*,{\pi^{\star,k}}} f_h^k - E_{\hat P,{\pi^{\star,k}}}f_h^k$. In this way, the bound in Lemma A.15 on $ V_{P^{\star,k},r^k}^{\pi^{\star,k}}-V_{\hat{P}^k,r^k}^{\pi^{\star,k}} $ becomes a bound on $E_{P^*,{\pi^{\star,k}}} f_h^k - E_{\hat P,{\pi^{\star,k}}}f_h^k$. Q2: Moreover, the writing of the proof can be improved, and the clarity of the paper has room for improvement. For example, it would be helpful to provide the definition of $\lambda$ in the main part. It would also be helpful to mention that $W \leq K$ in the main part. A2: Thanks for the suggestions. We will make our best efforts to improve both the clarity and readability of our paper in the revision. Specifically, regarding the regularizer $\lambda$, we have defined it in line 1 of Algorithm 1 and specified its value in line 285 in Theorem 4.4. We will further emphasize these in the revision.
We will mention $W \leq K$ in the main part, as the reviewer suggested. Finally, if our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider raising the score of your evaluation. Certainly, we are more than happy to answer any further questions that you may have during the discussion period. We thank the reviewer again for the helpful comments and suggestions for our work. --- Rebuttal 2: Title: Dear Reviewer ga5z, your feedback is important to us Comment: Dear Reviewer ga5z, As the author-reviewer discussion period started one week ago and will end very soon, we would greatly appreciate any feedback you could provide. Could you please check our response at your earliest convenience? This way, if you have further questions, we will still have time to respond before this discussion period ends. We thank the reviewer very much in advance for your time and efforts. Best regards, Authors. --- Rebuttal 3: Title: Your feedback is important for us Comment: Dear Reviewer ga5z, Since the author-reviewer discussion period will end in about one and a half days, could you please check our response at your earliest convenience? This way, if you have further questions, we will still have time to respond. We thank the reviewer very much in advance for your time and efforts. Best Regards, Authors
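The reward-construction identity in A1 above can be sanity-checked numerically on a small tabular finite-horizon MDP. In this illustrative sketch (all dimensions, the random kernels, the policy, and the function $f$ are our placeholders, not the paper's objects), the value of the constructed reward $r^k$ is computed by a backward Bellman recursion, while $E_{P,\pi}[f_h]$ is computed by forward occupancy propagation; their model differences coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, h_star = 4, 3, 5, 2  # states, actions, horizon, the step h (0-indexed)

def random_kernel():
    P = rng.random((H, S, A, S))
    return P / P.sum(axis=-1, keepdims=True)  # P[t, s, a, :] is a distribution over s'

P_true, P_hat = random_kernel(), random_kernel()
pi = rng.random((H, S, A)); pi /= pi.sum(axis=-1, keepdims=True)  # policy pi_t(a|s)
f = rng.random((S, A))                        # plays the role of f_h^k
mu0 = rng.random(S); mu0 /= mu0.sum()         # initial state distribution

def value_backward(P):
    # V^pi under the reward r_t = f * 1{t == h_star}, via Bellman backups
    V = np.zeros(S)
    for t in reversed(range(H)):
        r = f if t == h_star else np.zeros((S, A))
        Q = r + P[t] @ V                      # shape (S, A)
        V = (pi[t] * Q).sum(axis=-1)
    return mu0 @ V

def expected_f_forward(P):
    # E_{P, pi}[f(s_h, a_h)] via forward propagation of the state occupancy
    d = mu0.copy()
    for t in range(h_star):
        d = np.einsum('s,sa,sap->p', d, pi[t], P[t])
    return np.einsum('s,sa,sa->', d, pi[h_star], f)

lhs = value_backward(P_true) - value_backward(P_hat)  # V_{P,r}^pi - V_{Phat,r}^pi
rhs = expected_f_forward(P_true) - expected_f_forward(P_hat)
assert abs(lhs - rhs) < 1e-10
```

So a bound on the value-function difference for generic rewards (as in Lemma A.15) indeed bounds the expectation difference of $f_h^k$.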
Summary: This paper investigates nonstationary RL in the context of episodic low-rank MDPs, where both transition kernels and rewards may change from round to round, and where the low-rank model contains unknown representations. The goal of this work is then to develop methods with a low average dynamic suboptimality gap, which is a measure of how well the agent performs across all rounds. The authors start by proposing PORTAL, a base algorithm with three main ideas: (a) off-policy exploration for data collection, which is particularly useful under non-stationarity; (b) the transfer of recent history data collected under previous, different transition kernels to benefit the estimation of the current model; and (c) updating target policies with periodic restart. PORTAL requires few hyperparameters, which greatly affect performance but may not be easy to set. For this reason, the authors also propose Ada-PORTAL, a second method that uses PORTAL as a subroutine and additionally treats the selection of the hyperparameters as a bandit problem. Under standard assumptions for low-rank MDPs and assuming that non-stationarity is not significant, the authors show that both methods achieve an arbitrarily small average dynamic suboptimality gap with polynomial sample complexity. Strengths: 1. The paper extends the literature on low-rank MDPs by incorporating non-stationarity. In particular, the work advances the recent research on nonstationary linear (mixture) MDPs. Overall, the contribution is interesting, given the attention that both low-rank MDPs and non-stationarity have recently attracted. 2. The paper makes assumptions that are consistent with both the low-rank MDP literature and the non-stationary RL literature. 3. I was only able to check part of the derivations, which looked correct.
I liked that the authors clarified how their work differs from recent works on non-stationarity or low-rank MDPs, not just in terms of the problem statement but also in terms of the new technical challenges. Weaknesses: 1. I appreciate the fact that the authors provided insights and clarifications in several parts of the paper. But the paper is still hard to follow. I think what generally helps with such works is to provide an overview of the proof strategy, explain exactly which elements from prior works are used, and what elements are new. I acknowledge that the paper contains some discussions here and there, but I feel that a more structured approach would significantly improve the exposition. 2. In its current form and presentation, I feel that the paper is only relevant to a very small audience of the ML/AI community. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why does periodic restart in this work not require a smooth visitation frequency assumption (compared to Fei et al., 2020)? 2. The authors use EXP3-P in Ada-PORTAL. Is this choice critical or could other choices be made instead? Are there reasons to prefer one over the others? 3. The authors stress the significance of \tau and W. For the first algorithm, how exactly should W and \tau be selected? Could the authors provide clear guidelines? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing the helpful review! Below we address the reviewer’s comments, and we will revise the paper as suggested. If our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider possibly raising the score of your evaluation. Q1: Provide an overview of the proof strategy. A1: Thanks for the comments. Please note that we have provided a proof sketch starting from line 451 in Appendix A. Based on the suggestions, we will further explain the new elements as follows in the revision. In step 1, we decompose the average dynamic suboptimality gap into three terms, which can be grouped into two parts: $T_1$ and $T_2$. In step 2, for part $T_1$, corresponding to model estimation errors, the proof contains two sub-steps. (i) First, by Lemmas A.13 and A.15, we show that model estimation errors can be bounded by the average of the truncated value functions of bonus terms plus a variation budget term. In this sub-step, we provide a new nonstationary MLE guarantee in Lemma A.10, which is novel as it characterizes the performance of MLE under a nonstationary environment for the first time. We also develop the Nonstationary Step Back Lemma to handle the distribution shift from the training data distribution to the target data distribution. (ii) We then upper bound the average of the truncated value functions as in Lemma A.18. Here, we adopt an auxiliary anchor representation for each block to deal with the challenge arising from time-varying representations when using standard elliptical-potential-based methods. This design of an auxiliary anchor representation is new because of the unknown and nonstationary representations. In step 3, we bound part $T_2$, corresponding to the performance difference bound. The analysis of step 3 is inspired by [4], which explores policy optimization under tabular MDPs, and we extend it to low-rank MDPs. Our novelty lies in adopting different techniques to bound Eq.
(11) (please refer to A3), which removes Assumption 1 in [4]. Q2: The paper is only relevant to a small audience of the ML/AI community. A2: We will revise the paper as follows in order for it to be appreciated by a larger audience in the ML/AI community. (i) We will provide application examples to connect the nonstationary low-rank MDP model to realistic applications such as real-time online auctions, arm manipulation in robotics, autonomous driving, etc. In this way, we hope to attract a broad set of audience in these application areas. (ii) In order for the paper to be appreciated by a broad audience interested in nonstationary RL, we will revise our description of the algorithm to help readers grasp our main design ideas for handling nonstationarity and improving sample efficiency. We will also explain our results on the sample complexity at a high level to make the performance guarantee easier to understand. (iii) On the theory side, we will explain our proof techniques better and more clearly to make them accessible to people who may use these techniques to study various other RL problems with nonstationarity, such as offline RL, multi-agent RL, RL with safety constraints, etc. Q3: Why does this work not require a smooth visitation assumption? A3: Fei et al. (2020) [4] chose to make the following smooth visitation assumption in proving their Lemma 5: $\max_{s_j}\sum_{s_{j+1}}|P^{\pi^{(j)}}(s_{j+1}|s_j)-P^{\pi^\prime}(s_{j+1}|s_j)| \leq C\|\pi^{(j)}-\pi^\prime\|_{\infty}$. Interestingly, we discovered that such an inequality can be proven to always hold with $C=1$, as follows.
\begin{align*}
&\quad \max_{s_j}\sum_{s_{j+1}}\left|P^{\pi^{(j)}}(s_{j+1}|s_j)-P^{\pi^\prime}(s_{j+1}|s_j)\right|\\
&=\max_{s_j}\sum_{s_{j+1}}\Big|\sum_{a}P(s_{j+1}|s_j,a)\pi^{(j)}(a|s_j)-\sum_{a}P(s_{j+1}|s_j,a)\pi^\prime(a|s_j)\Big|\\
&\leq \max_{s_j}\sum_{s_{j+1}}\sum_{a}P(s_{j+1}|s_j,a)\left|\pi^{(j)}(a|s_j)-\pi^\prime(a|s_j)\right|\\
&=\max_{s_j}\sum_{a}\sum_{s_{j+1}}P(s_{j+1}|s_j,a)\left|\pi^{(j)}(a|s_j)-\pi^\prime(a|s_j)\right|\\
&=\max_{s_j}\sum_{a}\left|\pi^{(j)}(a|s_j)-\pi^\prime(a|s_j)\right|=\left\|\pi^{(j)}-\pi^\prime\right\|_{\infty}
\end{align*}
Q4: Is the choice of EXP3-P critical, or could other choices be made instead? Are there reasons to prefer one over the others? A4: We believe other choices can be made instead. In Ada-PORTAL, the selection of the hyper-parameters such as $W$ and $\tau$ can be regarded as an adversarial bandit problem. In this paper, we choose EXP3-P because it is one of the most popular algorithms for adversarial bandits. Any other efficient algorithm for adversarial bandits, for example SAO in [5], can also be applied to serve the purpose. Given that SAO exhibits the same regret performance as EXP3-P (omitting the logarithmic term), there is no special preference between these two algorithms. Q5: The authors stress the significance of $\tau$ and $W$. For the first algorithm, how exactly should $W$ and $\tau$ be selected? Could the authors provide clear guidelines? A5: Thanks for the comments. Please note that in Appendix B, we explain how to select $W$ and $\tau$ in detail. More specifically, Theorem 4.4 provides an upper bound on the average suboptimality gap $\mathrm{Gap_{Ave}}(K)$ w.r.t. the values of $\tau$ and $W$. We can then select their values to minimize this upper bound. The optimal values of $\tau$ and $W$ can be found in Appendix B. [1] Cheung, W. C., et al. (2023). Nonstationary reinforcement learning: The blessing of (more) optimism. Management Science. [2] Cheung, W. C., et al. (2022). Hedging the drift: Learning to optimize under nonstationarity. Management Science, 68(3), 1696-1713.
[3] Guo, X., et al. (2019). Learning mean-field games. NeurIPS 32. [4] Fei, Y., et al. (2020). Dynamic regret of policy optimization in non-stationary environments. NeurIPS 33. [5] Bubeck, Sébastien, and Aleksandrs Slivkins (2012). The best of both worlds: Stochastic and adversarial bandits. COLT. --- Rebuttal Comment 1.1: Title: thank you for the response Comment: I thank the authors for their detailed response. I will increase my score from 5 to 6, since all my concerns and questions were addressed. I encourage the authors to incorporate the various clarifications into the final version of this work for improved clarity, presentation and exposition.
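The derivation in A3 above, which shows that the smooth visitation inequality always holds with $C = 1$, can also be checked numerically on random kernels and policies. A minimal sketch (the convention $\|\pi-\pi'\|_\infty = \max_s \sum_a |\pi(a|s)-\pi'(a|s)|$ follows the derivation's final expression; the dimensions and random instances are our illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 6, 4
P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)  # P(s'|s,a)

def rand_policy():
    pi = rng.random((S, A))
    return pi / pi.sum(axis=-1, keepdims=True)

for _ in range(100):
    pi1, pi2 = rand_policy(), rand_policy()
    # induced state-to-state kernels P^pi(s'|s) = sum_a P(s'|s,a) pi(a|s)
    K1 = np.einsum('sap,sa->sp', P, pi1)
    K2 = np.einsum('sap,sa->sp', P, pi2)
    lhs = np.abs(K1 - K2).sum(axis=-1).max()    # max_s sum_{s'} |...|
    rhs = np.abs(pi1 - pi2).sum(axis=-1).max()  # ||pi1 - pi2||_inf
    assert lhs <= rhs + 1e-12                   # the assumption with C = 1
```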
Summary: This paper studies reinforcement learning (RL) under non-stationary MDPs, where the state transition and reward functions change across episodes. The paper focuses on the low-rank MDP setting, in which the state transition function admits a low-rank decomposition. The authors propose a novel algorithm, namely PORTAL, along with theoretical results that characterize a bound on the average suboptimality gap of the learned policies over the episodes. Additionally, the paper also introduces an algorithm, namely Ada-PORTAL, for adaptive tuning of the hyperparameters of PORTAL. Strengths: * The paper is the first to study non-stationarity in low-rank MDPs. * The proposed method is sound and results in a better suboptimality gap than previous related work (Wei & Luo, 2021). Weaknesses: * A few algorithmic decisions and assumptions require better explanation/justification to improve the clarity of the paper (see below). * The method relies on an assumption that is not very realistic (reachability) and should be better discussed (see Limitations below). * (Minor) The paper does not have an experimental section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Below, I have a few questions and constructive feedback for the authors: 1) In Section 3.2, I would suggest making it clearer what a “round” is. For instance, does each round have a fixed number of episodes? 2) The current version of Section 4.1 is a bit hard to follow. For instance, in Step 1, terms such as “bonus term” and “target policy” are used without being previously defined. 3) “However, in low-rank MDPs, the bonus term $b^k_h$ cannot serve as a point-wise uncertainty measure”. I suggest elaborating on why this is true. 4) In Step 1, why does the exploration policy take two uniformly chosen actions? This algorithmic decision is not discussed or explained. 5) I suggest writing comments in Algorithm 1 mentioning which lines correspond to Steps 1, 2, and 3 to improve clarity.
6) In Algorithm 1, why are states indexed twice (subscript and superscript) with the time step $h$, e.g., $s_h^{(k, h)}$? The meaning of the indexes is not defined. 7) What is the role of the coefficient $\tilde{\alpha}_{k,W}$? How is it selected? 8) “Compared with the previous work using periodic restart (Fei et al., 2020), whose choice of $\tau$ is based on a certain smooth visitation assumption, we remove such an assumption and hence our choice of $\tau$ is applicable to more general model classes.” Why can you remove this assumption? How is the value of $\tau$ selected? Please elaborate on why it is necessary to reset the target policy every $\tau$ episodes. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Assumption 4.3 is not very realistic: in most MDPs, it is not possible to reach any state of the MDP in a single step. For instance, in a maze, the probability of reaching the end of the maze from the initial state in a single step is zero. Is this assumption also made in related works, e.g., Wei & Luo (2021)? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing helpful reviews! Below we address the reviewer’s comments, and we will revise the paper as suggested. If our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider possibly raising the score of your evaluation. Q1: Indicate what a “round” is. A1: Yes, each "round" has a fixed number of episodes. Q2: Revise the current version of Section 4.1. Define “bonus term” and “target policy”. A2: The "bonus term" is the level of uncertainty added to the actual reward to account for the estimation error; this term leads to a nearly optimistic estimate of the value function. The "target policy", as defined in Sec. 3.2, is the policy generated by the algorithm, which is used to assess the algorithm's performance through a comparison with a series of optimal policies. Q3: Elaborate on why $\hat{b}^k_h$ cannot serve as a point-wise uncertainty measure. A3: The main reason is that low-rank MDPs have unknown representations $\phi(s,a)$, and the performance guarantee on the estimation error of such an unknown representation via MLE is given in terms of the expected value over $(s,a)$, as in Lemma A.10. This further results in the following uncertainty bound on the expected value over $(s,a)$ (as in Lemma A.13), not on each individual $(s,a)$: $$ E_{(s_h,a_h) \sim (P^k, \pi)} [f_h^k(s_h,a_h)] \leq E_{(s_{h-1},a_{h-1}) \sim (P^k,\pi)}[\hat{b}^k_h(s_{h-1},a_{h-1})]+\Delta $$ where $f_h^k$ is the total variation distance between the estimated and true models, i.e., the estimation error, and $\Delta$ is a term related to the variation budget. Q4: In Step 1, why does the exploration policy take two uniformly chosen actions? A4: The main reason that exploration takes uniformly chosen actions is that the representation is unknown in low-rank MDPs.
So the agent needs to uniformly explore all actions and directions over the state-action space to gain comprehensive knowledge of the environment. Further, we note that a single uniform step is not sufficient. The covariance matrix $U$ stores historical data information for future use in exploration. As explained above, the collected data is required to include at least one uniformly chosen action. Subsequently, to effectively control estimation errors under the target distribution, we use $U$ in the step back lemma (Lemma A.12) to deal with distribution shifts, where importance sampling and one more uniform step are required. This leads to the design of an exploration policy with two uniformly chosen actions. Q5: Mention which lines correspond to Steps 1, 2, and 3 in Alg. 1. A5: Step 1 corresponds to lines 5-8, Step 2 corresponds to line 9 (which calls Algorithm 2 E$^2$U), and Step 3 corresponds to lines 13-17. Q6: Define the indexes of $s_h^{(k,h)}$ in Alg. 1. A6: The subscript $h$ in ${s}_{h}^{(k,h)}$ indicates that the data is collected at time step $h$ of each trajectory. The superscript $(k,h)$ indicates in which loop the data is collected (as in lines 3 and 4 of Algorithm 1, there are two loops over $k$ and $h$). Q7: What is the role of $\tilde{\alpha}_{k,W}$? A7: $\tilde\alpha_{k,W}$ quantifies the data distribution shift from the training data distribution to any target distribution. Consider Lemma A.13, where we bound the estimation error of the estimated model under a target distribution (which is different from the distribution of the training data used to estimate the model). $\tilde{\alpha}_{k,W}$ arises from applying the step back lemma to transfer the target distribution back to the distribution of the training data. Specifically, $\tilde\alpha_{k,W}$ is determined by the upper bound on the distribution shift level given by $O((A+d^2)\log(kH|\Phi||\Psi|))$. Q8: Why can you remove the assumption of [1]? How is $\tau$ selected? Elaborate on the necessity of resetting the target policy.
A8: For why the assumption can be removed, please refer to A3 in our response to Reviewer FzWn. Regarding how to select $\tau$, please refer to A5 in that response. We next elaborate on the necessity of resetting the target policy every $\tau$ episodes. Intuitively, if the environment (i.e., the MDP transition and/or reward) changes significantly during some episodes, then samples far in the past are not very informative about the current transitions/reward and hence are not very useful for updating the current policy. Hence, periodic restart helps to stick with the most relevant samples and stabilize the algorithm's performance against the nonstationary drift of the environment. Q9: The paper does not have an experimental section. A9: We will try our best to work out some experimental results. Q10: Assumption 4.3 is not very realistic. Is this assumption also made in related works, e.g., Wei and Luo (2021)? A10: Assumption 4.3 is introduced to deal with the nonstationarity. In order to improve the model estimation efficiency, we need to use historical data collected across different transition kernels to estimate the current model. To control such a model estimation error, both historical and current transition kernels should maintain non-zero transition probabilities (Assumption 4.3) to avoid singularity issues. Notably, Wei and Luo (2021) do not introduce an algorithm for nonstationary low-rank MDPs. Instead, for any type of MDP, they take as input an off-the-shelf algorithm that performs effectively in nonstationary scenarios with low variation, and then use a black-box approach to output an algorithm that performs efficiently under an arbitrarily nonstationary environment without knowing the variation budget. In other words, Wei and Luo (2021) would need our Algorithm 1 as an input in order to specialize to the nonstationary low-rank setting. This would make them also rely on our Assumption 4.3. [1] Jin, C., et al. (2020). Provably efficient reinforcement learning with linear function approximation. COLT. [2] Wei, C. Y., & Luo, H. (2021). Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach. COLT. --- Rebuttal Comment 1.1: Comment: I thank the authors for carefully responding to all my questions/concerns. I am hence increasing my score appropriately. I would like to ask the authors to use the extra page to add the clarifications made in the rebuttal to the main body of the paper. This is particularly important for the clarity of the paper. --- Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: We thank the reviewer very much for the further suggestions and for raising the score of your evaluation. We will add all the clarifications and discussions made in the rebuttal to the main body of the paper in the revised paper. Thanks again for your time and efforts during the review process!
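For readers less familiar with the "bonus term" in A2 and the covariance matrix $U$ in A4: in linear and low-rank settings, such exploration bonuses are typically elliptical, shrinking in directions of feature space where data has accumulated. The sketch below is a generic illustration of this mechanism (the feature dimension, $\beta$, $\lambda$, and the data are our placeholders, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(2)
d, lam, beta = 5, 1.0, 0.5

phis = rng.normal(size=(200, d))  # features phi(s, a) of visited state-action pairs

def bonus(U, phi):
    # elliptical bonus beta * sqrt(phi^T U^{-1} phi): large in rarely explored directions
    return beta * np.sqrt(phi @ np.linalg.solve(U, phi))

U_small = lam * np.eye(d) + phis[:10].T @ phis[:10]  # regularized covariance, 10 samples
U_large = U_small + phis[10:].T @ phis[10:]          # same, after all 200 samples

phi = rng.normal(size=d)
b_small, b_large = bonus(U_small, phi), bonus(U_large, phi)
assert b_large <= b_small  # more data => smaller bonus, in every direction
```

Since $U_{\text{large}} \succeq U_{\text{small}}$, the bonus can only shrink as data accumulates; per A3, in low-rank MDPs such bonuses are only guaranteed to control uncertainty in expectation over $(s,a)$, not pointwise.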
Rebuttal 1: Rebuttal: Dear all reviewers, We thank all the reviewers for providing the helpful review comments! We further thank Reviewers niTs, FzWn and 5jBS for your positive evaluation of our paper. We have responded to your specific questions and will revise our paper based on your comments. We also thank Reviewer ga5z for going through our proof, and want to clarify that the proof step you pointed out as the main concern is correct. Please see our response for detailed explanation. We thank all the reviewers again for your efforts!
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
A Framework for Fast and Stable Representations of Multiparameter Persistent Homology Decompositions
Accept (poster)
Summary: Persistent homology (PH) is the most important method in topological data analysis (TDA), and multiparameter persistence (MPH) is its natural generalization, which is expected to significantly boost its performance. However, because of several mathematical roadblocks, MPH could not be effectively used in real-life applications. In this work, the authors propose a general framework to vectorize MPH. Their framework generalizes all the known existing work as special cases. Furthermore, they provide a subset of vectorizations which are stable. Finally, another obstruction to using MPH in real-life applications was its computational cost. The authors' new framework provides a much faster way to obtain these vectorizations. Strengths: The framework provided is very general and addresses a critical need in TDA. Stable MPH vectorizations and their convergence studies are an excellent contribution to the theory of TDA. Computationally feasible MPH vectorizations will finally enable MPH to be used effectively in real-life applications. Weaknesses: The experiments section could be more detailed, and a more in-depth analysis could have been provided of the performance of these vectorizations in real-life applications. In particular, the authors compare the performance of their vectorizations only against other TDA models, and some simple other models in the point cloud setting. Instead, a more thorough analysis could have been provided to compare these MPH vectorizations against SOTA models from different domains. A performance evaluation in the graph setting would be nice, as it is another very important application area of PH. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: From a theoretical standpoint, stable vectorizations are always preferable; however, from the ML side, when enforcing stability we might be losing too much information and sacrificing performance.
With this in mind, can you also suggest practical, computationally efficient T-CDR methods which potentially provide better performance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: As the authors stated, one of the main limitations of the approach comes from the generality/flexibility of their framework. For ML applications, there are several choices to make (hyperparameter tuning) to define a suitable vectorization for a given situation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *The experiments section could be more detailed, and a more in-depth analysis could have been provided of the performance of these vectorizations in real-life applications. In particular, the authors compare the performance of their vectorizations only against other TDA models, and some simple other models in the point cloud setting. Instead, a more thorough analysis could have been provided to compare these MPH vectorizations against SOTA models from different domains.* Thank you for this suggestion. Given that our goal was mostly to show improvement over the topological baselines, we put less emphasis on the non-topological ones, as also advocated in the earlier works cited. However, we added a new experiment on graph data, for which MPH is known to work well, and that incorporates more SOTA baselines. See the general comment and Table 3 in the rebuttal pdf. *A performance evaluation in the graph setting would be nice, as it is another very important application area of PH.* Thank you for this suggestion. We have added a new experiment on graph data, which shows that our framework can be applied to more general structured data than geometric point clouds. See our answer above and our general comment to all reviewers. *From a theoretical standpoint, stable vectorizations are always preferable; however, from the ML side, when enforcing stability we might be losing too much information and sacrificing performance. With this in mind, can you also suggest practical, computationally efficient T-CDR methods which potentially provide better performance?* Thank you for this suggestion. Notice that the parameters we use in the experiments ($p=0,1$) are already not stable for the interleaving distance between persistence modules, but only for the bottleneck distance, as per Theorem 1.
One future direction is indeed to investigate other types of T-CDR parameters: an interesting step in this path would be to borrow descriptors from the computer vision literature (e.g., spin images from `Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes` (Johnson, Hebert, 1999)) in order to capture more information (even if it deteriorates stability) about the module summands of the decomposition in the $w$ or $\phi$ T-CDR parameters. We will make this clear in the text. --- Rebuttal Comment 1.1: Comment: Thank you for your answers and additional experiments. I have no further questions. Good luck with your submission.
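To make the stability-versus-information trade-off in this exchange concrete, here is a minimal single-parameter, persistence-image-style vectorization (a generic sketch in the spirit of weight-and-kernel descriptors; the Gaussian splatting, grid, and persistence weighting are our illustrative choices, not the paper's T-CDR operators):

```python
import numpy as np

def persistence_image(diagram, res=8, sigma=0.1, lims=(0.0, 1.0)):
    """Vectorize a persistence diagram [(birth, death), ...]: each point is mapped
    to (birth, persistence) and splatted as a Gaussian weighted by its persistence."""
    grid = np.linspace(*lims, res)
    xs, ys = np.meshgrid(grid, grid, indexing='ij')
    img = np.zeros((res, res))
    for b, d in diagram:
        pers = d - b  # the weight; changing it shifts the stability/information balance
        img += pers * np.exp(-((xs - b) ** 2 + (ys - pers) ** 2) / (2 * sigma ** 2))
    return img.ravel()

vec = persistence_image([(0.1, 0.5), (0.2, 0.9)])
assert vec.shape == (64,)
```

Roughly speaking, a weight that vanishes near the diagonal buys stability, while more aggressive weights retain more information at the cost of stability, which is exactly the trade-off raised in the question above.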
Summary: The paper investigates persistent homology with multiple filtration parameters. The authors propose a framework, T-CDR, which generalizes previous studies in multi-parameter persistent homology. They further present stability and convergence guarantees for S-CDR, which is a special case of T-CDR introduced to ensure robustness. Third, their theoretical claims and contributions are supported by empirical convergence studies as well as classification tasks on several immunohistochemistry datasets. Strengths: 1. [Originality] The proposed framework, T-CDR, in Definition 1 not only generalizes previous work on candidate decompositions (e.g., MPI) but also approaches based on the rank invariant (MPL, MPK). I believe that it opens up new research on ensuring different guarantees by varying the parameters/operators in (1). 1. [Significance] The stability and convergence guarantees (Theorem 1 and (8)) of S-CDR, which is a special case of T-CDR, provide a more stable and useful multi-parameter persistent homology framework. 1. [Quality] The authors supplement the convergence rate claim with informative empirical convergence studies, showing a clear trend of the error rate decreasing as $n^{-1/2}$ as $n$ grows, matching eq. (8). Weaknesses: 1. In the empirical convergence rate experiment, it was not clear to me what "ground truth" representation you are comparing against. For the synthetic data in Figure 3, it makes sense that you can get access to the density $f$ (and therefore $\mathcal F_{C, f}$ and $\mathbb M$), given that you generated the data from some probability distribution. How do you get the $\mathbb M$ for the immunohistochemistry data (as in Figure 4)? Providing more clarification on this would be beneficial. 1. It seems like the MPL runtime can be improved 25x-50x with the sparse implementation in Algorithm 4 (see, e.g., Row #2 of Table 2 vs.
Rows #4 and #6 of Table 2); this suggests that the runtime win might be due to the implementation rather than a better algorithmic time complexity. The claim will be more convincing if the authors can provide more insights, justifications, or analyses of the time complexity as to why the proposed algorithm is more efficient than prior work. 1. What are the bifiltration parameters used for each experiment in Section 4? Are they radius and co-density for Figure 3, and CD8 and CD68 for the immunohistochemistry data? Adding more explanation there would improve the clarity. 1. In Sections 1-3, the function $f$ is used to define the multi-parameter filtration function; however, in Section 4, the notation is defined as the density (e.g., in L254). I would suggest choosing another notation to avoid confusion. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It looks to me like most of Section 4.1 is a continuation of Theorem 1; is there any specific reason to put this in this section rather than in Section 3 (and make it a Proposition/Corollary)? 1. Empirically, how sensitive is the algorithm to a larger intrinsic dimension $d$? If I understand the experiments correctly, they all seem to have $d=2.$ I am curious about how the intrinsic dimension $d$ will impact convergence and/or performance. 1. I am also curious about how the proposed framework can be applied in higher-order homology descriptors (empirically). This is also related to Question #2 above. 1. TDA has been applied in numerous different domains such as galaxies, proteins, single-cell sequencing, 3D CAD point clouds, medical imaging [A-C], etc. I am curious if the proposed multi-parameter filtration work can be expanded to fields outside immunohistochemistry. 1. How optimal/tight is the bound in (8)? Can we get a better convergence result if we choose op, $\omega$, and/or $\phi$ differently? 1. 
[Minor] Given that T-CDR is a generalization of both the rank-invariant and candidate decomposition methods, should the name template “candidate decomposition” representation be modified to better reflect what it can be capable of? 1. [Minor/Typo?] Should the citation in L477 be [CB20] instead of the PersLay paper? --- [A] Wasserman, Larry. “Topological Data Analysis.” Annual Review of Statistics and Its Application 5 (2018): 501–32. [B] Chen, Yu-Chia, and Marina Meila. “The Decomposition of the Higher-Order Homology Embedding Constructed from the k-Laplacian.” Advances in Neural Information Processing Systems 34 (2021). [C] Wu, Pengxiang, Chao Chen, Yusu Wang, Shaoting Zhang, Changhe Yuan, Zhen Qian, Dimitris Metaxas, and Leon Axel. “Optimal Topological Cycles and Their Application in Cardiac Trabeculae Restoration.” In International Conference on Information Processing in Medical Imaging, 80–92. Springer, 2017. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors have addressed the limitations of their work. Negative social impact is not applicable because this work is a theoretical contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *In the empirical convergence rate experiment, [...] How do you get the $\mathbb M$ for the immunohistochemistry data (as in Figure 4)?* This is a very good question. As you mentioned, we do not know the true distribution, so the target that we use (and that we measure the distance to) is the S-CDR obtained from the density computed with a KDE that was fit on the whole point cloud. We then generated the curves by drawing several subsamples, and measuring the differences between the S-CDRs of the KDE fit on the subsamples and the target S-CDR. Note that subsampling in this way is heuristically justified by the theory of the bootstrap/resampling methods. *the authors can provide more insights, justifications, or analyses of the time complexity as to why the proposed algorithm is more efficient than prior work.* This is a very good observation: indeed, concerning MPL, the huge improvement we observe is definitely due to the data structure we use; while the code in `multipers` needs all barcodes to be stored in a heavy list of arrays, we use an implicit, sparse representation of candidate decompositions that allows for fast queries of barcodes and thus efficient MPL and MPI computations. We believe this is also what makes our implementation of T-CDRs and S-CDRs quite fast, in addition to their already favorable theoretical complexity (which is roughly equal to the product of the numbers of birth and death corners of our implicit representations of modules). We will make this more clear in the text. *What are the bifiltration parameters used for each experiment in Section 4?* We always used Rips + codensity (estimated with kernel density estimators (KDE)), which is more stable yet very similar to the filtrations used in earlier works. 
Concerning the immunohistochemistry data, CD8 and CD68 were used only to create several different point clouds with their own labels (e.g., the point cloud of CD8 cells, with label "CD8", and the one of CD68 cells, with label "CD68"), and then they were all processed with Rips + codensity filtrations. We will make this clear in the text. See also our answer to question 3 of reviewer MKJT. *is there any specific reason to put this in this section rather than in Section 3 [...]?* Thank you for this suggestion. We initially wanted to emphasize the fact that S-CDRs and T-CDRs are general representations that can handle any candidate decompositions, while results in Section 4.1 are specific to the Rips + codensity filtrations. *Empirically, how sensitive is the algorithm to a larger intrinsic dimension $d$?* This is a very good question. In terms of convergence, we ran some new experiments on point clouds in three dimensions, see Figure 1 in the rebuttal pdf, and did observe convergence with specific rates. In terms of classification performance, notice that while immunohistochemistry data is in two dimensions, UCR data was processed with time-delay embedding with a length-$3$ window, which produced point clouds in dimension $3$. In terms of running-time performance, the intrinsic dimension only has an impact on the candidate decomposition methods (e.g., `MMA`, `Rivet`, etc) and/or the filtration values (e.g., density computed with KDE), but no impact on our T-CDRs and S-CDRs, as they only process persistence modules and are oblivious to how these modules were computed. *I am also curious about how the proposed framework can be applied in higher-order homology descriptors (empirically).* We are a bit unsure of what is meant by higher-order homology. --- If this refers to the homology dimension (e.g., $H_1$ vs. $H_{2}$), note that S-CDRs and T-CDRs can be applied straightforwardly, as they are oblivious to the homology degrees and only need persistence modules. 
We also ran new convergence experiments to check whether convergence was still occurring in $H_2$ for a synthetic data set in $R^3$ in Figure 1 of the rebuttal pdf; while rates can be different from those in homology dimensions $0$ and $1$, we did always observe convergence happening. --- If this refers to the number of filtrations, the only limitation is the computational complexity, as the definitions of the T-CDRs and S-CDRs apply in any dimension. *I am curious if the proposed multi-parameter filtration work can be expanded to fields outside immunohistochemistry.* We have added a new experiment on graph data, which shows that our framework can be applied outside of geometric point clouds. See Table 3 in the rebuttal pdf and our general comment to all reviewers. *How optimal/tight is the bound in (8)? Can we get a better convergence result [...]?* We believe our bound is not necessarily optimal. Roughly speaking, our bound is obtained with: - Stability of interleaving between Prokhorov and Wasserstein (a) - Control of the Hausdorff distance (b) and of density estimation (c) Note that (a) is tight by the universality property (see section 5.2 in `The Theory of the Interleaving Distance on Multidimensional Persistence Modules` (Lesnick, 2015)). However, it should be possible to improve the bound by finding a specific class of data sets that would provide a tighter control of either (b) or (c). Whether it is possible to also get a better bound by fine-tuning T-CDR parameters is a very interesting suggestion that we plan to study in future work. *Should the name template “candidate decomposition” representation be modified to better reflect what it can be capable of?* Thanks for the suggestion. Since T-CDR can only work once a decomposition has been provided (wherever it comes from), we believe that it is important that this appears in the representation name. 
*Should the citation in L477 be [CB20] instead of the PersLay paper?* Are you referring to L177 (the paper only has 458 lines)? In this case, we actually meant PersLay, since $w'$ and $\phi'$ are applied on restrictions of the persistence module to lines, or, in other words, to single-parameter persistence diagrams. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! - For "higher-order homology descriptors", yes, I am referring to homology dimension. Thanks for the additional experiments, I have no further question on this. - I am referring to L477 in the appendix. Quoting what you write there: "In this section, we provide theoretical and experimental evidence that the multiparameter persistence image (MPI) [CCI+20], which is another decomposition-based...". Given that you are talking about MPI, I naturally think that you should cite [CB20] rather than [CCI+20]. Maybe this is a typo? The rest looks good! --- Reply to Comment 1.1.1: Title: Answer to reviewer BEmH Comment: Yes, you are totally right, thank you for noticing this mistake! We will put the correct reference [CB20] instead. Thanks again for all your comments and suggestions on our work.
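The subsampling protocol described in this rebuttal (measure the distance between descriptors of subsamples and a target descriptor fit on the whole point cloud) can be sketched in a few lines of numpy. Here a grid-evaluated Gaussian KDE serves as a hypothetical stand-in for the S-CDR; the `descriptor` function and all constants are illustrative assumptions, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def descriptor(points, bandwidth=0.3, grid=np.linspace(-3, 3, 50)):
    # Stand-in for an S-CDR: a Gaussian KDE evaluated on a fixed grid
    # (the real pipeline would vectorize a persistence module instead).
    d = grid[None, :] - points[:, None]
    return np.exp(-0.5 * (d / bandwidth) ** 2).mean(axis=0)

# "Target" representation: descriptor of the full sample, mimicking the
# KDE fit on the whole point cloud that is used as ground truth above.
full = rng.normal(size=5000)
target = descriptor(full)

# Bootstrap-style subsamples of growing size n; average the sup-norm
# error of their descriptors against the target.
errors = {}
for n in (50, 200, 800, 3200):
    errs = [np.abs(descriptor(rng.choice(full, size=n)) - target).max()
            for _ in range(20)]
    errors[n] = float(np.mean(errs))

# Errors should shrink as n grows, roughly like n^{-1/2}.
assert errors[3200] < errors[50]
```

Plotting `log(errors)` against `log(n)` and fitting a line is then how an empirical convergence rate such as the $n^{-1/2}$ trend would be read off.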
Summary: As the title clearly suggests, a general representation of multiparameter persistent homology (MPH) is introduced. Unlike earlier representations that either yield a loss in information or are unstable, the proposed vectorization is strictly more informative, and is shown to be stable for specific settings of the parameters in the general approach. The approach is evaluated on several real data sets. Strengths: (S1) I do not have an overview of MPH literature, but assuming that the authors provide a complete and honest overview of earlier work, the contributions in this paper are substantial. (S2) Figure 1 and the discussion on Lines 170 – 185 clearly and nicely position the work in the existing literature. Weaknesses: The main experimental results in Table 1 show that the proposed S-CDR approach is half of the time significantly outperformed by other existing methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: (Q1) Looking at Figure 2, it does not seem as challenging, but rather straightforward to represent the plot in the middle with the image on the right? (Q2) You claim that S-CDR is strictly more powerful than MPK, but MPK achieves better scores on DPOC, PPTW, GPAS, GPMVF data sets (Table 1). This deserves at least a brief discussion. (Q3) Why do you exclude some data sets considered in [CB20] from your experiments (Table 1)? Are the filtrations used to calculate your S-CDR the same as the filtrations to calculate MPK, MPL and MPI in earlier works? Why do you not evaluate the running times also on the Table 1 data sets, and compare them against the results in [CB20]? Other minor comments: - The order of Figures 1 and 2 should be reversed, since the latter is referenced first in the text? Similarly, Sections 4.2 and 4.3, or Table 1 and Table 2 should be reversed? - When listing the contributions, add “(Section 3)” after “statistical convergence of our representations”. 
The Outline paragraph soon below then becomes almost redundant. - Line 169, Line 177, Line 343: Perslay -> PersLay - Line 202: “the two following S-CDR”. I would remove “two,” as it might imply a diminished contribution, since you actually define an S-CDR for every p in N. - I would start every item in a list on Lines 194-195 and Line 204 on a new line, as this is important information that the reader should be able to find and read easily. If there are space limitations, you can sacrifice the Outline paragraph in Section 1, see one of the minor comments above. - Table 1: What is highlighted in red? - Line 314: “the same bifiltration as in the previous section”. Subsection? In this case too, the previous subsection does not contain any information on bifiltration. - Table 1: What does “P” denote for the last row in the table? - Line 326: “almost always outperform” is an overstatement. - Table 2: For better readability, only consider s or ms, remove them from the individual cells and place them in brackets in the column titles. - Line 344: parameter -> parameters - References: Be consistent between capital case vs. lower case for the paper titles. Remove # from 2D, and remove double reference [CdGO15], [CdSGO15]. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations and future research directions are clearly presented in the final section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *The main experimental results in Table 1 show that the proposed S-CDR approach is half of the time significantly outperformed by other existing methods.* This is correct. The table indicates that for some time series and graph tasks, topological methods are less effective than "classical" methods; in these cases, the close scores of the topological methods indicate limits of what TDA can achieve. The real strength of the topological methods (and notably S-CDR) can be seen from the results on the immunohistochemistry data. *Looking at Figure 2, it does not seem as challenging, but rather straightforward to represent the plot in the middle with the image on the right?* Thanks for the suggestion. We will make the required changes. *You claim that S-CDR is strictly more powerful than MPK, but MPK achieves better scores on DPOC, PPTW, GPAS, GPMVF data sets (Table 1). This deserves at least a brief discussion.* You are right. We believe that S-CDR shines particularly when candidate decompositions of modules matter the most (w.r.t. rank-invariant based methods). We believe this is what happens for the immunohistochemistry data (and what explains the improvement), while UCR data can be classified using only the information encoded in the rank invariant, so that the extra content in the candidate decompositions and our corresponding S-CDRs is not used so much, leading to comparable yet not strongly superior accuracies. We will update the text to make this clear. *Why do you exclude some data sets considered in [CB20] from your experiments (Table 1)?* This is a very good question. Unfortunately, the immunofluorescence data presented in [CB20] is protected and has not been released publicly, so we could not reuse it. Instead, we provided experiments on a similar public data set from immunohistochemistry. 
*Are the filtrations used to calculate your S-CDR the same as the filtrations to calculate MPK, MPL and MPI in earlier works?* The filtrations we used are slightly different from the ones of earlier works for MPK, MPL and MPI, but they are very similar: earlier works use Alpha + Distance-to-Measure (DTM) filtrations while we use Rips + codensity (computed with kernel density estimators (KDE)) filtrations. For the sake of completeness and fairness, we reran the experiments also using Alpha and DTM filtrations, see Table 1 in the rebuttal pdf; we did not observe drastic changes in the corresponding results, and the main takeaway of the table stays the same: S-CDRs are quite competitive with both topological and non-topological baselines. The reason we initially used different filtrations is the following: despite being theoretically related, Alpha and Rips filtrations differ in the final complex they filter. Indeed, on a dataset of n points, Rips filters the complete, abstract simplex with n vertices, while Alpha filters (roughly speaking) the Delaunay triangulation of the points. This induces a lack of stability and of convergence rates for Alpha + DTM in the multi-parameter setting, as some critical simplices might never appear in the single-parameter filtrations of some lines in the module if their densities are too small, leading to infinite bars, while the complete simplex associated to Rips contains so many faces that every cycle is eventually closed by some simplex at some point. *Why do you not evaluate the running times also on the Table 1 data set, and compare it against the results in [CB20]?* This is a very good question. The running-time improvements for UCR data sets (in Table 1) were not particularly striking due to the quite small data set sizes (leading to running times that are quite small for all competitors) so we did not include them. For the sake of completeness, we put them back (for a few representative UCR data sets) in Table 2 of the rebuttal pdf. 
*What is highlighted in red?* Red and bold numbers correspond to the best scores in each category (non-topological baselines / single-parameter persistence / multi-parameter persistence) for the UCR data sets, and to the best score overall for the immunohistochemistry data. We will make this clear in the text. See also our answer to reviewer pDMj. *What does “P” denote for the last row in the table?* "P" stands for single-parameter persistence. Since all three such methods ("PSS-K", "P-I", "P-L") provided (almost) the same result, we only provided one number in order to save space. Thank you very much also for all the minor suggestions and comments concerning the text. We will make the required changes. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response!
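As a side illustration of the single-parameter Rips filtration discussed in this thread: the $H_0$ barcode of a Vietoris-Rips filtration can be computed with a plain union-find (Kruskal) pass, since a connected component dies exactly when the minimum-spanning-tree edge that merges it enters the filtration. This `rips_h0_persistence` helper is a minimal sketch of that single-parameter special case, not the paper's multi-parameter pipeline:

```python
import numpy as np
from itertools import combinations

def rips_h0_persistence(points):
    """H0 persistence of a Vietoris-Rips filtration via union-find.

    Every point is born at time 0; a component dies when the edge that
    merges it into another component appears, so the finite death times
    are exactly the MST edge lengths (Kruskal's algorithm).
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Edges of the complete graph, sorted by their filtration value.
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)
    # n-1 finite bars; one component survives as an infinite bar.
    return deaths

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
bars = rips_h0_persistence(pts)
```

On this toy cloud the two unit edges kill two components early, and the outlier at `(5, 5)` only merges at distance $\sqrt{41}$, producing one long bar.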
Summary: The paper addresses topological data analysis in the multiparameter setting and focuses on the problem of representation of multiparameter persistent homology by vector space elements so that standard ML algorithms can be applied. The key advance in the paper is the leveraging of decomposition ideas in multiparameter settings to devise a representation framework that is shown to be theoretically stable with practical, efficient computation. The strengths in statistical convergence, accuracy and computational speed are validated in specific UCR classification datasets. Strengths: Originality: The paper draws inspiration from past work on multiparameter persistent homology by Carriere et al. It offers a theoretical formulation that provides stability guarantees (e.g. statistical rates of convergence) and is a generalization of the past work. Quality: The paper is comprehensive including the appendix. I did not go through the details of the proofs and cannot comment on the theoretical correctness of the paper. Clarity: The paper is very clear with comprehensive references to the latest in the field. Significance: The paper appears to be of theoretical significance and will be of importance to researchers in Topological Data analysis. Weaknesses: While the experimental results show the power of S-CDR on several datasets, it would be interesting to see how the method applies to more complex datasets (e.g. from imaging-related applications). The experimental validation is limited but sufficient. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I could not follow the notation in Table 1, as it is not clear why there are multiple highlighted numbers (I would assume that the red numbers correspond to the best classification results, but in both Table 1 and in the appendix on time series classifications there are multiple flagged columns of relevance). 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *Experiments on more complex datasets, e.g., imaging related applications. The experimental validation is limited but sufficient* We have included a second set of experiments on graph data that demonstrates that our representations can be successfully applied to other types of data sets. Results are available in Table 3 of the rebuttal pdf. See also our general comment to all reviewers. *Table 1 notations: clarify the highlighted numbers* We agree that our notations were confusing. Red and bold numbers correspond to the best scores in each of the three categories (non-topological baselines / single-parameter persistence / multi-parameter persistence) for the UCR data sets, and to the best score overall for the immunohistochemistry data. We will make this more clear in the text by putting the best score overall in bold font and decorating (e.g., by underlining) the best in each category. --- Rebuttal Comment 1.1: Title: Satisfied with the rebuttal. Comment: Thanks for the additional experiments and the revisions.
Rebuttal 1: Rebuttal: We first want to thank all reviewers for their careful and insightful reviews. We have responded to their comments, and will be happy to provide further explanations if needed. We have also prepared a rebuttal pdf that contains a few more experiments that were rightfully suggested by the reviewers. If the paper is accepted, these new results will be included in the submission, either in the main paper or in the supplementary. In short, 1. we redid classification experiments on UCR data sets and made sure we used the same filtrations as topological baseline competitors (Table 1), 2. we added the running times of our method for UCR data sets (Table 2), 3. we added a new classification experiment on graph data sets (Table 3), 4. we measured convergence rates for a point cloud in $R^3$ in homology degrees $0$, $1$, and $2$ (Figure 1). More details about the new graph classification experiment: the results we provide were obtained after cross-validating across several pairs and triplets of filtrations (including HKS, Ricci curvature and node degree), with the remaining parameters being the same as the ones we used for the UCR classification experiment. We compare S-CDRs to the Euler characteristic based multiparameter persistence methods ECP, RT, and HTn, introduced in `Euler characteristic tools for topological data analysis` (Hacquard, Lebovici, 2023). In order to also include non-topological baselines, we also compare against the state-of-the-art graph classification methods RetGK introduced in `Retgk: Graph kernels based on return probabilities of random walks` (Zhang et al, NeurIPS 2018), FGSD introduced in `Hunt for the unique, stable, sparse and fast feature learning on graphs` (Verma, Zhang, NeurIPS 2017), and GIN introduced in `How powerful are graph neural networks?` (Xu et al, ICLR 2019). 
These SOTA methods performed the best in the analysis of (Hacquard, Lebovici, 2023) so we decided to go with them and use the accuracies reported there. All methods are averaged over 10 folds with 90/10 training/test splits. Overall, one can see from Table 3 in the rebuttal pdf that results are competitive with both topological and non-topological baselines; S-CDRs perform even slightly better on COX2. We believe this is yet another indication that our method can be applied successfully in quite general settings involving multiparameter topology. Pdf: /pdf/3554b0bc040f14f197a4123304c558786b06cc61.pdf
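As a toy illustration of one of the graph filtration functions named in this rebuttal (node degree), a sublevel filtration can be built by letting each vertex enter at its degree and each edge enter once both endpoints are present, i.e. at the maximum of their degrees. The `degree_filtration` helper below is a hypothetical sketch, not code from the paper (which also cross-validates over HKS and Ricci curvature):

```python
def degree_filtration(num_vertices, edges):
    """Assign sublevel filtration values to a graph by node degree.

    Vertices enter at their degree; an edge enters when both of its
    endpoints are present, i.e. at the max of the two endpoint degrees.
    """
    deg = [0] * num_vertices
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    vertex_values = list(deg)
    edge_values = {(u, v): max(deg[u], deg[v]) for u, v in edges}
    return vertex_values, edge_values

# Path graph 0-1-2: the endpoints have degree 1, the middle vertex 2,
# so both edges only enter the filtration at value 2.
vv, ev = degree_filtration(3, [(0, 1), (1, 2)])
```

Feeding such vertex/edge values to a persistence routine (one filtration per node property) is one way the pairs and triplets of filtrations mentioned above could be assembled.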
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts
Accept (poster)
Summary: This paper proposes a benchmark for evaluating model robustness against structural distributional shifts. The authors propose three different metrics to split the datasets for the evaluation. In the experiments, the authors evaluate different models under the proposed distributional shifts and show that they can be quite challenging for existing graph models. Strengths: 1. This paper targets an understudied problem of evaluating model robustness and uncertainty estimation under structural distribution shift. 2. It presents a novel idea of using different evaluation metrics to create data splits for structural distribution shifts. 3. The authors evaluate different methods based on the proposed metrics and show some empirical findings. Weaknesses: 1. The authors consider less important nodes to be OOD nodes. However, real-world graphs follow the power-law distribution and only have a small number of nodes that have a high degree/node importance. So the proposed metric is kind of counterintuitive to me. 2. The experimental setup, particularly the equal splitting of in-distribution (ID) and out-of-distribution (OOD) data, raises concerns about the generalizability of the findings. Real-world scenarios typically involve a smaller proportion of OOD data, and investigating varying ratios of ID to OOD data would enhance the practical relevance of the evaluation. 3. Since the authors argue the existing methods only focus on feature distribution shift, while this paper only focuses on the structural distribution shift, it would be better to consider them together rather than separately evaluate them, as they often occur simultaneously and have interdependencies. 4. While the proposed metrics for creating data splits are new, the authors could provide more detailed justification and analysis for their choices. It would be beneficial to explore the sensitivity of the evaluation results to different metric configurations. 5. 
I have a concern about whether this paper should be submitted to the Datasets and Benchmarks track, since this paper doesn't focus on a technical contribution. To strengthen the paper, it would be beneficial to address these additional points and provide more comprehensive justifications, analyses, and experiments. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In addition to the above weaknesses, another question is (line 290-292), the authors write "... including DANN, CORAL, EERM, and Mixup, we use the experimental framework from the GOOD benchmark, whereas the remaining methods are implemented in our custom experimental framework ...", what does it mean? Why not just use the same experimental framework? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! We appreciate that you also consider the problem of evaluating robustness and uncertainty of graph models under structural distributional shift to be understudied, and that you find our idea to use structural node properties for inducing distributional shifts novel. Below, we address your concerns and questions. W1. If we understand correctly, you consider here another notion of OOD that is based on how many objects in the given dataset have a particular structural property and suggest that the observations with the rarest values of a structural property should be treated as OOD (for instance, as you mentioned, the nodes with high degree in the power-law distribution should belong to OOD). At the same time, rather than focusing on which proportions of nodes share some specific structural property, we try to construct OOD based on how labels might be assigned to the nodes in real-world applications (and in what order they become available for a graph model to extract knowledge from them). Even if there are many more unimportant nodes in the network, they can easily appear to be OOD for a graph model that has learned the structural patterns of the most important nodes that had been investigated first. So, your remark about the power-law distribution of node degrees in real-world networks is true, but it does not contradict our approach to modeling structural shifts. W2. Thank you for pointing this out. We believe that in most real-world scenarios, the presence of a large amount of labeled data is rather an exception. In our empirical study, we try to simulate such a situation when the lack of a sufficient amount of labeled data is combined with the presence of structural distributional shift. Nevertheless, we agree with the importance of investigating other ratios of ID and OOD data to understand whether our findings can be transferred to other cases. 
Because of that, we have conducted an additional series of experiments where we change the proportion of ID and OOD data, and we discuss them in our [general response](https://openreview.net/forum?id=nJFJcgjnGo&noteId=EvXe6dwLzG). W3. As you have mentioned, the problem of evaluating model robustness and uncertainty estimation under structural distribution shift is under-explored. In this work, we focus on this aspect and isolate ourselves from the task of simulating complex distributional shifts that are explicitly based on both the structural properties and the node features. It facilitates our study of existing graph models and investigation of various effects in their performance associated with the changes of graph structure. Please also note that graph structure is the only common modality of different graph datasets that can be explored in the same manner, whereas node features may vary significantly across different domains and applications. However, we have conducted additional analysis to show that our proposed structural shifts also induce the distributional shift in feature space of the considered graph datasets. Several figures demonstrating this effect can be found in our [general response](https://openreview.net/forum?id=nJFJcgjnGo&noteId=EvXe6dwLzG). As future research, it would be interesting to consider some techniques for inducing distributional shifts in graph data where the changes in graph structure and node features could be controlled simultaneously. W4. Our main idea is that data splits can be created based on various structural properties. Thus, one can potentially use any other node characteristics within our split strategy. In our experiments, we have chosen three particular options that are diverse and reasonable: please, refer to Section 3.2 for a detailed motivation of our choices. 
In Section 5.1, we investigate how challenging the proposed structural shifts are for some existing graph models in terms of their robustness and uncertainty estimation. Moreover, Appendix A demonstrates how these shifts affect other important graph properties, such as class balance, degree distribution, and pairwise node distances. Finally, in Appendices B and C, we discuss how our approach to create distributional shifts, along with three particular choices of structural node properties, allows us to create a more challenging setting for evaluating the performance of graph models, while complementing and extending the existing techniques. We hope that these sections make the analysis of the proposed structural properties fairly detailed and complete. However, if you have any ideas on how we can further improve this part of our study, could you please share your thoughts so we can adjust the paper accordingly? W5. Please refer to our [general response](https://openreview.net/forum?id=nJFJcgjnGo&noteId=EvXe6dwLzG). Q1. In this sentence, we clarify that the experiments with the methods for improving OOD robustness, where we measure the predictive performance in standard node classification, are conducted using the GOOD framework, as it provides all necessary functions. However, the experiments with uncertainty estimation methods, which are evaluated in OOD detection, required a substantially different set of functions (e.g., basic routines for predictions, architecture design, evaluation procedure, metrics, etc.), so we found it much easier to develop a custom experimental framework rather than extending that of the GOOD benchmark. --- Rebuttal Comment 1.1: Comment: Thanks for the response, some of my concerns have been addressed. I would like to consider raising my score if the following questions can be answered.
Regarding W2, I appreciate that the authors provide a new analysis on the setting with 70% ID, and 30% OOD, but 30% OOD is still a lot, if it is possible to show the prediction/detection performance for 5% or 10% OOD data? And what would be findings? For W3, the authors show that the structure distribution shifts can induce the feature distribution shift as shown in Appendix (Figure 1). It's a bit unclear to me by saying "reducing the original node feature space into the 2D space of t-SNE representations". Why the structural shift will affect the original node features? Or you use models such as GNN to learn the embeddings and then do T-SNE. My last concern is about the contribution/insights of this paper. This paper proposes multiple ways to create structure shifts and evaluate the performance of existing methods on it. We know it is quite challenging for existing methods, but I would expect the authors to propose a solution (could be straightforward) for solving the problem as well, or at least provide the insights or design principles for addressing the structural distribution shifts. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and involvement in the discussion! > … if it is possible to show the prediction/detection performance for 5% or 10% OOD data? And what would be findings? Following your suggestion, we have conducted an additional series of experiments where the ID to OOD ratio is 90% to 10%. Our findings are the following: The drop in predictive performance under the popularity-based and locality-based shifts reaches almost 7% instead of 3% in the previous setup (where we had 70% to 30% ratio) and 10% instead of 6%, respectively. 
This was expected and can be explained by the fact that graph models are now tested on the nodes with the most extreme structural properties (i.e., nodes with the lowest PageRank or clustering coefficient), and their inaccurate predictions are no longer compensated by the more accurate ones, which were made on the nodes with far less anomalous properties. At the same time, the performance drop on the locality-based shift appears to be no more than 1% on average. This result is quite reasonable, since now graph models have access to the whole variety of graph substructures and node features, which allows them to predict equally well for any graph region regardless of its distance to some particular node. The observed effects are interesting and also important to consider when using our approach for evaluating graph models. Regarding the OOD detection performance, we observe that results on the locality-based shift keep decreasing and reach 72 points, in contrast to 78 points in the previous setup (and this might happen for the same reason as discussed in our rebuttal above). At the same time, the detection metrics on the remaining structural shifts appear to be nearly the same (80 points instead of the previous 81 on the popularity-based shift) or even better on average (62 points instead of 56 on the density-based shift), which is consistent with the changes in predictive performance discussed above. As for the ranking of baseline models, it remains almost the same, with the simplest methods providing top performance on the majority of prediction tasks. > … It's a bit unclear to me by saying "reducing the original node feature space into the 2D space of t-SNE representations" We apologize for the misunderstanding caused by our response. In Figure 1, we have just applied t-SNE to the original node features (i.e., we did not use any supervised GNN embeddings) to visualize the distribution of multi-dimensional node features in 2D.
This figure shows that the considered structural shift not only affects the graph structure, but may also lead to a distributional shift in the node features, although the significance of this effect may vary between graph datasets. Our finding is natural, as node features and graph structure are correlated in real-world graphs. > … but I would expect the authors to propose a solution (could be straightforward) for solving the problem as well, or at least provide the insights or design principles for addressing the structural distribution shifts. Regarding your last question, we partially cover this in Section 5.3 and demonstrate that it is possible to make several simple modifications to the GNN model that help it deal with distributional shifts. Additionally, we have conducted several experiments with data augmentation techniques that were expected to improve the performance of graph models. In particular, we tried using DropEdge and concatenating the original node features with the structural node characteristics that correspond to particular distributional shifts. However, we have not achieved consistent improvements in these experiments. In summary, this is a very important question, but a principled solution to it is non-trivial and requires further investigation. We believe that our proposed method for creating distributional shifts will support studies in this area. We hope that our response answers your questions and we are open to further discussion on the raised points.
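The feature-space check described in this reply can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the `features` matrix and the `is_ood` mask are synthetic stand-ins for a dataset's raw node features and a structure-based ID/OOD split, and t-SNE is applied directly to the raw features (no GNN embeddings involved).

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy stand-in: 200 ID nodes and 100 slightly shifted OOD nodes in a 32-dim feature space.
features = np.vstack([rng.normal(0.0, 1.0, size=(200, 32)),
                      rng.normal(0.7, 1.0, size=(100, 32))])
is_ood = np.array([False] * 200 + [True] * 100)

# t-SNE runs on the original features to produce a 2D view; plotting the two
# groups in different colors would then reveal whether the split also shifts features.
emb2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(emb2d.shape)  # (300, 2)
```

In practice one would replace the synthetic arrays with the dataset's feature matrix and the split mask produced by the structural property of interest.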
Summary: The authors propose three domain selections, namely popularity, locality, and density, to equip node prediction datasets with structural distribution shifts. Extensive experiments are conducted to compare existing OOD generalization methods and OOD detection methods. Strengths: S1: The proposed distribution shifts are novel. S2: The paper is well written and easy to follow. S3: The evaluations are clear and sufficient. Weaknesses: W1: The title is misleading. Since this benchmark only focuses on node prediction tasks, the title should reflect it. As we know, structural distribution shifts are different in graph-level and node-level tasks. W2: The chosen shifts are somewhat limited because they are not extracted from real-world scenarios. W3: Line 39: "Also, the splitting strategies in GOOD are mainly based on the node features and do not take into account the graph structure" is somewhat misleading. I suggest that the authors clarify that "GOOD mainly focuses on node feature shifts in **node-level tasks**". Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1: Can you find a way to compare your proposed shifts with the real shifts? Q2: I would suggest submitting benchmark works to the Datasets and Benchmarks track, which offers more transparent evaluations to your benchmark. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Broader impacts are expected to be discussed. The licenses of the datasets and the code (GOOD) are expected to be mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! We appreciate your encouraging feedback about our work and address your comments below. W1. Thanks for this remark. Indeed, the notion of structural distributional shift might be significantly different in graph-level problems, so it is important to emphasize that our approach to induce structural shifts is primarily intended for node-level problems. We will change the title accordingly. W2. This concern is reasonable. Because of that, we devote a significant part of the text to the discussion of this limitation. In Section 3.2, we provide the motivation behind the proposed structural shifts — specifically, we describe how they can arise in real-world applications and how they can be modeled by our split strategies based on various structural node properties. Of course, having real distribution shifts for a particular application would be preferable, but our goal was to address situations in which such natural splits are unavailable. Moreover, in Appendix C, we show that the proposed structural shifts enable us to capture some properties of the distributional shifts between train and test subsets in realistic data splits of the OGB benchmark. Note that we outline our limitations in a separate paragraph of the conclusion. W3. Thanks for this comment, we will make the necessary clarification in the revised version of our paper. Q1. Yes, this is done to some extent in Appendix C, where we discuss how the chosen node-level characteristics that describe particular structural properties, including PageRank, PPR and clustering coefficient, correlate with the distributional shifts induced by the split strategies of the OGB benchmark. Unfortunately, there are not many examples of real distributional shifts in known graph benchmarks. Otherwise, we would not need such an approach that allows us to create data splits with synthetic structural shifts that replicate the properties of the real ones. Q2.
Please refer to our [general response](https://openreview.net/forum?id=nJFJcgjnGo&noteId=EvXe6dwLzG). --- Rebuttal Comment 1.1: Comment: Thank you for your replies. Most of my concerns are addressed. I also noticed that some of them cannot be addressed in this single work, so please list them and discuss them extensively in the limitations and future work. Although this is not a paper on the dataset and benchmark track, I have to clarify that my current score of 7 is partially based on the trust that the authors will be responsible for and responsive about their released code, which is *essential even though the authors claim that it is not a traditional benchmark paper*. Furthermore, I'm willing to defend my review: Although I acknowledge that the suggestions of Reviewer 2ynW are beneficial, many of the concerns raised are **not major** enough to justify rejecting this paper, since I don't think these concerns undermine the contributions and conclusions of this paper. Score: $6\rightarrow 7$: Accept. --- Reply to Comment 1.1.1: Comment: Thanks for your support! We plan to extend the discussion of future work based on your questions. Our code will be supported in our open source repository and also in the most popular frameworks for deep learning on graphs: PyG and DGL.
Summary: This paper provides a benchmark solution for producing diverse and meaningful distributional shifts from existing graph datasets. The experiments conducted demonstrate that the proposed distributional shifts can be difficult for existing graph models, and surprisingly, simpler models often outperform more sophisticated ones. The experiments also reveal a trade-off between the quality of learned representations for the base classification task and the ability to distinguish nodes from different distributions using these representations. Strengths: The proposed data-splitting approach is quite interesting. The overall presentation is quite easy to follow. Based on the provided data-splitting strategy, most graph/node classification methods may not achieve a good classification performance and uncertainty quantification quality. It is also nice to see the authors conduct many empirical studies to validate the idea and compare the difference between various baselines. Weaknesses: 1. The technical novelty is somewhat limited. It seems like the technical contribution is based on existing techniques (PageRank, PPR, ...). I would like to see more discussions of the rationale/motivation for choosing the specific metric. For example, the PageRank is selected because of the need to quantify the node popularity, but other node centrality-based metrics can also be used to quantify the node popularity. 2. The dataset size is also quite small. It is not clear that the lessons learned from these small networks can be extended to million-scale networks. 3. Some important baselines (e.g., GOOD: A Graph Out-of-Distribution Benchmark) are not included in the main text. I feel like this paper should be more suitable for the dataset and benchmark track. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I am not seeing limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! We appreciate that you find our approach to create data splits interesting and the presentation easy to follow. Below, we reply to your comments. W1. Our main contribution is an approach for creating data splits with distributional shifts based on various structural properties. For our experiments, we consider some particular examples of node characteristics (specifically, PageRank, PPR, and clustering coefficient) as they are quite diverse, widely adopted in the community, and related to situations that may arise in practice. However, as mentioned in Section 3.2, other node characteristics can also be used instead of the chosen ones (e.g., node degree as the measure of popularity or graph distance as the measure of locality). As for PageRank, we choose this metric because it is less discrete than node degree and also widely used in the literature. W2. We conduct our experiments on several graph datasets that have been used in most previous works and adopted in established graph benchmarks. They come from various domains and applications, while having different structure and number of node features. Although these datasets do not fall under the category of large-scale datasets, they still enable us to conduct the necessary experiments and support our claims about the performance of graph models under structural distributional shifts. However, if you feel that our discussion would be incomplete without some large-scale graph datasets, please let us know which of them we should include, and we will do our best to conduct additional experiments on them and include our results in the revised version. W3. Could you please explain what you mean by such baselines? 
* If you would like to see the evaluation of other methods for improving OOD robustness that have been considered in the GOOD benchmark and have some particular recommendations, please let us know the names of these methods so that we could test them on the proposed structural shifts and include the results in the revised version; * If your comment refers to the lack of comparison between our distributional shifts and those proposed in the GOOD benchmark, we can extend the main text with the content of Appendix B, which describes how our approach to create structural shifts extends GOOD and allows us to overcome some of their practical limitations. > I feel like this paper should be more suitable for the dataset and benchmark track. Please refer to our [general response](https://openreview.net/forum?id=nJFJcgjnGo&noteId=EvXe6dwLzG). --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledgement Comment: Thanks for the rebuttal. Part of my concerns have been addressed, and I would still like to raise my score from 5 to 6. However, my concerns about the network size and potential impact still remain. In terms of network size, large scale typically means millions of nodes, but the largest network used in this work is CoauthorPhysics (34,493 nodes, 495,924 edges), which is far from large-scale. --- Reply to Comment 1.1.1: Comment: Thanks for your positive feedback! Following your suggestion, we plan to conduct experiments on ogbn-products, an OGB dataset that contains more than 2M nodes, and include them in the revised version.
Summary: This paper proposes a general approach for inducing diverse distributional shifts based on graph structure and evaluates the robustness and uncertainty of graph models under these shifts. The authors define several types of distributional shifts based on graph characteristics, such as popularity and locality, and show that these shifts can be quite challenging for existing graph models. They also find that simple models often outperform more sophisticated methods on these challenging shifts. Additionally, the authors explore the trade-offs between the quality of learned representations for the base classification task and the ability to separate nodes under structural distributional shift. Overall, the paper's contributions include a novel approach for creating diverse and challenging distributional shifts for graph datasets, a thorough evaluation of the proposed shifts, and insights into the trade-offs between representation quality and shift detection. Strengths: Originality: The paper's approach for inducing diverse distributional shifts based on graph structure is novel and fills a gap in the existing literature, which has mainly focused on node features. The authors' proposed shifts are also motivated by real-world scenarios and are synthetically generated, making them a valuable resource for evaluating the robustness and uncertainty of graph models. Quality: The paper's methodology is rigorous and well-designed, with clear explanations of the proposed shifts and the evaluation metrics used. The authors also provide extensive experimental results that demonstrate the effectiveness of their approach and highlight the challenges that arise when evaluating graph models under distributional shifts. Clarity: The paper is well-written and easy to follow, with clear explanations of the proposed shifts and the evaluation methodology. The authors also provide helpful visualizations and examples to illustrate their points. 
Significance: The paper's contributions are significant and have implications for the development of more robust and reliable decision-making systems based on machine learning. The authors' approach for inducing diverse distributional shifts based on graph structure can be applied to any dataset, making it a valuable resource for researchers and practitioners working on graph learning problems. The insights into the trade-offs between representation quality and shift detection are also important for understanding the limitations of existing graph models and developing more effective ones. Overall, the paper's contributions have the potential to advance the field of graph learning and improve the reliability of machine learning systems. Weaknesses: Limited scope: The paper focuses solely on node-level problems of graph learning and does not consider other types of graph problems, such as link prediction or graph classification. This limited scope may restrict the generalizability of the proposed approach and its applicability to other types of graph problems. Synthetic shifts: While the authors' proposed shifts are motivated by real-world scenarios, they are synthetically generated, which may limit their ability to capture the full complexity of real distributional shifts. The paper acknowledges this limitation, but it is still worth noting that the proposed shifts may not fully reflect the challenges that arise in real-world scenarios. Limited comparison to existing methods: While the paper provides extensive experimental results that demonstrate the effectiveness of the proposed approach, it does not compare the proposed approach to other existing methods for evaluating graph models under distributional shifts. This limits the ability to assess the relative strengths and weaknesses of the proposed approach compared to other approaches. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Here are some questions and suggestions for the authors: Can the proposed approach be extended to other types of graph problems, such as link prediction or graph classification? If so, how might the approach need to be modified to accommodate these different types of problems? How might the proposed approach be adapted to handle real-world distributional shifts, rather than synthetically generated shifts? Are there any limitations to the proposed approach that might make it less effective in handling real-world shifts? How does the proposed approach compare to other existing methods for evaluating graph models under distributional shifts? Are there any specific strengths or weaknesses of the proposed approach compared to other approaches? The paper notes that there is a trade-off between the quality of learned representations for the target classification task and the ability to detect distributional shifts using these representations. Can the authors provide more details on this trade-off and how it might impact the effectiveness of the proposed approach in different scenarios? The paper proposes several different types of distributional shifts based on graph structure. Can the authors provide more details on how these shifts were chosen and whether there are other types of shifts that might be relevant for evaluating graph models? The paper focuses on evaluating graph models under distributional shifts, but does not provide any guidance on how to modify existing models to improve their robustness to these shifts. Can the authors provide any suggestions or guidelines for modifying existing models to improve their performance under distributional shifts? The paper notes that the proposed approach can be applied to any dataset, but does not provide any guidance on how to choose the appropriate node property to use as a splitting factor. 
Can the authors provide any suggestions or guidelines for choosing an appropriate node property for a given dataset? Overall, the paper presents an interesting and novel approach for evaluating graph models under distributional shifts. However, there are several areas where the authors could provide more details or guidance to help readers better understand and apply the proposed approach. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper briefly acknowledges some of the limitations of the proposed approach, such as the fact that the synthetic shifts may not fully reflect the complexity of real-world distributional shifts. However, the paper does not provide a detailed discussion of the potential negative societal impact of the work. Given the technical nature of the paper, it is possible that the authors did not see a direct connection between their work and potential negative societal impacts. However, it is always important for authors to consider the broader implications of their work, especially in fields like machine learning where there is a growing awareness of the potential risks and harms associated with these technologies. In future work, the authors could consider providing a more detailed discussion of the potential societal impacts of their work, including any ethical or social considerations that may arise from the use of their approach. This could help to ensure that the work is being developed and applied in a responsible and ethical manner. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! We appreciate your support and address your comments below (the order of questions is preserved). Q1. In graph-level problems, the notion of structural distributional shift might be significantly different and depend on particular applications. In contrast, many node-level and edge-level problems have a lot in common in terms of what a plausible structural shift may look like. For instance, in link prediction problems, one may have to train their model on the edges that connect very popular nodes to unpopular ones, which is a very common and meaningful situation in real-world scenarios, and then test on the edges between mostly unpopular nodes. The same sort of reasoning can be done for the structural shifts based on locality, where one has access only to some local region of a graph and needs to predict edges between the nodes of a distant region. Thus, our proposed structural shifts can be easily transferred to evaluating the robustness and uncertainty of graph models in edge-level problems. Q2. Our approach is designed specifically for situations in which real-world distributional shifts are not available. In such cases, synthetically generated shifts can serve as a reasonable approximation. In Appendix C, we show that the proposed structural shifts enable us to capture some properties of real-world ones. Q3. Please refer to Appendix B, where we discuss how our method for creating data splits with structural shifts extends the GOOD benchmark, and how it allows us to overcome some of their practical limitations. Moreover, in Appendix C, we show that the proposed structural shifts enable us to capture some properties of the distributional shifts between train and test subsets in realistic data splits of the OGB benchmark. If these sections do not cover some important aspects of comparison, please let us know. Q4. 
We note that our approach for generating distributional shifts is orthogonal to the problem of developing models capable of dealing with them. We hope that our diverse and complex structural shifts will help researchers in investigating this trade-off and developing models that are both robust to distributional shifts and able to detect them. Q5. This question is covered in Section 3.2, where we discuss the motivation behind our proposed structural shifts and how they can arise in real-world applications. In addition, we mention there some other examples of node-level characteristics that can be used to describe the chosen structural properties, such as node degree as the measure of popularity and graph distance as the measure of locality. The structural shifts considered in our paper are intuitive and practically motivated. However, any node characteristic can be used for splitting, and also different characteristics can potentially be combined to create more complicated shifts. Q6. In Section 5.3, we partially cover this question and demonstrate that it is possible to make several modifications to the GNN model which help it to deal with distributional shifts. However, a more principled solution to this question is non-trivial and requires further investigation. We hope that our proposed method for creating distributional shifts will support studies in this area. Q7. In fact, the choice of some structural node property as a splitting factor might depend significantly on a particular application. In our paper, we consider quite diverse node characteristics that have different motivations behind them and impact on the performance of graph models, so we recommend using all of them to conduct experiments and aggregate the obtained results for making final decisions.
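The property-based splitting idea discussed in Q5 and Q7 can be roughly illustrated with a small sketch. This is an assumption-laden toy (using a stand-in graph from NetworkX), not the paper's released split code: nodes are ranked by PageRank and the top half is treated as ID ("popular") while the bottom half becomes OOD.

```python
import networkx as nx

G = nx.karate_club_graph()  # small stand-in graph (34 nodes)
pr = nx.pagerank(G)

# Nodes with the highest PageRank are treated as ID, the rest as OOD;
# the 50/50 ratio here mirrors one of the ratios discussed in the thread.
ranked = sorted(G.nodes, key=lambda v: pr[v], reverse=True)
split = len(ranked) // 2
id_nodes, ood_nodes = set(ranked[:split]), set(ranked[split:])
print(len(id_nodes), len(ood_nodes))  # 17 17
```

Swapping PageRank for personalized PageRank or the clustering coefficient would give locality-based or density-based variants of the same split strategy.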
Rebuttal 1: Rebuttal: **General response** We would like to thank all the reviewers for their valuable feedback and suggestions! In this general response, we address questions raised by several reviewers and describe additional experiments we conducted. **Additional experiments** As requested by Reviewer 2ynW, we extend our empirical study with a greater ID-to-OOD ratio (70% to 30% instead of 50% to 50% as in the paper) and provide the results in the attached PDF. Let us revisit our analysis of the proposed structural shifts. As can be seen in Table 1, the results in both OOD robustness, which is evaluated in the node classification task, and OOD detection, which is done by means of uncertainty estimation, have not changed much compared to the previous setup — the considered graph models show the average drop in performance of 3% on the popularity-based shift and 6% on the density-based shift, while the OOD detection performance remains at nearly 81 and 56 points, respectively. Despite the fact that the graph models now have access to more diverse structures in terms of popularity and local density, making decisions about unimportant and sparsely surrounded nodes remains as difficult as before. Interestingly, the drop in predictive performance on the locality-based shift appears to be less significant compared to the original setup and reaches only 5% on average instead of the previous 15%. At the same time, the OOD detection performance on this structural shift drops to 78 points, whereas it was 85 points in the original setup. These results also match our intuition — as the distance between the ID and OOD nodes decreases, the OOD samples become less distinguishable from the ID ones. This makes the OOD detection performance drop on average, while the gap between ID and OOD metrics in standard node classification disappears. Now we may compare the existing graph models based on the results in Table 2. 
As can be seen, the ranking of methods remains almost the same as in the original setup. Specifically, the simplest data augmentation technique, Mixup, often shows the best performance in OOD robustness, while DE provides the second-best results on most structural shifts. As for OOD detection, the methods based on the entropy of the predictive distribution again outperform the Dirichlet ones, and the uncertainty estimates produced by DE are best correlated with the OOD examples. Thus, our observations regarding the performance of graph models are consistent with those reported in the paper. **Additional analysis** As a response to Reviewer 2ynW, we have also conducted a brief analysis of the distributional shift in feature space induced by the proposed structural shift. In particular, we used t-SNE to embed the original feature space of the considered graph datasets into a 2D representation space. In Figure 1 of the attached PDF, one can see the examples of such a visualization for the locality-based structural shift that also creates a notable distributional shift in node features. It supports our reasoning in Appendix B that a realistic shift most commonly impacts both node features and graph structure. In the revised version of our paper, we will include the figures for other graph datasets and shifts. **On submission track** Several reviewers expressed their doubts about whether our work should be at the main conference track or at the Datasets and Benchmarks (DB) track. Let us explain our choice. According to the DB track description, its main purpose is to present new datasets to the community, with a special focus on their maintenance and accessibility via a properly designed API. Among other things, a submission to the DB track requires information about how the data is collected and organized, what kind of information it contains, how it should be used ethically and responsibly, etc. 
On the contrary, our work is about a new approach to induce structural distributional shifts that can be applied to any graph dataset. In other words, our work presents a way of looking at and dealing with data, rather than a new source of data. Further, according to the call for papers, our work fits the scope of the main track. It is written in the conference FAQ that if a paper fits both tracks, it remains the authors' choice where to submit it. Taking into account the above arguments, we decided to submit to the main track. **Broader impact** As requested by the reviewers, we will add a discussion of broader impact to our paper. In particular, we assume that the proposed approach for evaluating robustness and uncertainty of graph models will support the development of more reliable systems based on machine learning. By testing on the presented structural shifts, it should be easier to detect various biases against under-represented groups that may have a negative impact on the resulting performance and interfere with fair decision-making. Pdf: /pdf/7a03d43b33d0e0ee42c2ad56dbf71ad764d66e45.pdf
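The feature-space analysis described in this rebuttal (embedding node features into 2D with t-SNE and comparing ID against OOD groups) can be sketched as follows. This is a minimal illustration on synthetic features; the data, dimensions, and the ID/OOD split are assumptions for the example, not the paper's datasets.

```python
# Hedged sketch: visualizing an ID/OOD feature shift with t-SNE.
# The features here are synthetic; the rebuttal applies the same idea to
# real graph-dataset node features split by a structural shift.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
id_feats = rng.normal(loc=0.0, scale=1.0, size=(35, 16))   # "in-distribution" nodes
ood_feats = rng.normal(loc=3.0, scale=1.0, size=(15, 16))  # "out-of-distribution" nodes
features = np.vstack([id_feats, ood_feats])

# Embed the 16-D features into a 2-D representation space for plotting.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(emb.shape)  # (50, 2)
```

In a plot of `emb`, a clearly separable OOD cluster corresponds to the "distributional shift in node features" the rebuttal reports for the locality-based shift.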
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
PromptRestorer: A Prompting Image Restoration Method with Degradation Perception
Accept (poster)
Summary: This paper introduces PromptRestorer, a method for image restoration that effectively utilizes raw degradation features to guide the restoration process. PromptRestorer consists of two branches: a restoration branch and a prompting branch. The restoration branch restores the images, while the prompting branch perceives degradation priors and provides reliable content to guide the restoration process. Experimental results demonstrate that PromptRestorer achieves state-of-the-art performance on four image restoration tasks: deraining, deblurring, dehazing, and desnowing. Strengths: - The idea of utilizing intermediate-layer features of pre-trained models to enhance the effectiveness of image restoration is innovative. - The experimental validation tasks and comparison methods employed in this study are comprehensive and diverse. - The comparative figure (Figure 1) between the proposed method and previous approaches is presented in a clear manner, highlighting the distinctive aspects of the framework. Weaknesses: - The motivation behind the introduction of the prompting degradation perception modulator is not clearly articulated in the paper. The authors fail to thoroughly analyze the existing deficiencies of previous methods and instead claim that the incorporation of prompting can lead to more accurate degradation priors or more reliable perceived content learned from the degradation priors. However, these descriptions lack quantifiable metrics and robust support from relevant literature. Despite the inclusion of visualizations depicting feature maps in the experimental section, there remains ambiguity regarding the definition of better degradation priors. - Similarly, I am confused regarding the motivation behind the introduction of local prompting attention and global prompting attention. It remains unclear to what extent these components address existing shortcomings in the framework. 
Furthermore, the results of the ablation experiments indicate relatively minor improvements resulting from the incorporation of these components. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is a more accurate degradation prior? Can we measure it? - What are the motivations behind the introduction of the key modules, and which issues do they address in previous methods? - It appears that the network parameter count and FLOPs of this method may increase significantly compared to those of the Case 1 and Case 2 methods. Is there more detailed comparative data available in this regard? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - The performance improvement of this method appears to be relatively modest compared to some previous approaches, and it remains unclear what the magnitude of the associated numerical variances is. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: What is a more accurate degradation prior? Can we measure it? A: The most accurate degradation prior should be the degraded image itself. In feature space, better degradation priors should better represent the features of the degraded image. We note that VQGAN can reconstruct the input images and thus can accurately represent the features of the input images [1]. Hence, we use the pre-trained VQGAN to accurately extract degradation features as the degradation priors. Fig. 8 (GT features and Case 3) clearly shows that the pre-trained VQGAN can better preserve the features of the input images. [1] Gu et al. VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder. ECCV 2022. Q: What are the motivations behind the introduction of the key modules and the issues they address in previous methods? I am confused regarding the motivation of local prompting attention and global prompting attention. A: Our method is motivated by the observation that existing image restoration models usually suffer from degradation vanishing, as shown in Cases 1-2 of Fig. 8, as the model parameters are optimized toward producing clean content. However, the degradation content is critical for image restoration, as it implies important cues for better recovery. To solve this problem, we propose to use raw degradation features to overcome degradation vanishing, which helps the networks constantly perceive the degradation priors to facilitate better recovery. To better perceive the degradation, we propose G2P and L2P to perceive the degradation from global and local perspectives, respectively, enabling them to generate more useful content to guide the restoration branch. To control the propagation of the perceived content, we propose the GDP, enabling the restoration network to adaptively learn more useful features for better restoration. Our global perception attention consists of Q-InAtt and KV-InAtt. 
The Q-InAtt considers re-forming the query vector induced by degradation features to build a representative query to perform attention, while the KV-InAtt re-considers key and value vectors induced by other degradation counterparts to search for more similar content with the restoration query. The local perception modulator contains Deg-InBan and Res-InBan. The former exploits the degradation features to induce spatially useful content from restoration content to guide restoration gating fusion, while the latter utilizes the deep restoration features to induce more useful features from another degradation counterpart to form the degradation gating. The previously common fusion method SFT usually considers only spatial feature transformation in a single direction, i.e., from conditions to restoration fusion at the pixel level, while neglecting the global perspective. We consider both local and global transformation by re-forming attention and local modulation with bidirectional interaction, i.e., from restoration to degradation and from degradation to restoration, to guide restoration. Q: It appears that the network parameter count and FLOPs of this method may increase significantly compared to those of the Case 1 and Case 2 methods. Is there more detailed comparative data available in this regard? A: The learnable parameters of our method are fewer than those of the model with learnable conditional modulation (Case 2), as shown in Tab. 5. This is because we use a pre-trained model to extract raw degradation information, which remains frozen when training the restoration models. However, our method still performs better than the model with learnable conditional modulation. Compared with Case 1, which does not have a conditional branch, our model (Case 3) and the model with learnable conditional modulation (Case 2) naturally contain more parameters and FLOPs. 
Q: The improvement of this method appears to be relatively modest compared to some previous approaches. A: Our method consistently achieves better performance on $4$ image restoration tasks on various benchmarks, which demonstrates the effectiveness of our method. Q: The ablation indicates relatively minor improvements resulting from the incorporation of these components. A: Our experiments demonstrate that each component is effective for restoration. Note that our method with fewer parameters performs better than the model with learnable conditional modulation, which validates our key contribution that using raw degradation content is more effective than learnable conditions. --- Rebuttal Comment 1.1: Title: Reviewer reply Comment: After considering the response provided in the rebuttal and taking into account the feedback from other reviewers, I recognize the novelty of this paper. The authors highlight that utilizing raw degradation is more impactful compared to the current approach of learnable conditional modulation, offering a different perspective in this research area. So, I have raised my rating.
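The Q-InAtt/KV-InAtt mechanism described in this rebuttal can be illustrated with a minimal single-head cross-attention sketch: the query is induced by degradation features while keys and values come from the restoration branch. All shapes, the projection matrices, and the single-head formulation here are illustrative assumptions, not the paper's exact module.

```python
# Hedged sketch of degradation-induced attention (Q-InAtt-style): the query
# is formed from frozen degradation features, and keys/values come from the
# restoration features. Shapes and projections are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_tokens, dim = 8, 16
restoration = rng.normal(size=(n_tokens, dim))  # restoration-branch features
degradation = rng.normal(size=(n_tokens, dim))  # frozen degradation features

Wq, Wk, Wv = (rng.normal(size=(dim, dim)) for _ in range(3))

q = degradation @ Wq   # query induced by degradation features
k = restoration @ Wk
v = restoration @ Wv

attn = softmax(q @ k.T / np.sqrt(dim))  # (n_tokens, n_tokens) attention map
out = attn @ v                          # degradation-guided restoration features
print(out.shape)  # (8, 16)
```

The symmetric KV-InAtt variant would instead derive `k` and `v` from the degradation counterpart and `q` from the restoration features, giving the bidirectional interaction the rebuttal describes.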
Summary: In this paper, the authors find that the degradation vanishes in the learning process of the image restoration problem, which further hinders the model's capacity. To solve this problem, this paper proposes the PromptRestorer, a prompting image restoration method that explores the raw degradation features extracted by a pre-trained model from the given degraded observations to guide the restoration process. The authors demonstrate that the proposed guidance manner is better than both the encoder-decoder model without any guidance and the model with learnable guidance. The experiments on image deraining, deblurring, dehazing, and desnowing demonstrate the superiority of the proposed algorithm. Strengths: 1, The method is simple and effective and is easy to follow. The writing is professional and easy to understand. 2, The proposed Prompting Degradation Perception Modulator, including the global prompting perceptor and local prompting perceptor, is novel and effective. 3, The proposed Gated Degradation Perception Propagation is simple and meaningful. 4, This paper clarifies a new perspective: the raw degradation features are better than the learnable features usually utilized in former methods. 5, The extensive experiments on various image restoration tasks demonstrate the generalization of the proposed algorithm. Weaknesses: 1, Although this paper clarifies many strengths of the method, what’s the limitation? The author should present it in this paper. 2, The authors should give a more detailed explanation of the proposed global prompting perceptor and local prompting perceptor to facilitate better understanding. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors are suggested to discuss the limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: Although this paper clarifies many strengths of the method, what’s the limitation? The author should present it in this paper. A: Since our method needs to utilize a pre-trained model to extract degraded priors, it consumes more parameters and FLOPs compared with the model without any conditional modulation (i.e., Case 1), as illustrated in Tab. 5. However, we demonstrate that using raw features to guide restoration produces better performance than previous widely-used learnable conditional modulation (Case 2) while consuming fewer learnable parameters. Q: The authors should give a more detailed explanation of the proposed global prompting perceptor and local prompting perceptor to facilitate better understanding. A: The G2P fully exploits the self-attention mechanism to form the global prompting attention induced by the degraded features. The G2P contains the global perception attention followed by an improved ConvNeXt. Our global perception attention consists of 1) Query-Induced Attention (Q-InAtt) and 2) Key-Value-Induced Attention (KV-InAtt). The Q-InAtt considers re-forming the query vector induced by degradation features to build a representative query to perform attention, while the KV-InAtt re-considers key and value vectors induced by other degradation counterparts to search for more similar content with the restoration query. The L2P adequately considers the pixel-level degradation perception, enabling it to better perceive degradation from spatially neighboring pixel positions. The L2P consists of a local perception modulator followed by a separable depth-level convolution. The local perception modulator contains two core components: 1) Degradation-Induced Band (Deg-InBan) and 2) Restoration-Induced Band (Res-InBan). 
The former is achieved by exploiting the degradation features to induce spatially useful content from restoration content to guide restoration gating fusion, while the latter utilizes the deep restoration features to induce more useful features from another degradation counterpart to form the degradation gating. --- Rebuttal Comment 1.1: Title: Reviewer reply Comment: Thanks for the efforts in the response. All my concerns have been addressed.
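The Gated Degradation Perception Propagation (GDP) discussed across these reviews — adaptively controlling how much of the perceived degradation content flows into the restoration branch — can be sketched with a simple element-wise sigmoid gate. The gating function, shapes, and residual form are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of gated propagation: a sigmoid gate, computed from the
# perceived content, decides element-wise how much of it is added to the
# restoration features. The specific gating function is an assumption.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 4, 8))       # restoration features (H, W, C)
perceived = rng.normal(size=(4, 4, 8))  # content perceived from degradation priors

gate = sigmoid(perceived)        # gate values lie strictly in (0, 1)
out = feat + gate * perceived    # gated propagation into the restoration branch
print(out.shape)  # (4, 4, 8)
```

Because the gate is learned from the perceived content itself, the restoration branch can suppress unhelpful degradation cues while keeping useful ones, which matches the "adaptively learn more useful features" phrasing in the rebuttal.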
Summary: This paper proposes the PromptRestorer by taking advantage of prompting learning for image restoration. The authors incorporate raw degradation features into restoration, which consistently retains the degradation priors to facilitate better restoration. To better perceive degradation, the authors propose the Global Prompting Perceptor and Local Prompting Perceptor. To control the propagation of the perceived features, the authors propose gated degradation perception propagation, enabling the model to adaptively learn more useful features for better image restoration. Strengths: 1, Good writing. The paper shows excellent presentation, clearly presents the disadvantages of former methods, and solves them better by introducing the prompting strategy. 2, Clear motivation. To solve the problem of degradation vanishing in image restoration (Fig. 1 and 8), the authors introduce Prompting Degradation Perception by extracting the raw degradation features to directly guide the image restoration process. The experiments (Tab. 5) show that the proposed learning strategy is better than the former methods. 3, Interesting module. The proposed Prompting Degradation Perception Modulator is interesting and the Gated Degradation Perception Propagation is simple. Weaknesses: 1, The authors can give a clearer understanding of the degradation priors guidance. Why does it work better than the model with a learnable conditional branch? 2, Although the Prompting Degradation Perception Modulator, including the Global Prompting Perceptor and Local Prompting Perceptor, is interesting, it would be better if the authors could give a more direct explanation of them, as the two modules seem to be the key contributions of this work. 3, The paper contains some typos. For example, in line 89, knew->known. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the questions in Weaknesses Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do not present the limitations in the paper, which are suggested to add them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: The authors can give a clearer understanding of the degradation priors guidance. Why does it work better than the model with a learnable conditional branch? A: The learnable nature of the conditional network in these models does not effectively provide degradation information for the restoration network: since the parameters in the learnable conditional branch are optimized during the learning process, the features gradually become clear and thus cannot effectively provide the restoration network with degradation priors. In contrast, we directly exploit the raw degraded features extracted by a pre-trained model from the degraded inputs to generate more reliable prompting content to guide image restoration. Raw degraded features accurately preserve degradation information, which can consistently prompt the restoration network with accurate degradation priors, enabling the restoration network to perceive the degradation for better restoration. Q: Although the Prompting Degradation Perception Modulator including the Global Prompting Perceptor (G2P) and Local Prompting Perceptor (L2P) is interesting, it would be better if the authors could give a more direct explanation of them, as the two modules seem to be the key contributions of this work. A: The G2P fully exploits the self-attention mechanism to form the global prompting attention induced by the degraded features. The G2P contains the global perception attention followed by an improved ConvNeXt. Our global perception attention consists of 1) Query-Induced Attention (Q-InAtt) and 2) Key-Value-Induced Attention (KV-InAtt). The Q-InAtt considers re-forming the query vector induced by degradation features to build a representative query to perform attention, while the KV-InAtt re-considers key and value vectors induced by other degradation counterparts to search for more similar content with the restoration query. 
The L2P adequately considers the pixel-level degradation perception, enabling it to better perceive degradation from spatially neighboring pixel positions. The L2P consists of a local perception modulator followed by a separable depth-level convolution. The local perception modulator contains two core components: 1) Degradation-Induced Band (Deg-InBan) and 2) Restoration-Induced Band (Res-InBan). The former exploits the degradation features to induce spatially useful content from restoration content to guide restoration gating fusion, while the latter utilizes the deep restoration features to induce more useful features from another degradation counterpart to form the degradation gating. Q: The paper contains some typos. For example, in line 89. A: Thanks for your careful reading. We will fix them in the revised paper. --- Rebuttal 2: Title: Good paper, suggest to accept. Comment: After thoroughly reviewing the authors' feedback, I find that most of my questions have been adequately addressed. Thank you for your efforts.
Summary: This paper proposes a prompting image restoration method with degradation perception. The authors show that raw degradation features can effectively guide deep restoration models, providing accurate degradation priors to facilitate better restoration. To perceive the degradation, the authors propose a prompting degradation perception modulator. To control the propagation of the perceived content for the restoration branch, the authors propose gated degradation perception propagation. Strengths: + This paper is professionally written and well organized. + The proposed prompting image restoration method is interesting, clearly demonstrating that the raw degradation features are better guidance than the learnable parameters used in previous methods. + The proposed prompting degradation perception modulator is novel. It has potential applications in later feature fusion designs. + The gated degradation perception propagation seems to be useful. Weaknesses: 1, The authors claim that the pre-trained model is the same as the learnable conditional network. Hence, the FLOPs in Tab. 5 should be the same. 2, The authors should explain why the raw degradation features are better than features from a learnable-parameter network, since more learnable parameters should give better results within a similar network framework. 3, SFT is a commonly used feature fusion module; is the proposed PromptDPM better than that? What is the result if the PromptDPM is replaced with SFT? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: see weakness Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: The author claims that the pre-trained model is the same as the learnable conditional network. Hence, the FLOPs in Tab. 5 should be the same. A: Thanks for your reminder. The reported FLOPs of Case 3 in Tab. 5 are for the restoration network. We will correct it in the revised paper. Q: The authors should explain why the raw degradation features are better than features from the learnable-parameter network, since more learnable parameters should have better results within a similar network framework. A: The learnable nature of the conditional network in these models does not effectively provide degradation information for the restoration network. As the parameters are optimized in the learning process, the features gradually become clear, which cannot effectively guide the restoration networks. In contrast, we directly exploit the raw degraded features extracted by a pre-trained model from the degraded inputs to generate more reliable prompting content to guide image restoration. Raw degraded features accurately preserve degradation information, which can consistently prompt the restoration network with accurate degradation priors, enabling the restoration network to perceive the degradation for better restoration. Tab. 5 clearly shows that using the raw degraded features to guide restoration produces 0.369dB PSNR gains over the model with a learnable conditional branch. Q: SFT is a commonly used feature fusion module, is the proposed PromptDPM better than that? What is the result if we replace the PromptDPM with SFT? A: We compare the SFT with our PromptDPM by replacing the PromptDPM with SFT in our model. The PSNR results are summarised as follows: SFT: 30.699; Ours: 31.015. Our PromptDPM improves by 0.316dB PSNR compared with the commonly used SFT module, which further demonstrates the effectiveness of the proposed PromptDPM. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal, I would like to keep my rating.
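The SFT baseline compared against PromptDPM above is a standard spatial feature transform: a condition branch predicts per-element scale and shift applied to the restoration features. A minimal NumPy sketch follows; the shapes are illustrative, and simple linear maps stand in for the convolutions used in practice.

```python
# Hedged sketch of SFT-style conditional modulation: the condition branch
# predicts a per-element scale (gamma) and shift (beta) for the restoration
# features. Linear maps stand in for the convolutions used in real SFT.
import numpy as np

rng = np.random.default_rng(0)
h, w, c = 4, 4, 8
feat = rng.normal(size=(h, w, c))  # restoration features
cond = rng.normal(size=(h, w, c))  # condition (e.g., degradation) features

Wg = rng.normal(size=(c, c)) * 0.1
Wb = rng.normal(size=(c, c)) * 0.1

gamma = cond @ Wg                  # predicted scale
beta = cond @ Wb                   # predicted shift
out = feat * (1.0 + gamma) + beta  # spatial feature transform
print(out.shape)  # (4, 4, 8)
```

As the rebuttal notes, this modulation runs in a single direction (condition to restoration) at the pixel level, which is the limitation the bidirectional PromptDPM is claimed to address.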
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes PromptRestorer for image restoration, addressing the issue of degradation vanishing in existing methods. Unlike previous approaches that do not explicitly consider degradation information or fail to effectively model degradation priors, PromptRestorer leverages the raw degraded features extracted from pre-trained models to generate reliable prompting content. The contributions of the work include PromptRestorer, the first approach to leverage prompting learning for general image restoration by considering raw degradation features, overcoming degradation vanishing while consistently retaining degradation priors. Additionally, the proposed prompting degradation perception modulator and gated degradation perception propagation enhance the restoration process by providing more reliable perceived content and controlling feature propagation. Extensive experiments show the superiority of the proposed method. Strengths: - The paper introduces a new approach called PromptRestorer for image restoration, which addresses the issue of degradation vanishing. The integral designs take advantage of prompting learning and explicitly consider degradation information, making it a unique and innovative contribution to the field. - The proposed method effectively perceives and retains degradation priors by leveraging raw degraded features and designing a prompting degradation perception modulator, resulting in improved performance compared to existing methods. - This comprehensive framework ensures that all aspects of restoration, including generating reliable prompting content and controlling feature propagation. - The paper provides a detailed analysis of the proposed method and the experimental results demonstrate the effectiveness of the PromptRestorer and achieve better restoration outcomes. 
Weaknesses: - The main problem, **degradation vanishing**, is not well defined and discussed in this paper, leading to a confusing and distracting presentation of what problem the authors are trying to solve. In Section 4.3 the authors claim sharper details in features fail to provide sufficient degraded information. Why is this the case? Is there any evidence that sharper features of the conditional branch would deteriorate the restoration performance? Overall, the analysis of degradation vanishing is insufficient. - The core idea of this paper seems like another attempt to utilize prior information from pre-trained large models (*e.g.*, StyleGAN, DDPM, CLIP) to facilitate restoration, which has long been explored [1,2,3,4] in the image restoration community, though the designs and technical details are rather different. Could the authors further elaborate on the main differences from previous works? - Since it is claimed that PromptRestorer can consistently retain the degradation priors (L62), the reviewer would expect better adaptation of this model to various degradations. Hence, it would be better if the authors could show how well this algorithm performs on different degradations (*e.g.*, various sizes of blur kernels, various levels of noise). [1] Chan, Kelvin CK, et al. "Glean: Generative latent bank for large-factor image super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [2] Lugmayr, Andreas, et al. "Repaint: Inpainting using denoising diffusion probabilistic models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [3] Kim, Geonung, et al. "Bigcolor: colorization using a generative color prior for natural images." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [4] Chen, Chaofeng, et al. "Real-world blind super-resolution via feature matching with implicit high-resolution priors." 
Proceedings of the 30th ACM International Conference on Multimedia. 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations and broader societal impacts are not discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: The main problem ''degradation vanishing'' is not well defined and discussed in this paper, leading to a confusing and distracting presentation. A: ''Degradation vanishing'' means that the degradation features gradually vanish and become sharper features during the learning process of restoration networks. It is a straightforward phenomenon in most restoration networks, as the model parameters are optimized toward producing clearer content, as shown in Cases 1-2 of Fig. 8. We show that using the previously widely used learnable conditional modulation to guide restoration models is worse than our solution, which directly provides raw degradation information to deep restoration models (see Fig. 1, Tab. 5, and Cases 2-3 in Fig. 8). Hence, we introduce and illustrate degradation vanishing by visualization in Fig. 8, solve it by providing raw degradation features to restoration networks, discuss and compare it with previous related works in Tab. 5, and show that the proposed solution is simpler and more effective. Q: In Sec. 4.3 the authors claim sharper details in features fail to provide sufficient degraded information. Why is this the case? Is there any evidence that sharper features of the conditional branch would deteriorate the restoration performance? A: Sharper details contain less degradation content because the model parameters are optimized during the learning process of restoration networks. In contrast, degraded features can provide the restoration networks with accurate degradation priors, so that the restoration networks can perceive degradation content to guide the restoration process for better recovery. The visualization of Case 2 in Fig. 8 reveals that the learnable conditional branch learns sharper features, which thus cannot effectively provide the restoration network with effective degradation priors. Tab. 5 shows that the learnable conditional manner deteriorates the restoration performance, where our method produces 0.369dB gains. Moreover, Fig. 
1(b) shows that Case 2 has better performance in early iterations (about 25K iterations) but deteriorates in later iterations compared with our method: Case 2 exhibits sharper results in later iterations, while our method can constantly provide the restoration branch with raw degradation information, which helps better recovery. Our visualization and quantitative results support that using a learnable conditional branch to produce sharper features is not more effective than our method of using raw degradation to guide restoration networks. Q: The core idea seems like another attempt to utilize prior information from pre-trained large models to facilitate restoration, which has long been explored [1,2,3,4]. Could the authors further elaborate on the main differences from previous works? A: GLEAN utilizes pre-trained GANs to generate a latent bank, which produces ''generative priors'' to embed into the encoder-decoder framework for image SR. Repaint employs a pre-trained unconditional DDPM as the ''generative prior'' to condition the generation process by sampling from the given pixels during the reverse diffusion iterations. BigColor also adopts a ''generative color prior'' to guide image colorization. FeMaSR utilizes the VQGAN to generate the codebook of clean images to supervise the networks. Different from all previous methods that utilize generative priors or codebooks of clean images, our method directly utilizes the ''degradation information'' of the degraded image itself to guide restoration. We note that, to date, there is NO literature exploring the original degraded features to guide image restoration. We validate that raw degradation content extracted from the degraded image itself is more effective than previous commonly used methods, which usually use a learnable conditional branch to guide image restoration. 
In short, we show that a non-learnable condition can outperform learnable condition methods. Q: As it is claimed that PromptRestorer can consistently retain the degradation priors (L62), the reviewer would expect better adaptation of this model to various degradations. Hence, it would be better if the authors could show how well this algorithm performs on different degradations (e.g., various sizes of blur kernels, various levels of noise). A: First, blur kernels or noise levels are not available for practical images. Second, estimating blur kernels or noise levels may not be an easy task, and would introduce additional estimation errors that may subsequently affect the restoration quality. Our PromptRestorer does not require any additional estimation modules to estimate the gap between the degraded images and clean ones. Instead, we directly utilize the degradation features extracted from the degraded image itself by a pre-trained model to provide the restoration networks with more precise degraded information. Hence, our PromptRestorer is simpler, more general, and more suitable for general image restoration tasks. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts in the response. I still have some concerns after reading the rebuttal. - How is the learnable net in Case 2 initialized? - The claim "sharper details in features fail to provide sufficient degraded information" is still weak and unconvincing. Simply visualizing features of *ONE* image and relating it to an overall PSNR gain is weak evidence for such a strong conclusion. For example, Case 2 exhibits sharper features in its later iterations (500K) but achieves better performance (~30.5 dB in Fig. 1(b)), which contradicts the claim and explanation in the rebuttal. The authors should dig deep into this phenomenon and do a comprehensive analysis before making strong claims. Is it possible to fix the well-trained restoration branch and only fine-tune the conditional branch? 
In this case, we can rule out the optimization of the restoration branch and explore the relation between the conditional branch and final performance. - The so-called *generative priors* in previous works are, more or less, different levels of features of the pre-trained models. Some prefer to freeze the pre-trained models, while others adopt joint training. Considering this, the core idea of using "original degraded features to guide image restoration" is, in its nature, not that novel in the community. The authors should avoid overclaiming the contribution. - In the last question, the authors are not asked to add an additional estimation module. If "PromptRestorer can consistently retain the degradation priors", it should be able to handle images with various degradations in a *BLIND* setting. This claim, however, is not supported by any evidence. --- Reply to Comment 1.1.1: Title: Response to Reviewer NMEt (1/2) Comment: Q: How is the learnable net in Case 2 initialized? A: In Case 2, the learnable conditional branch follows a learning scheme similar to existing conditional modulation methods [1,2,3,4,5,6]. It is trained from random initialization together with the restoration network, without any pre-training, with the aim of adaptively learning useful conditional content to guide the restoration networks. We find that the learnable manner in Case 2 is not better than our method (Case 3), which directly uses raw degradation features of the degraded image itself without learning. The learnable conditional branch tends to forget the degradation content of the input images over more iterations (Case 2 in Fig. 8), while our non-learnable method provides the restoration branch with better degradation priors (Case 3 in Fig. 8), enabling it to better restore images. Note that the learnable conditional branch in Case 2 has the same network structure as our Case 3 for fair comparison. [1] Interactive multi-dimension modulation for image restoration. TPAMI, 2022. 
[2] Interactive multi-dimension modulation with dynamic controllable residual learning for image restoration. In ECCV, 2020. [3] Conditional sequential modulation for efficient global image retouching. In ECCV, 2020. [4] Hdrunet: Single image hdr reconstruction with denoising and dequantization. In CVPR Workshops, 2021. [5] A new journey from sdrtv to hdrtv. In ICCV, 2021. [6] Toward interactive modulation for photo-realistic image restoration. In CVPR Workshops, 2021. Q: The claim "sharper details in features fail to provide sufficient degraded information" is still weak and unconvincing. For example, Case 2 exhibits sharper features in its later iterations (500K) but achieves better performance (30.5 dB in Fig. 1(b)). A: In Fig. 1(b), our method (Case 3) shows better performance than Cases 1-2 at later iterations (500K).

| Case | PSNR (dB) |
|:--------:|:---------:|
| 1 | 30.138 |
| 2 | 30.646 |
| 3 (Ours) | 31.015 |

Moreover, our method uses a non-learnable network to extract the raw degradation features, while Case 2 uses a learnable conditional network that requires more learnable parameters. Notably, both Case 1 and Case 2 exhibit sharper results in later iterations compared to earlier ones, and thus fail to provide the restoration branch with sufficient degraded information. In contrast, as the restoration branch needs to adapt to the features perceived by the PromptDPM, which extracts the raw degradation features from the degraded image itself, our model (Case 3) initially exhibits inferior performance (around 20K iterations), as shown in Fig. 1(b). However, with better adaptation to the degradation information over more iterations, the prompting branch can better prompt the restoration branch with more reliable content perceived from the raw degradation, enabling our restoration branch to overcome degradation vanishing and achieve better restoration quality. Q: Is it possible to fix the well-trained restoration branch and only fine-tune the conditional branch? 
A: Thanks for your good suggestion. We will discuss this case in the revised paper, as time does not allow us to finish training the model. Moreover, we would like to note that fixing the well-trained restoration branch and only fine-tuning the conditional branch amounts to learning conditional content tailored to one specific restoration network. Furthermore, this setup is more complicated, since each restoration network must be trained individually for its specific restoration task and each conditional branch then fine-tuned individually. In contrast, our method is simpler: the conditional network is pre-trained once and can then be applied to all restoration networks as an accurate degradation feature extractor. Our approach thus greatly simplifies the scheme mentioned by the reviewer. Moreover, using raw degradation features to guide the restoration network may offer better insight, since we explore an accurate degradation prior in deep feature space for degraded image restoration. The case the reviewer suggests seems to discard the degradation prior, but it is a good suggestion and will be discussed in the revised paper.
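The non-learnable design defended above (a frozen pre-trained encoder supplying raw degradation features to a trainable restoration branch) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's architecture: the stand-in convolutional encoder, module names, and shapes are all assumptions (the paper uses a pre-trained VQGAN encoder and a prompting modulator).

```python
import torch
import torch.nn as nn

class PromptingBranch(nn.Module):
    """Stand-in for a pre-trained degradation-feature extractor."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1))
    def forward(self, x):
        return self.enc(x)

class RestorationBranch(nn.Module):
    """Toy restoration network conditioned on the prompt features."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(3 + 16, 3, 3, padding=1)
    def forward(self, x, prompt):
        # residual restoration guided by the raw degradation features
        return x + self.body(torch.cat([x, prompt], dim=1))

prompting = PromptingBranch()
for p in prompting.parameters():   # freeze: the degradation features are
    p.requires_grad_(False)        # never optimized toward sharper content
prompting.eval()

restorer = RestorationBranch()
opt = torch.optim.Adam(restorer.parameters(), lr=1e-4)

degraded = torch.rand(1, 3, 32, 32)
clean = torch.rand(1, 3, 32, 32)
with torch.no_grad():              # prompt comes from the frozen branch
    prompt = prompting(degraded)
loss = nn.functional.l1_loss(restorer(degraded, prompt), clean)
loss.backward()                    # gradients reach only the restorer
opt.step()
```

Only `restorer` receives gradient updates; because the prompting branch is frozen and its output is computed under `torch.no_grad()`, its features cannot drift toward sharper content during training, which is the distinction between Case 3 and the learnable Case 2 discussed above.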
Summary: This paper proposes a Prompt image Restorer, which contains a restoration branch and a prompting branch. A pre-trained model is utilized to extract features in the prompting branch, and a prompting modulator is proposed to better perceive features from global and local perspectives in the restoration branch. Extensive experimental results on multiple restoration tasks show the effectiveness of their method. Strengths: 1. The results in some experiments show improvement over previous methods. 2. The paper is well-written and easy to understand. Weaknesses: 1. The prompting approach proposed in this paper is similar to that taken in some other papers, such as "Take a Prior from Other Tasks for Severe Blur Removal", where features are added as conditions. PromptRestorer doesn't differ significantly, which makes me feel like it is more of a repackaging of prompt learning. 2. As a model for image generation, why does the pre-trained model provide raw degradation information instead of image content information? What advantages does this approach have compared to some other methods of degradation representation? For example, "Learning Degradation Representations for Image Deblurring", and "Learning Disentangled Feature Representation for Hybrid-distorted Image Restoration". 3. Comparisons with the latest methods in Table 2, such as Restormer and NAFNet, are missing. 4. The article lacks comparisons with previous methods in terms of computational complexity and the number of parameters. 5. The method in this paper employs a pre-trained VQGAN, which introduces additional data. Is this experimental comparison fair? Did the authors account for the impact of this aspect? 6. From Figure 8, it can be observed that your method visually resembles the features of the ground truth (GT). However, GT does not contain any degradation information, which contradicts the claims of raw degradation information made in your paper. 
This makes the motivation and claims of this paper unconvincing. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q: The prompting way proposed in this paper is similar to the approach taken in some other papers, such as "Take a Prior from Other Tasks for Severe Blur Removal", where features are added as conditions. PromptRestorer doesn't have a significant difference. A: The abovementioned method uses a semantic/classification prior to guide the restoration branch. We note that the condition networks in that method are learnable and supervised by GT features. This manner is similar to the model of Case 2 in our paper. Our method is different: we use a pre-trained model to extract the raw degraded features, and the pre-trained model remains frozen while optimizing the restoration branch. Such a design provides the restoration branch with original degradation features instead of optimized features. Our method is more direct, simpler, and more effective than methods that use a learnable branch to guide the restoration network. Moreover, different from existing methods, we also show that using the pre-trained model to extract raw degraded features to guide the restoration network overcomes the problem of ''degradation vanishing'' (as shown in Fig. 8): it provides the restoration networks with precise degraded information, enabling the restoration network to perceive the degradation priors for better restoration. To the best of our knowledge, we are the first to use raw degraded features to guide image restoration and to show that raw degraded features are more effective than learnable conditions. Q: As a model for image generation, why does the pre-trained model provide raw degradation information instead of image content information? A: Since the pre-trained VQGAN can reconstruct the input images, as illustrated in VQFR [1], the pre-trained VQGAN model represents the features of the input images in feature space. 
When the input is a degraded image, the pre-trained VQGAN model therefore generates degradation features. Hence, ''raw degradation information'' is a more precise name than ''image content information''. [1] Gu et al. VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder. ECCV 2022. Q: What advantages does this approach have compared to some other methods of degradation representation? e.g., "Learning Degradation Representations for Image Deblurring" (Ref-A), and "Learning Disentangled Feature Representation for Hybrid-distorted Image Restoration" (Ref-B). A: We note that the method in Ref-A first jointly trains the conditional encoder that extracts the degraded information with the deblurring networks for 200K iterations in the first training phase, then freezes the conditional encoder and trains the deblurring networks in the second training phase. The first training phase prevents the encoder from effectively preserving the degraded information, as the encoder is optimized toward sharper features. In contrast, our pre-trained model, which extracts the degraded information, remains frozen while training the restoration networks and can thus more precisely generate the degraded information. Hence, our method purely provides degraded information to the restoration networks. The method in Ref-B uses learnable disentangled features in the restoration networks rather than degraded information to guide the deep networks. Hence, our method is different from the above methods. Q: Comparisons with Restormer and NAFNet. A: We re-implement our method and compare it with Restormer and NAFNet; the results are summarized in the table below. Compared with Restormer, our method consumes fewer parameters while improving PSNR by 0.14 dB and generalizing better on the HIDE dataset. On the RealBlur benchmarks, our PromptRestorer is competitive. 
Compared with NAFNet, our method consumes only about 1/3 of NAFNet's parameters but achieves consistent improvements on HIDE, RealBlur-R, and RealBlur-J. These results demonstrate that our PromptRestorer is competitive.

| Datasets | Restormer | NAFNet | Ours |
|:----------:|:-----------:|:-----------:|:-----------:|
| GoPro | 32.92/0.961 | 33.71/0.967 | 33.06/0.962 |
| HIDE | 31.22/0.942 | 31.31/0.943 | 31.36/0.944 |
| RealBlur-R | 36.19/0.957 | 35.97/0.951 | 36.06/0.954 |
| RealBlur-J | 28.96/0.879 | 28.31/0.856 | 28.82/0.873 |
| Para (M) | 26.1 | 67.9 | 24.4 |
| FLOPs (G) | 140.99 | 63.33 | 186.3 |

We set the number of channels to 32, 64, and 128; the number of CGTs is [2, 3, 5] from level-1 to level-3; and the expanding channel capacity factor is 3. The number of iterations is 600K and the batch size is 15. Q: The method in this paper employs a pre-trained VQGAN, which introduces additional data. Is this comparison fair? Did the authors account for the impact? A: The comparison is fair. We compare our method with a model using learnable conditional modulation, where the learnable conditional modulation has the same network architecture as the pre-trained VQGAN encoder used in our method for fair comparison (footnote on Page 8). Tab. 5 shows that our method performs better. Q: From Fig. 8, it can be observed that your method visually resembles the features of the GT. However, GT does not contain any degradation information, which contradicts the claims of raw degradation information made in your paper. A: As the GT features and the raw degraded features are extracted from the same pre-trained VQGAN, their colors are similar, but the GT features are sharper while the raw degraded features are blurry. As the other features are produced by their individually learned networks, they exhibit different patterns (colors). Viewed on a high-resolution display, the degraded features are more blurry while the other features become sharper. The GT features in this paper serve only as a reference. 
Hence, our conclusion is consistent with our motivation. --- Rebuttal Comment 1.1: Comment: Your approach involves a pre-trained prompting branch with freezing, while the learnable ways include a randomly initialized prompting branch with fine-tuning, and a pre-trained prompting branch with fine-tuning. From the context provided in the text, it seems that Case 2 refers to the randomly initialized prompting branch with fine-tuning. Therefore, in your Case 2 experiments, how was the prompting branch initialized? --- Reply to Comment 1.1.1: Title: Response to Reviewer TXGv Comment: Q: Your approach involves a pre-trained prompting branch with freezing, while the learnable ways include a randomly initialized prompting branch with fine-tuning, and a pre-trained prompting branch with fine-tuning. From the context provided in the text, it seems that Case 2 refers to the randomly initialized prompting branch with fine-tuning. Therefore, in your Case 2 experiments, how was the prompting branch initialized? A: In Case 2, the learnable conditional branch follows a learning scheme similar to existing conditional modulation methods [1,2,3,4,5,6]. It is trained from random initialization together with the restoration network, without any pre-training, with the aim of adaptively learning useful conditional content to guide the restoration networks. We find that the learnable manner in Case 2 is not better than our method (Case 3), which directly uses raw degradation features of the degraded image itself without learning. The learnable conditional branch tends to forget the degradation content of the input images over more iterations (Case 2 in Fig. 8), while our non-learnable method provides the restoration branch with better degradation priors (Case 3 in Fig. 8), enabling it to better restore images. Note that the learnable conditional branch in Case 2 has the same network structure as our Case 3 for fair comparison. [1] Interactive multi-dimension modulation for image restoration. TPAMI, 2022. 
[2] Interactive multi-dimension modulation with dynamic controllable residual learning for image restoration. In ECCV, 2020. [3] Conditional sequential modulation for efficient global image retouching. In ECCV, 2020. [4] Hdrunet: Single image hdr reconstruction with denoising and dequantization. In CVPR Workshops, 2021. [5] A new journey from sdrtv to hdrtv. In ICCV, 2021. [6] Toward interactive modulation for photo-realistic image restoration. In CVPR Workshops, 2021.
$H$-Consistency Bounds: Characterization and Extensions
Accept (poster)
Summary: This work proposes a general characterization and an extension of H-consistency bounds for multiclass classification. By introducing an error transformation function, the paper provides a general tool for deriving H-consistency bounds with tightness guarantees. The paper demonstrates that the proposed tools and bounds can recover or even improve results from various recent works. Strengths: This is good work in the field, original with innovative methodologies and new insights. The paper provides a general characterization and extension of H-consistency bounds for multiclass classification. Focusing on two widely studied types of loss functions in the literature, comp-sum losses and constrained losses, a new tool is introduced to derive H-consistency bounds with tightness guarantees. Before this work, deriving such bounds often required proofs for each instance. The tool proposed in this work, based on error transformations (Theorem 2, 3, 5, 10, 11, 12), is general and can be applied to derive H-consistency bounds (Theorem 4, 6, 9, Corollary 7, 8) for various loss functions and hypothesis sets. The paper demonstrates that the proposed tools and bounds can recover or even improve results from various recent works (Line 201-206, 246-248, 253-255, 261-272, 301-306, 318-322). The quality of research is good, with meticulous comparisons with several recent works and detailed analysis leading to convincing conclusions. Well-organized and well-written, the paper adeptly communicates complex concepts in a relatively understandable way. The work is significant, with findings that resonate beyond the specific area of study and are likely to stimulate further research in the field of H-consistency bounds for multiclass classification. Overall, this paper is a nice addition to the literature. Weaknesses: As far as I am concerned, I do not see any significant weaknesses. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you provide further explanation on the connection between the results presented in this study and the theory of calibration functions as described by Steinwart in 2007? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I could not find the location where the authors explicitly described the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. Below please find responses to specific questions. **Could you provide further explanation on the connection between the results presented in this study and the theory of calibration functions as described by Steinwart in 2007?** **Response:** The calibration function is a general tool tailored to the family of all measurable functions, and is designed to provide excess error bounds between a surrogate loss and the target loss. The calibration function does not take into account the specific hypothesis set. On the other hand, our error transformation function is a general tool designed to provide $H$-consistency bounds between a surrogate loss and the target loss. These functions are tailored to the hypothesis set $H$ adopted. In the special case where $H$ is the family of all measurable functions, the error transformation function coincides with the calibration function, and the derived $H$-consistency bound coincides with an excess error bound. We will further clarify and detail these explanations in the final version. **Limitations:** **I could not find the location where the authors explicitly described the limitations of the work.** **Response:** Thank you for pointing it out. We will add an explicit discussion in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I have read all the reviews and responses. I appreciate the authors' effort. I retain my rating of 7.
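As a hedged sketch of the distinction drawn in this response, using notation common in the $H$-consistency literature (the minimizability gaps $\mathcal{M}_{\ell}(\mathcal{H})$ and the generic form below are assumptions drawn from prior work on $H$-consistency bounds, not quoted from this paper):

```latex
% Generic form of an H-consistency bound: for all h in H,
\mathcal{E}_{\ell_{0\text{-}1}}(h) - \mathcal{E}^{*}_{\ell_{0\text{-}1}}(\mathcal{H})
  + \mathcal{M}_{\ell_{0\text{-}1}}(\mathcal{H})
\;\le\;
\Psi\big(\mathcal{E}_{\ell}(h) - \mathcal{E}^{*}_{\ell}(\mathcal{H})
  + \mathcal{M}_{\ell}(\mathcal{H})\big)
% where M_l(H) = E*_l(H) - E_x[ inf_{h in H} C_l(h, x) ] denotes the
% minimizability gap. For H = H_all the gaps vanish and E*_l(H_all) is the
% Bayes l-risk, so the bound reduces to an excess error bound and the
% transformation plays the role of a calibration function, as stated above.
```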
Summary: Unlike previous work on H-consistency bounds for surrogate loss functions, this paper analyzes them from a more unified perspective. For this purpose, the paper proposes a new error transformation method to present new tight H-consistency bounds for comp-sum losses and constrained losses in multi-class classification. With this method, the authors derive the H-consistency bounds in a unified manner for various loss functions, and a new theoretical analysis has been developed for non-complete hypothesis sets. Strengths: # Writing and Organization - The paper is well organized such that it is easy to understand the contribution of this paper, especially the technical difference from existing work. - The extensive comparison in Appendix A is very helpful in understanding the recent progress in this field, and I think Appendix A should be placed in the main paper if space permits. # Technical contribution - This paper presents a significant improvement over the existing H-consistency bounds by developing a unified and consistent approach based on error transformation, which seems novel. Weaknesses: - I think the argument of Section 2 should be improved for easier understanding. 1. The notation introduced in Sec 2 is hard to understand at first read. It would be helpful to add a summary of notation in the Appendix so that readers can easily follow the importance of this setting. 2. I think the current argument in paragraph 106 is hard to follow. I finally understood the paragraph after reading Appendix B.2. It might be better to introduce $I_l(\mathcal{H})$ in the main paper, which currently only appears in the Appendix, to better understand the difference between $M_l$ and $A_l$. 3. I like the explanation in Appendices B.2 and B.3 more than the current main paper, since the importance of each concept is discussed mathematically and clearly. - The statement, especially Theorem 2, is very general and hard to understand. 
I understand that this is unavoidable because the developed analysis offers a unified viewpoint, but it would be better to add a little more discussion of the statements, for example, under Theorem 2. Typos - In line 107, H_all is used before its formal definition. Currently, it is defined in Line 114, so it should be corrected. - Some parentheses are missing in equations from lines 532 to 533 in the Appendix. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In line 59, the author explained that the bound in Awasthi 2022b may not be tight due to its ad-hoc analysis. I would like to know what is meant by ad-hoc analysis and when it is not tight, to clarify the limitation of existing work. - In Appendix C.1, I could not follow the equation between lines 574 and 575. Could you explain how the condition is derived from the second equality to the third equality, where $\inf_{\tau_1 \leq \max(\dots)}$ appears? - I found it hard to understand the intuition behind J^{comp} for $n>2$ in Theorem 2, since the constraint is very complex. Is there any clear explanation for that? - In Theorems 2 and 5, the statement says that there exists a distribution D and h such that the equality holds. Could you present an explicit example of that? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations and assumptions of the analysis are presented in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We have carefully addressed all the questions raised. Please find our responses below. **Weaknesses:** **1. I think the argument of section 2 should be improved to understand more easily.** **Response:** We thank the reviewer for their suggestions. We will certainly incorporate them and will further improve our notation and presentation. In particular, we will include a notation summary in the appendix and a comprehensive discussion of each concept in the main body, by leveraging the extra page available in the final version. **2. The statement, especially Theorem 2, is very general and hard to understand. I understand that this is unavoidable because the developed analysis is a kind of unified viewpoint, but it would be better to add a little more discussion of the statements, for example, under Theorem 2.** **Response:** Thank you for your suggestions. We will definitely provide a more comprehensive discussion of the general results, particularly focusing on Theorem 2, in the final version. **3. Typos.** Response: Thank you, we will correct them. **Questions:** **1. In line 59, the author explained that the bound in Awasthi 2022b may not be tight due to its ad-hoc analysis. I would like to know what it means by ad-hoc analysis and when it is not tight to clarify the limitation of existing work.** **Response:** When referring to their ''ad-hoc analysis'', we mean that their method of upper-bounding the estimation error of the target loss by that of the surrogate loss heavily depends on the specific loss function and does not extend to a new loss function. Their upper bounds may not be tight, as the ad-hoc inequality used in their derivation could, in some instances, be further improved. For instance, their bounds for constrained exponential losses are found not to be tight. In contrast, we offer a general tool to derive $H$-consistency bounds in a more systematic manner. 
Moreover, by using the error transformation functions, we succeed in obtaining bounds that are tighter than those provided by Awasthi et al. [2022b]. We will further clarify this in the final version. **2. In Appendix C.1, I could not follow the equation between lines 574 and 575. Could you explain how the condition is derived from the second equality to the third equality, where $\inf_{\gamma_1\leq \max(\ldots)}$ appeared.** **Response:** Sorry for the confusion. There is a typo; the third equality should actually be an inequality, derived by directly taking the infimum. We will correct that in the final version. **3. I found it hard to understand what is the intuition behind J^{comp} for $n>2$ in Theorem 2 since the constraint is very complex. Is there any clear explanation for that?** **Response:** We acknowledge the complexity of the formulation for $n>2$, which is a result of its general and unifying nature, encompassing various loss functions. The conditional probability vector and scoring functions take on more flexible forms when $n>2$, leading to intricate constraints. We will provide a thorough explanation and clarification in the final version to make it more understandable. **4. In theorem 2 and 5, the statement says that there exists a distribution D and h such that the equality holds. Could you present an explicit example of that?** **Response:** The example of a particular distribution $\mathcal{D}$ and a hypothesis $h$ will be contingent on the specific loss function and hypothesis set. This is because the conditional probability vector of $\mathcal{D}$ and the scoring functions of $h$ must closely achieve the infimum in the transformation. In general, they are challenging to fully characterize, and often only existence can be shown. Nevertheless, we will seek to include in the final version an explicit example in simple cases for illustration.
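To give a concrete, classical instance of such a transformation in the special case $\mathcal{H}=\mathcal{H}_{all}$ (where, per the earlier response, the error transformation coincides with the calibration function), the sketch below numerically checks the binary hinge-loss case: predicting the wrong label forces a conditional hinge excess of exactly $|2\eta-1|$, matching the conditional zero-one excess. This is an illustration under that assumption, not the paper's multi-class, $\mathcal{H}$-dependent tool; all names are illustrative.

```python
# Binary special case H = H_all: calibration-function-style check for the
# hinge loss phi(t) = max(0, 1 - t). The conditional zero-one excess is
# |2*eta - 1|; we verify on a grid that any wrong-sign score incurs at
# least that much conditional hinge excess (with equality, i.e. tightness).

def cond_hinge_risk(eta, t):
    """Conditional hinge risk at score t when P(y = 1 | x) = eta."""
    return eta * max(0.0, 1.0 - t) + (1.0 - eta) * max(0.0, 1.0 + t)

ts = [i / 100.0 for i in range(-300, 301)]  # score grid incl. minimizers +-1

def wrong_label_excess(eta):
    """Min conditional hinge excess over scores predicting the wrong label."""
    best = min(cond_hinge_risk(eta, t) for t in ts)
    wrong = [t for t in ts if t * (2.0 * eta - 1.0) <= 0.0]
    return min(cond_hinge_risk(eta, t) for t in wrong) - best

etas = [i / 100.0 for i in range(5, 100, 5)]
slack = [wrong_label_excess(eta) - abs(2.0 * eta - 1.0) for eta in etas]
# slack is ~0 for every eta: the bound (zero-one excess <= hinge excess)
# holds and is attained, i.e. the transformation is tight in this case.
```

For restricted hypothesis sets $\mathcal{H}$, the analogous quantity would additionally constrain the scores achievable by $\mathcal{H}$, which is exactly where the hypothesis-set-dependent transformation of the paper departs from the classical calibration function.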
Summary: The authors propose a general characterization and extension of H-consistency bounds for multi-class classification. They introduce an error transformation function that serves as a general tool for deriving these guarantees with tightness guarantees. The paper demonstrates that calculating the error transformation function enables the derivation of H-consistency bounds for various loss functions and hypothesis sets. The general tools and tight bounds presented in the paper offer several advantages. They improve existing bounds for complete hypothesis sets, encompass a wide range of previously studied and new loss functions, extend beyond the completeness assumption, provide guarantees for bounded hypothesis sets, and offer a stronger guarantee for logistic loss with linear hypothesis sets compared to previous work. Overall, this paper contributes to the understanding and derivation of H-consistency bounds, providing a more general framework and tools for analyzing surrogate loss functions in machine learning. Strengths: This paper is well presented. It introduced a new general tool for deriving these H-consistency bounds with tightness guarantees. The results are carefully articulated and compared with existing work. I did not have enough time to verify all the math details, but the results look solid. Weaknesses: See next section Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the theory be extended to derive tight bounds for more complicated models such as simple neural networks? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. Below please find responses to specific questions. **Can the theory be extended to derive tight bounds for more complicated models such as simple neural networks?** **Response:** Good question! Our $H$-consistency bounds, as demonstrated in Theorem 2, Theorem 4, Theorem 5, Theorem 6, Theorem 9, Theorem 10, and Theorem 12, are not limited to specific hypothesis set forms. They are directly applicable to various types of hypothesis sets, including linear functions and complex neural networks. For instance, Corollary 8, derived from Theorem 6, provides $H$-consistency bounds for linear hypothesis sets by explicitly computing and incorporating the term $\Lambda(x)$ into the general formulation of $\Psi$. The same derivation can be extended to neural networks studied in [Awasthi et al., 2022a] and their multi-class generalization, where we calculate and substitute the corresponding $\Lambda(x)$ value. As a result, we obtain novel and tight $H$-consistency bounds for bounded neural network hypothesis sets in multi-class classification, highlighting the remarkable versatility of our general tools. We will further elaborate on that and include specific results related to the hypothesis set of neural networks in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification.
Summary: H-consistency bounds for surrogate loss functions play an important part in learning theory. This manuscript provides a comprehensive characterization and extension of H-consistency bounds for multi-class classification. The authors introduce a novel error transformation function that enables the derivation of tighter bounds, even under weaker assumptions compared to existing related works. Technically, this research presents a systematic approach to H-consistency bounds. Moreover, it contributes to the refinement of results concerning the consistency of multi-class classification on a theoretical level. Strengths: The authors present a systematic technique for H-consistency analysis and apply it to various types of loss functions. The authors compare the results in the manuscript with those in related works in detail and show a significant advance. Weaknesses: This manuscript includes a wealth of work but only limited technical innovations compared to [1]. [1] P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. Multi-class H-consistency bounds. NeurIPS 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: • As stated in the manuscript, H-consistency is a generalization of Bayes consistency, and new techniques have been developed in the analysis of H-consistency. Is it possible to illustrate the advantages of H-consistency over Bayes consistency with more specific examples? • The H-consistency analysis for not only the linear hypothesis class but also the neural network class has been done in [2]. Can the results in the manuscript be extended to the neural network class as well? As far as I understand, the analysis in the manuscript relies heavily on the explicit form of the hypothesis, does it mean that it is difficult to extend these inequalities to the deep neural network class? • Between Line 256 and 257: $\Phi^{-1} \to \Psi^{-1}$ [2] P. Awasthi, A. Mao, M. Mohri, and Y. Zhong. H-consistency bounds for surrogate loss minimizers. ICML 2022. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I suggest the authors illustrating more specifically the advantages of H-consistency over Bayes consistency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We have carefully addressed all the questions raised. Please find our responses below. **Weaknesses:** **This manuscript includes a wealth of work but only limited technical innovations compared to [1].** **Response:** Our work introduces a key technical innovation: error transformation functions. These functions enable a more systematic and straightforward derivation of $H$-consistency bounds. The versatility of this novel tool offers significant advantages, including tighter bounds surpassing those in [1] for complete hypothesis sets and, importantly, tailored guarantees for bounded hypothesis sets, going beyond completeness assumptions. We are confident that our innovative approach will pave the way for advances in multi-class classification consistency research and benefit the analysis of consistency in various other scenarios. **Questions:** **1. As stated in the manuscript, H-consistency is a generalization of Bayes consistency, and new techniques have been developed in the analysis of H-consistency. Is it possible to illustrate the advantages of H-consistency over Bayes consistency with more specific examples?** **Limitations:** **I suggest the authors illustrating more specifically the advantages of H-consistency over Bayes consistency.** **Response:** Certainly, in the final version, we will provide illustrative examples. A critical issue with Bayes consistency lies in its assumption of access to the entire family of measurable functions, which contrasts with the limited hypothesis set $H$ a learning algorithm can rely on. The concept of $H$-consistency (and $H$-consistency bound) specifically aims to capture the properties of the particular hypothesis set $H$ used. 
For instance, Long and Servedio (2013) presented a compelling case where they demonstrated both theoretically and empirically that the expected error of an algorithm minimizing a Bayes-consistent loss remains bounded by a positive constant. In contrast, the expected error of an algorithm minimizing an inconsistent but realizable $H$-consistent loss approaches zero. This example highlights the significance of considering the characteristics of the chosen hypothesis set $H$ in practical learning scenarios. **2. The H-consistency analysis for not only the linear hypothesis class but also the neural network class has been done in [2]. Can the results in the manuscript be extended to the neural network class as well? As far as I understand, the analysis in the manuscript relies heavily on the explicit form of the hypothesis, does it mean that it is difficult to extend these inequalities to the deep neural network class?** **Response:** Good question! Our $H$-consistency bounds, as demonstrated in Theorem 2, Theorem 4, Theorem 5, Theorem 6, Theorem 9, Theorem 10, and Theorem 12, are not limited to specific hypothesis set forms. They are directly applicable to various types of hypothesis sets, including linear functions and complex neural networks. For instance, Corollary 8, derived from Theorem 6, provides $H$-consistency bounds for linear hypothesis sets by explicitly computing and incorporating the term $\Lambda(x)$ into the general formulation of $\Psi$. The same derivation can be extended to neural networks studied in [2] and their multi-class generalization, where we calculate and substitute the corresponding $\Lambda(x)$ value. As a result, we obtain novel and tight $H$-consistency bounds for bounded neural network hypothesis sets in multi-class classification, highlighting the remarkable versatility of our general tools. We will further elaborate on that and include specific results related to the hypothesis set of neural networks in the final version. **3. 
Between Line 256 and 257: $\Phi^{-1} \to \Psi^{-1}$** **Response:** Thank you, we will correct the typo.
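For readers following this exchange, it may help to recall the general shape of an $H$-consistency bound. The display below is an editor's schematic following Awasthi et al. (2022), not a verbatim statement from the paper under review:

```latex
% Schematic form of an H-consistency bound (editor's sketch, following
% Awasthi et al., 2022): \mathcal{E}_\ell(h) denotes the expected \ell-loss
% of h, \mathcal{E}^*_\ell(H) its infimum over H, and \mathcal{M}_\ell(H)
% the minimizability gap of \ell for H.
\[
\mathcal{E}_{\ell_{0\text{--}1}}(h) - \mathcal{E}^*_{\ell_{0\text{--}1}}(H)
  + \mathcal{M}_{\ell_{0\text{--}1}}(H)
\;\le\;
\Psi^{-1}\!\bigl( \mathcal{E}_{\ell}(h) - \mathcal{E}^*_{\ell}(H)
  + \mathcal{M}_{\ell}(H) \bigr),
\qquad \forall\, h \in H,
\]
% where \Psi is a non-decreasing function.
```

The error transformation functions discussed in the rebuttals are, per the authors, a systematic way of computing such a $\Psi$ with tightness guarantees.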
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents an extension of previous works on $\mathcal{H}$-consistency bounds for multi-class classification using a novel method. The strength of the paper lies in proposing a new method and tool that guarantees tight $\mathcal{H}$-consistency bounds for multi-class classification. Strengths: The paper proposes a general characterisation of $\mathcal{H}$-consistency bounds for multi-class classification via new tools. Weaknesses: One weakness of the paper is the lack of clarity in the discussion of tightness guarantees. While the statement in lines 191-192 suggests the existence of a distribution $\mathcal{D}$ and a hypothesis for which the upper bound in Theorem 2 is tight, it is not explicitly clarified whether this bound holds tight for all data distributions. Therefore, it is recommended to provide further clarification regarding the tightness statement. To enhance the comprehension of the paper, it would be beneficial to include an experimental design demonstrating the tightness of the proposed bounds in comparison to previous results. The authors mention the application of a tool to derive new results, but it would be helpful to explicitly mention and explain the techniques employed in the main body of the paper. Currently, it is unclear what these tools entail. In the proof of Theorem 3, it is advised to provide additional elaboration on the equalities following line 587. It should be noted that in the second equality, the infimum with respect to $\tau$ is taken over all terms. In lines 318-319, the authors mentioned "Next, we illustrate the application of our theory through an example of constrained exponential losses,". However, the paper is finished. The proof of Theorems 2 and 5 for the tightness is limited to n=2. For n>2, there is no discussion. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Please refer to the weaknesses section for the identified limitations. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Please refer to the weaknesses section for the identified questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your useful comments. We have carefully addressed all the questions raised. Please find our responses below. **1. One weakness of the paper is the lack of clarity in the discussion of tightness guarantees. While the statement in lines 191-192 suggests the existence of a distribution $\mathcal{D}$ and a hypothesis for which the upper bound in Theorem 2 is tight, it is not explicitly clarified if this bound holds tight for all data distributions. Therefore, it is recommended to provide further clarification regarding the tightness statement.** **Response:** We will further clarify that in the final version. In short, our $H$-consistency bounds are distribution-independent and we do not claim tightness across all distributions. Our analysis can be extended, however, to derive finer distribution-dependent bounds under assumptions such as Massart’s noise condition, as in (Awasthi et al., 2022). **2. To enhance the comprehension of the paper, it would be beneficial to include an experimental design demonstrating the tightness of the proposed bounds in comparison to previous results.** **Response:** This is certainly a natural suggestion. We will seek to add such experiments in the final version to empirically illustrate the tightness of our proposed bounds. **3. The authors mention the application of a tool to derive new results, but it would be helpful to explicitly mention and explain the techniques employed in the main body of the paper. Currently, it is unclear what these tools entail.** **Response:** We will further clarify that in the final version. Our error transformation function serves as a very general tool for deriving $H$-consistency bounds with tightness guarantees. These functions are defined within each class of loss functions including comp-sum losses and constrained losses, and their formulation depends on the structure of the individual loss function class, the range of the hypothesis set and the number of classes. 
To derive explicit bounds, all that is needed is to calculate these error transformation functions. Under some broad assumptions on the auxiliary function within a loss function, these error transformation functions can be further distilled into more simplified forms, making them straightforward to compute. **4. In the proof of Theorem 3, it is advised to provide additional elaboration on the equalities following line 587. It should be noted that in the second equality, the infimum with respect to $\tau$ is taken over all terms.** **Response:** Thank you for the suggestion. You are absolutely correct, the infimum with respect to $\tau$ is taken over all terms in the second equality. We will definitely add more explanation and clarify the derivation further in the final version. **5. In lines 318-319, the authors mentioned "Next, we illustrate the application of our theory through an example of constrained exponential losses,". However, the paper is finished.** **Response:** Sorry for the confusion. The example of constrained exponential losses is deferred to Appendix F.2 due to space limitations. We will further improve our presentation for readers and use the extra page in the final version to add more discussions in the main body. **6. The proof of Theorems 2 and 5 for the tightness is limited to n=2. For n>2, there is no discussion.** **Response:** Sorry for the confusion. The proof for n>2 closely follows and directly extends from the case when n=2, by considering the distribution that concentrates on a singleton. This was inadvertently omitted, but we will definitely include a thorough discussion in the final version. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response. It seems that the paper is not well-organised (items 5 and 6) and some parts are omitted. The authors had the chance to provide simple experiments and discuss their results in the one-page rebuttal. 
Regarding the proof of Theorem 3, in line 587, the second equality is not clear. Could you please clarify this equality? --- Reply to Comment 1.1.1: Title: Thank you Comment: We appreciate the reviewer’s feedback and comments. **1. It seems that the paper is not well-organised (items 5 and 6) and some parts are omitted.** **Response:** We appreciate the positive feedback we have received from multiple reviewers regarding our paper's organization. It is unfortunate that your perspective differs but we value your input. Due to space limitations, we had to relocate a portion of the content to the appendix shortly before the deadline. Furthermore, a minor section was inadvertently commented out, as you previously noted. As we have highlighted before, rest assured that these issues will be addressed and easily rectified in the final version. **2. The authors had the chance to provide simple experiments and discuss their results in the one-page rebuttal.** **Response:** As indicated in our previous response, we already *prove* the tightness of our bounds. As already promised, we will seek to present an experiment illustrating the tightness of our bounds in the final version. But, we should emphasize that presenting a specific empirical example showcasing this tightness is not straightforward. **3. Regarding the proof of Theorem 3, in line 587, the second equality is not clear. Could you please clarify this equality?** **Response:** The equality simply corresponds to the fact that the supremum of the negative of an expression equals the negative of the infimum of that expression. Since the terms depending on $\mu$ are both negative, they can first be grouped together and then the mentioned equivalence applied. 
In detail, the second equality repositions the supremum from the preceding equality within the curly brackets by using the fact that $\sup_{\mu \in [-\tau, 1-\tau]} \bigg \\{ -\frac{1+t}{2} \Phi(1-\tau-\mu) - \frac{1-t}{2} \Phi (\tau + \mu) \bigg \\} = -\inf_{\mu \in [-\tau, 1-\tau]} \bigg \\{ \frac{1+t}{2} \Phi(1-\tau-\mu) + \frac{1-t}{2} \Phi (\tau + \mu) \bigg \\}$. As you rightly noted, the infimum with respect to $\tau$ in the second equality should encompass all terms. We will correct that typo in the final version. Further, the subsequent equality is grounded on the observation that for any $\mu$ within the interval $[-\tau, 1-\tau]$, the values $(1-\tau-\mu)$, $(\tau + \mu)$ are confined to $[0,1]$ and $(1-\tau-\mu) + (\tau + \mu) =1$. This leads us to express $\inf_{\mu \in [-\tau, 1-\tau]} \bigg \\{ \frac{1+t}{2} \Phi(1-\tau-\mu) + \frac{1-t}{2} \Phi (\tau + \mu) \bigg \\}$ in an equivalent form in the third equality as $\inf_{\mu \in [-\frac12, \frac12]} \bigg \\{ \frac{1-t}{2} \Phi(\frac12+\mu) + \frac{1+t}{2} \Phi (\frac12 - \mu) \bigg \\}$. Consequently, the infimum with respect to $\tau$ pertains exclusively to the first part of the third equality.
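As a compact check of the reparameterization described above (same notation; an editor's illustrative sketch, not text from the paper):

```latex
% Step 1: substitute u = \tau + \mu; as \mu ranges over [-\tau, 1-\tau],
% u ranges over [0, 1], so the expression no longer depends on \tau:
\[
\inf_{\mu \in [-\tau, 1-\tau]}
  \Bigl\{ \tfrac{1+t}{2}\,\Phi(1-\tau-\mu) + \tfrac{1-t}{2}\,\Phi(\tau+\mu) \Bigr\}
= \inf_{u \in [0, 1]}
  \Bigl\{ \tfrac{1+t}{2}\,\Phi(1-u) + \tfrac{1-t}{2}\,\Phi(u) \Bigr\}.
\]
% Step 2: write u = 1/2 + \mu' with \mu' \in [-1/2, 1/2]:
\[
\inf_{u \in [0, 1]}
  \Bigl\{ \tfrac{1+t}{2}\,\Phi(1-u) + \tfrac{1-t}{2}\,\Phi(u) \Bigr\}
= \inf_{\mu' \in [-\frac12, \frac12]}
  \Bigl\{ \tfrac{1-t}{2}\,\Phi\bigl(\tfrac12+\mu'\bigr)
        + \tfrac{1+t}{2}\,\Phi\bigl(\tfrac12-\mu'\bigr) \Bigr\}.
\]
```

This recovers the third equality stated in the rebuttal, with the $\tau$-dependence confined to the remaining terms outside this infimum.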
Summary: The authors present an extension study of the H-consistency bounds for various loss functions. They also introduce an error transformation function that the authors claim could be a general tool for deriving H-consistency bounds. Strengths: The authors seem very well aware of the related work and have a solid comparison with it. Weaknesses: Maybe the authors can simplify the notation and presentation so that the paper can be consumed by a more general audience. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I understand the importance of introducing a more general mathematical tool for deriving an H-consistency bound for a wide range of functions (but I did not check the correctness of this part). I don't know what the value of a tighter H-consistency bound is. This bound does not seem really useful (or maybe I am wrong) in practice to guide people's design decisions, whereas Awasthi et al. introduced the H-consistency bound. 2. Essentially following 1 above, while we pushed the theoretical guarantee forward a little bit, application-wise, what are the add-on values that this paper brings compared to the related work? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: A bit hard for a general audience to understand. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your useful comments. We have carefully addressed all the questions raised. Please find our responses below. **Weaknesses:** **Maybe the authors can simplify the notation and presentation so that the paper can be consumed by a more general audience.** **Limitations:** **A bit hard for a general audience to understand.** **Response:** We will definitely follow your suggestions and will seek to simplify and improve the notation and presentation for readers. The extra page in the final version will also allow us to expand the discussion in the main body. **Questions:** **1. I understand the importance of introducing a more general mathematical tool for deriving an H-consistency bound for a wide range of functions (but I did not check the correctness of this part). I don't know what the value of a tighter H-consistency bound is. This bound does not seem really useful (or maybe I am wrong) in practice to guide people's design decisions, whereas Awasthi et al. introduced the H-consistency bound.** **Response:** Indeed, one of our principal contributions lies in a more general and convenient mathematical tool for proving $H$-consistency bounds. However, the derivation of tighter $H$-consistency bounds is equally significant. As mentioned by Awasthi et al., given a hypothesis set $H$, $H$-consistency bounds can be used to compare different surrogate loss functions and select the most favorable one, which depends on 1) The functional form of the $H$-consistency bound; 2) The smoothness of the loss and, more generally, its optimization properties; 3) Approximation properties of the surrogate loss function: for instance, given a choice of $H$, the minimizability gap for a surrogate loss may be more or less favorable; 4) The dependency of the multiplicative constant on the number of classes. 
Consequently, a tighter $H$-consistency bound provides a more accurate comparison, as a loose bound might not adequately capture the full advantage of using one surrogate loss. In contrast, Bayes-consistency does not take into account the hypothesis set and is an asymptotic property, thereby failing to guide the comparison of different surrogate losses. Another application of our $H$-consistency bounds involves deriving generalization bounds for surrogate loss minimizers, expressed in terms of the same quantities previously discussed. Therefore, when dealing with finite samples, a tighter $H$-consistency bound could also result in a correspondingly tighter generalization bound. We will further elaborate on these aspects in the relevant section of our paper, underscoring the importance and the value of tighter $H$-consistency bounds. **2. Essentially following 1 above, while we pushed the theoretical guarantee forward a little bit, application-wise, what are the add-on values that this paper brings compared to the related work?** **Response:** As already mentioned, our significant contributions include a more general and convenient mathematical tool for proving $H$-consistency bounds, along with tighter bounds that enable a better comparison of surrogate loss functions. These improved bounds also lead to tighter finite-sample generalization bounds compared to previous work. Moreover, our novel results extend beyond previous completeness assumptions, offering guarantees applicable to bounded hypothesis sets commonly used with regularization. This enhancement provides meaningful learning guarantees. Further details will be expanded in the final version.
What Truly Matters in Trajectory Prediction for Autonomous Driving?
Accept (poster)
Summary: This paper presents a timely and well-executed study on motion prediction, focusing on the relationship between the dominant evaluation paradigm (static, offline metrics) and the true objective of research in this domain (safer planning). The study uncovers several important and surprising findings: (1) static offline metrics are not correlated with planning performance, (2) metrics evaluated on the frames observed by the planner (dynamic offline metrics) are significantly better correlated, (3) computational efficiency in prediction is increasingly important for sophisticated planners, and (4) simple baselines for prediction (constant velocity/constant acceleration) perform best when considering downstream planning performance. Strengths: The study setup is excellent, making well-motivated choices for the task, simulator, predictors, planners, metrics, etc. The writing is clear. The presentation of the key findings both visually and numerically also makes this paper an interesting and easy-to-understand read. Most importantly, it highlights a flaw in the widely used metrics that the community optimizes motion prediction models for, and shows that naive baselines are sufficient to outperform SoTA learned forecasting when using the right metrics. Weaknesses: Overall, the paper has very few weaknesses in my opinion. While the selected ML prediction models do not include the current leaderboard winners, I believe they are representative. The experiments could have been conducted on more realistic scenarios (e.g. the nuPlan simulator), but Section 5.1 shows that the simulator used is sufficiently aligned to real datasets for the purpose of this study. However, some technical details regarding the study are unclear (please see the “Questions” section #2, #3, #4 for specifics). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
A very interesting finding that I suggest should be highlighted more is that in all closed-loop evaluation settings, the best among the prediction methods used in this study is always either CV or CA - the naive baselines. 2. Do the CV and CA baselines forecast the positions along a straight line (heading direction), or do they take into account angular speed/acceleration as well? 3. In L316, could you please elaborate on what "complete/incomplete observations" refers to? 4. L328 recommends Dynamic FDE with Closest, when closed loop simulation is possible. However, why would this be preferable to simply ranking prediction methods by the “Driving Performance” of the planner? 5. Please cite https://arxiv.org/abs/1809.04843 while wrapping up Section 2.2 - this is a well-known study with a similar setup, aimed at planning instead of motion prediction. It would also be interesting to contrast your findings to theirs, given the change in task. 6. Could part of the discrepancy arise from the fact that prediction models penalize errors on all vehicle instances equally, while some are less relevant for planning than others? The result in Table 4 showing the improved correlation when considering only the nearest agents seems to indicate this. 1. Could you add a row with the “Closest” checkmark for the static, multimodal ADE? 2. If it is possible to include a task-aware offline metric such as PKL [15], CAPO [18] or some equivalent in the analysis, this would add significant value to the paper, and I would be happy to increase my rating. Minor: 1. L078: Lastname et al. instead of Firstname et al. Update: Thank you for taking the time to carefully answer my questions, I appreciate the effort. The rebuttal clarifies most of my questions. 
I understand the choice of Dynamic ADE/FDE for the experiments in this paper (as the rebuttal points out, "a prediction metric that employs the same calculation methodology but behaves distinctively in these two evaluation frameworks" helps provide evidence for their claims). I am still leaning towards a positive rating as I see the main contribution of this paper as highly relevant to the motion prediction community for driving. This is a large research area with hundreds of published papers, and little incentive to move away from static metrics (even though the lack of alignment between open-loop and closed-loop metrics for planning is reasonably well-known). This paper opens up a discussion on the inadequacy of the prevalent motion forecasting metrics, and could be a valuable first step towards more comprehensive evaluation of prediction models in the future. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are discussed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
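For readers less familiar with the displacement metrics debated in this review and the rebuttal below, ADE and FDE for a single trajectory are typically computed as follows. This is a minimal sketch of the standard definitions; the function name and array shapes are illustrative, not the authors' code:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error between a predicted and a
    ground-truth trajectory, each given as an array of shape (T, 2)."""
    dists = np.linalg.norm(pred - gt, axis=-1)    # per-timestep L2 error, shape (T,)
    return float(dists.mean()), float(dists[-1])  # (ADE, FDE)
```

In static (open-loop) evaluation, `gt` comes from a recorded dataset; the paper's Dynamic ADE/FDE instead evaluates on the frames actually encountered by the planner during closed-loop simulation.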
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive comments! We kindly ask the reviewer to let us know if further clarification or information is needed. >A very interesting finding that I suggest should be highlighted more is that in all closed-loop evaluation settings, the best among the prediction methods used in this study is always either CV or CA - the naive baselines. Yes, that’s an important observation! When the time budget is limited, the simple rule-based model gives very good driving performance, which is in line with previous research: a simple Constant Velocity Model can outperform even state-of-the-art neural models [1]. We will include this paper as a reference in our final version. >Do the CV and CA baselines forecast the positions along a straight line, or do they take into account angular speed/acceleration as well? That’s a good question. We solely utilize position for our predictions, extracting speed and acceleration information. As a result, angular speed or acceleration is not taken into consideration. This approach aligns with the baseline of the Argoverse Competition. >In L316, could you please elaborate on what "complete/incomplete observations" refers to? The term "complete" signifies the presence of all 20 observations of an agent, while "incomplete" denotes instances where certain observations are absent due to factors like occlusion, being outside the receptive field, etc. >L328 recommends Dynamic FDE with Closest, when closed loop simulation is possible. However, why would this be preferable to simply ranking prediction methods by the “Driving Performance” of the planner? This is a great point for us to clarify more! The correlation between Dynamic ADE/FDE and driving performance is strong for both planners, rendering them suitable for ranking prediction models. However, the core contribution of this paper lies in revealing the absence of correlation between static evaluation and driving performance. 
To illustrate this, we adopt a prediction metric that employs the same calculation methodology but behaves distinctively in these two evaluation frameworks. By showing this, we emphasize the significance of dynamic evaluation. >Please cite https://arxiv.org/abs/1809.04843 while wrapping up Section 2.2 - this is a well-known study with a similar setup, aimed at planning instead of motion prediction. It would also be interesting to contrast your findings to theirs, given the change in task. Thank you for bringing this paper to our attention. We concur with the significant factors highlighted within it, like the asymmetry of prediction error. We will include this paper in related works and contrast our findings with it in Q6. >Could part of the discrepancy arise from the fact that prediction models penalize errors on all vehicle instances equally, while some are less relevant for planning than others? The result in Table 4 showing the improved correlation when considering only the nearest agents seems to indicate this. Yes. Even when two prediction models present the same ADE, their driving performances are not identical. This discrepancy arises from the different influence of prediction errors on planning. For instance, errors associated with the closest exo-agent can have a higher impact on the ego-agent's plan. The conflict between the predicted state of the exo-agent and the ego-agent's plan also matters. **However, it is important to note that the asymmetry of prediction error is not universal, which means it may occur only in specific scenarios or for specific agents**. As a result, the influence of this factor is less significant than that of dynamic evaluation and computational efficiency, both of which affect prediction evaluation ubiquitously. To provide evidence, we have accounted for the most critical aspect of asymmetric prediction error by comparing "Closest Dynamic ADE" with "Dynamic ADE." 
Consequently, the influence of asymmetry on the correlation between ADE and Driving Performance is less than that of dynamic evaluation. (Influence for RVO Planner: 0.16 vs 0.61; Influence for DESPOT Planner: 0.20 vs 0.96) Hence, in this paper, our primary focus is on discussing the significance of dynamic evaluation and computational efficiency while giving less emphasis to this particular factor, even though it remains crucial in prediction evaluation. >Could you add a row with the "Closest" checkmark for the static, multimodal ADE? The static and multimodal ADEs are already "Closest". Within the Alignment dataset, we randomly select one of the three nearest agents as the "interested agent" for prediction, with predictions solely made for this "interested agent" in each scenario. This approach is implemented to mirror the configuration of an interested agent in the Argoverse dataset. >If it is possible to include a task-aware offline metric such as PKL [15], CAPO [18] or some equivalent in the analysis, this would add significant value to the paper, and I would be happy to increase my rating. These task-aware offline metrics offer valuable insights in open-loop evaluation where static datasets are available. However, applying these metrics poses challenges in closed-loop evaluation, where the ego planner functions in real-time, causing the ground truth to shift dynamically based on the planner's choices. This dynamic nature eliminates any fixed ground truth against which to benchmark the planner's performance. Given this absence of a consistent ground truth within the closed loop, conducting PKL or CAPO measurements becomes complicated. While we recognize the value of task-aware offline metrics, we will explore means of adapting such measures for closed-loop evaluation in the future. >L078: Lastname et al. instead of Firstname et al. Thank you for bringing this error to our attention. We will correct this to "McAllister et al." in our final version. [1] Schöller et al.
What the constant velocity model can teach us about pedestrian motion prediction. RAL 2020. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to carefully answer my questions, I appreciate the effort. The rebuttal clarifies most of my questions. I understand the choice of Dynamic ADE/FDE for the experiments in this paper (as the rebuttal points out, "a prediction metric that employs the same calculation methodology but behaves differently in these two evaluation frameworks" helps provide evidence for their claims). However, I am still missing an answer as to whether the Dynamic ADE/FDE has any benefit as a metric to a practitioner building prediction systems for driving scenes. The same question has been raised by reviewer Aygk, and I am curious to see the answer. I am still leaning towards a positive rating as I see the main contribution of this paper as highly relevant to the motion prediction community for driving. This is a large research area with hundreds of published papers, and little incentive to move away from static metrics (even though the lack of alignment between open-loop and closed-loop metrics for planning is reasonably well-known). This paper opens up a discussion on the inadequacy of the prevalent motion forecasting metrics, and could be a valuable first step towards more comprehensive evaluation of prediction models in the future. --- Reply to Comment 1.1.1: Comment: Thanks for your recognition of our work. In this paper, our main contributions are: 1. Establishing the existence and significance of the dynamics gap and computational efficiency. 2. Emphasizing the efficacy of the alternative evaluation protocol (simulation) when real-world tests are unaffordable. From this perspective, dynamic ADE/FDE serve solely as the tools we employ to substantiate these two contributions, rather than constituting our primary contribution.
The difference in the correlation coefficient with driving performance between dynamic ADE/FDE and static ADE/FDE is notable, demonstrating the significance of the dynamics gap. Similarly, the importance of computational efficiency is emphasized by the remaining gap between dynamic ADE and driving performance. However, the benefit of dynamic ADE to a practitioner building autonomous driving systems is also significant, as the primary step in closing the dynamics gap is to perceive it. Dynamic ADE versus static ADE can serve as a metric for how much of the dynamics gap has been closed. We present a potential application here: motion model evaluation. Motion Model Evaluation: While each simulator employs its unique motion model for exo-agents, the assessment of motion model quality remains challenging. Through the utilization of dynamic ADE, three values can be computed using the same predictor and planner: static ADE on the simulation dataset, dynamic ADE within the simulation environment, and dynamic ADE during real-world testing. These metrics differ solely in the motion model of exo-agents, enabling the evaluation of the simulation's fidelity to real-world agents' motion, e.g. $$ \mathrm{fidelity} =\frac{ADE_{real} - ADE_{sim}}{ADE_{real} - ADE_{static}} $$ We would like to thank the reviewer for thoughtful and positive comments. We would be more than glad to discuss any remaining concerns you might have.
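To make the fidelity ratio above concrete, here is a minimal Python sketch; the `ade` helper, shapes, and all variable names are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance between a
    predicted and a realized trajectory, each of shape (T, 2)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fidelity(ade_real, ade_sim, ade_static):
    """The rebuttal's fidelity ratio: it compares the sim-vs-real
    dynamic ADE gap to the static-vs-real dynamic ADE gap."""
    return (ade_real - ade_sim) / (ade_real - ade_static)

# e.g. real-world dynamic ADE 2.0, simulated dynamic ADE 1.4, and
# static ADE 1.0 give a fidelity of (2.0 - 1.4) / (2.0 - 1.0) = 0.6
```

The three ADE inputs would come from the same predictor and planner run, respectively, on the static simulation dataset, in the closed-loop simulator, and in real-world testing, as described above.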
Summary: The authors examine the discrepancy between trajectory prediction accuracy and driving performance in the task of autonomous driving when a fixed dataset is used and in the presence of multiple surrounding traffic participants (such as other vehicles). Specifically, they assert that this discrepancy arises from: - A dynamics gap between the static dataset that the trajectory predictor is trained on and its subsequent usage for planning in real-world driving scenarios. This is because the ego-vehicle's planner may act differently to the static dataset, which results in novel reactions from the other entities in the environment (vehicles, pedestrians, etc.) and thus out-of-distribution scenarios occur. - Computational efficiency issues caused by the planner and trajectory predictor being too slow to accommodate real-time operation. Although the work does not propose a novel method, the authors conduct a study of existing methods to illustrate the importance of a dynamic approach where the ego-vehicle collects data with the specific predictor (instead of relying on a single fixed static dataset) and efficient trajectory prediction. A wide variety of trajectory prediction models are used (Table 1) to ensure high test coverage, along with two planners (RVO and DESPOT) (section 4.2). For trajectory prediction evaluation they use the Average/Final Distance Error (ADE/FDE) in Table 2. To evaluate driving performance, a combination of collision avoidance, speed to goal and comfort (minimize jerk) is considered (section 4.4). Experiments first attempt to illustrate how their chosen simulator (SUMMIT, extended from Carla) is suitable to evaluate real-world performance based on simulation (section 5.1, Figure 2), as trajectory prediction simulation results (SUMMIT) and real-world dataset results (Argoverse dataset) are positively correlated.
They then illustrate the performance gap that occurs when evaluating against a static dataset versus one collected interactively (dynamically) from simulation where the ego vehicle operates with the predictor (section 5.2, Figure 3 versus Figure 4). Most notably, they find that efficiency concerns are most prevalent under a high planner-simulation tick rate. In this case the predictor efficiency has the greatest effect on driving performance instead of the dynamics gap since it cannot keep up with the high tick rate (Table 3). Table 4 further shows that the discrepancy between driving performance and trajectory prediction accuracy (ADE) is reduced when only nearby agents with full observations are considered in addition to dynamic data collection. Strengths: - Experiments are well done and compare a variety of approaches (ten trajectory prediction models, two planners). - Claims are backed clearly by data. The results show the value of using an interactive simulation environment for data gathering and favoring efficient predictors when doing real-time planning for autonomous driving. - In general, well written and clear. Weaknesses: - The work does not propose a novel approach but is instead a survey of existing methods. - It could also be argued that the survey results are somewhat obvious (which compounds with the first criticism): 1) relying solely on a static dataset for model-based training may result in a distributional shift during real-world evaluation once planning is done, causing failure, and 2) efficient predictors are important for real-time planning. Minor Criticism: - The combination of performance metrics $P_{\mathrm{Safety}}, P_{\mathrm{Efficiency}}, P_{\mathrm{Comfort}}$ could be clarified in the text (section 4.4). Sensibly, $P_{\mathrm{Safety}}$ and $P_{\mathrm{Comfort}}$ are to be minimized while $P_{\mathrm{Efficiency}}$ is maximized. But all metrics are labeled "performance" so I was originally expecting them to be handled uniformly.
The text does mention that "we normalized the direction of these three metrics" but this is still somewhat ambiguous. Things are fully clarified once looking at the supplemental material, where the signs on $P_{\mathrm{Safety}}$ and $P_{\mathrm{Comfort}}$ are reversed during normalization. However, I would find it helpful if this was more clearly mentioned in the text. - I feel that the first experimental section (section 5.1) is perhaps not relevant to the main purpose of the paper besides illustrating that trajectory prediction accuracy in simulation and the real world are positively correlated across different models. I imagine most semi-photorealistic simulation environments would possess this property, especially given its extension from the mature Carla simulator. They also do not explicitly show that driving performance is ultimately related (only trajectory prediction performance). There does appear to be a notable ADE/FDE error increase when moving to the real-world dataset. Thus, once applied to planning, I could see a case where a model might perform "well" in simulation but "poorly" in the real world (despite performing better than its peers). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Should the minADE and minFDE equations in Table 2 be a function of K? I assume that each predicted $\hat{x}_i$ is a sample taken K times? - In Table 3, is the "Inference Time" for one network inference loop and not the overall time for planning? If it is the overall time for planning I would be confused as to why performance changes for some models below the 30 Hz refresh rate when going from 30 to 3 to 1 Hz. - In section 5.1, lines 237-238 it says that the "nearest three agents are randomly selected to be the interested agent for prediction". Is there a reason why only 3 agents were considered? Was this the best trade-off between computation time and performance? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors did not explicitly list any limitations of their method. However, their work is primarily a study of existing methods and so a limitations section is perhaps less relevant. Nonetheless, one possible limitation is the focus solely on simulation for illustrating static versus dynamic driving performance discrepancy. Please refer to the weaknesses section for more possible limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments! We kindly ask the reviewer to let us know if further clarification or information is needed. >The work does not propose a novel approach but is instead a survey of existing methods. We do not provide a new approach, but instead a prediction evaluation protocol, which is also important in the field of autonomous driving. Proper evaluation is an important yet often overlooked aspect of machine learning research. Prior works focusing on evaluation from NeurIPS [1,2] play an important role in their respective fields. >It could also be argued that the survey results are somewhat obvious. This is a great point for us to elaborate on! This paper sheds light on **the equally essential need for evaluating seemingly static modules, such as prediction, through closed-loop evaluation**, which has been overlooked in recent studies [3,4] and the SOTA of the Argoverse Competition [5]. Another important idea is **the trade-off between computational efficiency and accuracy even when the predictor is fast enough for the planner's execution**. In the context of autonomous driving, a consensus exists that the perception system must meet a specified latency threshold for proper planning (e.g., 100ms [6,7]). Once this threshold is satisfied, the focus shifts to ensuring accuracy. However, our experiments demonstrate that the trade-off persists even below the threshold. When the predictor runs much faster than the threshold (2ms), the driving performance remains dominated by the fastest prediction method (CV). This trade-off holds importance, as various planners exhibit their own trade-offs in addition to their fundamental requirements. >The combination of performance metrics could be clarified in the main text. Sure! We will put the explanations in our main text to make it clearer; details are in the general response due to the word limit. >The first experimental section is not relevant to the main purpose of the paper.
As pointed out, our first experiment establishes the positive correlation of trajectory prediction accuracy across various environments. This serves as the foundation of our experiments, but does not itself support the main idea. **To avoid misunderstanding of our core concept, we will include the Sim-Real Alignment experiment in the Appendix.** >A model might perform "well" in simulation but "poorly" in the real world. The gap you mentioned arises from the disparities between simulation and the real world, which we can refer to as the interaction gap. This mismatch arises due to unrealistic exo-agent behavior, as the simulator controls their actions. Unfortunately, this can only be addressed through real-world tests, which are often unaffordable. Our paper focuses on addressing the dynamics gap that exists between static evaluation and dynamic evaluation. In static evaluation, both the interaction gap and the dynamics gap remain unresolved. In dynamic evaluation, only the interaction gap remains. This is our main contribution, and how to solve the interaction gap lies beyond the scope of this paper. >Should the minADE and minFDE equations in Table 2 be a function of K? I assume that each predicted $\hat{x}_i$ is a sample taken K times? Thank you for bringing this to our attention. We apologize for the error in formulating minADE/FDE. We base our experimental setting on the common practice of the Argoverse/Waymo datasets, setting K=6. We will explicitly state this in the final version. >Is the "Inference Time" for one network inference loop and not the overall time for planning? Yes. The inference time is for one network inference loop and not the overall time for planning. Note that the tick rate refers to the simulator's clock, not to the inference time. In a simulator with Tick Rate=30Hz, planners are constrained to act within 0.3s (real time). Conversely, Tick Rate=3Hz allows 3s for planning.
Adjusting the tick rate grants planners more time to act, enabling them to explore more potential future states, thereby influencing driving performance. >Is there a reason why only 3 agents were considered? Was this the best trade-off between computation time and performance? This setting is designed to replicate the data generation process of Argoverse. In the Argoverse dataset, each scenario selects one agent as the interested agent, which must possess complete observations and future states. As the nearest agents are more likely to have full observations, we choose them as interested agents in our data generation. To preserve multi-modality, we randomly choose one of the three nearest agents. However, during our experiments in Sections 5.2~5.3, predictions are made for all agents. >The authors did not explicitly list any limitations. One possible limitation is the focus solely on illustrating static versus dynamic driving performance discrepancy. **Though we have discussed our limitations on planner selection and simulation scope in Section 6, we fully agree it is crucial to emphasize the limitation you raised**. One common and critical factor is the asymmetry of prediction error. Even when two prediction models exhibit the same ADE, their driving performances will not be identical. This discrepancy arises from the differing influence of prediction errors on planning. We will add this limitation in our final version. [1] Agarwal et al. Deep reinforcement learning at the edge of the statistical precipice. NeurIPS 2021. [2] Pillutla et al. Mauve: Measuring the gap between neural text and human text using divergence frontiers. NeurIPS 2021. [3] Hu et al. Planning-oriented autonomous driving. CVPR 2023. [4] Liang et al. Learning lane graph representations for motion forecasting. ECCV 2020. [5] Zhou et al. Query-Centric Trajectory Prediction. CVPR 2023. [6] Lin et al. The architectural implications of autonomous driving: Constraints and acceleration. ASPLOS 2018.
[7] Yamaguchi et al. In-vehicle distributed time-critical data stream management system for advanced driver assistance. JIP 2017. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my comments. I appreciate that this work may give a more comprehensive discussion of relevant metrics to indicate actual driving performance after training (static versus dynamic ADE, computational performance), but I still currently feel that the level of contribution and novelty is not quite sufficient for NeurIPS. I imagine static datasets will still be used in training for quite some time (ex: for real-world applications) and so I would have found it more useful if ways to address this misalignment were proposed. The proposed metrics do more strongly indicate actual driving performance, but they also require direct evaluation in an interactive simulation environment and pairing with a planner, and so I am not sure of their usefulness as a final evaluation metric instead of simply using driving performance itself, although they can act as a useful tool to indicate why driving performance may be poor. But as a tool for indicating why driving performance may be poor, these again seem like obvious metrics where dynamic ADE is essentially the same prediction error computed during evaluation (a sort of test set error) and computation time is clearly relevant for real-time planners. I agree that closed-loop metrics are often overlooked when evaluating the motion prediction model but I am unsure if this is due to the fact that they are unknown or simply to do with the hurdles in integrating them into the evaluation pipeline: they require an interactive simulation environment (difficult for real-world data) and pairing with a planner, from which results may vary from planner to planner. Or is it possible that since the prediction models are now being evaluated on different datasets as an outcome of their interaction with the environment, the comparison is no longer fair (i.e.
one dataset is harder than the other)? Thus I am still somewhat unsure of the level of contribution of this work. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the discussion! Regarding concerns about the novelty of our paper, it is essential to emphasize that the dynamics gap is not a sort of test set error. **The dynamics gap is the inherent limitation of open-loop evaluation, in which exo-agents do not react to changes in the predictor and the corresponding ego-motion**. Conversely, in the real world, different predictors result in varied behaviors of the ego-agent, which, in turn, influence the future behaviors of other road users, leading to different dynamics within the environment. Without addressing this gap, all prediction methods remain limited in real-world utility. While we agree that datasets will still be used for training purposes in the future, a shift towards dynamic datasets will happen, rather than continued reliance on static datasets. Dynamic datasets (or offline simulators) like MixSim [8] and nuPlan [9] are promising candidates to replace the role of current open-loop datasets. Researchers have the opportunity to train and evaluate their models using interactive agents, instead of static records. However, awareness of the urgency to evaluate predictors within the closed loop remains limited. These dynamic datasets are all designed for evaluating planners. Besides, there is no comprehensive study of the limitations of current open-loop evaluation and the key factors leading to them. Although dynamic datasets have yet to be employed for prediction evaluation, revealing the dynamics gap can still make a substantial contribution to the autonomous driving field. We present a potential application here: motion model evaluation. Motion Model Evaluation: While each simulator employs its unique motion model for exo-agents, the assessment of motion model quality remains challenging.
Through the utilization of dynamic ADE, three values can be computed using the same predictor and planner: static ADE on the simulation dataset, dynamic ADE within the simulation environment, and dynamic ADE during real-world testing. These metrics differ solely in the motion model of exo-agents, enabling the evaluation of the simulation's fidelity to real-world agents' motion, e.g. $$ \mathrm{fidelity} =\frac{ADE_{real} - ADE_{sim}}{ADE_{real} - ADE_{static}} $$ Again, we would like to thank the reviewer for thoughtful and comprehensive comments. We would be more than glad to discuss any remaining concerns you might have. [8] Suo et al. MixSim: A Hierarchical Framework for Mixed Reality Traffic Simulation. CVPR 2023. [9] nuPlan Planning Challenge: Closed-loop reactive agents, 2023.
Summary: The paper provides an extensive study on the discrepancy between model prediction accuracy and driving performance. It explores two major factors: 1. The dynamics difference; and 2. The computational efficiency of predictors. Various methods are tested in two settings: 1. Fixed Number of Predictions; and 2. Fixed Planning Time. The paper evaluates the model performance on both open-loop prediction and closed-loop driving. Strengths: The experiments are well-designed and extensive, which provides strong evidence for the statements in the paper. Weaknesses: While the two factors are well supported by the experiment results given, it is insufficient to say whether other factors are less important, such as the data quality of the open-loop eval, the unknowability (future events influencing the ground-truth future trajectory) issue for open-loop prediction, as well as multimodality in the driving scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be great if the author could add some discussion on possible other factors that cause the discrepancy and why the two factors mentioned in the paper are dominant. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful analysis and constructive feedback on our paper. Your insights have been valuable, and we are grateful for the time you spent evaluating our work. Below, we address your main concerns: > While the two factors are well supported by the experiment results given, it is insufficient to say whether other factors are less important, such as the data quality of the open-loop eval, the unknowability (future events influencing the ground-truth future trajectory) issue for open-loop prediction, as well as multimodality in the driving scenarios. Thank you for raising this insightful point. We acknowledge the importance of considering factors beyond the two emphasized in our study. Your comment has prompted us to further investigate the correlation between temporal consistency [1] and driving performance, focusing on how sudden changes in prediction might impact driving outcomes. Additionally, we have examined the relationship between prediction variance and driving performance to deepen our understanding of the influence of multimodality. The detailed findings from these analyses are provided in the PDF file of the global response, which will be available on August 9th. We believe that these additional investigations will contribute to a more comprehensive understanding of the complex interplay between these various factors. > It would be great if the author could add some discussion on possible other factors that cause the discrepancy and why the two factors mentioned in the paper are dominant. We appreciate your suggestion to delve into other possible factors contributing to the observed discrepancies. One common and critical factor is the asymmetry of prediction error [2]. Even when two prediction models exhibit the same ADE, their driving performances will not be identical. This discrepancy arises from the differing influence of prediction errors on planning.
For instance, errors associated with the closest exo-agent can have a higher impact on the ego-agent's plan. Whether the predicted state of an exo-agent conflicts with the ego-agent's plan also matters. **However, it is important to note that the asymmetry of prediction error is not universal, which means that it occurs only in specific scenarios or for specific agents**. As a result, the influence of this factor is less significant than our proposed dynamic evaluation and computational efficiency, both of which impact prediction evaluation in every scenario and at each timestep. To provide quantitative evidence, we have effectively accounted for the most critical aspect of asymmetric prediction error by comparing "Closest Dynamic ADE" with "Dynamic ADE." Consequently, the influence of asymmetry on the correlation between ADE and Driving Performance is considerably less than the impact of dynamic evaluation. (Influence for RVO Planner: 0.16 vs 0.61; Influence for DESPOT Planner: 0.20 vs 0.96) [1] Ye et al. DCMS: Motion Forecasting with Dual Consistency and Multi-Pseudo-Target Supervision. arXiv 2022. [2] Ivanovic and Pavone. Injecting Planning-Awareness into Prediction and Detection Evaluation. IEEE IV 2022.
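As a hedged illustration of the comparison described above, the sketch below computes a scene-level ADE over all exo-agents and a "closest" variant restricted to the agent nearest the ego vehicle, in the spirit of "Closest Dynamic ADE"; the array shapes, names, and nearest-agent rule are our own assumptions, not the paper's implementation.

```python
import numpy as np

def scene_ade(preds, gts, ego_pos, closest_only=False):
    """Per-scene ADE. preds/gts: (N, T, 2) predicted and realized
    exo-agent positions; ego_pos: (2,) ego position at the start.
    With closest_only=True, only the agent nearest the ego counts,
    mimicking a 'Closest'-style weighting of prediction errors."""
    per_agent = np.linalg.norm(preds - gts, axis=-1).mean(axis=-1)  # (N,)
    if closest_only:
        nearest = np.argmin(np.linalg.norm(gts[:, 0] - ego_pos, axis=-1))
        return float(per_agent[nearest])
    return float(per_agent.mean())
```

Correlating each variant with driving performance across predictors, as the rebuttal does, would then isolate how much of the metric-performance mismatch is explained by error asymmetry versus by the dynamics gap.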
Summary: This work attempts to look for evaluation metrics for trajectory prediction beyond the widely used ADE/FDE metrics. In particular, the authors focus on two overlooked aspects in prediction evaluation, which are: 1) the dynamics gap between the dataset and closed-loop driving scenarios; 2) the trade-off between computational efficiency and performance. The authors argue that we should turn to closed-loop prediction evaluation to address these two problems. Strengths: This work focuses on a very important and open question for trajectory prediction, which is looking for evaluation metrics for trajectory prediction beyond the widely used ADE/FDE metrics. It is good that the authors emphasize and validate the limitation of open-loop prediction accuracy and the importance of closed-loop evaluation for trajectory prediction. Weaknesses: 1. It is good to see that the authors showed the limitation of open-loop evaluation and the importance of closed-loop evaluation in their experiments. However, I do not think the authors provided any new perspective or insightful solution to the problem of closed-loop evaluation. The value of closed-loop evaluation has been widely acknowledged in the autonomous driving community. It is one of the main motivations behind the emerging research on data-driven traffic simulation. The bottleneck is not the lack of prediction evaluation metrics under the closed-loop evaluation scheme, which is targeted by one of the two claimed contributions of the paper (i.e., dynamic ADE vs. static ADE), but the closed-loop simulation itself. How to synthesize a reliable simulator with realistic reactive agent behavior is an open question. We do not even have a commonly acknowledged set of metrics for evaluating simulation at this stage.
Unfortunately, the authors did not discuss the impact of the simulation environment on the closed-loop evaluation result in this paper (e.g., whether the simulated driving performance and the simulated dynamic ADE metric are correlated with real-world driving performance). While the authors adopted different prediction models and planners in their evaluation, they only used a single simulator (i.e., SUMMIT). Moreover, the authors did not include a literature review on traffic simulation and evaluation metrics for traffic simulation. 2. The inherent logic behind the experiments is fundamentally flawed. In Sec. 5.2, the authors showed that static ADE is not correlated with dynamic ADE and driving performance. However, the authors used the correlation of static ADEs on the Argoverse and the Alignment dataset collected from SUMMIT as evidence to validate the realism of SUMMIT. Just like the authors' observation in Sec. 5.2, the correlation in static ADE does not necessarily indicate that the SUMMIT simulation is reliable and realistic under closed-loop simulation and evaluation. To justify the realism of SUMMIT, the authors should adopt those evaluation metrics from the latest literature and compare SUMMIT against those state-of-the-art simulation models. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I do not think the authors appropriately answer the question raised in the title (i.e., what truly matters in trajectory prediction for autonomous driving?). The paper focuses on two perspectives on this question, which are very limited and far from compiling a comprehensive and insightful answer. The experimental setting is also insufficient as an attempt to answer such a bold question (e.g., lacking analysis on the impact of simulation on closed-loop evaluation). 2. It would be beneficial to have some qualitative analysis and visualization beyond statistics. 
For instance, visualizing some representative examples could be helpful in understanding why static ADE and dynamic ADE are not correlated. 3. The citation for SUMMIT is missing in the main text. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors discussed some limitations and future directions in Sec. 6. However, I think the limitations are not sufficiently addressed. For instance, I would consider the limited evaluation on a single simulator and the missing discussion on the impact of simulation realism as crucial drawbacks of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments! We kindly ask the reviewer to let us know if further clarification is needed. >I did not think the authors provide any new perspective or insightful solution. The value of closed-loop evaluation has been widely acknowledged in autonomous driving. We disagree with the question’s premise. While the value of closed-loop evaluation has been widely recognized, **its emphasis has mainly centered on planning and control [7,8,9]**. It would be great if the reviewer could provide a pointer for prediction. Our work aims to shed light on the equally essential need for evaluating seemingly static modules, such as prediction, through closed-loop evaluation, which has been largely overlooked in most studies [1,2] and the SOTA of competitions [3]. >The bottleneck is not the lack of prediction evaluation metrics under the closed-loop evaluation, but synthesizing a reliable simulator with realistic reactive agent behavior. Not quite. As we claimed in A1, the primary motivation behind this work is to draw attention to the importance of evaluating seemingly static prediction models with closed-loop simulation. We challenge the current evaluation system and propose a better evaluation protocol. The dynamic ADE is presented as an additional contribution. Synthesizing a realistic simulator falls beyond the scope of this research and warrants a standalone study. >Unfortunately, the authors do not discuss the impact of the simulation environment… They only use a single simulator. **The impact of the simulation environment is not related to our core contribution**. To cope with realistic scenarios, the only way is to drive in the real world, which is often unaffordable. Thus, simulators are usually taken as a proxy to carry out research.
We would like to reiterate that our contribution is to point out that static prediction evaluation is not correlated with driving performance, which has been stated in some papers [4,5] but never adequately discussed and validated. We conduct extensive experiments with representative predictors and planners to support this claim. In addition, SUMMIT is built upon Carla, the widely-used simulator in recent competitions [6] and research [10,11]. >Moreover, the authors did not include a literature review on traffic simulation and evaluation metrics for it. **Although traffic simulation is not relevant to our core contribution**, we agree that incorporating a comparison of simulators and explaining our choice of SUMMIT will strengthen our paper. We provide a literature review in the general response. >The inherent logic behind the experiments is fundamentally flawed… To justify the realism of SUMMIT, the authors should adopt those evaluation metrics against those SOTA simulation models. **The objective of this experiment is not to validate the realism of SUMMIT, but to ensure the consistency of prediction performance across different environments**. Even on different real-world datasets, the performance of the same predictor will differ. Thus, what we care about is the predictors’ ranking of accuracy. We conduct the alignment experiment to demonstrate that the ranking of prediction metrics remains stable across the real-world Argoverse dataset and the simulated Alignment dataset. >I do not think the authors appropriately answer the question raised in the title (i.e., what truly matters in trajectory prediction for autonomous driving?). We partially agree that our paper does not fully answer what truly matters in motion prediction. Our work focuses on proving that static evaluation does not correlate with driving performance, and advocates the superiority of dynamic evaluation. **What truly matters in motion prediction is a bold question to answer**.
Expecting one paper to completely resolve this issue is unrealistic. Nevertheless, our work takes a stride forward, and our experiments clearly expose the inadequacy of current prediction evaluation, leaving this question open for further exploration. With that being said, we concur that a more precise title would enhance the understanding of our main idea, and we will modify our title after further discussions. >It would be beneficial to have some qualitative analysis and visualization beyond statistics. We add a visualization in the attached PDF showing why Static ADE and Dynamic ADE are not correlated. In static evaluation, the ground truth is determined based on dataset records and remains unaffected by predictors. However, in dynamic evaluation, different predictors result in varied behaviors of the ego-agent, which, in turn, influence the future behaviors of other road users, leading to different dynamics within the environment. This directly affects the ground truth of prediction as other agents behave differently, thus causing the disparity between Static ADE and Dynamic ADE. >The citation for SUMMIT is missing. Thank you for pointing it out. We will add the citation [12] in the final version. [1] Hu et al. Planning-oriented autonomous driving. CVPR 2023 [2] Liang et al. Learning lane graph representations for motion forecasting. ECCV 2020 [3] Zhou et al. Query-Centric Trajectory Prediction. CVPR 2023 [4] Ivanovic et al. Injecting planning-awareness into prediction and detection evaluation. IV 2022 [5] Ivanovic et al. Rethinking trajectory forecasting evaluation. arXiv 2021 [6] NeurIPS CARLA Autonomous driving challenge, 2022 [7] nuPlan Planning Challenge: Closed-loop reactive agents, 2023 [8] Phan-Minh et al. Driving in Real Life with Inverse Reinforcement Learning. arXiv 2023 [9] Cheng et al. MPNP: Multi-Policy Neural Planner for Urban Driving. IROS 2022 [10] Danesh et al.
LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty. CoRL 2023 [11] Ulfsjöö et al. On integrating POMDP and scenario MPC for planning under uncertainty–with applications to highway driving. IV 2022 [12] Cai et al. Summit: A simulator for urban driving in massive mixed traffic. ICRA 2020 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response. Please find my follow-up comments to the response below: 1. Novelty: I agree with the authors that the closed-loop evaluation of trajectory prediction is a perspective that has not been directly studied in the literature. However, I still consider the work to have limited novelty, given that closed-loop evaluation has been studied for planning and control [7,8,9] and the limitation of open-loop evaluation has been stated in prior works [4, 5]. 2. Simulation matters for closed-loop evaluation: I disagree with the authors' statement that "the impact of the simulation environment is not related to our core contribution". The main difference between dynamic ADE and static ADE lies in evaluating the prediction model over the state distribution under closed-loop driving. If the reactive agents' behavior in the simulation is not realistic in those OOD states, how could we trust the computed dynamic ADE metric? Thus, discussing the effect of simulation on the closed-loop evaluation of prediction models is a crucial aspect of the subject the authors aim to study. I understand the current set of results still has value, but I don't think it is sufficient for a NeurIPS paper. 3. Realism of SUMMIT: In Lines 244-245 of the submitted manuscript, the authors claim that "The consistent results suggest that the Argoverse and Alignment datasets share similar underlying features. Therefore, the SUMMIT simulator can be employed to evaluate real-world performances." I think the second sentence does imply that the SUMMIT simulator is realistic.
In fact, related to my last comment, I think evaluating the realism of SUMMIT is necessary in order to validate that the dynamic ADE computed in SUMMIT is reliable. If the authors agree that Fig. 2 cannot validate the realism of SUMMIT, then they should include additional results to justify the realism of SUMMIT. Overall, I still lean towards rejecting the paper in its current form. --- Reply to Comment 1.1.1: Comment: >Novelty: I agree ...However, I still consider the work to have limited novelty, given that closed-loop evaluation has been studied for planning and control [7,8,9] and the limitation of open-loop evaluation has been stated in prior works [4, 5]. To address the concerns about novelty, we respond in three aspects. 1. While closed-loop evaluation has been explored extensively for planning and control [7, 8, 9], assuming its equivalent influence on static modules like prediction, without comprehensive experimentation, is unjustified. The inadequacy of such studies has been acknowledged by reviewers as well. In this context, we present a comprehensive analysis as the first to undertake this exploration, marking its novelty. 2. We stated that the limitations of the present prediction evaluation have been highlighted in prior works [4, 5]. However, as these papers state, they do not aim to address the inherent limitation of open-loop evaluation, but focus on the asymmetry of prediction error. Despite connecting prediction evaluation with driving performance, they still employ real-world datasets as the standard for evaluating predictors. 3. Conversely, we discover the dynamics gap between current prediction evaluation systems and real-world applications, which has been overlooked in previous works. We substantiate this gap with experiments, and demonstrate its strong impact on the correlation between prediction performance and driving performance.
As real-world tests are often unaffordable, an alternative approach, closed-loop evaluation, is suggested to ease the dynamics gap. The main contribution of our paper is to reveal such a dynamics gap and its significance, not the closed-loop evaluation itself. >Simulation matters for closed-loop evaluation: I disagree with the author's statement that "the impact of the simulation environment is not related to our core contribution"... Thus, discussing the effect of simulation on closed-loop evaluation of prediction models is a crucial aspect of the subject the authors aim to study. Our results comprise two layers of analysis. The first layer establishes the existence and significance of the dynamics gap concerning the correlation between prediction performance and driving performance. The relationship between simulation datasets and simulation scenarios mirrors that of real-world datasets and real-world scenarios. Our investigation substantiates that the dynamics gap between datasets and real-time systems stands as a foundational factor contributing to the observed weak correlation between prediction evaluation metrics and real-time performance. Crucially, the dynamics gap can only be effectively mitigated when the predictor is evaluated and applied within the same real-time environment, leading to a substantial correlation between prediction evaluation metrics and driving performance. Our core contribution lies in the analysis of the dynamics gap. This contribution remains unaffected by the realism of the simulator and necessitates real-world tests for the prediction evaluation. The second layer underscores the efficacy of the alternative evaluation protocol (simulation) when real-world tests are unaffordable. To substantiate this, we conduct the Sim-Real Alignment experiment, which serves to establish the alignment between prediction performance on the simulation dataset and real-world datasets.
Likewise, the alignment of driving performance is verified via the simulator itself. Thus, we took care to identify the best available simulator, SUMMIT, which is built upon Carla, the most widely-used simulator in recent competitions [6] and research [10,11]. The impact of the simulation environment partially affects the realism of the simulator and has only a marginal influence on our contributions. Therefore, we opt not to engage in an in-depth discussion of it within the main text. >Realism of SUMMIT: In Lines 244-245 of the submitted manuscript, the authors claim that "The consistent results suggest that the Argoverse and Alignment datasets share similar underlying features. Therefore, the SUMMIT simulator can be employed to evaluate real-world performances.".... If the authors agree that Fig. 2 cannot validate the realism of SUMMIT, then they should include additional results to justify the realism of SUMMIT. As aforementioned, our core contribution revolves around the concept of the dynamics gap. Closed-loop simulation serves to address this gap when real-world tests are unaffordable. Even if the simulator is not totally realistic, our core conclusion still stands. The realism of SUMMIT only supports part of our secondary contribution. Besides, although the requirement on simulator realism can be relaxed, we still sought out the best available simulator, SUMMIT, which is built upon Carla, the most widely-used simulator in recent competitions [6] and research [10,11].
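The Sim-Real alignment argument above hinges on rank consistency rather than absolute errors: the check passes if the ordering of predictors by ADE is preserved between the real-world dataset (Argoverse) and the simulated one (Alignment). A minimal sketch of such a rank-correlation check (all ADE values below are hypothetical, not from the paper):

```python
# Sim-Real alignment check: is the *ranking* of predictors by ADE preserved
# across the two datasets? Absolute error values are allowed to differ.

def ranks(values):
    """Rank of each value (1.0 = smallest; ties broken by position)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        result[i] = float(rank)
    return result

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical static ADE of the same five predictors on the two datasets:
ade_argoverse = [3.1, 2.4, 1.9, 1.6, 2.8]
ade_alignment = [1.4, 1.0, 0.8, 0.6, 1.2]

print(f"rank correlation = {spearman(ade_argoverse, ade_alignment):.2f}")
# -> 1.00 here: the predictor rankings are fully aligned
```

Note that the absolute ADE ranges differ (as the reviewer observed for Argoverse vs. Alignment), yet the rank correlation can still be perfect, which is exactly the property the alignment experiment tests.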
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for your constructive feedback and recognition of this work. The reviewers found: * The problem is “original” (R-Aygk) and “interesting” (R-1Qcn). * The experiments are “well-done” (R-tS95, R-RCyQ, R-1Qcn) and “extensive” (R-tS95, R-RCyQ). * The work provides “an excellent study setup” (R-1Qcn) and “highlights the flaws in the widely used metrics of trajectory prediction evaluation” (R-1Qcn, R-Aygk). We would like to re-emphasize this work's technical contributions: * We shed light on the equally essential need for evaluating seemingly static modules, such as prediction, through closed-loop evaluation. * We reveal the trade-off between computational efficiency and accuracy even when the predictor is fast enough for the planner's execution. * We propose Dynamic ADE as a new evaluation metric for closed-loop evaluation. We have made the following claims clearer according to all the reviewers’ insightful comments. * **The purpose of Sim-Real Alignment (Section 5.1)**. Even on different real-world datasets, the performance of predictors will differ. (SOTA of nuScenes: minADE5 = 1.092m in 6s, SOTA of Waymo: minADE6 = 0.5345m in 8s) Thus, what we care about is the predictors’ ranking of accuracy. As a result, we conduct the alignment experiment to demonstrate that the ranking of prediction performance remains stable across the real-world Argoverse dataset and the simulated Alignment dataset. * **The detailed definition of Dynamic ADE and its difference from Static ADE**. Like Static ADE, Dynamic ADE is calculated as the average L2 distance between the forecasted trajectory and the ground truth. However, in closed-loop evaluation, different predictors result in varied behaviors of the ego-agent, which, in turn, influence the future behaviors of other road users, leading to different dynamics within the environment.
This directly affects the ground truth of prediction as other agents behave differently, as explained in the attached PDF. * **The trade-off between computational efficiency and accuracy**. A consensus exists that the perception system must meet a specified threshold (e.g. 100ms) for proper planning. Once it is satisfied, the focus shifts to ensuring accuracy. However, our experiments demonstrate a notable trade-off even when the predictor runs considerably faster than the threshold (2ms). Only in experiments with an ample planning time budget (tick rate = 1 Hz) does the method with better accuracy dominate. We also include more analysis to support our ideas: * Other possible factors that may cause the discrepancy, and why the two factors mentioned in the paper are dominant. * Exploring the correlation between temporal consistency and driving performance to investigate whether sudden changes in prediction (which mainly occur when encountering unforeseen situations) impact driving performance. * Examining the correlation between prediction variance and driving performance to gain a more profound understanding of the influence of multimodality. We add the related works to make it clearer why we choose SUMMIT: * Several rule-based simulators are designed for autonomous driving systems. CARLA [1] offers a range of sensors and agent types but relies on predefined maps and exhibits relatively low density with simple rule-based behaviors. Another simulator [2] supports real-world maps, yet its rule-based planner complicates prediction performance evaluation. * Recently, learning-based simulators [3,4] have emerged, generating diverse and realistic behaviors from actual data using open-source datasets. However, these simulators are restricted to predetermined maps. Generating realistic behaviors for new maps may prove challenging. * SUMMIT [5] stands out by offering rich-context urban maps, realistic visuals, and intricate traffic behavior.
Agent behaviors are generated using GAMMA [6], an advanced multi-agent motion prediction model. Leveraging SUMMIT's advanced capabilities, we can provide a comprehensive assessment with various planners and predictors. We also add the equation for the combination of performance metrics to the main text, following the valuable comment of reviewer RCyQ:

$$
\bar{P}_{\textrm{metrics}}=\begin{cases}
\frac{P_{\textrm{metrics}} - P^{\textrm{min}}_{\textrm{metrics}}}{P^{\textrm{max}}_{\textrm{metrics}} - P^{\textrm{min}}_{\textrm{metrics}}}, & \textrm{metrics} = \{\textrm{efficiency}\} \\
1-\frac{P_{\textrm{metrics}} - P^{\textrm{min}}_{\textrm{metrics}}}{P^{\textrm{max}}_{\textrm{metrics}} - P^{\textrm{min}}_{\textrm{metrics}}}, & \textrm{metrics} = \{\textrm{safety}, \textrm{comfort}\}
\end{cases}
$$

where $P^{\textrm{min}}_{\textrm{metrics}}$ and $P^{\textrm{max}}_{\textrm{metrics}}$ represent the minimum and maximum values of each performance metric among all scenarios. We kindly ask the reviewers to let us know if further clarification or information is needed. [1] Dosovitskiy et al. CARLA: An open urban driving simulator. CoRL 2017 [2] Lopez et al. Microscopic traffic simulation using SUMO. ITSC 2018 [3] Caesar et al. nuScenes: A multimodal dataset for autonomous driving. CVPR 2020 [4] Igl et al. Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation. ICRA 2022 [5] Cai et al. SUMMIT: A simulator for urban driving in massive mixed traffic. ICRA 2020 [6] Luo et al. GAMMA: A general agent motion model for autonomous driving. RAL 2022 Pdf: /pdf/8ac655f9d58a6f0b29977359cf32101d0bc6947e.pdf
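The metric-combination equation above can be transcribed directly: efficiency is min-max normalized as-is, while safety and comfort are inverted so that larger normalized values are uniformly better. A minimal sketch (the per-scenario metric values and extremes are hypothetical):

```python
# Direct transcription of the metric-combination equation: efficiency is
# min-max scaled directly; safety and comfort are flipped (1 - scaled).

def normalize_metric(value, vmin, vmax, metric):
    """Normalize one driving-performance metric to [0, 1] per the equation."""
    scaled = (value - vmin) / (vmax - vmin)
    if metric == "efficiency":
        return scaled
    if metric in ("safety", "comfort"):
        return 1.0 - scaled
    raise ValueError(f"unknown metric: {metric}")

# vmin/vmax are the per-metric extremes over all scenarios (hypothetical here).
print(normalize_metric(7.0, 2.0, 12.0, "efficiency"))       # -> 0.5
print(round(normalize_metric(0.3, 0.1, 0.5, "safety"), 6))  # -> 0.5
```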
NeurIPS_2023_submissions_huggingface
2023
Summary: In the article, the author discusses the limitations of the current open-loop trajectory evaluation method: 1. It lacks consideration of closed-loop interactions with the ego-planner 2. It overlooks computational timeliness. These two factors result in a misalignment between open-loop trajectory evaluations and closed-loop driving scores. The author also proposes a new metric, Dynamic ADE/FED, which enables a positive correlation to be established between open-loop trajectory evaluations and closed-loop driving scores. Strengths: * Originality: This paper is original, providing a new evaluation method for the field of trajectory prediction. * Significance: The foundation of this paper is sound. It focuses on analyzing the flaws and issues in the standards of trajectory prediction evaluation, and conducts a substantial amount of experiments across different models. * Quality & Clarity: I believe this section is the weakness of the paper. Please refer to the 'Weakness' part. Weaknesses: * Clarity & Quality: - 1. The core Dynamic ADE/FDE metrics in this paper are not clearly explained; there is hardly any section dedicated specifically to elaborating on this aspect. It would be beneficial to include an illustration for better understanding. - 2. Despite the numerous experiments conducted, some of them fail to support the arguments made. For instance, on lines 243-245, you stated that the positive correlation demonstrated by different prediction models on the Argoverse and Alignment datasets indicates that there isn't a significant domain gap between the SUMMIT simulator and real-world data. I find this reasoning flawed. A positive correlation merely indicates consistency in the predictive ability of the models across the two datasets; it does not necessarily imply that the behaviors in the two datasets are consistent. 
Moreover, a closer look at the specific numerical values on the x and y-axes in Figure 2 reveals significant discrepancies between the Argoverse and Alignment datasets. The ADE distribution for Argoverse lies between 1.5-3.5, while that for Alignment falls between 0.5-1.5. This suggests that the Argoverse dataset is substantially more challenging than the Alignment dataset, which contradicts your claim. - 3. Besides this, there are many other similar issues. The overall experimental conclusions lack substantial empirical support and are simply based on statistical analyses of a small number of data points, yielding correlations that aren't particularly strong (the paper only employs eight different prediction models and two different planning models). I remain unconvinced by statistical conclusions drawn from such a limited set of data points. * Novelty: - 4. This paper seems more akin to an experimental report. Despite the multitude of experiments conducted, they remain rather superficial. For a NeurIPS paper, a deeper level of analysis is required, such as quantitative analysis, comparing what specific interactions in which scenarios cause differences between static and dynamic ADE in open-loop and closed-loop evaluations. Additionally, as I previously mentioned, many of the experimental results do not effectively support the conclusions drawn. - 5. The work done in the 'Computational Efficiency' section is quite trivial. The conclusions drawn here are rather obvious, and in the field of autonomous driving engineering, this aspect has already been thoroughly considered. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the 'weaknesses' section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Please refer to the 'weaknesses' section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful comments! We kindly ask the reviewer to let us know if further clarification is needed. >The core Dynamic ADE/FDE metrics are not clear… It would be beneficial to include an illustration. This is a great point for us to clarify! We add a figure in the attached PDF. Like static ADE, dynamic ADE is calculated as the average L2 distance between the forecasted trajectory and the ground truth. However, in dynamic evaluation, different predictors result in varied behaviors of the ego-agent, which, in turn, influence the future behaviors of other road users, leading to different dynamics within the environment. This directly affects the ground truth of prediction as other agents behave differently. >Some experiments fail to support the arguments… A positive correlation does not necessarily imply that behaviors in the two datasets are consistent. **The argument we support is the consistency of prediction performance across the Argoverse and Alignment datasets**. By guaranteeing this, real-world prediction performance is available for our experiments. Real-world driving is the only means to gather actual driving performance; as conducting real-world tests is often unaffordable, we employ SUMMIT to conduct our research. The SUMMIT simulator, built upon Carla, a widely-used simulator in recent competitions [4] and research [5,6], offers behavior approximating reality. >The ADE for Argoverse lies … while that for Alignment lies… Argoverse is more challenging than Alignment. The prediction results for a given predictor can vary, even when applied to two real-world datasets. However, such variations do not necessarily imply that one dataset is more challenging than the other. (SOTA of nuScenes: minADE5=1.092m in 6s prediction horizon, SOTA of Waymo: minADE6=0.535m in 8s prediction horizon) Nevertheless, the underlying behavior of drivers should be similar across real-world datasets.
The observed differences can be influenced by many factors, e.g. average speed, number of agents, or other unidentified variables. **We argue that if the accuracy ranking of predictors remains consistent (as indicated by ADE/FDE alignment), the prediction abilities of the models are aligned, enabling us to evaluate predictors with SUMMIT**. >Conclusions lack solid empirical backing, relying on limited data points for statistical analysis, resulting in weak correlations. **We want to clarify that we have conducted extensive experiments to support our claim**. Specifically, we collected 50 scenarios for each predictor in each setting, resulting in a total of 1600 simulation scenarios. To ensure clarity of results, we only utilize the average performance for each predictor, and the correlation to driving performance is strong. **Additionally, the incorporation of a predictor or planner is expensive**. Assuming a predictor requires 0.02s to execute once, the DESPOT planner calls the predictor 1000 times at each step to explore adequate tree nodes, leading to 20s per planning step. Given the planner's requirement to operate at 3 Hz, we set the tick rate to 1 Hz, allowing 10s of real time for each planning step. As each scenario comprises around 200 steps, the total runtime for one predictor sums up to 200\*50\*10/3600 ≈ 28 hours. As many predictors are even much slower than 0.02s (e.g., KNN: 0.224s), it is infeasible to exhaust all possible predictors given the computational cost, not to mention the implementation time required. >Despite the multitude of experiments conducted, they remain rather superficial. **This paper sheds light on the equally essential need for evaluating seemingly static modules, such as prediction, through closed-loop evaluation**, which has been largely overlooked in most studies [1,2] and the SOTA of the Argoverse Competition [3].
In addition, we reveal the **trade-off between computational efficiency and accuracy even when the predictor is fast enough for the planner's execution. We are the first to provide a comprehensive investigation of these problems**. We further analyze the correlation between two other possible factors and driving performance to support our core research in the attached PDF. >The work done in Computational Efficiency is trivial. The conclusions are obvious and have been thoroughly considered. The important idea we want to point out is **the trade-off between computational efficiency and accuracy even when the predictor is fast enough for the planner's execution**, which has been neglected by most research. In the context of autonomous driving, a consensus exists that the whole perception system must meet a specified threshold (e.g., 100ms [7,8]) for proper planning. Once this threshold is satisfied, the focus shifts to ensuring accuracy. However, our experiments demonstrate a notable trade-off that persists even far below the specified threshold. When the predictor runs much faster than the threshold (2ms), driving performance still remains dominated by the fastest prediction method (CV). Only with an ample planning time budget (tick rate = 1 Hz) do methods with better accuracy dominate. This trade-off holds crucial importance, as various planners exhibit unique trade-offs in addition to their fundamental requirements. [1] Hu et al. Planning-oriented autonomous driving. CVPR 2023 [2] Liang et al. Learning lane graph representations for motion forecasting. ECCV 2020 [3] Zhou et al. Query-Centric Trajectory Prediction. CVPR 2023 [4] NeurIPS CARLA Autonomous driving challenge, 2022 [5] Danesh et al. LEADER: Learning Attention over Driving Behaviors for Planning under Uncertainty. CoRL 2023 [6] Ulfsjöö et al. On integrating POMDP and scenario MPC for planning under uncertainty–with applications to highway driving. IV 2022 [7] Lin et al.
The architectural implications of autonomous driving: Constraints and acceleration. ASPLOS 2018 [8] Yamaguchi et al. In-vehicle distributed time-critical data stream management system for advanced driver assistance. JIP 2017 --- Rebuttal Comment 1.1: Comment: Thank you for the further elaboration. Some of my concerns have been resolved. I would like to clarify my previous question further: 1. Regarding the consistency issue between the Argoverse and Alignment datasets: My intention was not to say that SUMMIT cannot be employed to evaluate real-world performances, but rather that your experiment does not adequately reflect your claim. Your designed experiment only shows that different models have positively correlated trajectory prediction performance on these two datasets, which doesn't prove that SUMMIT has the same behavior as the real-world scenario. It is possible that SUMMIT and real-world datasets have significant differences in performance, but due to factors intrinsic to the model itself (such as the number of model parameters, structure, etc.), they show positive correlations in prediction metrics across different datasets. An experiment that could successfully validate your claim might be to demonstrate the consistency of agent behavior in SUMMIT with that of Argoverse agents, such as constructing a SUMMIT scenario that mimics the Argoverse dataset, and then calculating the consistency of agent trajectories in that scenario. Additionally, I have some questions about the starting point of the paper: 1. Since dynamic ADE/FDE requires closed-loop evaluation, why don't we directly use a closed-loop Planning and Control metric, such as Driving Score? 2. Closed-loop evaluation of dynamic ADE/FDE indeed exhibits better consistency with closed-loop metrics, but we can only test it in a closed-loop simulator. This metric cannot be computed on real-world data. 3.
Moreover, this metric is influenced by the planner involved in the evaluation; different planners would lead to significant inconsistencies in the dynamic ADE/FDE measurements of trajectory prediction models, as referenced in Figure 4. These factors could have a significant impact on the practical utility of the dynamic ADE/FDE metric. --- Reply to Comment 1.1.1: Comment: To answer these questions, we would like to explain more about the starting point of the paper. Our paper comprises two layers of analysis: The first layer establishes the existence and significance of the dynamics gap and computational efficiency, concerning the correlation between prediction performance and driving performance. **The relationship between the Alignment dataset and the SUMMIT simulator mirrors that of the Argoverse dataset and real-world autonomous driving, as each system pair shares an identical ego-planner and exo-agent motion model.** The difference in the correlation coefficient between dynamic ADE/FDE and static ADE/FDE versus driving performance is notable, demonstrating the significance of the dynamics gap. Similarly, the importance of computational efficiency is emphasized by the remaining gap between dynamic ADE and driving performance. These two gaps can only be effectively mitigated when predictors are evaluated in real-world autonomous driving. This layer constitutes our core contribution. The second layer emphasizes the efficacy of the alternative evaluation protocol (simulation) when real-world tests are unaffordable. **To substantiate this, we conduct the Sim-Real experiment, which establishes the alignment of prediction performance between simulation and the real world. Likewise, the alignment of driving performance is verified via the simulator itself.** SUMMIT is built upon Carla, the most widely-used simulator in recent competitions [4] and research [5,6]. The attached motion model GAMMA [9] responds to ego-motion and outperforms that of Carla.
Afterward, the ranking of dynamic metrics reflects the ranking of real-world driving performance and is more obtainable. To ensure the protocol's effectiveness across diverse planners, we employ the RVO planner to explore the correlation between dynamic ADE and driving performance. Significantly, a strong relationship between the two persists. >Regarding the consistency between Argoverse and Alignment datasets: I'm saying that your experiment does not adequately reflect your claim. As aforementioned, the Sim-Real experiment ensures alignment in prediction performance, while the alignment of driving performance (i.e., behavior) is verified by the SUMMIT simulator along with its affiliated motion model GAMMA [9]. Given that our primary focus is not the development of a simulator, our efforts are dedicated to finding the best available simulator. >Why don't we directly use a closed-loop Planning and Control metric rather than dynamic ADE/FDE? We agree that the ultimate goal is driving performance. The subtle difference behind using dynamic ADE lies in emphasizing the significance of the dynamics gap. We select a prediction metric that is calculated in the same way but exhibits distinct performance in static and dynamic evaluations. In addition, computational efficiency is emphasized by the remaining gap between dynamic ADE and driving performance. The underlying question lies in the significance of dynamic metrics. As aforementioned, our main contributions are: 1. establishing the existence and significance of the dynamics gap and computational efficiency; 2. emphasizing the efficacy of the alternative evaluation protocol. It is essential to note that dynamic ADE serves solely as the tool we employ to substantiate these contributions, rather than constituting our primary contribution. >We can only test closed-loop metrics in a closed-loop simulator. This metric cannot be computed on real-world data.
As aforementioned, the relationship between the Alignment dataset and the SUMMIT simulator mirrors that of the Argoverse dataset and the real world. Thus, dynamic metrics also prove effective in real-world tests. However, they are unattainable from datasets. This underscores our primary finding regarding the importance of real-time evaluation. >Moreover, this metric is influenced by the planner involved in the evaluation...influence the practical utility of the dynamic metrics. This concept forms one cornerstone of our paper: prediction evaluation is dependent on the downstream modules. Previous planning-aware metrics [10] also tried to capture this impact, but in an open-loop manner. It is inappropriate to posit that the 'optimal' predictor will outperform 'normal' predictors across all planners. The best predictor for a given planner might vary, depending on its real driving performance. This, naturally, leads to corresponding changes in the ranking of dynamic ADE. Therefore, our main contributions are: 1. identifying the importance of the dynamics gap and computational efficiency; 2. highlighting the efficacy of the alternative evaluation protocol, rather than proposing dynamic ADE. We extend our gratitude for the valuable insights and constructive feedback, which greatly enhance the quality of our manuscript. [9] Luo et al. GAMMA: A general agent motion model for autonomous driving. [10] Ivanovic et al. Injecting planning-awareness into prediction and detection evaluation.
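For reference, the static ADE/FDE metrics debated in this thread are conventionally the mean and final per-step L2 displacement between a predicted and a ground-truth trajectory; in the dynamic (closed-loop) variant the ground-truth rollout comes from the simulator, but the displacement computation itself is the same. A minimal sketch (function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error between a predicted and a
    ground-truth trajectory, each of shape (T, 2)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-step L2 displacement
    ade = dists.mean()   # averaged over all T steps
    fde = dists[-1]      # displacement at the final step only
    return ade, fde
```

For example, a straight-line prediction `[[0,0],[1,0],[2,0]]` against a diagonal ground truth `[[0,0],[1,1],[2,2]]` yields ADE = 1.0 and FDE = 2.0.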
Regret-Optimal Model-Free Reinforcement Learning for Discounted MDPs with Short Burn-In Time
Accept (poster)
Summary: The paper proposes a nearly optimal model-free algorithm in discounted tabular MDPs which has a shorter burn-in time than previous works and provides valid theoretical guarantees. Strengths: The paper is well motivated, trying to shorten the burn-in time and obtain a more sample-efficient model-free tabular MDP algorithm. The writing is awesome. The detailed comparison with previous algorithms is highly appreciated.

The proof seems correct, with many previous techniques organically integrated together. I must say the author should be an expert on the related techniques and proof arguments. To achieve a tighter burn-in time, it seems vital to update the reference advantage at an appropriate frequency and learn the Q-value of the current policy more accurately. To tackle the issue, the author proposes two new components. The first is the lazy update for the Q-value function from which the greedy policy is generated; the other is an adaptive low-switching mechanism to determine when to perform such an update. Both components might inspire future works. Weaknesses: If I must point out some weaknesses, I think that the proof contains too many instances where the author didn’t include brackets when utilizing the summation symbol $\sum$, such as in (46)-(48). The frequency of these omissions is quite high, making it difficult to enumerate them here. I think it is important that all brackets are appropriately included for clarity and correctness. I have some questions about the results and the theoretical analysis. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The author discusses the impact of large state-action spaces on burn-in time several times. It is reasonable to expect that tabular MDPs with large state-action spaces are generally more challenging to learn, leading to longer burn-in times. However, the paper lacks a lower bound for the minimum burn-in time, although it does match the well-known lower bound on the dominant term (or variance term). It would be valuable for the author to address this gap and provide insights into the possible smallest burn-in time. 2. Lemma 10 presents a rough estimate for $\sum_{t=1}^T (V_{t-1}\left(s_t\right)-V^{\pi_t}\left(s_t\right))$, which is subsequently utilized multiple times in the proof of Lemma 7 (e.g., in (153), (228), (229)).
Instead of using this crude bound repeatedly, it seems to be more advantageous to bound the regret without this rough estimate. I mean to maintain the dependence of $\sum_{t=1}^T (V_{t-1}\left(s_t\right)-V^{\pi_t}\left(s_t\right))$ while analyzing intermediate terms in Lemma 4 and then to use Proposition 1 to disentangle this dependence at the end. By using this alternative argument, it may be possible to eliminate the need for the crude bound entirely. Additionally, it is worth investigating if this alternative argument can enhance the dependence of $(1-\gamma)^{-1}$ on the burn-in time. 3. It is notable that the regret of UCBVI-$\gamma$ exhibits a more favorable dependence on $(1-\gamma)^{-1}$ compared to the algorithm proposed in this work. It would greatly enhance the paper if the author could provide a discussion on the reasons behind this superiority. Understanding the factors contributing to better dependence would provide valuable insights into the comparative performance of the two algorithms and contribute to a deeper understanding of their respective strengths and weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Please see the questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate you taking the time to provide this valuable review! Let us provide responses to your comments and questions as follows. > If I must point out some weaknesses, I think that the proof contains too many instances where the author didn’t include brackets when utilizing the summation symbol $\sum$, such as in (46)-(48). The frequency of these omissions is quite high, making it difficult to enumerate them here. I think it is important that all brackets are appropriately included for clarity and correctness. Thank you for the suggestion! We will correct these in the final version of our paper. > The author discusses the impact of large state-action spaces on burn-in time several times. It is reasonable to expect that tabular MDPs with large state-action spaces are generally more challenging to learn, leading to longer burn-in times. However, the paper lacks a lower bound for the minimum burn-in time, although it does match the well-known lower bound on the dominant term (or variance term). It would be valuable for the author to address this gap and provide insights into the possible smallest burn-in time. This is a good question. We had some discussion about what the lower bound suggests about burn-in cost in our introduction (towards the end of Section 1.3). In short, the current lower bound does not rule out the possibility of having a burn-in term as small as $\frac{SA}{(1-\gamma)^2}$. However, it is possible that this lower bound is not tight, so this can be a good open question for future work. We will add more discussion about this gap and the reason for our suboptimal $(1-\gamma)^{-1}$ dependence in the burn-in when we revise the paper. > Lemma 10 presents a rough estimate for $\sum_{t=1}^T (V_{t-1}(s_{t}) - V^{\pi_t}(s_t))$, which is subsequently utilized multiple times in the proof of Lemma 7 (e.g., in (153), (228), (229)). 
Instead of using this crude bound repeatedly, it seems to be more advantageous to bound the regret without this rough estimate. I mean to maintain the dependence of $\sum_{t=1}^T (V_{t-1}(s_{t}) - V^{\pi_t}(s_t))$ while analyzing intermediate terms in Lemma 4 and then to use Proposition 1 to disentangle this dependence at the end. By using this alternative argument, it may be possible to eliminate the need for the crude bound entirely. Additionally, it is worth investigating if this alternative argument can enhance the dependence of $(1-\gamma)^{-1}$ on the burn-in time. This is a very good observation. To be precise, (153) is the proof of Lemma 10 and not an invocation of the lemma, so we can focus on the other instances. We use Lemma 10 at these places because we want to replace $\sum_{t=1}^T (V_{t-1}(s_{t}) - V^{\pi_t}(s_t))$ with a simple upper bound without having to carry it around in the following analysis. Even if we carried $\sum_{t=1}^T (V_{t-1}(s_{t}) - V^{\pi_t}(s_t))$ all the way to the end without invoking Lemma 10 as you suggested, the burn-in cost would still have the current $(1-\gamma)^{-1}$ dependence. The invocations of Lemma 10 are only for clarity. Take (228) for a concrete example. (228) is an intermediate step to obtain the inequality in (216). Despite the use of Lemma 10, the $\log T$ term in (216) is only on the order of $(1-\gamma)^{-3}$. This is dominated by the $\log T$ term in (217), which is on the order of $(1-\gamma)^{-13/4}$, and this term in (217) is not a result of Lemma 10. Note that both (216) and (217) are part of $\mathcal{R}_1$, so using Lemma 10 or not does not affect the burn-in component of $\mathcal{R}_1$ or the final burn-in term. > It is notable that the regret of UCBVI-$\gamma$ exhibits a more favorable dependence on $(1-\gamma)^{-1}$ compared to the algorithm proposed in this work. It would greatly enhance the paper if the author could provide a discussion on the reasons behind this superiority. 
Understanding the factors contributing to better dependence would provide valuable insights into the comparative performance of the two algorithms and contribute to a deeper understanding of their respective strengths and weaknesses. This is also a good question. The reason why UCBVI-$\gamma$ has a lower $(1-\gamma)^{-1}$ dependence in the burn-in than ours is not so much what they did in their algorithm as what we did in ours that gives rise to a high $(1-\gamma)^{-1}$ dependence. To be specific, the high $(1-\gamma)^{-1}$ dependence comes from our use of the reference-advantage technique. In fact, similarly large factors of $1/(1−\gamma)$ can also be observed in the burn-in terms of other reference-advantage methods [1,2,3]. Mathematically, this can be attributed to the use of Cauchy-Schwarz in the analysis to balance the factors of $1/(1−\gamma)$ and $T$. For example, we need to manipulate a term like $T^{1/4}/(1−\gamma)^{13/4}$ into $\sqrt{T/(1−\gamma)^{3}} + 1/(1−\gamma)^{6}$ so that the $T$-dependent term can have the optimal $1/(1−\gamma)$ dependence at the expense of a larger burn-in term. Such uses of Cauchy-Schwarz can quickly amplify the power of $1/(1−\gamma)$ in the burn-in term. Note that terms like $T^{1/4}/(1−\gamma)^{13/4}$ are a result of the reference-advantage technique (e.g., higher-order variance terms like $(P-\widehat{P}) (V - V^{\mathrm{R}})$). [1] Li, G., Shi, L., Chen, Y., Gu, Y., and Chi, Y. Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning. Advances in Neural Information Processing Systems, 34:17762–17776, 2021. [2] Zhang, Z., Zhou, Y., and Ji, X. Almost optimal model-free reinforcement learning via reference-advantage decomposition. Advances in Neural Information Processing Systems, 33:15198–15207, 2020. [3] Zhang, Z., Zhou, Y., and Ji, X. Model-free reinforcement learning: from clipped pseudo-regret to sample complexity. In International Conference on Machine Learning, pp. 12653–12662. 
PMLR, 2021. Please let us know if you have any further questions and we will be happy to answer. Thanks again! --- Rebuttal Comment 1.1: Comment: Thanks for the response which addresses my questions well so I decide to keep my score.
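The Cauchy–Schwarz balancing step described in the rebuttal above can be made concrete. Using the elementary inequality $\sqrt{ab} \le \frac{a+b}{2}$ with $a = \sqrt{T/(1-\gamma)^3}$ and $b = (1-\gamma)^{-5}$ (illustrative choices, matching the exponents quoted in the rebuttal):

```latex
\frac{T^{1/4}}{(1-\gamma)^{13/4}}
  \;=\; \sqrt{\sqrt{\frac{T}{(1-\gamma)^{3}}} \cdot \frac{1}{(1-\gamma)^{5}}}
  \;\le\; \frac{1}{2}\sqrt{\frac{T}{(1-\gamma)^{3}}} + \frac{1}{2(1-\gamma)^{5}}
  \;\le\; \sqrt{\frac{T}{(1-\gamma)^{3}}} + \frac{1}{(1-\gamma)^{6}}.
```

The $T$-dependent term now carries the optimal $(1-\gamma)^{-3}$ rate under the square root, while the surplus powers of $(1-\gamma)^{-1}$ are pushed into the $T$-independent burn-in term, which is exactly the amplification effect the rebuttal describes.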
Summary: This paper studied infinite discounted tabular Markov decision process (MDP). The authors proposed a model-free algorithm: Q-SlowSwitch-Adv that achieves optimal regret with low burn-in cost, low space complexity and low computational cost. They introduce an innovative technique that changes the execution policy in a slow, adaptive manner, and employ variance reduction to enhance efficiency. Strengths: 1. The major novelty of this work lies in the solution to achieve optimal regret using model-free algorithm for discounted tabular MDP. 2. The property of the low burn-in cost seems interesting and I believe that the low computation cost and low space complexity could be good contributions, enhancing the feasibility of implementing reinforcement learning in real-world scenarios. 3. The design of a slow, adaptive execution policy-switching technique is novel. Weaknesses: 1. The paper lacks empirical evidence to substantiate the theoretical claims and the efficacy of the proposed algorithm, especially for the 'low' burn-in cost. 2. The reason why the proposed algorithm can reduce the burn-in cost is not clear, please see the Question part for more details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors elaborate on the methodology used to reduce the burn-in cost? Specifically, is this reduction achieved through refined theoretical analysis, novel algorithm design, or a combination of both? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The paper's main limitation is the lack of empirical evidence to support the proposed method. Performance comparisons against classic algorithms would have made the theoretical claims more credible. 2. 
Given the high level of complexity involved in real-world RL problems, the generalizability of the proposed method is not addressed. Therefore, it is unclear how the proposed algorithm will perform in more complex scenarios such as function approximation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate you taking the time to provide this valuable review! Let us provide responses to your comments and questions as follows. > The paper lacks empirical evidence to substantiate the theoretical claims and the efficacy of the proposed algorithm, especially for the 'low' burn-in cost. Thank you for raising the point about empirical experiments. Our original goal is to resolve a longstanding theoretical optimality gap, but we agree that how an algorithm performs empirically can also be important. We will try to find an RL environment that can best demonstrate our point and conduct some experiments for the final version of our paper, since the rebuttal period is short. > Could the authors elaborate on the methodology used to reduce the burn-in cost? Specifically, is this reduction achieved through refined theoretical analysis, novel algorithm design, or a combination of both? This is a good question. This reduction can be mostly attributed to the algorithm. The model-free nature of our algorithm is a factor of this reduction. Since we only estimate the Q-function, which only has $SA$ entries, it is natural to see we can start making meaningful estimations as soon as samples exceed $\frac{SA}{\mathrm{poly}(1-\gamma)}$. The variance reduction technique and the novel adaptive slow-switching technique we used are also important. These techniques as well as their tight analysis ensure the dominant term in the regret is optimal without introducing extra $SA$ factors in the burn-in term. > Given the high level of complexity involved in real-world RL problems, the generalizability of the proposed method is not addressed. Therefore, it is unclear how the proposed algorithm will perform in more complex scenarios such as function approximation. 
Our result can provide some insights into more complex settings, such as how regret should scale with respect to each problem parameter and the fact that we need to control variance and policy switching carefully for optimality. While the general idea of our adaptive slow-switching technique is applicable to general function approximation, it is not obvious how it should be implemented exactly for that setting (the variance reduction idea we used has existed for a long time but does not have a general function approximation implementation yet). This is beyond the scope of this work and is a good question for future work to answer. Please let us know if you have any further questions and we will be happy to answer. Thanks again! --- Rebuttal Comment 1.1: Comment: Thanks for the response. My concerns are addressed. I decide to keep my score.
Summary: This paper studies discounted infinite-horizon MDPs. The authors propose an online algorithm that attains optimal (i.e., least) regret with finite-sample performance guarantees. Strengths: Optimal regret & finite-sample performance guarantees Weaknesses: Lack of numerical illustration, but not a deal-breaker. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Some numerical experiments in the main body would be great. 2. Assumptions on the probability distribution for the algorithm to achieve the optimal regret and finite-sample performance guarantees should be made explicitly and crystal clear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Lack of numerical illustration, but not a deal-breaker. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really thank you for taking the time to provide this valuable review as well as your approval of our paper! Let us provide responses to your questions as follows. Thank you for raising the point about numerical experiments and illustration. Our original goal is to resolve a longstanding theoretical optimality gap, but we agree that how an algorithm performs empirically can also be important. We will try to find an RL environment that can best demonstrate our point and conduct some experiments for the final version of our paper, since the rebuttal period is short. > Assumptions on the probability distribution for the algorithm to achieve the optimal regret and finite-sample performance guarantees should be made explicitly and crystal clear. Thank you for raising this question. We did not make any assumption on the probability distribution. With a number $\delta$ decided, if we run the algorithm for $T$ steps (with no assumption), then Equation (9) in Theorem 1 holds with probability at least $1-\delta$. The transitions in the online trajectory are the only source of randomness in this problem, and no assumption is made about them. Please let us know if you have any further questions. Thanks again! --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. My concerns are well addressed. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We are happy that our response was able to help.
Summary: The paper provides a version of the Q-learning algorithm with Bernstein bonuses and a control variate that allows for the correct (in terms of the dependence upon state and action space sizes $S$ and $A$) second-order term in the regret bound. This allows for an algorithm with a provably shorter burn-in period (again in terms of $S$ and $A$). This result for discounted MDPs is, to the best of my knowledge, novel. Strengths: The paper presents a valuable contribution to the theory of model-free algorithms for solving MDPs with finite state-action spaces. The suggested idea of incorporating Bernstein-style bonuses with the UCB-advantage type of control variates is appealing and novel in the discounted MDP setting. Weaknesses: Actually, I would also like to ask the authors why not consider submitting to a journal. The proof appendix is massive and it is obviously not possible to check all the technical details properly due to the very limited time of the reviewing period. Minor issues and comments: 1. The example concerning the Go game (lines 94-98) is spectacular, yet with $|S| = 3^{161}$ the linear dependence on $S$ is clearly as bad as $S^2$. I understand the authors' willingness to explain the motivation behind improving the dependence of the second-order term upon $S$ and $A$, yet I suggest coming up with a slightly less ambitious example; 2. I would also suggest removing "outstanding" in line 108: it is clearly a matter of taste; Technical Quality: 3 good Clarity: 3 good Questions for Authors: I suggest the authors add an explicit paragraph discussing the techniques currently used to obtain a second-order term in the regret which is optimal either in terms of the episode length $H$ (resp. the $1-\gamma$ factor) or the $S$ and $A$ factors. I would also suggest adding explicit comments on why the current technique in the paper is not suitable to get a second-order term which is optimal also in terms of the $1-\gamma$ factor.
With this discussion added, I can increase my score by $1$ point. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is theoretical and I do not expect any negative societal impact of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate you taking the time to provide this valuable review! We also thank the reviewer for suggesting submission to a journal. We submitted it to this conference because we believed our paper can be seen by more people here and thus maximize its impact and contribution to the community. In addition, NeurIPS does not seem to discourage submissions with long proofs. But ultimately, this discussion is really beyond the scope of what we should discuss during rebuttal. > The example concerning the Go game (lines 94-98) is spectacular, yet with $|S| = 3^{161}$ the linear dependence on $S$ is clearly as bad as $S^2$. I understand the author's willingness to explain the motivation beyond improving the dependence of the second-order term upon $S$ and $A$, yet I suggest to come up with a slightly less ambitious example; This is a good point. Instead of Go, we can also take tic-tac-toe as an example. The state space for a 5x5 game is $3^{25}$. While it is feasible to store the Q-function in memory, anything on the order of $|\mathcal{S}|^2$ can be difficult. We will include this example in the final version of our paper when explaining the importance of finding a model-free approach. > I would also suggest removing "outstanding" in line 108: it is clearly a matter of taste; Thank you for the suggestion! We will remove that in the final version as you suggested. > I suggest the authors to add explicitly a paragraph when discussing the techniques currently used to obtain the second-order term in the regret which is optimal either in terms of episode length $H$ (resp. $1-\gamma$ factor), or $S$ and $A$ factors. I would also suggest to add explicit comments on why the current technique in the paper is not suitable to get the second-order term, which is optimal also in terms of the $1-\gamma$ factor. With this discussion added, I can increase my score by 1 point. We really appreciate your willingness to raise the score. 
It is certainly a good question. We weren’t able to discuss much about it in our initial submission due to the page limit, but we can certainly add this explanation in the final version, which will allow for an additional page. Let us first explain why our algorithm has a burn-in with high $\frac{1}{1-\gamma}$ dependence. This comes from our use of the reference-advantage technique. In fact, similarly large factors of $1/(1−\gamma)$ can also be observed in the burn-in terms of other reference-advantage methods [1,2,3]. Mathematically, this can be attributed to the use of Cauchy-Schwarz in the analysis to balance the factors of $1/(1−\gamma)$ and $T$. For example, we need to manipulate a term like $T^{1/4}/(1−\gamma)^{13/4}$ into $\sqrt{T/(1−\gamma)^{3}} + 1/(1−\gamma)^{6}$ so that the $T$-dependent term can have the optimal $1/(1−\gamma)$ dependence at the expense of a larger burn-in term. Such uses of Cauchy-Schwarz can quickly amplify the power of $1/(1−\gamma)$ in the burn-in term. Note that terms like $T^{1/4}/(1−\gamma)^{13/4}$ are a result of the reference-advantage technique (e.g., higher-order variance terms like $(P-\widehat{P}) (V - V^{\mathrm{R}})$). As to obtaining a burn-in with optimal dependence on $1/(1−\gamma)$, it is not clear if any existing algorithm can achieve this yet, not even UCBVI-$\gamma$ in [4]. It is a good future direction to study how to lower the $1/(1−\gamma)$ factors in the burn-in term when using reference-advantage algorithms. As to how we are able to achieve a linear dependence with $SA$ in the burn-in cost, the main reason is the model-free nature of our algorithm. Since we only estimate the Q-function, which only has $SA$ entries, it is natural to see we can start making meaningful estimations as soon as samples exceed $\frac{SA}{\mathrm{poly}(1-\gamma)}$. 
The variance reduction and adaptive slow-switching techniques we used are also important, as they ensure the dominant term in the regret is optimal without introducing extra $SA$ factors in the burn-in term. [1] Li, G., Shi, L., Chen, Y., Gu, Y., and Chi, Y. Breaking the sample complexity barrier to regret-optimal model-free reinforcement learning. Advances in Neural Information Processing Systems, 34:17762–17776, 2021. [2] Zhang, Z., Zhou, Y., and Ji, X. Almost optimal model-free reinforcement learning via reference-advantage decomposition. Advances in Neural Information Processing Systems, 33:15198–15207, 2020. [3] Zhang, Z., Zhou, Y., and Ji, X. Model-free reinforcement learning: from clipped pseudo-regret to sample complexity. In International Conference on Machine Learning, pp. 12653–12662. PMLR, 2021. [4] Jiafan He, Dongruo Zhou, and Quanquan Gu. Nearly minimax optimal reinforcement learning for discounted mdps. Advances in Neural Information Processing Systems, 34:22288–22300, 2021. In the very end, we just want to gently point out that getting an $SA$ dependence in the burn-in or second-order term is only one of our contributions. The other, perhaps more important, contribution is that our algorithm is the first model-free algorithm that achieves minimax-optimal regret $\widetilde{O}(\sqrt{\frac{SAT}{(1-\gamma)^3}})$. Compared to the state-of-the-art result in [4], we reduced the space complexity from $O(S^2A)$ to $O(SA)$ and the computational complexity from $O(S^2AT)$ to $O(T)$. Please let us know if you have any further questions and we will be happy to answer. Thanks again! --- Rebuttal Comment 1.1: Comment: Dear Reviewer wb6F, Thank you again for taking the time to review our paper! You asked some very important questions in your review that we really hope to resolve. Could you please let us know if our rebuttal has addressed them sufficiently? If it has, could you kindly consider raising your score as you mentioned? 
Please don’t hesitate to contact us if you have any further questions. --- Rebuttal Comment 1.2: Comment: Thank you for your detailed answer. I would raise my score to 6. --- Reply to Comment 1.2.1: Comment: Dear Reviewer wb6F, We are very grateful for your support. Thank you!
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies online learning in tabular infinite-horizon MDPs, and proposes a model-free algorithm that obtains (nearly) minimax optimal regret with space complexity $SA$, and such that the number of time steps needed to reach the minimax-optimality regime is $O(SA/(1-\gamma)^{13})$. Prior art achieving minimax optimal regret required space complexity $S^2 A$ and $O(S^3A^2/(1-\gamma)^{4})$ time steps to reach the optimal regime. It should be noted that the definition of regret used here is not the same as that of prior art. Strengths: * The paper proposes a new approach to learning infinite-horizon MDPs. * The paper (perhaps) advances the state of the art in online learning of infinite-horizon tabular MDPs. Weaknesses: - It is hard to say this work directly improves [14] because the objective considered is not the same. It would be helpful if a more formal discussion of this point was provided by the authors. For the same reason it is also unclear if the lower bound (also from [14]) applies. Given this, the way the results are presented in the table is misleading. - The improvement is rather incremental, shaving off a factor of $O(SA)$ (at the cost of an extra $(1-\gamma)^4$) from the lower-order (non-$T$-dependent) term. In addition, the algorithm is complicated and the presentation does not make it particularly easy to understand. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Is there a more formal treatment of the two regret definitions? It would be helpful for making a more educated decision here; the improvement would have been marginal even if the objectives considered were the same, and when taking this into account it may not even be an actual improvement --- the contribution seems questionable.
- It seems like the approach given here borrows most of the algorithm from the finite horizon case [54] and then employs a slow switching strategy to essentially reduce the infinite horizon problem to a finite horizon one, thereby gaining in the $SA$ factors but losing in the horizon. This also seems to be the source of the different regret definition. The presentation does not make this entirely clear, however; is this more or less the case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate you taking the time to provide this valuable review! Let us provide responses to your comments and questions as follows. > It is hard to say this work directly improves [14] ... > Is there a more formal treatment of the two regret definitions? It would be helpful to make a more educated decision here. This is a good question. While there is no discussion about this difference in the existing literature, let us provide some more explanation of the relationship between the two regret definitions here, in addition to our brief discussion in Section 2.3. The difference lies in the quantity being compared against the optimal value function at every step $t$. In the definition we used (also in [55,19]), (i) it is the expected cumulative reward of the execution policy $\pi_t$ at time $t$, i.e., $V^{\pi_t}(s) := E[\sum_{i=0}^\infty \gamma^i r(s_i, \pi_{t}(s_i)) | s_0=s]$. Note that $\pi_t$ is a stationary policy. In the other definition (of [14]), (ii) it is the expected cumulative future reward of the algorithm since time $t$. This can be viewed as the value function of the non-stationary policy $\\{\pi_j\\}\_{j=t}^\infty$, i.e., $V^{\\{\pi_j\\}\_{j=t}^\infty}(s) := E[\sum_{i=0}^\infty \gamma^i r(s_i, \pi_{t+i}(s_i)) | s_0=s]$. When the execution policy $\pi_t$ becomes more and more optimal over the course of execution, which is expected given the theoretical results from [14] and our paper, it is clear that $V^{\\{\pi_j\\}\_{j=t}^\infty}$ of (ii) should be larger than $V^{\pi_t}$ of (i). This makes the regret $V^{\star}(s_t) - V^{\\{\pi_j\\}\_{j=t}^\infty}(s_t)$ defined with (ii) smaller than the regret $V^{\star}(s_t) - V^{\pi_t}(s_t)$ defined with (i), so it can be seen that the regret with (ii) is a more lenient metric than (i), which means an algorithm that can achieve a certain guarantee under regret (i) should also be able to achieve the same guarantee under regret (ii). 
Thus, this means our guarantee under regret (i) carries over to regret (ii), and the lower bound under our regret (i) is at least the one for (ii). But overall, this difference is not big enough to change the order of regret bounds, and we included [14] in our table for completeness. We will be clearer about the relationship between these two definitions and its implications when we revise the paper. > The improvement would have been marginal even if the objectives considered were the same, and when taking into account it may not even be an actual improvement --- the contribution seems questionable. > The improvement is rather incremental, shaving off a factor of $O(SA)$ ... Our burn-in improvement can have a huge effect when the state-action space is immense, which is what usually happens in practice and what this paper is focused on. We will revise our paper to make this point clearer. It is also important to note that the burn-in cost is not the only contribution we make; our algorithm is also the first model-free algorithm to achieve minimax regret optimality $\widetilde{O}(\sqrt{\frac{SAT}{(1-\gamma)^3}})$. Compared to the state-of-the-art result in [14], we reduced the space complexity from $O(S^2A)$ to $O(SA)$ and the computational complexity from $O(S^2AT)$ to $O(T)$. > It seems like the approach given here borrowed most of the algorithm from the finite horizon case [54] ... > In addition, the algorithm is complicated and presentation does not make it particularly easy to understand. We indeed used the reference-advantage idea from [54] and were clear about this in Section 3. To be precise, there is a difference in how we control the reference update $V^{\mathrm{R}}$, as our approach uses an additional lower confidence bound estimate to control the reference better. On the other hand, we are unsure how the slow-switching technique could “reduce the infinite horizon problem to a finite horizon one”. 
Our switching follows an adaptive schedule rather than a fixed one, so it does not seem that anything can be reduced to a finite horizon. In addition, our algorithm switches slowly, at a rate of once every $O(\sqrt{T})$ steps on average, and it would not make sense to consider a reduction to a finite-horizon MDP with horizon $O(\sqrt{T})$. Let us also underscore the nontrivial difference between the finite-horizon setting and the infinite-horizon discounted setting when it comes to *online* regret minimization. These two settings give rise to different problem structures, which has long been recognized in the existing literature and also been briefly pointed out in our introduction. In the infinite-horizon setting, we optimize regret over one single trajectory, and there is always dependence between any pair of time steps along the trajectory. Such dependence can cause statistical difficulties and give rise to an infinitely expansive structure in the error decomposition. In contrast, the finite-horizon setting resets the trajectory every $H$ steps, so any statistical dependence lasts for at most $H$ steps. Neither of the two settings can be reduced to the other under online regret minimization. For this reason, there is little resemblance between our analysis and the one in [54]. If we apply the analysis from [54] naively to the infinite-horizon setting, it will not converge. The two proofs diverge from the very beginning: while [54] manipulates the regret into a clipped pseudo-regret and expands it for further decomposition, we decompose the regret directly and cope with the infinite expansion problem (unique to the infinite-horizon setting) with a recursion-style algebraic manipulation (Appendix D). As for the presentation of the algorithm, while the technicality might be involved, everything can be summarized into three key ideas: UCB, variance reduction and the adaptive slow-switching technique, which are detailed in Section 3. 
We will be happy to clarify if you have any specific question about the algorithm. Please let us know if you have any further questions and we will be happy to answer. Thanks again! --- Rebuttal Comment 1.1: Comment: Dear Reviewer zms9, Thank you again for taking the time to review our paper! You asked some very important questions in your review that we really hope to resolve. Could you please let us know if our rebuttal has addressed them sufficiently? Please don’t hesitate to contact us if you have any further questions.
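The ordering between the two regret definitions discussed in the rebuttal above can be sanity-checked numerically. The following is a toy sketch with an invented single-state MDP (all parameters are illustrative and not from the paper): the execution policy is suboptimal before an arbitrary step `T0` and optimal afterwards, so the non-stationary tail value of definition (ii) is never below the stationary value of definition (i), making regret (ii) the more lenient metric.

```python
# Toy single-state MDP: the action chosen by pi_t earns reward 1 once the
# policy becomes optimal (t >= T0) and 0 before that. All numbers invented.
gamma = 0.9
T0 = 5      # illustrative step at which the execution policy becomes optimal
H = 200     # truncation horizon for the infinite sums; gamma**200 is negligible

def reward(t):
    return 1.0 if t >= T0 else 0.0

# (i)  stationary value: follow the single policy pi_t forever
V_stat = [reward(t) * sum(gamma**i for i in range(H)) for t in range(10)]
# (ii) value of the non-stationary tail {pi_j}_{j >= t}
V_ns = [sum(gamma**i * reward(t + i) for i in range(H)) for t in range(10)]

V_star = 1.0 / (1.0 - gamma)   # optimal value: reward 1 at every step
for t in range(10):
    # regret under definition (ii) is never larger than under definition (i)
    assert V_star - V_ns[t] <= V_star - V_stat[t] + 1e-9
```

Before `T0` the stationary value is 0 while the tail value already credits the future switch to the optimal action, which is exactly the gap the rebuttal describes.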
null
null
null
null
null
null
Label Correction of Crowdsourced Noisy Annotations with an Instance-Dependent Noise Transition Model
Accept (poster)
Summary: The authors formulate the noise transition model in a Bayesian framework and design a new label correction algorithm. The authors further formulate the label correction process as a hypothesis testing problem and propose a novel algorithm to infer the true label from the noisy annotations based on the pairwise likelihood ratio test (LRT). The experimental results on benchmark and real-world datasets validate the effectiveness of the proposed approach. Strengths: 1. This paper provides a posterior-concentration theorem, which guarantees the posterior consistency of the noise transition model in terms of the Hellinger distance. 2. This paper formulates the label correction process as a hypothesis testing problem and proposes a novel algorithm to infer the true label from the noisy annotations based on the pairwise likelihood ratio test (LRT). 3. This paper presents extensive experiments on benchmark and real-world datasets to validate the effectiveness of the proposed approach. Weaknesses: The proposed approach outperforms other algorithms on three image datasets with synthetic annotations, but does not have significant advantages on two datasets with human annotations, CIFAR-10N and LabelMe. The authors do not provide a reasonable explanation for these results. In addition, some of the author's statements in the abstract need to be reconsidered, such as “Learning an instance-dependent noise transition model, however, is challenging and remains less explored.” To my knowledge, there are many algorithms proposed to solve this problem. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. From Table 1, the proposed approach outperforms other algorithms on three image datasets with synthetic annotations, but does not have significant advantages on two datasets with human annotations, CIFAR-10N and LabelMe. The authors provide little analytical discussion of the experimental results. 
The authors should give a fuller discussion of the experimental results and add other datasets with human annotations to verify the validity of the proposed approach. 2. The authors claim that “Learning an instance-dependent noise transition model, however, is challenging and remains less explored.” But to my knowledge, there are many algorithms proposed to solve this problem. The authors should note and cite these papers. 3. The proof section of the text is relatively substantial, but the author's explanation of the motivation of the article is slightly lacking. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. From Table 1, the proposed approach outperforms other algorithms on three image datasets with synthetic annotations, but does not have significant advantages on two datasets with human annotations, CIFAR-10N and LabelMe. The authors provide little analytical discussion of the experimental results. The authors should give a fuller discussion of the experimental results and add other datasets with human annotations to verify the validity of the proposed approach. 2. The authors claim that “Learning an instance-dependent noise transition model, however, is challenging and remains less explored.” But to my knowledge, there are many algorithms proposed to solve this problem. The authors should note and cite these papers. 3. The proof section of the text is relatively substantial, but the author's explanation of the motivation of the article is slightly lacking. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your valuable comments on our paper. Below we address each question in detail. 1. As shown in Table 1 of our paper, the proposed method outperforms the baseline methods in most cases. We also display the results from Table 1 in Figure 1 of the PDF attachment, which may help demonstrate the benefits of the proposed method more clearly. Concerning the reason why the advantage of our approach is more pronounced on the datasets with synthetic annotations than on the real datasets, it may be related to the fact that we strictly follow the instance-dependent assumption in generating the annotations. To further verify the effectiveness of our method, we now add additional experiments on the MUSIC dataset, and the results are shown in Table 2. 2. Thanks so much for your suggestion. In preparing a revision, we will take your suggestion and include this background information and an explanation of how our work differs from earlier efforts in our paper. Related references will be added too. In addition, to highlight the advantages of the proposed approach, we have now conducted additional empirical analyses by comparing with baseline methods [A-C] which employ instance-dependent matrices; the results are presented in Table 3 in the attached PDF. 3. Thanks for this thoughtful remark. The motivation and the advantages of the proposed approach can be summarized as follows. (1) Utilizing such a sparse Bayesian model enables us to consistently estimate the instance-dependent noise transition matrix with a limited number of anchor points. As shown in Figure 1 on page 9 of our paper, the average estimation error of the proposed method is apparently smaller than that of the baseline methods. We now also present the average estimation error under different annotator number settings in Figure 2 of the attached PDF. (2) Most existing works related to the estimation of instance-dependent matrices are heuristic and lack theoretical guarantees. 
[D] made some theoretical progress to justify the use of the trace regularisation, extending the work of [E] on instance-independent noise matrices. However, the theory in [D] only holds for individual samples rather than the population setting. Our work is the first to theoretically characterize the distance between the noise transition model and the true instance-dependent noise transition matrix in terms of the Hellinger distance, which largely closes the theoretical gaps in the consistency of the noise transition model. (3) The posterior consistency of the noise transition model enables us to conduct further analysis. In particular, based on the posterior consistency result, we propose an algorithm to infer the true label utilizing the noise transition model, and provide an information-theoretic bound on the Bayes error of the label inference method. **References** [A] Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. ECCV, 2022. [B] Part-dependent label noise: Towards instance-dependent label noise. NeurIPS, 2020. [C] Estimating instance-dependent Bayes-label transition matrix using a deep neural network. ICML, 2022. [D] Disentangling human error from the ground truth in segmentation of medical images. NeurIPS, 2021. [E] Learning from noisy labels by regularized estimation of annotator confusion. CVPR, 2019.
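To make the pairwise likelihood-ratio-test label correction discussed in this thread concrete, here is a minimal, hedged sketch. The class/annotator counts, confusion matrices, and threshold value are all invented for illustration; in the paper the transition matrices are instance-dependent outputs of the Bayesian model, and the threshold is annealed during training rather than fixed.

```python
import numpy as np

K, R = 3, 5   # classes and annotators (illustrative sizes, not from the paper)

# Invented per-annotator confusion matrices T[r][true, observed]: each
# annotator reports the true class with prob. 0.9, else a uniform mistake.
T = np.full((R, K, K), 0.1 / (K - 1))
for r in range(R):
    np.fill_diagonal(T[r], 0.9)

def infer_label(annotations, tau=2.0):
    """Pairwise LRT sketch: accept class k only if its log-likelihood beats
    every alternative by margin tau; otherwise abstain (leave uncorrected)."""
    loglik = np.array([sum(np.log(T[r, k, y]) for r, y in enumerate(annotations))
                       for k in range(K)])
    k = int(np.argmax(loglik))
    margin = loglik[k] - np.max(np.delete(loglik, k))
    return k if margin >= tau else None

# e.g. 4 of 5 annotators agree on class 2, so the LRT accepts class 2
label = infer_label([2, 2, 2, 1, 2])
```

Abstaining when no pairwise test is decisive mirrors the idea of only correcting labels whose inferred value passes the test, which is what keeps the Bayes error of the corrected labels controllable.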
Summary: The paper introduces a method for modelling input-dependent label noise for classification in the presence of multiple annotators. The generative process of annotation is modelled as a hierarchical Bayesian model in which the observed labels are noisy versions of the latent true labels that are corrupted through a noise transition model that is dependent on the input as well as the annotator. The authors propose a method for inferring the true labels based on the approximated noise model and demonstrate benefits on classification benchmarks with both synthetic and real annotation noise. Strengths: - The idea of modelling labeling noise as a function of both input and annotator is much more realistic than the majority of related works which drop the input dependence or ignore the differences between annotators. - They propose a practical algorithm with demonstrable efficacy on classification benchmarks with both synthetic & real label noise. - The authors provide a sound theoretical justification for the above algorithm; 1) the concentration theorem shows the "tightness" of the approximation of the noise transition model; 2) formulating inference of true labels as a statistical test gives an information-theoretic bound on the error. Weaknesses: - The empirical analysis lacks insights into which components of the annotation model are actually important e.g., how sensitive is the model to a different specification of the prior distribution. - There seems to be no comparison with other works that employ instance-specific noise transition models --- e.g., [18] seems to propose a much simpler model that simultaneously learn the instance-specific confusion matrix per annotator and the true label distribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - how does the algorithm scale with the number of annotators? 
- how does the accuracy of the method depend on the "sparsity" of labels (e.g., if we had only one label per image, would it still be possible to model the input-dependent noise)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reading our paper and providing meaningful feedback. Below we address each question in detail. **Weaknesses** 1. The performance of the proposed method is relatively robust to the prior distribution as long as it satisfies the conditions listed in Appendix A.3. Specifically, for computational considerations, we consider the prior $\theta\sim \lambda_n N(0, \sigma^2_{1n})+(1-\lambda_n)N(0, \sigma^2_{0n})$ for each parameter $\theta$, which satisfies the conditions in Appendix A.3 if the values of $\lambda_n$ and $\sigma_{0n}$ are small enough and $\sigma_{1n}$ is set to a proper value. We provide experimental results with different choices of $\lambda_n$ in Table 1, from which we can observe that the model performance is relatively stable with respect to different values of $\lambda_n$ if $\lambda_n$ is within a reasonable range. 2. Thanks so much for pointing this out. The reference [18] in our paper is intended for segmentation tasks, so we did not include it as a baseline method in our paper. But your suggestion that we should compare with other works employing instance-specific noise matrices is insightful. We have now added more baseline methods Multi-AnT [A], Part-T [B], and BLTM [C] to highlight the advantages of the proposed approach, and the results are presented in Table 3 in the attached PDF. **Questions** 1. In the network structure of our experiments, each annotator corresponds to a linear fully-connected layer, connected after the shared representation layer, and thus accounts for a small percentage of the overall parameters. We consider different settings on the CIFAR10 dataset with the number of annotators set to 10, 20, and 30, respectively. The performance of the proposed method under different settings can be found in Table 4, with the total training hours listed below. 
| Number of Annotators | IDN-LOW | IDN-MID | IDN-HIGH |
| --- | --- | --- | --- |
| 5 | 2.59 | 2.64 | 2.65 |
| 10 | 2.67 | 2.62 | 2.63 |
| 20 | 2.65 | 2.65 | 2.67 |
| 30 | 2.67 | 2.67 | 2.71 |

2. To evaluate the methods under incomplete labeling, for each data item, an annotator labels it with probability 0.2. **References** [A] Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. ECCV, 2022. [B] Part-dependent label noise: Towards instance-dependent label noise. NeurIPS, 2020. [C] Estimating instance-dependent Bayes-label transition matrix using a deep neural network. ICML, 2022.
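For readers unfamiliar with the mixture-of-Gaussians prior used in the responses above, the sketch below samples from it numerically. The hyperparameter values here are invented purely for illustration; the rebuttal only requires that $\lambda_n$ and $\sigma_{0n}$ be small and $\sigma_{1n}$ be set to a proper value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (invented) hyperparameters for the two-component prior
# theta ~ lam * N(0, s1^2) + (1 - lam) * N(0, s0^2)
lam, s1, s0 = 0.05, 1.0, 1e-3
n_params = 100_000

wide = rng.random(n_params) < lam            # which draws use the wide component
theta = np.where(wide,
                 rng.normal(0.0, s1, n_params),   # occasionally large weights
                 rng.normal(0.0, s0, n_params))   # most weights near zero

# Most prior draws are effectively zero, encoding the sparsity assumption
frac_small = np.mean(np.abs(theta) < 0.01)
```

With these illustrative values roughly 95% of the sampled parameters are negligible, which is the sense in which the prior encodes a sparse network.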
Summary: The paper deals with the noisy annotation problem with crowdsourcing data. Instead of traditional transition matrices, the paper proposed to learn an instance-dependent transition model that uses a Bayesian network with guaranteed consistent posterior to approximate the noise in labels. The paper then proposes to infer the true label from the noisy annotations and improve the performance based on the corrected labels. Strengths: 1. The paper is clearly motivated and deals with a potentially meaningful problem for supervised learning with noisy annotations. 2. The paper provides a detailed theoretical analysis of the proposed Bayesian model and an information-theoretic bound on the Bayes error. 3. The experiments on MNIST and CIFAR datasets show that the proposed method outperforms the baselines most of the time. Weaknesses: 1. The assumption that an instance-dependent noise model can be learned with limited training data is quite strong. It seems that the proposed method is still associated with preset thresholds like existing methods, which is particularly difficult to set in a noisy problem with multiple annotators. 2. The conditions for Theorem 1 are not listed in the main paper. I think some remarks are necessary for people to have a fair understanding of the proposed theorems. 3. The proposed method does not always outperform baselines, and the advantage is not significant. In some cases, the biggest difference appears in the IDN-MID case, which is not thoroughly investigated or explained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can there be a more convincing claim that an instance-dependent noise model can be learned with limited training data? 2. Is the main theoretical result asymptotic? Can there be a clearer message that indicates the dependency on the number of data samples? 3. Can there be more analysis on the reasons for the change of performances with the number of annotators on some datasets? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Although the paper does mention limitations and extensions, the impact of modeling annotators and instance-dependent noise model is not adequately addressed. The noisy problem is often associated with biases or fairness concerns, so some careful consideration needs to be made. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for assessing our paper and providing meaningful comments. Below we address each question in detail. **Weaknesses** 1. Thanks for your question. (1) As shown in Appendix A.1 and Appendix A.2, we assume that the underlying noise transition probability can be approximated by a sparse model. This assumption is supported by empirical studies [A-C] which explore different methods for compressing neural networks without impacting performance. Besides, theoretical works [D-E] in approximation theory have established theories guaranteeing uniform approximation rates for a broad family of function classes. Additionally, as shown in Figure 1 on page 9 of our paper, the average estimation error of the proposed method is apparently smaller than that of the baseline methods, which in turn provides empirical evidence for our assumption. (2) The threshold value in the proposed label correction method is directly related to the accuracy of the inferred labels. In our experiment, we start with a smaller value, which is chosen from {5, 10, 15, 20}, and linearly increase it during training. 2. Thank you for this suggestion. We will add remarks on these conditions when preparing a revision shortly. Specifically, the conditions to guarantee the posterior consistency can be roughly categorized into three parts. (1) Part 1: the hypothesis set we consider is a class of DNNs. (2) Part 2: the underlying noise transition probability can be approximated by a sparse model. (3) Part 3: the prior we use needs to satisfy some conditions. 3. Thanks for your comment. As shown in Table 1 of our paper, the proposed method outperforms the baseline methods in most cases. On the CIFAR-10 and CIFAR-100 datasets, the performance of our method is always the best, but the second-best method is always changing, which may be the reason why the biggest difference is sometimes seen in the IDN-MID case. 
We display the results from Table 1 in Figure 1 of the PDF attachment, which may help to demonstrate the benefits of the proposed method more clearly. **Questions** 1. Please see the first point in **Weaknesses**. 2. Yes. The main theoretical result is asymptotic. To investigate the dependency on the number of data samples $n$, we may look at the conditions on the prior in Appendix A.3. For computational considerations, we may consider the prior $\theta\sim \lambda_n N(0, \sigma^2_{1n})+(1-\lambda_n)N(0, \sigma^2_{0n})$, where the values of $\lambda_n$, $\sigma_{1n}$ and $\sigma_{0n}$ depend on $n$, and it can be verified that this prior satisfies the conditions in Appendix A.3 if the values of $\lambda_n$, $\sigma_{1n}$ and $\sigma_{0n}$ are properly chosen. In particular, the value of $\lambda_n$ is related to the sparsity of the model and we require it to satisfy that $\lambda_n =O(1/J_{nk}[n^{H_{n1}+H_{n2}}(L_{n1}+L_{n2}p_{n})]^{c})$ for some positive constant $c$ and $k=1, 2$, which should be chosen by considering the network structure and the number of data points $n$. The meaning of the notations in the preceding formula is the same as in the paper. This procedure sheds some light on the consideration of the dependency on $n$. 3. Thanks for your question. We now also consider different settings on the CIFAR10 dataset with the number of annotators set to 10, 20, and 30, respectively. To evaluate the methods under incomplete labeling, for each data item, an annotator labels it with probability 0.2. The performance of the proposed method under different settings can be found in Table 4 and Figure 2, which further demonstrates the effectiveness of the proposed method. **References** [A] Learning both weights and connections for efficient neural network. NeurIPS, 2015. [B] Model compression and acceleration for deep neural networks: The principles, progress, and challenges. IEEE Signal Processing Magazine, 2018. 
[C] Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. ICML, 2019. [D] Optimal approximation with sparsely connected deep neural networks. SIAM Journal on Mathematics of Data Science, 2019. [E] Optimal approximation of piecewise smooth functions using deep ReLU neural networks. Neural Networks, 2018. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for providing the rebuttal. I appreciate the answers to address the clarity of conditions. However, although I don't agree with Reviewer i77Y about asking for extremely sparse cases, I do agree with some concerns about the practicality of the proposed method. I'd like to keep the rating for now to see if the strongly negative opinions persist. --- Reply to Comment 1.1.1: Title: Thank you for the comments. Comment: Thank you for the comments. We present additional experimental results in Tables 1-2 to further illustrate the benefits of the proposed approach. * We conduct more experiments with a varying number of annotators (10, 30, 50, 100), where each instance has only one label. Under these settings, we compare our approach with different baseline methods: (1) Instance-dependent method -- "Multi-AnT"; (2) Label correction methods -- "EM" and "MBEM"; and (3) Methods utilizing two networks -- "Co-teaching" and "Co-teaching+". The results are shown in Table 1, where "EM+" and "MBEM+" are the results obtained by replacing the label inference process in our method with the EM and MBEM methods, respectively. * In Table 2, we present the accuracy of the inferred labels of our method and the EM/MBEM method, where the numbers in the brackets of our method are the number of data points whose labels are inferred and used for training. The results in Table 2 show the high accuracy of the inferred labels with our method and further exhibit the advantages of the proposed label inference method. We hope our response can address your concerns. #### Experimental results. 
**Table 1:** Average accuracy of learning the CIFAR10 dataset with different numbers of annotators.

| | | Ours | Multi-AnT | MBEM | MBEM+ | EM+ | Co-teaching | Co-teaching+ |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| IDN-LOW | 10 | $87.34_{\pm 0.27}$ | $85.23_{\pm 0.42}$ | $82.60_{\pm 0.36}$ | $82.58_{\pm 0.24}$ | $81.02_{\pm 0.88}$ | $84.23_{\pm 0.56}$ | $84.32_{\pm 0.60}$ |
| | 30 | $86.94_{\pm 1.15}$ | $84.36_{\pm 0.58}$ | $82.18_{\pm 0.62}$ | $82.19_{\pm 0.21}$ | $80.88_{\pm 0.48}$ | $83.69_{\pm 0.54}$ | $84.27_{\pm 0.40}$ |
| | 50 | $86.78_{\pm 0.68}$ | $84.53_{\pm 0.75}$ | $82.00_{\pm 0.33}$ | $81.85_{\pm 0.53}$ | $80.18_{\pm 0.32}$ | $84.39_{\pm 0.82}$ | $84.90_{\pm 0.50}$ |
| | 100 | $86.67_{\pm 0.35}$ | $84.49_{\pm 0.32}$ | $81.71_{\pm 0.45}$ | $81.34_{\pm 0.52}$ | $80.18_{\pm 0.42}$ | $84.49_{\pm 0.50}$ | $84.34_{\pm 0.73}$ |
| IDN-MID | 10 | $86.58_{\pm 0.62}$ | $81.14_{\pm 1.10}$ | $77.96_{\pm 0.32}$ | $78.68_{\pm 0.52}$ | $76.61_{\pm 0.41}$ | $80.97_{\pm 0.82}$ | $81.28_{\pm 0.67}$ |
| | 30 | $85.67_{\pm 0.55}$ | $80.74_{\pm 0.50}$ | $77.61_{\pm 0.48}$ | $78.37_{\pm 0.51}$ | $76.51_{\pm 0.32}$ | $81.14_{\pm 0.70}$ | $81.72_{\pm 0.40}$ |
| | 50 | $85.68_{\pm 0.41}$ | $80.06_{\pm 0.66}$ | $77.46_{\pm 0.74}$ | $78.17_{\pm 0.17}$ | $76.28_{\pm 0.63}$ | $81.47_{\pm 0.75}$ | $81.19_{\pm 0.38}$ |
| | 100 | $84.99_{\pm 0.59}$ | $79.26_{\pm 0.42}$ | $76.62_{\pm 0.28}$ | $78.06_{\pm 0.73}$ | $75.24_{\pm 0.66}$ | $81.65_{\pm 0.21}$ | $81.30_{\pm 0.39}$ |
| IDN-HIGH | 10 | $84.68_{\pm 0.37}$ | $76.64_{\pm 0.68}$ | $71.98_{\pm 1.36}$ | $73.95_{\pm 1.86}$ | $66.82_{\pm 2.19}$ | $76.47_{\pm 1.24}$ | $77.45_{\pm 1.11}$ |
| | 30 | $83.05_{\pm 0.41}$ | $73.57_{\pm 0.99}$ | $72.73_{\pm 0.76}$ | $75.32_{\pm 1.15}$ | $70.52_{\pm 0.80}$ | $77.57_{\pm 0.76}$ | $77.89_{\pm 0.06}$ |
| | 50 | $82.66_{\pm 0.33}$ | $73.78_{\pm 1.13}$ | $72.85_{\pm 0.65}$ | $74.51_{\pm 0.82}$ | $70.69_{\pm 0.75}$ | $77.79_{\pm 0.74}$ | $77.87_{\pm 0.88}$ |
| | 100 | $81.71_{\pm 1.19}$ | $71.47_{\pm 1.37}$ | $72.70_{\pm 0.90}$ | $74.08_{\pm 0.94}$ | $70.73_{\pm 0.79}$ | $78.25_{\pm 0.77}$ | $78.50_{\pm 0.45}$ |

**Table 2:** Average accuracy of the inferred labels.

| | | Ours | MBEM/EM |
| :---- | :---- | :---- | :---- |
| IDN-LOW | 10 | $97.98_{\pm 0.22}$ (39417) | $80.26_{\pm 0.51}$ |
| | 30 | $98.01_{\pm 0.23}$ (38271) | $78.20_{\pm 1.58}$ |
| | 50 | $97.30_{\pm 0.59}$ (39104) | $79.37_{\pm 0.17}$ |
| | 100 | $97.33_{\pm 0.37}$ (38478) | $77.35_{\pm 2.68}$ |
| IDN-MID | 10 | $97.70_{\pm 0.42}$ (36579) | $62.81_{\pm 0.37}$ |
| | 30 | $96.65_{\pm 0.38}$ (37316) | $62.22_{\pm 0.19}$ |
| | 50 | $96.21_{\pm 0.37}$ (38110) | $61.67_{\pm 0.38}$ |
| | 100 | $96.65_{\pm 0.37}$ (35528) | $61.61_{\pm 0.30}$ |
| IDN-HIGH | 10 | $95.18_{\pm 0.65}$ (37394) | $48.66_{\pm 0.37}$ |
| | 30 | $94.03_{\pm 0.47}$ (37040) | $49.08_{\pm 0.21}$ |
| | 50 | $93.51_{\pm 0.30}$ (36498) | $48.89_{\pm 0.25}$ |
| | 100 | $94.13_{\pm 0.42}$ (34347) | $48.85_{\pm 0.12}$ |
Summary: This paper focuses on the problem of estimating annotator-specific instance-dependent transition matrices. The authors propose a solution based on a family of Bayesian estimators to approximate these matrices, supported by theoretical foundations. Through rigorous mathematical proofs, the authors demonstrate that their estimator can effectively approximate transition matrices under claimed mild conditions. Additionally, they introduce a novel label correction method with bounded Bayes error. To evaluate the effectiveness of their proposed method, the authors conduct experiments using three commonly used benchmarks and synthetic annotations. Overall, this paper has potentially significant insights that could benefit the community. The experimental evaluation is not very convincing, but I'm inclined to accept it at this stage. Strengths: There are several main theoretical contributions in this paper: 1. The authors propose a sparse Bayesian approach to learn the annotator-specific instance-dependent transition matrices to model the relationship between label noise and crowdsourced annotators. 2. The proposed estimator is claimed to approximate the true transition process with a guarantee (it minimizes the Hellinger distance). 3. Based on the derived theorem, the authors also provide a novel correction algorithm that leverages a likelihood ratio test; an upper bound of the Bayes error for the proposed algorithm is provided. With the approximation guarantee and error bound on the correction algorithm, the proposed method is therefore considered to have significant theoretical implications and considerable novelty, provided that the proofs are rigorous and correct, which is under further checking. Weaknesses: 1. The section numbers provided in the appendix do not correspond to the references in the main paper, where are assumptions A.1-A.4 in A.1? 2. 
The compared baselines are not known for their advantages in modeling instance-dependent label noise; methods such as PartT [1] and BLTM [2] should be compared instead. Also, SOTA methods based on geometrical properties, such as [3,4], should be compared. 3. One of the main issues is that the use of a sparse Bayesian approach is not strongly motivated; it remains unclear to me why using sparse Bayesian estimators is better or non-trivial compared with existing works. [1] Xia, Xiaobo, et al. "Part-dependent label noise: Towards instance-dependent label noise." Advances in Neural Information Processing Systems 33 (2020): 7597-7610. [2] Yang, Shuo, et al. "Estimating instance-dependent bayes-label transition matrix using a deep neural network." International Conference on Machine Learning. PMLR, 2022. [3] Li, Xuefeng, et al. "Provably end-to-end label-noise learning without anchor points." International conference on machine learning. PMLR, 2021. [4] Yong, L. I. N., et al. "A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond." The Eleventh International Conference on Learning Representations. 2022. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Why are the results of CE trained with clean labels so low? This is inconsistent with prior works. 2. How are the synthetic annotations generated? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
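For context on the Hellinger-distance guarantee mentioned in the strengths above: for discrete distributions $p$ and $q$, $H(p,q)=\frac{1}{\sqrt{2}}\big(\sum_i(\sqrt{p_i}-\sqrt{q_i})^2\big)^{1/2}$. A minimal sketch (our own illustration, not code from the paper):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q
    (lists of probabilities over the same support); ranges in [0, 1]."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)
```

Identical distributions give 0 and disjoint supports give 1, which is why a bound on this distance controls how far the estimated transition probabilities can drift from the truth.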
Rebuttal 1: Rebuttal: **Weaknesses** 1. Thanks so much for pointing this out. The assumptions A.1-A.4 are in Appendix A.2, and we will correct this in the final version of our paper. We apologize for the incorrect reference and thank you again for reading our paper carefully. 2. We have now added more baseline methods, PartT [1], BLTM [2] and VolMinNet [3], and the results are presented in Table 3 in the attached PDF. 3. Thanks for this thoughtful remark. The motivation and the advantages of the proposed approach can be summarized as follows. (1) Utilizing such a sparse Bayesian model enables us to consistently estimate the instance-dependent noise transition matrix with a limited number of anchor points. As shown in Figure 1 on page 9 of our paper, the average estimation error of the proposed method is clearly smaller than that of the baseline methods. We also present the average estimation error under different annotator number settings in Figure 2 of the attached PDF. (2) Most existing works on estimating the instance-dependent matrix are heuristic and lack theoretical justification. [A] made some theoretical progress to justify the use of the trace regularisation, which extends the work of [B] on the instance-independent noise matrix. However, the theory in [A] only holds for individual samples, rather than the population setting. Our work is the first to theoretically characterize the distance between the noise transition model and the true instance-dependent noise transition matrix in terms of the Hellinger distance, which largely closes the theoretical gap in the consistency of the noise transition model. (3) The posterior consistency of the noise transition model enables us to conduct further analysis. In particular, based on the posterior consistency result, we propose an algorithm to infer the true label utilizing the noise transition model, and provide an information-theoretic bound on the Bayes error of the label inference method. **Questions** 1. 
This could be attributed to the different choices of the network structure and the number of training epochs. In particular, for the CIFAR-10 and CIFAR-100 datasets, we choose the ResNet-18 and ResNet-34 architectures and train the networks for 120 and 150 epochs, respectively. We also train the models from scratch instead of using pre-trained weights. 2. We generate three groups of annotators with varying expertise as described on page 8 of our paper, with 5 annotators in each group. For each annotator, we generate the instance-dependent noisy annotations according to Algorithm 2 in [C]. **References** [A] Disentangling human error from the ground truth in segmentation of medical images. NeurIPS, 2021. [B] Learning from noisy labels by regularized estimation of annotator confusion. CVPR, 2019. [C] Xia, Xiaobo, et al. "Part-dependent label noise: Towards instance-dependent label noise." Advances in Neural Information Processing Systems 33 (2020): 7597-7610.
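The instance-dependent annotation generation discussed above can be illustrated with a simplified stand-in. This sketch is our own, not Algorithm 2 of [C]: the function name, the random-projection scoring, and the flip-probability scaling are illustrative assumptions chosen only to show what "instance-dependent" means — the flip probability varies with the instance's features rather than being a single class-level constant.

```python
import random

def gen_instance_dependent_labels(features, true_labels, n_classes,
                                  noise_rate, seed=0):
    """Simplified instance-dependent noise: each instance's flip
    probability depends on its features via a random projection.
    Illustrative stand-in only, not Algorithm 2 of PartT."""
    rng = random.Random(seed)
    # one random projection direction per call (per "annotator")
    w = [rng.gauss(0.0, 1.0) for _ in range(len(features[0]))]
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in features]
    lo, hi = min(scores), max(scores)
    noisy = []
    for y, s in zip(true_labels, scores):
        # map score into [0, 2*noise_rate], so the average flip rate
        # is roughly noise_rate while varying per instance
        p_flip = 2 * noise_rate * (s - lo) / (hi - lo + 1e-12)
        if rng.random() < p_flip:
            noisy.append(rng.choice([k for k in range(n_classes) if k != y]))
        else:
            noisy.append(y)
    return noisy
```

With `noise_rate=0` the labels come back unchanged; larger rates flip feature-dependent subsets of instances, one independent draw per annotator seed.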
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for providing feedback! We appreciate your comments and the time and effort you took to read, comprehend, and assess our paper. We value your comments and concerns regarding how to emphasize our motivations and clarify the assumptions more clearly (Reviewer i77Y, PGxM, wDkZ), as well as your suggestions that we compare with more baseline methods to highlight the advantages of the proposed method (Reviewer i77Y, PGxM, uiTb, ysQF). We carefully reviewed every one of your queries, concerns, and remarks. Here we address each review separately with a thorough response. To reflect our responses, we have also uploaded a one-page PDF with additional experimental results. The baseline methods used in the experiments are listed below. We hope that our responses have adequately addressed all of the concerns raised; if further details, justifications, or results are needed, we would be pleased to provide them. [1] Gao, Zhengqi, et al. Learning from multiple annotator noisy labels via sample-wise label fusion. European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [2] Xia, Xiaobo, et al. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems 33 (2020): 7597-7610. [3] Yang, Shuo, et al. Estimating instance-dependent bayes-label transition matrix using a deep neural network. International Conference on Machine Learning. PMLR, 2022. [4] Dawid, Alexander Philip, and Allan M. Skene. Maximum likelihood estimation of observer error‐rates using the EM algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics) 28.1 (1979): 20-28. [5] Khetan, Ashish, Zachary C. Lipton, and Anima Anandkumar. Learning from noisy singly-labeled data. The Ninth International Conference on Learning Representations. 2018. [6] Li, Yuan, Benjamin Rubinstein, and Trevor Cohn. 
Exploiting worker correlation for label aggregation in crowdsourcing. International conference on machine learning. PMLR, 2019. [7] Li, Xuefeng, et al. Provably end-to-end label-noise learning without anchor points. International conference on machine learning. PMLR, 2021. [8] Ibrahim, Shahana, Tri Nguyen, and Xiao Fu. Deep learning from crowdsourced labels: Coupled cross-entropy minimization, identifiability, and regularization. ICLR, 2023. Pdf: /pdf/f7f5a78ef4050bd041c6c596b79a39b69f80f85a.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a new method for label correction of crowdsourced data via an instance-dependent noise transition model. It parameterizes the instance-dependent noise transition matrices by a Bayesian network. A theoretical analysis that guarantees the posterior consistency of the noise transition matrices is provided. A new label correction method based on the instance-dependent noise transition matrices is also provided. Experiments show the proposed method can achieve better performance than some existing methods. Strengths: 1. Estimating the instance-dependent noise transition matrices for crowdsourcing is an important topic. 2. This paper provides some theoretical analyses to understand and guarantee some useful properties of the proposed method. 3. Experiments on synthetic and real crowdsourced datasets verified the effectiveness of the proposed methods. Weaknesses: 1. A general instance-dependent noise matrix is not new for crowdsourcing. For example, several prior methods [1-5] studied it in the early period. Recently, this topic has received more attention [6-8], due to the increasing demand for large-scale well-annotated datasets. I think the paper should include this background and these existing works, and discuss the differences from them. 2. The conditions to guarantee the posterior consistency of the noise transition model seem very complex. How can we know such conditions are met in real scenes? Besides, since the anchor point assumption is strong in label-noise learning [9-11], the requirement of the proposed method that a set of anchor points can be used for training the noise transition model seems even more difficult to satisfy. 3. The experiments are insufficient in the following aspects: - The number of annotators is usually large, while the experiments on synthetic data only assume 5 annotators, which is not a typical setting. - Also, due to the small number of annotators, the annotations are not sparse for each annotator, as is usually the case. 
- I suggest the authors compare the proposed method with some baselines that model instance-dependent noise matrices [6-8], and with more truth inference baselines [12-14], to show its advantage more clearly. - Since the experimental datasets are all image classification datasets, it would be better to run some experiments on datasets from other modalities, e.g., the Music dataset [15]. 4. Lack of ablation discussion on the label correction method. How to use the noise transition matrix for inferring clean labels has been explored in previous works [16,13]; I think it is necessary to discuss and compare the proposed label correction method with existing truth inference methods that use a noise transition matrix, to clarify when such a label correction method should be used. [1] The multidimensional wisdom of crowds. NeurIPS 2010 [2] Exploiting structure in crowdsourcing tasks via latent factor models. Citeseer. 2010 [3] Modeling annotator expertise: Learning when everybody knows a bit of something. AISTATS 2010 [4] Learning from multiple annotators with varying expertise. MLJ 2014 [5] Learning to Predict from Crowdsourced Data. UAI 2014 [6] Disentangling human error from the ground truth in segmentation of medical images. NeurIPS 2020 [7] Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion. ECCV 2022 [8] Beyond confusion matrix: learning from multiple annotators with awareness of instance features. MLJ 2023 [9] Are Anchor Points Really Indispensable in Label-Noise Learning? NeurIPS 2019 [10] Provably End-to-end Label-noise Learning without Anchor Points. ICML 2021 [11] Estimating Noise Transition Matrix with Label Correlations for Noisy Multi-Label Learning. NeurIPS 2022 [12] Bayesian classifier combination. AISTATS 2012 [13] Learning from noisy singly-labeled data. ICLR 2018 [14] Exploiting worker correlation for label aggregation in crowdsourcing. ICML 2019 [15] Gaussian process classification and active learning with multiple annotators. 
In ICML 2014 [16] Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, 1979 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations at the end of the main body, while I think the requirement of a set of anchor points is difficult to meet in real-world scenes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
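For readers unfamiliar with reference [16]: Dawid-Skene alternates between estimating per-annotator confusion matrices and per-item label posteriors, which is the classic "truth inference with a noise transition matrix" that weakness 4 asks to compare against. The sketch below is our own minimal EM implementation with a uniform class prior, additive smoothing, and a fixed iteration count — all simplifying assumptions, not the cited paper's exact formulation.

```python
def dawid_skene(labels, n_classes, n_iter=20):
    """labels: list over items of {annotator_id: observed label}.
    Returns per-item posteriors over true classes via EM.
    Minimal illustrative sketch of Dawid & Skene (1979)."""
    annotators = sorted({a for item in labels for a in item})
    # init posteriors from per-item vote fractions
    post = []
    for item in labels:
        counts = [0.0] * n_classes
        for lab in item.values():
            counts[lab] += 1.0
        tot = sum(counts) or 1.0
        post.append([c / tot for c in counts])
    for _ in range(n_iter):
        # M-step: per-annotator confusion matrices with smoothing
        conf = {a: [[1e-2] * n_classes for _ in range(n_classes)]
                for a in annotators}
        for item, p in zip(labels, post):
            for a, lab in item.items():
                for k in range(n_classes):
                    conf[a][k][lab] += p[k]
        for a in annotators:
            for k in range(n_classes):
                s = sum(conf[a][k])
                conf[a][k] = [v / s for v in conf[a][k]]
        # E-step: recompute posteriors from annotator likelihoods
        new_post = []
        for item in labels:
            lik = [1.0] * n_classes
            for a, lab in item.items():
                lik = [lik[k] * conf[a][k][lab] for k in range(n_classes)]
            s = sum(lik) or 1.0
            new_post.append([v / s for v in lik])
        post = new_post
    return post
```

On items where two reliable annotators outvote one unreliable one, the learned confusion matrices down-weight the unreliable annotator, so the posterior follows the reliable majority.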
Rebuttal 1: Rebuttal: Thank you for providing such a comprehensive review. 1. We appreciate your thoughtful remarks. We will include this background and an explanation of how our work differs from earlier works in our paper. Specifically, existing methods investigate the human annotation process and propose different models to estimate the instance-dependent noise matrix. [1-5] use traditional classification models such as logistic regression, and [6-8] are works in the context of large datasets and deep models. However, [1-5] and [7-8] are heuristic and lack theoretical guarantees in estimating the instance-dependent noise matrix. [6] makes some theoretical progress to justify the use of the trace regularisation, and extends the work of [A] to the case of an instance-independent noise matrix. However, the theory in [6] only holds for individual samples, rather than the population setting. In contrast to existing papers, our work is the first to theoretically characterize the distance between the noise transition model and the true instance-dependent noise transition matrix, which largely fills the theoretical gaps in the consistency of the noise transition model. Moreover, based on the posterior consistency result, we propose an algorithm to infer the true label utilizing the noise transition model. 2. Thank you for your comments. We will add remarks on these conditions for clarification in our paper. Specifically, the conditions to guarantee the posterior consistency can be roughly categorized into three parts. (1) Part 1: the hypothesis set we consider is a class of DNNs. (2) Part 2: the underlying noise transition probability can be approximated by a sparse model. (3) Part 3: the prior we use should satisfy some conditions. Condition (1) is a general setting in the literature. Below we illustrate how conditions (2) and (3) are met in real scenes. 
Regarding (2), existing works [B-D] empirically show that large DNNs contain a large number of redundant parameters and propose various methods for compressing neural networks without impacting performance. Moreover, theoretical works [E-G] in approximation theory establish theories guaranteeing uniform approximation rates for a broad family of function classes. Concerning (3), we can consider the prior $\theta\sim \lambda_n N(0, \sigma^2_{1n})+(1-\lambda_n)N(0, \sigma^2_{0n})$ for each parameter $\theta$ in the training process. Using techniques such as Mill's ratio, we prove that this prior satisfies the conditions in Appendix A.3 if the values of $\lambda_n$ and $\sigma_{0n}$ are small enough and $\sigma_{1n}$ takes a proper value. The anchor point assumption is used in our paper for theoretical considerations; it can, however, be relaxed to the existence of a set of points belonging to the $k$-th class with high probability (denoted $1-\delta_n$) for $k\in[K]$. We prove that a result similar to Theorem 1 still holds, with the distance $d_n$ multiplied by a term related to $1-\delta_n$. Moreover, we add an additional baseline method, VolMinNet [10], which does not exploit anchor points, and present the result in Table 3. We observe that our approach still outperforms this baseline, which further verifies the effectiveness of the proposed method. 3. We take your suggestions and ran additional experiments. The results can be found in the attached PDF. (1) \& (2) We now consider different settings on the CIFAR10 dataset with the number of annotators set to 10, 20, and 30, respectively. To evaluate the methods under incomplete labeling, for each data item, we consider the case where an annotator labels it with probability $p=0.2$. The performance of the proposed method under different settings can be found in Table 4 and Figure 2. Due to the time constraint, we have not obtained all results yet; we are still running some experiments. 
In the subsequent discussion period, we will give the results of the experiments with varying values of $p$. (3) The experimental results with additional baselines Multi-AnT [7] and MBEM [13] are shown in Table 3. (4) We now present the results on the MUSIC dataset in Table 2. 4. Thanks for your insightful comments; we will add this part to the related work of our paper. The EM [16] and MBEM [13] methods are proposed under the instance-independent transition matrix setting, while our method is developed for the instance-dependent case. In the process of truth inference, all methods need to utilize the transition matrix, but the difference is that the EM and MBEM methods directly select the inferred label as the label value corresponding to the maximum of the label posterior distribution. Our method, however, conducts a pairwise likelihood ratio test between different label values, and we select only labels with a likelihood ratio greater than a pre-specified threshold to train the classifier, thus greatly increasing the accuracy of the inferred labels. The efficiency of the proposed approach is further confirmed by the performance comparison with the EM and MBEM methods, as shown in Table 3. **References** [A] Learning from noisy labels by regularized estimation of annotator confusion. CVPR, 2019. [B] Optimal approximation with sparsely connected deep neural networks. SIAM Journal on Mathematics of Data Science, 2019. [C] Learning both weights and connections for efficient neural network. NeurIPS, 2015. [D] Model compression and acceleration for deep neural networks: The principles, progress, and challenges. IEEE Signal Processing Magazine, 2018. [E] Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. ICML, 2019. [F] Optimal approximation with sparsely connected deep neural networks. SIAM Journal on Mathematics of Data Science, 2019. [G] Optimal approximation of piecewise smooth functions using deep ReLU neural networks. 
Neural Networks, 2018. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response of the authors. After carefully reading the response and checking the source code, I still have the following questions: 1. Except for the existence of anchor points, how to collect a set of anchor points is unclear and has no guarantee. According to the source code, this paper collected anchor points by selecting points whose estimated noisy class posteriors exceed a certain threshold. Even if anchor points exist in the training dataset, as far as I know, this approach is only guaranteed when faced with instance-independent label noise [1,2]. When faced with instance-dependent label noise, without other assumptions, collecting a set of anchor points in this way is not guaranteed. Besides, according to the source code, this work utilized the inferred true labels of anchor points for training the transition matrix models, a practice I did not find explicitly in the paper. This practice also raises more questions: can the accuracy of the inferred true labels of anchor points be guaranteed, and if the inferred true labels are wrong, how will this influence the proposed method? I think the authors should largely improve their analysis to clarify these problems. 2. The annotations in the experiments are still not as sparse as in real-world cases. As seen in CIFAR10-N and CIFAR100-N, the number of annotations per annotator is only in the hundreds. Besides, could the proposed method work well when each instance has only one label, which is also a typical case in related works [3,4,5]? I would like to see the learning performance on CIFAR-100 with sparse annotations. 3. The proposed framework seems computationally complex, as it utilizes two networks to model the instance-dependent noise transition model. An analysis of computational complexity is suggested. It would be great to know how much we pay to improve the performance with the proposed method. 4. 
As for the label correction method, it is still unclear whether the choice of the pre-specified threshold matters and how the proposed method performs better than those in MBEM and EM. I suggest the authors conduct experiments replacing the proposed label correction method with those in MBEM and EM, to show the effectiveness of the proposed label correction method directly. 5. Since the proposed method trains two classifiers to reciprocally perform label correction, the comparison with other single-classifier methods will be unfair to some extent. The authors should conduct an ablation study of this factor. [1] Classification with noisy labels by importance reweighting. TPAMI 2015 [2] Making deep neural networks robust to label noise: A loss correction approach. CVPR 2017 [3] Learning from noisy singly-labeled data. ICLR 2018 [4] Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion. CVPR 2019 [5] Learning from crowds with sparse and imbalanced annotations. MLJ 2022 --- Reply to Comment 1.1.1: Title: Thank you for the follow-up questions. Comment: Thank you for the follow-up questions. Below are our responses; we hope they can address your concerns and clarify the miscommunication. #### 1. The anchor point assumption. * As mentioned in our previous response (point 2), the anchor point assumption can be relaxed. More concretely (and thanks to your question, which motivated us to extend our analysis), we further extend our theoretical result (Theorem 1) to a more relaxed condition by re-defining the (_pseudo_) anchor point of class $k$ as $\mathbb{P}(y=k|\mathbf{x}) \ge 1-\delta_n$ and denoting $D_{\delta_n}$ as the set of pseudo anchor points accordingly, where $\delta_n$ characterizes the relaxation of our assumption. 
Then, we can modify Theorem 1 and theoretically prove (the detailed proof will be included in our revised manuscript) that the following holds: $$\Pi[\theta\in\Theta:d_n(\theta, \theta_0)>M_n \epsilon_n+C\delta_n|D_{\delta_n}]\rightarrow 0$$ in $\mathbb{P}^{n}_{\delta_n}$ probability, with $C$ denoting a constant. From the modified theorem, we can observe that as $\delta_n$ converges more slowly, the Hellinger distance between the transition model and the true transition probability converges to zero at a slower rate. In other words, we theoretically justify that **the transition model will still converge even if collecting a set of anchor points is not guaranteed (at the cost of a slower rate)**. * On the empirical side, the accuracy of the pseudo anchor points for training the transition model is higher than 90% in all the cases (Table 1), and the theoretical analysis relaxing the anchor point assumption is given in the bullet point above. * We do not use the inferred labels to train the transition matrix model. The inferred labels are only used for training the base classifiers (Line 559 - Line 743 in train_proposed_plus.py) after the training of the transition matrix (Line 419 - Line 555 in train_proposed_plus.py) is finished. #### 2. More sparse case. * We conduct more experiments with varying numbers of annotators (10, 30, 50, 100), where each instance has only one label. Under these settings, we compare our approach with different baseline methods, including the instance-dependent method [A], label inference methods [B-C], and methods trained with two classifiers [D-E]. The results are given in Table 2, which exhibit the advantages of the proposed method. * To further investigate the influence of the sparsity of annotations on the proposed method, we also conduct experiments with different numbers of annotations (1, 3, 5, 7, 9) for each instance, and the results are presented in Table 3. 
As we increase the number of annotations, the performance of the proposed method approaches the training result with access to the true labels. * Since it is already the last two days of the discussion period, we do not have enough time to complete the experiments on the CIFAR100 dataset. All the results below are obtained on CIFAR10. We will add more experiments on the CIFAR100 dataset in the new version of our paper. #### 3 & 5. Use of two networks. * We do not use two networks to model the instance-dependent noise transition model. The two networks are used as base classifiers and reciprocally provide prior information in the label inference method; the training of the transition model is already finished at this stage. According to our records, the total training time of our method on CIFAR10 is about 2.8 hours, which is about twice that of the majority vote method (~1.3 hours), about 1.7 times that of MBEM (~1.6 hours), and about the same as the training time of other methods using two networks. The total training time (in hours) for the above-mentioned methods on CIFAR10 with 100 annotators is as follows.

| CE(MV) | MBEM | Co-teaching | Co-teaching+ | Ours |
| :---- | :---- | :---- | :---- | :---- |
| $1.31_{\pm 0.04}$ | $1.62_{\pm 0.03}$ | $2.56_{\pm 0.09}$ | $2.69_{\pm 0.07}$ | $2.78_{\pm 0.04}$ |

* There are indeed other methods in the existing literature [D-E] that utilize two networks, and for a fairer comparison, we also use them as baseline methods, denoted "Co-teaching" and "Co-teaching+" in Table 2. Moreover, when we compare the label correction methods later, we also use two networks to train and reciprocally perform label correction ("MBEM+" and "EM+" in Table 2). As shown in Table 2, our method still outperforms the baseline methods when two networks are employed.
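The pairwise likelihood-ratio label correction that the rebuttals compare against MBEM/EM can be sketched as follows. This is our simplified illustration, not the authors' code: it uses instance-independent transition matrices and an explicit class prior for brevity (the paper's model is instance-dependent), and the function name and threshold value are assumptions. A label is kept only when its likelihood beats every alternative by the threshold; otherwise the item is left out (abstention).

```python
def correct_labels(annotations, trans, prior, threshold=2.0):
    """annotations: list over items of {annotator_id: observed label}.
    trans[a][k][j]: assumed P(annotator a reports j | true label k)
    (instance-independent here for brevity). Returns the inferred
    label per item, or None when no label wins every pairwise
    likelihood ratio test by `threshold`."""
    out = []
    n_classes = len(prior)
    for item in annotations:
        # likelihood of the observed annotations under each candidate label
        lik = []
        for k in range(n_classes):
            v = prior[k]
            for a, j in item.items():
                v *= trans[a][k][j]
            lik.append(v)
        best = max(range(n_classes), key=lambda k: lik[k])
        # keep `best` only if it dominates every alternative
        ratios_ok = all(lik[best] > threshold * lik[k]
                        for k in range(n_classes) if k != best)
        out.append(best if ratios_ok else None)
    return out
```

Unanimous annotations pass the test easily, while a split vote between equally reliable annotators yields a likelihood ratio near 1 and is abstained on — which is how thresholding trades coverage for accuracy of the retained labels.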
Contrastive Training of Complex-Valued Autoencoders for Object Discovery
Accept (poster)
Summary: This paper improves the state-of-the-art in synchrony-based object-centric learning methods. The current SoTA is the Complex-valued AutoEncoder (CAE), but CAE is unable to handle even the Tetrominoes dataset. The paper presents CAE++ and CtCAE: CAE++ = CAE + minor architectural improvements; CtCAE = CAE++ + a contrastive loss term. Experiments show that CAE++ and CtCAE are the first synchrony-based methods that can handle colour images, and CtCAE can handle up to 6 objects per image. The contrastive loss term in CtCAE is intended to make the object clusters well separated in phase space. This is then evaluated in Table 2 by measuring inter-cluster and intra-cluster distances. The rest of the paper measures ARI and FG-ARI on the Tetrominoes, dSprites, and CLEVR datasets. Ablation experiments are used to justify the design choices. Strengths: Although this line of work is niche, it is promising and engages scientific curiosity among many readers. The paper advances this line of work closer to the overall SoTA among object-centric learning methods, although there is still a large gap here. The method is easy to understand and the paper is well written. The experiments cover most design choices and compare against CAE, the obvious baseline. The experiments also address important questions such as robustness across varying numbers of objects and generalization to more objects. Error bars are included in all tables. Weaknesses: Please incorporate the following minor improvements - In line 175, it is claimed that performance is closely matched at 32x32. This is not true for dSprites, where ARI-FULL is 0.90 vs 0.68. I believe it may be sufficient to say that the methods are ordered the same way across resolutions. - In Table 2, please indicate in row 1 whether higher is better (up arrow) or lower is better (down arrow) to aid interpretation. In some cases numbers have been bolded despite overlapping error bars. Are these statistically significant? 
- In lines 299-305, consider citing Odin (https://arxiv.org/abs/2203.08777), which does pixel-level BYOL followed by k-means to get objects. It is kind of a contrastive object-centric method (although mean-teacher distillation is probably the right word for BYOL). - Some background explanation about the encoder and decoder, and where the k-means happens to get masks, would make it even easier to understand the method for readers not familiar with CAE. The main drawback is that slot-based methods (DINOSAUR, https://openreview.net/forum?id=b9tUk-f_aG) are now working on real-world data whereas synchrony-based methods are yet to make the break into textured images. But I think this paper is still relevant and that future developments in this line of work may catch up to slot-based methods. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I've asked my questions in the weaknesses section. Additionally, a sweep over the $\beta$ parameter would be nice. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the weaknesses, although the empirical gap between slot-based and synchrony-based methods should be made clearer in my opinion. I do not see any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
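The ARI scores discussed throughout this review can be computed from pair counts over the contingency table of two labelings; below is a self-contained sketch of the standard formula (our own code, equivalent in intent to common library implementations such as scikit-learn's `adjusted_rand_score`).

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two labelings of the same items:
    1.0 for identical partitions (up to relabeling), ~0 for random."""
    n = len(labels_true)
    pairs = Counter(zip(labels_true, labels_pred))  # contingency table
    rows = Counter(labels_true)
    cols = Counter(labels_pred)
    index = sum(comb(c, 2) for c in pairs.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:
        return 1.0  # both labelings trivial (single cluster or singletons)
    return (index - expected) / (max_index - expected)
```

FG-ARI, as used in this literature, is the same computation restricted to foreground pixels (background excluded from both labelings).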
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback on our work and for the highly positive comments on the strengths of our contributions. We are very happy to hear that the reviewer found the method easy to understand, the paper well written, and the experiments thorough. Below are our responses to your questions/comments: *“In line 175, it is claimed that performance is closely matched at 32x32. This is not true for dSprites ARI-FULL is 0.90 vs 0.68. I believe it may be sufficient to say that the methods are ordered the same way across resolutions.”* We thank the reviewer for highlighting this error. We will make the requested change by indicating the broader point that the various models continue to be ordered in the same way across resolutions. *“In Table 2, please indicate in row 1 whether higher is better (up arrow) or down is better (lower arrow) to aid interpretation.”* Thank you for this suggestion; we will make the requested change to Table 2 in the updated manuscript. *“In lines 299-305, consider citing Odin (https://arxiv.org/abs/2203.08777) which does pixel level BYOL followed by kmeans to get objects."* We thank the reviewer for pointing out this relevant work. We will cite and discuss this work in the related work section under “Contrastive Learning with object-centric models”. *“The main drawback is that slot based methods (DINOSAUR, https://openreview.net/forum?id=b9tUk-f_aG) are now working on real world data whereas Synchrony based methods are yet to make the break into textured images.”* We share the view of the reviewer (noted in lines 312-313) that the current synchrony-based family of models lags behind state-of-the-art slot-based approaches to real-world visual complexities. 
Synchrony-based models have received far less attention: the research community has instead directed significant algorithmic/engineering efforts over the last several years at scaling slot-based methods from simple multi-object datasets (MNIST/Shapes/Tetrominoes) to the current visual complexities (MoVi-C/E). We hope the research community invests a similar amount of effort towards scaling synchrony-based models as well. Regarding where the ‘masks’ used by K-means for clustering are obtained: we take the phases at the decoder output layer as inputs for the clustering algorithm to compute the object assignments (please refer to L101-103). *“Additionally a sweep over the $\beta$ parameter would be nice.”* We initially evaluated several values for the $\beta$ parameter, such as [0.01, 0.001, 0.0001, 0.00001, 0.000001], and quickly settled on the value of 0.0001 (kept constant across all experiments in this work) as it led to better training and grouping performance across different datasets. As per your request, we additionally ran a sweep for $\beta \in [0.01, 0.001, 0.0001, 0.00001, 0.000001]$ on the 32x32 dSprites dataset. Here are the ARI scores we obtained for various values of $\beta$:

| $\beta$ | ARI-FG | ARI-FULL |
| :---- | :---- | :---- |
| 0.01 | $0.20\pm0.06$ | $0.14\pm0.04$ |
| 0.001 | $0.43\pm0.05$ | $0.66\pm0.11$ |
| 0.0001 | $0.48\pm0.03$ | $0.68\pm0.13$ |
| 0.00001 | $0.48\pm0.07$ | $0.53\pm0.05$ |
| 0.000001 | $0.46\pm0.04$ | $0.49\pm0.11$ |

This extra sweep confirms that $\beta=0.0001$ (used throughout the work) is a good choice. Please let us know if there are any concerns you have raised that our responses have not yet addressed satisfactorily. --- Rebuttal Comment 1.1: Comment: Thank you for the sweep over $\beta$ and addressing the remaining concerns. I continue to support the acceptance of this paper.
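The grouping step mentioned in the rebuttal above (K-means on the phases at the decoder output) can be sketched as follows. This is our own simplification: phases are embedded on the unit circle so that angles near 0 and 2π are treated as close, and plain Lloyd's algorithm is run; feature dimensionality, initialization, and iteration count are all illustrative assumptions, not the paper's exact implementation.

```python
import math
import random

def cluster_phases(phases, k, n_iter=20, seed=0):
    """Assign each pixel phase (radians) to one of k object groups.
    Simplified sketch of synchrony-based grouping: embed phases on
    the unit circle, then run plain k-means (Lloyd's algorithm)."""
    pts = [(math.cos(p), math.sin(p)) for p in phases]
    rng = random.Random(seed)
    centers = rng.sample(pts, k)  # initialize centers from data points
    assign = [0] * len(pts)
    for _ in range(n_iter):
        # assignment step: nearest center in the embedded space
        for i, (x, y) in enumerate(pts):
            assign[i] = min(range(k),
                            key=lambda c: (x - centers[c][0]) ** 2
                                          + (y - centers[c][1]) ** 2)
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [pts[i] for i in range(len(pts)) if assign[i] == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return assign
```

The unit-circle embedding is the key detail: clustering raw angles would incorrectly separate phases just below 2π from phases just above 0.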
Summary: CAEs hold potential to address several issues associated with slot-based representation. They may provide flexible object granularity, enable the extraction of part-whole hierarchies as needed, and provide faster training speeds. However, testing of the original CAE was confined to grayscale images and a small number of objects (2-3). This paper explores the question: how can we scale up CAEs beyond this initial scope? For this, the paper takes the following approach: 1. Discovers certain architectural modifications that make original CAE perform much better on more complex datasets, in this case, Tetris, CLEVR, and dSprites. 2. However, noting that this is not enough, the paper proposes a novel contrastive loss, which improves the segmentation performance even further when added to the original reconstruction loss of CAE. Strengths: 1. Useful ablations show that architectural optimizations can be important to achieve the full potential of CAE. A good thing about some of these optimizations is that they actually relax and don’t complicate the original architecture e.g., by removing sigmoid activation. 2. Empirically show higher storage capacity in terms of the number of objects than CAE. 3. Paper is largely well-written. Weaknesses: 1. While it is interesting that contrastive loss helps the segmentation performance, it somehow does not provide me new insights about the CAE framework itself. For instance, it doesn’t illuminate why separability was poor in the original CAE in the first place. As such, I am a bit worried that the proposed solution addresses a symptom rather than the root cause. 2. Line 215: *“The results in Table 2 indicate that indeed, intra-cluster distance (the last column) is smaller for CtCAE than CAE++ for every dataset, especially for the most difficult ones”* It is actually difficult to conclude this from Table 2 considering the large standard deviations of these measurements. 
My suggestion would be to tone down this claim and/or confine this result to the appendix. Regarding the separability results more broadly, it is not clear whether separability in the representation space (as defined in this paper) is actually important with respect to the things we may eventually care about e.g., downstream performance or other tasks. 3. (Optional) It would be good to also report performances of some slot-based baselines, not for strict comparison, but to help the reader position the current performance in a more complete perspective. I see a line mentioned about it in the conclusion but without a pointer to a supporting result. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. Why is CAE performance not reported in Table 3? 2. Why not apply the contrastive loss on all layers rather than just the output of the encoder and the decoder? It appears a bit arbitrary to choose two specific layers to apply this loss. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the paper discusses the limitations relative to current slot-based methods, but one should also consider the potential benefits of the CAE family of models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for giving us valuable feedback and critiques on our manuscript. We’re happy to hear that the reviewer found our paper well-written and the ablation studies on architectural design useful and simplifying. Below are our responses to your questions/comments: *“... does not provide me new insights about the CAE framework itself. For instance, it doesn’t illuminate why separability was poor in the original CAE in the first place.”* Our intuition for designing the contrastive loss stems from the observation that training the CAE model with solely the pixel-level reconstruction objective provides insufficient optimization pressure to guide the learning of the bias terms ($b_{\psi}$) towards discovering the various independent factors (objects) in the scene. Beyond that, we do not claim to provide further insights into why just the phase interference mechanisms in CAE activations are insufficient to allow it to learn meaningful groups on more complex datasets (like the ones tested here). We believe this could be an interesting analysis for future work. *“... difficult to conclude this from Table 2 considering the large standard deviations of these measurements.”* We thank the reviewer for highlighting this aspect of the results in Table 2. We agree with the reviewer, and will tone down the strong claim in the updated manuscript as follows: The results in Table 2 show that the mean intra-cluster distance (the last column) is smaller for CtCAE than CAE++ on two (dSprites and CLEVR) of the three datasets. However, we believe it is important to retain Table 2 in the main text due to its relevance to the proposed contrastive loss. *“... 
it is not clear whether separability in the representation space (as defined in this paper) is actually important with respect to the things we may eventually care about e.g., downstream performance or other tasks.”* Separability improves the ability of the model to independently group/store the representations of various objects in a scene. This internal separation of information allows the model to generalize better in compositional scenarios, such as varying the number of objects or novel combinations of known objects, by avoiding the “superposition catastrophe” problem [1]. The ability to separately maintain the information of different object instances also allows us to compute relational factors between them, which is useful for tasks that involve object-level reasoning. This is the same motivation behind the task of perceptual grouping with slot-based models [1]. We do not make any specific claims about how “separability” is positively correlated with downstream task performance in our work. *“(Optional) It would be good to also report performances of some slot-based baselines, not for strict comparison, but to help the reader position the current performance in a more complete perspective. I see a line mentioned about it in the conclusion but without a pointer to a supporting result.”* We thank the reviewer for this suggestion (as was also noted by Reviewer BSdH). We agree to this recommendation and will update the manuscript adding the grouping results for a baseline SlotAttention model to Table 1. For reference, SlotAttention achieves ARI-FG of $98.8\pm0.3$ for CLEVR, $91.3\pm0.3$ for dSprites and $99.5\pm0.2$ for Tetrominoes (results from Emami et al. [2]). *“Why is CAE performance not reported in Table 3?”* This was because the CAE baseline does not learn any meaningful grouping at all on dSprites and CLEVR as can be seen in Table 1. It achieves very low ARI-FG and ARI-FULL scores (near zero in several cases). 
Therefore, we decided to remove it from the results in Table 3, since at this very low level of grouping performance we cannot draw any meaningful conclusions about its ability to generalize to different numbers of objects in the scene. *"Why not apply the contrastive loss on all layers rather than just the output of the encoder and the decoder? It appears a bit arbitrary to choose two specific layers to apply this loss."* This is a useful suggestion (as was noted by Reviewer QpC7 as well). We did experiment with various configurations applying the contrastive loss to different layers of both the encoder and decoder modules, including the reviewer’s recommendation of applying the contrastive loss to every convolutional layer. However, we observed that adding more such contrastive loss terms did not help improve the grouping performance, so we utilized the minimal (but still the best-performing) choice of contrasting only the encoder and decoder outputs. Please let us know if there are any concerns that the reviewer raised that our responses have not addressed satisfactorily. If the reviewer found our response useful, could the reviewer please consider increasing the score? Thank you very much! References: [1] Greff et al., “On the Binding Problem in Artificial Neural Networks”, 2020. [2] Emami et al., “Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations”, ICML 2021. --- Rebuttal Comment 1.1: Title: Thank You Comment: My biggest concern is still whether contrastive loss is the right direction to go from the original CAEs. To me, new contrastive loss somehow contradicts the main promise of CAEs. CAEs are purported to take us away from hard clustering decisions about what objects are --- to leave flexibility for the downstream user if they want to attend to a part or a whole by giving them a "map of phases". 
By enforcing separability by force (in this case by asking pixels/patches with similar visual information to become close in phase space), we are going back to square one --- back to hard clusters of slot attention. I believe in the promise of CAEs but I think a good solution to scaling CAEs may come from a better understanding of the original CAEs (which authors say will fall in the realm of future work) and also asking why hard separability should even be desired. I am fully onboard about going from CAE to CAE++ which introduces modifications to aid optimization of the model. But for the reasons mentioned above, I am somehow finding it difficult to convince myself to go from CAE++ to CtCAE. I will maintain my score for now but I may be a bit less inclined towards acceptance than before. --- Reply to Comment 1.1.1: Comment: Thank you very much for your comment and initiating this discussion on the high-level motivation behind our work. > *“To me, new contrastive loss somehow contradicts the main promise of CAEs.”* > *“By enforcing separability by force (in this case by asking pixels/patches with similar visual information to become close in phase space), we are going back to square one --- back to hard clusters of slot attention.”* Unlike slots, our contrastive loss does not force activations into a fixed set of clusters, so the core essence of synchrony-based approaches—the flexible and graded quality of clusters—is still preserved. The entire object is not simply assigned a single phase value. It is still up to the downstream user to extract clusters at the granularity level needed by the specific task, on demand, by quantizing the continuous “map of phases” at the appropriate level. The “hard” decision about what the “right” objects are is still left to the specifics of the clustering procedure used, which can be conditioned on the downstream task and is not hard-coded during the training phase. 
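To give a toy illustration of this on-demand extraction step, here is a minimal Lloyd's-style KMeans sketch in NumPy. The array shapes, the deterministic farthest-point initialization, and the omission of details such as magnitude masking are all simplifications made for this sketch; they are not the paper's exact pipeline.

```python
import numpy as np

def kmeans_phase_masks(phases, k, n_iter=20):
    """Quantize a continuous phase map into k discrete object masks.

    phases: (H, W, C) array of per-pixel phase vectors (e.g. decoder output).
    Returns an (H, W) integer array of cluster assignments.
    Uses a toy Lloyd's algorithm with deterministic farthest-point
    initialization (an assumption; any standard KMeans init would do).
    """
    H, W, C = phases.shape
    X = phases.reshape(-1, C)
    # Deterministic init: start from the first pixel, then repeatedly
    # add the pixel farthest from all centroids chosen so far.
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.stack(centroids).astype(float)
    for _ in range(n_iter):
        # Assignment step: each pixel joins the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels.reshape(H, W)
```

Choosing a different `k` extracts a coarser or finer grouping from the same continuous phase map, which is the flexibility referred to above.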
> *“good solution to scaling CAEs may come from a better understanding of the original CAEs”* We agree with the reviewer, but “better understanding of the original CAEs” itself seems to be an important open-ended research challenge. This difficulty is also partially due to the lack of maturity of these models. The CAE architecture and earlier works by Reichert et al. [37] (following the reference numbering in our paper) motivated the binding mechanism in synchrony-based models simply via phase interference (addition of complex-valued activations). In practice, it still required the use of the gating mechanism, which modulates the output response of a unit as a function of the phase difference such that even activations with opposite phases do not completely cancel out their magnitude responses. This architectural change was necessary for these models to learn meaningful structure in the phase space. Simply training these models to reconstruct pixels without this gating mechanism was insufficient to drive the emergence of any meaningful structure in their phase space (ablation without gating in Table 2 in Löwe et al. [33]). This highlights a “gap” between the conceptual motivations in synchrony-based models and the practical engineering design choices required to get this class of models to train/learn meaningful visual structure. At this stage, general explorations in the space of both architectures and training objectives seem necessary to gain empirical knowledge about these models. This is especially the case when there is no guarantee that the mainstream pixel-level reconstruction loss is enough for the model to behave as expected; in fact, the old literature on synchrony models even used a “supervised learning approach” (see Mozer et al. [35]) to encourage phase learning/spreading. Further, such explorations could also go hand in hand with a future, better understanding of the model architecture. 
> *“asking why hard separability should even be desired”* We agree with the reviewer that “hard” separability is not desirable a priori. However, some prior about the independent structure of the visual input must be injected, either via the architecture or the training objective, for the system to discover these independent factors in its phase map. Therefore, here we have chosen to create minimal optimization pressure, via our regularization objective, for the network to discover the “right” independent factors in the visual scene.
Summary: This paper presents a collection of small modifications of the Complex-valued AutoEncoder architecture leading to improved reconstruction/segmentation performance on simple RGB datasets; a contrastive learning objective is introduced to further improve the performance. Strengths: - The paper's story is easy to follow, and the motivation is clearly stated. - Architectural choices are justified through ablation studies. - Experimental results are averaged over multiple random seeds. Weaknesses: - The captions of figures/tables could be improved to make it easier for readers to understand what is displayed. For example, Figure 1 is hard to understand without reading the relevant section. - Table 1: While I see that this is a comparatively new line of research on new approaches for object-centric learning, I'd still appreciate it if the authors compared to state-of-the-art models so that readers get a feeling for how far the gap still is/how small it has become. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Eq. 2: A short note on why this separation is used and not one that uses the magnitude and phase could be helpful. - L134: Moving the reference to Figure 1 to the previous paragraph might make it easier to follow the general idea of this loss. - Eq. 6: Is this the loss for a single input image, meaning that the loss is then averaged over a batch? Or are the negative/positive samples chosen across a batch? If it's not across the entire batch, what speaks against doing this? - Figure 2: An explanation of what the last three columns show/what one can learn from looking at them can be useful for readers. - L218: Please elaborate on why the minimum distance is more relevant than the average. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: While the authors mention the strongest limitation, a more extensive discussion of this approach's limitations is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and suggestions to improve our work. We are happy that the reviewer found the manuscript easy to follow and the main ideas presented to be clearly motivated. Below are our responses to the comments/questions from the reviewer: *“ … captions of figures/tables could be improved to make it easier for readers to understand what is displayed.”* Indeed, due to the space limitation, we had to avoid redundancy between the caption and section text. To improve this, we will add a link in the caption that refers to the corresponding text in the section (which provides the necessary contextual information), wherever appropriate. We thank the reviewer for pointing this out. *“I'd still appreciate it if the authors compared to state-of-the-art models so that readers get a feeling for how far the gap still is/how small it has become.”* We thank the reviewer for this suggestion (as was also requested by reviewer bBXA). We agree to this recommendation and will update the manuscript, adding the grouping results for a baseline SlotAttention model to Table 1. For reference, SlotAttention achieves ARI-FG of $98.8\pm0.3$ for CLEVR, $91.3\pm0.3$ for dSprites and $99.5\pm0.2$ for Tetrominoes (results from Emami et al. [3]). *“Eq. 2: A short note on why this separation is used and not one that uses the magnitude and phase could be helpful.”* This use of the Cartesian form (real and imaginary components of a complex-valued activation) in a layer’s forward pass was the formulation introduced by Reichert and Serre [1], which has been maintained in Löwe et al. [2] as well. *“L134: Moving the reference to Figure 1 to the previous paragraph might make it easier to follow the general idea of this loss.”* We thank the reviewer for this suggestion and agree to move the reference to Figure 1 to the paragraph above to improve the readability. *“ Eq. 
6: Is this the loss for a single input image, meaning that the loss is then averaged over a batch? Or are the negative/positive samples chosen across a batch? If it's not across the entire batch, what speaks against doing this?”* The contrastive loss is computed on a per-image basis and averaged across the samples in a batch. The positive and negative pairs are extracted from within one image. This is because we would like to enforce the phases of pixels within a single image to be far apart or close depending on the objects they belong to. The phase values of objects across images are not relevant to achieve this instance-level grouping of objects within a scene. For example, consider 2 images: img_1 contains a red ball and a green cube; img_2 contains a red ball, a green cube, and a yellow triangle. Ideally, we would want the phases of the ball and cube in img_1 to be separated by 180 degrees, whereas in img_2 their pairwise difference is ideally 120 degrees. The red ball and green cube cannot have the same absolute phase values across both these images. Rather, what matters more is the relative phase between objects within a single image (scene). Therefore, it would be incorrect to contrast object instances across image samples in a batch to achieve instance-level grouping. *“Figure 2: An explanation of what the last three columns show/what one can learn from looking at them can be useful for readers.”* We thank the reviewer for this suggestion. We will include descriptive text for e.g., “column 5 shows the radial plot with the phase values from $-\pi$ to $\pi$ radians (colors correspond to phase angles), column 6 shows the raw phase values (in radians) averaged over the 3 output channels as a heatmap (where colors correspond to the colors from Phase Rad. column 5) and column 7 shows the magnitude component of the complex-valued outputs” in the updated manuscript to assist the reader’s interpretation of the last three columns' contents in Figure 2. 
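To make the per-image computation described above concrete, here is a hedged NumPy sketch. The anchor/positive/negative selection rule and the temperature `tau` are illustrative placeholders rather than the paper's actual choices; only the overall shape (a softmax over exponentiated cosine similarities, computed independently per image and then averaged over the batch) reflects the scheme described above.

```python
import numpy as np

def per_image_contrastive_loss(phases, anchor, pos, negs, tau=0.1):
    """InfoNCE-style loss for ONE image.

    phases: (N, C) array of per-location phase feature vectors.
    anchor, pos: indices of the anchor location and its positive.
    negs: indices of negative locations, drawn from the SAME image.
    NOTE: how these indices are chosen is a simplified placeholder,
    not the paper's actual pair-selection mechanism.
    """
    def cos_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos_term = np.exp(cos_sim(phases[anchor], phases[pos]) / tau)
    neg_terms = sum(np.exp(cos_sim(phases[anchor], phases[n]) / tau)
                    for n in negs)
    return -np.log(pos_term / (pos_term + neg_terms))

def batch_loss(batch_phases, pairs, tau=0.1):
    """Average the per-image losses over the batch. Pairs are formed
    strictly WITHIN each image, never across images, since only the
    relative phases within a scene matter."""
    losses = [per_image_contrastive_loss(p, a, pp, nn, tau)
              for p, (a, pp, nn) in zip(batch_phases, pairs)]
    return float(np.mean(losses))
```

This sketch makes the batching question explicit: each image contributes one loss term built only from its own locations, and cross-image pairs never appear.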
*“L218: Please elaborate on why the minimum distance is more relevant than the average.”* Separability indicates how difficult it is for the phase pattern of one object to interfere with another. During the evaluation procedure to extract discrete object assignments from the continuous phase maps, a clustering procedure is applied on the phase values. Our clustering procedure (KMeans) assigns each phase value to the cluster whose centroid is at the closest (minimum) distance. Therefore, minimum distance is more relevant to the accuracy of the extracted object assignments through clustering than average distance. *“ … a more extensive discussion of this approach's limitations is missing.”* We would like to highlight the qualitative failure modes of CAE++/CtCAE discussed in lines 259-263. We visualize and qualitatively examine these failure modes in Figure 4 in the main text as well as provide more such qualitative samples shown in Figure 14 in Appendix C. Please let us know if you have some specific recommendations in mind for the discussion of limitations that we have not included in the paper. Please let us know if any remaining concerns raised by the reviewer have not been sufficiently addressed by our responses. References: [1] Reichert et al., “Neuronal Synchrony in Complex-Valued Deep Networks”, ICLR 2014. [2] Löwe et al., “Complex-Valued Autoencoders for Object Discovery”, TMLR 2022. [3] Emami et al., “Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations”, ICML 2021. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my question and addressing the points I raised. As mentioned in my original review, I believe that while the ansatz presented in this work is currently performing worse than other established (and long-refined) approaches, there is nevertheless value in finding new approaches and would, hence, recommend accepting this paper.
Summary: This paper proposes several improvements over the Complex AutoEncoder (CAE) to scale it to more complex multi-object datasets. The CAE is a synchrony-based model that learns object-centric representations by injecting complex-valued activations into every layer of its architecture, but it is only applicable to grayscale images with up to 3 objects. In this paper, two modifications to the original architecture and a novel auxiliary loss function are proposed that make the CAE applicable to RGB datasets with more than 3 objects. Strengths: Overall, the paper is very well written and easy to follow. The introduction provides a good overview over the relevant research and a convincing motivation for the proposed approach. The methods section describes the proposed improvements and novel contrastive loss clearly. The results section is well-structured, and the related works section embeds the proposed research well into the existing research landscape. The proposed contrastive loss seems like an elegant solution towards improving the "separability" of objects in the phase space. The complex-valued features contain two separate types of information (feature presence in the magnitudes and feature binding in the phases), and the proposed contrastive loss smartly utilizes this separation. The provided ablation studies provide interesting insights into the proposed model improvements and design choices. Weaknesses: 1) The main problem with the paper is that it seems to evaluate the models incorrectly. As described in section C.1 - "RGB Images" of the paper proposing the CAE [1], it cannot simply be applied to RGB images due to its evaluation method. During evaluation of the CAE, as well as in the proposed approach, features with small magnitudes are masked out, as they may exhibit increasingly random orientation values. However, this rescaling may lead to a trivial separation of objects in RGB images. 
For an image containing a red and a blue object, for example, the reconstruction will assign small magnitudes to the color channels that are inactive for the respective objects. By masking out values with small magnitudes, objects will subsequently be separated based on their underlying color values, rather than their separation as learned by the model. As far as I can tell, the paper does not describe any mechanism that would prevent the evaluation from doing this. I believe this effect is even visible in the qualitative examples. In Figure 3, the phase image of the CtCAE model on the dSprites example does not seem to clearly separate the purple and orange objects. The mask, however, separates them perfectly. In my eyes, this is only possible if the evaluation procedure makes use of additional (color) information besides the pure phase output. Overall, this makes it unclear whether the presented results are valid. 2) Eq. 6: When measuring the cosine distance between the phase feature vectors, extra care would be necessary to take into account their circular nature. Otherwise, two phase vectors $a=(0,0,0)$ and $b=(2 \pi, 2 \pi, 2 \pi)$ would have a large cosine distance to one another even though they represent the same angles. Has this been taken into account? 3) It would be interesting to see an evaluation of how the proposed improvements influence the separation of objects in the latent space of the model. Ultimately, the goal of object discovery is to create object-centric representations within the model, but the paper currently only evaluates the separation at the output. Minor points: - line 25: not all slot-based method work with iterative attention procedures - Personally, I would move the CAE overview into a separate Backgrounds section - line 90: $m_\psi$ can technically not be called a magnitude, as it may contain negative values. - Eq. (6): Is there no exp() applied to the cosine distances, such that the equation would correspond to a Softmax? 
- Using a dimensionality of 32x32 for Tetrominoes changes the shapes of the objects. While at the original dimension, all objects are made up from perfect squares, these squares get squeezed randomly when resizing the images. Thus, it would be better to make use of the original input dimension. --- [1] Löwe, S., Lippe, P., Rudolph, M., & Welling, M. (2022). Complex-valued autoencoders for object discovery. arXiv preprint arXiv:2204.02075. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: 1) As described above, I believe the main weakness of this paper is its evaluation procedure. If the authors could come up with a solution to the described problem or convince me that this problem does not apply to their setup, I'd be happy to increase my score. Other questions: 2) Why do you not apply the contrastive loss to every (convolutional) layer of the architecture? 3) Do you have an explanation as to why the standard deviation is so large for some of the results of the CtCAE (Table 1)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback on our work. We’re glad that the reviewer found our paper well-written, the proposed contrastive loss well-motivated and an elegant solution to the problem and the experiments/ablation studies insightful. Below are our responses: *”main problem with the paper is that it seems to evaluate the models incorrectly.”* It is true that in our setting, certain color information may contribute to the separation process, and the example by the reviewer provides an illustration. However, the reviewer's claim "By masking out values with small magnitudes, objects will subsequently be separated based on their underlying color values, rather than their separation as learned by the model." cannot be true, since CtCAE largely outperforms CAE++ (see Table 1) while both of them have a similar (almost perfect) reconstruction loss and (of course) are evaluated in the same way. This clearly indicates that the potential issue pointed out by the reviewer only has a marginal effect in practice in our setting, and that separation learned by the model is crucial to achieve good performance. In fact, “pure” RGB colors (as in the reviewer's example “containing a red and a blue object”) very rarely occur in images from the multi-object datasets (dSprites, CLEVR) used in this work. The percentages of pixels with a magnitude below 0.1 are 0.44%, 0.88%, and 9.25% for CLEVR, dSprites, and Tetrominoes respectively. While 9% on Tetrominoes is not marginal, we empirically observe that this is not an issue in practice, since we observe many examples in Tetrominoes where two or three blocks with the same color get grouped into different clusters by either CAE++ or CtCAE (see three yellow blocks in Figure 3, two magenta blocks in Figure 8, two magenta blocks in Figure 9), demonstrating that the separation is clearly not simply based on color values of pixels. 
Therefore, while we agree that our object extraction process should ideally be fixed for these edge cases, we leave that as an investigation for future work. *”In Figure 3, the phase image of the CtCAE model on the dSprites example does not seem to clearly separate the purple and orange objects ....... only possible if the evaluation procedure makes use of additional (color) information besides the pure phase output.”* We would like to emphasize that the ‘Phase Img.’ (shown in columns 3/6/9 of Figure 3) is the average of the 3 output phase values plotted as a heatmap. Due to this averaging, it cannot distinguish between different phase-value triples that share the same mean, e.g., $[0.1\pi, 0.5\pi, 0.2\pi]$, $[0.2\pi, 0.1\pi, 0.5\pi]$, and $[0.5\pi, 0.2\pi, 0.1\pi]$. Therefore, the visual discrepancies between a ‘Mask’ and corresponding ‘Phase Img.’ are due to the use of the “average” phase plot. (Note that for the evaluation the KMeans clustering is applied on all 3 phase values (of a pixel) at the decoder output and is therefore able to discriminate between these different phase values, leading to a correct prediction of the ‘Mask’.) We understand this aspect of the visualization is not explained and will add a note in the updated manuscript. Thank you for drawing our attention to this. *”Eq. 6: When measuring the cosine distance between the phase feature vectors, extra care would be necessary to take into account their circular nature.”* We investigated a variant that accounted for the circularity of phase space but it did not improve the results. We observed in practice that phases are initialized from zero and rarely go beyond the range $[-0.75\pi, 0.75\pi]$ (as observed in Figures 2 and 3). *”.. would be interesting to see an evaluation of how the proposed improvements influence the separation of objects in the latent space of the model ... 
currently only evaluates the separation at the output.”* Evaluating the separation of objects only at the output level follows the method used by the original CAE model. It allows a fair comparison of grouping performance learned by the baseline models and our proposed variants. We agree with the reviewer that this idea of evaluating object separation at several layers within the synchrony-based model is interesting and valuable, but it is outside the scope of our current work. *”Why do you not apply the contrastive loss to every (convolutional) layer?”* We experimented with various configurations applying the contrastive loss to different layers of the encoder and decoder, including applying the loss to every layer. We observed that adding more such loss terms did not help improve grouping performance, so we utilized the minimal (but still the best-performing) choice of contrasting only the encoder and decoder outputs. *Do you have an explanation as to why the standard deviation is so large for some of the results of the CtCAE (Table 1)?* Regarding the std-dev for CtCAE in Table 1, please note that the comment about the larger std-dev for CtCAE compared to other models is only true for the ARI-FULL score on the CLEVR dataset. Note that on some dataset variants like dSprites (32x32) or CLEVR (96x96) the std-dev for CtCAE is lower and in other cases it is comparable to the std-dev of CAE++. *”On minor points.”* We thank the reviewer for the suggestion to move the CAE description to a background section and agree to make this change. Thank you for pointing out that $m_{\psi}$ cannot technically be called a magnitude due to potential negative values; we will correct this. We will correct the statement (line 25) to include a broader reference (not just iterative attention) to segregation mechanisms in slot-based models. Finally, thank you for spotting the error in Eq. 6. Indeed, we exponentiate the cosine distances, resulting in a Softmax term. 
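As a side note on the circularity question above, a hedged sketch of one way such a circularity-aware variant could look (purely illustrative; the exact variant we investigated may have differed) is to embed each phase angle as a point on the unit circle before computing cosine similarity:

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def circular_embed(phi):
    """Map each phase angle to (cos, sin) coordinates so that angles
    differing by 2*pi become identical vectors."""
    return np.concatenate([np.cos(phi), np.sin(phi)])

# Two phase vectors representing the SAME angles, one shifted by 2*pi.
a = np.array([0.1, 3.0, 0.1])
b = a + np.array([2 * np.pi, 0.0, 0.0])

raw = cos_sim(a, b)                                   # penalizes the 2*pi wrap
circ = cos_sim(circular_embed(a), circular_embed(b))  # treats them as equal
```

With raw angle vectors the $2\pi$ shift lowers the cosine similarity even though the angles are equivalent; after the $(\cos, \sin)$ embedding the two vectors coincide. As noted above, in practice phases rarely left $[-0.75\pi, 0.75\pi]$, which is why the non-circular variant sufficed.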
We apologize for this typo and will correct it in the updated manuscript. We hope our responses resolve the major comments/questions posed by the reviewer. Please let us know if any other questions are yet to be addressed. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Besides the two concerns described below, all my questions have been adequately addressed. 1) ”main problem with the paper is that it seems to evaluate the models incorrectly.” While it is true that the current form of evaluation has a negligible influence on the comparison between the CAE++ and CtCAE models, the same cannot be said when comparing against other models. For one, the CAE achieves a much poorer reconstruction error, which will limit its ability to benefit from any color information that may contribute to the evaluation. Most importantly, the current setup renders a comparison unfair with evaluation procedures in which color information will never influence the segmentation result. For example, in response to other reviewers, the authors stated that they will add a SlotAttention baseline to Table 1. SlotAttention's evaluation procedure is entirely independent of the color information in the images, however, making a fair comparison to the current results impossible. Similarly, this may influence future work when trying to compare against the results presented in this paper, or when trying to apply this model to other datasets with different pixel statistics. Thus, I would propose to include a suitable disclaimer in the paper that describes the limitations of the presented results and evaluation procedure. 2) "Evaluating the separation of objects only at the output level follows the method used by the original CAE model." The original CAE paper includes a (limited) evaluation of the feature separation in the latent space in Figure 5. 
Both in this evaluation, as well as in new work on synchrony-based models [1], the separation in the latent space seems to be limited to two objects. Considering that the proposed method claims to improve separability, it would be very interesting to see whether this applies to the latent space as well, and whether this would help to overcome the aforementioned limitation. [1] Löwe, Sindy, et al. "Rotating Features for Object Discovery." arXiv (2023). --- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt response and your engagement! > *“While it is true that the current form of evaluation has a negligible influence on the comparison between the CAE++ and CtCAE models, the same cannot be said when comparing against other models. For one, the CAE achieves a much poorer reconstruction error, which will limit its ability to benefit from any color information that may contribute to the evaluation.”* Thank you for agreeing that this evaluation has negligible influence on the comparison between CAE++ and CtCAE models. We respectfully disagree that this puts CAE at a disadvantage. Firstly, if the method is unable to reconstruct images completely (CAE in Tetrominoes), it is intuitively expected that it would have a low (zero) clustering score. Secondly, it can be seen that CAE reconstruction on CLEVR is close to CAE++ and CtCAE (e.g. Table 1 CLEVR 32x32 - CAE++ actually has the lowest MSE whereas CtCAE and CAE are close). Even in this case, CAE has a 5x lower ARI-FG score and a 7x lower ARI-FULL score compared to CtCAE. > *”Most importantly, the current setup renders a comparison unfair with evaluation procedures in which color information will never influence the segmentation result. For example, in response to other reviewers, the authors stated that they will add a SlotAttention baseline to Table 1. 
SlotAttention's evaluation procedure is entirely independent of the color information in the images, however, making a fair comparison to the current results impossible.”* We would like to respectfully disagree on this point too. The main evaluation procedure for both classes of methods (slot-based and synchrony-based) is the same in the current literature: it is about measuring the ARI score given some extracted alpha mask. There are, though, fundamental differences in the way the masks are “extracted” from these two classes of models. In slot-based ones, one typically takes the mixing alpha-mask at the decoder output. One could argue that in such methods taking the alpha mask of the latent attention would be a better option. Still, it’s a design choice. On the other hand, synchrony-based methods don’t have such explicit masks, so one has to devise a way of generating them, and all methods so far in the literature use KMeans clustering to do so. In sum, different methods have different ways of extracting masks as part of their design choices, and this is not an issue. What matters are the final evaluation metrics. Having said this, we should keep in mind that evaluation through ARI scores is not the end goal, but more a proxy when it comes to object-centric representation learning. The actual thing we care about is some downstream task that evaluates the usefulness of these representations, for example set prediction (as in the SlotAttention paper), a question-answering task, or use inside an agent. Downstream task performance of the synchrony-based models is another avenue for future research in terms of modes of evaluation of the quality of synchrony-based object representations. > *”Similarly, this may influence future work when trying to compare against the results presented in this paper, or when trying to apply this model to other datasets with different pixel statistics. 
Thus, I would propose to include a suitable disclaimer in the paper that describes the limitations of the presented results and evaluation procedure.”* We fully agree that the optimal way of generating clustering masks for synchrony-based methods is an open research problem, and we will add a disclaimer in the updated paper about the current limitations as requested by the reviewer. --- Rebuttal 2: Title: AC request for clarification Comment: Dear Authors, dear Reviewer QpC7, Thank you for engaging deeply in a discussion regarding the evaluation protocol for this paper. The precise way segmentation masks are obtained from the model(s) seems like an important point that should likely not go unnoticed for future readers of the paper, and discussing and understanding the pros/cons of this protocol will likely provide significant value to the community. While following along, I was wondering whether the issue could be settled by including experiments on 1) a dataset that contains (uniformly) colored backgrounds and 2) greyscale or even binary multi-object datasets, but with otherwise similar visual complexity as the ones presented in the paper? A particularly well-suited dataset for 1) would be the Objects Room dataset, which is available in the same library and format as the other datasets used in this paper, while for 2) the authors could consider adding experiments for greyscale CLEVR and binarized Multi dSprites as in the original Slot Attention paper [Locatello et al., 2020]. I would be curious to hear both from Reviewer QpC7 and the authors what you think about the value of adding such experiments. I do not expect that the outcome of these experiments will affect the accept/reject decision of the paper, but I think it would be very valuable to have more clarity on this issue for future readers of the paper, independent of the outcome of the peer review process at NeurIPS. Thank you. 
--Your AC --- Rebuttal Comment 2.1: Comment: Dear Authors and AC, thank you for your replies. As far as I understand the presented evaluation protocol, it may lead to issues whenever multi-channel inputs, such as RGB images, are used. Whenever these images contain pixels for which at least one channel has a value < 0.1, the evaluation procedure intertwines the separation of objects in the phases of the learned features with the objects' underlying color values. The authors have previously stated that this problem occurs very rarely in the multi-object datasets considered in the paper, stating that 0.44%, 0.88%, and 9.25% of pixels in CLEVR, dSprites, and Tetrominoes are affected. Since these numbers seemed suspiciously low to me, I ran my own calculations, which paint a very different picture. Using the code below, I find that 0.9%, 12.6%, and 100% of pixels in the training sets of CLEVR, dSprites, and Tetrominoes are affected. While these values might differ somewhat depending on the normalization used, this may render the results on the dSprites and Tetrominoes datasets uninterpretable. Even though some qualitative results show that the method can separate objects of the same color on the Tetrominoes dataset, it remains unclear how much of the remaining separation observed (between objects of varying colors and between objects and background) is actually due to the underlying model and how much is due to the underlying color values of the objects. As a result, I believe that, as it stands, only the results on CLEVR can be interpreted meaningfully. Regarding the AC's suggestion, I believe that the addition of another RGB dataset may only be helpful if that dataset is carefully designed to not contain pixels for which at least one channel has a value < 0.1. 
Alternatively, if the goal is to demonstrate the improved separability between objects and that the proposed method can represent more objects at the same time, a grayscale dataset could indeed be a good solution to circumvent the issues of the evaluation procedure.

---

Code:

```
import torch

# Assume images are in range [0, 1] and of shape [batch, channel, height, width].
low_magnitude_on_r = torch.stack(torch.where(images[:, 0] < 0.1))
low_magnitude_on_g = torch.stack(torch.where(images[:, 1] < 0.1))
low_magnitude_on_b = torch.stack(torch.where(images[:, 2] < 0.1))
low_magnitude_on_rgb = torch.concat((low_magnitude_on_r, low_magnitude_on_g, low_magnitude_on_b), dim=-1)

# Count each pixel only once, even when several channels are < 0.1.
pixels_with_low_magnitude = torch.unique(low_magnitude_on_rgb, dim=1).shape[1]
total_pixels = (images.shape[0] * images.shape[2] * images.shape[3])
low_magnitude_ratio = pixels_with_low_magnitude / total_pixels
```

---

Reply to Comment 2.1.1: Title: Response to Reviewer and AC comments Comment: Dear AC and reviewer, thank you for your engagement and for your constructive suggestions. Firstly, we would like to reply to the reviewer's suspicions about our calculations. In the initial review, the reviewer wrote: > (Reviewer QpC7 wrote): features with small magnitudes are masked out, … this rescaling may lead to a trivial separation of objects in RGB images. For an image containing a red and a blue object, for example, the reconstruction will assign small magnitudes to the color channels that are inactive for the respective objects. This states that the evaluation becomes an issue for “pure RGB colors” (as we called them in our first response), the ones that have one (or more) channel value *larger* than the threshold 0.1 and at least one value *smaller* than 0.1. The calculation that the reviewer presented here also includes the pixels for which *all* channels have a value below 0.1. 
These pixels are the black pixels in the image, belonging mostly (or, as in Tetrominoes, exclusively) to the background. Therefore, we think this calculation misses the point that we were all discussing in the previous replies. As the reviewer wrote, we were discussing pixels for which “at least one channel has a value < 0.1” but also *not all channels have a value < 0.1*. In code, this correction boils down to the following change:

```
pixels_with_low_magnitude = torch.unique(low_magnitude_on_rgb, dim=1).shape[1]
# Exclude the pixels for which *all* three channels are below the threshold.
all_channels_low = (images[:, 0] < 0.1) & (images[:, 1] < 0.1) & (images[:, 2] < 0.1)
pixels_with_all_channels_with_low_magnitude = all_channels_low.sum().item()
total_pixels = (images.shape[0] * images.shape[2] * images.shape[3])
low_magnitude_ratio = (pixels_with_low_magnitude - pixels_with_all_channels_with_low_magnitude) / total_pixels
```

Having said all this, one last point on the percentages (although please let’s not switch the discussion back to this point now). Even by the reviewer’s calculation (which we don’t agree with), the percentages on dSprites and especially the CLEVR dataset cannot account for the quantitative differences that are observed between the methods. Also, for Tetrominoes, as the reviewer acknowledged, qualitative results show that the method is able to correctly group objects even in images of 2 or 3 objects of *exactly* the same color. Regarding the following suggestions from the AC: > (AC wrote): While following along, I was wondering whether the issue could be settled by including experiments on > (AC wrote): 1) a dataset that contains (uniformly) colored backgrounds (..Objects Room dataset..) Thank you very much for this suggestion. In general, dSprites and CLEVR used in our experiments already have a non-zero background, but it is mostly grayscale, so the RGB-colored BG of Objects Room would expand on this. However, given the reviewer's worries that this dataset too might contain pixel values below 0.1, we are not sure what this experiment would add to the existing ones. 
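As an aside, the quantity under debate — pixels with at least one, but not all, channels below the 0.1 threshold — can also be computed directly with boolean channel masks. The following is a small, self-contained NumPy sketch of that counting logic (our own illustrative helper name and toy image, not code from either party in this thread):

```python
import numpy as np

def low_magnitude_ratio(images, thresh=0.1):
    # images: float array of shape [batch, channel, height, width], values in [0, 1].
    below = images < thresh          # per-channel low-magnitude mask
    any_low = below.any(axis=1)      # at least one channel below the threshold
    all_low = below.all(axis=1)      # every channel below (e.g. black background)
    affected = any_low & ~all_low    # the pixels under discussion
    return affected.sum() / any_low.size

# Toy 2x2 image: one pure-red pixel, one black pixel, and two bright gray pixels.
img = np.array([[[[1.0, 0.0], [0.5, 0.9]],
                 [[0.0, 0.0], [0.5, 0.9]],
                 [[0.0, 0.0], [0.5, 0.9]]]])
print(low_magnitude_ratio(img))  # 0.25: only the pure-red pixel is counted
```

The boolean-mask formulation avoids the index-list bookkeeping of the `torch.where`/`torch.unique` approach and makes the all-channels-low exclusion explicit.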
> (AC wrote): 2) greyscale or even binary multi-object datasets (..greyscale CLEVR and binarized Multi dSprites..), but with otherwise similar visual complexity as the ones presented in the paper? > (Reviewer QpC7 wrote): if the goal is to demonstrate the improved separability between objects and that the proposed method can represent more objects at the same time, a grayscale dataset could indeed be a good solution to circumvent the issues of the evaluation procedure. In our architecture, we contrast feature vectors both at the encoder and decoder outputs. However, for grayscale images we could apply the contrastive objective only to the encoder output features, which would then rely on edges, shape, texture, and several other abstract visual features to guide the phase separation process. That said, we would be happy to run these experiments if that would be sufficient to convince the reviewer.
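As a closing illustration of the earlier ‘Phase Img.’ point in this thread — that averaging the three output phases loses information which clustering the full phase vectors retains — the claim can be checked in a few lines of NumPy (illustrative phase values only, not taken from the paper):

```python
import numpy as np

# Three pixels whose per-channel phases are permutations of one another.
phases = np.pi * np.array([[0.1, 0.5, 0.2],
                           [0.2, 0.1, 0.5],
                           [0.5, 0.2, 0.1]])

# The averaged heatmap value is identical for all three pixels ...
means = phases.mean(axis=1)
print(np.allclose(means, means[0]))  # True

# ... but the full 3-d phase vectors stay pairwise distinct, so KMeans
# applied to these vectors can still place the pixels in different clusters.
dists = np.linalg.norm(phases[:, None, :] - phases[None, :, :], axis=-1)
print(bool((dists[np.triu_indices(3, k=1)] > 0).all()))  # True
```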
Rebuttal 1: Rebuttal: ### General Response to all Reviewers We have provided individual responses to each reviewer's questions/comments. We have not made any changes to the PDF of the paper to ensure the ease of referring to the line numbers, Figure/Table numbers as in the original version. We’ve provided additional results (Tables) as part of the individual rebuttal responses. We’re happy to continue engaging with all reviewers to resolve any remaining concerns.
NeurIPS_2023_submissions_huggingface
2023
Private Distribution Learning with Public Data: The View from Sample Compression
Accept (spotlight)
Summary: This paper studies differentially private learnability of probability distributions. The results put forward are very interesting! For example, they show that the public-private learnability of a class of distributions is connected to the existence of sample compression schemes as well as to an intermediate notion of learning they define. Strengths: Originality: This work is original, to the best of my knowledge. Quality: Quality of results and topic choice are very good; see questions. Clarity: Writing is clear; see questions. Significance: The results are very significant for the learning community. Weaknesses: No significant weaknesses detected. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Are there any complexity-theoretic implications of your work? Learning is connected to complexity lower bounds. Is this the case for private learning? Maybe some stronger connection exists? Line 43: Nice question! Can you please discuss similar questions? Theorem 1.1: You should emphasize this result more, as it is an important main result. Line 118: Please define TV. (This is in the appendix...) Line 142: Please define $SC$ (as "sample complexity?"). General comment: Since you are not including proofs in the main body, maybe simplify the bounds you are presenting? This will save space and thus allow for more explanations. Section 5: Can you please better explain the notion of distribution shift? Section 6 and Section 7: Can you please add some more intuition here? Maybe put limitations in the main body? There is no conclusion section! Why? You could put open problems there. This paper has so many novel results (proofs in appendix). Impressive! Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
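For reference, the total variation distance the review asks to have defined (line 118) is the standard notion; this definition is textbook material and not taken from the paper itself:

```latex
% Total variation distance between probability measures p and q,
% with the supremum taken over all measurable events A:
\mathrm{TV}(p, q) \;=\; \sup_{A} \, \lvert p(A) - q(A) \rvert
                  \;=\; \tfrac{1}{2}\,\lVert p - q \rVert_{1}.
```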
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments on our work. We are happy to hear the reviewer finds our results and topic of study interesting. We also thank the reviewer for their nice suggestions and questions. In what follows, we respond to specific points raised by the reviewer. —--------------------- > Are there any complexity-theoretic implications of your work? Learning is connected to complexity lower bounds. Is this the case for private learning? Maybe some stronger connection exists? We are not aware of any. But we are generally unfamiliar with this line of work – could you elaborate further on what kind of connections you have in mind (and link some papers)? The only thing that comes to mind is [Dan16] – but since our learners are (a) not efficient and (b) for distribution learning rather than PAC learning, it seems like this work is not related. > Line 43: Nice question! Can you please discuss similar questions? Sure. Our work examines the minimal amount of public data needed to render a class pure privately learnable. We can also examine this question in the approximate DP case. Note that here, it is open whether there is even a separation between learnable and approximate DP learnable. The VC bound of Theorem 1.5, although more general, is loose, and is also “qualitatively” off: when applied to Gaussians, it yields a $d^2/\alpha$ public sample complexity, though we know $d$ is possible, with no dependence on $\alpha$. It would be interesting to understand what qualities of a distribution admit $\alpha$-independent public sample complexity, beyond a not-too-illuminating “it has an $\alpha$-independent compression complexity”. The situation is quite different in binary classification: [ABM19] shows that any class that is not learnable under pure privacy requires $1/\alpha$ public samples. 
Another interesting question is whether a small amount of public data can lead to algorithmic improvements (runtime efficiency, simpler algorithms), as opposed to sample complexity improvements. For Gaussian mean estimation, one public sample removes the need for coarse estimation in [HKM22] (their coarse estimation algorithm formulates the task as an SDP via the sum-of-squares paradigm, and runs in polynomial time for a large polynomial). It would be interesting to see if, with more, but still $o($non-private sample complexity$)$, public data, we can replace other steps to devise simpler, faster algorithms. > Missing or unclear definitions. We’ll address these in the revised version. > Section 6 and Section 7: Can you please add some more intuition here? > Maybe put limitations in the main body? > There is no conclusion section! Why? You could put open problems there. Yes, in our revised manuscript, we’ll focus on a better discussion of our results. In particular, we plan to add proof sketches of the reductions, explanations of the results in Sections 6 and 7, as well as more comparison to previous results (particularly for Gaussian mixtures) and their implications. We also plan to put limitations and a conclusion in the main body. We’ll use the extra space granted and also try to find some space by simplifying or deferring bounds to the appendix and removing repeated theorem statements. —--------------------- If the reviewer has any additional comments or questions that have not been adequately addressed, we would be happy to speak more during the discussion phase. **References** [Dan16] Amit Daniely. Complexity theoretic limitations on learning halfspaces. STOC’16. [ABM19] Noga Alon, Raef Bassily, and Shay Moran. Limits of private learning with access to public data. NeurIPS’19. [HKM22] Samuel B. Hopkins, Gautam Kamath, and Mahbod Majid. Efficient mean estimation with pure differential privacy via a sum-of-squares exponential mechanism. STOC’22. 
--- Rebuttal Comment 1.1: Comment: Dear authors, Your message has been noted. The decision on your paper will be based on my discussion with the reviewers. We will reach out to your should we require further clarifications. Regards,
Summary: The authors consider the problem of learning a distribution in a known class using private samples under pure DP, as well as a number of public samples drawn from the same distribution. Some distribution classes, such as Gaussians, are not learnable using a finite number of private samples, or even with a finite number of private samples and a small number of public samples, but are learnable with private samples and a medium number of public samples. The authors make progress towards understanding what classes of distributions are learnable with what number of public samples. Their main contribution is to establish that the following three are equivalent (up to constants): (1) A class of distributions is public-private learnable, i.e. learnable with m public samples and some finite number of private samples (2) A class of distributions is realizably compressible with m samples, i.e. given a finite number of samples from a distribution, one can encode the distribution into m samples and some finite number of auxiliary bits such that some decoder can learn the distribution using only the encoded information (3) A class is list learnable with m samples, i.e. there is an algorithm that takes m samples and outputs a finite list of distributions such that one distribution in this list is close to the target distribution. The authors also give concrete/non-trivial bounds on the finite number of samples/finite list size in the equivalence statement. The above equivalence has a number of applications. For example, since mixtures of Gaussians are known to be compressible, the above equivalence immediately shows they are public-private learnable. More generally, since the class of mixtures (resp. products) of distributions from compressible classes is also compressible, mixtures (resp. products) of distributions from public-private learnable classes are also public-private learnable. 
While BKS22 established public-private learnability of Gaussians, the authors' result has several qualitative advantages, such as a stronger notion of DP and allowing arbitrarily small mixture probabilities. Similarly, the authors show that d-dimensional Gaussians are not learnable with d-1 public samples and any number of private samples, by showing that this implies list-learnability of d-dimensional Gaussians with d-1 public samples, which violates a lower bound. The authors also extend the reduction from compressible classes to public-private learnable classes, to a reduction from robust compressible classes to classes that are public-private learnable, with out-of-distribution public samples and a private distribution that is only close to a member of the class. Finally, the authors show that classes of distributions whose Yatracos class has VC dimension d and dual VC dimension d* are learnable with d private samples and d^2 times d* private samples. Strengths: Overall I feel the paper is quite strong. The main contribution of the paper, the equivalence of three properties of a class of distributions, is elegant and likely to have high impact, as distribution learning is obviously a very central and fundamental problem in learning theory, and using public data for private learning is becoming increasingly popular. As an example of the potential for impact, just from the equivalence of these three classes, the authors are able to show a lower bound on the number of public samples needed to learn a Gaussian that is optimal up to an additive difference of 1, which was an open problem beforehand. I am not very confident in my knowledge of the field of distribution learning, but I suspect the elegance of the result could easily lend itself to understanding the public-private learnability of other important classes of distributions. In addition to this main contribution, there are a number of other contributions that are independently interesting. 
The paper is also technically interesting and novel; e.g., this paper is the first to use list-learning (which is related to, but not the same as, list-decodable learning) in establishing upper and lower bounds for other definitions of learnability. The paper was enjoyable and easy to read - while there are several involved definitions needed to fully understand the results, the authors do a good job distilling the results down into a form that's easier to parse, without sacrificing too much detail. Weaknesses: I felt the main "weakness" of the paper is that there are many results the authors provide, and so the authors had to save most of the discussion of proof techniques for the appendix. In particular, I felt Sections 6 and 7 did not offer much more insight than the related sections of the introduction, and could be moved to the appendix in favor of e.g. a proof sketch for Prop 3.2, which would help the reader better understand the equivalence between the different definitions of learnability. Of course, such a weakness is likely to not exist in a camera-ready or arXiv version. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - You mention that you do not optimize the private sample complexity - do you have a sense for how large the slack in the private sample complexities might be? (i.e., are they optimal up to log factors, small polynomials, large polynomials?) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes; in Appendix A the authors nicely state the limitations of their work. N/A on negative societal impact, since it is a theory work on improving privacy-preserving algorithms. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful comments on our work. We are glad the reviewer finds the problem we study to be relevant, and our results to have potential for impact. In what follows, we address specific points raised by the reviewer. —--------------------- > Inadequate discussion of proof techniques in the main body. Noted. We’ll add proof sketches for the reductions (Propositions 3.2, 3.5, and 3.6) and spend more time discussing results, in particular for Sections 6 and 7, since right now they are mostly repetitions of the theorem statements in the intro. > You mention that you do not optimize the private sample complexity. Do you have a sense for how large the slack in the private sample complexities might be? (i.e., are they optimal up to log factors, small polynomials, large polynomials?) What we were referring to there were the reductions to and from sample compression. They preserve public data complexity, but lose polynomial factors in the private sample complexity. For example: applying the public-private learning/sample compression equivalence to obtain the generic “public-private learning implies public-private learning for mixtures” statement, and then applying that statement to a Gaussian public-private learner using $m = \tilde O(d)$ public and $n = \tilde O(d^2/\alpha^2 + d^2/\alpha\varepsilon)$ private samples, we get an $m = \tilde O(kd/\alpha)$ public, $n = \tilde O(d^2/\alpha^3 + d^2/\alpha^2\varepsilon)$ private public-private learner for Gaussian mixtures. This is a factor of $1/\alpha$ off from the sample complexity obtained from using the compression scheme for mixtures of Gaussians (Theorem 1.2). On another note, for Theorem 1.2 and Corollary 4.1, in the specific regime where $\varepsilon>\alpha$, the private sample complexities are tight up to log factors, based on non-private lower bounds of $\Omega(d^2/\alpha^2)$ for a single Gaussian and $\tilde \Omega(kd^2/\alpha^2)$ for mixtures. 
This would be the case even with more than $O(d)$, but $o($non-private sample complexity$)$ public samples. —--------------------- If the reviewer has any additional comments or questions that have not been adequately addressed, we would be happy to speak more during the discussion phase. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! After seeing the other reviews and rebuttals as well, I am planning on keeping my rating of accept.
Summary: Continuing a line of work recently initiated by Bie, Kamath, and Singhal (NeurIPS '22), this submission studies differentially private distribution learning (a.k.a. hypothesis selection) in the presence of public data. The main result is a collection of quantitative equivalences between this task and the problems of sample compression and list learning. More specifically, the paper shows that for any class of distributions Q, the following are (roughly) equivalent: 1) Q admits a sample compression scheme using m samples and compression size t 2) Q is pure $\varepsilon$-DP learnable using m public samples and t private samples 3) Q is list learnable using m samples and list size exp(t). As applications of this set of equivalences, the paper proves new upper and lower bounds on the sample complexity of pure public-private learning. These include new algorithms for learning Gaussians, mixture distributions, and product distributions via sample compression, as well as a lower bound on the public sample complexity of pure public-privately learning a single Gaussian that follows from a lower bound for list learning. The paper also works out a connection between robust sample compression and agnostic/distribution-shifted public-private learning, and a pure public-private learner for classes whose Scheffe sets have finite VC dimension. Strengths: - The equivalences shown in this paper provide a nice set of perspectives on the public-private distribution learning problem, both conceptually and "practically" in terms of offering new avenues for proving upper and lower bounds. - The proofs of these equivalences are simple, conceptually clear, and illuminating. - I'd say the main technical contribution in this paper is a lower bound showing that it is impossible to list-learn d-dimensional Gaussians using fewer than d samples. (This in turn implies a lower bound of public-private learning.) This is a sharp result that matches an upper bound from BKS'22. 
The proof goes through a general lower bound technique for list-learning, called a "no-free-lunch" theorem in this paper, roughly based on exhibiting collections of distributions that are not close in TV distance, but easily confused using a small number of samples. Weaknesses: - The characterizations shown in this work all apply to pure differential privacy. This is good for upper bounds, but is a major limitation of the most technically-involved contribution of the paper on lower bounding list- / public-private learnability of Gaussians. In particular, the conversion from a public-private learner to a list-learner depends crucially on pure differential privacy; while it can probably be extended to a quantitatively weaker result for concentrated DP, it doesn't say anything about, say, Renyi or approximate DP. - The paper's conversion from a public-private learner to a list-learner is not quite constructive, as it relies on a covering-based characterization of pure private distribution learning. However, this characterization can probably be unrolled into an algorithmic construction of a cover using the usual technique of running the DP algorithm repeatedly on a fixed dataset using many sequences of coin tosses. Also, none of the equivalences proved preserve computational efficiency in general. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Lines 235-239 offer a comparison to the previous result of BKS'22 on learning mixtures of Gaussians, but isn't easy to make sense of without a brief (informal) statement of what their result actually is. - Line 500: Should be $\tau_0 = \tau(\alpha/6, \beta/2)$ - While ultimately short, I found the contradiction-based proof of Proposition 3.5 unnecessarily hard to follow for this reason. Why not just do a direct proof along the following lines, using roughly the same calculations? 
For every distribution $p \in \mathcal{Q}$, accuracy of $\mathcal{A}$ implies that whp over $\widetilde{X} \sim p^m$, we have $p \in Q_{\widetilde{X}}$. This latter event implies $p$ is close to $\mathcal{L}(\widetilde{X})$, which is the success condition of the list learner. - I didn't understand why Lemma F.1 is called a "no free lunch" theorem. To my mind, a no free lunch result rules out the existence of learning algorithms that are simultaneously optimal for a sufficiently broad class of problems. Did the authors have in mind a different context in which this name makes sense to describe this result? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Some limitations (overlapping with weaknesses described above) are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments and suggestions. We are glad the reviewer finds that our work offers some nice perspectives on the public-private learning problem. Indeed, we acknowledge the limitations pointed out by the reviewer: (a) that our equivalence only holds for the pure DP setting and (b) we do not address issues pertaining to computational efficiency. Both aspects are important directions for future work; indeed for one of the tasks studied – density estimation of arbitrary Gaussian mixtures – the authors are not aware of any poly-time algorithm for the task, even in the non-private setting. In what follows, we reply to specific points raised by the reviewer. --- > This characterization can probably be unrolled into an algorithmic construction of a cover using the usual technique of running the DP algorithm repeatedly on a fixed dataset using many sequences of coin tosses. That’s really interesting. We would be happy to learn more about this – would it be possible to elaborate further/link an example of a paper that uses this technique? > Lines 235-239 offer a comparison to the previous result of BKS'22 on learning mixtures of Gaussians, but isn't easy to make sense of without a brief (informal) statement of what their result actually is. Noted. We’ll add a more comprehensive discussion of this result and other related results. > Line 500: Should be $\tau_0 = \tau(\alpha/6, \beta/2)$ Thanks, we fixed this. > I found the contradiction-based proof of Proposition 3.5 unnecessarily hard to follow for this reason. Why not just do a direct proof Good point, we’ll change it to a direct proof. > I didn't understand why Lemma F.1 is called a "no free lunch" theorem. The context we had in mind is from the VC lower bound results for PAC learning, i.e., Theorem 5.1 from [SSBD14]. 
In that result, using the flexibility of a hypothesis class, a class of hard instances is constructed such that any algorithm that doesn’t see enough samples must fail on one of them. Hence, there is no universal learner (free lunch). Similarly, in our Lemma F.1, we use the flexibility of the class $\mathcal Q$ to construct a sequence of classes of hard instances ($\mathcal Q_k$). Each $\mathcal Q_k$ rules out the existence of a list learning algorithm that takes few samples, outputs a list of size $\leq k$, and succeeds on every $p$ in $\mathcal Q_k$. This rules out the existence of any finite list learner that sees few samples and succeeds on every $p$ in $\mathcal Q$. We hope this clarifies things. We’ll include this discussion in the revised version of the manuscript. --- If the reviewer has any additional comments or questions that have not been adequately addressed, we would be happy to speak more during the discussion phase. **References** [SSBD14] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014. --- Rebuttal Comment 1.1: Comment: Thank you for your response, which indeed helps clarify and confirms my recommendation to accept. >> This characterization can probably be unrolled into an algorithmic construction of a cover using the usual technique of running the DP algorithm repeatedly on a fixed dataset using many sequences of coin tosses. > That’s really interesting. We would be happy to learn more about this – would it be possible to elaborate further/link an example of a paper that uses this technique? Of course. After thinking about it a bit more carefully, I think what you actually get is a randomized analog of a cover, which in turn induces a randomized list learner. (So depending on your personal view of what counts as constructive, this may or may not improve on what you have...but at least it may be a start for further investigation.) 
Starting from line 514 of your supplementary material (and simplifying notation a bit because the OpenReview editor is bugging out on me), let $\mathcal{Q}$ be learnable by some $\varepsilon$-DP algorithm $A$ using $n$ samples. Let $p \in \mathcal{Q}$ be an arbitrary member of the class, and define $G_p$ to be the set of all distributions that are $\alpha$-close to $p$ in TV distance. By accuracy of the DP algorithm, $\Pr_{X \sim p^n, A}[A(X) \in G_p] \ge 9/10$, so by averaging, there exists a sample $S$ of size $n$ such that $\Pr_A[A(S) \in G_p] \ge 9/10$. By group privacy, this implies $\Pr_A[A(0^n) \in G_p] \ge 9/10 \cdot e^{-\varepsilon n}$. Thus, by running $A(0^n)$ some number $\ell = O(e^{\varepsilon n})$ times using independent coin tosses, with constant probability, one obtains a list $L$ of distributions containing at least one distribution $\widehat{p}$ that is $\alpha$-close to $p$. (The induced distribution over such lists is what I would call a "randomized cover", and the algorithm that outputs a list from this distribution is a randomized list-learner.) This idea goes at least as far back as https://arxiv.org/abs/1402.2224, Lemma 3.15, where it was used to show that any pure-DP PAC learner for a concept class $\mathcal{C}$ induces a "probabilistic representation" for $\mathcal{C}$. Indeed, in the PAC setting, one can think of a probabilistic representation as a randomized list learner that ignores the sample. It's interesting to note that in the PAC setting, there's a gap between what one can achieve with a deterministic representation (the analog of a standard cover) vs. a probabilistic representation, whereas in the distribution learning world, your results imply that there's no such gap. > The context we had in mind is from the VC lower bound results for PAC learning, i.e., Theorem 5.1 from [SSBD14]. 
In that result, using the flexibility of a hypothesis class, a class of hard instances is constructed such that any algorithm that doesn’t see enough samples must fail on one of them. Hence, there is no universal learner (free lunch). Thanks for the explanation. My understanding of the reason why the result you mention is called "no free lunch" is indeed because it rules out the existence of a universal learner for the class of _all_ concepts over $\mathcal{X}$. This motivates the need to make prior knowledge assumptions, e.g., assuming a smaller concept class than the space of all concepts, as elaborated in their Section 5.1.1. Said another way, the supposed "free lunch" in question would be a universal learner that does not need to make any assumptions about the structure of the target concept, and is ruled out by a sample complexity lower bound proportional to the domain size. Meanwhile, in your case, the lower bound you prove is _already_ for a pre-specified class of distributions $\mathcal{Q}$. So the sense of a "universal learner" that you rule out is already one that is only universal with respect to $\mathcal{Q}$ -- which is a much weaker sense of universality than the one in the SB NFL theorem. Given the context of the SB NFL, my guess for a public-private distribution-learning NFL theorem would say something like, "Every public-private learner for the class of all distributions over a finite domain $\mathcal{X}$ requires at least ... samples." Of course, as you point out, there are technical similarities in the proofs. But I'd argue that a better analogy for your result is the implied VC-dimension lower bound for PAC learning; as you alluded to, one can of course derive this from the SB NFL by embedding into a high-capacity class, but the VC-dimension statement is one about class-specific learning. If anything, I think calling your result an NFL theorem undersells that it's in fact able to say something on a class-by-class basis. 
--- Reply to Comment 1.1.1: Comment: Thanks for the detailed reply! Very nice! Thanks for the reference and the explanation. Yes, I believe this will go through and give us constructive learners for Theorem 4.5 and Theorem 4.6. We can add a note pointing to and summarizing this discussion. It also motivates considering randomized compression schemes, which might be interesting to play around with. The point on NFL is well-taken; similarities between the lemma and Theorem 5.1 of [SSBD14] are on the technical side rather than conceptual, and the NFL naming choice is a conceptual one. Indeed, Gaussianity is a strong inductive bias yet we still have a negative result, so it's more "no-reasonably-priced-lunch". To prevent confusion we'll refer to the lemma as a lower bound employing an NFL-style argument.
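The repetition argument sketched in this discussion (run the pure-DP algorithm on a fixed input with fresh coins until one output lands in the good set) can be checked with a quick back-of-the-envelope calculation. The sketch below is ours, not from the paper or the review: it simply counts how many independent runs suffice when each run succeeds with probability at least 0.9 * exp(-eps * n), confirming the stated ℓ = O(e^{εn}) bound.

```python
import math

def runs_needed(eps, n, target_fail=0.1, base_success=0.9):
    """Number of independent runs of A(0^n) needed so that, with probability
    at least 1 - target_fail, some run lands in the good set G_p, given the
    group-privacy bound Pr[A(0^n) in G_p] >= base_success * exp(-eps * n)."""
    p = base_success * math.exp(-eps * n)
    # (1 - p)^L <= exp(-p * L) <= target_fail  once  L >= ln(1/target_fail) / p
    return math.ceil(math.log(1.0 / target_fail) / p)

print(runs_needed(0.0, 10))  # with eps = 0 a constant number of runs suffices
print(runs_needed(0.1, 10))  # the required count grows like exp(eps * n)
```

The `runs_needed` name and the 9/10 baseline are illustrative choices matching the constants quoted in the comment above, not a fixed part of the construction.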
Summary: In public-private learning, an algorithm gets two sets of i.i.d. samples from an unknown distribution $q$ from a class $\cal{Q}$ (say, the first set of size $m$ and the second one of size $n$), but is required to preserve DP only with respect to the second set. This research direction is about understanding the (minimal) regimes of $m$ and $n$ that allow learning a distribution from $\cal{Q}$. The interesting regimes are when $m$ is much smaller than the sample complexity required to learn without privacy, and $n$ is smaller than what is required for learning under (full) DP. This research direction was initiated by Bie, Kamath and Singhal (NeurIPS '22, BKS22), who showed that a relatively small number of public samples suffices to improve the sample complexity of privately learning a single Gaussian and a mixture of Gaussians. This work makes a significant step towards characterizing public-private learning by proving that it is equivalent (in terms of sample complexity) to two other notions: (1) Compressible Learning (Ashtiani et al., J.ACM 2020, ABDH+20) and (2) List-Learning. In more detail (ignoring running times and low-order terms and constants in the sample complexity parameters): Proposition 3.2: $(\tau, t, m)$-compressible learning implies $(m,n = t + \tau \log m)$-public-private learning, where the former means that we encode $m$ samples using a $\tau$-size subset plus an auxiliary string of size $t$, and this information suffices for estimating the distribution. Proposition 3.5: $(m,n)$-public-private learning implies $(m,\ell=\exp(n))$-list learning, where the latter means that using $m$ samples we can output a list of size $\ell$ of candidate distributions such that at least one of them is a good estimate of the true distribution. Proposition 3.6: $(m,\ell)$-list learning implies $(\tau=m, t = \log(\ell), m)$-compressible learning. 
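Composing the three propositions above closes the cycle; an informal summary (ours, suppressing constants and low-order terms) reads:

```latex
\[
(\tau, t, m)\text{-compression}
\;\overset{\text{Prop.\,3.2}}{\Longrightarrow}\;
\bigl(m,\; n = t + \tau \log m\bigr)\text{-public-private}
\;\overset{\text{Prop.\,3.5}}{\Longrightarrow}\;
\bigl(m,\; \ell = e^{O(n)}\bigr)\text{-list}
\;\overset{\text{Prop.\,3.6}}{\Longrightarrow}\;
\bigl(\tau = m,\; t = O(\log \ell),\; m\bigr)\text{-compression},
\]
```

so each notion implies the others, with at most an exponential blow-up in the list size and a corresponding logarithmic overhead in the compression parameters.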
The main applications come from the equivalence to compressible learning: since [ABDH+20] proved that learning a Gaussian or a mixture of Gaussians is compressible-learnable with a relatively small number of samples, using this connection they deduce improved sample complexity upper bounds for these tasks under public-private learning, as well as a tight lower bound on the number of public samples required for learning a single Gaussian. The upper bound on public-private learning a mixture of Gaussians is actually an upper bound for learning any mixture of distributions for a given class $\cal{Q}$ as a function of the sample complexity required to learn a single distribution from $\cal{Q}$, as [ABDH+20] proved that for compressible learning, learning a $k$-mixture requires (essentially) $k$ times the sample complexity of a single distribution. In Section 5 they also handle the agnostic case where the samples come from a distribution that is only close to one in the class, a setting which was also considered in the previous work of [BKS22]. They prove (Theorem 5.2) that robust compression implies agnostic public-private learning, and deduce an upper bound on learning a Gaussian in this case. Strengths: The equivalences between public-private learning and the other notions of learning are very interesting, and the progress from the work of [BKS22] (both quantitative and qualitative) is significant. Despite the significant limitations (described below), in my opinion, this is a good submission which is above the acceptance threshold. Therefore, I recommend acceptance. Weaknesses: As mentioned in the limitations section (Section A, Supplementary material), the reductions between the different notions are inefficient, and sometimes not even constructive (e.g., in the reduction from list learning to public-private learning). 
So all the new upper bounds of this paper are inefficient, and it is still left open whether we can construct efficient public-private learning algorithms with the improved sample complexities achieved in this paper. While the results of [BKS22] are more restricted and weaker in terms of sample complexity, they are computationally efficient (unlike this work), so if we are only interested in computationally efficient learners, this work does not make any progress except the lower bound, which holds (in particular) for efficient learners. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: No questions. One suggestion: I think it is worth adding short proof sketches of the reductions (Propositions 3.2, 3.5, 3.6) in the main body. The proofs are not long, and I'm sure you can explain the idea in each proof using a few sentences. Minor comments: (1) Theorem 1.1, item 2: "$\cal{Q}$ is has". (2) Theorem 1.3: robust compression is not defined at this point. (3) Definition 2.5: In ABDH+20, $f_q$ is explicitly defined by outputting a $\tau_0$-subset of samples and a short string. Here, you are just claiming the existence of such an $f_q$, which is strange to me since, for example, if $f_q$ is randomized and depends on $q$, it could simply ignore the input samples (which are sampled from a distribution that is only close to $q$) and just use fresh samples from $q$. (4) Corollary 4.2: I think it is worth comparing the sample complexity to existing efficient and fully DP methods like https://arxiv.org/abs/2303.04288 and https://arxiv.org/pdf/2112.14445.pdf (BKS22 only compared their result to KSSU19 https://arxiv.org/abs/1909.03951 which is significantly inefficient compared to the first two results I mentioned). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors present the limitations in Section A (supplementary material). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments and suggestions. We are glad that the reviewer finds the connections we present between public-private learning and other notions of learning interesting. Indeed, our investigation only answers questions regarding the statistical complexity of public-private learning. Finding efficient algorithms for these tasks is not addressed by our work, and is an important open question. In what follows, we address specific points raised by the reviewer. --- > Adding short proof sketches for the reductions. Yes, we’ll add proof sketches for the reductions in the revised manuscript. > Typos and organization Thanks for pointing out the typos and the out-of-order definition. Theorem 1.3 requires some setup, so we’ll introduce its consequence (agnostic and distribution-shifted learner for Gaussians) and leave the statement for the main body. We’ll also move the limitations to the main body in the revised version, with the extra space granted. > Definition 2.5: In ABDH+20, $f_q$ is explicitly defined by outputting a $\tau_0$-subset of samples and a short string. Here, you are just claiming the existence of such an $f_q$, which is strange to me since, for example, if $f_q$ is randomized and depends on $q$, it could simply ignore the input samples (which are sampled from a distribution that is only close to $q$) and just use fresh samples from $q$. Yes, we missed a condition in the definition. The $f_q$ defined should have the property that on all samples $S \in \mathcal X^m$, $ f_q(S)$ is a subset of $S$. We’ll fix this in the revised version. > Corollary 4.2: I think it is worth comparing the sample complexity to existing efficient and fully DP methods like https://arxiv.org/abs/2303.04288 and https://arxiv.org/pdf/2112.14445.pdf (BKS22 only compared their result to KSSU19 https://arxiv.org/abs/1909.03951 which is significantly inefficient compared to the first two results I mentioned). 
We’ll provide a more comprehensive discussion of these results in the updated version. All three of these works look at parameter estimation with assumptions on the underlying Gaussian mixture, whereas we focus on density estimation of arbitrary Gaussian mixtures (and we are not aware of any $\text{poly}(k,d,\alpha,\beta)$ algorithms even in the non-private case). [AAL23] uses [MV10] as a black box, and inherits its unoptimized and very large polynomial sample complexity. We believe the exponent of the polynomial is something like $\approx 300$, but we are not exactly sure. --- If the reviewer has any additional comments or questions that have not been adequately addressed, we would be happy to speak more during the discussion phase. **References** [AAL23] Jamil Arbas, Hassan Ashtiani, and Christopher Liaw. Polynomial time and private learning of unbounded gaussian mixture models. ICML’23. [MV10] Ankur Moitra and Gregory Valiant. Settling the polynomial learnability of mixtures of gaussians. FOCS’10. --- Rebuttal Comment 1.1: Comment: Thank you. I read the rebuttal and have no questions.
NeurIPS_2023_submissions_huggingface
2023
FouriDown: Factoring Down-Sampling into Shuffling and Superposing
Accept (poster)
Summary: This paper introduces a novel down-sampling operator in the Fourier domain, specifically tailored to mitigate the common problem of frequency aliasing encountered during down-sampling. The operator's efficacy has been assessed across diverse computer vision problems, consistently yielding improved performance in these tasks. Strengths: 1. The authors address the long-standing issue of frequency aliasing in signal processing, with a specific focus on image down-sampling. Their meticulously crafted design offers a compelling and well-founded solution to mitigate frequency aliasing. 2. The authors substantiate their claims in the paper with comprehensive and rigorous theoretical proofs, providing a strong basis for their proposed approach. 3. The proposed FouriDown operator serves as a plug-and-play solution, seamlessly integrating into existing networks. Furthermore, it consistently enhances the performance of the underlying models across various tasks. 4. The writing in the paper is clear and the explanations are thorough, ensuring a comprehensive understanding of the presented concepts. Weaknesses: 1. While the quantitative results are satisfactory, it would be beneficial to include additional efficiency analysis, such as providing the parameters of the FouriDown operator, in the form of a table showcasing relevant metrics. 2. Although the FouriDown operator is currently limited to replacing the stride-2 down-sampling operator, it would be valuable to explore whether the authors have developed other variants capable of handling more general scenarios. 3. It would be advantageous to present more visual results while condensing the theorem proofs. 4. Figure 5 captures attention due to its intriguing nature. It would greatly benefit the readers if the authors could provide more detailed explanations accompanying the figure to enhance understanding. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see the weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors mentioned the limitations in section 5 and addressed the potential broader impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Thank you for your sincere suggestion. We have incorporated a comprehensive efficiency analysis, shown in the table in the attached file. We will add these results to further demonstrate the effectiveness and efficiency of the FouriDown operator in the revised version. 2. Indeed, while the current work primarily focuses on replacing stride-2 down-sampling, the underlying principle of FouriDown has the potential to be extended to more general scenarios. For example, we have explored different stride lengths in the down-sampling. As shown in Supp. Figure 6, we aim to generalize the method to accommodate varying strides, making it more adaptable to diverse applications. Moreover, beyond convolutional layers, it would be intriguing to apply FouriDown to other architectures, such as transformers, for specific scenarios. By delving into these areas, we hope to make the FouriDown operator even more versatile and better suited to a wider range of tasks. 3. Thanks for your suggestions; we will add more visualizations in the updated version. 4. Thank you for your suggestions. Figure 5 depicts the spectrum changes of an image as it undergoes progressive downsampling, further aiding readers in understanding the frequency aliasing process. In Figure 5, three downsampling instances are displayed, each resulting in differently sized spectral representations. These are then superimposed onto the same frequency coordinate system. According to Theorem 1, high-frequency components fold into the lower frequencies during downsampling; this process is illustrated by the yellow arrows in the figure. In the revised version, we will expand the figure caption and add a dedicated subsection in the manuscript. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the response. The explanations and extra results solved my concerns.
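The folding described in point 4 (high-frequency components mapping onto lower frequencies under down-sampling) follows a standard DFT aliasing identity; the 1-D NumPy check below is our illustration of that identity, not the paper's Theorem 1 itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                        # even signal length
x = rng.standard_normal(N)

# Spatial 2x down-sampling: keep every second sample.
x_down = x[::2]

# Frequency view: each surviving bin is the average of the original bin and
# its alias N/2 away, i.e. high frequencies "fold" onto low frequencies.
X = np.fft.fft(x)
X_down = 0.5 * (X[: N // 2] + X[N // 2 :])

assert np.allclose(np.fft.fft(x_down), X_down)
```

Static superposition of the aliased copies with fixed weight 1/2 (per axis) is exactly what naive subsampling does; the paper's premise is to make those superposition weights data-dependent instead.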
Summary: In this research, the authors re-examine the spatial down-sampling mechanism and analyze the biased impacts that arise from the static weighting strategy used in previous methods. To overcome this limitation, this work proposes a new down-sampling paradigm called FouriDown, which incorporates frequency-context adaptive superposing. This component can be seamlessly integrated into existing image restoration networks and significantly improve performance. Strengths: This work provides an in-depth exploration of the down-sampling operator, offering a novel perspective that has the potential to significantly advance future research related to operators. Furthermore, the motivation behind this work is clear and well-founded, as evidenced by the compelling depiction in Figure 2 along with the accompanying theoretical proof. Importantly, this research showcases considerable potential across diverse visual tasks. Weaknesses: 1. To better emphasize the main content of the paper, it is suggested to move the proof of formulas to the supplementary material. Furthermore, the experimental section of the main text could benefit from incorporating the information from Section 1 in the supplementary material, which illustrates the superior extensibility of this framework over other down-sampling methods. 2. Although the supplementary material includes numerous excellent visual results to showcase the superiority of the proposed method, it is recommended to supplement the updated version with visualizations of the down-sampled feature maps. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The authors present a novel down-sampling method based on Shannon's sampling theorem. As feature maps commonly comprise multiple channels, it would be interesting to investigate the interaction patterns that emerge among these channels. 2. 
Based on the diagram provided, the FouriDown method seems to involve information transfer onto the channel followed by channel contraction, similar to the approach used in pixel-shuffle down-sampling. It is crucial to conduct an analysis that specifically explores and demonstrates the advantages of the FouriDown method over pixel-shuffle down-sampling. This comparative analysis will provide valuable insights into the superiority of the proposed method. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1. Thank you for your sincere suggestion. We agree with your suggestion and will move the proof of formulas to the supplementary material. This will allow us to incorporate more relevant experiments and analysis into the main text, highlighting the superiority of our framework over other down-sampling methods. W2. Thank you for pointing this out. We have performed comparisons with visual representations of down-sampled feature maps to better illustrate the effectiveness of our method, shown in Figures 1-3 in the attached file. In the revised version, we will incorporate them to provide a clearer understanding of how our method performs in comparison to others. Q1. In this work, FouriDown does not involve interactions among channels; they operate independently, though the frequencies within each channel do interact in the spectral domain. This is analogous to the Fourier transform, where channels also operate independently when transforming feature maps. Thanks for this insightful question; how to establish relationships between channels in the Fourier transform is an intriguing direction to explore in the future. Moreover, while channels are processed independently during FouriDown, their interactions within convolutional networks are still related. In low-level vision tasks, channel interactions might occur between high-frequency texture information and low-frequency color information. Through our research, we find that while there is no explicit channel interaction during down-sampling, there is a manifest high-low frequency interaction occurring in the spectral domain. This could be one of the reasons why FouriDown is effective in various low-level vision tasks. Q2. Indeed, at a superficial glance, FouriDown might appear to have similarities with pixel-shuffle down-sampling in terms of channel information transfer and contraction. However, there are distinct differences: 1. 
**Domain of Operation**: FouriDown operates in the spectral domain, leveraging the properties of the Fourier transform. In contrast, pixel-shuffle primarily works in the spatial domain. 2. **Inherent Mechanisms**: The shuffle process in FouriDown is rooted in signal theory and is meticulously designed for convolution operations in the spectral domain. Pixel-shuffle, on the other hand, utilizes a more straightforward neighborhood sampling mechanism. 3. **Potential Impact**: We believe the spectrum-shuffle isn't merely a technical novelty; it has significant implications for the development of Fourier theory in low-level vision tasks. In light of your feedback, we'll bolster our paper with a more in-depth comparative analysis between FouriDown and pixel-shuffle down-sampling. This will elucidate the specific advantages and the overarching superiority of our method. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. The authors addressed my concerns.
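For readers weighing this comparison, spatial pixel-unshuffle is a pure rearrangement: no values in an r x r neighbourhood are mixed, only moved into channels. The minimal NumPy sketch below is our illustration of that baseline, not the paper's implementation.

```python
import numpy as np

def pixel_unshuffle(x, r=2):
    """Spatial down-sampling by rearrangement: an (H, W) map becomes
    (r*r, H//r, W//r), stacking each r x r neighbourhood into channels.
    No values are mixed, only reordered."""
    H, W = x.shape
    return (x.reshape(H // r, r, W // r, r)
             .transpose(1, 3, 0, 2)
             .reshape(r * r, H // r, W // r))

x = np.arange(16.0).reshape(4, 4)
out = pixel_unshuffle(x)
# Channel 0 is exactly the even-row/even-column subsample.
assert np.array_equal(out[0], x[::2, ::2])
```

A spectral-domain shuffle, by contrast, regroups aliased frequency components rather than spatial neighbours, which is the distinction the rebuttal draws.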
Summary: This paper proposes a downsampling method in the Fourier domain (FouriDown) for several low-level vision tasks and an image classification task. FouriDown is supported by theoretical background proven by the authors. While this work improves multiple vision tasks when compared to previous anti-aliasing downsampling methods, it fails to truly demonstrate its superiority over other existing downsampling approaches due to poorly organized experiments. Moreover, the exploration of other architectures is insufficient. So, despite the robust and well-presented theorems, the contribution of this paper is limited. Strengths: The explanation and background of this paper are theoretically robust. Why the shuffling and superposition operations of FouriDown are constructed in that manner is clear. The learnable downsampling weight, which is unlike previous anti-aliasing methods, is technically sound. Many low-level vision tasks have been improved by FouriDown. The tables showing experimental results are easy to understand (however, some issues of experimental composition exist, as mentioned in the weaknesses). In addition to low-level tasks, the result that an image classification task is also improved makes FouriDown more robust. Weaknesses: 1. Most importantly, this work, focusing only on a downsampling method, has limited contribution for publication at this conference. If this paper had studied other architectures (e.g., CNN’s kernel or Transformer’s self-attention) and secondarily introduced the effectiveness of FouriDown, the novel downsampling method could have delivered more importance for the deep learning field. This does not mean that FouriDown is not effective. Instead, more urgent architecture designs could be considered and provided in this paper. Moreover, a study of how to effectively and efficiently handle hierarchical downsampled multi-scale feature maps, where downsampling itself loses informative details, is suggested. 2. 
While the replacement of static superposition with a learnable one is interesting, the composition of this learnable weight is not sophisticated enough to be considered a novel idea. It just follows a standard 1x1 convolution, activation function, and softmax. It should be carefully considered. 3. Tabs. 1,2,3,4,5,6 should include some factors with respect to efficiency: inference time, the number of parameters, etc. 4. The “Original” downsampling methods in Tabs. 1,2,3,4,5,6 are not presented. The authors, therefore, should present other downsampling methods that are not anti-aliasing approaches. For example, pixel-unshuffle, 2x2 learnable CNN (with stride=2), max-pooling, or other interpolation can be considered. 5. The summation operation along the subsample dimension (indicated as “4”) in Algorithm 1 can be better demonstrated by another ablation study. As alternatives to summation, there exist the average operation or weighted summation for the “4”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How about ImageNet-1K classification or object detection? Did you conduct other experiments on high-level tasks? Because the performance gains are more notable in the case of CIFAR classification than low-level tasks, focusing mainly on high-level vision recognition tasks seems better. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. I think the additional tables, figures, and description made this paper more comprehensive to be published at NeurIPS 2023. Therefore, I would like to recommend this paper to be accepted, and will change my rating from borderline reject to weak accept. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Many recent studies have centered on downsampling [reference], as stated in the related works, so we believe downsampling is a crucial research topic in its own right. Our work also makes ample contributions for the following reasons: (1) Our study is the first to revisit the deep downsampling operator from a frequency-domain perspective. (2) This approach is particularly challenging due to the significant disparity between the spatial and frequency domains (see W2 for details). (3) In this work, we innovatively pinpoint that previous downsampling methods were mostly constrained by the issue of static frequency aliasing. Therefore, this work comprises ample theoretical motivation and presents a challenging solution from an unprecedented perspective, which aligns well with the preferences of this conference. Meanwhile, we appreciate your sincere suggestions regarding hierarchical multi-scale exploration. In fact, our work essentially taps into the causes of detail-information loss. Moreover, in the supplementary material, we have discussed different scale relationships in FouriDown, which might provide insights for addressing this issue. We will explore it in the future. 2. The design of the learnable weight is both carefully considered and theoretically grounded. There might be some misunderstanding regarding the novelty of our work: we do not merely propose a simple convolution-layer replacement. Importantly, before the convolution operation, a pioneering and theoretically based spectral shuffle (Figure 3) is incorporated, which is what makes the learnable superposition feasible. Furthermore, replacing the convolution layer directly without the shuffle results in inferior performance, as evidenced in the following table. Hence, we have rigorously considered and trialed numerous methods to realize dynamic superposition. 
In response to your concerns, we would like to highlight the following insights:

| Model  | FouriDown variant   | PSNR  | SSIM   |
|--------|---------------------|-------|--------|
| MPRNet | w/o shuffle         | 25.43 | 0.8144 |
| MPRNet | w/ pixel-shuffle    | 23.43 | 0.7954 |
| MPRNet | w/ spectral-shuffle | 30.31 | 0.8996 |

(a) Designing a learnable superposition is inherently challenging. (i) The pronounced nonlinear discrepancy between the spatial and frequency domains makes learning downsampling directly in the frequency domain via convolutional layers especially arduous. As the table above shows, relying solely on nonlinear activation functions is inadequate to approximate the functional relationship between the spatial and frequency domains. To relieve this challenging nonlinearity, we introduce the spectral shuffle, based on Theorem 1 and Theorem 2, thus offering a viable approach to realizing learnable weights via CNNs. (ii) The convolution operation is typically ill-suited to spectral domains. Conventional convolution operations are primarily designed to operate in the spatial domain, capturing local features and receptive fields; the frequency spectrum does not exhibit similar biases. Most prior works [1,2] use 1x1 convolutions that share weights across the entire spectrum, essentially learning channel-wise weights. Such a design has limited the evolution of Fourier theory in deep learning. Therefore, how to extract "pseudo-local" spectral features using convolution operators remains a challenge. Fortunately, our work uncovers the "pseudo-local" relationship within the downsampling spectrum, denoted as $[F(u), F(u + \frac{\Omega_x}{2})]$, as described in Theorem 1. Based on this derivation, we achieve "pseudo-local" extraction from spectral features via convolution operators, fostering future applications and growth of Fourier theory in deep learning. [1] Intriguing Findings of Frequency Selection for Image Deblurring. In AAAI, 2023. 
[2] Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction. In ECCV, 2022. (b) Simplicity with effectiveness epitomizes elegance. (i) "Entities should not be multiplied without necessity" (Occam's razor). Resolving existing issues in a straightforward and principled manner is intrinsically significant. While more complex implementations exist, we believe they are not necessary. (ii) General algorithms are usually simple. As shown in the experiments section, thanks to our simple yet potent design, FouriDown exhibits commendable versatility across various tasks (both low-level and high-level) and diverse architectures (both CNNs and Transformers). Such a design resonates with the contemporary philosophy of artificial general intelligence (AGI). In fact, according to Theorem 2, within our FouriDown framework the average operation is essentially equivalent to the stride-2 operation, meaning it aligns with the baseline of many existing methods. The weighted summation is the adaptive weight prediction we introduce. The ablation study of average versus weighted summation is presented below.

| Model      | FouriDown    | PSNR  | SSIM   |
|------------|--------------|-------|--------|
| DeepDeblur | Average      | 29.32 | 0.8817 |
| DeepDeblur | Weighted sum | 29.44 | 0.8856 |
| MPRNet     | Average      | 30.11 | 0.8956 |
| MPRNet     | Weighted sum | 30.31 | 0.8996 |

3. Due to time constraints, we trained YOLOv5 on a COCO subset and observed an improvement in AP, as shown in the table below.

| YOLOv5    | AP   | AR   |
|-----------|------|------|
| Baseline  | 23.5 | 36.8 |
| FouriDown | 25.3 | 32.6 |

As mentioned, FouriDown possesses good versatility. However, since this paper primarily focuses on low-level tasks, we hope more experiments will be conducted in the future to further explore its potential on high-level tasks. --- Rebuttal Comment 1.1: Comment: 1. 
If more related works focusing mainly on downsampling itself can be provided, the authors' claim becomes more compelling. This discussion should include a brief explanation revealing each related work's main approach to downsampling. The existing related-work section could be further refined to convey the urgency and significance of addressing downsampling challenges more explicitly. What is the relevant section of the supplementary material that you mentioned? Does it indicate Sec. 2, Stride Extension? 2. Thanks to your comment, I agree that the adaptive convolution in the superposing stage of FouriDown has been carefully designed. My concern about this part is resolved. Previously, I thought that introducing a 1x1 conv was too simple; now, however, I understand why complex architectures for the spectral domain are not necessary. Moreover, regarding the average and weighted-sum operations, I apologize for my misunderstanding. 3. It's okay. --- Reply to Comment 1.1.1: Comment: 1. Thanks for your sincere comments. We have refined the related work as follows and will add it in the revised version. Downsampling is an important and common operator in computer vision, which enlarges the receptive field and reduces computational costs. Many models incorporate downsampling so that the primary reconstruction components operate at a lower resolution. Moreover, with the emergence of increasingly compute-intensive large models, downsampling becomes especially crucial, particularly for high-resolution input images. Previous downsampling methods often utilized local spatial neighborhood computations (e.g., bilinear, bicubic, and max-pooling), which show decent performance across various tasks. However, these computations are relatively fixed, making it challenging to maintain consistent performance across different tasks. To address this, some methods introduce specific designs to make downsampling more effective for particular tasks. 
For instance, some works [1,2,3,4] introduce a Gaussian blur kernel before the downsampling convolution to combat aliasing for better shift-invariance in classification tasks. Grabinski et al. [5,6] incorporate an ideal low-pass filter or a Hamming filter into downsampling to enhance model robustness and avoid overfitting. Moreover, other works [7,8,9,10,11,12] introduce dynamic downsampling that adaptively adjusts to different tasks, thereby achieving better generalizability. For instance, pixel-shuffle [7] enables dynamic spatial neighborhood computation through the interaction between feature channels and spatial locations, restoring finer details more effectively. Kim et al. [8] propose task-aware image downscaling to support upsampling for more efficient restoration. In addition to dynamic neighborhood computation, dynamic strides have also gained widespread attention in recent years. For instance, Riad et al. [9] posit that the commonly adopted integer stride of 2 for downsampling might not be optimal; consequently, they introduce learnable strides to explore a better trade-off between computational cost and performance. However, the stride is still spatially uniform, which might not be the best fit for images with uneven texture-density distributions. To address this issue, dynamic non-uniform sampling has garnered significant attention [10,11,12]. For example, Thavamani et al. [10] propose a saliency-guided non-uniform sampling method aimed at reducing computation while retaining task-relevant image information. In conclusion, most recent research focuses on dynamic neighborhood computation or dynamic strides for downsampling, a paradigm that can be represented as **Down(s)**, where 's' denotes the stride. However, in this work, we observe that methods based on this downsampling paradigm employ static frequency aliasing, which may hinder further progress toward effective downsampling. 
However, learning dynamic frequency aliasing within the existing paradigm poses challenges. To address this issue, we revisit downsampling from a spectral perspective and propose a novel paradigm for it, denoted **FouriDown(s, w)**. This paradigm retains the stride parameter while introducing a new parameter, 'w', which represents the weights of frequency aliasing during downsampling and is related to the stride. Further, based on this framework, we present an elegant and effective approach to downsampling with dynamic frequency aliasing, demonstrating notable performance improvements across multiple tasks and network architectures. [1] Blending anti-aliasing into vision transformer. In NIPS, 2022. [2] On aliased resizing and surprising subtleties in GAN evaluation. In CVPR, 2022. [3] Making convolutional networks shift-invariant again. In ICML, 2019. [4] Delving deeper into anti-aliasing in ConvNets. In IJCV, 2020. [5] Frequency LowCut Pooling -- Plug & Play against Catastrophic Overfitting. In ECCV, 2022. [6] Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling. arXiv, 2023. [7] Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In CVPR, 2016. [8] Task-Aware Image Downscaling. In ECCV, 2018. [9] Learning strides in convolutional neural networks. In ICLR, 2022. [10] Learning to Zoom and Unzoom. In CVPR, 2023. [11] Efficient segmentation: Learning downsampling near semantic boundaries. In CVPR, 2019. [12] SALISA: Saliency-Based Input Sampling for Efficient Video Object Detection. In ECCV, 2022. 2. Yes, it indicates Sec. 2, Stride Extension, of the supplementary material, which discusses the implementation of FouriDown at other strides based on the extended theory, along with a diagram.
Summary: - This paper explores a new method for downsampling by factoring it into shuffling and superposing. First, the authors provide an understanding of aliasing in deep neural networks from a spectral view. - FouriDown, a unified downsampling approach with a learnable and context-adaptive downsampling operator, is proposed. The method consists of four key components: a 2D discrete Fourier transform, context shuffling rules, Fourier weighting-adaptive superposing rules, and a 2D inverse Fourier transform. - Extensive experiments on image deblurring and low-light image enhancement are conducted, which consistently show that FouriDown provides significant performance improvements. Strengths: - Extensive qualitative and quantitative evaluations on scale-sensitive tasks are conducted, which show the advantage of the proposed downsampling method over previous ones. - This work is the first to explore the aliasing issue in neural networks from a spectral perspective, which will facilitate a deeper understanding of the aliasing issue. Weaknesses: - Additional details, such as the computational costs and the respective results on images and features, should be provided. (See Questions) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are the additional computational costs of FouriDown compared to other downsampling methods? - Is there any difference between applying FouriDown at the image level and at the feature level? It would be interesting to see comparisons with other downsampling methods in between, as images and features in DNNs differ in frequency distribution. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
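The four-stage pipeline described in this review (2D DFT, spectral shuffle, adaptive superposition, 2D inverse DFT) can be sketched in a few lines of numpy. This is an illustrative single-channel sketch, not the paper's implementation: the softmax over log-magnitudes is a hypothetical stand-in for the learned 1x1-convolution weight branch.

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fouridown_sketch(x):
    """Illustrative stride-2 FouriDown forward pass for one 2D channel."""
    H, W = x.shape
    h, w = H // 2, W // 2
    X = np.fft.fft2(x)                                   # (1) 2D DFT
    # (2) spectral shuffle: gather the four sub-spectra F(u, v),
    #     F(u+H/2, v), F(u, v+W/2), F(u+H/2, v+W/2) that alias onto
    #     each other under stride-2 sampling.
    quads = np.stack([X[a * h:(a + 1) * h, b * w:(b + 1) * w]
                      for a in (0, 1) for b in (0, 1)])
    # (3) context-adaptive superposition; the softmax over log-magnitudes
    #     is a hypothetical stand-in for the learned weight predictor.
    wgt = softmax(np.log1p(np.abs(quads)), axis=0)
    Y = (wgt * quads).sum(axis=0)
    return np.fft.ifft2(Y)                               # (4) 2D inverse DFT

y = fouridown_sketch(np.random.default_rng(0).standard_normal((8, 8)))
```

Note that with generic per-frequency weights the inverse DFT is complex-valued; a real network would additionally enforce conjugate symmetry (or take the real part) to return to a real feature map.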
Rebuttal 1: Rebuttal: 1. **FouriDown computation**: Thank you for your question. FouriDown incorporates the Fourier transform and its inverse, which traditionally have a computational complexity of \(O(N \log N)\). However, this part is accelerated by the PyTorch FFT library and is not reflected in the FLOP count. Moreover, there are additional computations related to the spectral-shuffle operation and the filtering in the frequency domain. For the spectral shuffle, operations such as tensor reshaping and recombination require no additional FLOPs or parameters, and the additional convolution layers contribute only a small number of FLOPs and parameters. Finally, the computational cost of our algorithm compared to other downsampling methods is shown in the table above; see the attached file for more experiments. Note that although the shuffle process does not consume any parameters or FLOPs, implementing this step in software does take extra time. We will accelerate this step with CUDA and release the code in the future. We will further elucidate these computational trade-offs in the revised manuscript. 2. Thank you for raising this insightful question. FouriDown is a downsampling operator applied at the feature level that relies on convolutional parameters; the learned downsampling process is similar to pixel-unshuffling. As a result, it may not be suitable for image-domain resizing, such as bicubic and bilinear interpolation. However, it is meaningful to compare the feature-map visualizations and spectra of different downsampling methods; please refer to Figures 1-3 in the attached file. It can be seen that the spectrum after FouriDown exhibits an outstandingly smooth response across high and low frequencies.
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful comments. We recognize the importance of providing comprehensive benchmarks for our proposed method. As shown in the tables below, we include results for traditional downsampling techniques: bicubic, bilinear, pixel-unshuffle, a 2x2 learnable CNN (with stride 2), max-pooling, average-pooling, LPF, Gaussian, and ours. Note that the original downsampling of each method is marked with an asterisk ('*'). This allows a clearer contrast and showcases the advantages of our method not only against anti-aliasing approaches but also against these conventional downsampling methods. Note that although the shuffle process does not consume any parameters or FLOPs, implementing this step in software does take extra time. We will accelerate this step with CUDA and release the code in the future. We will further elucidate these computational trade-offs in the revised manuscript.

SID on LOL:

| Config          | PSNR  | SSIM   | FLOPs (G) | Time (s) | Params (M) |
|-----------------|-------|--------|-----------|----------|------------|
| Bicubic         | 21.35 | 0.8497 | 13.764    | 0.0131   | 7.84       |
| Bilinear        | 21.26 | 0.8464 | 13.764    | 0.0136   | 7.84       |
| Pixel-shuffle   | 21.41 | 0.8552 | 13.954    | 0.0138   | 8.11       |
| Stride Conv     | 21.36 | 0.8534 | 13.954    | 0.0144   | 8.11       |
| Max pooling *   | 21.46 | 0.8584 | 13.753    | 0.0134   | 7.84       |
| Average pooling | 21.34 | 0.8481 | 13.754    | 0.0128   | 7.84       |
| LPF             | 21.79 | 0.8612 | 16.102    | 0.0149   | 8.54       |
| Gaussian        | 20.74 | 0.8124 | 16.102    | 0.0137   | 8.54       |
| Ours            | 23.28 | 0.8708 | 13.827    | 0.0176   | 7.87       |

MPRNet on DVD:

| Config          | PSNR    | SSIM   | FLOPs (T) | Time (s) | Params (M) |
|-----------------|---------|--------|-----------|----------|------------|
| Bicubic         | 29.8302 | 0.8815 | 1.398     | 0.5258   | 15.74      |
| Bilinear        | 29.8795 | 0.8822 | 1.398     | 0.5422   | 15.74      |
| Pixel-shuffle   | 29.8202 | 0.8816 | 1.399     | 0.5750   | 15.93      |
| Stride Conv *   | 30.12   | 0.8958 | 1.399     | 0.5082   | 15.93      |
| Max pooling     | 29.8038 | 0.8810 | 1.398     | 0.5402   | 15.74      |
| Average pooling | 29.8716 | 0.8825 | 1.398     | 0.5414   | 15.74      |
| LPF             | 30.00   | 0.8918 | 1.416     | 0.5004   | 16.26      |
| Gaussian        | 30.23   | 0.8922 | 1.416     | 0.5491   | 16.26      |
| Ours            | 30.31   | 0.8996 | 1.398     | 0.5974   | 15.93      |

Pdf: /pdf/77a116d72da47094e7c84c9fb2144ebdb55f6211.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a downsampling method for neural networks in the frequency domain. The proposed downsampling consists of a discrete Fourier transform, shuffling, superposing, and an inverse discrete Fourier transform. The shuffling rearranges the patches in the Fourier domain, and the superposing adds the patches with learned, adaptive weights. Experimental results show that the proposed method outperforms Gaussian filtering and ideal low-pass filtering in the frequency domain for neural networks on various computer vision tasks. Strengths: Downsampling in the frequency domain is an interesting topic. The proposed method and the experimental results are reasonable. Performance improvements on enhancement tasks are impressive. Weaknesses: The experiments lack the comparisons and analyses needed to convincingly establish the effectiveness of the proposed method. The proposed method uses additional convolution layers compared to the baselines, so the computational overheads (FLOPs and latency) should be compared. It is worth noting that the DFT and IDFT also add overhead relative to the original networks. Comparisons to common image downscaling methods (bilinear, bicubic, etc.) are missing. In particular, pixel-downshuffling [A] with convolution performs a similar operation to the proposed method without the Fourier transform. Feature-map visualization in the frequency domain is missing. Feature maps of convolution layers usually contain high-frequency edges, so it is unclear how the proposed method behaves in the feature space. An ablation study of the additional convolution layers is missing; replacing them with simple operations such as max or mean would be an interesting ablation. [A] Channel Attention Is All You Need for Video Frame Interpolation, AAAI 2020. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we appreciate your acknowledgment of the topic and methodological design of our work. Complete and meaningful comparisons and analyses have been supplied via the visualizations and tables added during the rebuttal phase. If this paper is accepted, we commit to meticulously revising and reorganizing the experimental and analysis sections in the revision. 1. **DFT overheads**: Following [1,2], we perform the DFT using `torch.fft.fft2(x)` with little extra overhead. To further illustrate this point, we conducted a toy experiment, shown in the following table. Although the FFT and its inverse traditionally have a computational complexity of O(N log N), this part is accelerated by the PyTorch library and is not reflected in the FLOP count; the time consumption is also very small.

| SID             | FLOPs (G) | Time (s) |
|-----------------|-----------|----------|
| w/o FFT/IFFT    | 13.753    | 0.0134   |
| w/ FFT/IFFT     | 13.753    | 0.0139   |
| w/ 4x FFT/IFFT  | 13.753    | 0.0146   |

[1] Intriguing Findings of Frequency Selection for Image Deblurring. In AAAI, 2023. [2] Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction. In ECCV, 2022. 2. Comparison with traditional downscaling methods: Due to space limitations, the comparison of our method with traditional techniques such as nearest/bicubic/bilinear/Lanczos downsampling was placed in the supplementary material (Fig. 5). As discussed in Supp. Lines 86-90, when the downsampling factor exactly divides the original image resolution, no interpolation actually takes place; in such cases, the bicubic/bilinear/Lanczos methods are equivalent to the nearest-neighbor method, which corresponds to the static weights [0.25, 0.25, 0.25, 0.25] in FouriDown. Furthermore, we include comprehensive comparisons with these methods in the table above. 
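The static-weight claim above can be verified numerically: for a stride-2 downsample, superposing the four shifted sub-spectra with uniform weights of 0.25 and inverting reproduces plain nearest/stride-2 decimation. A small numpy sketch (the 8x8 random input is an illustrative choice):

```python
import numpy as np

# Check: averaging the four aliasing sub-spectra with static weights 0.25,
#   Y[u,v] = (X[u,v] + X[u+H/2,v] + X[u,v+W/2] + X[u+H/2,v+W/2]) / 4,
# then inverting, reproduces plain stride-2 (nearest) sampling x[::2, ::2].
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
H, W = x.shape
h, w = H // 2, W // 2
X = np.fft.fft2(x)
Y = 0.25 * (X[:h, :w] + X[h:, :w] + X[:h, w:] + X[h:, w:])
y = np.fft.ifft2(Y).real
assert np.allclose(y, x[::2, ::2])
```

This is the standard decimation-aliasing identity: sampling every other pixel in space folds the spectrum, adding the four sub-spectra with equal weight.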
2. Pixel-shuffling versus the proposed spectral shuffling: We are aware of the pixel-downshuffling method you referred to in [A]; however, we respectfully disagree that the two are similar, for the following reasons. (a) Domain of operation. Our method operates in the frequency domain, leveraging the unique properties of Fourier transforms, whereas pixel-downshuffling operates in the spatial domain; the differing natures of these domains can lead to different results. (b) Distinct shuffle patterns. Our shuffle mode is specifically designed for the spectrum and is completely different from the pixel-shuffle [A] mode, which relies on nearby sampling. To illustrate this point, we replaced our spectral shuffle with pixel-shuffle after the FFT, which leads to poor results, as shown in the following table.

| Model  | FouriDown variant   | PSNR  | SSIM   |
|--------|---------------------|-------|--------|
| MPRNet | w/o shuffle         | 25.43 | 0.8144 |
| MPRNet | w/ pixel-shuffle    | 23.43 | 0.7954 |
| MPRNet | w/ spectral-shuffle | 30.31 | 0.8996 |

(c) White-box vs. black-box approach. Unlike pixel-downshuffling [A], which operates as a black box, our shuffle criterion is derived from rigorous signal theory, yielding a more interpretable and extensible process. (d) Potential impact on Fourier-based deep learning. We believe the spectral shuffle is not merely a technical novelty; it has significant implications for the development of Fourier theory in low-level vision tasks. Above all, the Fourier domain offers insights and opportunities that are not readily available in the spatial domain. By working in this domain, our method provides a novel solution to the adaptive aliasing issue and potential improvements in downsampling. 3. Feature-map visualization: Thank you for your suggestion. We have included visualizations of the feature maps and their corresponding spectra in the PDF file attached to the rebuttal. 
Notably, the features extracted by FouriDown demonstrate a significantly larger response than those of other methods, which we attribute to the unique global modeling mechanism we employ in the frequency domain. Moreover, to delve deeper into the reasons for its efficacy, we have also compared the spectra of the features. The results show that the spectrum produced by FouriDown exhibits an exceptionally smooth response across both high and low frequencies. 4. Max or mean ablation study: Indeed, this is an interesting experiment that highlights the extensibility of the FouriDown framework. In fact, we conducted this ablation study earlier, but due to space constraints it was not included. As shown in the following table, and in accordance with Theorem 2, when averaging is used FouriDown essentially becomes equivalent to the stride-2 method. The max approach is an interesting idea but, regrettably, has not proven effective; we hope to find scenarios where it is more applicable in the future.

| Model      | FouriDown  | PSNR  | SSIM   |
|------------|------------|-------|--------|
| DeepDeblur | Max        | 29.32 | 0.8815 |
| DeepDeblur | Average    | 29.34 | 0.8817 |
| DeepDeblur | Conv layer | 29.44 | 0.8856 |

--- Rebuttal Comment 1.1: Title: Discussion Comment: Dear e5Db, Thanks for your review! Could you please indicate that you have read the rebuttal and state whether or not your concerns are addressed? Thanks and best, SAC
The Equivalence of Dynamic and Strategic Stability under Regularized Learning in Games
Accept (spotlight)
Summary: This paper studies the long-run behavior of no-regret learning and introduces the notion of resilience to strategic deviations as a metric with which to characterize no-regret learning algorithms. Moreover, to further strengthen their results, the paper utilizes the idea of setwise strategic stability (m-club sets). In particular, the paper shows a nice connection between regularized learning and club sets. Finally, the authors estimate convergence rates to club sets, showing a geometric rate for entropic regularizers and convergence in finitely many iterations for projection-based methods. Strengths: I think this paper is a conceptual step forward in the area of regularized learning dynamics. A major point of contention in many prior works on learning in games has been the disconnect between desirable equilibria and the equilibria that regularized learning actually converges to. The results given in the paper are fairly broad, and the characterization of club sets as an alternative solution concept for regularized learning is (as far as I can tell) quite watertight and intuitive. Weaknesses: I only have very minor complaints about the paper, namely the section on regularized learning that introduces a few different algorithms under the RL umbrella. I feel this section is quite long and the notation is unnecessarily heavy; most of it could be in the appendix. I would have preferred to see more space allocated to either an extended proof sketch for Thms 2 & 3 or more experimental details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: A question regarding the actual convergence properties of regularized learners to club sets in comparison to classical solution concepts: how do the payoffs of these points compare to the Nash equilibrium values? Do you see any interesting behaviors of note between standard FTRL dynamics and other RL dynamics that are significantly different in your experiments? 
Also, is there any connection between club sets and the chain recurrent set of the dynamics? Of course the paper is focused on discrete time regularized learning, but do you have any intuition as to whether club sets could be useful to construct/converge to chain recurrent sets in the continuous-time setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations and scope of the proposed ideas are discussed adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
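As a concrete instance of the regularized-learning template discussed in this review, the following is a minimal exponential-weights (entropic FTRL) loop for a two-player game. The prisoner's-dilemma payoffs, step size, and horizon are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # numerically stable logit choice map
    e = np.exp(z)
    return e / e.sum()

def exp_weights(pay_a, pay_b, steps=200, eta=0.1):
    """Entropic FTRL (exponential weights) with full-information feedback.

    pay_a[i, j]: row player's payoff when she plays i and column plays j.
    pay_b[i, j]: column player's payoff at the same action profile.
    """
    ya = np.zeros(pay_a.shape[0])     # cumulative payoff scores
    yb = np.zeros(pay_b.shape[1])
    for _ in range(steps):
        xa, xb = softmax(eta * ya), softmax(eta * yb)
        ya += pay_a @ xb              # expected payoff of each pure action
        yb += xa @ pay_b
    return xa, xb

# Prisoner's dilemma: the second action (defect) strictly dominates, so the
# dynamics concentrate on the unique equilibrium (a singleton club set).
A = np.array([[3.0, 0.0], [5.0, 1.0]])
xa, xb = exp_weights(A, A.T)
```

In this dominance-solvable example both players' strategies concentrate on the defect action; in games with nontrivial better-reply structure, the support the dynamics settle on is what the paper's club sets characterize.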
Rebuttal 1: Rebuttal: We are sincerely grateful for your detailed input and positive evaluation. We reply to your questions below and we will revise our manuscript accordingly at the next revision opportunity: 1. "*I feel Section 3 is quite long and the notation is unnecessarily heavy, most of these could be in the appendix. I would have preferred to see more space allocated to either an extended proof sketch for Thms 2 & 3, or more experimental details.*" **Reply.** We understand your concern but, at the same time, we were constrained by the NeurIPS page limitations, and we had to make some tough choices in terms of presentation along the way. In this regard, we intend to take full advantage of the extra page allowed in the revision phase to implement the following restructuring changes: - Include a quick "warm-up" section before the current Section 3, intended to discuss some basic algorithms and feedback models that are standard in the field (exponential weights, optimistic exponential weights, and bandit exponential weights). - Proceed to state the "umbrella" FTRL template and quickly explain how it includes these algorithms as special cases (without any details). - Relegate the remaining technical elements, definitions and examples to the appendix (in order to streamline the flow of the discussion), as per your suggestion. This will provide the necessary examples and anchor points to motivate the general analysis and ease notation, and it will also leave us sufficient space to expand on the proof sketches of Theorems 1-3 (by bringing forward the step-by-step skeleton from the paper's current appendices). --- 2. 
"*How do the payoffs of these points compare to the Nash equilibrium values?*" **Reply.** Each club set contains an essential component of Nash equilibria so, by the multilinearity of the players' payoff functions, the payoff values of the latter will be contained in the convex hull of the former (and, in the case of m-club sets, it cannot be contained in a smaller subset thereof). Without further structural assumptions on the underlying game, we are not aware of a finer relation between them. --- 3. "*Do you see any interesting behaviors of note between standard FTRL dynamics and other RL dynamics that are significantly different in your experiments?*" **Reply.** In general, the two driving factors are (*a*) the regularizer of the method; and (*b*) the feedback available to the players. Methods with Euclidean regularization tend to have faster identification rates (i.e., converge to the support of an equilibrium / club set faster), but they are more "extreme" than methods with an "entropy-like" regularizer (in the sense that players tend to play pure strategies more often). As for the feedback available to the players, payoff-based methods tend to have higher variance (and hence a slower rate of convergence) relative to methods with full information; otherwise however, from a qualitative viewpoint, there are no perceptible differences in their limiting behavior. Finally, optimistic / extra-oracle methods with **full** information exhibit better convergence properties in two-player zero-sum games (relative to standard FTRL policies); however, this is a fragile advantage that evaporates in the presence of noise and/or uncertainty (in which case "vanilla" and "optimistic" methods are essentially indistinguishable). Figure 2 in Appendix B was intended to illustrate a part of these findings, and we would be happy to expand on it and bring it to the main text if you think it would make things clearer for the reader. --- 4. 
"*Is there any connection between club sets and the chain recurrent set of the dynamics? Of course the paper is focused on discrete time regularized learning, but do you have any intuition as to whether club sets could be useful to construct/converge to chain recurrent sets in the continuous-time setting?*" **Reply.** Excellent question! Due to space limitations, we could not expand on the continuous-time implications of our work but, indeed, there is an important relation between sets that are *minimally* closed under better replies (m-club) and sets that are internally chain recurrent. First, in continuous time, the dynamics of regularized learning are described by the system $\dot y(t) = v(x(t))$ with $x(t) = Q(y(t))$, as per reference [28] of our paper. In this context, the analogue of Theorem 2 would state that a set is m-club if and only if it is irreducibly stable under the above dynamics: we do not prove this result, but our techniques can be used to show that this statement holds. Given this equivalence, it follows that m-club sets are compact, invariant, and do not admit any asymptotically stable subsets with smaller support. This is not exactly the same as not admitting any proper *attractors*, but it is close - and since a set is internally chain recurrent if and only if it is compact, invariant, and does not admit any proper attractors, this would suggest that m-club sets are, in many cases, internally chain recurrent. We conjecture that this relation is actually true in general, but we do not have a proof for this fact, and we believe it is a very fruitful direction for future research. --- We hope and trust that the above points address your questions - thank you again for your detailed input and positive evaluation! Kind regards, The authors --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thank you very much for your detailed response. 
The suggestions for content in the additional page are very reasonable, and should make the paper more suitable for the NeurIPS audience. The comment about Figure 2 and comparisons to other RL methods is interesting but might seem a bit out of place without changes to the flow of the paper, though I think having more discussion about the figure in the appendix would suffice. Regarding chain recurrence, I agree that it is a fascinating connection! It is a shame that this cannot be expanded upon given space constraints but I am eagerly awaiting a future work in this direction. Best regards, Reviewer vfB1
Summary: The paper shows the convergence of regularized learning in games to sets satisfying a property called closedness under better replies. Moreover, convergence rates are derived even with bandit feedback. Strengths: The pointwise behavior of regularized learning in games has gained lots of attention recently, and the paper considers a fundamental question in this field. The authors provide complete and general answers to the questions. Weaknesses: The paper is super notation-heavy and considers a very general setting in terms of both algorithms and feedback models. It would be much easier to follow if the authors could first provide some basic or toy examples and then generalize. Minor: a corrupted reference at the beginning of Line 267. Technical Quality: 3 good Clarity: 3 good Questions for Authors: While the questions (line 59-60) are important and the results are mathematically sound, what do they imply for regularized learning, in particular, for practitioners? For example, from prior works, we already know that a day-to-day strategy can be arbitrarily bad so it is probably better to do some averaging. Does this paper extend our understanding in this direction? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed input and positive evaluation. We reply to your questions below and we will revise our manuscript accordingly at the next revision opportunity: 1. "*The paper is [notation-heavy and considers a very general setting] It would be much easier to follow if the authors could first provide some basic or toy examples and then generalize.*" **Reply.** We understand your concern but, at the same time, we were constrained by the NeurIPS page limitations, and we had to make some tough choices in terms of presentation along the way. In this regard, we intend to take full advantage of the extra page allowed in the revision phase to implement the following restructuring changes: - Include a quick "warm-up" section before the current Section 3, intended to discuss some basic algorithms and feedback models that are standard in the field (exponential weights, optimistic exponential weights, and bandit exponential weights). - Proceed to state the "umbrella" FTRL template and quickly explain how it includes these algorithms as special cases (without any technical details that could distract the reader). - Relegate the remaining technical elements and definitions to the appendix (in order to streamline the flow of the discussion), as per your suggestion. This will provide the necessary examples and anchor points to motivate the general analysis and ease notation, so we trust and hope it addresses your notation concerns. --- 2. "*Minor: a corrupted reference at the beginning of Line 267.*" **Reply.** Apologies, this was a broken reference to Appendix C - thanks for catching it. --- 3. "*From prior works, we already know that a day-to-day strategy can be arbitrarily bad so it is probably better to do some averaging. 
Does this paper extend our understanding in this direction?*" **Reply.** There are several remarks to be made here, so we proceed point-by-point: - First, from a practical point of view, when agents are involved in a real-time, online learning process (e.g., commuting from home to work each day), the payoff they obtain at each epoch is determined by the action they actually *played* at said epoch. In this context, an "averaged" sequence of strategies (either time-averages or empirical frequencies) is not as meaningful, because it is never actually *played* by the agents. As such, the day-to-day sequence of play becomes the de facto figure of merit for the problem, as it describes what the agents actually play and determines their in-game rewards. - Second, if the game is not monotone (in the sense of operator theory and variational inequalities), the time-averaged sequence $\bar x_{i,t} = (1/t) \sum_{s=1}^t x_{i,s}$ has no convergence guarantees in general. The only general class of finite games which *is* monotone is two-player, zero-sum games: in this case, time-averaging can be beneficial as a technique for the **offline** computation of Nash equilibria but, even then, since $\bar x_{i,t}$ is never actually played by the players, it is not as meaningful from an **online** viewpoint. - Finally, concerning the empirical frequency of play $\bar z_{\alpha,t} = (1/t)\sum_{s=1}^t \mathbb{I}(\alpha_s = \alpha)$ (which is not the same as the time-averaged sequence $\bar x_t$ above), it is indeed well known that, under no-regret learning, $\bar z_{t}$ converges to the Hannan set of the game (its set of CCE). However, as we discuss in Section 4, the Hannan set could contain highly non-rationalizable outcomes, e.g., with players selecting dominated strategies for all time (the counterexample of Viossat and Zapechelnyuk). 
Our result serves to exclude such outcomes by showing that the *only* part of the Hannan set which is stable and attracting under regularized learning is its intersection with the set of correlated actions that are supported on a **club** set - i.e., which is closed under better replies, and hence strategically stable. We did not include a detailed discussion along those lines because, as we explained above, averaging in an online context is less relevant than in the offline case. However, we will be happy to take advantage of the extra page provided in the camera-ready phase to include the above discussion. --- We hope and trust that these points address your questions - thank you again for your input and positive evaluation! Kind regards, The authors --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and confirm my positive evaluation.
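The online vs. averaged distinction discussed in this thread can be seen in a minimal exponential-weights simulation for matching pennies (a sketch with illustrative step size and horizon, not the paper's experimental setup): the day-to-day iterates keep oscillating, while the time-averaged strategies approach the Nash equilibrium $(1/2, 1/2)$.

```python
import numpy as np

# Matching pennies: a 2x2 zero-sum game; A is the row player's payoff
# matrix (the column player receives the negative).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def softmax(y):
    """Exponential-weights (logit) choice map: x = softmax(y)."""
    z = np.exp(y - y.max())
    return z / z.sum()

eta, T = 0.05, 50000
y1 = np.array([1.0, 0.0])    # start away from the equilibrium
y2 = np.zeros(2)
traj, avg1 = [], np.zeros(2)

for _ in range(T):
    x1, x2 = softmax(y1), softmax(y2)
    traj.append(x1[0])
    avg1 += x1
    y1 += eta * (A @ x2)     # full-information payoff vectors
    y2 += eta * (-A.T @ x1)

avg1 /= T
print(round(avg1[0], 3))               # time average: close to 0.5
print(round(float(np.std(traj)), 3))   # day-to-day play keeps oscillating
```

The regret guarantee of exponential weights forces the time average close to equilibrium, while the realized trajectory cycles and does not settle down pointwise.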
Summary: The paper deals with a fundamental question about the long-term behavior of regularized learning algorithms in finite-player, finite-action static games. They prove an interesting equivalence between the set of strategies which are closed under better replies and that of stochastically stable and attracting fixed points of a wide class of regularized learning algorithms popularly studied in the literature. Furthermore, they also study the rate of convergence of regularized learning algorithms to this set. Strengths: The paper studies a very fundamental equivalence relation between the payoff structure of a static game and the limit sets of regularized learning algorithms. This is a very strong result which enhances our understanding of learning in static games. Particularly, this paper brings the notion of better replies from the economic literature and presents a deep connection with the asymptotic properties of learning algorithms. The clarity of presentation of this paper is very good. The literature survey is also up to the mark to the best of my knowledge. Weaknesses: The paper has no major weakness in my view. Technical Quality: 3 good Clarity: 3 good Questions for Authors: -- Is there any characterization on size of m-club set given certain regularity structure of game. This will provide more predictive power about the asymptotic behavior of common learning dynamics Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some typographical errors 1. In supplementary material, line 267 has ?? 2. In equation C.13 the right hand side is missing. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you again for your encouraging input and positive evaluation. We reply to your questions below and we will revise our manuscript accordingly at the next revision opportunity: 1. "*Is there any characterization on size of m-club set given certain regularity structure of game. This will provide more predictive power about the asymptotic behavior of common learning dynamics.*" **Reply.** To the best of our knowledge, a complete characterization of (the size of) the support of an m-club set is an open question in the literature. However, under certain structural hypotheses, it is indeed possible to predict what these sets will look like: for example, in (generic) congestion games, the support of any mixed Nash equilibrium contains that of a strict equilibrium, so only strict Nash equilibria can be m-club (a fact which, coupled with Theorem 1, implies that the only irreducibly stable sets of regularized learning in congestion games are strict equilibria). We conjecture that the class of $(\lambda,\mu)$-smooth games introduced by Roughgarden may enjoy similar structural properties, but we are not aware of a specific characterization along these lines. --- 2. **Typographical errors** - "*Line 267 has ??*": apologies, this was a broken reference to Appendix C. - "*In equation C.13 the right hand side is missing.*": indeed, (C.13) should read $$ \lim_{t\to\infty} \frac{\sum_{s=1}^{t} \gamma_s^2 (1+B_s^2+\sigma_s^2)}{\sum_{s=1}^{t} \gamma_s} = 0 $$ Thanks for spotting these two typos, consider them fixed! --- Please let us know if you have any further questions - and thank you again for your input and positive evaluation! Kind regards, The authors --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I have read the rebuttal and comments from other authors. I will keep my score.
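For intuition on why the corrected condition (C.13) holds with standard step-size schedules: with $\gamma_s = 1/s$ and bounded $B_s, \sigma_s$, the numerator converges while the denominator grows like $\log t$, so the ratio vanishes. A quick numeric check (the constant `C` below stands in for a bound on $1 + B_s^2 + \sigma_s^2$; this is an illustration, not part of the paper's proof):

```python
def c13_ratio(t, C=3.0):
    """Ratio from (C.13) with gamma_s = 1/s and 1 + B_s^2 + sigma_s^2 <= C."""
    num = sum(C / s**2 for s in range(1, t + 1))   # convergent series
    den = sum(1.0 / s for s in range(1, t + 1))    # grows like log(t)
    return num / den

# The ratio decreases toward zero as the horizon grows.
print(c13_ratio(10), c13_ratio(1000), c13_ratio(100000))
```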
Rebuttal 1: Rebuttal: Dear AC, dear reviewers, We are sincerely grateful for your time, input and positive evaluation. To streamline the discussion phase, we reply to each reviewer's questions in a separate point-by-point thread below. Thank you again for your time and positive input. Kind regards, The authors
NeurIPS_2023_submissions_huggingface
2023
Predicting Global Label Relationship Matrix for Graph Neural Networks under Heterophily
Accept (poster)
Summary: The paper proposes a Low-Rank Graph Neural Network (LRGNN) that can model both homophilous and heterophilous graphs via signed propagation (two nodes with the same class label share a positive edge, and nodes with different class labels share a negative edge). The authors predict the label relationship matrix by solving a robust low-rank matrix approximation problem, which results in a block diagonal structure and varying distributions of within-class and between-class entries. Strengths: Clarity: The paper is generally easy to follow. Novelty: The authors propose a low-rank matrix approximation technique for predicting the label relationship matrix and enhancing the representation power of nodes. This technique is designed to utilize the low-rank properties of weakly-balanced graphs. Experiments: The authors have conducted extensive experiments on both synthetic and real-world graphs to evaluate the proposed method. The results indicate that LRGNN outperforms other baseline methods on both synthetic and real-world graphs. Presentation: The paper is well-structured and many visualization analyses are provided to better assess the qualitative and quantitative properties of the proposed model. Weaknesses: Assumption of Uniform Distribution: The main theoretical support for the approach assumes a uniform distribution of node degrees. However, in practice, node degree distributions frequently follow a power-law distribution that is non-uniform. Although the main results of this paper are obtained using real-world graphs with non-uniform node degree distributions, it is unclear how this affects the theoretical results. Dependence on Pseudo Labels: It is mentioned that LRGNN’s performance is affected by the accuracy of the generated pseudo labels. Although the supplementary material provides an experiment that indicates LRGNN demonstrates a non-sensitive response to this impact, further exploration and discussion are needed to fully investigate the theoretical implications. 
Additionally, the motivation for using signed labels for edges is not strong enough because 1) it only applies to homogeneous graphs, and 2) when the number of class labels is high, such a strategy may lose a lot of class-related information, so it may not generalize. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive acknowledgment of the clarity, novelty, experiments, and presentation of our paper. Regarding the weaknesses, we have indeed discussed these limitations in the last section of our paper and have devoted considerable effort to supporting the robustness of LRGNN through extensive dedicated experiments in our experiment section as well as the Appendix. Uniform Distribution: The real-world datasets considered in our research exhibit non-uniform distributions, yet our experimental results validate the strong performance of our model on such datasets. In Appendix A, we have provided a detailed discussion and conducted an experiment to empirically demonstrate the effectiveness of low-rank matrix factorization (LRMF) in accurately predicting the label relationship matrix, despite the non-uniform distribution of observed entries. Dependence on Pseudo Labels: We believe LRGNN does not rely on accurate pseudo labels. In fact, the weakest estimator used to generate pseudo labels is the one for Squirrel, which has an accuracy of about 50%, while LRGNN achieves an impressive 74.38% accuracy on Squirrel, surpassing the runner-up score significantly. Thus, LRGNN proves to be effective even when the generated pseudo labels are less reliable. Moreover, the experimental results in the Appendix indicate that the performance of LRGNN endures only a minor impact when the pseudo labels are corrupted by random noise. We recognize the significance of the theoretical implications that arise from the shape of the distribution and the accuracy of pseudo labels. We plan to dedicate our future work to exploring the theoretical aspects. In summary, we appreciate the reviewer's acknowledgment of the strengths of our paper, including its clarity, novelty, experiments, and presentation. For the limitations mentioned in our paper, we will make further improvements building on our recent trials in future work. 
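The low-rank structure underpinning LRGNN's matrix-approximation step can be illustrated with a toy check (a sketch of the weak-balance property with hypothetical labels, not the paper's LRMF solver): for $k$ classes, the $\pm 1$ label relationship matrix has a block structure of rank at most $k$.

```python
import numpy as np

# Hypothetical labels for 9 nodes in k = 3 classes.
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
k = len(np.unique(labels))

# Label relationship matrix: +1 for same-class pairs, -1 otherwise.
S = np.where(labels[:, None] == labels[None, :], 1, -1)

# Weak balance gives a block-structured sign matrix of rank <= k,
# which is what makes low-rank matrix approximation applicable.
print(np.linalg.matrix_rank(S))  # 3 here
```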
--- Rebuttal 2: Title: About Authors' Reply Comment: Dear reviewer B8tM, Would you mind checking the authors' reply and indicating whether you'd like to change the score or need more discussion? AC
Summary: This paper introduces sign links to model both the homophilous and heterophilous relationships in the global graph, thus extending the applicability of GNNs beyond homophily. Additionally, the paper leverages the weak balance theory to support the existence of a global low-rank structure in the signed graph and formulates sign link prediction as a robust low-rank matrix approximation problem. Experimental results demonstrate the effectiveness and performance improvement of the proposed approach. Strengths: Clarity: 1. The submission demonstrates clear writing, readability, and a well-organized structure. 2. The supplementary material provides additional details that enhance the reproducibility of the results. Quality: 1. The claims put forth in the submission are strongly backed by theoretical and empirical analyses. 2. The inclusion of the structural balance theory and conducting accuracy comparison, ablation, and visualization experiments significantly bolsters the overall quality of the submission. Weaknesses: Originality 1. The originality of the contributions can be strengthened by explaining the differences and comparing them with several relevant existing works, such as FAGCN [1], HOG-GCN [2], etc. 2. In particular, the description in the paper about how the proposed signed GNN can better work for heterophily problems is insufficient. Significance 1. Most heterophily datasets considered in the paper have issues (e.g., small scale, train-test data leakage, etc.) as pointed out by recent publications on high-quality heterophily datasets [3]. 2. Strengthening the significance of the contributions can be achieved by incorporating important references and conducting experimental comparisons. [1] Bo, D., Wang, X., Shi, C., & Shen, H. (2021). Beyond low-frequency information in graph convolutional networks. In AAAI, 35, 3950-3957. [2] Wang, T., Jin, D., Wang, R., He, D., & Huang, Y. (2022). 
Powerful graph convolutional networks with adaptive propagation mechanism for homophily and heterophily. In AAAI, 36, 4210-4218. [3]Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., & Prokhorenkova, L. (2023). A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. In ICLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The idea behind LRGNN bears similarities to FAGCN; however, the author fails to provide a detailed explanation of the differences between them. 2. There is a lack of experimental comparisons with state-of-the-art (SOTA) models such as FAGCN [1], DMP [4], and Ordered GNN [5]. 3. The paper lacks experimental comparisons with other popular large-scale datasets, as mentioned in [6], such as pokec, snap-patents, and twitch-gamers. 4. Figure 3 solely displays the recovery loss on heterophily graphs. It would be valuable to include the results on homophily graphs as well. [4]Yang, L., Li, M., Liu, L., Wang, C., Cao, X., & Guo, Y. (2021). Diverse message passing for attribute with heterophily. In NeurIPS, 34, 4751-4763. [5]Song, Y., Zhou, C., Wang, X., & Lin, Z. (2023). Ordered GNN: Ordering message passing to deal with heterophily and over-smoothing. In ICLR. [6]Lim, D., Hohne, F., Li, X., Huang, S. L., Gupta, V., Bhalerao, O., & Lim, S. N. (2021). Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In NeurIPS, 34, 20887-20902. --------- Update: Most of the above concerns have been addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions for improving our paper and for appraising its quality and clarity. Here are our responses to your questions, and we put the tables in the uploaded pdf file. **Q1 On the differences between LRGNN and existing models.** **R1** Thanks for the suggestion, and we would like to discuss the limitation concerning the expressive power of FAGCN. FAGCN derives the propagation weights using neural networks, while LRGNN is based on traditional matrix completion. In FAGCN, the propagation weight for edge $(i,j)$ is computed as $\alpha_{i,j} = \tanh(g^T [h_i \| h_j]) = \tanh(g_1^T h_i + g_2^T h_j)$, where $g = [g_1 \| g_2]$. This attention function has been proven to be static, as shown in recent research [1]. Since $\tanh$ is a monotonic function, for all $i,j,m,n$, if $\alpha_{i,m}>\alpha_{i,n}$, we have $g_1^T h_i + g_2^T h_m > g_1^T h_i + g_2^T h_n$. Therefore, $g_2^T h_m > g_2^T h_n$, which indicates $g_1^T h_j + g_2^T h_m > g_1^T h_j + g_2^T h_n$ and $\alpha_{j,m}>\alpha_{j,n}$. In short, for FAGCN, if we observe $\alpha_{i,m}>\alpha_{i,n}$, then we know $\alpha_{j,m}>\alpha_{j,n}$, irrespective of the label relationships between $j$ and $m$, or $j$ and $n$. Hence, the ranking of the attention scores does not depend on the query node ($i$ or $j$ in this case). In contrast, our visualization results (Fig. 4) demonstrate that, for LRGNN, the within-class and between-class weights lie on opposite sides of zero. As discussed in Sec. 3, FAGCN is a standard signed GNN based on structural balance theory, while LRGNN is a naturally weakly-balanced model, as it aggregates all nodes' representations for a node in one-hop propagation. This together explains LRGNN's superiority over FAGCN. The signed feature propagation has not been explicitly modeled **in HOG-GCN**. This is crucial because negative propagation weights are necessary to push away node pairs with different labels in the embedding space. 
Additionally, HOG-GCN amplifies the neighborhood of nodes by explicitly computing $A^k$. However, this calculation greatly increases the time complexity. For instance, when $k=3$, the time complexity of HOG-GCN becomes cubic, while the time complexity of LRGNN is linear in the number of edges. [1] Brody, Shaked, Uri Alon, and Eran Yahav. "How attentive are graph attention networks?." ICLR 2022. **Q2 There is a lack of experimental comparisons with state-of-the-art (SOTA) models...** **R2** We **included Ordered GNN** as a baseline model in our experiment section for performance comparison. Please refer to line 272, table 2, and table 3 (**abbreviated as OGNN** in the tables; otherwise the page would be overwhelmed). Unfortunately, it seems that the authors of DMP and HOG-GCN did not make their source codes publicly available, as we could not find any relevant implementations on GitHub or any links to the source codes in their papers. Additionally, the dataset split used in DMP differs from our own, and the experimental results provided in the HOG-GCN paper only cover a subset of the datasets used in our paper. Given these limitations, we can only implement FAGCN and directly use the available results from the HOG-GCN paper for comparison. Please see the pdf file for the results. **Q3 The limitations of the datasets considered in the paper** **R3** Actually, we read the paper [2] during its reviewing phase and highly valued its insights and contributions to the community. Nevertheless, we selected these datasets as they are currently the most widely utilized by the community, despite their known limitations. For example, a recent paper [3] accepted at ICML and the most recent papers [4][5] published on arXiv adopted these datasets for their experiments. We duly appreciate the reviewer's efforts in advancing the evaluation of GNNs under heterophily. We provide the results on the datasets from [2] for your reference. 
Note that the results of baselines are directly taken from [2]. As indicated by [2], classic GNNs generally perform better than heterophily-specific models on these datasets. The performance of LRGNN is comparable to FSGNN and surpasses other heterophily-specific models. [2] Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?." ICLR 2023. [3] Zheng, Yizhen, et al. "Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs." ICML 2023. [4] Wang, Junfu, et al. "Heterophily-Aware Graph Attention Network." arXiv preprint arXiv:2302.03228 (2023). [5] Yang, Wenhan, and Baharan Mirzasoleiman. "Contrastive Learning under Heterophily." arXiv preprint arXiv:2303.06344 (2023). **Q4 The paper lacks experimental comparisons with other popular large-scale datasets...** **R4** We have included three out of six large-scale datasets released by [6] for evaluation. However, the remaining datasets, pokec, snap-patents, and twitch-gamers are too large, and such a huge scale presents significant computational challenges for our machines. For instance, pokec consists of over 1.6 million nodes and 30 million edges. We acknowledge the significance of evaluating a new method using large-scale datasets and understand the concern expressed by the reviewer. We are sorry that, unfortunately, due to limited resources, we are unable to conduct experiments on pokec and snap-patents. Thus, we only present the results on the twitch-gamers dataset, which is processed using CPUs. We will consider other datasets in our future work once more computational resources are available. [6] Lim, Derek, et al. "Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods." NeurIPS 2022. **Q5 Figure 3 solely displays the recovery loss on heterophily graphs...** **R5** The visualization results on the homophily graphs can be found in the newly uploaded PDF file. 
Kindly refer to it for further information. --- Rebuttal Comment 1.1: Comment: I've gone through the authors' responses and I have increased my initial rating and assessment since most of my previous concerns have been addressed. However, authors are still suggested to show more convincing experimental comparisons (especially digging into details of Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?." ICLR 2023.) to futher enhance the technical contribution. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for reconsidering your rating and assessment based on our responses. We are glad to hear that you recognize our efforts in addressing your concerns. In response to your suggestion, we will include a description of the work by Platonov, Oleg, et al. in the Introduction section. Additionally, we will dedicate an additional section in the Appendix specifically focused on providing more experimental comparisons and visualization results using the datasets released by Platonov, Oleg, et al. We believe that incorporating these additional details and comparisons will further strengthen the technical contribution of our paper. Thank you once again for your valuable feedback. We are committed to continuously improving our work based on your suggestions.
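As a side note on the FAGCN comparison in R1 above, the "static attention" property is easy to verify numerically (a sketch with random vectors, not FAGCN's implementation): because $\tanh$ is monotone, every query node induces the same ranking over candidate neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
H = rng.normal(size=(n, d))   # node representations (random, for illustration)
g1 = rng.normal(size=d)
g2 = rng.normal(size=d)

# FAGCN-style scores: alpha[i, j] = tanh(g1 . h_i + g2 . h_j)
scores = np.tanh((H @ g1)[:, None] + (H @ g2)[None, :])

# tanh is monotone, so every row (query) ranks the columns (keys) identically.
rankings = np.argsort(scores, axis=1)
print((rankings == rankings[0]).all())  # True
```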
Summary: Graph Neural Networks (GNNs) do not operate effectively when the graph is heterophilic with respect to the task. Addressing this issue is an important problem. The paper starts by analyzing SignedGNN model and demonstrates through a simple analysis that SignedGNNs might be implicitly implementing balance theory (at least for binary scenario). This fails in multi-class setting because e-e-f is not right. To address this issue, the aggregation is modified to remove the e-e-f aggregation, creating the weak-balanced model. However, this requires the knowledge of friends and enemies. Another model is used to predict this relationship labels. The actual model incorporates this signedGCN model along with a matrix factorization component that compares low rank behavior with the learned representations. So, the learnt representations capture low rank matrix factorization behavior while aggregating over reasonably right neighbors. Strengths: 1. **Easy Approach**: The proposed approach is fairly simple. 2. **Clarity**: The paper was easy to read and follow. 3. **Good Results**: The presented results are good. Weaknesses: 1. **Weak Novelty**: The proposed approach is similar to GloGNN [A]. Also, the low rank pattern has been observed in some of the prior works [B]. [C] also builds a compatibility matrix to use in their model. However, what this paper has achieved is to put these ideas together in an effective way. 2. **Unintuitive modeling choices**: The definition of $c_{ij}$ is quite unintuitive. While the individual terms have been explained, the way they are put together is inexplicable. 3. **Large number of hyperparameters**: The number of hyperparameters go up significantly in this approach. It would be useful to have a hyperparameter sensitivity study. [A] Finding Global Homophily in Graph Neural Networks When Meeting Heterophily, ICML 2022. 
[B] Simple Truncated SVD based Model for Node Classification on Heterophilic Graphs, KDD 2021 Workshop on Deep Learning on Graphs. [C] Graph Neural Networks with Heterophily, AAAI 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. While individual terms of $c_{ij}$ have been explained in the writing and some ablation study in the supplementary have been performed to show how well it works. It will still be good to understand why the three terms can be put in a very simple additive model combined with some values through Softmax. 2. It will be good to perform some hyper-parameter sensitivity study. 3. The "Impact of the Accuracy of the Estimated Signed Adjacency Matrix" was a very interesting section. It seems almost unintuitive that the model is robust to arbitrary corruption of the estimated signed adjacency matrix. However, would it not indicate that the proposed approach could also work with SignedGNN instead of WB-SignedGCN? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive and valuable comments! Here are our responses to your concerns. **W1 The differences between LRGNN and existing GNNs.** **R1** We put the discussion comparing LRGNN and GloGNN [1] in Appendix Sec. E; please refer to it for a detailed description. Although we acknowledge that the compatibility matrix is a commonly used technique for modeling node affinity and that the low-rank pattern is not a new discovery, we firmly believe that our paper introduces novel contributions and claims. Specifically, our use of low-rank matrix completion to predict label relationship matrices and the modeling of heterophilous graphs present unique contributions. Additionally, our paper identifies limitations of existing signed GNNs and proposes a promising solution by incorporating weak balance theory. These distinct aspects of our work effectively demonstrate its novelty. **Q2 Why the three terms can be put in a very simple additive model...** **R2** The first two terms take into account the reliability of the signed adjacency matrix from varying perspectives. Intuitively, when the estimator (the pseudo label $\bar Y$) is more accurate, greater importance should be attributed to the Euclidean term. In heterophilous settings, greater importance should be assigned to the absolute term due to the effectiveness of the signed adjacency matrix generation algorithm in identifying heterophilous node pairs, as described in Section 4.2. Consequently, the significance of each term varies depending on the dataset. We let the neural network determine their importance using learned parameters. We visualize the learned weights on three datasets characterized by varying levels of edge homophily and estimator accuracy.

| weights | Wisconsin (E. H. = 0.11, Acc. = 0.85) | Cora (E. H. = 0.81, Acc. = 0.87) | Squirrel (E. H. = 0.22, Acc. = 0.53) |
| ---- | ---- | ---- | ---- |
| Euclidean | 0.38 | 0.36 | 0.22 |
| Absolute | 0.40 | 0.34 | 0.36 |
| Attention | 0.22 | 0.30 | 0.41 |

The table indicates that the weight assigned to the Euclidean term is slightly higher when the accuracy of the estimator is high, and the Absolute term holds greater importance under heterophily settings. **Q3 It seems almost unintuitive that the model is robust to arbitrary corruption...** **R3** Our experiments show that LRGNN exhibits robustness against **Gaussian noise with a standard deviation of less than 1**. This can be attributed to the use of a square loss, which provides robustness against small and dense noise, namely Gaussian noise, as squaring significantly diminishes the impact of noise with magnitude less than 1. In fact, the L2 norm is considered optimal when attempting to recover a matrix corrupted by Gaussian noise [4]. Further, the capped norm also effectively improves robustness against outliers. However, this is not the case for SignedGCN. In the e-e-f case, nodes with different labels are occasionally misclassified as belonging to the same class. This misclassification introduces sparse noise with a large value of 2, which does not match the "small and dense" characteristic. [1] Finding Global Homophily in Graph Neural Networks When Meeting Heterophily, ICML 2022. [2] Simple Truncated SVD based Model for Node Classification on Heterophilic Graphs, KDD 2021 Workshop on Deep Learning on Graphs. [3] Graph Neural Networks with Heterophily, AAAI 2021. [4] Robust matrix factorization with unknown noise, CVPR 2013.
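The squared-loss intuition in R3 can be illustrated numerically. The sketch below is our own toy illustration, not the paper's implementation; the cap value of 1.0 is a hypothetical choice.

```python
def sq(r):
    # plain squared loss on a residual r
    return r * r

def capped_sq(r, cap=1.0):
    # capped squared loss: residuals beyond the cap contribute a constant
    return min(r * r, cap)

small = 0.3   # dense Gaussian-like noise with std < 1
large = 2.0   # sparse noise from an e-e-f sign flip (gap of 2)

assert sq(small) < small        # squaring shrinks sub-unit residuals
assert sq(large) > large        # but amplifies large outliers
assert capped_sq(large) == 1.0  # the capped norm bounds an outlier's influence
```

In words: the square loss damps the dense sub-unit noise, while the cap prevents the sparse magnitude-2 errors from dominating the objective.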
**Q4 Concern about the number of hyperparameters** **R4** We focus on the extra hyper-parameters introduced by LRGNN, namely:

| Notation | Effect | Search space |
| ---- | ---- | ---- |
| β | weight of initial residual | [0.5, 0.9] |
| μ | weight of adjacency matrix for constructing the initial node representation | [0.1, 0.9] |
| δ | tendency to generate negative edges in the signed adjacency matrix | [0, 0.9] |
| γ | weight of the attention term in the objective function | {0.0001, 0.001, 0.002, 0} |
| q | operating rank | dependent on the number of classes |
| λ | weight of L2 regularization on $U$ and $V$ | {0.01, 0.02, 0.05, 0.001} |
| K | number of iterations to minimize the objective function | {1, 2} |
| Estimator | the neural network used to generate the signed adjacency matrix | {GCN, MLP} |

Here K and Estimator have only two choices. The Estimator can be selected based on the accuracy of these two estimators on the designated datasets. Additionally, q can be straightforwardly set to the number of classes. As a result, there is no need to tune the hyper-parameters Estimator and q. While fine-tuning the hyper-parameters we found that our model is insensitive to the choice of K and the values of λ and γ. In fact, the selection of K and the values of λ and γ (within their respective search spaces) have a very small impact on accuracy. To demonstrate, we fix these three hyper-parameters as K = 1, λ = 0.01, and γ = 0.001, and fine-tune the other hyper-parameters. The results are presented in the table below.

| | Texas | Wiscon. | Cornell | Actor | Squirrel | Chamel. | Cora | Citeseer | Pubmed |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LRGNN | 90.27 | 88.23 | 86.22 | 37.34 | 74.38 | 79.16 | 88.33 | 77.53 | 90.16 |
| LRGNN-restricted | 88.34 $\scriptsize{\downarrow1.94}$ | 82.94 $\scriptsize{\downarrow 5.29}$ | 85.95 $\scriptsize{\downarrow0.27}$ | 35.21 $\scriptsize{\downarrow2.13}$ | 72.35 $\scriptsize{\downarrow 2.03}$ | 77.93 $\scriptsize{\downarrow1.23}$ | 88.37 $\scriptsize{\uparrow 0.04}$ | 76.82 $\scriptsize{\downarrow{0.71}}$ | 90.12 $\scriptsize{\downarrow0.04}$ |

Here LRGNN-restricted refers to LRGNN with these three hyper-parameters fixed and q set to the number of classes. It can be observed that the decrease in accuracy is minimal, consistent with our previous statement. Thus, careful tuning is necessary only for β, μ, and δ. Tuning the hyper-parameters for LRGNN is not difficult. --- Rebuttal Comment 1.1: Title: Response of Reviewer mJPn Comment: 1. **Limited Novelty:** I did read through Appendix Sec. E. Regularization broadly means that the learnt variables are constrained with some known knowledge, so whether the regularization happens explicitly as in GloGNN or implicitly via incorporating the label relationships predicted from the low-rank structure, it is conceptually the same but with different implementations. The same could be said for the various other differences identified in that section. We can think of these things as knobs that can take different values, and the current proposed approach found effective values to set them to, which by no means is a small feat, but novelty in terms of new ideas is still limited to incorporating the weak balance theory. 2. **Combination of $c_{ij}$'s:** Again, procedurally it is quite clear what is happening, but to me Equation 13 is not a very obvious way to design it, and I wanted more information on why it was designed the way it has been.
Just to give an example, the first term's potential range is $(0, (1-1/c)^2)$, the second term's potential range is roughly $(0.5, 1)$, and the third term's potential range is $(0, 1)$. These three ranges are not the same, so it is not obvious why a simple softmax combination of these terms would even work. 3. **Corruption of Adjacency:** Thanks for the clarification on this. 4. **Number of hyperparameters:** It is great that some of the hyperparameters can be fixed while giving reasonable results. However, the performance on the heterophily datasets seems to vary significantly (between 1 and 6 percentage points). Is this because of K? Or some other fixed parameter? --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We sincerely appreciate your constructive and enlightening feedback. Here are our responses to your questions. **About the difference with GloGNN:** We appreciate and acknowledge your suggestion that regularization implies the constraint of learned variables based on prior knowledge. This viewpoint provides a new perspective for understanding our work. However, we would like to provide further clarification. One of the new ideas of our paper is that it is possible to find an effective way to accurately predict the majority of the unknown label relationships using the very limited known ones. The key difference lies in the amount of new information or knowledge gained. Consider the term $||Z-A_{GCN}||$ in GloGNN and $||P_{\Omega}(UV'-A_{signed})||$ in LRGNN, the most similar parts of these two approaches. We can immediately find a solution minimizing $||Z-A_{GCN}||$, namely $Z=A_{GCN}$. However, this outcome is trivial since it does not yield any new knowledge: $A_{GCN}$ is already known. Conversely, LRGNN receives a scant amount of known label relationships, say 10%, and predicts the substantial majority of the unknown ones, say 90%, which is accomplished through the decomposition $UV'$ and the projection function $P_\Omega$.
They consider different tasks and have varying levels of complexity. $||Z-A_{GCN}||$ appears to function as an auxiliary term, enhancing the feasibility of the solution for the subspace clustering problem by approximating the adjacency matrix. From our perspective, they should not be considered mere different implementations, as our formulation provides a new direction for predicting the unknown label relationships solely based on a small set of observations, with theoretical guarantees. The term "different implementations" might be more appropriate for describing techniques such as using L1 norm-based low-rank approximation to predict label relationships. However, we wish to emphasize that the concept of obtaining a coefficient matrix via an optimization problem in GloGNN is undeniably inspiring and motivating, and we completely understand and respect that reviewers have their own criteria for a "new idea". **Combination of $c_{i,j}$'s:** Previously, we defined the first term as $\Vert \hat Y_{i,:}\Vert_2^2 \Vert \hat Y_{j,:}\Vert_2^2$, which has a range $(0,1)$. But we empirically found that $(\Vert \hat Y_{i,:}\Vert_2^2 -1/c) (\Vert \hat Y_{j,:}\Vert_2^2-1/c)$ works better, as it imposes a stronger penalty on a uniformly distributed $\hat Y_{i,:}$. Regarding the combination of these three terms, our intuition is straightforward. We intend for the softmax combination to serve as an attention-like mechanism that appropriately weighs the three terms for different datasets. But here the attention coefficients are directly parameterized ($softmax(W_a)$, where $W_a \in \mathbb R^{1\times 3}$ is a randomly initialized learned matrix) rather than being computed from the two involved representations as done in a standard attention function.
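As a small numeric sketch of this directly-parameterized weighting (the values of $W_a$ and the three per-edge terms below are made up for illustration, not taken from the paper):

```python
import numpy as np

def softmax(v):
    # numerically stable softmax
    e = np.exp(v - v.max())
    return e / e.sum()

# W_a is a learned 1x3 parameter; a fixed example stands in for it here
W_a = np.array([0.3, -0.1, 0.2])
w = softmax(W_a)  # one weight per term, shared across all edges (i, j)

# hypothetical values of the Euclidean, Absolute, and Attention terms for one edge
terms = np.array([0.42, 0.77, 0.15])
c_ij = float(w @ terms)  # simple additive (convex) combination

assert np.isclose(w.sum(), 1.0)
```

Because the softmax puts the weights on the simplex, mismatched ranges of the three terms can be absorbed into the learned logits during training.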
During the early stage of the experiments, we also tried combining these terms in alternative ways, such as $Relu(W_a)$ or $Relu(W_a)/sum(Relu(W_a))$, as well as a more complicated edge-level scheme $[a^1_{i,j},a^2_{i,j},a^3_{i,j}]=softmax(W_a[h_i||h_j])$, where $W_a \in \mathbb R^{3 \times 2d}$ is a learned matrix. But we found the simple $softmax(W_a)$ works best, as the softmax function is more attentive, and we have already included edge-level attention (the third term), so $softmax(W_a[h_i||h_j])$ may be redundant. In summary, the combination is motivated by the need to dynamically weigh the three terms, and it was ultimately selected after considering various alternatives. **Large performance drop on the Wisconsin dataset:** This is due to the characteristics of the dataset. As pointed out in a recent work [1], Cornell, Texas, and Wisconsin are very small (183-251 nodes), which can lead to unstable results, and the standard deviation on these datasets is very high. To mitigate this impact, we perform 100 independent runs (previously 10 runs) on these datasets to obtain more stable results.

| | Texas | Wisconsin | Cornell |
| ---- | ---- | ---- | ---- |
| LRGNN | 90.27±4.5 | 88.23±3.5 | 86.22±6.5 |
| LRGNN-restricted | 88.34±3.5 $\downarrow(1.93)$ | 86.86±4.1 $\downarrow(1.37)$ | 85.14±5.4 $\downarrow(1.08)$ |

Currently, the performance drop is no more than 2 percentage points, so fixing these hyperparameters does not cause a significant change in performance. Thank you once again for your valuable comments. We look forward to your reply! [1] Platonov, Oleg, et al. "A critical look at the evaluation of GNNs under heterophily: are we really making progress?" ICLR 2023. --- Rebuttal 2: Title: About Authors' Reply Comment: Dear reviewer mJPn, Would you mind checking the authors' reply and indicating whether you'd like to change the score or need more discussion? AC
Summary: This paper considers predicting the global label relationship matrix as a low-rank matrix completion problem. This formulation is based on the observation that the rank of the global label relationship matrix should equal the number of classes. The authors propose an efficient solution to the low-rank matrix completion problem, and the predicted global relationship matrix is used to mix features into the node embedding for predicting node labels. The authors compare the proposed method with a series of baselines and find that it consistently outperforms or is on par with the state-of-the-art methods. Besides LRGNN, the authors also prove that a variant of signed GNNs shows a tendency to follow structural balance theory, which can be enhanced by eliminating the faulty assumption from the model design. Strengths: 1. The idea of computing the global label relationship matrix and then using it as a one-hop GNN is quite novel. 2. The formulation of predicting the global label relationship matrix as a signed low-rank matrix completion problem is elegant. 3. The paper is extremely well-written and the logic well organized. *I am not able to verify the math behind the matrix completion solution, but the rest of the method looks sound to me.* Weaknesses: The weaknesses of the paper are well-discussed in the last section. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: It is not clear to me how the $\tilde{A}$ entries are defined. In particular, assuming a graph has 30% of labels known, then only roughly 9% of the edge signs are known. I am not sure how the rest of the values are determined; are they simply set to 0? Furthermore, I am not sure how, intuitively, matrix completion would help determine the label relation of a node with other nodes in the graph if that node has no edges with a known sign.
In particular, the authors state: *Finally, at least one observation is present per row and per column ($\tilde{A}_{i,i} = 1$).* Yet I am not sure why that information alone would be helpful; I would appreciate it if you could provide more intuition. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: ... Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments and for taking the time to review our paper. We appreciate your insights and would like to address your concerns. First, regarding our optimization algorithm for the matrix completion problem, we would like to provide some additional information. The optimization algorithm we used follows the approach outlined in a highly regarded paper [1], and we have properly cited and acknowledged this source in our paper. The Majorization-Minimization technique, upon which our algorithm is based, is elegant and conceptually straightforward, and its correctness is easily verifiable. We have put significant effort into verifying the mathematical aspects of this algorithm to ensure its correctness. We would like to describe the basic operations of Majorization-Minimization so that you need not have concerns about the correctness of the algorithm. Suppose we wish to solve $\underset{x}{min}\\; f(x)$, where $f$ is so **complicated** that we cannot handle the problem directly. We aim to find a sequence $\left\\{x_k\right\\}$ such that $f(x_{k+1})\le f(x_{k})$. The idea is to use the variable at the current iteration ($x_k$) to construct a **simpler** surrogate function $g(x|x_k)$, so we can obtain $x_{k+1}$ by solving $x_{k+1} = \underset{x}{arg min} \\; g(x|x_k)$, where $g(x|x_k)$ can be any function as long as it satisfies the following two conditions: **(1)** $f(x) \le g(x|x_k)$, $\forall x$; **(2)** $f(x_k) = g(x_k|x_k)$, $\forall x_k$. We can easily prove $f(x_{k+1}) \underset {(1)} {\le} g(x_{k+1}|x_k) \underset {\text{by definition}}{\le} g(x_k|x_k) \underset {(2)}{=} f(x_k)$. The intuition is to minimize $f$ through another, simpler function $g$ such that minimizing $g$ 'helps' minimize $f$. Now back to our paper: the construction of the surrogate function is described by Eq.(24) (page 22), and the proof of the two conditions above is given in Eq.(37), (38), and (39) (page 24).
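To make the sandwich argument concrete, here is a toy MM loop. It is our own example, not the paper's algorithm: we minimize $f(x)=|x-3|+0.5x^2$ by majorizing the absolute value at each iterate with the quadratic $\frac{(x-3)^2}{2a}+\frac{a}{2}$, $a=|x_k-3|$, which satisfies conditions (1) and (2) and gives a closed-form update.

```python
def f(x):
    # objective: nonsmooth term + smooth quadratic; true minimizer is x* = 1
    return abs(x - 3) + 0.5 * x ** 2

def mm_step(x_k):
    # Majorize |x-3| at x_k by (x-3)^2/(2a) + a/2 with a = |x_k - 3| > 0.
    # The surrogate g(x|x_k) = (x-3)^2/(2a) + a/2 + 0.5*x^2 is quadratic;
    # setting g'(x) = (x-3)/a + x = 0 yields the closed-form minimizer below.
    a = abs(x_k - 3)
    return 3.0 / (1.0 + a)

x = 0.0
history = [f(x)]
for _ in range(50):
    x = mm_step(x)
    history.append(f(x))

# f(x_k) is non-increasing, exactly as the sandwich inequality guarantees
assert all(history[i + 1] <= history[i] + 1e-12 for i in range(len(history) - 1))
print(round(x, 4))  # -> 1.0, the true minimizer
```

The same mechanics apply when $f$ is the capped-norm completion objective: each iteration swaps the hard objective for a tractable surrogate and provably never increases the loss.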
**Q1** I am not sure how the rest of the values are determined; are they simply set to 0? **R1** Yes! We simply set the unknown entries to 0, but you can set them to any other specific values if you like. The remaining values of $\tilde{A}$ do not affect the objective function or the values of $U$ and $V$, because only the known entries contribute to the objective function Eq.(8). Specifically, the first term of the objective function is given by $\sum_{(i,j)\in \mathcal{E}} ((UV^T)_{i,j}-\tilde{A}_{i,j})^2$. Here, $\mathcal{E}$ represents the observation set, which indexes the known entries. **Q2** Finally, at least one observation is present per row and per column ($\tilde{A}_{i,i} = 1$). Why would that information alone be helpful? **R2** This is only a consideration at the theoretical level. Look at the objective function again: $O= \sum_{(i,j)\in \mathcal{E}} ((UV^T)_{i,j}-\tilde{A}_{i,j})^2$. If there are no observed entries in a specific row, such as row 1 with $(1,j)\notin \mathcal{E}, \forall j$, the value of $U_{1,:}$ does not affect $O$. Consequently, the value of $U_{1,:}$ remains unchanged throughout the iteration process, as it does not contribute to the gradient, and its value is solely determined by the initialization method and random seed employed. Knowing $\tilde A_{i,i} = 1$ alone is **insufficient** to infer a node's label relationships by matrix completion, but at least we can generate a unique $U_{i,:}$ and $V_{i,:}$ that are independent of the initialization method and the random seed. For other cases demonstrating how the label relation of a node with other nodes can aid matrix completion, please refer to Appendix Section A. Thanks for your time and effort again! Please let us know if you have any further questions. [1] Hastie, Trevor, et al. "Matrix completion and low-rank SVD via fast alternating least squares." The Journal of Machine Learning Research 16.1 (2015): 3367-3402.
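To illustrate R1 and R2 concretely, below is a toy numpy sketch (ours, not the paper's code) of alternating least squares in the spirit of [1]: a class-based sign matrix, whose rank equals the number of classes, is recovered from a random subset of observed entries plus the diagonal. The unobserved entries are initialized to 0 but masked out of the loss, so their value is irrelevant; the sizes, sampling rate, and ridge weight are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 40, 3                          # nodes, classes; operating rank q = c
labels = rng.integers(0, c, size=n)
A = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)  # true sign matrix, rank <= c

# Observe ~40% of entries at random, plus the diagonal (A_tilde[i, i] = 1),
# so every row and column has at least one observation.
mask = rng.random((n, n)) < 0.4
np.fill_diagonal(mask, True)
A_tilde = np.where(mask, A, 0.0)      # unknown entries set to 0 -- never enter the loss

q, lam = c, 1e-2                      # lam plays the role of the L2 weight on U and V
U = rng.standard_normal((n, q))
V = rng.standard_normal((n, q))
for _ in range(30):                   # alternating ridge regressions per row / column
    for i in range(n):
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(q), Vi.T @ A_tilde[i, mask[i]])
    for j in range(n):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(q), Uj.T @ A_tilde[mask[:, j], j])

Z = U @ V.T
acc = (np.sign(Z[~mask]) == A[~mask]).mean()  # sign accuracy on *unobserved* entries only
assert acc > 0.8
```

Each row update only uses the observed columns of that row, which is exactly why a row with zero observations would be left at its random initialization, as R2 explains.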
--- Rebuttal 2: Title: About Authors' Reply Comment: Dear reviewer JZ4m, Would you mind checking the authors' reply and indicating whether you'd like to change the score or need more discussion? AC
Rebuttal 1: Rebuttal: Dear Reviewers and AC, In response to the concerns raised by Reviewer jP2E, we have presented tables that juxtapose LRGNN with existing baselines across a spectrum of recent and large-scale datasets. This should help alleviate concerns regarding the scope of datasets used in our initial study. To elucidate the distinctions, we delve deeper into the expressive constraints of FAGCN. One of the core tenets of our work is the novel integration of low-rank matrix completion within GNNs, particularly tailored for heterophilous graphs. Alongside this, our paper also sheds both theoretical and empirical light on the inherent limitations of the current crop of signed GNNs, an exploration that remains largely untouched in prior research. Addressing the queries posed by Reviewer mJPn, we have included a hyper-parameter sensitivity analysis. Furthermore, to offer a transparent view of our methodology, a table elucidating the learned weights has been incorporated. This should provide clarity on the combination of terms in $c_{i,j}$ and address any apprehensions regarding the number of hyperparameters. For Reviewer rmTT, who expressed concerns about a potential dependency on the size and quality of the observed entries, we direct attention to the plethora of figures and examples (both from real-world scenarios and generated datasets) in Appendix A and B of our initial submission, underscoring the resilience of our approach. Lastly, we have taken steps to clearly address the questions from Reviewer JZ4m, elucidating the nature of $\tilde{A}$, diving deeper into matrix completion, and providing a clearer perspective on our optimization algorithm. We want to express our gratitude to the reviewers for their valuable suggestions and would greatly appreciate any further questions they may have regarding our paper. If our responses have addressed your questions, we would kindly ask for your reconsideration of the scores.
Pdf: /pdf/f1b4517c4a0c70a50fc2b6f2c8d88fa85b00b72c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces the LRGNN method, which aims to enhance performance on heterophilous graphs by reconstructing the signed relationships. The approach involves optimizing the low-rank signed matrix using the SVD-free LRMF strategy. Strengths: The utilization of the low-rank signed relationship matrix in heterophilous graph representation is intriguing and promising. Weaknesses: 1. Limitations of the first contribution. One limitation of this method is its reliance on the data split. The success of low-rank matrix completion is highly dependent on the availability and quality of the signed adjacency matrix and the observation set. In cases where the observation set is small, there may be nodes that have no neighbors selected, as mentioned in Line 140. This poses challenges in effectively guessing the values of U and V, potentially impacting the accuracy of the low-rank matrix completion process. 2. Limited novelty: Although the method incorporates low-rank optimization, the overall novelty of the approach is somewhat limited. The designed method in Section 4 shares similarities with other existing methods. For instance, when compared to the general compatibility matrix used in [1,2], the primary difference lies in both matrices' objective of measuring whether two nodes belong to the same class. 3. Concerns with the experiments: a. Effectiveness of low-rank recovery of Z (Eq. 7 and 10): To further justify the effectiveness of the low-rank recovery of Z, it is essential to provide evaluations for Eq. (7) and (10). Demonstrating the accuracy and reliability of the low-rank recovery process would strengthen the paper's claims. b. Parameter numbers: Including the parameter numbers in the paper would provide valuable insights, especially regarding the growth of the matrices U and V in correlation with the number of nodes.
This information would contribute to a more comprehensive understanding of the method's scalability and resource requirements. Detailed comments: 1. Fig. 1: In panel t3, the color of the circle appears to be incorrect. Please ensure that the color corresponds accurately to the represented class. 2. Fig. 4, Wisconsin: There seems to be a discrepancy in the edge numbers of in-class and between-class connections. Given that this dataset is heterophilous, the number of edges within the same class and between different classes should differ significantly. Please review and verify the edge counts for accuracy. [1] Graph Neural Networks with Heterophily. [2] Powerful Graph Convolutional Networks with Adaptive Propagation Mechanism for Homophily and Heterophily. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please check the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please check the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed review. We address the concerns below. **Q1: The reliance on the size of the observation set** **R** In order to align with the time complexity of other GNNs, we define the observation set as the edge set. However, it is important to note that the performance of any GNN algorithm is known to be influenced by the number of edges present in the graph. So if a small observation set poses a challenge to the low-rank approximation method, it would also be a challenge to GNN models, since they use the same edge set. There is no evidence suggesting that the low-rank approximation method is more sensitive to this size than a GNN model. In fact, our experimental results demonstrate that LRGNN achieves the best or runner-up results on sparse graphs such as the Texas, Wisconsin, and Cornell datasets. Furthermore, the visualization results presented in Figures 4 and 5 showcase that low-rank approximation can recover good label relationship matrices even when the edges are sparse. Additionally, there is no difference between low-rank approximation and GNNs when encountering isolated nodes. For instance, when a node has no links, the output representation of this node from a GNN model is equivalent to that of its MLP counterpart. Similarly, the corresponding row of the node in the matrix recovered by a low-rank approximation model has only one non-zero element, and using it for propagation we also obtain the same outcome as an MLP. Since all types of GNNs are influenced by the number of edges, and there is empirical evidence that low-rank matrix completion performs effectively with limited observation sets, this should not be considered a particular drawback of our proposed approach or paper.
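The isolated-node equivalence above can be checked with a toy example. This is our own sketch using a row-normalized propagation with self-loops; for a node of degree zero the standard symmetric GCN normalization reduces to the same thing.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f, d = 4, 5, 3
X = rng.standard_normal((n, f))   # node features
W = rng.standard_normal((f, d))   # shared linear layer

A = np.zeros((n, n))
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0   # node 3 has no links

A_hat = A + np.eye(n)                          # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalized propagation

H_gcn = P @ X @ W   # one propagation layer
H_mlp = X @ W       # the MLP counterpart (no propagation)

# the isolated node's row of P is just its self-loop, so GCN == MLP there
assert np.allclose(H_gcn[3], H_mlp[3])
```

Nodes with neighbors (e.g., node 0) mix in their neighbors' features and differ from the MLP output; only the isolated row collapses to the identity propagation.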
**Q2: the reliance on the quality of the signed adjacency matrix** **R** Suppose the accurate signed adjacency matrix is given by $A_{ground}$; then the estimated signed adjacency matrix can be expressed as $\tilde A = A_{ground} + N$, where $N$ is a noise matrix. We have dedicated much effort to exploring the influence of this noise in Appendix B, including the effectiveness of the capped norm and the change in LRGNN's performance when the entries of $N$ are drawn i.i.d. from a Gaussian distribution. For example, LRGNN outperforms the runner-up OGNN by a substantial margin on the Squirrel dataset. It is worth noting that the signed adjacency matrix of Squirrel is particularly inaccurate, as reflected by the estimator's low accuracy of approximately 50%. There is also a visualization of the recovered $Z$ on Squirrel presented on page 21, Figure 14(b). These results suggest that the quality of the generated signed adjacency matrix has only a slight impact on LRGNN's performance, and the outstanding performance of LRGNN is not conditioned on a very accurate signed adjacency matrix. **Q3: limited novelty** **R** While compatibility matrices are commonly employed to model node affinity, our paper delves into several innovative aspects that set it apart. Firstly, we introduce low-rank matrix completion to GNNs, showcasing its power in predicting the label relationship matrix. Secondly, we establish that predominant signed GNNs find their foundation in balance theory. Our empirical analysis further highlights that these models can be significantly enhanced by reconsidering the 'e-e-f' assumption, which might not always hold true. Notably, the reviewer has also positively remarked on our approach, mentioning that "the utilization of the low-rank signed relationship matrix in heterophilous graph representation is intriguing and promising." This attests to the novelty we're bringing into the domain.
In summation, our primary contributions and the claims made in the paper distinctively address areas that have been relatively untouched in preceding research, thereby underscoring the paper's innovative value. **Q4 Effectiveness of low-rank recovery of Z (Eq. 7 and 10)** **R** We would like to clarify a few points. Firstly, Eq. 7 pertains to a different aspect of our methodology and does not directly address the recovery of Z. Conversely, Eq. 10 is instrumental in generating the signed adjacency matrix. As for the low-rank recovery's precision, our study showcases a plethora of visualization results underscoring its effectiveness. These include the recovery error rate on real-world graphs illustrated in Figure 3, visualizations of the recovered Z in Figures 4 and 5, and recovery accuracy on generated data with a minimal observation rate presented in Figures 7 and 8 (Appendix A). We invite you to review these figures for a more comprehensive understanding. **Q5 Parameter numbers** **R** U and V are not parameterized; they are expressed as a series of matrix multiplications, see Eq. 25 and Eq. 27 for their formulation. Regarding the size of U and V, they are of size $n \times q$, where q is a small number typically close to the number of classes. Therefore, U and V do not require more computational or storage resources than a typical node representation matrix in a GNN model. The parameter count of LRGNN is $O(fd+dc+nd)$, where $f$ is the dimension of the raw features, $d$ is the hidden size, and $n$ is the number of nodes. As a comparison, the parameter count of GCN is $O(fd+dc+d^2)$, so the additional parameters of LRGNN are only linear in the number of nodes. **Q6 Fig. 1 & Fig. 4** **R** We will correct that mistake in Fig. 1. Note that Figure 4 is the visualization of $Z$ with $n \times n$ non-zero entries, which can be viewed as the predicted adjacency matrix of a signed **complete graph**, not the sparse adjacency matrix.
Therefore, the number of edges within the same class and between different classes is only related to the class balance, irrespective of heterophily or homophily. Fig. 4 (Wisconsin) is accurate. --- Rebuttal Comment 1.1: Comment: I have read the reply and the questions have been addressed. I will change my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for taking the time to read our response. We sincerely appreciate your insightful feedback and the adjustment of your score. We will keep improving our paper based on your suggestions. --- Rebuttal 2: Title: About Authors' Reply Comment: Dear reviewer rmTT, Would you mind checking the authors' reply and indicating whether you'd like to change the score or need more discussion? AC
Training Transitive and Commutative Multimodal Transformers with LoReTTa
Accept (poster)
Summary: This paper presents a learning paradigm to account for missing paired modalities during training. For example, for three modalities A, B, and C, the proposed system can be trained on (A, B) and (B, C) paired data but transfer to (A, C) or (A, B, C) paired data. To tokenize the modalities, the authors use standard tokenizers for language, and raw bytes or spatially-reduced methods (CNN/VQ-VAE) to tokenize images and audio. The model is based on a standard transformer decoder, and the key novelties are: (a) commutative modeling, which asks the model to next-token predict modality A from B, or B from A; (b) transitive modeling, which produces pseudo data using a linking modality. To incorporate bidirectional context while using causal attention, a causal masked modeling loss is employed. The proposed system is evaluated on both customized and real-world datasets, and its performance is compared against popular multimodal learning paradigms including masked, causal, and contrastive objectives. Strengths: This paper conducts extensive experiments (Sections 5 and 6) on both constructed and real-world medical datasets with three modalities, and shows that the proposed system is very effective when there is missing paired modality data. In particular, the authors perform careful ablations against mainstream popular pre-training objectives, including contrastive, causal, and masked modeling losses. Theoretical analysis (Section 4) based on perplexity is interesting and sound. Weaknesses: The proposed system should ideally be tested on larger-scale multimodal datasets, such as AudioSet [1] with image-text-audio modalities. Importantly, this would be crucial to see if the proposed system generalizes to real-world vision-language-audio domains where the three modalities are naturally aligned.
This setup would be different from the constructed SVL-MNIST dataset where the language modality (from WineReview) has weak or no correlation to the vision (MNIST) and audio (AudioMNIST) modalities. For complete ablations, the authors could provide results on the “upper bound” training setups where: - All samples have (A, B, C) modalities. - There are (A, B), (B, C), (A, C) paired modality data. Not a weakness but a suggestion: L38: “To the best of our knowledge, this scenario has never been considered before in the literature.” —> There is a recent related work [2] in the vision community that uses images as the linking modality to bind all other modalities such as depth/audio/text. [1] Audio set: An ontology and human-labeled dataset for audio events. Gemmeke et al. 2017. [2] ImageBind: One Embedding Space To Bind Them All. Girdhar et al. 2023 Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some technical concerns that I have: - The baseline linear probing accuracy on SVL-MNIST is rather poor. For instance, the vision-MNIST should at least have an accuracy above 95%, but the vision-only transformer in this paper only achieves ~80%. - Also, the text modality of SVL-MNIST seems to be noisy as its linear probing accuracy is only 60%. Is the poor accuracy resulting from low-shot training with 100 shots per class, or due to some label noise? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors discussed limitations of their approach from both technical and societal perspectives. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
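The commutative objective summarized in this review — next-token predicting B from A and A from B by concatenating the modality pair in both orders — can be sketched as follows. This is an illustrative sketch only: the modality tags (`<img>`, `<txt>`) and helper names are assumptions, not the paper's actual API.

```python
# Sketch of commutative modeling: for a pair (A, B), a causal decoder is
# trained on both orderings [A; B] and [B; A], so it learns to predict
# each modality from the other. Tag tokens and helpers are hypothetical.

def commutative_sequences(tokens_a, tokens_b, tag_a="<img>", tag_b="<txt>"):
    """Return both orderings of a modality pair for next-token prediction."""
    seq_ab = [tag_a] + tokens_a + [tag_b] + tokens_b  # predict B given A
    seq_ba = [tag_b] + tokens_b + [tag_a] + tokens_a  # predict A given B
    return [seq_ab, seq_ba]

def shifted_pairs(seq):
    """Standard causal LM targets: inputs are seq[:-1], targets are seq[1:]."""
    return seq[:-1], seq[1:]

pairs = commutative_sequences([1, 2, 3], [7, 8])
inputs, targets = shifted_pairs(pairs[0])
# inputs  = ['<img>', 1, 2, 3, '<txt>', 7]
# targets = [1, 2, 3, '<txt>', 7, 8]
```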
Rebuttal 1: Rebuttal: **Dear Reviewer SR2A,** We appreciate your time and effort in reading our manuscript and providing us with valuable feedback. It is wonderful to hear that the soundness, presentation, and contributions of our paper are considered excellent. We would like to address the remaining questions below. **Large-scale experiments.** We were faced with a limited computational budget. However, we were still able to perform extensive experiments on a synthetic and real medical dataset with meaningful applications. It is impressive to see that LoReTTa can effectively integrate highly complex omics data across different domains to improve survival predictions. The question, of course, is how to scale the model. According to the AudioSet website and paper, it is primarily a dataset for audio classification. The corresponding video clips must first be found and transcribed to obtain the video and text modalities. However, a recent video-language-audio multimodal dataset [Hayes et al., 2022] was recommended during the rebuttal. It contains 375,000 aligned data pairs sampled from a real-world reinforcement learning problem. Even with limited time and resources, we were able to finish the experiment and achieve performances that are competitive with the upper bound model (see global rebuttal). Because of this additional experiment, we could not provide the upper bound results for the other experiments. However, it was a reasonable decision to omit them. We considered a scenario where there is no dataset with all modalities present and aligned. Thus, in practice, we would not know the upper bound. The only relevant upper bound is the current state-of-the-art, which we have shown to outperform. **Linear probing accuracy.** Training on raw pixel values instead of image patches is more challenging since we lose potentially important spatial information [Chen et al., 2020; Jaegle et al., 2021; Yu et al., 2022]. 
We committed ourselves to this setting to make the classification problem of SVL-MNIST more challenging. Otherwise, all models would obtain high accuracy. This would have made it hard to see the effect of each individual design decision. We also chose the WineReview dataset because classifying wines based on written text alone is particularly challenging, as evidenced by public leaderboards. **New results.** The table below shows the results of our new experiments. Both GPT and LoReTTa were trained on disjoint (video, audio) and (video, text) pairs to solve the problem of cross-modal translation. We then evaluated the models on the unseen task of audio captioning. As can be seen, LoReTTa is more than capable of overcoming the modality gap and rivals the upper-bound models (MMGPT). More details can be found in the global rebuttal.

| Method | Train | Test | BLEU4 | METEOR | ROUGE |
|----------|----------|----------|----------|----------|----------|
| GPT | A $\rightarrow$ V, V $\rightarrow$ T | A $\rightarrow$ T | 1.7 | 18.5 | 30.7 |
| LoReTTa | A $\leftrightarrow$ V $\leftrightarrow$ T | A $\rightarrow$ T | 2.8 | 20.8 | 34.7 |
| MMGPT | A $\rightarrow$ T | A $\rightarrow$ T | 6.7 | 19.4 | 27.1 |
| MMGPT | V $\rightarrow$ T | V $\rightarrow$ T | 7.8 | 21.3 | 29.1 |

**References**
Chen et al., Generative Pretraining from Pixels, 2020
Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
Jaegle et al., Perceiver: General Perception with Iterative Attention, 2021
Yu et al., Scaling Autoregressive Models for Content-Rich Text-to-Image Generation, 2022
--- Rebuttal Comment 1.1: Title: 
Follow-up Comment: I appreciate the author responses. I still have a few follow-up comments. **1. I still don't understand why not include the suggested upper bound setups. These should be provided as a reference because the datasets you used contain all 3 modalities for each sample.** **2. Even a 1-layer linear NN achieves 88% test accuracy on MNIST (as can be found in Yann LeCun's original MNIST database page) and a simple KNN can achieve 95% test accuracy.** **3. The authors did not answer my question on the poor accuracy on the text modality of SVL-MNIST dataset.** --- Reply to Comment 1.1.1: Comment: We would like to answer all of the Reviewer's remaining open questions: **(1)** As explained in the rebuttal, we did not add the upper bound results due to computational constraints. During the rebuttal, we committed all resources to provide the large-scale experiment. We are currently running the upper bound experiments for TCGA-OMICS and MNIST-SVL, which the Reviewer correctly points out is an interesting comparison (for the MUGEN dataset, the upper bound is given and LoReTTa manages to come close or even exceed it). We will include the results in the final manuscript. If the results are available before the end of the discussion period, we will inform the Reviewer here. In particular, we pre-train two new models and fine-tune them a total of 28 times. **(2)** Our experiments on MNIST differ from the results mentioned by the Reviewer because we considered a few-shot scenario (100 samples per class). If we had trained our linear probe on all labeled samples, we would have gotten well over 90% accuracy. This would have made it difficult to see the impact of the powerful multimodal features learned by LoReTTa, which are useful for downstream tasks with few labeled samples, as can be seen in our experiments. For reference, when we fine-tune on all labels, we get 99.6% accuracy for image, 96.3% for audio, and 82.0% for text. 
**(3)** We have answered this question in the rebuttal (linear probing accuracy, last two sentences). The text modality from SVL-MNIST is exactly the WineReviews dataset. It is indeed noisy, since many words used to describe one wine could be attributed to another wine. And since taste is a subjective experience, two wine tasters could give different reviews for the same wine. We chose this dataset precisely to show the ambiguity of relying on only one modality. --- Reply to Comment 1.1.2: Title: Upper Bound Results Comment: We would like to update the reviewer on the "upper bound experiments". The new model is trained with C2M3 on all aligned modalities. We call it "C2M3 (3-modal)" and not "Upper Bound Model" because training with all modalities does not guarantee the best result due to negative transfer and modality competition, which can be mitigated by more advanced techniques like transitive modeling as shown by LoReTTa. For SVL-MNIST, we have highlighted the accuracies of the modalities that were not seen during pre-training. The subscript indicates the linking modality, i.e., image (I), text (T), and speech (W). For the TCGA-OMICS experiments, we have highlighted the best c-index without the "upper bound". As can be seen, LoReTTa is consistently close to or even better than the full model. These results are also consistent with those from the large-scale experiment on the MUGEN dataset. 
| Method | IMG | TXT | WAV | IMG-TXT | IMG-WAV | TXT-WAV | IMG-TXT-WAV |
|----------|----------|----------|----------|----------|----------|----------|----------|
| LoReTTa$_I$ | 82.7 | 62.8 | 82.5 | 88.5 | 89.7 | $\textbf{84.0}$ | $\textbf{90.7}$ |
| LoReTTa$_T$ | 81.2 | 63.3 | 81.0 | 89.0 | $\textbf{80.9}$ | 85.5 | $\textbf{87.8}$ |
| LoReTTa$_W$ | 80.1 | 62.1 | 84.2 | $\textbf{83.0}$ | 90.4 | 89.0 | $\textbf{91.6}$ |
| C2M3 (3-modal) | 80.8 | 56.6 | 82.4 | 84.9 | 86.3 | 84.9 | 88.3 |

| Method | mRNA | miRNA | RPPA | mRNA-miRNA | mRNA-RPPA | miRNA-RPPA | mRNA-miRNA-RPPA |
|----------|----------|----------|----------|----------|----------|----------|----------|
| BERT | 0.592 | 0.621 | 0.594 | 0.595 | 0.613 | 0.594 | 0.610 |
| GPT | 0.575 | 0.590 | 0.611 | 0.585 | 0.576 | 0.606 | 0.588 |
| CLIP | 0.561 | 0.603 | 0.587 | 0.610 | 0.600 | $\textbf{0.623}$ | 0.612 |
| C2M3 | 0.621 | 0.599 | 0.599 | 0.624 | 0.620 | 0.571 | 0.624 |
| LoReTTa | $\textbf{0.652}$ | $\textbf{0.623}$ | 0.563 | $\textbf{0.660}$ | $\textbf{0.652}$ | $\textbf{0.623}$ | $\textbf{0.657}$ |
| C2M3 (3-modal) | 0.665 | 0.635 | 0.645 | 0.643 | 0.671 | 0.650 | 0.620 |
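The 100-shot linear-probing protocol discussed in this thread — freeze the pretrained encoder, keep only a few labeled examples per class, and fit a lightweight classifier on the frozen features — can be sketched as below. This is a toy sketch: a nearest-centroid probe stands in for the linear classifier, and the "frozen features" are synthetic Gaussians, not the paper's actual representations.

```python
import numpy as np

# Few-shot probing sketch: sample k labeled examples per class from frozen
# features, fit a simple probe, and evaluate accuracy. All data is synthetic.

def few_shot_subset(features, labels, k, seed=0):
    """Pick k examples per class, mimicking the 100-shot setup."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(labels):
        cls = np.flatnonzero(labels == c)
        idx.extend(rng.choice(cls, size=k, replace=False))
    idx = np.array(idx)
    return features[idx], labels[idx]

def centroid_probe(train_x, train_y, test_x):
    """Classify each test feature by its nearest class centroid."""
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Synthetic "frozen features": two well-separated Gaussian classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, (200, 8)), rng.normal(5, 1, (200, 8))])
y = np.concatenate([np.zeros(200, int), np.ones(200, int)])
tx, ty = few_shot_subset(x, y, k=10)
acc = (centroid_probe(tx, ty, x) == y).mean()  # near-perfect on separated classes
```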
Summary: This paper proposes a new method LoReTTa (Linking mOdalities with a tRansitive and commutativE pre-Training sTrAtegy) to address the setting where not all modality pairs are available during training and inference. Concretely, it leverages causal masked modeling and transitive modeling to accomplish the objective. Theoretical analysis has been provided to help the understanding of the proposed method, with empirical results presented as well. Strengths: + The target problem is practical and meaningful, which may indeed occur in real implementations of multimodal algorithms. + The proposed method is simple and easy to follow. + Theoretical analysis contributes to the understanding. Weaknesses: I am not an expert in this field, but I find some issues that might affect the quality of this manuscript. - The proposed method seems a bit tricky; I am concerned whether this is up to the standard of NeurIPS. And the authors should implement ablation studies to verify the effectiveness of each proposed technique. - In transitive modeling, we need to use a "predicted" modality to bridge the missing modality pair. However, since the prediction cannot be that accurate, how can we ensure robustness against such noise? - The experimental datasets are sort of small (e.g. the SVL-MNIST dataset). It would be better if experiments are conducted on more datasets or even larger ones. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: None. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer FSDR,** It is a great pleasure to hear that our work addresses a practical and meaningful problem. Our proposed algorithm is indeed simple but has a big impact. We demonstrated this by applying our novel method to a real medical problem. The results we saw were fully consistent with our theoretical analysis of the mechanisms behind LoReTTa. In the following sections, we would like to address the remaining concerns. **Ablation studies.** We have not forgotten about the ablation studies. They are critical to understanding which component contributes to which outcome. The two new concepts we have introduced are commutative modeling and transitive modeling. We applied commutative modeling to CM3 [Aghajanyan et al., 2022] to get C2M3. We then extended C2M3 with transitive modeling – resulting in LoReTTa. CM3 and its variants [Aghajanyan et al., 2022; Bavarian et al., 2022; Fried et al., 2023] are themselves extensions of GPT [Radford et al., 2018] and BERT [Devlin et al., 2018]. Thus, an ablation study for causal masked modeling has already been done in the literature (they show noticeable improvements). Nevertheless, we used all of these models as baselines in our real-world experiment on TCGA-OMICS, comparing GPT, BERT, C2M3 (GPT + BERT + Commutative Modeling), LoReTTa (C2M3 + Transitive Modeling), as well as CLIP [Radford et al., 2022]. Hence, our benchmark automatically included an ablation study. Since C2M3 is already a strong improvement over GPT, BERT, and CLIP, we only compared LoReTTa with C2M3 in our SVL-MNIST experiments, where we extensively studied the effect of transitive modeling under different scenarios. **Robustness against noise.** LoReTTa predicts the missing modality to bridge the modality gap. This only works if the prediction is accurate. Otherwise, we would be training with noisy samples. We ensure this by pre-training our transformer with C2M3 until convergence. 
It learns to effectively transition between all seen modality pairs through causal modeling, within modalities through masked modeling, and in both directions through commutative modeling. With this strong generative model, we are able to faithfully infer the missing modality. We further reduce the noise by adding another level of alignment: using the predicted modality to infer the omitted sample (see Figure 2 for more details). **Large-scale experiments.** We chose SVL-MNIST to pre-train 12 models and fine-tune them a total of 162 times. This allowed us to gain valuable insights into the inner workings of transitive modeling with a limited computational budget. As a real-world application, we chose the medical domain, which is known to lack large datasets for training foundation models. It is still worth noting that LoReTTa already gives impressive results on a smaller medical dataset with highly complex modalities such as genomics, transcriptomics, or proteomics. Nevertheless, it is interesting to see how well LoReTTa scales to larger experiments. Thanks to the valuable feedback we received during the rebuttal, we were directed to a new multimodal dataset with 375,000 samples from a real-world vision-language-audio problem [Hayes et al., 2022]. We immediately downloaded the data and started training. The results are presented in the global rebuttal and show that LoReTTa is scalable and effective in more complex (non-medical) environments. **New results**. The table below shows the results of our new experiments. Both GPT and LoReTTa were trained on disjoint (video, audio) and (video, text) pairs to solve the problem of cross-modal translation. We then evaluated the models on the unseen task of audio captioning. As can be seen, LoReTTa is more than capable of overcoming the modality gap and rivals the upper-bound models (MMGPT). More details can be found in the global rebuttal. 
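The transitive step described in this rebuttal — synthesizing the missing modality from the linking modality to form a pseudo pair — can be illustrated with a toy sketch. `make_pseudo_pair` and the trivial `toy_generate` below are hypothetical stand-ins for the converged generative model, not the paper's implementation.

```python
# Toy sketch of transitive modeling: given an observed (A, B) pair, a
# generative model synthesizes the missing modality C from the linking
# modality B, yielding a pseudo (C, A) pair for training.

def make_pseudo_pair(pair, missing, generate):
    """pair: dict with two observed modalities; B ('b') is the link."""
    link = "b"
    kept = next(m for m in pair if m != link)
    synthetic = generate(pair[link], target=missing)
    return {missing: synthetic, kept: pair[kept]}

# A deterministic toy "generator" that just re-tags the linking tokens.
toy_generate = lambda tokens, target: [f"{target}:{t}" for t in tokens]

pseudo = make_pseudo_pair({"a": [1, 2], "b": [3, 4]}, missing="c",
                          generate=toy_generate)
# pseudo == {"c": ["c:3", "c:4"], "a": [1, 2]}
```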
| Method | Train | Test | BLEU4 | METEOR | ROUGE |
|----------|----------|----------|----------|----------|----------|
| GPT | A $\rightarrow$ V, V $\rightarrow$ T | A $\rightarrow$ T | 1.7 | 18.5 | 30.7 |
| LoReTTa | A $\leftrightarrow$ V $\leftrightarrow$ T | A $\rightarrow$ T | 2.8 | 20.8 | 34.7 |
| MMGPT | A $\rightarrow$ T | A $\rightarrow$ T | 6.7 | 19.4 | 27.1 |
| MMGPT | V $\rightarrow$ T | V $\rightarrow$ T | 7.8 | 21.3 | 29.1 |

**References**
Aghajanyan et al., CM3: A Causal Masked Multimodal Model of the Internet, 2022
Bavarian et al., Efficient Training of Language Models to Fill in the Middle, 2022
Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018
Fried et al., InCoder: A Generative Model for Code Infilling and Synthesis, 2023
Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
Radford et al., Improving Language Understanding by Generative Pre-Training, 2018
Radford et al., Learning Transferable Visual Models From Natural Language Supervision, 2022
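The causal masked (CM3-style) transformation referenced in this rebuttal — cutting a contiguous span out of the sequence and re-appending it after a sentinel, so a causal decoder can infill with bidirectional context — can be sketched roughly as follows. The span positions are fixed here for illustration; CM3 samples them at random, and the sentinel token name is an assumption.

```python
# Sketch of a CM3-style causal masked transformation: the span is replaced
# by a sentinel at its original position and relocated to the sequence tail,
# so a left-to-right decoder sees the full remaining context before infilling.

def causal_mask(tokens, start, end, sentinel="<mask>"):
    """Move tokens[start:end] to the tail, marked by a sentinel."""
    span = tokens[start:end]
    return tokens[:start] + [sentinel] + tokens[end:] + [sentinel] + span

seq = causal_mask(list("abcdef"), start=2, end=4)
# seq == ['a', 'b', '<mask>', 'e', 'f', '<mask>', 'c', 'd']
```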
Summary: This paper aims to remedy the problem of training a model to perform well on any combination of modalities in the training data, regardless of whether these combinations appear in the training data. This is done through commutative and transitive pretraining: the former allows the model to learn modality A from B and B from A, and the latter learns the missing joint distributions. This results in a model which outperforms models trained on fewer modalities in the pre-training stage or models trained without transitive modeling. Strengths: This paper addresses an important and under-addressed problem of training with mismatching modality pairings, and their commutative and transitive modeling is shown to be very effective for training on both toy and real datasets. While I do not keep up closely with the literature in many-modality pretraining, if this is the first paper to address this problem of training on mismatching modalities, it is incredibly valuable to the community and would be a good baseline for any future methods due to its simplicity. Weaknesses: My main concerns are surrounding the writing, as I found the methods section to be quite wordy and I think you could remove a lot of pretext on transformers and existing pretraining strategies. Furthermore, possibly due to the density of the method section, I found the explanation of the perplexity to be a bit confusing. Aside from the writing, I wonder what the difference in compute requirements is between the authors' method and the baselines. It was mentioned that LoReTTa was trained on one A100 for 3 days, which seems like a large amount of compute for the datasets presented. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The main limitation I think is missing is the amount of compute required. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
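For reference alongside this review's comment on the perplexity analysis: perplexity is the exponential of the mean per-token negative log-likelihood, so a lower value means the model assigns higher probability to the held-out tokens. A minimal sketch, with made-up probabilities for illustration:

```python
import math

# Perplexity sketch: exp of the average negative log-likelihood over tokens.

def perplexity(token_probs):
    """token_probs: model probability assigned to each ground-truth token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

uniform = perplexity([0.25, 0.25, 0.25, 0.25])  # = 4.0: no better than guessing among 4
confident = perplexity([0.9, 0.8, 0.95, 0.85])  # close to 1: model is rarely surprised
```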
Rebuttal 1: Rebuttal: **Dear Reviewer 7Goz,** We sincerely appreciate your critical insights and expertise during the review of our manuscript. We are greatly encouraged by your recognition that our work addresses an important and understudied problem. Such recognition underscores the relevance of our research and further motivates us. Below, we would like to address your two concerns in detail. **Writing.** Our proposed learning algorithm covers a wide area of research including transformers, computer vision, natural language processing, audio signal processing, computational pathology, self-supervised learning, and multimodal learning. We sought to appropriately contextualize our method within the existing literature and bring readers from a wide variety of backgrounds on the same page, as we believe that LoReTTa will have a profound impact in a wide variety of fields. Still, we appreciate your concern and will make the related work and methods section even more precise. **Compute requirements.** LoReTTa relies on the auto-regressive generation of the missing modality for transitive training. This is a slow process for transformer decoders. However, we are happy to report that our community has made great progress in reducing computational complexity. Recently, FlashAttention-2 [Dao, 2023] has been announced, which doubles the speed of FlashAttention [Dao et al., 2022]. Multi-query attention [Shazeer, 2019] and grouped-query attention [Ainslie et al., 2023] have been shown to reduce inference time by a factor of up to 6. In addition, speculative sampling [Chen et al., 2023] achieves a 2-2.5x decoding speedup. The combination of all these advances promises to significantly reduce the training time of LoReTTa. We have omitted them to avoid too many moving parts in our experiments that would distract from the main results. In the global rebuttal, we presented the results of an additional experiment on a large-scale dataset [Hayes et al., 2022] with 375,000 samples. 
We were able to achieve excellent results with LoReTTa with a limited time budget of just a few days. **New results**. The table below shows the results of our new experiments. Both GPT and LoReTTa were trained on disjoint (video, audio) and (video, text) pairs to solve the problem of cross-modal translation. We then evaluated the models on the unseen task of audio captioning. As can be seen, LoReTTa is more than capable of overcoming the modality gap and rivals the upper-bound models (MMGPT). More details can be found in the global rebuttal.

| Method | Train | Test | BLEU4 | METEOR | ROUGE |
|----------|----------|----------|----------|----------|----------|
| GPT | A $\rightarrow$ V, V $\rightarrow$ T | A $\rightarrow$ T | 1.7 | 18.5 | 30.7 |
| LoReTTa | A $\leftrightarrow$ V $\leftrightarrow$ T | A $\rightarrow$ T | 2.8 | 20.8 | 34.7 |
| MMGPT | A $\rightarrow$ T | A $\rightarrow$ T | 6.7 | 19.4 | 27.1 |
| MMGPT | V $\rightarrow$ T | V $\rightarrow$ T | 7.8 | 21.3 | 29.1 |

**References**
Ainslie et al., GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, 2023
Chen et al., Accelerating Large Language Model Decoding with Speculative Sampling, 2023
Dao et al., FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness, 2022
Dao, FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning, 2023
Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
Shazeer, Fast Transformer Decoding: One Write-Head is All You Need, 2019
--- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the detailed response. 
After reviewing the comments of the other reviewers, my thoughts on the utility of the paper have not changed, but I do have similar concerns to QEx4 about reproducibility and paper structure, with most of my concerns on the former. I am unsure why the authors said they would release code to process the data but not the full training code, and this sparks a fair amount of concern. Even with a detailed description of the implementation, not releasing the code to reproduce the experiments exactly leaves plausible deniability if others' implementations do not perform similarly. I apologize for responding so late and I don't expect any further experimental results or detailed explanations in the remaining response time. That being said, the concerns around reproducibility have caused me to reduce my score from a strong accept to accept, but I am happy to hear from the authors if they think my concerns are not well founded. --- Reply to Comment 1.1.1: Comment: We fully understand the reviewer's concerns and agree that reproducibility is an important topic. At this time, we are unable to make our code publicly available. This decision is beyond the direct control of the authors. Given our constraints, we have tried to provide as much detail as possible in the paper, appendix, and rebuttal/discussion period to help readers implement our proposed method. We are working behind the scenes to publish the code, but cannot promise a release. --- Reply to Comment 1.1.2: Title: Pseudo Code Comment: In addition to the details outlined in the paper, we will also include pseudocode in the final submission to better illustrate our approach:
```
class LoReTTa:
    """Pseudo-code for commutative and transitive modeling."""

    def forward(self, tokens, modes=('commutative', 'transitive')):
        """
        tokens ...... tokenized inputs, e.g., [x_0, ..., x_n, y_0, ..., y_m]
        x_0, y_0 .... modality-specific tokens: 'a', 'b', or 'c'
        """
        if 'commutative' in modes:  # randomly swap the order of the modalities
            tokens = self.shuffle_modalities(tokens)

        if 'transitive' in modes:  # generate the missing modality
            existing_modalities = self.extract_modality_tokens(tokens)
            if existing_modalities == ['a', 'b']:  # case 1: (A, B) observed
                modality_a, modality_b = self.split_tokens(tokens)
                modality_c = self.model.generate([modality_b, 'c'])
                tokens = [modality_c, modality_a]
            elif existing_modalities == ['b', 'c']:  # case 2: (B, C) observed
                modality_b, modality_c = self.split_tokens(tokens)
                modality_a = self.model.generate([modality_b, 'a'])
                tokens = [modality_a, modality_c]

        logits = self.model(tokens[:, :-1])     # get predictions
        targets = tokens[:, 1:]                 # shift targets by one position
        loss = self.criterion(logits, targets)  # calculate cross-entropy loss
        return self.split_loss(loss)            # individual loss per modality
```
Summary: In this paper, the authors introduce the problem of learning from multi-modal data (e.g. modalities A, B, and C) if not all modality combinations are available at training time (e.g. only paired data (A, B) and (B, C) are given). This constitutes a highly relevant research problem, as learning from multiple modalities has shown a lot of potential, but collecting fully annotated multi-modal datasets is a costly endeavour. In order to take advantage of _existing_ data and modality combinations, the authors propose a self-supervised learning paradigm (LoReTTa), which combines cycle consistency with masked modelling: i.e., the models are trained to predict masked tokens in A from B (masked modelling) or C from A via B (masked modelling with cycle consistency). As a result, the underlying model is able to perform well not only on the seen modality combinations (A, B) and (B, C), but is also able to handle unseen combinations such as (A, C) or (A, B, C). In particular, the authors show on a synthetically created data pairing (A=MNIST + B=AudioMNIST + C=WineReviews) that a model trained with cycle consistency (LoReTTa) achieves significantly lower perplexity and higher linear probing accuracy on the test data than models trained without such cycle consistency. Additionally, the authors evaluate their approach on the real-world TCGA-OMICS dataset and report consistent performance improvements, especially on unseen modality combinations. Strengths: This paper is a good submission for the following reasons. - S1: The authors introduce a novel and highly relevant setting for learning from multiple modalities. - S2: The proposed approach is well-motivated and simple yet effective. - S3: The presentation and writing make this paper a pleasure to read. - S4: The experimental results show convincing and significant improvements over the chosen baseline models (however, see W1). 
Weaknesses: While I find this to be a good submission in general, there are several points of concern that make me hesitant. Specifically, I would highly appreciate additional feedback from the authors on the following aspects. - W1 (baselines and evaluation): The two main novel contributions of this paper seem to be (1) introducing the problem of learning from multi-modal data with disjoint modality pairings, and (2) showing that cycle consistency can be highly beneficial in this context. In order to validate (2), the authors compare to backbones without this cycle consistency in Tables 1+2, and to other training paradigms (contrastive loss, CLIP) or that use different model architectures without cycle consistency (different attention masks in GPT and BERT) in Table 3. While C2M3 seems to be a strong contender among the models without cycle consistency (Table 3), the other baselines yield competitive results, too. __My question in this context is thus the following__: why do the authors only include the cycle consistency in the C2M3 model? In principle, cycle consistency could also be integrated into BERT and GPT. It seems to me like the submission could be strengthened by showing that all of the baselines benefit from the addition of cycle consistency. - W2 (causal modelling): Following from W1: what is the motivation to focus on causal masked modelling in the first place? In the context of multi-modal learning, I fail to see why a causal attention structure is desired. Even more, only due to the causal modelling approach does the need for commutative switching arise, which the authors introduce as an important contribution. If the authors were to apply the cycle consistency approach to BERT e.g., this would not be necessary. - W3 (structure of the paper): Especially in light of W1 and W2, the space usage in the paper seems to be suboptimal to me. 
E.g., half a page in the method section is spent on the model architecture and tokenization options, which seem to be experimental details rather than essential aspects of the proposed method. Similarly, the definition and discussion of what constitutes a modality seems to be tangential, yet is given another 1/2 page in section 4. More generally, while interesting to read, the relevance of section 4 and its integration with the rest of the paper are not fully convincing to me. Instead, I think the paper would benefit from an extended experimental evaluation, including additional comparisons to other methods under the addition of cycle consistency (see W1) and potentially additional datasets and methods (e.g., as in [4]), which would allow for a more fine-grained understanding of the benefits and limitations of the proposed method (e.g., how much does it help if full modality combinations are in fact, at least sometimes, available?) - W4 (reproducibility), following from W3: While the authors spend a significant amount of space on describing tokenization strategies, it remains unclear to me when and how e.g. VQ-VAEs or CNNs are used, as the authors only mention the usage of PCA in the experimental details. With the current amount of information given, I am concerned about the reproducibility, as the final model architectures and the tokenization (where do CNNs and VQ-VAEs enter the picture) remain unclear to me. Additional minor remarks: - In the light of the strong results reported in ImageBind [29], it would be interesting to include additional discussion on how LoReTTa conceptually compares to CLIP (bringing 'transitivity' to masked modelling, which is somewhat inherent in CLIP?), how LoReTTa might perform in such a large scale setting, and why CLIP might perform suboptimally in the reported results (Table 3). Does CLIP simply require more data? - The authors argue that their transitivity approach is 'much more general' than cycle consistency (see caption Fig. 
2, or line 153). I currently fail to see the difference, however, and would appreciate if the authors could elaborate. - Line 313: 'communicative switching' -> do the authors mean commutative? Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Please see the weakness section. If the authors are able to address my concerns, I will gladly update my score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed societal impacts and limitations of their work. Nonetheless, as discussed in the weakness section, I believe the paper could further benefit from an increased discussion and comparison to related works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer QEx4,** We are very grateful for the thorough review and appreciate that our proposed method is considered novel and highly relevant, showing significant and consistent improvements across different baselines. While we have addressed many of the Reviewer's concerns in the global rebuttal, we provide a customized response below. **W1 (baselines and evaluation).** We refer to our approach as transitive modeling instead of cycle consistency [Zhu et al., 2017] because, upon closer inspection, the two are quite different. CycleGAN was proposed to transition between two domains (image styles) of the same modality (image), while LoReTTa can transition between three or more modalities (i.e., image, text, and audio). It also avoids using the input as a target, which could cause the representation to collapse. Furthermore, our approach works with more complex modality relations, as described in the global rebuttal. All this makes transitive modeling much more general and applicable than cycle consistency – with which it again shares only some high-level conceptual similarities. By design of transitive modeling, the missing modality must be generated. This is not possible with BERT [Devlin et al., 2018]. It is trained with masked modeling, which simply predicts the missing tokens, not a completely missing sequence on the right. On the other hand, GPT-style [Radford et al., 2018] models are generative and can be extended with our proposed transitive modeling approach. In particular, we start with CM3, which improves GPT by introducing causal masked modeling [Aghajanyan et al., 2022; Bavarian et al., 2022; Fried et al., 2023]. This effectively combines BERT and GPT into a single model. We extend CM3 with commutative (C2M3) and transitive (LoReTTa) modeling. Thus, with LoReTTa we have a unified architecture that includes transitive modeling in both GPT and also BERT. The latter would not be possible otherwise, as explained above. 
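For readers unfamiliar with the FIM/CM3-style transformation discussed above, here is a minimal sketch of how a masked span can be rotated to the end of a sequence so that a purely causal (left-to-right) model learns to infill it. The sentinel token and span-sampling scheme below are illustrative assumptions, not the authors' exact recipe:

```python
import random

def causal_mask_transform(tokens, mask_token="<mask:0>", rng=None):
    """Rotate one contiguous span to the end of the sequence so that a
    causal model can be trained to infill it (FIM/CM3-style sketch)."""
    rng = rng or random.Random(0)
    n = len(tokens)
    i = rng.randrange(n)             # span start (illustrative sampling)
    j = rng.randrange(i + 1, n + 1)  # span end (exclusive)
    span = tokens[i:j]
    # prefix + sentinel + suffix + sentinel + moved span
    return tokens[:i] + [mask_token] + tokens[j:] + [mask_token] + span

seq = list("abcdefgh")
out = causal_mask_transform(seq)
```

The model then predicts the moved span after the second sentinel with a standard next-token loss, which is how causal masked modeling "reinterprets masked modeling as a causal problem".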
**W2 (causal modeling).** To use transitive modeling, we must predict the missing modality by generating it in its entirety. This requires a generative model, which is only possible with causal training in a sequential model. However, we are aware of the advantages of bidirectional attention in BERT over causal masks in GPT. Therefore, we use a unified version called causal masked modeling to take advantage of both. This is our way to use transitive modeling in BERT since it is not intended to be used for generative tasks. **W3 (structure of the paper).** We provide an additional large-scale experiment on the MUGEN [Hayes et al., 2022] dataset with 375,000 aligned video, audio, and text samples to make our empirical findings even more comprehensive. In particular, we apply LoReTTa to the task of cross-modal translation (the task is analogous to visual question answering). This generative task is not possible with the masked (BERT) and contrastive (CLIP) approaches. Therefore, we compare it only with GPT and its commutative (C2M3) and transitive version (LoReTTa). Thanks to the Reviewer's suggestions, we will restructure our paper to make room for this additional experiment. **W4 (reproducibility).** We briefly mentioned our tokenization scheme in the methods and results sections. Since the MNIST dataset contains low-dimensional samples, we encoded each byte stream as a token. For the high-dimensional TCGA dataset, we used PCA for dimensionality reduction and binned each dimension into tokens. For the new dataset, we used pre-trained video and audio VQ-VAE encoders provided along with the dataset. We will emphasize this more in the final version. **Comparison to CLIP and ImageBind.** The advantage of LoReTTa is that it can be used as both an encoder and a decoder to solve both discriminative and generative problems, as demonstrated in our old and new experiments. 
CLIP [Radford et al., 2021] and its extension ImageBind [Girdhar et al., 2023] (published shortly before the NeurIPS 2023 submission deadline) train their models primarily as encoders – one for each modality. For generative tasks, a diffusion model or other variants must be fitted to these embeddings. Thus, contrastive methods must train multiple encoders and decoders. Worse, contrastive training is inefficient. It requires large batch sizes and large datasets [Chen et al., 2020]. This is because the self-supervised contrastive loss only clusters embeddings from the same sample and pushes away those from other samples. Consider a batch with an image of a cat and an accompanying text. In the same batch, there is another image of a similar cat with a similar text description. Now, the CLIP loss will individually pull the feature vectors of the first and second image-text pairs closer together. But at the same time, it will push the representations of the two samples away from each other, regardless of their similarity. **New results.** Please have a look at the global rebuttal, where we have added the additional large-scale experiments. 
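The batch-level pull/push behavior described above can be made concrete with a minimal sketch of a symmetric contrastive (InfoNCE-style) loss; the embeddings, dimensions, and temperature below are illustrative, not CLIP's actual implementation:

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss sketch: matched pairs sit on the
    diagonal of the similarity matrix; every off-diagonal pair is a
    negative, regardless of semantic similarity."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(logits))

    def xent(l):  # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return (xent(logits) + xent(logits.T)) / 2  # image->text and text->image

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
loss_aligned = clip_loss(a, a)                       # perfectly matched pairs
loss_shuffled = clip_loss(a, np.roll(a, 1, axis=0))  # mismatched pairs
```

Note that two semantically similar samples in the same batch still repel each other under this loss, which is one reason contrastive training tends to need large batches and large datasets.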
**References**
- Aghajanyan et al., CM3: A Causal Masked Multimodal Model of the Internet, 2022
- Bavarian et al., Efficient Training of Language Models to Fill in the Middle, 2022
- Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, 2020
- Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018
- Fried et al., InCoder: A Generative Model for Code Infilling and Synthesis, 2023
- Girdhar et al., ImageBind: One Embedding Space To Bind Them All, 2023
- Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
- Radford et al., Improving Language Understanding by Generative Pre-Training, 2018
- Radford et al., Learning Transferable Visual Models From Natural Language Supervision, 2021
- Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017

--- Rebuttal Comment 1.1: Comment: I am very grateful for the detailed rebuttal and thank the authors for their effort. For now, I still have the following questions: 1. The authors state that 'This is not possible with BERT [Devlin et al., 2018]. It is trained with masked modeling, which simply predicts the missing tokens, not a completely missing sequence on the right.'. I am not fully convinced by this argument for two reasons, and I would highly appreciate if the authors could clarify a potential misunderstanding from my side. First, predicting 'the missing tokens' versus 'a sequence to the right' seems to differ mainly in the mask layout — I do not understand why instead of a random masking scheme (as done in BERT) one could not simply mask most or the entirety of a given modality (except for modality specific encodings to signal which modality should be reconstructed). This would effectively constitute conditional generation with BERT. 
Secondly, the notion of 'a sequence on the right' is confusing to me in the context of transformers, which — without a causal masking structure (as in BERT) — are agnostic to the token order. As I mentioned in W2, BERT would not even require any commutative switching, since the model would be agnostic to the placement of the additional modality. As evidenced by the introduction of the commutative switching, the authors seem to agree with me that a causal dependence between the modalities is not necessarily desirable and BERT would thus lend itself to such a setup. 2. I am still not convinced that the transitive modelling approach is conceptually very different from cycle consistency and I would recommend not running the risk of overclaiming this contribution. The main difference seems to be the extension from a single step ($A\rightarrow B$) to multiple steps ($A\rightarrow B\rightarrow C$); whether or not $A, B, C$ are from the same modality seems to be a semantic detail. In this context, I would also like to point out the 2019 paper [Learning Robust Joint Representations by Cyclic Translations Between Modalities](https://dl.acm.org/doi/10.1609/aaai.v33i01.33016892) as an additional citation that the authors seem to have missed. This paper seems highly relevant to the proposed approach, and I would kindly like to ask the authors if they could discuss commonalities and differences to this paper within the rebuttal period. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's recognition of the time and effort we put into the rebuttal and the additional experiments provided. We would like to address the two remaining questions below. **(1.1)** As we understand it, the reviewer's concern is why we did not mask the missing modality and use bidirectional attention for the remaining modality. 
By reordering the modalities so that all masked modalities are at the end of the input, this proposal is equivalent to prefix modeling as explored in T5 [Raffel et al., 2019] and SimVLM [Wang et al., 2021]. With prefix modeling, the model has the full context of the unmasked modalities while still being able to predict the missing ones. In this regard, we highly recommend the recent paper by Artetxe et al. (2022), which explores different pre-training strategies such as causal, masked, and prefix modeling. They show that both causal modeling and causal masked modeling outperform prefix modeling as well as masked modeling for generative tasks. Even for infilling tasks (which is the main learning objective of BERT pre-training), causal modeling and causal masked modeling are still highly competitive compared to masked approaches. Since LoReTTa relies heavily on predicting the missing modality for transitive pre-training, strong generative performance (i.e., low perplexity) is crucial. In fact, we have shown this in our discussion of the error bound. In conclusion, we agree with the reviewer that while it is in principle possible to use BERT for generative tasks, it is not a good foundation for transitive modeling, as the literature has shown that this yields a weak baseline for the intended use case. **(1.2)** A pure transformer is indeed permutation invariant. However, one must add modality-specific positional encoding to correctly capture the spatio-temporal relations of an image, sentence, or song. Thus, the representation of each token depends strongly on its context. In bidirectional masking, a token is allowed to consider the "past" and the "future", whereas in unidirectional masking, a token can only attend to previous information. We agree that the former is equivalent to the latter if all masking happens to be on the right. In practice, this is unlikely. Hence, BERT and GPT are very different models. Again, we refer to the paper by Artetxe et al. 
(2022) for a deeper dive into this very fascinating topic. Their results fit perfectly with what we know about sequence models. Intuitively, a bidirectional mask considers information from both left and right to infer the missing information. It was never designed to rely on left information alone (except in the rare case where only right tokens were randomly masked). There is another good paper on this topic [Wang and Cho, 2019] that explores the generative capability of BERT by iteratively placing masked tokens at the end of the input to generate the output. The results are consistent with the recent paper by Artetxe et al. (2022) in that the generated output is inferior to GPT's. Since the goal of transitive modeling is to obtain high-quality predictions of the missing modality, a causal approach is the best choice, as evidenced in the literature. We thank the reviewer for pointing out this topic, which may cause confusion for future readers. That is why we will include the two mentioned papers in the final version of our paper. **(2.0)** Please see the second post for an answer to question number two.
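The attention-mask distinction discussed in this thread (bidirectional BERT-style attention, causal GPT-style attention, and prefix modeling as in T5/SimVLM) can be visualized with a small sketch; the helper below is illustrative only:

```python
import numpy as np

def attention_mask(n, kind, prefix_len=0):
    """Boolean matrix where mask[q, k] == True means query position q
    may attend to key position k. Illustrative sketch of the three
    masking schemes discussed above."""
    if kind == "bidirectional":  # BERT: every token sees every token
        return np.ones((n, n), dtype=bool)
    causal = np.tril(np.ones((n, n), dtype=bool))  # GPT: past only
    if kind == "causal":
        return causal
    if kind == "prefix":  # T5/SimVLM: full attention within the prefix
        m = causal.copy()
        m[:, :prefix_len] = True
        return m
    raise ValueError(kind)
```

Under the causal mask a token can only attend to previous positions, which is exactly why "a sequence on the right" is meaningful for GPT-style models but not for bidirectional BERT.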
Rebuttal 1: Rebuttal: **Dear Area Chair and Reviewers,** We are very grateful for your insightful comments and sincerely appreciate the time and effort you have taken to provide constructive feedback. It is gratifying to hear that all reviewers unanimously agree that the paper is well-written and well-motivated. They acknowledge that our proposed method is "widely applicable" [a22R] and "valuable as it addresses a real-world problem" [u4o4]. In particular, our "results show convincing and significant improvements over the chosen baseline models" [QEx4] and "would be a good baseline for any future methods" [7Goz]. We are also thrilled to highlight that the "theoretical analysis contributes to the understanding" [FSDR] of our novel multi-modal and self-supervised framework LoReTTa. Moreover, the extensive experiments "show that the proposed system is very effective" and are supported by "careful ablations against mainstream popular pre-training objectives" [SR2A]. Below, we would like to respond to the most common concerns raised by reviewers. We have also addressed individual questions in the comments section. **Extending Transitive Modeling to BERT and GPT.** We would like to clarify the differences between LoReTTa, C2M3, GPT, and BERT, as this has caused some confusion. BERT uses masked modeling, which has proven to be a powerful approach for training encoders and bidirectional representations. GPT, on the other hand, uses causal modeling and unidirectional representations to train autoregressive decoders. GPT excels at generative tasks but has more difficulty with discriminative tasks as compared to BERT. Conversely, BERT is not designed to solve generative problems, which is also a drawback of the contrastive method CLIP. FIM and InCoder combine the strengths of both BERT and GPT by introducing causal masked modeling, which reinterprets masked modeling as a causal problem. Thus, we get a GPT-style architecture augmented by BERT-style training. 
This method was extended to the multimodal case by CM3. We go further and apply commutative modeling to CM3, yielding C2M3. The next step is to incorporate transitive modeling and we arrive at LoReTTa. Thus, LoReTTa unifies BERT and GPT training by combining and extending them. In essence, C2M3 is an extended version of GPT, while BERT and CLIP cannot be directly modified with transitive modeling because they are unable to generate the missing modality. For example, one would have to train a diffusion model on top of the CLIP embeddings to generate an image as in DALLE-2. **Comparison to CycleGAN.** We agree that LoReTTa has a similar flavor to the cycle consistency concept introduced in CycleGAN. However, our proposed approach is much more general and applicable. The main idea of CycleGAN is to translate an image from domain A to domain B, compute the adversarial loss on the generated image, and reconstruct the original image from the generated image. This involves two domains from the same modality. On the other hand, LoReTTa works on three or more domains from three or more modalities. In addition, we avoid modeling a → b'→ a as in CycleGAN. Instead, we use the idea of transitivity to model a → b' → c, given the pair (a, c). This is a stronger learning signal since we have to reconstruct an entirely new data point c that was not part of the model input (a). In fact, this idea can be generalized to more complex multimodal relationships such as (A, B), (A, D), (A, C), and (B, C) (as shown in Figure 2d). As long as there is a path from one modality to the other, we can use the principle of transitivity to approximate the full data distribution. How would we learn the missing distribution (C, D)? By taking a real data point (a, d) and learning the consistent transition a → b' → c' → d or a → c' → d, respectively. This would not be possible at all using only the CycleGAN approach. 
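The transitive step a → b' → c described above can be sketched at the data-pipeline level; the `generate` stub below stands in for the pretrained model's sampling step, and the token/separator names are purely illustrative, not the authors' implementation:

```python
def transitive_example(a_tokens, c_tokens, generate, sep="<sep>"):
    """Build one transitive training example from an aligned (a, c) pair:
    generate the missing linking modality b' from a, then supervise the
    model on predicting the *real* c after the synthetic b'."""
    b_prime = generate(a_tokens)              # a -> b' (model sampling)
    inp = a_tokens + [sep] + b_prime + [sep]  # context: a followed by b'
    target = c_tokens                         # predict c given (a, b')
    return inp, target

# toy "generator": pretend the model maps modality-A tokens to pseudo-B tokens
fake_generate = lambda toks: [f"b({t})" for t in toks]
inp, target = transitive_example(["a1", "a2"], ["c1"], fake_generate)
```

The key point of the sketch is that the supervision target is a real, held-out data point c that never appeared in the model input, rather than a reconstruction of the input itself as in a → b' → a cycle consistency.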
**Large-scale Experiments.** We appreciate the reviewers' insights and understand the potential value of large-scale experiments. However, due to resource limitations, we were faced with constraints on the amount of computation. Nevertheless, not counting hyperparameter search, we pre-trained 17 models and fine-tuned them a total of 169 times to obtain a comprehensive experimental insight into the operation of LoReTTa that strongly complements our theoretical analysis. Within these constraints, we were able to derive meaningful and impactful results that contribute significantly to the ongoing scientific conversation in our rapidly changing field. We are fully committed to our research and recognize the importance of scalability and generalizability. Reviewer a22R recommended the new large-scale and multimodal MUGEN dataset with 375K naturally aligned video (V), audio (A), and text (T) samples from a real reinforcement learning problem. We immediately downloaded all the files and started to run an additional set of experiments. In particular, we used video as the linking modality and considered the disjoint datasets (A, V) and (V, T). We pre-trained the same transformer as the state-of-the-art upper baseline MMGPT. As can be seen below, LoReTTa clearly manages to rival the upper bound. 
| Method | Train | Test | BLEU4 | METEOR | ROUGE |
|----------|----------|----------|----------|----------|----------|
| GPT | A $\rightarrow$ V, V $\rightarrow$ T | A $\rightarrow$ T | 1.7 | 18.5 | 30.7 |
| LoReTTa | A $\leftrightarrow$ V $\leftrightarrow$ T | A $\rightarrow$ T | 2.8 | 20.8 | 34.7 |
| | | | | | |
| MMGPT | A $\rightarrow$ T | A $\rightarrow$ T | 6.7 | 19.4 | 27.1 |
| MMGPT | V $\rightarrow$ T | V $\rightarrow$ T | 7.8 | 21.3 | 29.1 |
Summary: The paper proposes a new self-supervised learning method to pretrain from multiple modalities. More specifically, the paper argues that it is hard to obtain datasets with 3 aligned modalities (e.g., text, images, and speech) and tries to address this issue. This can be especially cumbersome for medical datasets where some modalities can be missing for certain patients. As a result, this work proposes a self-supervised framework based on (causal) masked modeling, commutativity, and transitivity to go from one modality to another. These can be regarded as additional consistencies that are imposed during the pretraining stage. While the pretraining has only seen disjoint modality pairs, the model is able to handle any modality combination at test time. The evaluation considers the perplexity metric and also the classification score for unseen modality pairs during the pretraining stage. Strengths: - Focus: The premise and focus of the paper are valuable as they address a real-world problem in order to pretrain on multiple modalities. The paper starts with the argument that it is hard to collect perfectly aligned datasets when aiming to pretrain on multiple modalities. This alignment step is indeed a very difficult procedure that can cost a significant amount of time and money. Furthermore, in some cases, aligning the data might not even be possible due to missing information/modalities, such as in medical datasets. For instance, the paper considers mRNA, miRNA, and RPPA samples from the TCGA dataset (a medical dataset) when a certain modality is missing. This could be useful in practice. - Method: To address the aforementioned issue, the approach relies on causal masked modeling and cycle consistency, which are powerful tools that I have not seen being combined. - Style: The paper is well-written and easy to follow. The overview figure (see Figure 2) is especially valuable to quickly grasp the aim and mechanics of the presented approach. 
- Experiments: The experiments show that the approach can improve over strong baselines, such as BERT and GPT (see Table 3), and can leverage multiple modalities when some might be missing. Advantageously, the presented method can leverage multiple modalities, such as audio, images, and speech, as these simply need to be tokenized. Weaknesses: - The scale and scope of the experiments are limited: As the paper considers pretraining, the paper only considers two datasets with a fairly small number of samples. For instance, the SVL-MNIST and TCGA-OMICS datasets are small-scale datasets with respectively 12k and 3.5k sample pairs. Furthermore, only limited finetuning experiments are considered. What about other tasks, such as retrieval or (visual) question answering? Competing works such as CLIP and BERT have been shown to work well for these tasks as well. Overall, it is difficult to judge the generality of the approach as the scope of the experiments is limited. - The originality of the approach is fairly limited as the approach can be seen as a combination of ideas in prior works. For example, the causal masking strategy is similar to [1, 28, 11], and the transitive loss is similar to cycle consistency [90]. While the paper mentions modifications to these works (L141 and L153), it is not clear how significant these contributions are to the final performance. More information would be useful. - It can be observed from Table 3 that CLIP already obtains relatively good results compared to the proposed method. More information on why this is the case would be useful. While CLIP requires the pretraining of 3 different encoders, as the paper mentions, I don’t think this is a strong argument, especially as the paper under review also relies on expensive ways to compress high-dimensional data with a VQVAE (see L116). More information on the computational cost when comparing the approach to CLIP would be valuable during the rebuttal. 
- There is currently no experiment that includes all modalities during pretraining. This could be seen as an upper bound but is currently missing, as mentioned in the limitations. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please address the questions and issues raised in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The paper briefly discusses the limitations, environmental impact and broader impact at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer u4o4,** Thank you for pointing out that our proposed method improves on strong baselines such as BERT and GPT by combining powerful ideas and applying them to real-world problems. In the comments below, we address each of the remaining concerns that were raised. **Scale and scope of the experiments.** We chose SVL-MNIST to perform extensive experiments with a limited computational budget. In total, we trained 12 models and fine-tuned them 162 times on this dataset alone. This does not even include the search for hyperparameters. We complemented this with experiments on a real medical dataset for the important task of survival prediction. In both cases, LoReTTa works as theorized in Section 4. To assure the Reviewer that our method also extends to larger datasets, we performed an additional experiment (cross-modal translation) on the MUGEN [Hayes et al., 2022] dataset with 375,000 aligned video, audio, and text samples. We refer to the global rebuttal for the results, which show that LoReTTa is generalizable and scales very well to larger datasets and other tasks. **Originality of the approach.** We noted in the paper that causal modeling and masked modeling are two of the current state-of-the-art methods for self-supervised learning. Both methods had been combined into a strategy called causal masked modeling [Aghajanyan et al., 2022; Bavarian et al., 2022; Fried et al., 2023], which we used as a starting point and never claimed as our invention. However, we greatly improved causal masked modeling by extending it with commutative modeling (C2M3) and transitive modeling (LoReTTa), which had never been done before. By comparing LoReTTa with C2M3, GPT [Radford et al., 2018], and BERT [Devlin et al., 2018], we indirectly included an ablation study. In fact, transitive modeling is a unique contribution. On the surface, it shares similarities with cycle consistency [Zhu et al., 2017]. 
But despite this high-level conceptual relationship, the two approaches work quite differently. For example, CycleGAN is only able to transition between two domains (image styles) of the same modality (images), while LoReTTa works on three or more modalities, e.g. image, video, audio, and text, with complex multimodal relations between them. This also helps to avoid representation collapse, which is common in adversarial training. We have given a more detailed explanation in the global rebuttal. **About CLIP results.** LoReTTa consistently outperforms CLIP [Radford et al., 2021] in 5 out of 7 modality combinations (see paper) by a large margin in c-index. Only in one case (R) does CLIP outperform LoReTTa, and in another (I, R) they are equal. In both cases, the reverse-phase protein array (R) is included as a modality. For GPT, BERT, C2M3, and LoReTTa, the combination (I, R) leads to a worse prediction than I or R alone. Only for CLIP does the c-index increase. We assume that I and R lead to negative transfer, but since CLIP learns to create similar embeddings for both, the embedding vectors may be devoid of the features causing the problem. This could be considered a kind of smoothing or regularization. However, it could also cause the model to ignore important high-frequency features inherent in each modality, leading to worse performance, as seen for the other modality combinations. Another issue with self-supervised contrastive learning is that two samples with the same label might be pushed apart in the embedding space. For example, a batch might contain an image of a dog with a corresponding caption and another image of a similar dog with another similar caption. The CLIP loss only learns to pull the feature vectors of each pair closer together but pushes the two samples away from each other. Thus, contrastive learning requires very large batch sizes and huge amounts of training data [Chen et al., 2020]. In addition, each modality requires its own encoder. 
LoReTTa, however, already works on smaller datasets which is a huge advantage for data-deprived domains like medicine. In addition, it can reuse existing and pre-trained tokenizers. **Upper bound.** We did not include experiments for a model trained with all three modalities. This was done to emphasize the real-world problem that often there is simply no dataset with all three modalities at once. The upper bound is the current state-of-the-art for existing datasets, which LoReTTa manages to outperform. We have focused our resources on providing extensive experiments for this setting. In the global rebuttal, we have also included an even larger experiment on a new dataset [Hayes et al., 2022] to show the usefulness of LoReTTa on even more problems. **New results.** We have added the additional large-scale experiment in the global rebuttal. Both GPT and LoReTTa were trained on disjoint (video, audio) and (video, text) pairs to solve the problem of cross-modal translation. We then evaluated the models on the unseen task of audio captioning. As can be seen above, LoReTTa is more than capable of overcoming the modality gap and rivals the upper-bound models (MMGPT). 
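As a rough illustration of the tokenization route the rebuttal mentions for the high-dimensional omics data (PCA followed by discretization into integer tokens), one possible sketch is below; the component count, bin count, and offset scheme are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def pca_bin_tokenize(X, n_components=4, n_bins=16):
    """Project continuous features onto a few PCA dimensions, then
    discretize each dimension into integer tokens via equal-width bins.
    Illustrative sketch only."""
    Xc = X - X.mean(axis=0)
    # PCA via SVD; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    bins = np.clip(((Z - lo) / (hi - lo + 1e-9) * n_bins).astype(int),
                   0, n_bins - 1)
    # offset each dimension so tokens from different dims don't collide
    return bins + n_bins * np.arange(n_components)

rng = np.random.default_rng(0)
tokens = pca_bin_tokenize(rng.normal(size=(32, 10)))
```

Each sample thus becomes a short sequence of discrete tokens that a sequence model can consume alongside tokens from other modalities.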
**References**
- Aghajanyan et al., CM3: A Causal Masked Multimodal Model of the Internet, 2022
- Bavarian et al., Efficient Training of Language Models to Fill in the Middle, 2022
- Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, 2020
- Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018
- Fried et al., InCoder: A Generative Model for Code Infilling and Synthesis, 2023
- Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
- Radford et al., Improving Language Understanding by Generative Pre-Training, 2018
- Radford et al., Learning Transferable Visual Models From Natural Language Supervision, 2021
- Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017

--- Rebuttal Comment 1.1: Title: Questions After Rebuttal Comment: I thank the authors for providing the rebuttal. I still have remaining questions after reading the rebuttal and other reviews: 1. Why are no upper-bound experiments included? While the availability of three modalities might be less likely in practice, it’s still important to include this information in the paper. This upper bound is currently missing. 2. When and how does the paper rely on expensive ways to compress high-dimensional data, such as with a VQVAE? (see L116). It’s valuable to include more information on the computational cost when comparing the approach to CLIP, especially since L321 emphasizes that CLIP requires the pretraining of 3 encoders, one for each modality. 3. Is it possible to evaluate the current approach for tasks like retrieval and VQA problems, as shown in CLIP? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for helping us make our paper as clear, concise, and comprehensive as possible. Below, we would like to answer the three open questions. 
**(1)** We stated in the rebuttal that we focused all of our time and resources on providing the large-scale experiments. In particular, we read the dataset authors' paper, learned the intricacies of processing the dataset, downloaded the dataset, trained the models, and evaluated them on the test dataset. We agree with all the reviewers that this experiment is important, and we provided it. Upon completion, we immediately started the "upper bound experiments". Both training and testing have been completed and are shown below. We trained this model using C2M3 on all fully aligned modalities. We refer to it as "C2M3 (3-modal)" and not "Upper Bound Model" since training with all modalities does not guarantee the highest possible result due to negative transfer and modality competition, which can be mitigated by more advanced techniques such as transitive modeling, as shown by LoReTTa. We have also added transitive modeling to BERT and GPT, called T-BERT and T-GPT, to better demonstrate the significance of our contribution. **(2)** We briefly mentioned the tokenization strategy in the Methods and Experiments sections. In the final paper, we will be more specific to avoid confusion. For the SVL-MNIST experiments, we considered each byte stream as a token and binned the values to integers. For the high-dimensional TCGA-OMICS dataset, we first reduced the input dimension via PCA and also discretized the values. For the large-scale MUGEN dataset, we used the pre-trained VQ-VAE encoders for images and audio provided along with the dataset to obtain the discrete representations. **(3)** We have analyzed our proposed method on three very different datasets, analyzing different metrics such as perplexity, accuracy, c-index, BLEU4, METEOR, and ROUGE. Despite this large diversity of tasks, we have consistently shown that LoReTTa outperforms current methods and comes close to the theoretical upper bound. 
Although not exactly equivalent, in the MUGEN experiments we have applied LoReTTa to the task of cross-modal translation from audio to text, which is very similar to audio captioning. While this is not exactly visual or acoustic question answering, it serves as a good proxy since we are not aware of a large-scale visual-acoustic question answering dataset with three modalities. Evaluating our method in an exhaustive fashion on even more tasks would be out of the scope of this work.

| Method | mRNA | miRNA | RPPA | mRNA-miRNA | mRNA-RPPA | miRNA-RPPA | mRNA-miRNA-RPPA |
|----------|----------|----------|----------|----------|----------|----------|----------|
| BERT | 0.592 | 0.621 | 0.594 | 0.595 | 0.613 | 0.594 | 0.610 |
| T-BERT | 0.573 | 0.618 | $\textbf{0.626}$ | 0.580 | 0.591 | $\textbf{0.623}$ | 0.608 |
| GPT | 0.575 | 0.590 | 0.611 | 0.585 | 0.576 | 0.606 | 0.588 |
| T-GPT | 0.616 | 0.607 | 0.599 | 0.620 | 0.619 | 0.611 | 0.622 |
| CLIP | 0.561 | 0.603 | 0.587 | 0.610 | 0.600 | $\textbf{0.623}$ | 0.612 |
| C2M3 | 0.621 | 0.599 | 0.599 | 0.624 | 0.620 | 0.571 | 0.624 |
| LoReTTa | $\textbf{0.652}$ | $\textbf{0.623}$ | 0.563 | $\textbf{0.660}$ | $\textbf{0.652}$ | $\textbf{0.623}$ | $\textbf{0.657}$ |
| C2M3 (3-modal) | 0.665 | 0.635 | 0.645 | 0.643 | 0.671 | 0.650 | 0.620 |

| Method | IMG | TXT | WAV | IMG-TXT | IMG-WAV | TXT-WAV | IMG-TXT-WAV |
|----------|----------|----------|----------|----------|----------|----------|----------|
| LoReTTa$_I$ | 82.7 | 62.8 | 82.5 | 88.5 | 89.7 | $\textbf{84.0}$ | $\textbf{90.7}$ |
| LoReTTa$_T$ | 81.2 | 63.3 | 81.0 | 89.0 | $\textbf{80.9}$ | 85.5 | $\textbf{87.8}$ |
| LoReTTa$_W$ | 80.1 | 62.1 | 84.2 | $\textbf{83.0}$ | 90.4 | 89.0 | $\textbf{91.6}$ |
| C2M3 (3-modal) | 80.8 | 56.6 | 82.4 | 84.9 | 86.3 | 84.9 | 88.3 |
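The PCA-plus-discretization pipeline mentioned in point (2) for the TCGA-OMICS data can be sketched as follows. This is a minimal illustration, not the authors' code; the component count and bin count are illustrative assumptions:

```python
import numpy as np

def tokenize_continuous(X, n_components=64, n_bins=256):
    """Reduce high-dimensional continuous features with PCA, then
    discretize each component into integer tokens (minimal sketch)."""
    # Center the data and compute principal components via SVD
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T          # (n_samples, n_components)
    # Min-max scale each component to [0, 1], then bin to integers
    Zmin, Zmax = Z.min(axis=0), Z.max(axis=0)
    Znorm = (Z - Zmin) / np.maximum(Zmax - Zmin, 1e-12)
    tokens = np.minimum((Znorm * n_bins).astype(int), n_bins - 1)
    return tokens
```

Each sample then becomes a short sequence of integer tokens that a standard autoregressive transformer can consume alongside the other modalities.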
Summary: This paper proposes a self-supervised learning framework, LoReTTa (Linking mOdalities with a tRansitive and commutativE pre-Training sTrAtegy), for multimodal learning with non-aligned multimodal datasets. To be specific, LoReTTa (i) first trains an autoregressive model for generating A->B and B->C, then (ii) generates pseudo targets for A->C using the model, and (iii) performs autoregressive modeling again. This paper demonstrates that this transitive modeling can improve generation performance between the unseen pair of modalities (i.e., A->C).

Strengths:
- This paper is well-motivated.
- The proposed framework is widely applicable.
- The proposed framework outperforms baselines that do not consider transitive modeling.

Weaknesses:
- Limited methodological novelty
  - The proposed framework is mainly based on the idea of CycleGAN, and it simply uses causal language/masked modeling with Transformers. I agree that the idea is more general, but I still feel some lack of novelty. I'm also concerned about the scalability of the framework.
  - In addition, due to the recent advances in foundation models (e.g., Llama 2 [1], DINOv2 [2]), there are many research attempts to utilize them for multimodal learning. For example, Chameleon [3] uses expert models for multiple modalities (see [4] for a survey on multimodal LLMs). I think the authors should discuss the advantage of the proposed framework over approaches that utilize one foundation model as a general knowledge hub.
- Weak theoretical analysis (Section 4)
  - The analysis is too vague. No theoretical statement is stated concretely. The authors should also discuss the generalization error bound because, for example, the sample (C0...C3) in Figure 2c is not observed.
  - The notation is somewhat confusing. Why do the authors state definition (3)? Is $f(x_1|x_2)=x_1+e$ mathematically correct notation? Much of the notation is written loosely, which may confuse readers.
  - Section 4 should be reorganized.
- Weak experiments
  - For this topic, large-scale experiments and diverse applications are important. However, the provided experiments are very small-scale, e.g., MNIST images. I suggest using more challenging benchmarks, e.g., [5-6].

[1] Llama 2: Open Foundation and Fine-Tuned Chat Models \
[2] DINOv2: Learning Robust Visual Features without Supervision \
[3] Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models \
[4] A Survey on Multimodal Large Language Models \
[5] Multi-modal Dense Video Captioning, CVPR Workshop 2020 \
[6] MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022

Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
1. About causal masked modeling (L140-L143): if we randomly mask some tokens and move them to the end, the resulting prediction task is somewhat different from the original masked language modeling (as in BERT), since it loses positional information about the masked tokens. Is this approach sound? I also think it may interfere with the causal language modeling.
2. In Equation (3), what is $L_X$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper has addressed the limitations and the potential impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Dear Reviewer a22R,** We appreciate your valuable comments on our paper. Your recognition of the motivation and broad applicability of our proposed framework is very encouraging. Below, we would like to address the open questions. **Limited methodological novelty.** In the global rebuttal, we pointed out that LoReTTa is much more general and applicable than CycleGAN [Zhu et al., 2017]. We used CycleGAN as an analogy because it has a similar flavor to our method. But beyond the high-level conceptual similarity, our transitive modeling approach is fundamentally different. For example, it can be used to transition across multiple modalities – not just two domains from the same modality. We use a single transformer as the central point for all modalities. By combining causal, masked, commutative, and transitive modeling, we have shown theoretically and experimentally that our model can meaningfully integrate information from different modalities. This is in contrast to recent approaches that use LLMs as a communication hub to explain and reason about one modality. For example, Chameleon [Lu et al., 2023] uses different modalities in a plug-and-play approach to describe images, process spreadsheets, or search the web. However, this approach does not effectively integrate different modalities to potentially gain new insights. In our TCGA-OMICS experiments, we have shown that combining genomic, transcriptomic, or proteomic data provides valuable prognostic results. Adding each modality individually to a foundation model is not enough, as they do not interact with each other. Thus, LoReTTa serves as a good starting point for fine-tuning these general models. One could also think about using LLMs that communicate with LoReTTa via API calls to solve more challenging problems that a user faces with their multimodal input. 
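The transitive training loop described above (learn A→B and B→C on disjoint pairs, then generate pseudo targets to link A→C) can be sketched as follows. `train_step`, `generate`, and the pair iterators are illustrative interfaces, not the authors' actual API:

```python
def transitive_pretrain(model, pairs_ab, pairs_bc, n_steps):
    """Sketch of transitive modeling: after learning A->B and B->C,
    the model imputes a pseudo sample of the unseen modality to form
    (A, C) training pairs (hypothetical model interface)."""
    for _ in range(n_steps):
        a, b = next(pairs_ab)
        b2, c = next(pairs_bc)
        # Standard (commutative) modeling on the observed pairs
        model.train_step(src=a, tgt=b)
        model.train_step(src=b2, tgt=c)
        # Transitive step: impute the missing modality via B,
        # then directly link A and C through the pseudo target
        c_hat = model.generate(src=b, tgt_modality="C")
        model.train_step(src=a, tgt=c_hat)
```

Because the pseudo target is generated from the shared modality B, the model never needs a real (A, C) pair, which is exactly the setting evaluated in the A→T experiments below.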
**Weak theoretical analysis.** Our theoretical analysis is supported by references and highlights the probabilistic principles behind our approach. It also discusses the error propagation caused by transitive modeling and how we mitigate this by modeling the omitted modality. We use standard notation from probability theory, analysis, and linear algebra to emphasize the intricate details that arise. Any confusion caused by this will be addressed in an updated version of the manuscript. Regarding the unobserved samples $(C0, ..., C3)$, they are indeed missing, but the goal of LoReTTa is precisely to impute this missing modality and align it with the existing data points. It corresponds to the predicted sample $\hat{x}_3 = x_3 + e$ in Equation 6, for which we have provided the error bounds.

**Weak experiments.** We agree that large-scale experiments and diverse applications are very important. That is why we have provided results on a real medical dataset for the crucial task of survival prediction. Our results show that the proposed approach works not only on synthetic data but also on complex biomedical samples. We are very grateful to the Reviewer for suggesting the MUGEN [Hayes et al., 2022] dataset. It is indeed a challenging and large-scale benchmark. After hearing about it, we immediately downloaded the dataset and started training. We summarized the results in the global rebuttal, which shows that LoReTTa works on even more problems and is easily scalable. These results will be included in the final version of the manuscript.

**New results.** The table below shows the results of our new experiments. Both GPT and LoReTTa were trained on disjoint (video, audio) and (video, text) pairs to solve the problem of cross-modal translation. We then evaluated the models on the unseen task of audio captioning. As can be seen, LoReTTa is more than capable of overcoming the modality gap and rivals the upper-bound models (MMGPT). More details can be found in the global rebuttal.
| Method | Train | Test | BLEU4 | METEOR | ROUGE |
|----------|----------|----------|----------|----------|----------|
| GPT | A $\rightarrow$ V, V $\rightarrow$ T | A $\rightarrow$ T | 1.7 | 18.5 | 30.7 |
| LoReTTa | A $\leftrightarrow$ V $\leftrightarrow$ T | A $\rightarrow$ T | 2.8 | 20.8 | 34.7 |
| MMGPT | A $\rightarrow$ T | A $\rightarrow$ T | 6.7 | 19.4 | 27.1 |
| MMGPT | V $\rightarrow$ T | V $\rightarrow$ T | 7.8 | 21.3 | 29.1 |

**References**
Hayes et al., MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration, 2022
Lu et al., Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models, 2023
Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017
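As background on the causal masked modeling question raised in the review (Questions for Authors, point 1): in CM3-style causal masking, a sentinel token left at the original position is what preserves the positional information of the moved span. A minimal single-span sketch, with hypothetical function and sentinel names:

```python
def causal_mask(tokens, span, sentinel="<mask:0>"):
    """CM3-style causal masking, single-span sketch: cut the span out,
    leave a sentinel at its original position, and append the sentinel
    followed by the removed tokens at the end. Because the sentinel
    stays in place, a left-to-right model still sees where the masked
    span was, so positional information is not lost."""
    start, end = span
    prefix, masked, suffix = tokens[:start], tokens[start:end], tokens[end:]
    return prefix + [sentinel] + suffix + [sentinel] + masked
```

For example, `causal_mask(["a", "b", "c", "d", "e"], (1, 3))` yields `["a", "<mask:0>", "d", "e", "<mask:0>", "b", "c"]`: the model is trained autoregressively on this sequence, regenerating the masked content after the second sentinel.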
Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses
Accept (poster)
Summary: The paper presents multiple extensions of Haim et al. [2022] in relation to reconstruction of training data in neural networks. In short, the extensions are: extension from binary to multiclass, extension to regression losses, investigation of the effect of weight decay, and analysis of the relationship between the number of samples and the number of parameters. All extensions are supported by experiments, and for the first two extensions the authors provide theoretical foundation.

Strengths:
Originality: The work seems original and novel (even if it is only incremental).
Quality: The paper is of high quality and provides theoretical foundation where it is needed and empirical evidence for everything else.
Clarity: The paper is clearly written and easy to follow.
Significance: It is not unreasonable to say that the end goal for this line of work is to be able to study neural networks that are used in more real-life scenarios today. In that regard this work provides an important stepping stone in that direction, especially with its extension to multiclass.

Weaknesses:
Section 5 on general loss functions seems to be the weaker contribution, primarily for two reasons:
1. The authors are essentially trying out multiple regression losses, which is good, but selling this as "general loss functions" seems to be a stretch. The authors could be clearer about their contributions by simply calling these "general regression losses".
2. The experiment is created by artificially taking a task that is binary in nature (labels {-1,1}) and trying to model it using a regression loss. It would be much more relevant if the authors had considered another dataset and/or task where the target is continuous.

In general I find that the paper has a lot of experiments to show, and due to the limited space this does not leave room for an actual discussion of their findings.
I find this a wrong prioritization, and the authors should discuss and give some kind of reasoning for at least some of their findings (see questions).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The empirical results are impressive and speak for themselves, but I am really missing a bit of discussion of the results:
* Any reasoning why the straightforward extension failed to reconstruct samples (L139-141)?
* The reason why CNNs are harder to reconstruct? Essentially they are just sparse linear layers (compared to fully connected).
* Any reasoning why weight decay can have such an influence on the number of reconstructed examples.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: It is clear from the paper that work on reconstruction of training data from memorization in neural networks is still in its early stages. That said, it would be great if the authors could be more explicit in the paper about the obvious limitations of the method:
* Only possible for shallow networks
* Only possible for specific networks (only ReLU, it seems)
* Only possible for networks trained on few samples (the extension to 5000 samples is nice, but still nowhere near practical settings)
It is mentioned in future work, but giving a bit of reasoning about what challenges lie in scaling the method seems to be in order.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review and constructive and positive feedback.

Regarding general vs. regression loss: we agree with the reviewer's comment. Although the method described in Section 5 allows us to reconstruct data from models trained with any loss function, we tested it only on several regression losses. We will change the phrasing throughout the paper to "general regression losses" instead of "general loss functions". Regarding datasets with continuous labels, we agree that these are interesting cases to study; however, they are beyond the scope of this paper and we leave them to future research.

Regarding the prioritization, we agree it would be helpful to provide more discussion on the findings and theoretical insights (e.g., as in Sections 4.1 and 5.1); however, we had to trade off against space limitations to also present the results and figures in the main part of the paper. In the camera-ready version we will add more discussion of the findings themselves and provide more details on the theoretical reasoning behind our results. Please also see the response to the last question below for further justification regarding weight decay.

Regarding the questions:

1) This is a good point and we will further elaborate on it in the camera-ready version. The straightforward extension has $C \cdot n$ constraints, whereas our proposed equivalent formulation has only $n$. For reconstruction, each such constraint requires its own optimization variable $\lambda$. So the latter formulation has far fewer optimization variables ($n < C \cdot n$), which is very significant for a large number of classes $C$. We believe that reducing the number of constraints is the main reason for the successful reconstruction in the multi-class setting.

2) Although CNNs can be written as a sparse linear layer, they are inherently different. First, there is weight sharing, and second, the number of parameters is significantly smaller than in linear layers.
For these reasons, the previous approach from [Haim et al. 2022] did not work as is. But we could reconstruct from CNNs using weight decay and large enough layers.

3) This is a very good question. We do have a theoretical justification for why weight decay helps reconstructability, at least for simplified networks. We will add this explanation in the camera-ready version. In a nutshell, Theorem 3.1 from our paper is used to devise a reconstruction loss based on the fact that networks converge in direction to a KKT point of the max-margin problem. However, this directional convergence occurs asymptotically as the time $t \to \infty$, and the rate of convergence in practice might be extremely slow, logarithmic in $t$. Hence, even when training for, e.g., $10^6$ iterations, gradient descent might reach a solution which is still too far from the KKT point, and therefore reconstruction fails. In other words, even when training until the gradient of the empirical loss is extremely small, the direction of the network’s parameters might be far from the direction of a KKT point. In [1], the authors proved that in diagonal linear networks (i.e., a certain simplified architecture of deep linear networks) the initialization scale indeed controls the rate of convergence to the KKT point; namely, when the initialization is small, gradient flow converges much faster to a KKT point. This theoretical result explains the behavior that we observe in the experiments, for a simple setting. For this reason, when training without weight decay, small initialization seems to be required to allow reconstruction. However, when training with weight decay, our theoretical analysis in Section 5.1 explains why small initialization is no longer required. Here, the reconstruction does not rely on converging to a KKT point of the max-margin problem, but relies on Eq. (14), which holds (approximately) whenever we reach a sufficiently small gradient of the training objective.
Thus, when training with weight decay, reaching a small gradient means that Eq. (14) holds, which allows for reconstruction; in contrast, when training without weight decay, reaching a small gradient doesn't imply converging close to a KKT point.

Regarding limitations, we will add a dedicated limitations section in the camera-ready version with an elaborated discussion of the points highlighted by the reviewer.

---

Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for their long and elaborate answer. In my initial review I was mostly unhappy with the missing discussion of results, but based on the authors' feedback it is clear that they have indeed thought deeply about their method. I am hoping most of this discussion will go into the camera-ready version of the paper, even if some of the results should be moved to the appendix. I am still positive regarding the paper, and even accounting for some of the other reviews, I will keep my score.
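The weight-decay argument in this thread can be summarized by the first-order stationarity condition of the regularized objective. This is a sketch of the likely form behind the paper's Eq. (14); the exact constants and conventions in the paper may differ:

```latex
\nabla_\theta \left[ \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i;\theta),\, y_i\big) + \lambda \lVert \theta \rVert^2 \right] \approx 0
\quad\Longrightarrow\quad
\theta \approx -\frac{1}{2\lambda n} \sum_{i=1}^{n} \ell'\big(f(x_i;\theta),\, y_i\big)\, \nabla_\theta f(x_i;\theta).
```

Unlike the KKT characterization, this identity holds approximately as soon as the training gradient is small, without waiting for slow directional convergence, which matches the rebuttal's explanation of why weight decay removes the need for small initialization.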
Summary: Reconstructing training samples from a trained model may cause privacy issues. This paper demonstrates training-sample reconstruction for multiple classes and for both fully-connected neural networks and convolutional neural networks. In addition, the authors investigate different factors that contribute to the reconstruction.

Strengths:
1) Extended the previous training-sample reconstruction scheme to multi-class settings and to both fully-connected NNs and CNNs, making it more practical.
2) It includes both theoretical analysis and empirical results to verify its effectiveness.

Weaknesses:
1) When the authors reconstruct the data, the generalization of the models is very poor. However, in practice, a trained model may also do well on unseen test data. So, the question is: can we still reconstruct the data successfully even when the trained model performs well on unseen test data?
2) What if a model is trained without weight decay, or with other regularizers? Will this finding still hold?
3) The experimental datasets are relatively simple and small-scale. Would it be possible to experiment on a larger-scale and more complex dataset such as ImageNet?

---

After reading the rebuttals, the reviews from other reviewers, and the discussions, some of my previous concerns about this paper remain unaddressed; therefore, I would like to stick to my original evaluation.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Did not discuss the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and feedback. Regarding the questions raised by the reviewer:

1) Our work considers relatively small-scale networks trained on small datasets (up to $5$,$000$ samples). Although the generalization of our models is not comparable to the state of the art, it is significantly non-trivial (much better than random). We also ran an extensive hyper-parameter sweep, so we likely attain the best generalization score that can be achieved with these architectures and datasets. We do believe that these methods can be further improved, allowing for larger architectures and bigger datasets, and as such better generalization.

2) This is an interesting question. We compare our results to baseline models that are trained without regularization at all; for example, see the baselines in Fig. 6 (as dashed lines) and the rightmost column in Fig. 4. The reason we specifically used weight decay as a regularizer was the theoretical understanding we have of it (presented in Section 5). Studying memorization in models trained with other regularizers is an interesting direction, and we currently don't have a theoretical understanding of how it would affect reconstructability. Testing this question empirically may prove useful, but is beyond the scope of this paper.

3) We agree that the datasets we trained on are relatively small-scale. We emphasize that this is also a limitation of previous works on data reconstruction. Note, however, that in our work we were able to reconstruct from a model trained on $5$,$000$ samples, which is $5$ times more than the largest dataset in [Haim et al. 2022]. Reconstructing from ImageNet is a challenging and non-trivial task (even when considering only a small part of it). We believe it is possible, but it may require further improvement of the current reconstruction methods and theoretical understanding, and we leave it for future research.
Regarding limitations, similar to previous works on data reconstruction, our results are limited to relatively small datasets and architectures. We will emphasize this in the camera-ready version by adding a limitations section.

---

Rebuttal Comment 1.1:
Comment: I thank the authors for taking the time to answer my questions. After reading the rebuttals, the reviews from other reviewers, and the discussions, some of my previous concerns about this paper remain unaddressed; therefore, I would like to stick to my original evaluation.
Summary: This paper follows the work of Haim et al. (2022), who introduced a reconstruction scheme for neural networks with logistic or exponential loss on binary classification tasks. The authors extend this work by introducing the reconstruction scheme in a multi-class setting. They also demonstrate how it can be extended to convolutional neural networks and more general loss functions.

Strengths:
- This paper is well written and the problem well presented.
- Memorization is indeed an important research area, and this paper offers significant findings that give a better understanding of which conditions influence memorization of training data.
- I really appreciated the balance between quantitative and qualitative experiments, as well as the ablation study with the number of training classes and examples. It gives really good insights.
- The appendix contains good explanations and interesting additional experiments.

Overall, I think this is a good paper that gives important insights to the community.

Weaknesses:
- One small issue I have with this paper is that there isn't a single strong storyline. It is a mix of different improvements over Haim et al. (2022) with general loss functions, multi-class, and CNNs, so sections 4, 5, and 6 could be entirely independent. For example, I was surprised that, after introducing the multi-class setting, the CNN was trained on a binary classification task. In terms of presentation I would probably have started with the general loss, then presented the multi-class setting, then multi-class with CNNs (and weight decay), just to make the story smoother.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there any reason why you didn't use a multi-class setting with the CNN?
Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I didn't find a limitations section in this work or in the appendix. One main limitation is that this work is limited to extremely small architectures, and it's not clear how much of it will transfer to practical architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and constructive feedback. Regarding the paper's storyline: this is a good point, and we had a dilemma about how to organize the different sections. We will consider implementing the reviewer's suggestion for reordering in the camera-ready version. Regarding multi-class CNNs: the reason is that we wanted to decouple the effects of changing the architecture from the effects of changing from binary to multiclass classification. We can add a multiclass CNN experiment in the camera-ready version. Regarding limitations, we will add a limitations section in the camera-ready version with an elaborated discussion on the limitations of our current approach, including references to small architectures and datasets.

---

Rebuttal Comment 1.1:
Comment: Thank you for your answer. I looked at the other reviews and answers, and I will keep my score. The main weakness raised by another review is about "novelty". However, novelty is always a very subjective notion, and researchers often have different thresholds for what they consider novel. Since the authors are transparent about the fact that their work extends [Haim et al. 2022], and since they provide additional insights (with a rigorous experimental setup) which were not demonstrated before, I consider that this paper should be accepted.
Summary: This work extends the work of [Haim et al. 2022] to multi-class classification and more general loss functions. Through mainly empirical evaluations, the authors demonstrate that data reconstruction is also possible in these settings and reveal interesting relationships between the ability to reconstruct and the number of classes, the number of training samples, the size of the network, and the role of weight decay in the process. The findings are mostly empirical. Though interesting, the novelty and technical contribution of this work are relatively low.

Strengths: Reconstructing samples from trained classifiers is an important problem with direct connections to generative modeling and model privacy. The findings of this paper, though derivative of [Haim et al. 2022], are interesting and non-trivial. The findings regarding data reconstruction ability vs. number of classes and weight decay are novel to me. They do bring new insights into understanding neural network classifiers.

Weaknesses:
Novelty: This is my main concern about this submission. The framework is a direct extension of [Haim et al. 2022]. The objectives have been slightly modified for the multi-class cross-entropy case and for the general loss with weight decay case, but they do not bring significant new insights over the original work.

Experiments: Though the experiments conducted in this work are interesting, the results do not include variation analysis or error bars from multiple runs, nor statistical tests for whether two cases are significantly different. The experiments could be further improved by 1) evaluating large-scale datasets, e.g., ImageNet, rather than the CIFAR10 dataset, which was already investigated in [Haim et al. 2022], and 2) extending from simple classifier architectures (MLPs, simple CNNs) to more complicated architectures such as ResNets or Vision Transformers.

Analysis: The empirical findings lack depth. For instance, the observations about weight decay vs.
data reconstruction seem new and interesting. Are any of them provable, even in oversimplified cases? There are theoretical works on classifiers trained with square loss and weight decay [1]. Adding more theoretical justification would make this submission stronger. I feel that this is a missed opportunity, and I will consider raising my score if more theoretical insights can be provided to support the empirical findings.

[1] Hu et al. Understanding square loss in training overparametrized neural network classifiers

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the empirical evaluations, I am wondering how good the reconstructed samples are in terms of FID. There is a line of work trying to generate/reconstruct samples from a classifier, mostly by gradient ascent [2,3]. How does your method compare with those as a generative model? Are there any deeper connections between the two very different methodologies?

[2] Wang and Torr, Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs
[3] Zhu et al. Towards Understanding the Generative Capability of Adversarially Robust Classifiers

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have addressed the potential negative societal impact. The limitations of the work could have been more thoroughly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and the constructive comments. 1) Regarding novelty, we would like to emphasize that although our work is a direct extension of [Haim et al. 2022], we make some significant and novel contributions and overcome some of the limitations of the previous work. To name a few: - We generalized the previous work to a multi-class setting (as opposed to binary classification). This required a non-trivial extension of the KKT solution to the max-margin problem, which was not shown before in practical applications. - We used a different theoretical justification (beyond convergence to a KKT point of the max-margin problem) to devise a reconstruction loss which works on general regression losses, and demonstrated it on three different losses. - We investigated the connection between weight decay, the number of parameters, and reconstructability and found a connection which was not known before. This connection enabled us to overcome some of the limitations of the previous work, such as reconstruction from CNNs (as opposed to reconstruction from MLPs only), reconstruction from models with a standard initialization scheme (as opposed to models trained with a very small and non-standard initialization scale), and reconstruction from a model trained on up to 5,000 samples. As far as we know, this is the largest dataset used to train a classifier that has been shown to be successfully reconstructed. 2) Regarding the experiments and scaling to larger datasets and more complicated architectures: we believe that the end goal of understanding memorization in neural networks is to be able to apply it in more realistic settings. However, the current state of the research in this field is still in its infancy, and we believe that our work provides an important stepping stone in that direction. We agree that there is much more work to be done for future research in this field. 
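For readers less familiar with [Haim et al. 2022], the objective being generalized can be sketched roughly as follows. This is a simplification of the binary-case formulation in our own notation, not the paper's exact equations:

```latex
% Gradient flow on a homogeneous network \Phi(\theta;\cdot) with a
% classification loss converges in direction to a KKT point of the
% max-margin problem, so the trained parameters (approximately) satisfy
\theta = \sum_{i=1}^{n} \lambda_i \, y_i \, \nabla_\theta \Phi(\theta; x_i),
\qquad \lambda_i \ge 0 .
% Reconstruction then searches for candidate samples and coefficients
% that minimize the residual of this stationarity condition:
\min_{\{x_i\},\ \{\lambda_i \ge 0\}}
\Big\| \theta - \sum_{i=1}^{n} \lambda_i \, y_i \, \nabla_\theta \Phi(\theta; x_i) \Big\|_2^2 .
```

The multi-class extension discussed in the rebuttal replaces the scalar labels $y_i$ with per-class margin terms, which is where the non-trivial reformulation of the KKT conditions comes in.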
Regarding the comment about the error bars, we will add them whenever relevant in the camera-ready version. We note that every reconstruction experiment in our paper uses a large hyper-parameter sweep over many different runs. The full details appear in Appendix B. 3) Regarding theoretical justifications, we thank the reviewer for the helpful comment. Please note that of the three main sections in this work, two provide theoretical analysis and insights on the success of the demonstrated empirical reconstruction results (Sections 4 and 5). We also have a theoretical justification on why weight decay helps reconstructability for simplified networks, which we will add in the camera-ready version. In a nutshell, the (directional) convergence rate to a KKT point of the max-margin problem, as paraphrased in Thm 3.1, might be extremely slow (logarithmic in $t$ as $t \to \infty$). Hence, even after many gradient descent steps (e.g., $10^6$ iterations) the learning process may essentially stop because of small gradients, but the parameters may still be far from the direction of a KKT point. A relation between small initialization and such convergence is shown in Moroshko et al. [1]. They prove that for diagonal linear networks (i.e., a certain simplified architecture of deep linear networks) the initialization scale controls the rate of convergence to the KKT point. Namely, when the initialization is small, gradient flow converges much faster to a KKT point. Simply put: without weight decay, small gradients (of the empirical loss) do *not* imply convergence to a KKT point. However, small initialization implies faster convergence (to KKT), which implies better reconstructability. In our approach, when weight decay is used, small gradients (of the loss) *do* imply better convergence to the solution in Eq. 14, as opposed to convergence to a KKT point when training without weight decay. And better convergence implies better reconstructability. 
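To make the weight-decay argument concrete, here is the one-line stationarity computation behind it. This is a sketch in our own notation, for a generic differentiable per-sample loss $\ell$; presumably it corresponds to the kind of condition the reply references as Eq. 14:

```latex
% Stationary point of the regularized objective
% L(\theta) = \sum_{i=1}^{n} \ell\big(\Phi(\theta; x_i), y_i\big)
%             + \lambda_{\mathrm{wd}} \|\theta\|_2^2 :
\nabla_\theta L = 0
\;\Longrightarrow\;
\theta = -\frac{1}{2\lambda_{\mathrm{wd}}} \sum_{i=1}^{n}
\ell'\big(\Phi(\theta; x_i), y_i\big)\, \nabla_\theta \Phi(\theta; x_i) .
% The parameters are again a linear combination of per-sample gradients,
% but with coefficients of unconstrained sign, and small gradients of L
% directly bound the distance to this condition at finite training time.
```

This is why, with weight decay, "small gradients imply better convergence" holds at any finite training time, without appealing to the asymptotic directional convergence of Thm 3.1.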
This provides a theoretical justification for why incorporating weight decay may result in better reconstructability for models with standard initialization, which is one of the main contributions of this work. 4) Regarding evaluation with FID, this is an interesting question and a comparison worth making. However, we emphasize that our work is focused on understanding *memorization* in neural networks, while works on generative models focus on how to model a distribution. Hence, such comparisons are out of the scope of our paper. [1] Edward Moroshko, Blake E. Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, and Daniel Soudry. "Implicit bias in deep linear classification: Initialization scale vs training accuracy." Advances in neural information processing systems 33 (2020): 22182-22193. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! Most of my raised questions have been addressed. However, my biggest concern still stands, i.e., the novelty of this work on top of the existing literature. I thank the authors for listing 3 bullet points of extra contributions. Could the authors elaborate on which one is the most significant, and the associated technical difficulties? In my humble opinion, extending the methodologies from binary classification to multi-class (equation 12) is relatively straightforward. About the general losses case, equation 15 also seems a bit derivative of equation (6). The implicit bias of minimum $l_2$ norm is substituted by an explicit penalty on the $l_2$ norm, which also doesn't seem surprising in terms of extending the methodology. I could be missing some important points here and hopefully the authors could explain more. --- Reply to Comment 1.1.1: Title: Re: Comment Comment: We thank the reviewer for the comment. Regarding the extension to general regression losses (Eq. (15)), the novelty in this formulation is that it is *conceptually* different from reconstruction from classification. 
Note that in [1], the theoretical framework, which is the main basis for that work, revolved only around classification losses. Their reconstruction method is based on the KKT conditions for margin maximization, which allows for reconstructing the points on the margin. Showing reconstruction in the regression setup is not a trivial extension of [1], both conceptually and empirically: it is not obvious that the success of reconstruction in the classification case would carry over to the regression case. Moreover, the results that we show in the paper are not only empirical. We also base our results on theoretical reasoning (discussed in Section 5). Although the loss in Eq. (15) looks similar to the one in Eq. (6), the theoretical reasoning behind the two losses is very different. They are also not exactly the same: in Eq. (15) there is no restriction on the $\lambda_i$'s, namely, they may be negative, as opposed to Eq. (6), where the $\lambda_i$ are required to be positive. We also emphasize that the role of weight decay and the effect of the number of parameters on reconstructability were not known before. One of the main limitations of [1] is that their method required a very small and uncommonly used initialization scheme. We overcome this limitation by introducing weight decay, which is a common practice. Another limitation of [1] is that they focused on fully-connected networks, while adding weight decay allowed us to extend the reconstruction to CNNs. Regarding reconstruction from a multiclass classifier: the reconstruction loss in Eq. (12) is not a trivial extension of the max-margin conditions for multi-class classification. A straightforward extension of the KKT conditions for the multiclass case, as it appears in Appendix G of [2], did not work as-is for reconstruction. 
Only after formalizing the equivalent form, by removing most of the constraints (as discussed in lines 137-150), did we manage to get good reconstructions from multiclass classifiers. We agree that assessing the novelty of works in general is rather subjective. However, we believe that our work extends previous works on reconstruction in several directions, far beyond what was known before. [1] Reconstructing Training Data from Trained Neural Networks, Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani, 2022 [2] Gradient Descent Maximizes the Margin of Homogeneous Neural Networks, Kaifeng Lyu, Jian Li, 2019
NeurIPS_2023_submissions_huggingface
2023
Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization
Accept (poster)
Summary: This paper proposes EAGLE to improve OOD generalization in DGNNs. It focuses on modeling and inferring complex environments on dynamic graphs with distribution shifts and identifying invariant patterns within inferred spatio-temporal environments. EAGLE incorporates an environment-aware model, an environment instantiation mechanism for diversification, and an invariant pattern recognition mechanism for OOD prediction. The authors claim that EAGLE outperforms current methods on various dynamic graph datasets, marking the first approach to OOD generalization on dynamic graphs from an environment learning perspective. Strengths: 1. The problem to be solved in this article is significant. 2. The authors conducted sufficient experiments to verify their claim. Weaknesses: 1. The paper lacks a clear definition of 'environment', particularly in the context of multiple graphs. As the environment seems to correlate with time (T), the authors should elucidate whether the environment changes with T. Given the evident correlation between time and environmental changes (as seen in the authors' own example of transitioning from school to a company: past classmates --> current colleagues), this should be addressed in the discussion. 2. It is not apparent whether the number of environments K is determined by the dataset or is a hyperparameter. The paper should elucidate the relationship between the number of environments K and the graph data T. 3. Assumption 1, which emulates a significant amount of existing work [1-4], seems to have a miswritten part (b), where the noise term should be placed outside the brackets. 4. The article uses notation that is confusing, making it difficult to read. For instance, it is unclear why the same symbol, z, is used to represent the result set obtained by the set operation in line 134. The meanings of the symbols e and s should be clarified. 5. 
The author should provide more explanation about Proposition 1 and Proposition 2, rather than simply stating the theories and placing the proofs in the appendix. A brief explanation of what these theorems illustrate would be beneficial. 6. Several remarks and statements in the article are wrong. For example: - The absolute statement, "Existing works fail to generalize under distribution shifts ..." overlooks recent works [4] addressing distribution shifts. - The sentence on lines 140-141, "We regard the learned node embeddings as environment samples drawn from the ground-truth distribution of latent environment", is ambiguous and warrants clarification. How can we learn "ground-truth" distribution? In summary, the main issue with this paper is its lack of clarity, manifested in confusing mathematical symbols, unclear theorem and formula meanings, and an ill-defined problem statement. [1] Handling Distribution Shifts on Graphs: An Invariance Perspective, ICLR 2022 [2] Learning Invariant Graph Representations for Out-of-Distribution Generalization, NeurIPS 2022 [3] Learning Substructure Invariance for Out-of-Distribution Molecular Representations, NeurIPS 2022 [4] Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift, NeurIPS 2022 Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: plz refer to Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: plz refer to Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed comments. Responses are as follows. **Q1.1:** The definition of “environment”. **A1.1:** Thanks for your comment. We would clarify that we have explained “environment” from the perspective of **latent factor** (lines 88-94), **real-world explanations** (lines 100-104, Figure 1(a)), and **formal definition** (line 111). In short, environments are latent factors, where the ground-truth labels are inaccessible. The impact of environments causes relationships on dynamic graphs to be multi-attribute. To model diverse spatio-temporal environments, the ego-graphs of each node are disentangled and processed in $K$ embedding spaces. --- **Q1.2:** Whether the environment changes with T. Give evident correlations. **A1.2:** Thanks for your question. The environment is **time-dependent** (past classmate, current colleague). Note that time-independence is a special case, e.g., regions near the equator where it is always summer, as shown in Figure 1(a). EAGLE does not require explicit correlations between time and environment. Instead, EAGLE infers distributions of latent environments with multi-label $\mathbf{y}$, mixing with time index $t$ and environment index $k$. We will add further clarifications in the revision. --- **Q2:** How is $K$ determined? What is the relationship between $K$ and the graph? **A2:** Thanks for your question. We adopt a **warm-up mechanism** to determine $K$. We evaluate the performance of the first 10 epochs on the validation set to find the most suitable $K$, and fix it for the rest of training. We also add a parameter sensitivity experiment on COLLAB, which shows the impact of $K$ on model performance. We will add this in the revision and leave the study of how to adaptively determine $K$ for each node as future work. 
| $K$ | 2 | 4 | **6** | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| AUC (w/o OOD) | 80.33±1.29 | 81.17±1.04 | **84.41±0.87** | 83.65±0.79 | 81.94±1.03 |

--- **Q3.1:** Assumption 1 emulates a significant amount of existing works [1-4]. **A3.1:** Thanks for your comment. Assumption 1 is **a universally acknowledged and widely accepted assumption** in OOD works based on the Independent Causal Mechanism and Invariant Learning theory [1-4]. **However**, most related works don’t state that they can be optimized to satisfy this assumption. Proposition 3 in our work proves Assumption 1 can be satisfied by optimizing EAGLE (Appendix C.3). --- **Q3.2:** Assumption 1 seems to have a miswritten part (b). **A3.2:** Thanks for your careful and professional review. We revisited the relevant literature and confirmed that this was a **typo**, i.e., $\epsilon$ should be placed outside the brackets. We have re-checked that the typo does not affect Assumption 1 or the related propositions and proofs. We will correct this in the revision. --- **Q4.1:** Notations are confusing and difficult to read. Why is $\mathbf{z}$ used to represent the results of a set operation? **A4.1:** Thanks for your questions. We use $\mathbf{z}$ to denote “embeddings”. $\mathbf{z}$ with different super/subscripts introduces extra meaning. Explanations for all $\mathbf{z}$ are listed in Appendix A. To avoid confusion, we will update $\mathbf{z}$ in line 134 to $\mathbf{Z}$ in the revision. We will further clarify the notations. --- **Q4.2:** The meanings of $\mathbf{e}$ and $s$. **A4.2:** Thank you for your question. - $\mathbf{e}$ means multiple latent environments (lines 88-89). We explicitly define $\mathbf{e}_v$ = {$\mathbf{e}_k$}$_1^K$ in line 111, which means the multiple latent environment of node $v$ is a compound of $K$ surrounding environments. - $s$ is an element of $\mathcal{S} _{ob} \cup \mathcal{S} _{ge}$ (Eq. (15)), i.e., the instances sampled from the observed and generated environment samples. 
Detailed explanations for $\mathbf{e}$ and $s$ are listed in Appendix A. --- **Q5:** More explanation about Proposition 1 and Proposition 2. **A5:** Thank you for your suggestions. - **[Proposition 1]** introduces the loss function $\mathcal{L}_{\mathrm{ECVAE}}$ (Eq. (8)) to train ECVAE. ECVAE is realized by fully connected layers with the optimization goal in Eq. (C.6). To make it tractable, we apply MCMC sampling and reparameterization tricks, and reach the final optimization goal for ECVAE as Eq. (8). - **[Proposition 2]** solves a dynamic programming problem to obtain $\mathbb{I}^\star(\cdot)$ with proofs in Appendix C.2. The target is to find a partition dividing all patterns into (in)variant sets with the maximum difference between the variance means. Suppose $\mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime})$ is [0.1, 0.2, 0.3, 0.4, 0.9]; then the optimal partition is [0.1, 0.2, 0.3, 0.4] and [0.9] (with a larger mean difference) rather than [0.1, 0.2, 0.3] and [0.4, 0.9]. An optimal partition can always be found for each node, which contributes to the generalization. We will provide further explanations in the revision. --- **Q6:** The absolute statement, “Existing works fail to generalize under distribution shifts ...” overlooks recent work DIDA addressing distribution shifts. **A6:** Thanks for your comment. We clarify that our original sentence is “**most** existing works fail to generalize under distribution shifts. **DIDA [84] is the sole prior work** that tackles ...” (lines 303-305). We also express exactly the same meaning in lines 427-431 (Appendix F.2). We respectfully think our sentence clearly states the related works and is not an absolute statement. --- **Q7:** The sentence on lines 140-141 is ambiguous. How can we learn the “ground-truth” distribution? **A7:** Thanks for your question. We clarify that this sentence does not imply EAGLE could learn the ground-truth distribution. 
By “**regarding**” the node embeddings obtained by EA-DGNN as environment samples drawn from the ground-truth distribution, we can **infer** the ground-truth distribution with the observed samples by ECVAE. We will further clarify our paper to avoid potential misunderstandings. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. I found some concerns after reading this article and DIDA [1] carefully. 1. There are a lot of over-declarations in the article, such as "this is the first trial to explore the impact of environments on dynamic graphs under OOD shifts". However, the OOD problem on dynamic graphs has been studied and defined by other works, such as DIDA. And DIDA has also explored the environment or variant patterns. Therefore, the contribution of the article is insufficient. 2. In addition, a lot of content is imitated from the work of DIDA, such as the problem description in Section 2, and Assumption 1. So I think this part of the contribution is also insufficient. 3. I am also very surprised why there is no detailed comparison with DIDA in this article. For example, what are the problems in the work of DIDA, and what are the advantages of this work compared with DIDA? In summary, I have decided not to revise my score. [1] Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift, NeurIPS 2022 --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the discussion. After carefully reading the reviewer's comments, we think there exist some misunderstandings and would like to clarify the concerns point by point. **Q1.1:** There are a lot of over-declarations in the article, such as “this is the first trial to explore the impact of environments on dynamic graphs under OOD shifts”. However, the OOD problem on dynamic graphs has been studied and defined by other works, such as DIDA. **A1.1:** Thank you for your concerns and comment. 
- We would like to claim that we **do** propose to **explore the impact of environments** on dynamic graphs under OOD shifts **for the first time**, by applying the novel Environment “Modeling-Inferring-Discriminating-Generalizing” paradigm of our EAGLE. - Our above contribution **does not imply** we are the first to study the problem of OOD on dynamic graphs, as **we reiterated that DIDA [1] is the sole prior work that tackles distribution shifts on dynamic graphs** (e.g., lines 304-305), but **we are the first to utilize “environment” to solve the problem**. Specifically, we tackle this problem from a **novel** perspective (the latent environments) and gain **better** performance due to our **advantages** compared to DIDA [1] (please see A3 for details). In summary, these declarations emphasize the contribution of our investigation into “environments” (first time), rather than works on the dynamic OOD problem. --- **Q1.2:** And DIDA has also explored the environment or variant patterns. Therefore, the contribution of the article is insufficient. **A1.2:** Thank you for your comment. - In terms of “environment”, we have rechecked DIDA [1] carefully and found there are **no investigations on “environment” or other similar terms**. - **The variant patterns exploited by DIDA [1] are different from the “environments” in our work**. Specifically, DIDA [1] defines variant/invariant patterns as subsets of ego-graphs across time stamps whose predictive power with respect to labels is (not) stable across time periods and graph communities **(all in only one embedding space)**, while our explanations for “environments” are $K$ **latent embedding spaces**, which are formed under the impacts of complex spatio-temporal node interactions, causing the relationships to be multi-attribute. --- Reply to Comment 1.1.2: Comment: (continue) **Q2:** In addition, a lot of content is imitated from the work of DIDA, such as the problem description in Section 2, and Assumption 1. 
So I think this part of the contribution is also insufficient. **A2:** Thank you for your comment. - **[Section 2]** First, we would like to clarify that **we never claimed that the “problem” defined in Section 2 is new or that it should be considered our contribution**. We have cited DIDA [1] accurately and comprehensively in the main paper and Appendix. However, **we tackle the problem, i.e., OOD generalization on dynamic graphs, from a totally different perspective with a novel method**, enjoying superior advantages and achieving better results. - **[Assumption 1]** We have replied to this concern in the rebuttal (please see Q3.1 and A3.1 for details). We would like to explain it again. Assumption 1 is **a universally acknowledged and widely accepted assumption** in almost all works based on the Independent Causal Mechanism (ICM) and the Invariant Learning theory, not only in graph OOD-related works [1-4], but also in works of other fields [5-7]. So we do not consider this to be an imitation of DIDA [1], nor one of our main contributions. Besides, most related works **do not** provide support that their models can be optimized to satisfy this assumption. **We propose Proposition 3 supporting that optimizing EAGLE can help satisfy Assumption 1, which is our contribution**. We further provide the proofs in Appendix C.3. --- Reply to Comment 1.1.3: Comment: (continue) **Q3:** I am also very surprised why there is no detailed comparison with DIDA in this article. For example, what are the problems in the work of DIDA, and what are the advantages of this work compared with DIDA? **A3:** Thank you for your comment. We would like to point out that **we have provided detailed comparisons with DIDA [1] in the rebuttal and we will add the comparison in the revised version**. We also explain the comparisons again as follows. 
- **[Problems]** We acknowledge that we solve the same problem and follow the same task settings proposed by DIDA [1], which is not considered our contribution. We have made citations accurately and comprehensively in the contents. However, we tackle the problem from a new perspective with a novel framework, achieving better results. - **[Advantages]** Due to the page limit, we briefly discuss the comparison with DIDA in the main paper (lines 270-271) and experimentally validate the advantage of our method compared with DIDA [1]. The detailed differences and EAGLE’s advantages are: - **[Modeling environments]** EAGLE is the **first** to explicitly model latent environments on dynamic graphs by variational inference. DIDA [1] neglects to model complex environments, which weakens its ability to identify invariant patterns. - **[Representation learning]** EAGLE learns node embeddings by $K$-channel environment disentangling and spatio-temporal convolutions, which helps better understand multi-attribute relations. DIDA [1] learns with single-channel convolutions with attention. - **[Invariant learning]** EAGLE discriminates spatio-temporal invariant patterns by **the theoretically supported $\mathbb{I}^\star(\cdot)$** for each node individually, leading to better removal of spurious correlations. DIDA [1] divides invariant/variant parts simply and heuristically with a minus operation for all nodes. - **[Causal intervention]** EAGLE performs fine-grained causal interventions with **both** observed and generated environment samples, better minimizing the variance of extrapolation risks and generalizing better to unseen distributions. DIDA [1] performs only coarse-grained interventions with observed samples. --- We will make sure to provide the necessary clarifications and avoid these misunderstandings in the revision. Should you have any further questions or concerns, we are glad to continue the discussion and provide further clarifications. 
[1] Dynamic graph neural networks under spatio-temporal distribution shift. NeurIPS, 2022. [2] Learning invariant graph representations for out-of-distribution generalization. NeurIPS, 2022. [3] Learning substructure invariance for out-of-distribution molecular representations. NeurIPS, 2022. [4] Handling distribution shifts on graphs: An invariance perspective. ICLR, 2022. [5] Out-of-distribution generalization with causal invariant transformations. IEEE CVPR, 2022. [6] Causal inference by using invariant prediction: identification and confidence interval. Journal of the Royal Statistical Society Series B: Statistical Methodology, 2016. [7] Invariant risk minimization. arXiv, 2019.
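As a concrete illustration of the partition rule from Proposition 2 discussed in the replies above (split the per-environment variances into two groups so that the difference between the group means is maximal), here is a minimal brute-force sketch of the criterion. This is our own hypothetical code, not the authors' dynamic-programming implementation, and the function name is invented:

```python
def split_invariant_variant(variances):
    """Split variances into (invariant, variant) groups by trying every
    prefix/suffix split of the sorted values and keeping the split that
    maximizes the difference between the two group means."""
    vals = sorted(variances)
    best_diff, best_idx = float("-inf"), 1
    for i in range(1, len(vals)):           # every non-empty prefix/suffix split
        low, high = vals[:i], vals[i:]
        diff = sum(high) / len(high) - sum(low) / len(low)
        if diff > best_diff:
            best_diff, best_idx = diff, i
    return vals[:best_idx], vals[best_idx:]


# The example from the authors' reply:
print(split_invariant_variant([0.1, 0.2, 0.3, 0.4, 0.9]))
# -> ([0.1, 0.2, 0.3, 0.4], [0.9])
```

On the example from the reply, this returns the partition ([0.1, 0.2, 0.3, 0.4], [0.9]), matching the stated optimum.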
Summary: This paper proposes a novel framework called EAGLE to address the challenge of out-of-distribution generalization in dynamic graphs. The authors investigate the impact of latent environments on dynamic graphs and develop methods to model and infer these environments. They propose techniques to recognize spatio-temporal invariant patterns and perform fine-grained causal interventions. The rationality and correctness of EAGLE are guaranteed by rigorous reasoning and mathematical proof. Experimental results demonstrate the superiority of their method in handling distribution shifts in dynamic graph datasets. Strengths: (1) First attempt to explicitly model the environment on dynamic graphs. Through the innovative Environment “Modeling-Inferring-Discriminating-Generalizing” paradigm, the influence of open environment on OOD generalization on dynamic graphs is explored, and the generalization ability of the model in this scenario is improved. (2) Well-organized derivation and mathematical proofs. Based on causality theories, the proposed method is proven to satisfy the invariance assumptions and propositions, directing the optimization that achieves OOD generalization for in-the-wild extrapolations by fine-grained causal interventions. (3) Reasonable experiment settings and sufficient results. Appropriate baseline methods are compared under multiple OOD environment settings. The results of the main experiment and several auxiliary experiments demonstrate the effectiveness and superiority of the proposed method. (4) This method can be easily extended to other sequential models to perform temporal convolutions, or compounded with any attention/reweighting mechanisms in environment modeling. This paper is well-organized and easy to follow. The background problems and method design are clearly explained. Weaknesses: (1) Variable symbols are kind of complicated, and their expressions are not very clear, which brings difficulties in understanding theoretical proofs. 
(2) The computational complexity and a sufficient analysis of it are not discussed in the main contents. (3) The distribution shifts in the experiment datasets are manually designed. I wonder whether there exist naturally formed dynamic graph OOD datasets with a measurable OOD degree, and whether experiments could be conducted on such datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Are there other OOD types besides the links and node features? How can it be demonstrated that the proposed model improves generalization performance on the other OOD types? (2) It is better to explain the details of the baselines and datasets, like the differences with the proposed model, the statistical information of the datasets, etc., in the main contents. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Y Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed comments and insightful questions. We make responses to the reviewer’s comments as follows. **Q1:** Variable symbols are kind of complicated, and their expressions are not very clear, which brings difficulties in understanding theoretical proofs. **A1:** Thanks for your kind suggestion. In order to enhance readability and reduce the difficulty of understanding symbols, we will improve complex symbols and notations by increasing the descriptions and omitting unnecessary superscripts and subscripts. Especially in the theoretical proofs, we will add easy-to-follow explanations and examples in the revision to improve the reader experience. --- **Q2:** Computational complexity and sufficient analysis are not discussed in the main contents. **A2:** Thanks for your suggestion. - **[Computational complexity]** We include the computational complexity and its analysis in Appendix B. The computational complexity of EAGLE is $\mathcal{O} \left( |\mathcal{E}| \sum_{l=0}^{L} d^{(l)} + |\mathcal{V}| \left( \sum_{l=1}^{L} d^{(l-1)}d^{(l)} + (d^{(L)})^2 \right) \right)$ (Eq. (B.1)), indicating EAGLE has **a linear computation complexity** with respect to the number of nodes and edges, which is **on par with** existing dynamic GNNs and OOD methods on dynamic graphs. - **[Space complexity]** Denote $|\mathcal{V}|$ and $|\mathcal{E}|$ as the number of nodes and edges, respectively, $K$ as the number of environments, $T$ as the number of time slices, $L$ as the number of layers in EA-DGNN, $L^\prime$ as the number of layers in ECVAE, $d_0$ as the dimension of input node features, $d=K d^\prime$ as the hidden dimension of EAConv layers in EA-DGNN, $d^\prime$ as the hidden dimension of the encoder and decoder network layers of ECVAE, and $\sum_{v \in \mathcal{V}} \mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime})$ as the variance of $K$ environment-aware representations. 
- the dynamic graph: $\mathcal{O}(K T(|\mathcal{V}|+|\mathcal{E}|))$ - the node input features: $\mathcal{O}(|\mathcal{V}| K T d_0)$ - the EAConv layer: $\mathcal{O}(L d^2)$ - ECVAE: $\mathcal{O}(L^\prime d^\prime)$ - storing generated environment samples: $\mathcal{O}(|\mathcal{V}| K T d^\prime)$ - storing the states for function $\mathbb{I}(\cdot, \cdot)$: $\mathcal{O}(K \sum_{v \in \mathcal{V}} \mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime}))$. - The **overall** space complexity of EAGLE can be roughly calculated as $\mathcal{O}(K T(|\mathcal{V}|+|\mathcal{E}|)) + \mathcal{O}(|\mathcal{V}| K T d) + \mathcal{O}(L d^2) + \mathcal{O}(L d) + \mathcal{O}(K \sum_{v \in \mathcal{V}} \mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime}))$ (note that, we omit differences in superscript and subscript of $L$’s and $d$’s for brevity) As $K$, $T$, $d$ and $L$ are small numbers, EAGLE has a linear space complexity with respect to $|\mathcal{V}|$ and $|\mathcal{E}|$. Our experiments show that the empirical memory cost of EAGLE **is on par with the related works.** We will add these analyses in the main contents in the revised version. --- **Q3:** If there exist naturally formed dynamic graph OOD datasets that measure on their OOD degree, and conduct experiments based on those datasets. **A3:** Thanks for your question. To the best of our knowledge, there is no existing dynamic graph OOD datasets that measure their OOD degree. - **[Why no naturally formed OOD datasets]** Out-of-distribution shifts naturally exist in dynamic graphs as the formation of real-world dynamic graphs typically follows a complex process under the impact of underlying environments. However, though these shifts vary in degree, they are **difficult to quantify**. We leave the problem of measuring the OOD degree between training and testing distributions as future work. - **[Our datasets are more challenging]** In our experiments, we manipulate the datasets following DIDA [1]. 
Actually, the manually designed distribution shifts in our work are **more practical and challenging** in real-world scenarios, as the model **cannot get access to any information** about the filtered links until the testing stage. Our model still performs better than all baselines under such harsh OOD distribution shifts, which further demonstrates EAGLE's excellent out-of-distribution generalization ability.

---

**Q4:** Are there other OOD types besides the links and node features? How to demonstrate that the proposed model improves generalization performance on the other OOD types?

**A4:** Thanks for your question. Though other OOD types exist at the graph level, such as the graph size, scaffold, base motifs, etc., none of them has a time dimension (i.e., they are not dynamic) or real-world counterparts in dynamic settings, leaving them **suitable only for static graphs**. As our work focuses on the OOD generalization problem on dynamic graphs for **node-level tasks**, the most relevant OOD types at the node level are the **links and node features**. So we conduct extensive experiments to demonstrate the generalization ability of our model in node-level OOD shift settings.

---

**Q5:** Explain the details of the baselines and datasets in the main contents.

**A5:** Thank you for your suggestions.

- **[Baselines]** We provide a short introduction in the main contents (Section 4, lines 220 to 227) and the full details of the baselines, together with an analysis of the differences from our proposed model, in Appendix D.2; the implementation details of the baselines are introduced in Appendix E.2.
- **[Datasets]** We give a simplified explanation and the OOD settings in the main contents (Section 4, lines 216 to 219), and provide the full details of the datasets in Appendix D.1, including statistical information, the OOD manipulations, their visualizations, etc.
We will add more details of the baselines and datasets to the main contents in the revision to enhance the understanding of the experiments.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their careful rebuttals and efforts, which have resolved all my questions and concerns, with corresponding improvements proposed for the revision. In summary, this paper proposes a novel framework to address the challenge of out-of-distribution generalization in dynamic graphs. The authors investigate the impact of latent environments on dynamic graphs for the first time, and extensive experiments demonstrate its superiority compared to baselines. It is worth noting that OOD generalization on dynamic graphs is under-explored, and this work emphasizes the importance of the latent environments, which I think is insightful for upcoming works. I suggest this paper be accepted.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer YPMZ, We would like to express our sincere gratitude to you for endorsing our work and providing constructive suggestions. OOD generalization on dynamic graphs is an interesting but under-explored task, and our work investigates the impact of latent environments on dynamic graphs for the first time, which brings a new perspective to upcoming work in this field. Thanks again for the time and effort in reviewing our work!
Summary: In this paper, the authors propose a novel framework EAGLE, Environment-Aware dynamic Graph LEarning, which tackles the OOD generalization problem by modeling complex dynamic environments and exploiting spatial-temporal invariant patterns. Following the Environment “Modeling-Inferring-Discriminating-Generalizing” paradigm, EAGLE consists of three modules. Firstly, an Environment-Aware Deep Graph Neural Network (EA-DGNN) is designed to model environments by learning disentangled representations under multi-channel environments. EAGLE then diversifies the observed environments' samples with the environment instantiation mechanism, which infers environment distributions by applying multi-label variational inference. Finally, EAGLE discriminates spatio-temporal invariant patterns for out-of-distribution prediction by the invariant pattern recognition mechanism and performs fine-grained causal interventions in a node-wise manner with a mixture of observed and generated environment samples. Extensive experiments on real-world datasets demonstrate that EAGLE outperforms baseline methods on dynamic graph OOD tasks.

Strengths:

1. The OOD generalization problem on dynamic graphs is important, especially when considering changes in latent environments. EAGLE is the first model to tackle this problem by modeling the impact of environments on dynamic graphs under distribution shifts.
2. The proposed Environment “Modeling-Inferring-Discriminating-Generalizing” paradigm is technically sound and easy to understand, and using the proposed Environment Instantiation Mechanism to generate instances for inferring environments is novel.
3. The experiments are extensive (real-world datasets under distribution shifts on link attributes and node features, ablation studies on proposed mechanisms), and the results are considerably better than existing methods.

Weaknesses:

1. The requirement of the environment information is not specified.
Especially, whether EAGLE requires the environment label to be known or not. The reviewer thinks one of the main contributions of EAGLE should be inferring environments from samples, but there are no detailed explanations, and it seems that the number of environments is pre-determined in the experimental setting. The reviewer wonders if the authors could discuss the requirement of environment labels in detail and how to select the proper number of environments if the environment label is unknown.
2. Proposition 2 is difficult to understand and lacks details about how to obtain the optimal $\mathbb{I}^*(\cdot)$. It would be great if the authors could provide further explanation and a concrete example to help readers better understand the proposition.
3. The related work part should also add literature related to disentangled representation learning and further discuss the difference between EAGLE and DIDA [61] in detail.

Minor issues: typo: line 145, multi-lables -> multi-labels

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
1. Please see the first two questions in Weaknesses.
2. Compared to DIDA, what is the advantage of EAGLE for modeling the complex dynamic environment? It would be great if the authors could provide some examples for explanation.
3. When the training data is homogeneous (i.e., only contains few environments), how does EAGLE generalize to unseen test environments with OOD shifts?
4. How to select an appropriate number of generated instances for Invariant Pattern Recognition to exploit spatio-temporal invariant patterns?
5. The authors discussed EAGLE's computational complexity in the appendix. What is EAGLE's space complexity?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations of their work. The reviewer thinks that the authors should make the description of the problem setting, the definitions of spatio-temporal invariant patterns, and variant patterns more clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed comments. Responses are as follows.

**Q1.1:** Whether EAGLE requires the environment label?

**A1.1:** Thanks for your question. EAGLE doesn't require the ground-truth environment labels, which are also **inaccessible** in real dynamic graphs. We propose a multi-label $\mathbf{y}$ that combines the time and environment indices. It can be seen as our inferred environment label.

---

**Q1.2:** How to select the proper number of environments?

**A1.2:** Thanks for your question. We adopt a **warm-up mechanism** to determine $K$. We evaluate the performance of the first 10 epochs on the validation set to find the most suitable $K$, and fix it for the rest of the training. Following your suggestion, we add a parameter sensitivity experiment for $K$ on COLLAB, which shows the impact of $K$ on model performance. We will add this in the revision.

| $K$ | 2 | 4 | **6** | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| AUC (w/o OOD) | 80.33±1.29 | 81.17±1.04 | **84.41±0.87** | 83.65±0.79 | 81.94±1.03 |

---

**Q2:** Further explanation and example to understand Proposition 2.

**A2:** Thank you for your suggestion.

- **[Explanation]** Proposition 2 solves a **dynamic programming problem** to obtain the optimal $\mathbb{I}^\star(\cdot)$, with proofs in Appendix C.2. The **target** is to find a partition dividing all patterns into variant/invariant sets with the **maximum** difference between the variance means.
- **[Example]** Suppose $\mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime})$ is [0.1, 0.2, 0.3, 0.4, 0.9]; then the optimal partition is [0.1, 0.2, 0.3, 0.4] and [0.9] (with a larger mean difference) rather than [0.1, 0.2, 0.3] and [0.4, 0.9]. An optimal partition **can always be found** for each node, which greatly improves the generalization, and is **one of our main advantages** compared with DIDA [1]. We will provide further explanations in the revision.
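To make the partition example in A2 above concrete, here is a minimal brute-force sketch of the objective (illustrative only, not the paper's dynamic programming implementation; the function name `optimal_partition` is ours): sort the variances and pick the split point that maximizes the difference between the two group means.

```python
def optimal_partition(variances):
    """Split a list of pattern variances into an (invariant, variant) pair
    so that the difference between the two group means is maximal.
    Brute-force sketch of the objective behind Proposition 2."""
    vs = sorted(variances)
    best_split, best_gap = 1, float("-inf")
    for i in range(1, len(vs)):  # keep at least one element in each group
        low_mean = sum(vs[:i]) / i
        high_mean = sum(vs[i:]) / (len(vs) - i)
        if high_mean - low_mean > best_gap:
            best_gap, best_split = high_mean - low_mean, i
    return vs[:best_split], vs[best_split:]

invariant, variant = optimal_partition([0.1, 0.2, 0.3, 0.4, 0.9])
print(invariant, variant)  # -> [0.1, 0.2, 0.3, 0.4] [0.9]
```

Running this on the example in A2 recovers exactly the stated partition: the gap 0.9 - 0.25 = 0.65 beats the alternative split's gap of 0.65 - 0.2 = 0.45.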
---

**Q3:** More related work on disentangled representation learning.

**A3:** Thank you for your valuable suggestions. Disentangled Representation Learning (DRL) is closely related to our work. Most existing graph OOD works fail to learn separate semantics of latent and complex environments, and DRL inspires us to perform environment disentangling, which greatly helps to improve the generalization capability. We will update this in the revision to enhance the understanding of environments.

---

**Q4:** Further discuss the differences between EAGLE and DIDA [1]. What are EAGLE's advantages?

**A4:** Thanks for your constructive suggestion. The differences and advantages are:

- **[Modeling environments]** EAGLE is the **first** to explicitly model latent environments on dynamic graphs by variational inference. DIDA [1] neglects to model environments, which weakens its ability to identify invariant patterns.
- **[Representation learning]** EAGLE learns node embeddings by $K$-channel environment disentangling and spatio-temporal convolutions, which helps better understand multi-attribute relations. DIDA [1] learns with single-channel convolutions with attention.
- **[Invariant learning]** EAGLE discriminates spatio-temporal invariant patterns by **the theoretically supported $\mathbb{I}^\star(\cdot)$** for each node individually, leading to better removal of spurious correlations. DIDA [1] divides invariant/variant parts simply and heuristically, with a minus operation for all nodes.
- **[Causal intervention]** EAGLE performs fine-grained causal interventions with **both** observed and generated environment samples, better minimizing the variance of extrapolation risks and generalizing better to unseen distributions. DIDA [1] intervenes coarse-grainedly with only observed samples.

We will add the comparison in the revision.

---

**Q5:** How does EAGLE generalize to unseen test environments with OOD shifts when training data is homogeneous?
**A5:** Thank you for your question. EAGLE is agnostic to the intrinsic properties of the data (density, heterophily, etc.). Further, following the OOD literature, we assume there are $K$ latent environments, where $K$ can be determined by **the warm-up mechanism**. We leave the study of how to dynamically and adaptively determine $K$ for different datasets as future work.

---

**Q6:** How to select an appropriate number of generated instances for Invariant Pattern Recognition?

**A6:** Thank you for your question. In Appendix D.5, we provide the analysis and appropriate mixing ratio settings. As the number of observed instances is fixed ($n_{ob}=|\mathcal{V}| \times K \times T$), the number of generated instances can be calculated by multiplying the preferred ratio with $n_{ob}$.

---

**Q7:** What is EAGLE's space complexity?

**A7:** Thank you for your question. The overall space complexity of EAGLE is $\mathcal{O}(KT(|\mathcal{V}|+|\mathcal{E}|)) + \mathcal{O}(|\mathcal{V}|KTd) + \mathcal{O}(Ld^2) + \mathcal{O}(Ld)+ \mathcal{O}(K\sum_{v\in\mathcal{V}} \mathrm{Var}(\mathbf{z}_v^{\mathbf{e}\prime}))$, where the meaning of the notations can be found in Appendices A and B. As $K$, $T$, $d$ and $L$ are small numbers, EAGLE has a linear space complexity with respect to $|\mathcal{V}|$ and $|\mathcal{E}|$. Our experiments also show that the empirical memory cost of EAGLE is on par with related works. We will add the detailed space complexity in the revision.

---

**Q8.1:** No limitations statements.

**A8.1:** Thanks for your suggestions. We would like to clarify that we have briefly discussed the limitation of our work being restricted to node-level tasks **in Section 6, lines 322-323.** We will extend the limitation discussion in the revision.

---

**Q8.2:** Make the descriptions more clear.

**A8.2:** Thank you for your suggestion. We will make further improvements with clearer notations and easy-to-understand explanations in the revision.
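As a rough, illustrative check of the linear scaling stated in A7 (a back-of-the-envelope sketch with hypothetical sizes; the function name, the grouping of terms, and the approximation of the variance-state term as $K|\mathcal{V}|$ are ours, not the paper's code):

```python
def eagle_space_terms(V, E, K, T, L, d):
    """Rough count of stored scalars per component, following the
    space complexity terms in A7. The Var-state term is approximated
    here as K * V. Illustrative only."""
    return {
        "dynamic graph": K * T * (V + E),
        "node features": V * K * T * d,
        "conv weights": L * d * d + L * d,
        "generated samples": V * K * T * d,
        "variance states": K * V,
    }

small = sum(eagle_space_terms(V=1000, E=5000, K=6, T=16, L=2, d=64).values())
large = sum(eagle_space_terms(V=2000, E=10000, K=6, T=16, L=2, d=64).values())
print(large / small)  # close to 2: doubling |V| and |E| roughly doubles memory
```

Since $K$, $T$, $L$, and $d$ stay fixed while only $|\mathcal{V}|$ and $|\mathcal{E}|$ grow, the total grows linearly with graph size, matching the claim above.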
---

[1] Dynamic graph neural networks under spatio-temporal distribution shift. NeurIPS, 2022.

---

Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors
Comment: Thank you for the detailed rebuttal. I appreciate the authors providing the experimental results for selecting the appropriate $K$. However, I feel like the warm-up mechanism cannot fully convince me, as we usually do not know which kind of features (i.e., invariant or spurious) are learned by the model in the first 10 epochs, and the experimental results demonstrate that different $K$ affects the AUC score. Therefore, I would like to keep my score unchanged.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the feedback. We would like to further clarify your concerns as follows.

**Q:** I feel like the warm-up mechanism cannot fully convince me, as we usually do not know which kind of features (i.e., invariant or spurious) are learned by the model in the first 10 epochs, and the experimental results demonstrate that different $K$ affects the AUC score.

**A:** Thanks for your question. $K$ reflects the number of underlying environments, which is closely related to the datasets. Thus, we indeed need to decide the most appropriate $K$ for each dataset. As a general practice, we treat $K$ as a hyper-parameter and tune it on the validation set for the first 10 epochs, as we explained in the last response. Following your suggestion, we further demonstrate empirically that 10 epochs are sufficient to decide the most appropriate $K$.
Specifically, we show the AUC on the COLLAB dataset with different $K$ as follows:

| $K$ | 2 | 4 | 6 | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| Epoch 2 | 61.07 | 62.79 | 63.91 | 63.11 | 60.22 |
| Epoch 4 | 63.44 | 64.32 | 65.23 | 64.60 | 61.34 |
| Epoch 6 | 64.21 | 67.51 | 67.79 | 67.35 | 62.98 |
| Epoch 8 | 66.58 | 68.23 | 69.95 | 68.83 | 64.05 |
| Epoch 10 | 68.29 | 69.96 | 71.26 | 70.24 | 66.12 |
| Final Epoch (converged) | 80.33 | 81.17 | 84.41 | 83.65 | 81.94 |

The above results show that, though the model has not converged, its performance in the first 10 epochs is able to determine the optimal $K$ value, which is **consistent with the final result, validating the effectiveness of our warm-up mechanism.** We will incorporate the results in the revision. Besides, we would like to clarify that the warm-up mechanism is a training “trick” we adopted in practice to improve training efficiency, and we do not consider it a main contribution. **There is no big difference when fine-tuning $K$ over complete training runs without applying the warm-up mechanism.** We will leave exploring more advanced methods to decide $K$ as future work. Should you have any further questions or concerns, we are glad to provide further responses.

---

Reply to Comment 1.1.2: Comment: Dear Reviewer iWRC, As we draw closer to the rebuttal deadline, we would like to inquire if you have any additional questions or concerns about our work. We greatly value your feedback. Thank you!

Best,
Authors from submission 1276
Summary: This paper proposes a novel framework called EAGLE for out-of-distribution generalization on dynamic graphs. EAGLE models the complex environments that influence the generation of dynamic graphs and exploits the spatio-temporal invariant patterns that can generalize under distribution shifts. EAGLE consists of four mechanisms: environment-aware dynamic graph neural network, environment instantiation, invariant pattern recognition, and causal intervention. EAGLE achieves superior performance on future link prediction tasks on both real-world and synthetic datasets compared to existing methods. The paper also provides theoretical analysis and empirical studies to support the effectiveness of EAGLE. Strengths: 1. The illustration in Figure 2 is clear and aids in understanding the proposed method. 2. Theoretical analysis and proofs are provided as necessary. 3. Overall, the proposed solution for OOD Dynamic link prediction seems reasonable. The idea of generating spatio-temporal environments is novel. Weaknesses: 1. The generated environment patterns lack explainability. These patterns are produced through disentangling, making it difficult to explicitly understand their specific meaning in reality. 2. The author should include further discussion on how the proposed method can contribute to link prediction in a new environment, as exemplified in Figure 1. If the model successfully distinguishes environment-invariant patterns in previous graphs, how will it capture new patterns in a different environment to improve prediction? 3. The writing style is not reader-friendly for those unfamiliar with the task and techniques. It would be beneficial to provide more intuitive explanations to enhance understanding. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. What does the one-hot multi-label y introduced in line 142 represent? 
Does it mean the index of the environment that the observed z belongs to, or does it also include the index of the time step? 2. Could the author provide a more detailed explanation of the visualization experiment mentioned in Section 4.4?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed comments and insightful questions. Responses are as follows.

**Q1:** The generated environment patterns lack explainability.

**A1:** Thank you for your question.

- **[Understanding latent environments]** Environments are **latent factors** whose ground-truth labels are **inaccessible**. We provide real-world examples (lines 101-106) as illustrations. **The key insight is** that the formation of dynamic graphs typically follows a complex process under the impact of environments, causing the relationships to be multi-attribute. To effectively model diverse spatio-temporal environments, the ego-graphs of each node are disentangled and processed in **different embedding spaces**.
- **[An example: social networks]** The relations between a central node and its neighbors, e.g., “classmates”, “colleagues”, etc., are formed in different surrounding environments; e.g., classmate relations are formed in the “school” environment, consisting of “classmates”, “teachers”, “staff”, etc., and colleague relations are formed in the “working” environment. We propose a **multi-channel environment disentangling mechanism** to discriminate different semantics in $K$ embedding spaces, e.g., the 1st embedding space captures the “classmate” relations, the 2nd embedding space captures the “colleague” relations, etc. Thus, EA-DGNN can perceive and encode environment features into node representations. We will add further explanation in the revised version.

---

**Q2:** Further discussion on how the proposed method can contribute to link prediction in a new environment in Figure 1. How will the model capture new patterns in a different environment?

**A2:** Thank you for your question.

- **[Further discussions]** We have provided further explanations in Appendix F.1.
It indicates that the existing model has captured the spurious correlation between the semantics of “coffee” and “cold drink”, which caused the **false** prediction of “Iced Americano”. EA-DGNN can learn multiple environment patterns, such as “coffee”, “cold drinks”, “iced dessert”, “summer dressing”, etc. Section 3.3 identifies the **spatio-temporal invariant pattern** to be “coffee”. Section 3.4 encourages the model to learn solely with the “coffee” pattern while minimizing the environment extrapolation risks, to enhance the generalization ability in a new environment.
- **[During testing]** Our trained model will directly give a higher probability score to a Hot Latte than to an Iced Americano in testing. This is because the model has fully learned the invariant and sufficient information for correct prediction during training, and the risk in the unknown test environment is reduced. Besides, the model can perceive the semantics of the testing environment (winter) through message passing and aggregation of its neighbor nodes, which leads to the final prediction. We will add further explanations in the revision.

---

**Q3:** The writing style is not reader-friendly. It would be beneficial to provide more intuitive explanations.

**A3:** Thank you for your suggestion. We will add more related works and further intuitive explanations to improve readability, especially for the techniques and theories. In addition, we will try to simplify the notations and add clearer clarifications to enhance understanding.

---

**Q4:** What does the one-hot multi-label $\mathbf{y}$ introduced in line 142 represent? Does it mean the index of the environment that the observed $\mathbf{z}$ belongs to, or does it also include the index of the time step?

**A4:** Thank you for your questions.

- **[Meaning]** $\mathbf{y}$ combines the time index $t$ and the environment index $k$, indicating the environment index and the time index that $\mathbf{z}$ belongs to. It can be seen as our inferred label of environments.
- **[Example]** If there exist $K$ environments and $T$ graph snapshots, we first initialize a zero matrix of shape $K \times T$. Then we mark the value at position $(k,t)$ to be 1 and reshape the matrix into a one-dimensional vector $\mathbf{y}$, indicating the multi-label of $\mathbf{z}$ for the $k$-th environment at time $t$.
- **[Role]** $\mathbf{z}$ is **concatenated** with its corresponding $\mathbf{y}$ to realize conditional variational inference for the distribution of latent environments by ECVAE. We can then instantiate environments by generating samples from the inferred distribution with any given $\mathbf{y}$. This can be regarded as **data augmentation** under the guidance of the inferred prior distributions, which helps improve the generalization ability. We will add more explanations in the revision.

---

**Q5:** Provide a more detailed explanation of the visualization experiment.

**A5:** Thank you for your question.

- **[Settings]** We visualized snapshots in the COLLAB dataset using NetworkX in Figure 5. As EAGLE concentrates on solving **node-level** downstream tasks, we carried out the visualization analysis from the perspective of a single node $v$ (red color).
- **[Process]** EA-DGNN first models the environment-aware node representation for node $v$ (represented by “🟪🟦🟩🟧🟨”), then Section 3.3 identifies spatio-temporal variant patterns (represented by “⛝”). To encourage the model to rely on sufficient and invariant information to make predictions and minimize the environment extrapolation risks, Section 3.4 randomly replaces the “⛝” parts with observed or generated environment samples (represented by “🔄”).
- **[Analysis]** Thus, the model focuses more on the neighbor nodes in the **more invariant environment patterns** (purple and blue parts “🟪🟦”), increasing the edge weights (which can be seen as spatio-temporal attention), while ignoring the links with the **more variant environment patterns** (represented by “🟩🟧🟨”).
In conclusion, EAGLE can effectively learn invariant patterns, making generalized predictions. We will add more detailed explanations of the visualization experiment in the revision.
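The one-hot multi-label construction described in A4 above can be sketched in a few lines (an illustrative snippet; the function name `multi_label` is ours): build a $K \times T$ zero matrix, mark position $(k, t)$, and flatten.

```python
def multi_label(k, t, K, T):
    """One-hot multi-label y for environment index k at time step t:
    mark position (k, t) in a K x T zero matrix, then flatten it into
    a vector of length K * T (a sketch of the example in A4)."""
    matrix = [[0.0] * T for _ in range(K)]
    matrix[k][t] = 1.0
    return [v for row in matrix for v in row]  # row-major flatten

y = multi_label(k=1, t=2, K=3, T=4)
print(y.index(1.0))  # -> 6, i.e. the single hot entry sits at k * T + t = 1 * 4 + 2
```

The flattened vector thus jointly encodes both indices, which is what allows the ECVAE described in A4 to condition on environment and time simultaneously.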
NeurIPS_2023_submissions_huggingface
2023
Clifford Group Equivariant Neural Networks
Accept (oral)
Summary: This paper presents a novel approach for creating E(n)-equivariant models based on Clifford (geometric) algebras. The authors identify the Clifford group and its action. By extending this action to the entire Clifford algebra, remarkable properties can be obtained enabling the construction of equivariant maps from the Clifford algebra to itself, as well as other equivariant operations, such as grade projection. Using these, an equivariant Clifford group neural network operating on vector fields can be constructed, capable of a more accurate and nuanced interpretation of the underlying structures than the baseline scalar-field networks. The proposed method exhibits application versatility across different tasks and dimensionalities of the space. Strengths: S1. The high quality of writing and presentation are two strong features of the paper. The authors succeeded in finding a good balance between the amount of technical detail and the narration clarity. S2. The paper presents novel theoretical results based on Clifford algebras, enabling an original way of constructing equivariant neural networks, which is a great contribution to the field. S3. Experimental validation is versatile and, to a large extent, convincing. Weaknesses: W1. Missing details in the presentation of the proposed method, including: (See the questions section for more information.) - translation equivariance (Q1) - invariant prediction computation (Q2), - the architecture of the proposed networks in the experiments (Q4). W2. Incomplete comparison with other methods: - The support of the claim that the scalar-feature methods are not able to extract some essential geometric properties from input data needs to be elaborated on (Q3). - Discussion on the complexity of the proposed method vs the baselines is missing (Q4). - Some training details are missing (Q5). Addressing these weaknesses will improve the presentation and clarify the support of certain claims. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1. Translations: - The authors claim to present an E(n)-equivariant method. - As discussed in Section 3.2, equivariance w.r.t. the Clifford group implies equivariance w.r.t. the orthogonal group O(n). It is, however, unclear how **translation** equivariance is attained. In fact, all the presented experiments involve only the orthogonal group transformations: in the $n$-body experiment, the authors use mean-subtracted positions of the particles, thereby removing the effect of translation already on the input level. - Could the authors clarify how translations can be handled using the proposed layers? Q2. How are O(n)-**invariant** predictions obtained with the proposed method? Q3. The claim of better underlying geometric properties capturing is meant to be supported by the *signed-volume experiment*. I would like for the authors to elaborate on the claim itself and clarify the presented experimental support. - How do the authors define *covariance*? - Some of the considered scalar-feature methods extract equivariant features and compute the invariant features from them to perform classification/segmentation requiring this. For instance, in the theory of the VN method [34], there's nothing that restricts the features to only be equivariant under rotations and not reflections. In fact, the invariant features are obtained as the inner product of equivariant features, which also cancels the effect of reflections thus making the method produce O(3)-invariant predictions. This makes the VN method unable to distinguish a tetrahedron and its reflected copy (and thus, their signed volumes will not be distinguished either) by the design of the invariant computation block only, even though the method extracts equivariant features. This should be taken into account in the comparison in Section 5.1. - Could the authors provide more details of the experimental setup? 
I want to see the number of samples, the ratio of positive/negative volumes, and the success rates of the predictions given positive/negative volumes for the proposed method and the baselines. Q4. Could the authors provide details of their method architecture used in the experiments in Sections 5.1-2? How does the complexity of the method compare to the baselines? Q5. Is the training setup the same for all the methods in the experiments? I.e., were the training hyperparameters optimized for the proposed method and used for others? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations and broader impact are adequately addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their highly valuable feedback. We address their concerns and questions in the following. The reviewer has already transformed weaknesses into questions (thanks!), which we answer directly.

1. *How is translation equivariance obtained?*
a. Translation invariance is obtained through the subtraction of a reference point, like the mean position of the point cloud, which is a typical approach to getting translation invariance. To obtain equivariance, one simply adds the reference point back onto the NN's prediction. Villar et al. (2021) show that this has some universality properties regarding invariant functions. While relative positions are the gold standard for translation equivariance, the Clifford algebra approach can offer an alternative. One of the reasons why we developed the theory for the fully general, potentially degenerate metric case is that we can get $\rho(w)$ to act as a Euclidean isometry (see Remark D.30). This means we can get translation equivariance without subtracting a reference point. This is what the recent developments in projective geometric algebra (PGA) study, which use the signature $(n, 0, 1)$ (Roelfs & De Keninck, 2021). However, there were some practical technicalities. To give an idea, we have to map the data from the usual vector space V to its dual subspace in a well-defined manner, which is where the Euclidean group acts. However, here the metric is fully degenerate and does not correspond to the metric on the original space V. These difficulties motivated us to leave this for future work.

2. *How are invariant predictions obtained?*
a. There are two approaches to obtaining invariance. First, the scalar subspace $\mathbb{F}=\mathrm{Cl}^{(0)}(V, q)$ is always invariant with respect to $\rho(w)$. So, one can take the grade projection of the NN's output onto the scalar subspace to obtain invariant outputs.
On top of this, the algebra-extended quadratic form $\bar{q}: \mathrm{Cl}(V, q) \to \mathrm{Cl}^{(0)}(V, q)=\mathbb{F}$ is always invariant. So one can concatenate the zero-projection with the quadratic form of the covariant subspaces to get an invariant. One can use typical feed-forward neural networks to project this to an invariant of the dimension the problem requires. **Action taken:** we now clarify this in the invariant tasks. 3. *Clarifications regarding the volume experiments.* a. 1. We define a covariant object as a quantity that transforms in a nontrivial but predictable way under a change of coordinates. 2. The reviewer is right that scalar methods such as the default implementation of VNs produce O(3)-invariant features, and thus VNs are not able to distinguish a tetrahedron from its reflected copy. We add this for clarity. Of course, we do not claim that such methods cannot be adapted to obtain access to these features. However, one of the merits of the Clifford algebra networks is that the network can readily distinguish these cases. 3. We apologize for the lack of clarity regarding classification/regression. In Sec. 5.1, we detail that we regress the signed volume against the spatial positions. The number of training samples is given in Figure 3. The ratio of positive vs. negative volumes is 50/50. The task is thus to predict the volume and not only its orientation. **Action taken:** we adjusted Section 5 to clarify these matters. 4. *Clarify the neural architecture against baselines for Sections 5.1-2. How does the complexity of the method compare against the baselines?* a. We use feed-forward networks based on the layers proposed in Section 3. We compare against baselines with similar numbers of parameters, with source code taken from the original implementations. For 5.2, we took the numbers directly from the source paper and used networks with similar numbers of parameters. For further details, please consider the attached PDF file. 
**Action taken:** we now elaborate better on model and baseline complexities in the experimental section. 5. *Is the training setup the same for all experiments? Were hyperparameters optimized?* a. We did not do excessive hyperparameter tuning for CGENN architectures in experiments 5.1 and 5.2. Rather, we used design principles (number of parameters, layers from baseline architectures). We searched for learning rates for 5.3 and tried a few architectures and learning rate regimes for 5.4. Reported baseline numbers for experiments 5.2, 5.3, and 5.4 are taken from the existing literature. For experiments in 5.1, baselines were optimized for similar parameter counts as CGENNs. We also ran small learning rate searches for them. We are very grateful for the reviewer's feedback. Where applicable, we have made an effort to disclose all the changes made to the manuscript. Given this and our previous discussions, we are hopeful that the reviewer is open to promoting the score. Should there be any questions or concerns, we are always ready for further dialogue. #### References - Villar, Soledad, et al. Scalars are universal: Equivariant machine learning, structured like classical physics. Advances in Neural Information Processing Systems 34 (2021): 28848-28863. - Roelfs, Martin, and Steven De Keninck. Graded symmetry groups: plane and simple. Advances in Applied Clifford Algebras 33.3 (2023): 30. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' comprehensive rebuttal addressing all of my questions and thank them for that! I would further like to clarify the following items. 1. *Translation equivariance* Is the network configuration used in the $n$-body experiment translation-*equivariant*? I.e., do the authors add the subtracted point onto the prediction? I would recommend that the authors include the translation-equivariance clarification in the paper. 
Otherwise, it is not entirely clear how exactly the currently presented method is suitable for constructing *E(n)*-equivariant networks (line 2). 2. *Neural architecture* With reference to Table 1 in the rebuttal PDF and comparing the architectures for O(5)-volume (convex hulls) and O(5)-regression experiments, I wonder what causes the difference in the number of parameters. In both cases, the authors use a 4-layer feedforward network. However, for the (seemingly simpler) task of O(5)-regression where the input is 2 5D points (vs., I presume, larger input for convex-hull volume prediction), the number of parameters is one order of magnitude higher. - Is there a typo or how can the authors explain this otherwise? - Using a feed-forward neural structure in experiments 5.1-5.2, do the authors address the *permutation equi- and invariance* wrt the input points/particles? Or is this requirement relaxed? - Could the authors also clarify this for experiments 5.3-5.4? --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful queries and for giving us the opportunity to further clarify our work. **Translation equivariance**: In the $n$-body experiment, we have indeed outlined the translation equivariance by mentioning, “The input consists of the mean-subtracted particle positions (to achieve translation equivariance) and…”. To make this clearer, we will explicitly mention in the revised version that we add the subtracted point back onto the prediction. We agree that emphasizing this point will provide better clarity on how our method achieves E(n)-equivariance, and we appreciate your recommendation in this regard. **Neural architecture**: The difference in parameter count between the O(5)-volume (convex hulls) and O(5)-regression experiments is primarily due to the different numbers of hidden dimensions used. The latter uses hidden dimensions of 128 (which is arguably overparameterized) compared to the former's 16 dimensions. 
For the regression experiment, we intentionally matched the number of parameters to those of the baselines to ensure a fair comparison. We'll ensure that such details are more explicitly mentioned in the revised manuscript. **Permutation invariance**: In our synthetic experiments, we relax the permutation invariance. Specifically: for the convex hull experiments, our goal was to test our raw feed-forward parameterizations in their most unadorned form against the baselines, among which some were also introduced as feed-forward architectures. **Dealing with permutation invariance in experiments 5.3 and 5.4**: We employ graph neural networks (GNNs) to address the permutation invariance for these experiments. Although our manuscript was limited in detailing this due to space constraints, we briefly touched upon using graph neural networks in 5.3. Delving deeper, these are straightforward message passing networks wherein both the message and update networks utilize our proposed layers. We acknowledge the omission of some of these details, especially regarding experiment 5.4, in our pursuit to fit the space constraints. We will rectify this by adding these details back in the revised version for a clearer understanding. We hope that these explanations successfully addressed your comments. We appreciate the time you spent on this review, which has resulted in an improved revision of the manuscript. If it aligns with your assessment, we'd be grateful if you'd consider promoting our score. As always, we remain at your disposal for any further questions or clarifications.
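The translation-equivariance recipe discussed in this thread (subtract a reference point such as the mean position, apply the O(n)-equivariant network, add the reference point back onto the prediction) can be sketched as follows. This is an illustrative toy, not the authors' architecture: `toy_o3_equivariant` is a stand-in for any O(3)-equivariant map that is not translation-equivariant on its own.

```python
import numpy as np

def toy_o3_equivariant(points):
    # Stand-in for an O(3)-equivariant network that is NOT
    # translation-equivariant: scale each point by its norm.
    norms = np.linalg.norm(points, axis=-1, keepdims=True)
    return points * norms

def translation_equivariant(f, points):
    # Subtract a reference point (the mean), apply the
    # O(n)-equivariant map, then add the reference back.
    ref = points.mean(axis=0, keepdims=True)
    return f(points - ref) + ref

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
t = rng.normal(size=(1, 3))
out = translation_equivariant(toy_o3_equivariant, x)
out_shifted = translation_equivariant(toy_o3_equivariant, x + t)
assert np.allclose(out_shifted, out + t)  # translation equivariance holds
```

Because the mean transforms along with the points, the wrapper is also still O(3)-equivariant, so the composite map is E(3)-equivariant.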
Summary: I would like to admit that I am not familiar with this area of work. I try my best to review the key contributions of this work. My review is more of a clarification than a critique. The theory part of the paper lies mainly in Section 3.1, starting with Eq.(5). The computation part of the paper is given in Section 4. The authors introduce two kinds of layers: linear layers and geometric product layers. Linear layers mix the input channels of the same grade (Eq.(11)). The interaction layer takes the geometric product of the grade-i and grade-j parts of x1 and x2 and projects onto the k-th grade. Several experiments were performed using this neural network. I will discuss the experiments in a later part of this review. Strengths: 1. The paper explores a different approach to deep learning and helps add a new dimension to the field. 2. The authors provided a huge amount of background and foundations in the supplementary materials. This is important because I presume relatively few readers are familiar with Clifford algebra. 3. Although I cannot fully understand the theoretical part of this work, it seems to me the authors try to substantiate their work with theory. Weaknesses: 1. The strength of this paper may also be "its weakness": readers without a background in Clifford algebra may struggle to pick up this area of work. It will be good if the authors follow up with a good tutorial, perhaps a series of online learning materials, and provide a link in this paper. For example, make an arXiv tutorial paper (or online video) and cite those locations in this paper. The supplementary materials of this paper are good, but they are insufficient to bring people outside of this field up to the level needed to use this paper. 2. The output of the neural network is given by Eq.(11) and Eq.(13). I understand that the output is also an element of the Clifford algebra. I struggle to understand how this output is linked to the use cases in the experiments. 
It will be good if the authors help the readers by describing the experiments in detail. Give a step-by-step walkthrough of: 2a: how the data is prepared and in what format; 2b: how the data is fed into the neural network; 2c: how to link the k-th grade of x to the numerical output; 2d: perhaps walk through a simple example, like one using complex numbers. I explain 2c a bit more here. For a complex number c1 = 2+3i, the zeroth grade is 2 (the coefficient of the 1 element). The first grade is 3 (the coefficient of i). If the authors can guide the reader through how to input c1 into the linear layer and explain what the output is, that will help. Similarly, the authors may also give another example, c1=1-2i and c2=4+i, showing how to feed c1 and c2 into Eq.(12) and Eq.(13). Technical Quality: 3 good Clarity: 3 good Questions for Authors: How to prepare data to feed into the neural network? Perhaps the authors can give a simple example of processing complex numbers and/or quaternions. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Readers without a background in abstract algebra or Clifford algebra may find it hard to understand this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the actionable feedback. We address their concerns and questions in the following. 1. *Familiarize readers with the Clifford algebra, perhaps with a tutorial.* a. Following up with a tutorial is a great suggestion! Indeed, we are already working on a blog post series that explains the subject matter in a less demanding way. We realized, after studying related work, that the orthogonal group acting on the Clifford algebra had not been studied in its most general form. Physicists, engineers, and mathematicians usually restricted themselves to, e.g., definite or nondegenerate quadratic forms. In the context of deep learning, we studied this more generally, requiring a more rigorous approach. 2. *How is the data being prepared and processed? Explain with an example.* a. We can give you some intuitions here. 2a: The neural network tensor shapes of the Clifford algebra networks are of the form $B \times C \times 2^n$, where $n = \dim(V)$. If we have scalar-valued data, we can use the first component of that last dimension to embed it, since $\mathrm{Cl}^{(0)}(V, q)=\mathbb{F}$. For vector-valued data we can use $\mathrm{Cl}^{(1)}(V, q) = V$. The other components are initially left at zero. 2b: This tensor is then fed into the neural network, which also computes features in the other Clifford subspaces, and as such, we get densely filled multivectors. 2c: At the end of the network, we can project onto the grade-$k$ subspace to get a $k$-vector. Usually, this would be the grade-$0$ subspace for scalar-valued predictions or grade $1$ for vector-valued predictions. 2d: Regarding your complex numbers example, the tensor shape would be $B \times C \times 2$, where we have 2 for the real and imaginary components. You populate the last dimension using your complex-valued data. We hope this helps! **Action taken:** we now give a bit more intuition about these procedures in the experimental section. 
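The embedding and grade-projection steps 2a-2c above can be made concrete with a minimal sketch. This is a hypothetical illustration (the helper names `embed_complex` and `grade_project` are not from the paper): complex-valued data lives in a $B \times C \times 2$ tensor, and grade projection simply reads off one component of the last dimension.

```python
import numpy as np

# Multivector tensors have shape (B, C, 2**n); for complex numbers
# (n = 1) the last dimension holds [real, imaginary] components.
def embed_complex(z):                      # z: (B, C) complex array
    mv = np.zeros(z.shape + (2,))
    mv[..., 0] = z.real                    # grade-0 (scalar) slot
    mv[..., 1] = z.imag                    # grade-1 slot
    return mv

def grade_project(mv, k):
    # Project onto the grade-k subspace (here k in {0, 1}).
    return mv[..., k]

z = np.array([[2 + 3j, 1 - 2j]])           # B=1, C=2
mv = embed_complex(z)
assert grade_project(mv, 0).tolist() == [[2.0, 1.0]]   # scalar parts
assert grade_project(mv, 1).tolist() == [[3.0, -2.0]]  # i-coefficients
```

For 3D vector data the last dimension would instead have $2^3 = 8$ slots, with the vector written into the three grade-1 components and the rest initialized to zero.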
We thank the reviewer for their valuable insights. We have strived to be as explicit as possible in tackling the remaining issues. If our responses meet expectations, we hope the reviewer maintains their support for our paper. We are ready for any more discussions or questions the reviewer might have. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my queries. I will work through the paper again and really look forward to any new tutorial materials. I understand there is a deadline for the rebuttal and the authors' focus should be on the discussion here. I look forward to the eventual tutorial materials on arXiv after the double-blind review process is over.
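The worked example the reviewer asked for (feeding c1 = 1-2i and c2 = 4+i through the geometric product) can be sketched in the smallest nontrivial algebra, Cl(0,1), where $e_1^2 = -1$ and the geometric product is exactly complex multiplication. This is an editorial sketch, not code from the paper:

```python
import numpy as np

def geometric_product_cl01(a, b):
    # Geometric product in Cl(0,1): basis (1, e1) with e1 * e1 = -1,
    # so multivectors [a0, a1] multiply like complex numbers a0 + a1*i.
    a0, a1 = a
    b0, b1 = b
    return np.array([a0 * b0 - a1 * b1, a0 * b1 + a1 * b0])

c1 = np.array([1.0, -2.0])   # 1 - 2i as a multivector [scalar, e1]
c2 = np.array([4.0, 1.0])    # 4 + i
out = geometric_product_cl01(c1, c2)
assert out.tolist() == [6.0, -7.0]   # (1 - 2i)(4 + i) = 6 - 7i
```

The same pattern generalizes: in higher-dimensional algebras the product of two grade-i and grade-j elements distributes over the basis blades and is then projected back onto the grade-k components, as in Eq.(13).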
Summary: In this submission the authors introduce an equivariant neural network architecture building on the Clifford algebra (defined by a vector space and a quadratic form). They remind us that this algebra admits an orthogonal decomposition over sub-vector spaces of grade $m$, where $m$ is the number of basis elements. This is referred to as the 'multivector grading of the Clifford algebra'. They identify a (Clifford) subgroup and its (adjusted twisted conjugation) action, which not only acts on the original vector space but also on the Clifford algebra whilst satisfying many properties, including multiplicativity w.r.t. the geometric product. They show that equivariance w.r.t. this Clifford group acting on the Clifford algebra implies equivariance w.r.t. the orthogonal group acting on the Clifford algebra. Then they propose several layers, including a linear layer and a geometric product layer, which they show to be equivariant w.r.t. the action of the Clifford group. Then they empirically assess their proposed approach on synthetic tasks with $O(3)$, $O(5)$ or $E(3)$ equivariance and show that it outperforms scalar-based methods but also steerable NNs. Finally, they tackle the task of identifying top quarks from high-energy jets produced in particle collisions at CERN, and show that they are able to perform as well as recent specialised neural networks. Strengths: - This is a nice paper that builds on the recent research avenue of equivariant neural networks based on the Clifford/geometric algebra. - I find particularly appealing the fact that the methodology applies to any dimension, and more generally than $O(n)$ equivariance via the choice of quadratic form $\mathfrak{q}$ as per Section 5.4 (if I understood correctly). - Additionally, it appears that the method is empirically able to fit a variety of functions well and able to generalise. - The paper is pretty well written given the space constraint, although I give some suggestions on improvements below. 
Weaknesses: - Likely due to space constraints, the methodology would deserve more discussion. - For instance, how is the data embedded into the algebra (e.g. for higher-order quantities such as an inertia tensor)? - How is the quadratic form $\mathfrak{q}$ chosen (e.g. is it the Minkowski product for the jet tagging experiment)? - I also believe that the linear and geometric layers could benefit from an illustrative diagram. Perhaps it may be useful to draw a parallel with tensor products and irreducible representations for readers familiar with this language. - What's more, the main paper is currently lacking a discussion on the scalability of the method, and how this compares to Clebsch-Gordan tensor product approaches, although I understand that the space is limited. - Another point which I find slightly frustrating is the lack of an ablation study, as it is generally hard to identify why this method outperforms tensor product / irreps approaches. Is it more parameter efficient? Does it generalise better? Does it yield a better optimisation landscape? Is it computationally faster? Obviously, fully answering these questions would deserve a full paper on its own, yet some elements of an answer would be nice :) Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - line 62: 'our method readily generalizes to orthogonal groups regardless of the dimension'. Doesn't the method proposed in [Cesa et al. 2021] also work for any dimension $n$? - Eq 1: What is the intuition behind the independence of the choice of orthogonal basis? Is it because by combining elements from one basis (via the geometric product) one could obtain the second basis? How come the limiting factor is really the number $m$ of such basis elements? - line 232: Worth saying a bit more about the 'layer-wide normalization' proposed in [70]? - line 235: 'However, we still require activation functions after linear layers'. 
Does this statement come from practical requirements for the method to work well, or from some universal approximation theorem with necessary non-linearity after the linear layer? - Section 5.1: Is the 'signed volume' a pseudo-scalar? - One of the advantages of this method is that it readily works whatever the input space dimension, yet what is the scalability of the method with respect to the dimension of the space? - Section 5.2: What are the reasons for EMLP underperforming? It is theoretically a universal approximator. - Section 5: Do all methods have the same number of parameters? Or the same compute time? Cesa, Gabriele, Leon Lang, and Maurice Weiler. A Program to Build E(n)-Equivariant Steerable CNNs. 2021. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: - 'CGENNs induce a (non-prohibitive) degree of computational overhead'. Where does this overhead come from? What specific operation would benefit from having a custom GPU kernel? More generally it would be really nice to discuss the scalability of the method. In particular vs other 'steerable' methods which rely on parametrised tensor products via Clebsch-Gordan coefficients. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the actionable feedback. We address their concerns and questions in the following. 1. *Data embedding?* a. The data is embedded via an additional multivector dimension in the network tensors (response to reviewer W8CU), whose size depends on the algebra, e.g., 4 or 8 for 2D or 3D algebras. Multivectors, under the geometric product, parameterize bilinear operations similar to tensors. Not all tensors can be expressed as multivectors, but many from physics can. For the inertia tensor example, see Berrondo et al. (2012). **Action taken:** we give more intuition now in the paper. 2. *Choice of quadratic form?* a. In 3D, the signature is $[1, 1, 1]$, i.e., $p=3, q=0, r=0$. For more specialized problems we use different signatures. E.g., for SU(2), we use $[-1, -1]$, i.e., $p=0, q=2, r=0$ (see response to w3C8). In space-time settings, we use $[1, -1, -1, -1]$, i.e., $p=1, q=3, r=0$. **Action taken:** we explain this now in the paper. 3. *Visualization?* a. We added visualizations of the layers to the attached PDF and to the appendix. 4. *Scalability of the method?* a. Let $\dim(V)=n$; then the complexity of a fully connected geometric product layer is roughly $O(c_{out} \cdot c_{in} \cdot 2^{3n})$, where $c_{out}$ and $c_{in}$ are the output and input channels. The scalability is worse than scalarization methods, and similar to Clebsch-Gordan based E3NN, depending on the highest order of irrep that is chosen. 5. *Performance?* a. It is a rather general finding that any polynomial in multivectors is equivariant in any inner-product space. We constructed our layers with this in mind, ensuring that we capture the 0th, 1st, and 2nd-order terms of such a polynomial, leading to flexible parameterizations. In 3D, CGENNs are similar to E3NN-based layers, i.e., existing results of E3NN-based methods (e.g., TFN, SEGNN) should match the performance (see SEGNNs in the $n$-body experiment). 
However, depending on implementation and optimization specifics, numbers can differ. Clear benefits of CGENNs are found in non-Euclidean (higher-dimensional) cases. 6. *Compare to Cesa et al. (2021)?* a. Currently, Cesa et al. support steerable CNNs equivariant to 2D/3D isometries. Other differences: Cesa et al. design equivariant CNNs, where the group acts on the grid (e.g., an image). We operate on the particles directly. Second, Cesa et al. find equivariant bases for linear maps in such CNNs. Our steerable basis is given directly by the input data, and as such we also operate using bilinear layers, which is crucial for expressive GNNs (Finzi et al., 2021). 7. *Intuition behind the independence of the choice of basis?* a. Great question! In general, basis independence goes back to Einstein's covariance principle, stating that one should come to the same conclusions regardless of the chosen frame of reference. More specifically, it means that the grade decomposition is well-defined and not dependent on a change of coordinates. We were unable to find in the literature a proof for this decomposition in the (non-definite, potentially degenerate) general case. As such, we regard it as an original contribution of the work. Secondly, it directly allows for the equivariance of the grade projections! **Action taken**: we explain the basis independence proof more clearly now. 8. *Layer norm.* a. The layer norm is defined as $x \mapsto \frac{x - \mathbb{E}[x]}{\mathbb{E}[\sqrt{|q(x)|}]}$, where the expectations are taken over the channel dimensions. Intuitively, it re-centers and rescales multivectors according to their average norm. 9. *Nonlinearities?* a. We need a sufficiently flexible, nonlinear function to learn expressive representations. We empirically find that including additional nonlinearities after the linear layers results in improved performance. **Action taken**: we mention this in the methodology section now. 10. *Signed volume a pseudoscalar?* a. Yes. 11. 
*Scalability regarding space dimension?* a. The dimensionality of the algebra scales as $2^n$, where $n=\dim(V)$. This looks bad at first sight, but usually the dimensionality of geometric spaces is not so large. Further, one can operate using subalgebras, which is typical in the geometric algebra literature, or one can simply not consider the highest subspaces. 12. *What are the reasons for EMLP underperforming?* a. For O(5) regression, we took datasets and EMLP results directly from Finzi et al. (2021). We went through the EMLP code and noted that the output irreps of each layer are set equal to the output irrep of the whole task. For the O(5) regression task, whose output irrep is one-dimensional, this means that there will be no vector-valued outputs of the bilinear layers. This is potentially less expressive. Further, it depends highly on the specifics of the implementations and parameterizations. Our polynomials form a rather general basis from which we constructed our layers. 13. *Do all methods have the same number of parameters or compute time?* a. We kept the number of parameters similar between models. The compute time, however, can depend on which method one uses. Steerable methods that use bilinear layers are generally slower than scalarization methods. Compared to existing steerable methods, Clifford layers are not fundamentally less efficient. 14. *Computational overhead, scalability.* a. The computational overhead originates from the bilinear geometric product layers. Please also consider our previous responses (4, 13) regarding scalability. #### Refs - Berrondo, M., J. Greenwald, and C. Verhaaren. Unifying the inertia and Riemann curvature tensors through geometric algebra. American Journal of Physics 80.10 (2012): 905-912. - Finzi, Marc, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. ICML, 2021. 
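The multivector layer normalization described in point 8 can be sketched as below. This is a minimal illustration under the assumption of a Euclidean extended quadratic form, i.e. $|q(x)|$ taken as the sum of squared multivector components; the authors' exact implementation may differ.

```python
import numpy as np

def multivector_layernorm(x, eps=1e-6):
    # x: (B, C, 2**n) batch of multivector channels.
    # Re-center over the channel dimension, then rescale by the
    # average multivector norm E[sqrt(|q(x)|)] per batch element.
    x = x - x.mean(axis=1, keepdims=True)
    norms = np.sqrt(np.sum(x ** 2, axis=-1))        # sqrt(|q(x)|), shape (B, C)
    scale = norms.mean(axis=1, keepdims=True)[..., None]
    return x / (scale + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16, 8)) * 10.0              # B=4, C=16, Cl(3): 2**3 = 8 dims
y = multivector_layernorm(x)
# After normalization, the mean multivector norm per batch element is ~1.
mean_norms = np.sqrt((y ** 2).sum(-1)).mean(axis=1)
assert np.allclose(mean_norms, 1.0, atol=1e-3)
```

Note that dividing a whole multivector by a scalar commutes with the orthogonal action, so a norm-based rescaling of this kind does not break equivariance.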
--- Rebuttal Comment 1.1: Title: response Comment: Thanks for taking the time to address my questions and remarks, but also for providing additional visualisations! I acknowledge that it's tricky to strike a good balance between clarity and space constraint, and believe that the submission is doing a pretty good job already.
Summary: The paper introduces a new method for constructing E(n)-equivariant networks called Clifford Group Equivariant Neural Networks (CGENNs), based on an adjusted definition of the Clifford group, a subgroup of the Clifford algebra. The researchers have shown that the group's action forms an orthogonal automorphism that extends beyond the typical vector space to the entire Clifford algebra, respecting its multivector grading and multiplicative structure. This leads to non-equivalent subrepresentations corresponding to the multivector decomposition and allows parameterizing equivariant neural network layers. CGENNs operate directly on a vector basis and can be generalized to any dimension. Incorporating group equivariance in neural networks has proved useful in many areas, including modeling the dynamics of complex physical systems and studying or generating molecules, proteins, and crystals. This paper demonstrates that CGENNs can handle directional information with greater accuracy and nuance than scalarization methods, operating directly in a vector basis rather than alternative basis representations. They can also transform higher-order features carrying vector-valued information, and readily generalize to orthogonal groups of any dimension or metric signature. The authors successfully demonstrated these advantages on various tasks, including a three-dimensional n-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment. Strengths: **Originality** This paper has many strengths across multiple dimensions. First, it tackles the problem of building an equivariant network from a fresh perspective that has not been, as far as I know, used in the literature. Thus the use of the Clifford group is an exciting direction of technological endeavor. 
Certainly, the mathematics behind building the Linear, Geometric Product, and Nonlinearity layers is nontrivial but offers what appears to be greater modeling flexibility. **Quality** It is abundantly clear that immense care has been taken with the writing of this paper. This is substantiated by the large and extensive appendix, which in itself can prove to be a useful reference outside of the main thesis of this paper. Furthermore, it is clear that a computational approach was taken to build the equivariant layers, as I appreciated that the authors acknowledged the naive $(n+1)^3$ interactions needed in their geometric layers, which could be reduced by first applying a linear projection. **Clarity** The authors should be given credit for attempting to make this paper clear and accessible. Unfortunately, it is not accessible to those who are not already familiar with geometric algebras. Although the writing and the flow of information are still very clear if one pretends to know what the Clifford group is. **Significance** In this reviewer's opinion, this work is significant for several reasons. First, it allows for the creation of equivariant networks in a completely different manner and one that appears to be more flexible/expressive. This is due to the Clifford group allowing for non-equivalent subrepresentations. For example, Clifford group equivariant models can extract covariant quantities which cannot be extracted by scalarization methods. Furthermore, the early empirical results suggest that such an approach also leads to empirical benefits and beats current equivariant models, which suggests that this is a ripe direction for continued investigation. Weaknesses: This paper has a few areas for improvement that I highlight below. 1.) I thank the authors for their efforts to make the paper accessible, but the appendix is not a very friendly introduction. 
For instance, in some places, the authors acknowledge that their definitions depart from existing ones in the literature, but in other places, they claim to follow the format of other works (e.g. C.1). This makes it hard to follow and trust what the authors state. 2.) Perhaps what bothers me most is the fact that the motivation for using the Clifford algebra group is lost in all the technical jargon. The importance of being able to use multivectors in ML-specific problems is never really motivated in a convincing way. Even at present, I find it difficult to pinpoint exact non-toy applications where these tools might prove useful, albeit there is no doubt the tools themselves are mathematically interesting. 3.) There are parts of the paper where it becomes unclear which part of the theory is an original contribution of the authors versus something that is already present in the math literature on the Clifford group/algebra. As far as I can tell, the design of the adjusted twisted conjugation is novel and its extension is an original contribution of the authors (and the subsequent ML-specific layers). Can the authors tell me if I missed anything? 4.) The experiments are quite preliminary. Perhaps this is by design, as the authors note that the computational complexity can partially be resolved with better hardware-specific implementations. I would have loved to see the power of these methods on larger-scale equivariant problems. For instance, I am certain the authors are familiar with molecular dynamics simulations at a larger scale. These would already have the desired symmetries and established baselines to compare against. 5.) The authors could improve the method section by providing an empirical investigation of the runtime complexity of their actual experiments in comparison to the baselines. Right now, a reader may intuit that the Clifford group equivariant networks are more computationally expensive, but exactly how much more is not immediately clear. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. One part that is not clear to me is how equivariance is preserved in the geometric layers if you use a linear map to first project down to $l$. This would appear to break equivariance. Can the authors please comment on this and how it doesn't break the desired symmetry? 2. A limitation of the steerable equivariant literature is that it is heavily reliant on the Wigner-D matrices for $O(3)$. It seems the current effort in the Clifford group also applies to the orthogonal and $E(n)$ group. Can this approach be used to model other continuous symmetries e.g. $SU(2)$ or is this not possible? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the actionable feedback. We address their concerns and questions in the following. 1. *Accessibility and introduction of notations.* a. During our literature research, we quickly found out that there are at least three communities (physicists, mathematicians, computer vision engineers) working with Clifford algebra (aka geometric algebra). These are all interested in different properties and, as such, use different conventions and notations that are not compatible with each other. For example, they use different group actions (e.g., equation 348), restrict themselves to homogeneous multivectors, or restrict the quadratic form to be definite or nondegenerate. One of the contributions of this paper is the study of the existing literature, unifying and generalizing it. We study the action of the orthogonal group on the whole Clifford algebra within the context of equivariant neural networks. We derived and re-derived everything we needed, with generality and group representations for NNs in mind. 2. *Motivation for operating on multivector representations.* a. There are multiple reasons for using multivector representations. First, our theory allows for networks that operate in any inner product space, computing geometrically meaningful interactions (through geometric products). Second, if one takes a different route and goes for more general tensor product representations, one needs either an irreps decomposition, which is hard to attain for an arbitrary quadratic space, or to use tensor product representations directly, which grow intractably.
We argue that Clifford algebra layers balance the two approaches: they allow for a product structure (the geometric product rather than the tensor product) which, due to the algebra's fundamental relations, always remains finite-dimensional, and they provide a way to decompose the whole Clifford algebra into smaller (but not necessarily irreducible) subrepresentations, which allows us to reduce computational complexity (similar to the use of smaller irreps) without violating the equivariance properties. Third, there exist several physics problems that contain objects that transform in a non-standard way. For example, Maxwell's equations treat magnetic fields as pseudovectors, which can be represented as bivectors when using Clifford algebras / Clifford equivariant networks. **Action taken:** We now elaborate on the motivation for multivector representations in more detail. 3. *Originality of theory.* a. While literature on the Clifford algebra already exists, with different notions of the Clifford group carrying different actions, none of these sources could simply be cited for our purposes, because they all use their own notations/conventions and have different corner cases, aspects, or applications in mind. So a major and original contribution was to carefully construct/adjust definitions to obtain all our desired results (like multiplicativity and orthogonality) and to provide proofs even for the most general (non-definite, degenerate) cases. **Action taken**: we clarify this now in the main paper. 4. *Preliminary experiments.* a. The reason we chose not to go with molecular dynamics simulations is that the current state-of-the-art methods are highly finetuned and engineered to squeeze out all the features from the data. We felt that the adaptation and inclusion of these techniques in the Clifford layers was out of scope for the current work, and probably deserves a paper of its own.
Finally, we would argue that the top-tagging experiment is at a reasonable scale and much less of a toy setting, with 1.2M training samples and fully connected graphs with over 200 particles. Note that architectures such as LorentzNet were specifically designed for this task in previous works. 5. *Improve the method section by providing runtime complexity.* a. We are hesitant to make claims about empirical runtime complexity at the current moment. The geometric product, as a bilinear operation, can be implemented as efficiently as existing methods that use bilinear (tensor) operations. Even stronger, such an implementation would then work for any inner product space. However, comparing the current implementations to existing, highly optimized code-bases would be unfair and cast an undeserved shadow on equivariant Clifford layers. For an approximate result, please consider our response to reviewer jDZi regarding runtime complexity. 6. *Equivariance of projection.* a. The equivariance of the projection follows directly from the equivariance of the linear layer. I.e., $$ \rho(w)\left(T^{\mathrm{lin}}_{\phi_{c_{out}}}(x_1, \dots, x_\ell)^{(k)}\right) = \rho(w)\left(\sum_{c_{in}=1}^\ell \phi_{c_{out} c_{in} k} \, x_{c_{in}}^{(k)}\right) = \sum_{c_{in}=1}^\ell \phi_{c_{out} c_{in} k} \, \rho(w)\left(x_{c_{in}}\right)^{(k)}, $$ which follows from the fact that $\rho(w)$ is linear, commutes with scalar multiplication, and respects grade projections. 7. *Use the Clifford group for other continuous symmetries like $SU(2)$.* a. Yes, and this is very much open for exploration! For your $SU(2)$ example, see e.g. Wilson (2020). The specifics of (Clifford) group representations are elusive for now, but we believe that the presented work gives a good starting point for such scientific endeavors.
As such, further exploration of Clifford networks operating in (relativistic) quantum mechanics or quantum information theory is a very interesting future direction. We are grateful for the reviewer's feedback. We have made an effort to disclose all the changes made to the manuscript as clearly as possible. Given this and our previous discussions, we are hopeful that the reviewer continues to support the paper. Should there be any questions or concerns, we are always ready for further dialogue. #### References - Wilson, Robert A. "Subgroups of Clifford algebras." arXiv preprint arXiv:2011.05171 (2020). --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: I thank the authors for their detailed rebuttal. I am mostly satisfied with their answers. I still contend that the Appendix is not as easy an introduction to the subject as I might care for, but given the scope and divide between the different communities, it is still a welcome first step. As such, I'm upgrading my confidence to a 4 and maintaining my original score (7). --- Reply to Comment 1.1.1: Comment: We are grateful for your acknowledgment of the improvements we've made in our rebuttal and are happy to hear that our efforts have enhanced your confidence in our work. Regarding the appendix, we understand and respect your perspective. It's always a balancing act to cater to various communities with varying familiarity with the subject. We'll continue to strive for clarity and accessibility in future iterations. Your feedback has been instrumental in this process.
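To make the geometric product from points 5 and 6 concrete, here is a minimal, self-contained Cl(2,0) sketch (our own illustration, not the authors' code): multivectors are length-4 coefficient arrays over the basis (1, e1, e2, e12), multiplying two grade-1 vectors populates the scalar grade (inner product) and the bivector grade (wedge), and both of those grades are invariant when the input vectors are rotated.

```python
import numpy as np

# Multivectors in Cl(2,0) as coefficient arrays over the basis (1, e1, e2, e12),
# with e1*e1 = e2*e2 = 1 and e1*e2 = e12 = -e2*e1.
def gp(a, b):
    """Geometric product of two multivectors."""
    s1, x1, y1, b1 = a
    s2, x2, y2, b2 = b
    return np.array([
        s1*s2 + x1*x2 + y1*y2 - b1*b2,   # scalar (grade 0)
        s1*x2 + x1*s2 - y1*b2 + b1*y2,   # e1     (grade 1)
        s1*y2 + y1*s2 + x1*b2 - b1*x2,   # e2     (grade 1)
        s1*b2 + b1*s2 + x1*y2 - y1*x2,   # e12    (grade 2)
    ])

def vec(v):
    """Embed a point of R^2 as a grade-1 multivector."""
    return np.array([0.0, v[0], v[1], 0.0])

u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
prod = gp(vec(u), vec(v))
# The product of two vectors populates the scalar and bivector grades:
assert np.isclose(prod[0], u @ v)                   # inner product
assert np.isclose(prod[3], u[0]*v[1] - u[1]*v[0])   # wedge (signed area)

# Rotating the inputs leaves grade 0 and grade 2 untouched (both are
# invariant under SO(2)); the grade-1 parts of a vector-vector product are
# zero anyway, so the whole product is unchanged.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
prod_rot = gp(vec(R @ u), vec(R @ v))
assert np.allclose(prod_rot, prod)
```

The bilinearity of `gp` is what point 5 above refers to: it can in principle be implemented as efficiently as any other bilinear (tensor) operation.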
Rebuttal 1: Rebuttal: We extend our gratitude to the reviewers for their efforts in evaluating our paper. The overall positive response we received has been highly encouraging. The feedback has undoubtedly improved the quality of our work, and we sincerely appreciate the insightful comments and suggestions that were provided. We endeavored to consider the feedback as comprehensively as possible, leading to a revision process that significantly honed the paper. We have strived to make all the necessary changes, balancing them with the given space constraints. We have addressed every point in our responses and indicated where the paper was adapted based on the feedback. We would like to reiterate that our work on Clifford Group Equivariant Neural Networks (CGENNs) sets a distinctive path from the usual geometric deep learning approaches. Unlike typical methods, CGENNs operate directly on an input vector basis rather than, e.g., the popular spherical harmonics basis, are more expressive than scalar methods, and apply to inner product spaces of any dimension. In conclusion, we thank the reviewers for recognizing the potential and novelty of our work. It is encouraging to know that our paper is viewed as a significant contribution that provides a direction in the field of geometric deep learning. Pdf: /pdf/0f8523e5ea3cb425e32fed07ba6d60de9b0e6215.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper develops a new kind of neural network model based on the Clifford group, which makes the neural network E(n)-equivariant. The model is tested on a number of datasets, showing that it is capable of recovering the desired structure from the data. Strengths: This seems like an interesting and important direction. The results are convincing, in the sense that this approach "works." The method is backed up by nice theory. Weaknesses: Although the intro and Figure 1 at the beginning are helpful, I think the rest of the paper is hard to follow. There are some theoretical results, but then when it comes to the methodology, Section 4, I am left wondering what is actually being computed. For example, what is involved in computing Equation 11? It says the $x_i$ are elements of $\mathrm{Cl}(V, q)$; OK, but what is the input to this system? The same goes for Eqs. 12 and 13. And what is the objective function? For the first experiment on signed volumes, this seems like an interesting task, but it is not clear what the form of the input data is: a point cloud of $(x, y, z)$ points? What is the input to the Clifford network? I am having a hard time picturing concretely what is going on here. More specifics would be helpful. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How would you compare to the following paper: https://proceedings.neurips.cc/paper/2021/file/f1b0775946bc0329b35b823b86eeb5f5-Paper.pdf? This paper develops a universal family of neural networks for O(d)-equivariant functions. This seems to stand in tension with the claims in the Clifford algebra paper that scalar feature-based methods cannot capture certain kinds of information. (I'm sure that some more thought could explain what's going on, but I feel like that's work the authors of the submission should do, rather than the reader.) Relatedly, I don't think I saw any statement about whether or not the Clifford algebra approach is universal (maybe I missed it?).
Also, the authors end Section 3 by saying "equivariance w.r.t. the Clifford group [...] implies equivariance w.r.t. the orthogonal group," but this statement, by itself, still leaves a question open: Do there exist functions equivariant under O(d) but not the "Clifford group," which their method then couldn't learn? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging feedback. We address their concerns and questions in the following. 1. *What is being computed in, e.g., equation 11?* a. As input data, we consider points in a vector space $V$, for example, $\mathbb{R}^3$. These can be locations, velocities, forces, etc. Since $\mathrm{Cl}^{(1)}(V, q)$ is isomorphic to the vector space $V$ on which we construct it, we can treat the data as objects in the Clifford algebra. As such, these will be multivectors where only the vector part is nonzero. However, as we propagate through the network, the other multivector elements will get populated. Hence, equation 11 at first computes linear combinations of vector coefficients expressed in a fixed basis; later in the network, these will be general multivectors. Equations 12 and 13 compute geometric products. **Action taken**: we have added a few sentences clarifying these matters. 2. *What is the objective function?* a. The objective functions in all experiments were mean-squared errors except for the top-tagging experiment. This is a binary classification task and uses the binary cross-entropy loss. **Action taken**: we have updated the text to clarify these matters. 3. *In the volume experiments, what are the inputs to the Clifford network?* a. Indeed, the input is a point cloud (tetrahedron). The positions are expressed as coefficients in a fixed basis, which get processed by the neural network. **Action taken**: we have updated the text to clarify these matters. 4. *Compare to Villar et al., 2021.* a. Thanks for bringing up this paper! The crucial difference is that Villar et al. consider representations of the orthogonal group $O(n)$ acting on $(\mathbb{R}^{n})^k$ and maps from $(\mathbb{R}^{n})^k \to \mathbb{R}^n$, or, more in our notation, $V^k \to V$. This is a subset of the spaces that the orthogonal group can act on.
There are also maps from other representations of O(n), e.g., tensor product representations, Clifford algebra representations, (irreducible) sub-representations, etc. The signed volume experiment considers a map into the space of scalars that is not invariant to reflections, which is an example of a map not captured by the theory of Villar et al. **Action taken**: We have added this paper to the literature list and discuss our findings in comparison with these results. 5. *The universality of the Clifford approach. Do there exist functions equivariant under O(d) but not the "Clifford group"?* a. We show in the paper that there are maps that Clifford layers can represent that scalarization methods cannot. Similarly, we can certainly also construct maps that Clifford methods cannot represent. In our paper, we only consider one fixed action for the Clifford algebra (and its subvector spaces), namely the adjusted twisted conjugation. In this setting, $O(d)$-equivariance is the same as Clifford group equivariance. One can consider maps between spaces outside of the Clifford algebra that the Clifford group does not cover. However, the Clifford algebra offers a closed set of nice, geometrically inspired representations that are more expressive than scalars but still tractable. We thank the reviewer again for the valuable feedback. Where applicable, we tried to be as transparent as possible about the adjustments made to the manuscript. Considering these and the above, we hope that the reviewer is open to raising the score. If any questions or comments arise, we are happy to discuss further.
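The reflection argument in point 4 can be verified numerically. Below is an illustrative sketch (not the authors' experiment code): the signed volume of a tetrahedron flips sign under a reflection, while all pairwise distances, i.e., the features available to scalarization methods, are unchanged, so no function of distances alone can predict it.

```python
import numpy as np

def signed_volume(p0, p1, p2, p3):
    # Signed volume of a tetrahedron: determinant of the three edge vectors / 6.
    return np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0])) / 6.0

rng = np.random.default_rng(0)
pts = rng.standard_normal((4, 3))          # a random tetrahedron in R^3
v = signed_volume(*pts)

# Reflect through the yz-plane: an orthogonal transformation with det = -1.
R = np.diag([-1.0, 1.0, 1.0])
pts_ref = pts @ R.T

# All pairwise distances (the inputs to scalarization methods) are unchanged...
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
d_ref = np.linalg.norm(pts_ref[:, None] - pts_ref[None, :], axis=-1)
assert np.allclose(d, d_ref)

# ...but the signed volume flips sign under the reflection.
assert np.isclose(signed_volume(*pts_ref), -v)
```

This is exactly the kind of pseudo-scalar target that, per the rebuttal, the theory of Villar et al. (maps $V^k \to V$ built from scalars) does not cover.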
Learning with Explanation Constraints
Accept (poster)
Summary: This paper tries to answer an important and fundamental question: why interpretability constraints on model parameters can improve model performance. The authors developed an analytic framework. They first define a constraint as a functional on a model function and a data point; a model satisfying the constraint is hence represented by the functional value lying in a given set. Following this setup, they defined a surrogate loss based on the constraint violation and showed a generalization bound on this constraint-violation loss. They showed that the constraints help shrink the hypothesis class. They also showed that with gradient constraints, linear and 2-layer NN models could benefit by requiring less labeled data. They also discussed optimization algorithms, through projections, that could make the model satisfy the constraints. Strengths: This paper tries to answer an important and fundamental question: why interpretability constraints on model parameters can improve model performance. The presentation of this paper is clear, the techniques are sound, and the contributions are good. Weaknesses: The paper lacks a discussion on the stringency of the conditions in its theorems, which makes it more difficult to judge the significance of the contribution. The paper focuses on gradient constraints and lacks a discussion and related work on model shape constraints, such as monotonicity and convexity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the comments in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: This paper doesn't have potential negative societal impact.
The authors are encouraged to discuss the plausibility of the conditions of their theorems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper lacks a discussion on the stringency of their conditions in theorems, which makes it more difficult to judge the significance of the contribution. The paper focuses on gradient constraints and lacks a discussion and related work on model shape constraints, such as monotonicity and convexity. **Our response:** Our result presents the reduction in the Rademacher complexity of an EPAC model for a class of linear models and two-layer neural networks (Theorems 3.1 and 3.2). We have already provided a short discussion of the assumption on the gradient constraints for neural networks between lines 205-210. Our results apply to a general data distribution (see Theorem H.1 for linear models) and to distributions whose samples have a bounded norm in expectation (Theorem 3.2 for neural networks). This assumption of a bounded norm (in expectation) of data samples for NNs is standard in practice and is not restrictive for particular tasks; for example, a Gaussian distribution satisfies this assumption. For our proof for neural networks, we also assume that the support is large enough, or more specifically, that each partition of the input space (partitions being defined by values of the hidden layer of the neural network) has positive probability mass. This is not a strong assumption, and it holds for any distribution whose support is all of $\mathbb{R}^d$, for example, Gaussian distributions. We will add these clarifications to our revision. We focus on gradient constraints since our work is motivated by a line of work in the field of explainability where such explanation constraints are used to train models [1,2], and input gradients are one of the canonical classes of explanations. We will discuss the other notions of constraints that you have mentioned in the camera-ready version. [1] Ross, Andrew Slavin, Michael C. Hughes, and Finale Doshi-Velez.
"Right for the right reasons: training differentiable models by constraining their explanations." Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017. [2] Rieger, Laura, et al. "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge." International conference on machine learning. PMLR, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my questions
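As a concrete sketch of the gradient-constraint setup for linear models discussed above (our own hedged illustration; the data, mask, and penalty weight are invented for the example, not taken from the paper): for a linear model the input gradient is just the weight vector, so an explanation constraint of the form "ignore feature j" becomes a penalty on that weight, added to the empirical risk in Lagrangian fashion.

```python
import numpy as np

# Illustrative setup: the last feature is irrelevant, and the explanation
# constraint says the model's input gradient w.r.t. it should be ~0.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam = 10.0                           # Lagrangian penalty weight (hyperparameter)
mask = np.array([0, 0, 0, 0, 1.0])   # constraint applies only to feature 5

# Gradient descent on: empirical risk + lam * ||mask * grad_x f||^2.
# For a linear model f(x) = w @ x, the input gradient is just w.
w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    grad_risk = 2 * X.T @ (X @ w - y) / n   # gradient of the MSE risk
    grad_pen = 2 * lam * mask * w           # gradient of the explanation penalty
    w -= lr * (grad_risk + grad_pen)

assert abs(w[-1]) < 0.05                    # constraint approximately satisfied
assert np.allclose(w[:4], w_true[:4], atol=0.1)
```

The same recipe with an input-gradient penalty on a neural network is what methods like Ross et al. [1] implement; there the penalty requires differentiating through the model's input gradients.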
Summary: Traditionally, learning from labelled examples has been well studied. However, with the advent of large models (and few-shot learning) and the risk of misspecification (spurious correlations), the need for augmenting supervision, such as with example explanations, is gaining wider interest. This paper deals with the timely topic of analysing when and how learning from explanations can improve sample efficiency. Although learning from explanations is also promising in reducing systematic biases, this paper focuses on sample efficiency. They also propose an interesting learning algorithm to accommodate explanation constraints. I believe the topic and analysis are timely, but I have a few concerns regarding evaluation and analysis. Strengths: - Important practical problem. - Well written and easy to follow. - The proposed variational method is smart; it can exploit both labelled and unlabelled data along with explanations. Weaknesses: Disclaimer: I have not checked the proofs. But the statements seem correct. Consider the following also as questions. *How are explanations different?* Fundamentally, I did not understand why explanation constraints are given a status different from example labels. Can we not simply consider them to be additional labels and minimize the joint risk, such as in the Lagrangian method or Rieger et al., and handle the error bound using the classical analysis as shown between L161 and L162? Especially since the explanations are imposed only approximately through objective functions. *Clarification regarding Theorem 2.2.* From L 156: "... help with our learning by shrinking the hypothesis class $H$ to $H_{\phi,\tau+\epsilon_k}$". Although this sounds intuitively correct, from the definition of $H$, isn't $H_{\phi, \tau+\epsilon_k}$ larger than $H$? Moreover, I do not see how constraints help improve the sample complexity since both $R_n(H_{\phi,\tau+\epsilon_k})$ and $err_D(h^*_{\tau-\epsilon_k})$ increase for a non-zero value of $\epsilon_k$?
*Motivation for variational method.* The need for the variational method is motivated by avoiding the evaluation of gradients of a potentially complicated surrogate loss $\phi$; however, even the variational method needs to evaluate these when optimizing for $f$ in the first step. *Evaluation concern*. The Lagrangian method may have performed poorly because it is not exposed to unlabelled data, unlike the variational method. What happens if we add an additional term to the Lagrangian objective on unlabelled data that is similar to self-training? Otherwise, I do not see why the Lagrangian may perform poorly compared with the variational approach, at least when we can compute gradients of gradients well (which is not much of a problem these days). *Real world performance gains are unimpressive*. (Section 5.2 and Figure 5). I cannot clearly see the observation noted in L312-L313: "We observe that our variational objective achieves better performance than all other baseline approaches, across varying amounts of labeled data." The weak labellers and the task should be better defined. How are the gradients (of $\phi$) computed (for, say, the Lagrangian method) when weak learners are used for imposing constraints? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. How are explanations different? Fundamentally, I did not understand why explanation constraints are given a status different from example labels. Can we not simply consider them to be additional labels and minimize the joint risk such as in the Lagrangian method or Rieger et al. and handle the error bound using the classical analysis as shown between L161 and L162? Especially since the explanations are imposed only approximately through objective functions. **Our response:** While you can do something like Rieger et al., this does not fit into a standard learning-theoretic framework, which analyzes learning when we only have class-labeled examples. To theoretically analyze this setup, we need a learning-theoretic framework that handles the classification loss and the explanation loss separately. Explanation constraints here can encode information that isn't given by standard class labels for our data, e.g., ignoring the background. Since the explanations and labels are different, the classification loss and the explanation loss are different objects. >2. Clarification regarding Theorem 2.2. From L 156: "... help with our learning by shrinking the hypothesis class $H$ to $H_{\phi,\tau + \epsilon_k}$". Although this sounds intuitively correct, from the definition of $H$, isn't $H_{\phi,\tau + \epsilon_k}$ larger than $H$? Moreover, I do not see how constraints help improve the sample complexity since both $R_n(H_{\phi,\tau + \epsilon_k})$ and $err_D(h^*_{\tau - \varepsilon_k})$ increase for a non-zero value of $\varepsilon_k$? **Our response:** No, $H_{\phi, \tau + \epsilon}$ is a subset of $H$ and therefore is smaller than $H$. We recall the definition of $H_{\phi, \tau + \epsilon_k}$: any function in $H$ whose expected explanation loss is at most $\tau + \epsilon_k$. In fact, we can think of $H$ as $H_{\phi, \infty}$.
Consequently, the complexity term $R_n(H_{\phi, \tau + \epsilon_k})$ is smaller than $R_n(H)$, and we give bounds on how much smaller it is for 2-layer NNs and linear models in Theorems 3.1 and 3.2. >3. Motivation for variational method. The need for the variational method is motivated by avoiding the evaluation of gradients of a potentially complicated surrogate loss $\phi$; however, even the variational method needs to evaluate these when optimizing for $f$ in the first step. **Our response:** This is an astute observation; the underlying motivation for the variational method is to potentially achieve more efficient solutions to complicated constrained optimization problems. When compared to the standard Lagrangian penalized method, as is done in [1], the variational objective simplifies the Lagrangian objective even further into a sum of two terms: one is the empirical risk, and the other is a projection error onto the set of hypotheses that satisfy a bound on the explanation surrogate risk. A crucial advantage of such a decoupling is that we can reduce this to two steps (these are steps 1 and 2 in Section 4.1): a projection step that projects the current model onto the set of those with bounded explanation risk, and a synthesis step where we use the training labels as well as pseudo-labels from the projected model. In particular, we can solve the projection step (step 1 in Section 4.1) with a **simpler** class of models for which the explanation surrogate gradients are more tractable, thus resolving a critical caveat of the Lagrangian objective. We could also use different learning rates and tolerances for the projection step, again with the aim of making the optimization problem more tractable. > 4. Evaluation concern. The Lagrangian method may have performed poorly because it is not exposed to unlabelled data, unlike the variational method.
What happens if we add an additional term to the Lagrangian objective on unlabelled data that is similar to self-training? Otherwise, I do not see why the Lagrangian may perform poorly compared with the variational approach, at least when we can compute gradients of gradients well (which is not much of a problem these days). **Our response:** We already have this comparison in Appendix Section L (additional baselines)! We also observe that a Lagrangian with a single round of self-training afterwards is worse at satisfying constraints on new test data, which in turn leads to worse performance in limited labeled data settings. This supports our method's use of multiple rounds of incorporating these explanation constraints. >5. Real world performance gains are unimpressive. (Section 5.2 and Figure 5). I cannot clearly see the observation noted in L312-L313: "We observe that our variational objective achieves better performance than all other baseline approaches, across varying amounts of labeled data." The weak labellers and the task should be better defined. How are the gradients (of $\phi$) computed (for, say, the Lagrangian method) when weak learners are used for imposing constraints? **Our response:** We will make this claim a bit less strong, but we still observe the following: "Our variational objective achieves better performance than all other baselines on the majority of settings defined by the number of labeled data." The procedure for generating gradients is taken from prior work [4]. The high-level idea is that we can consider random inputs to these weak learners and differentiate with respect to the parameters of this distribution to get a proxy for the gradient of these weak learners. In the case of weak learners that are hand-engineered lookup functions or regular expressions for language tasks, this corresponds to extracting feature indices for particular words or characters in the regular expression pattern. [4] D. Sam and J. Z. Kolter.
Losses over labels: Weakly supervised learning via direct loss construction. https://arxiv.org/abs/2212.06921 --- Rebuttal Comment 1.1: Title: Further clarification Comment: Thanks for the response; some of my concerns are now resolved. Points 2 and 4 are now resolved, thanks. I still have several other concerns. _Theoretical analysis_: I am not yet convinced why explanation constraints are special. This issue was also raised by Reviewer 3fR6, but not yet resolved. Why not treat explanation constraints like any other constraints? It is not hard to see that constraints provide additional supervision, which would improve label efficiency. On the other extreme, supervision on explanations is not that different from label supervision. To elaborate on this, if the underlying hypothesis class is drawn from a GP prior, then observations on function gradients are readily accommodated just as observations on function values [1]. More crisply, what is the significance of the theoretical results in relation to explanations, beyond just saying that additional supervision through constraints improves label efficiency? _Experiments_ Q: Could you please clarify how $\lambda$ is obtained in the Augmented Lagrangian objective? Was $\lambda$ fixed? Was it obtained through alternate max-min steps typical of Lagrangian optimization? Q: Variational vs Lagrangian. The advantage of variational over Lagrangian is argued to be only computational. Figure 4 needs to be justified as to why the variational method satisfies explanation constraints better than the Lagrangian. A few other issues: The datasets are non-standard. Rieger et al. and Ross et al. contain many other standard datasets like Decoy-MNIST or the ISIC dataset. But the datasets used in this paper have not been used before for this problem, to the best of my knowledge. The reason for choosing not to evaluate on standard datasets needs to be justified.
Moreover, evaluation needs to be more thorough to demonstrate generality to new explanations (like the ones used in Rieger et al.), which is also a concern shared by Reviewer o4F1. Moreover, I agree with Reviewer o4F1 that the experiments and analysis are almost disjoint. Although I agree that the experiments corroborate the label-efficiency claim from theory, the connection between analysis and experiments is still very flimsy. For instance, the theory did not explain why the variational method is better than the Lagrangian, why the Lagrangian failed to satisfy explanation constraints as shown in Fig. 4 (right), or the trend in the rate of reduction of loss with the number of training examples. I am updating my score accordingly. [1] Hennig, P., Osborne, M. A., and Kersting, H. P. Probabilistic Numerics: Computation as Machine Learning. Cambridge University Press, 2022. doi: 10.1017/9781316681411. (See chapter 4) --- Reply to Comment 1.1.1: Comment: We are glad to hear that points 2 and 4 are resolved! Regarding your other concerns: > I am not yet convinced why explanation constraints are special. This issue was also raised by Reviewer 3fR6, but not yet resolved. Why not treat explanation constraints like any other constraints? It is not hard to see that constraints provide additional supervision, which would improve label efficiency. On the other extreme, supervision on explanations is not that different from label supervision. To elaborate on this, if the underlying hypothesis class is drawn from a GP prior, then observations on function gradients are readily accommodated just as observations on function values [1]. More crisply, what is the significance of the theoretical results in relation to explanations, beyond just saying that additional supervision through constraints improves label efficiency? Sure, hopefully this response will help clarify both what we term "explanation constraints" and what is novel about our contributions.
We define “explanation constraints” as stochastic constraints on our model that “depend” on the data points. This differs from the deterministic constraints that have been considered in the existing literature, such as L2 regularization. The explanation constraints are closely related to prior works on semi-supervised learning [2], which consider notions of margin or smoothness that are also data-dependent. In contrast, our paper focuses on the more recent notion of explanations from explainability research. We have already provided this discussion in Appendix A. To reason about stochastic constraints, we use tools from statistical learning theory to work with the constraints in expectation. This idea follows from prior work [2] that we have also cited in the proof sketch of Theorem 2.2. We believe that we have sufficient theoretical contributions in our paper. First, new definitions of explanation constraints and the surrogate loss provide machinery to formalize how to reason about the random explanation constraints. Given these definitions, we discuss what kinds of explanation constraints are learnable (Appendix C) and connect the literature on learning with explanations to the classical work on semi-supervised learning (Thm 2.2). Another contribution of our work is analyzing the special case of explanation constraints on the gradient of the model. We provide novel bounds for the reduction in Rademacher complexity for both linear models and two-layer neural networks with these constraints, each of which requires new proof techniques. Framing these definitions and providing new bounds in this different setting is, we believe, a novel contribution. [2] Balcan, Maria-Florina, and Avrim Blum. "A discriminative model for semi-supervised learning." Journal of the ACM (JACM) 57.3 (2010): 1-46. 
> To elaborate on this, if the underlying hypothesis class is drawn from a GP prior, then observations on function gradient are readily accommodated just as observations on function values [1]. In some sense, labeled data can be thought of as a special case of an explanation constraint. However, to analyze this from a learning theoretic perspective, we must distinguish between the two to provide generalization bounds. You are completely right that they can both be handled (especially in practice), which is what is done in the Lagrangian-regularized methods. This distinction needs to be made from a theoretical perspective for our analysis. > Q: Could you please clarify how $\lambda$ is obtained in the Augmented Lagrangian objective? Was $\lambda$ fixed? Was it obtained through alternate max-min steps typical of Lagrangian optimization? Lambda is a hyperparameter selected on a held-out validation set based on the highest validation accuracy. > Q: Variational vs Lagrangian. The advantage of variational over Lagrangian is argued to be only computational. Figure 4 needs to be justified as to why Variational is satisfying explanation constraints better than Lagrangian. The advantage of our variational approach over the Lagrangian method is not only computational, and we will clarify this in our revision. Another issue with the Lagrangian approach in our setting is that it does not leverage additional unlabeled data, which is helpful for the downstream task. However, we also argue that adding unlabeled data isn’t the only benefit of our approach. We introduce another baseline (Lagrangian + self-training) in Appendix L to determine whether this is the case, and our experiments show that simply using self-training on top of a Lagrangian-regularized model does not maintain the ability of the model to satisfy explanation constraints. 
Therefore, these experiments support the multiple iterative rounds of projections from our variational approach onto the class of models that satisfy explanation constraints.
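As a concrete illustration of the objectives discussed in this thread, the following is a minimal sketch (ours, not the paper's code) of the Lagrangian-regularized ERM for the simplest case of a linear model, where the input gradient of f(x) = w·x is the weight vector w itself, so an "ignore the masked features" explanation constraint reduces to penalizing the masked weights. The function names, the fixed lambda, and the toy optimizer are illustrative assumptions:

```python
import numpy as np

def lagrangian_objective(w, X, y, mask, lam):
    """Empirical risk plus lam * explanation surrogate loss.

    For a linear model f(x) = w @ x, the gradient of f with respect to
    the input is w itself, so the 'L2 norm of the input gradients along
    the masked features' surrogate is simply ||w * mask||^2.
    """
    residual = X @ w - y
    erm = np.mean(residual ** 2)            # standard empirical risk
    surrogate = np.sum((w * mask) ** 2)     # explanation surrogate loss
    return erm + lam * surrogate

def fit_lagrangian(X, y, mask, lam, lr=0.1, steps=500):
    """Plain gradient descent on the penalized objective (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad_erm = 2 * X.T @ (X @ w - y) / len(y)
        grad_pen = 2 * lam * w * mask       # gradient of the penalty term
        w -= lr * (grad_erm + grad_pen)
    return w
```

In practice lambda is a hyperparameter selected on a held-out validation set; with a mask on spurious features, the penalty drives the corresponding weights toward zero even when those features correlate with the label in a small labeled sample.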
Summary: In this work, the authors theoretically investigate learning with constraints on the gradient of the model. Like other works in a similar vein, this paper focuses on gradient-based explanations and on neural networks. Unlike related works, however, this paper formulates explanation constraints as being over the data distribution rather than locally to a single input. In this setting, the authors' key contributions come from their focus on a learning theoretic analysis of models that satisfy their constraints (called EPAC models), which I find to be a significant contribution. This paper is timely given the current focus on trustworthy AI and an emphasis on explainable AI in proposed legislation. I feel the analysis is simplistic in some places. Additionally, it is unused in the experimental section, and a few key prior works are not mentioned. I do believe this work provides a solid learning theoretic foundation for the budding sub-field of machine learning from explanations (MLX) and for works studying different kinds of learning constraints (e.g., adversarial certification). I will note that though I have given the paper a borderline score, I will happily update my score to an accept if the authors can substantially address my comments during the rebuttal period. Strengths: The paper tackles a considerably interesting problem that will certainly have impact on future works and can serve as an important basis for understanding a budding sub-field of machine learning research. The formulations all appear sound and correct to the best of my knowledge. The paper is very easy to follow as it is well written. I raise a few minor points about the writing below, but in general I appreciate the high quality of the paper's presentation. Weaknesses: The main drawback of the paper is that the theory and experiments are almost wholly disjointed. 
The first portion of the paper focuses specifically on theoretical guarantees using standard learning theoretic tools; however, I do not see where any of these tools are experimentally evaluated, even in the toy experiments. Moreover, machine learning from explanations (MLX) papers and even robust explanation constraint papers focus on datasets and models that are considerably more sophisticated than what is proposed in this paper. * The EPAC-ERM objective is exactly the same as what is proposed in prior machine learning from explanations (MLX) works, but no discussion is given. * For completeness the work should cite, and at least discuss if not experimentally compare to, papers [1], [2], and [3]. There are several scientifically vague sentences which I find inappropriate without further discussion or formalization. Three in particular come to mind: Another part of the paper I find lacking is the existence of several vague scientific statements that are not backed up by formal argument or good intuition. I give three examples and my issues below: “We argue that in many cases, n labeled data are the most expensive to annotate. The k data points with explanations also have non-trivial cost; they require an expert to provide the annotated explanation or provide a surrogate loss.” - Page 6. It seems that, for images at least, eliciting explanation annotations from humans is considerably more costly. Moreover, the authors claim that they “argue” that, in the case of having a loss function, the k data points with explanations would be less costly, but I see no such argument. It is not obvious to me that eliciting a high-quality loss function $\phi$ would be any less expensive than eliciting annotations themselves. Happy to hear an argument to the contrary, but on its own this statement feels unscientific and needs to be buttressed with an actual argument. 
To this end, I would find it very interesting, and I think it would also be interesting to the community working on these problems, if a query/sample complexity were explicitly modeled and compared (i.e., the sample complexity of learning a loss function versus the sample complexity of learning an EPAC model from concretely provided labels). “Computing the gradients of this surrogate loss in turn is much more expensive compared to gradients of the empirical risk.” - Page 7. Again, I feel this is a vague and unjustified statement. The gradient of the surrogate loss for explanations is of course more expensive than the standard classification loss, but in previous papers regularization of explanations based on gradients has been done for even ResNet-50 sized models without any hint of computational strain. So, while the general case of the loss might be considerably more difficult to differentiate, in the case of gradient-based explanations their losses don't seem “much more expensive”. “Thus, the decoupled approach to solving the EPAC-ERM objective is in general more computationally convenient.” - Page 7. I feel this statement would be easy enough to back up experimentally in the experimental setting put forward in this work. One can simply provide the numbers showing this. However, I am curious if this remains the case when the method is scaled up. 
 [1] Ross et al., 2017 https://arxiv.org/abs/1703.03717 (this one is cited but not compared experimentally, which I think it ought to be) [2] Wicker et al., 2022 https://arxiv.org/abs/2212.08507 (this ought to be cited, but is in a different vein, so no experimental comparison is needed) [3] Czarnecki et al., 2017 https://arxiv.org/pdf/1706.04859.pdf (similar aim; ought to be discussed, though perhaps not experimentally compared) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See the above section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See the weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. The main drawback of the paper is that the theory and experiments are almost wholly disjointed. The first portion of the paper focuses specifically on theoretical guarantees using standard learning theoretic tools; however, I do not see where any of these tools are experimentally evaluated even in the toy experiments. **Our response:** We believe that our theory and experiments are complementary. These standard learning theoretic tools are supported by our experiments: we illustrate in Figure 4 that the gradient constraint helps with performance, especially in settings with limited labeled data. This supports our theoretical result that models trained with an explanation loss achieve lower sample complexity. > 2. Moreover, machine learning from explanation (MLX) papers and even robust explanation constraint papers focus on datasets and models that are considerably more sophisticated than what is proposed in this paper. EPAC-ERM objective is exactly the same as what is proposed in prior machine learning from explanation (MLX) works, but no discussion is given. For completeness the work should cite and at least discuss if not experimentally compare to papers [1][2] and [3] There are several scientifically vague sentences which I find inappropriate without further discussion of formalization. Three in particular come to mind: **Our response:** Thank you for pointing this out. We will certainly provide a more detailed discussion of the MLX literature. > 3. Another part of the paper I find lacking is the existence of several vague scientific statements that are not backed up by formal argument or good intuition. I give three examples and my issues below: “We argue that in many cases, n labeled data are the most expensive to annotate. The k data points with explanations also have non-trivial cost; they require an expert to provide the annotated explanation or provide a surrogate loss.” - Page 6. 
It seems that for images at least, eliciting explanation annotations from humans is considerably more costly. Moreover, the authors claim that they “argue” that, in the case of having a loss function, the k data points with explanations would be less costly, but I see no such argument. It is not obvious to me that eliciting a high-quality loss function would be any less expensive than eliciting annotations themselves. **Our response:** Thank you for your great point. The cost of explanation annotations highly depends on how fine-grained you want your explanation constraint to be. For example, in [1], we just want the model to ignore the background of an image. This is a relatively simple constraint, and it is possible to provide a closed-form explanation loss function: for example, if we have access to an image segmentation model, we can use it to segment an image and then penalize the feature importance of the background pixels without the need for a human annotator. However, if we want more complex explanation constraints, such as identifying the most important features of each image, then we agree that the cost of annotation will be considerably more than the cost of labeled data, and providing a high-quality loss function in this case can be difficult. This would be an interesting avenue for future research. We will provide a better clarification of this in the camera-ready version. > 4. 
To this end, I would find it very interesting, and I think it would also be interesting to the community working on these problems if a query/sample complexity were explicitly modeled and compared (i.e., the sample complexity of learning a loss function versus the sample complexity of learning an EPAC model from concretely provided labels) **Our response:** We do provide some discussion about this in Appendix C; here we discuss what types of explanation loss functions are learnable (given finite unlabeled data and “concretely provided labels” in the form of the value of this loss on those data). We also believe that this is an interesting question and that our work provides a preliminary discussion of it. We agree that modeling the complexity of designing/generating this surrogate loss is an interesting question for future work. > 5. “Computing the gradients of this surrogate loss in turn is much more expensive compared to gradients of the empirical risk.” - Page 7. ... **Our response:** We agree that this is computable in practice for ResNet-50 sized models. That being said, the regularization of explanations based on the input gradients still takes comparatively longer to run than learning with a standard (cross-entropy) loss. For instance, on synthetic data with 2-layer neural networks, differentiating with respect to the norm of the input gradients is 2.5x slower than using the standard CE loss (for 10 examples over 100 epochs). > 6. “Thus, the decoupled approach to solving the EPAC-ERM objective is in general more computationally convenient.” - Page 7... **Our response:** We provide experimental results in Appendix N.2 showing that smaller teacher models (e.g., with a smaller number of hidden dimensions) do not degrade the performance of our EPAC-ERM objective, which supports the claim that this decoupled approach can be more computationally convenient. We agree that scaling this up to larger models is indeed an interesting future research agenda. > 7. [1] Ross et. 
al., 2017 (this one is cited but not compared experimentally which I think it ought to be)... **Our response:** We want to point out that the method in Ross et al. [1] is the same as the standard Lagrangian baseline in our paper; the explanation surrogate loss function here is the L2 norm of the input gradients (along the features defined in the paper), which exactly matches the second term in the objective of [1]. The only difference is the third term of Ross et al.'s objective, which is a standard L2 regularization. We will make this clearer in the main text of our paper. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications! Comment: I would like to thank the authors for their very detailed response. I found it quite convincing. After reading the response and re-reading my review, I think my point about the experimental section being disjoint from the theory was too harsh. Moreover, I have had time to more carefully examine the Appendices, which I found interesting and comprehensive. I have increased my score with the assumption that in the final version the authors properly cite the MLX/explanation constraint literature and make the changes they say they will. Thanks again :)
Summary: Disclaimer: I am not an expert in learning theory, but I come from a background of interpretability and explaining models' decision making. So I am familiar with the methods that this work is trying to describe and some related works. I will review this paper modestly, and hope the authors and AC can understand. In the paper, the authors study recent methods for regularizing prediction models using explanations. An example of an explanation constraint would be enforcing certain properties of the gradients of the objective w.r.t. the data. Another would be requiring a separate teacher model that takes in the prediction model and data, and requiring the teacher model to have specific desired outputs. The authors analyze these explanation constraints from a statistical learning perspective, for example presenting two different explanation constraints and the analysis of their learnability and generalization bounds. Then, the authors propose an algorithm for learning with explanation constraints, and a variational method for solving the objective. Experiments performed on synthetic data and simple real-world data demonstrate that their variational approach achieves better performance than the standard approach. Strengths: The paper is overall well-organized and well-written. The notation seems clear and not cluttered. The formulation and definitions are also not difficult to follow, and the writing describes most of the motivation of the formulations well. Since I am not an expert, I cannot comment on the novelty of the work. Nonetheless, theoretical analyses of explanation methods, or of methods that incorporate explanations into training, are rare. So this work could potentially be a start to introducing theory into explanation methods. The authors have cited sufficient related works in the appendix. The experiments in this work are solid, where the authors compared with multiple baselines and sufficiently discussed the results. 
The setup of the experiments also seems fair and complete. The appendix also supplies a range of ablation studies, such as over hyperparameters. Weaknesses: I do not observe any major weaknesses in this work. Although, in terms of writing, the transition into section 4.1 was a bit of a jump. It was not immediately clear to me why there is a need for a variational method. Although in experiments it obtains better performance than non-variational approaches, it wasn't immediately obvious what prompted the authors to develop a variational approach; hence there seems to be a gap going from line 234 to line 236. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors claim in Line 29 that “An attractive facet of the latter is that we can automatically generate model-based explanations given unlabeled data points.” Can the authors cite works that generate model-based explanations given unlabeled data, or clarify what this means? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address any limitations of their work, but addressed its social impact. One limitation could be that, in this work, the authors seem to focus on restricting gradients as an explanation constraint. It would be great if future work could also address other versions of explanation constraints, and potential extensions such as feature-selection methods as explanation methods. Analysis of models beyond the two-layer neural network regime could potentially be discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I do not observe any major weaknesses in this work. Although, in terms of writing, the transition into section 4.1 was a bit of a jump. It was not immediately clear to me why there is a need for a variational method. Although in experiments it obtains better performance than non-variational approaches, it wasn't immediately obvious what prompted the authors to develop a variational approach; hence there seems to be a gap going from line 234 to line 236. **Our response:** Thank you for the feedback; we will provide an additional discussion in the transition into section 4.1. The main idea is that, especially in instances where computing the surrogate loss function is computationally intensive, the Lagrangian penalized objective may be expensive to optimize. Therefore, we can consider a different objective, through a projection onto a regularized class of models. While this projection at first indeed requires computing the surrogate loss as well, the crucial advantage is that we can solve the projection step (this is step 1 in Section 4.1) with a **simpler class** of teacher models for which the explanation surrogate gradients are more tractable, thus resolving much of the computational difficulty of the Lagrangian objective. We could also use different learning rates and tolerances for the projection step, again with the aim of making the optimization problem more tractable. > The authors claim that in Line 29 “An attractive facet of the latter is that we can automatically generate model-based explanations given unlabeled data points.” Can the authors cite works that generate model-based explanations given unlabeled data or clarify what this means? **Our response:** By “automatically generate model-based explanations given unlabeled data points”, we refer to the case when we can evaluate an explanation loss function $\phi$ without a domain expert. 
For example, suppose we want to encourage the model not to rely on the background features of an image and we have access to an image segmentation model. We can use this segmentation model to segment an image, giving us the background pixels, whose feature importance we can then penalize without the need for a human annotator. Another example presented in the paper is when we want to match the gradient of the model with the gradients of weak labelers (see Section 5.2). Since we have access to these weak labelers a priori, we can evaluate the explanation loss on any unlabeled data. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response. The authors have sufficiently addressed my questions and concerns. As of now, I do not have further questions.
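To make the decoupled steps described in this response concrete, here is a deliberately simplified sketch (ours, not the paper's algorithm): the projection is approximated by fitting a teacher from a simple class that satisfies the explanation constraint by construction (a linear model with the masked background features zeroed out), and a student is then self-trained on teacher pseudo-labels over unlabeled data. The function names and the hard-masking teacher class are illustrative assumptions:

```python
import numpy as np

def project_to_constrained_class(mask, X_lab, y_lab):
    """Step 1 (sketch): fit a simple teacher that satisfies the
    explanation constraint by construction -- a linear model restricted
    to the unmasked features, so its input gradient is zero on the
    features the constraint says to ignore."""
    keep = mask == 0                          # features the expert allows
    w_keep, *_ = np.linalg.lstsq(X_lab[:, keep], y_lab, rcond=None)
    w = np.zeros(X_lab.shape[1])
    w[keep] = w_keep
    return w

def self_train_student(w_teacher, X_unlab, X_lab, y_lab):
    """Step 2 (sketch): fit a student on the labeled data plus teacher
    pseudo-labels on the unlabeled data."""
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, X_unlab @ w_teacher])
    w_student, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
    return w_student
```

In the paper's setting these two steps would be iterated, and the teacher class can be much simpler than the student's, which is where the computational advantage over the Lagrangian objective comes from.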
Rebuttal 1: Rebuttal: **General response** We would like to thank all the reviewers for their efforts in providing thoughtful and attentive examinations of our work. We are glad to see that the reviewers highlighted a number of strengths, including: 1. The paper tries to answer an “interesting” (o4F1) and “important” problem (9kMr, w12b) 2. The paper has a “good contribution” (9kMr), could potentially be “a start to introducing theory” in explanation methods (zYz5), and can “serve as an important basis” for understanding the subfield (o4F1) 3. The paper is “well-written” (zYz5, o4F1, w12b) and has a “clear presentation” (9kMr) 4. The proposed method “works better than other baselines” (3fR6) and the setup of the experiments also seems “fair and complete” (zYz5). We respond to individual reviewer comments in the individual threads. Thank you again for your hard work and consideration.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents the concept of learning from explanations and theoretically shows that learning from explanations can improve the standard excess risk bounds in the agnostic setting. They treat explanations as constraints in the ERM procedure and introduce an extension of the PAC model referred to as the EPAC model. The paper introduces formal notions of explanations, EPAC learnability, explanation constraint sets, etc. The study analyzes the impact of explanations on model learning using standard learning theoretic tools and characterizes the constraints for linear models and two-layer neural networks. The statistical advantages of explanations stem from their role in constraining the hypothesis class: explanation samples improve the estimation of the population explanation constraint, thereby further restricting the hypothesis class. Further, the paper provides an algorithmic solution based on variational approximation to solve the problem of learning with explanation constraints. The proposed solution works better than other methods like the augmented Lagrangian. Strengths: 1. The paper provides a theoretical framework for studying learning with explanations by formalizing various key notions like explanation functionals, the explanation constraint set, the EPAC model, EPAC learnability, etc. 2. Their analysis provides insights into why learning with explanations could improve the overall excess error of the ERM solution. The key insight is that having explanation constraints in the ERM procedure effectively reduces the size of the model class, leading to smaller excess risk upper bounds. 3. The paper also provides an algorithmic solution to solve the ERM objective while satisfying the explanation constraints, and the empirical results show that the proposed method works better than the augmented Lagrangian and other baselines. Weaknesses and Questions: 1. 
Overall I like the ideas presented in the paper, but it was a bit tedious to work out exactly what is being done. It first appeared that the paper is proposing a method to learn the explanation functionals, i.e., learning a model that can generate explanations from some labeled explanation data. However, the goal is to study how using the explanation information improves the model performance. It would be helpful to improve the presentation to make this clear early on. 2. I am not convinced about the notion of explanations introduced. It is assumed that explanations are some vectors. Shouldn’t explanations be human-readable and in natural language? Then how do you pose the constraint sets in natural language? I might be missing something here as I have not followed explainability research. 3. I didn't quite get why explanation constraints are special. In general, one could put some constraints (e.g. regularization, etc.) along with the ERM objective to reduce the size of the hypothesis class, which will lead to a reduction in the upper bound of the excess risk. Could you please provide some justification or motivation for why the explanation constraints are special? 4. Figure 2 could be improved with some annotations and markers instead of just colors alone. The figures in experiments might be better when run with more random seeds. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. Overall I like the ideas presented in the paper but it was a bit tedious to get exactly what is being done. It first appeared that the paper is proposing a method to learn the explanation functionals i.e. learning a model that can generate explanations by learning from some labeled explanation data. However, the goal is to study how using the explanation information improves the model performance. It would be helpful to improve the presentation to make it clear early. **Our response:** Our paper does address both of these topics: (1) we provide a mathematical framework to analyze how explanation information improves model performance, and (2) we propose a variational objective that can learn explanations through explanation-labeled data. We would also like to point out that other reviewers thought that “the paper is organized and well-written” (Reviewer zYz5), “The paper is very easy to follow as it is well written” (Reviewer o4F1), “Well written and easy to follow” (Reviewer w12b), and “The presentation of this paper is clear” (Reviewer 9kMr). > 2. I am not convinced about the notion of explanations introduced. It is assumed that explanations are some vectors. Shouldn’t explanations be human-readable and be in natural language? Then how do you pose the constraint sets in natural language? I might be missing something here as I have not followed explainability research. **Our response:** While some explanations may be in natural language, in this work we focus on the notion of local explanations considered in explainability research, where explanations indeed take a vector form. For example, feature attribution methods [1,2,3] indicate how much each feature contributed to the model predictions, and these are all represented via real-valued vectors, not in natural language. [1] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why should I trust you?': Explaining the predictions of any classifier." 
Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016. [2] Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in neural information processing systems 30 (2017). [3] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." International conference on machine learning. PMLR, 2017. > 3. I didn't quite get why explanation constraints are special? In general, one could put some constraints (e.g. regularization, etc.) along with the ERM objective to reduce the size of the hypothesis class, which will lead to a reduction in the upper bound of the excess risk. Could you please provide some justification or motivation for why the explanation constraints are special? **Our response:** We completely agree that our framework is much more general and can handle other notions of constraints - please see our Discussion section (line 326 - 329). In our work, we have drawn inspiration from the field of explainability where such constraints are used to train models [4,5]. The explanation constraints are provided by domain experts for which the constraint depends on the instance. For example, if we want to ignore the background of images, different images would have different constraints. This differs from standard constraints such as L2 regularization which depends only on the model parameters. We provide a specific bound for a gradient constraint which is a canonical class of explanations in the field. As you mentioned (and as we also say in our Discussion section), this framework and the result from Theorem 2.2 holds for any notion of constraints. It is important, however, that there are models that satisfy these constraints that are also able to achieve high standard accuracy. [4] Ross, Andrew Slavin, Michael C. Hughes, and Finale Doshi-Velez. "Right for the right reasons: training differentiable models by constraining their explanations." 
Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017. [5] Rieger, Laura, et al. "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge." International conference on machine learning. PMLR, 2020. >4. Figure 2 could be improved with some annotations and markers instead of just colors alone. The figures in experiments might be better when run with more random seeds. **Our response:** Thank you for your suggestion. We will update the figure in the camera-ready version with some more descriptive annotations. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Dear authors, Thank you for responding to my questions. I think my concerns are not adequately addressed. My point 3 was to get a better understanding of the theoretical results. They seem to be valid for any constrained ERM. How do we interpret them in the context of explanation constraints? I am looking for a bit more nuanced understanding of these results from an explanations point of view. I am also not convinced about why these results are novel or interesting. In general one could put constraints on the ERM objective (e.g. regularization etc.) and that will lead to a reduction of the hypothesis class and hence smaller upper bounds. In these cases the results are more refined, providing more insights on constraint parameters and how they affect the bounds. In this regard, I don't find Theorem 2.2 satisfactory. Do you have any lower bounds on the generalization errors to claim that learning with explanations is helpful? The current conclusions are based on comparing two upper bounds on the generalization errors. In my view, the presentation could be improved to make it easier to follow. Here are some of my suggestions: a. Please include some intuitive examples of explanation constraints early in the paper. b. Maybe have a few figures showing these constraints and how they affect the learning procedure. c. 
Provide a brief summary of the explanation constraints considered in the literature and why they can be abstracted out mathematically the way it is done in the paper.
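To make concrete the rebuttal's point that local explanations are real-valued vectors (one attribution value per feature) rather than natural language, here is a minimal sketch. It is hypothetical illustration code, not from the paper or the cited attribution methods; the toy logistic model and all names are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_explanation(w, b, x):
    """Input-gradient attribution: d sigma(w.x + b) / dx, one value per feature."""
    p = sigmoid(w @ x + b)
    return p * (1.0 - p) * w  # chain rule for the logistic score

w = np.array([2.0, -1.0, 0.0])   # toy model weights (hypothetical)
x = np.array([0.5, 0.5, 3.0])    # toy input
e = gradient_explanation(w, b=0.0, x=x)

# The explanation is a vector of the same shape as the input. The third
# feature gets zero attribution: the model provably ignores it, which is
# the kind of per-instance constraint ("ignore the background") that can
# be imposed during training.
assert e.shape == x.shape and e[2] == 0.0
```

Attribution methods such as LIME, SHAP, and Integrated Gradients likewise return one real number per input feature, which is why the paper can treat explanations abstractly as vectors.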
null
null
null
null
null
null
Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity
Accept (spotlight)
Summary: The paper studies distributed learning in the presence of Byzantine workers (that is, nodes that output adversarial results) and data heterogeneity. A previous notion of robustness for distributed learning, namely G-gradient dissimilarity, is not sufficient to capture data heterogeneity. In fact, this holds even in fairly simple tasks such as least-squares regression (i.e., for any G>0). Motivated by this observation, the paper defines a new notion of robustness for distributed learning, (G,B)-Gradient Dissimilarity. This allows for some additional error term in the average gradient difference between workers, proportional to B. The proposed theory is used in a least-squares regression task to justify observed robustness of stochastic gradient descent under small sets of dishonest workers. Strengths: The paper presents an interesting notion of robustness in distributed learning. It is shown that this notion captures distributed gradient computation when a small number of workers can output arbitrary results. The theory is compelling and seems to justify observed behavior in a realistic setting. A theoretical criterion is given for algorithms to be robust against (G,B)-gradient dissimilarity. This is an interesting theoretical contribution. Weaknesses: The precise formula used in the definition of (G,B)-Gradient Dissimilarity makes sense because the analysis is done for a variant of stochastic gradient descent. However, it is unclear whether this notion is also relevant to other distributed learning tasks. The precise setting regarding honest and Byzantine workers is confusing (see Questions section of this review). Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the (G, B)-gradient dissimilarity in the experiments? Observation 1 does not require any Byzantine workers, right? Perhaps this should be emphasized. In Proposition 1, H is the set of honest workers. How do the honest workers know who is honest? 
Earlier in the paper it is mentioned that trying to compute any statistic over the whole set of workers is generally impossible in the presence of Byzantine workers. Does this mean that the task in Proposition 1 is impossible to perform in reality because H must be known to all honest workers? If so, this renders the whole model not very realistic. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. Please find below answers to your questions. >What is the (G, B)-gradient dissimilarity in the experiments? Intuitively, $(G, B)$-gradient dissimilarity measures data heterogeneity using the gradient of the empirical loss function. For a given distributed learning problem, there may be multiple pairs $(G,B)$ for which the gradient dissimilarity inequality holds (a naive example: if it holds for $(G,B)$, then it also holds for $(G,2B)$). To show the tightest empirical upper bounds (Figures 3 and 4), based on our result in Theorem 2, we search through several possible values of $(G,B)$ using a heuristic method explained in Appendix D. >Observation 1 does not require any Byzantine workers, right? Perhaps this should be emphasized. Yes, we agree. Thank you for pointing this out. We will emphasize it in the paper. >In Proposition 1, H is the set of honest workers. How do the honest workers know who is honest? Earlier in the paper it is mentioned that trying to compute any statistic over the whole set of workers is generally impossible in the presence of Byzantine workers. Does this mean that the task in Proposition 1 is impossible to perform in reality because H must be known to all honest workers? If so, this renders the whole model not very realistic. Thank you for bringing up this point. Honest workers and the server do not know the set $\mathcal{H}$. This is in fact one of the main theoretical arguments in the proof of the lower bound of Theorem 1. Of course, in Theorem 2, we analyze the convergence of Robust D-GD for the loss function over the set of honest workers $\mathcal{H}$. 
This is a good remark, as one could think of the applicability of our findings to higher-order optimization algorithms, for example. In this sense, extending $(G,B)$-Gradient Dissimilarity to incorporate higher-order information is an interesting research direction. However, as long as the loss is differentiable, we believe that $(G,B)$-Gradient Dissimilarity is the primary heterogeneity model to study, as was done before in the references in lines 143-144. Finally, we emphasize that, under $(G,B)$-Gradient Dissimilarity, the lower bound in Theorem 1 applies to any algorithm, and not just a specific variant of gradient descent. That is, we do not assume any structure of the algorithm for demonstrating Theorem 1.
Summary: This paper shows that, under robust distributed learning settings, the conventional learning error bounds are vacuous, as they rely on restrictive assumptions such as $G$-gradient dissimilarity (equation 2). The paper relaxes the assumption to a more general $(G, B)$-dissimilarity model that relaxes the bound on the gradient norms of honest workers with a constant $B$. A new breakdown ratio between Byzantine and honest workers $f / n$ that would yield theoretical robustness guarantees is established as $ 1 / (2 + B^2) $. The authors finally show the tightness of the $f / n < 1 / (2 + B^2)$ bound via analyzing the state-of-the-art robustness coefficient $ \kappa = f / (n - 2f) $ on a robust D-GD, which also yields a $ 1 / (2 + B^2) $ instead of $ 1 / 2 $ breakdown point. In this way, the gap between the previous theory and the empirical observation of the largest number of Byzantine workers that still admits a robustness guarantee on the global honest loss (equation 1) is reduced. Strengths: The paper borrows the data heterogeneity model ($(G, B)$-dissimilarity) from classical distributed learning settings, applies it to Byzantine distributed learning, and builds a nice and intuitive theory on the breakdown point, i.e., the fraction of Byzantine workers beyond which the robustness guarantee breaks down. The paper's narrative is clear and easy to follow, and the experiments (Figures 2, 3, 4) are well conducted. Weaknesses: The novel breakdown point essentially involves a dependency on the constant bounding the honest workers' gradient norms. It would be better to have an ablation study on logistic regression showing that if such a constant $B$ is changed, we would observe similar changes in the breakdown point. Otherwise, it is a great paper! Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am confused on the heuristic method to compute $G$ and $B$ in Appendix D.3, line 743. Could you explain this formula? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I didn't find any other limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. Please find below answers to your questions. We hope that you would increase the score if you find our answers satisfactory. >I am confused on the heuristic method to compute $G$ and $B$ in Appendix D.3, line 743. Could you explain this formula? First, we recall that a fixed set of loss functions can satisfy the condition of $(G,B)$-gradient dissimilarity for multiple pairs $(G,B)$. For example, if the loss functions satisfy $(G,B)$-gradient dissimilarity, they also satisfy $(G,2B)$-gradient dissimilarity. Thus, our heuristic finds multiple pairs $(G,B)$ and chooses the one yielding the smallest upper bound (as per Theorem 2). To do so, we consider several possible values of $B$ (whose range is bounded as per the lower bound in Theorem 1). Then, for each possible value of $B$, we find the corresponding tightest $G$ satisfying Assumption 1 (as described in line 742). Finally, in line 743, we find the pair $(G, B)$ minimizing the theoretical upper bound. >The novel breakdown point essentially involves a dependency on the constant bounding the honest workers' gradient norms. It would be better to have an ablation study on logistic regression showing that if such a constant $B$ is changed, we would observe similar changes in the breakdown point. Otherwise, it is a great paper! This is an interesting remark. It is however quite challenging to conduct such an ablation study. This is mainly because we cannot directly control the constant $B$, subject to Assumption 1, even for simple learning problems like least-squares regression. Thus, in our experiments, we first partition data heterogeneously across workers, and then determine $B$ empirically. --- Rebuttal Comment 1.1: Title: Reply to the rebuttal Comment: Thank you for the clarification. I now understand the estimation approach on $B$ in Appendix D.3. --- Reply to Comment 1.1.1: Comment: Thank you very much for your timely response. 
We hope all your concerns have been addressed at this point. We will be glad to answer any further question that may lead you to increasing your score.
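The heuristic paraphrased in this rebuttal can be illustrated with a small numerical sketch. This is hypothetical code, not the authors' Appendix D implementation: for a grid of candidate $B$ values, it measures the tightest $G$ such that mean_i ||grad_i - grad||^2 <= G^2 + B^2 ||grad||^2 holds over a set of sampled models, exhibiting the trade-off that makes several $(G,B)$ pairs valid. The final selection among pairs via the Theorem 2 upper bound is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, d, n_points = 5, 3, 50

# Toy heterogeneous quadratic losses: grad_i(theta) = A_i @ theta - b_i.
A = [np.diag(1.0 + rng.random(d)) for _ in range(n_workers)]
b = [rng.normal(size=d) for _ in range(n_workers)]
thetas = rng.normal(size=(n_points, d))  # sampled models at which to test

def tightest_G(B):
    """Smallest G making the (G, B)-dissimilarity inequality hold on all samples."""
    G2 = 0.0
    for theta in thetas:
        grads = np.stack([A[i] @ theta - b[i] for i in range(n_workers)])
        g_bar = grads.mean(axis=0)
        lhs = np.mean(np.sum((grads - g_bar) ** 2, axis=1))
        G2 = max(G2, lhs - B**2 * np.sum(g_bar**2))
    return np.sqrt(max(G2, 0.0))

Gs = [tightest_G(B) for B in (0.0, 0.5, 1.0, 2.0)]
# Allowing a larger B can only shrink (never grow) the residual G, so many
# valid (G, B) pairs exist and one must pick the pair minimizing the bound:
assert all(g1 >= g2 for g1, g2 in zip(Gs, Gs[1:]))
```

This monotone trade-off is why the heuristic must sweep $B$ and then choose the pair minimizing the theoretical upper bound, rather than there being a unique $(G,B)$.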
Summary: The work focuses on the analysis of the limits of Byzantine-robust learning under so-called $(G,B)$-dissimilarity assumption -- a generalization of the standard bounded gradient dissimilarity assumption. The main result of the paper is in establishing the upper bound for the maximal admissible ratio of Byzantine workers: $\delta < \frac{1}{2+B^2}$ that is separated from $\frac{1}{2}$ (and can be very small). The authors also derive the lower bound for the optimization error that matches the known upper bounds (up to numerical factors) and the derived upper bound for the robust version of the distributed gradient descent. Strengths: 1. The authors derive the limits on robustness under $(G,B)$-gradient dissimilarity that are quite interesting. In particular, the ratio of Byzantine workers has to be smaller than $\frac{1}{2+B^2}$ to guarantee the convergence to some neighborhood of the solution. Moreover, the lower bound for the optimization error $\varepsilon$ is established. These bounds are quite pessimistic ($B^2$ is proportional to the condition number in the worst case) and emphasize the non-triviality of Byzantine-robustness in the heterogeneous case. The construction of the worst-case examples is quite simple. 2. The paper is well-written in general, the idea is explained well in the main part of the paper. 3. The proofs look correct to me. Weaknesses: The results of Theorem 2 are not completely novel. In particular, there exist results under $(G,B)$-dissimilarity assumption, see Theorem V from [18] and Theorems E.1, E.2 from [14]. These results are derived for the different notions of robust aggregator, but up to the replacement of $\kappa$ with $c\delta$ they recover Theorem 2 from this submission. Moreover, the method from [14] with $p=1$ and no compression coincides with Robust D-GD. The paper should be modified accordingly. Nevertheless, I see Theorem 1 as the main contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
In [17,18], another notion of robust aggregation was introduced. Are there any benefits of using a different notion of robustness? 2. Theorem 2, "let $f < n/2$": this is a bit confusing, because $\kappa$ implicitly depends on the ratio of Byzantines. 3. Although the proofs are detailed and look correct to me, it is a bit inconvenient to read them. In particular, I suggest moving the proofs of Lemmas 1 and 2 to Appendix B.1.2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. We will implement your suggestions in the final version of the paper. Please find below answers to your questions. >The results of Theorem 2 are not completely novel. In particular, there exist results under $(G,B)$-dissimilarity assumption, see Theorem V from [18] and Theorems E.1, E.2 from [14]. These results are derived for the different notions of robust aggregator, but up to the replacement of $\kappa$ with $c \delta$ they recover Theorem 2 from this submission. Moreover, the method from [14] with $p=1$ and no compression coincides with Robust D-GD. We thank the reviewer for bringing up this point, as we were aware of these results. While we agree on the similarities between the results, there are subtle differences that are important to note. First, unlike the notion of $(f,\kappa)$-robustness (that we use), the so-called $(c,\delta)$-agnostic robustness (used in [14, 18]) is a stochastic notion. Under the latter notion, good parameters $(c,\delta)$ of robust aggregators were only shown when using a randomized method called Bucketing [18]. Consequently, instead of obtaining a deterministic error bound as in Theorem 2, simply replacing $c\delta$ with $\kappa$ in [14, 18] gives a stochastic bound, which is strictly weaker than the result of Theorem 2. Moreover, the corresponding non-vanishing upper bound term for robust D-GD obtained from the analysis in [18] for several robust aggregation rules (e.g., coordinate-wise median) is worse than that we obtain using $(f,\kappa)$-robustness (see also discussion below). We will include this discussion in the paper. >In [17,18], another notion of robust aggregation was introduced. Are there any benefits of using a different notion of robustness? The main benefit of $(f,\kappa)$-robustness (the notion of robust aggregation that we use) is that it was shown (originally in [3]) to hold for several existing aggregation rules (e.g. 
coordinate-wise median, trimmed mean) with tight rates of $\kappa$, unlike $(c, \delta)$-agnostic robustness introduced in [17, 18]. Also, as argued in [3], existing robustness criteria in the literature can be recovered from $(f,\kappa)$-robustness. Importantly, as pointed out above, $(f,\kappa)$-robustness is a deterministic condition, which allows us to obtain a deterministic bound for robust D-GD (unlike in [18]). >Theorem 2, ``let $f < n/2$": this is a bit confusing, because $\kappa$ implicitly depends on the ratio of Byzantines. The reason we explicitly assume $f < n/2$ in the beginning of the theorem is that no upper bound can be achieved without this condition, as shown in [25]. We then let $F$ be an $(f,\kappa)$-robust aggregation rule, which is implicit in ``let $F \colon \mathbb{R}^{d \times n} \to \mathbb{R}^d$ and $\kappa \geq 0$ be such that $F$ is $(f,\kappa)$-robust". The theorem then holds for any $\kappa$ satisfying this condition (exact values for $\kappa$ can be found in [3]). --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: I thank the authors for the clarifications: all my questions are adequately addressed. Since the authors promised to modify the paper accordingly, I decided to keep my score unchanged.
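For readers unfamiliar with the aggregation rules named in this thread, here is a minimal sketch (illustrative only; the $(f,\kappa)$ rates from [3] are not computed here) of coordinate-wise median and coordinate-wise trimmed mean, contrasted with plain averaging under a single Byzantine worker.

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of the n submitted gradients."""
    return np.median(grads, axis=0)

def trimmed_mean(grads, f):
    """Drop the f largest and f smallest values per coordinate, then average."""
    s = np.sort(grads, axis=0)
    return s[f:len(grads) - f].mean(axis=0)

honest = np.array([[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [1.0, 2.0]])
byzantine = np.array([[1e6, -1e6]])          # one adversarial worker
grads = np.concatenate([honest, byzantine])

# Plain averaging is destroyed by a single Byzantine gradient ...
assert abs(grads.mean(axis=0)[0]) > 1e4
# ... while both robust rules stay close to the honest mean (~[1, 2]).
for agg in (coordinate_median(grads), trimmed_mean(grads, f=1)):
    assert np.allclose(agg, [1.0, 2.0], atol=0.2)
```

With no adversaries ($f = 0$), the trimmed mean reduces to plain averaging, matching the remark that averaging is a $(0,0)$-robust aggregation rule.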
Summary: In the present paper the authors study the problem of distributed learning in the presence of adversarial learners (aka Byzantine workers) and heterogeneous data. The paper aims to generalize and improve previous results on homogeneous data, an assumption that is claimed to be very restrictive in practice. For this they introduce a new concept of (G,B) gradient dissimilarity which has the same relation to the more standard G gradient dissimilarity as an affine variance assumption has to a bounded variance assumption in stochastic optimization. Under their assumption they first prove that the breakdown point, i.e., the share of Byzantine workers beyond which the optimization breaks down, is upper bounded by $\tfrac{1}{2+B^2}$, where previous results only yield the intuitive bound of $\tfrac12$. They also confirm this theoretical observation numerically. Second, they also establish sharp error bounds in the regime below the breakdown point by proving a lower bound as well as an upper bound, attained by robust distributed gradient descent. Strengths: I find the paper very clear and well written. The theoretical contributions are very interesting and appear to be novel. Furthermore, the experiments illustrate the sharpness of the results. The proofs in the appendix seem to be sound although I have to say that I didn't check all the details in the proof of Theorem 1. I did check the convergence statement of Theorem 2, the proof of which is rather standard. Weaknesses: I cannot list any significant weaknesses besides the few questions that I have in the next block. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: p.2: Maybe you can make the connection between the G and (G,B) assumptions and bounded / affine variance assumptions in the analysis of SGD. p.6, last paragraph: can one estimate B here to see whether it is close to $\sqrt{2}$ which would fit to the $\tfrac14$ you observe numerically? 
p.7: Definition 4 should be followed by a few examples for $(f, \kappa)$-robust aggregation rules. Please also mention that in the absence of adversaries, one can just average the gradients and get a $(0,0)$-robust aggregation rule. p.8: Here you start using nicefrac excessively which, at least for my brain, is very hard to read and process. Maybe you can use or stick to tfrac for inline fractions, which you seem to have used earlier. p.30: Number of honest workers should just be $n-f$ not $n-f=10$. If I understand this correctly $n=10$ and $f$ ranges from $1$ to $9$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors did make the point that probably not all of their results are sharp, e.g., in terms of constants or the slow-down factor for linear convergence in the presence of adversaries. Negative societal impact is not to be expected in my opinion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. We will implement your suggestions in the final version of the paper. Please find below answers to your questions. > p.2: Maybe you can make the connection between the G and (G,B) assumptions and bounded / affine variance assumptions in the analysis of SGD. This is a good remark. We will make the connection between these in the final version of the paper. >p.6, last paragraph: Can one estimate B here to see whether it is close to $\sqrt{2}$ which would fit to the $\frac{1}{4}$ you observe numerically? This is an interesting question. We believe it is possible to get the result mentioned, similar to the way we measured $G$ and $B$ empirically in Figures 3 and 4. We will include this result in the final version. >p.30: Number of honest workers should just be $n-f$ and not $n-f = 10$. If I understand this correctly $n = 10$ and $f$ ranges from $1$ to $9$. In experiments, we fix the number of honest workers and vary the number of Byzantine workers. In this particular case, the number of honest workers (i.e., $n-f$) is set to $10$, and the number of Byzantine workers $f$ varies from $1$ to $9$. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks for the clarifications. It would be great if you could also add the examples after Definition 4 but besides that I'm happy with the submission and will maintain my score. --- Reply to Comment 1.1.1: Comment: Thanks for your response! We will add examples after Definition 4 in the revision of the paper.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework
Accept (poster)
Summary: This paper considers AUC optimization in the distribution shift setting, which is of great value to the machine learning community. The author formulates the problem as a min-max-min-max problem and relaxes it as an upper bound minimization problem. The experimental results also confirm the effectiveness of this proposal. Strengths: The theoretical results seem reasonable. However, as I am not an expert on distributionally robust optimization, I cannot justify the theoretical significance of this paper. Weaknesses: Table 1 shows the results of CIFAR10-LT, but there is no description of the experimental settings in the context. The proposed method performs poorly on CIFAR10-LT, which seems to contradict the claim of the article. How can this phenomenon be explained? Miscellaneous: The colorful curve in Figure 1 may not be friendly for black-and-white print versions. Providing a legend directly may be a better choice. The definition of $b^*$ in Eq (5) seems to be incorrect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there more discussion on the results of CIFAR10-LT, why is the performance of the proposal similar to AUCMLoss, and sometimes even weaker than AUCMLoss? --- Thanks for the detailed clarifications, which have addressed my concerns. I would like to keep my score. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not provide a discussion about the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed comments! Our response is as follows: >**Q1:** Table 1 shows the results of CIFAR10-LT, but there is no description of the experimental settings in the context. The proposed method performs poorly on CIFAR10-LT, which seems to contradict the claim of the article. How can this phenomenon be explained? **A1:** Thank you for your valuable question! We appreciate your feedback regarding the ambiguity in our paper and apologize for any confusion. To construct the binary, long-tailed version of the dataset, we followed the methodology proposed in [25]. The first half of the classes are assigned as positive, while the remainder are designated as negative. Subsequently, some examples are excluded to establish a desired imbalance ratio. We plan to relocate this section from the Appendix to the main body of the paper for better comprehension. Responding to your query, we would like to address it from two perspectives. First, we do not directly optimize performance on the clean CIFAR10-LT. Our primary focus is to enhance performance on the local worst-case distribution surrounding the original, uncorrupted distribution. In our experiments, this corresponds to the **corrupted versions** of datasets. Here, we directly list the empirical results on the corrupted datasets for our proposed methods and the baseline (AUCMLoss), so that the improvements can be checked directly. 
|Datasets|Models|Methods|0.01|0.05|0.10|0.20| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|resnet20|AUCMLoss|63.93|76.77|81.75|85.26| |CIFAR10|resnet20|DRAUC-Df|65.3(+1.37)|79.87(+3.1)|84.33(+2.58)|89.0(+3.74)| |CIFAR10|resnet20|DRAUC-Da|65.69(+1.76)|80.05(+3.28)|86.17(+4.42)|89.7(+4.44)| |CIFAR10|resnet32|AUCMLoss|64.00|76.98|81.87|85.66| |CIFAR10|resnet32|DRAUC-Df|64.66(+0.66)|80.25(+3.27)|85.88(+4.01)|90.63(+4.97)| |CIFAR10|resnet32|DRAUC-Da|65.38(+1.38)|80.84(+3.86)|86.28(+4.41)|89.44(+3.78)| |CIFAR100|resnet20|AUCMLoss|55.70|61.91|65.73|69.58| |CIFAR100|resnet20|DRAUC|57.27(+1.57)|62.43(+0.52)|66.25(+0.52)|71.34(+1.76)| |CIFAR100|resnet20|CDRAUC|57.5(+1.8)|62.28(+0.37)|66.11(+0.38)|71.06(+1.48)| |CIFAR100|resnet32|AUCMLoss|56.19|61.87|63.64|69.81| |CIFAR100|resnet32|DRAUC|57.04(+0.85)|61.9(+0.03)|65.96(+2.32)|71.22(+1.41)| |CIFAR100|resnet32|CDRAUC|57.38(+1.19)|62.12(+0.25)|66.13(+2.49)|71.31(+1.5)| |MNIST|small_cnn|AUCMLoss|95.52|98.16|98.04|98.60| |MNIST|small_cnn|DRAUC|96.06(+0.54)|98.38(+0.22)|98.53(+0.49)|98.84(+0.24)| |MNIST|small_cnn|CDRAUC|96.35(+0.83)|98.04(+-0.12)|98.56(+0.52)|98.92(+0.32)| |MNIST|resnet20|AUCMLoss|89.09|97.82|96.26|97.74| |MNIST|resnet20|DRAUC|94.72(+5.63)|98.21(+0.39)|98.32(+2.06)|98.67(+0.93)| |MNIST|resnet20|CDRAUC|94.52(+5.43)|98.01(+0.19)|98.53(+2.27)|98.79(+1.05)| Here are the experiment results we add in the rebuttal phase: |Datasets|Models|Methods|Dog|Bird|Vehicles| |:-:|:-:|:-:|:-:|:-:|:-:| |TINYIMAGENET|resnet20|AUCMLoss|77.35|85.98|82.37| |TINYIMAGENET|resnet20|DRAUC-Df|84.11(+6.76)|87.3(+1.32)|88.67(+6.3)| |TINYIMAGENET|resnet20|DRAUC-Da|83.96(+6.61)|87.61(+1.63)|89.06(+6.69)| |TINYIMAGENET|resnet32|AUCMLoss|77.25|85.20|81.12| |TINYIMAGENET|resnet32|DRAUC-Df|85.79(+8.54)|88.0(+2.8)|88.32(+7.2)| |TINYIMAGENET|resnet32|DRAUC-Da|84.56(+7.31)|87.6(+2.4)|88.46(+7.34)| |Datasets|Models|Methods|Results| |:-:|:-:|:-:|:-:| |MELANOMA|efficientnetb0|AUCMLoss|73.95| |MELANOMA|efficientnetb0|DRAUC-Df|77.82(+3.87)| 
|MELANOMA|efficientnetb0|DRAUC-Da|78.54(+4.59)| |MELANOMA|densenet121|AUCMLoss|75.67| |MELANOMA|densenet121|DRAUC-Df|77.89(+2.22)| |MELANOMA|densenet121|DRAUC-Da|76.82(+1.15)| Secondly, due to the phenomenon of the **trade-off between clean accuracy and robust accuracy** in adversarial training [a], in many cases, achieving optimal performance on both clean and adversarial examples simultaneously is unattainable. This phenomenon can explain the performance drop in the clean data. --- >**Q2:** The colorful curve in Figure 1 may not be friendly for black-and-white print versions. Providing a legend directly may be a better choice. **A2:** Thank you for your kind suggestion. We will add a legend. --- >**Q3:** The definition of $b^*$ in Eq (5) seems to be incorrect. **A3:** Thanks for the detailed review, the corrected version is $b^* = \mathbb E_{P^-}[f_{{\theta}}({x}^-)]$. --- >**Q4:** Is there more discussion on the results of CIFAR10-LT, why is the performance of the proposal similar to AUCMLoss, and sometimes even weaker than AUCMLoss? **A4:** Thanks again for your valuable question! As discussed in **A1**, we attribute this phenomenon to the inherent trade-off between clean AUC and robust AUC. AUCMLoss can be perceived as an optimal classifier for clean data. However, it is vulnerable to distributional attacks. In contrast, our DRAUC might not perform as well as AUCMLoss on clean data, but it demonstrates superior performance on robust data. --- [a]Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." International conference on machine learning. PMLR, 2019. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and additional supporting results. --- Reply to Comment 1.1.1: Comment: Thank you for your comment! Please feel free to ask if there is any confusion, and we are glad to address any concern you may have.
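As background for the AUC losses compared throughout this thread, here is a minimal sketch (not the paper's DRAUC objective nor the AUCMLoss implementation; all names are illustrative) of the empirical AUC as a pairwise ranking statistic, together with a pairwise squared-hinge surrogate of the kind that AUC optimization methods typically minimize in place of the non-differentiable 0-1 ranking loss.

```python
import numpy as np

def empirical_auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return np.mean((pos > neg) + 0.5 * (pos == neg))

def surrogate_loss(pos_scores, neg_scores, margin=1.0):
    """Squared hinge on pairwise margins; zero iff every pair is separated by >= margin."""
    diff = (np.asarray(pos_scores, dtype=float)[:, None]
            - np.asarray(neg_scores, dtype=float)[None, :])
    return np.mean(np.maximum(margin - diff, 0.0) ** 2)

pos = [2.0, 1.5, 3.0]
neg = [0.1, 0.2]
assert empirical_auc(pos, neg) == 1.0      # all pairs correctly ranked
assert surrogate_loss(pos, neg) == 0.0     # every pairwise margin >= 1
assert empirical_auc([0.0], [1.0]) == 0.0  # a reversed pair scores 0
```

The distributionally robust variants discussed in the rebuttal then minimize such a surrogate not on the clean empirical distribution but on a worst-case distribution in its neighborhood, which is the source of the clean-vs-robust trade-off cited in A4.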
Summary: Area under the ROC curve (AUC) has been widely used for imbalanced classification, but optimizing the metric requires the assumption that training and test data come from the same distribution, which is not realistic. The authors explore distributionally robust optimization in the context of AUC optimization. They first integrate DRO and AUC and reformulate it into a tractable optimization problem using surrogate loss functions. Finally, they propose distribution-free DRAUC and distribution-aware DRAUC optimization frameworks. The proposed frameworks are validated theoretically and empirically via experiments on multiple datasets. Strengths: - The authors propose DRAUC-df by combining DRO and AUC optimization in a tractable way and further extend it to DRAUC-da by considering perturbations on the positive and negative distributions separately. - The two proposed frameworks are validated on several standard datasets, achieving meaningful performance improvements compared to the baseline (AUCML). Weaknesses: - Experiments are only conducted on small-scale datasets, such as CIFAR or MNIST with ResNet and CNN architectures. The method may need to be validated on more realistic large-scale datasets, e.g., as AUCML was, on large-scale medical image datasets with more recent network architectures (EfficientNet, DenseNet). - This paper addresses the robustness under worst local distribution from a DRO perspective, but it restricts to the binary classification problem (potentially due to the use of the AUC metric) and is validated only on binary versions of datasets, so I have reservations about the suitability of this problem setting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In L97, why can the Wasserstein constraint in the optimization algorithms be removed with Theorem 2? - Typos in the definition of \ell_{0,1} and Eq. (2) Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and potential societal impacts are not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive reviews! To address your concerns, we conduct several additional experiments on larger datasets and extend our method to the multi-class setting.

>**Q1:** Experiments are only conducted on small-scale datasets. The method should also be validated on more realistic large-scale datasets, e.g., AUCML on large-scale medical image datasets with more recent network architectures (EfficientNet, DenseNet).

**R1:** Thank you for your valuable question. To address your concerns, we conduct additional experiments on larger datasets, including **Tiny-ImageNet-200** and a large-scale medical image dataset, **Melanoma**, utilizing **EfficientNet** and **DenseNet**. These results confirm the applicability of our method to large-scale datasets and modern network architectures. Here, we use **bold** to mark the highest scores, and results marked '-C' denote testing performance on corrupted versions of the datasets. **Due to the reply length limit, please refer to Tables 1 and 2 in the PDF of the global response for full results!**

**Empirical results on Melanoma:**

|Methods|Mela-C(Enet-b0)|Mela-C(Dense-121)|Mela(Enet-b0)|Mela(Dense-121)|
|:-:|:-:|:-:|:-:|:-:|
|CE|69.65|72.05|83.27|83.37|
|AUCMLoss|73.95|75.67|85.98|84.33|
|FocalLoss|65.64|72.97|78.81|84.64|
|ADVShift|70.98|71.91|80.34|81.60|
|WDRO|75.88|74.71|86.29|**85.79**|
|DROLT|71.50|76.04|82.89|83.30|
|GLOT|71.52|69.83|83.95|81.59|
|AUCDRO|74.57|76.09|**86.77**|85.51|
|DRAUC-Df|77.82|**77.89**|85.06|81.44|
|DRAUC-Da|**78.54**|76.82|82.98|81.61|

---

>**Q2:** This paper addresses robustness under the worst local distribution from a DRO perspective, but it is restricted to the binary classification problem (potentially due to the use of the AUC metric) and validated only on binary versions of datasets, so I have reservations about the suitability of this problem setting.

**R2: Thank you for this insightful question!
We have made an initial attempt to extend our method to the multi-class setting, and we empirically show that it also works well in multi-class cases.** AUC is typically used in binary classification problems. However, we can generalize our method to accommodate multi-class settings by interpreting a multi-class classification problem as a series of binary decision-making processes. First, we define the multi-class AUC:

$AUCM(f\_{{\theta}}) = \frac{1}{N\_C}\sum\_{i=1}^{N\_C}AUC\_i(f\_{{\theta}}) = \frac{1}{N\_C}\sum\_{i=1}^{N\_C} \mathbb E\_{\widehat P\_i, {\widehat P\_i'}}[\ell(f\_{{\theta}}(x\_i) -f\_{{\theta}}(x\_i'))]$

where $N\_C$ is the number of classes, $AUC\_i$ denotes the AUC score when assigning the $i^{th}$ class as positive and the rest of the classes as negative, $\widehat P\_i$ denotes the example distribution of the $i^{th}$ class, and $\widehat P\_i'$ denotes the distribution of the rest of the classes. Consistent with the notation in our paper, we use $\widehat Q$ to describe the perturbed distribution. Following the idea of optimizing the multi-class AUC under the local worst distribution, we now present the objective of Multi-class DRAUC (MDRAUC):

$\min\_{{\theta}}\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P)\le \epsilon} ~\mathbb E\_{\widehat Q}\left[\frac 1{N\_C}\sum\_{i=1}^{N\_C}AUC\_i(f\_{{\theta}})\right]$

Our formulation of MDRAUC-Df is as follows:

${(MDf\star)}\quad\min\_{{{\theta}},\vec a,\vec b} \min\_{\lambda > 0}\max\_{\vec \alpha} \left\\{\lambda\epsilon + \mathbb E\_{\widehat{ P}}[\widehat \varphi\_{{{\theta}}, \vec a, \vec b, \vec \alpha, \lambda}(z)] \right\\}$

where $\widehat \varphi\_{{{\theta}}, \vec a, \vec b, \vec \alpha, \lambda}(z) = \max\_{z' \in \mathcal Z}\left[\frac 1{N\_C}\sum\_{i=1}^{N\_C}g\_i(a\_i,b\_i,\alpha\_i,{{\theta}},z') - \lambda c(z,z')\right].$

Due to the reply length limit, we place the **detailed derivation, experiment setting, and full results (Table 3 in the pdf file)** in the **global response**.
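As an aside, the one-vs-rest averaging behind the $AUCM$ definition above can be computed in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; `scores` is assumed to be an $N \times N_C$ matrix of per-class decision values and `labels` a vector of class indices:

```python
import numpy as np

def multiclass_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """One-vs-rest macro AUC: average over classes of P(score_pos > score_neg)."""
    n_classes = scores.shape[1]
    aucs = []
    for i in range(n_classes):
        pos = scores[labels == i, i]   # class i treated as positive
        neg = scores[labels != i, i]   # all remaining classes treated as negative
        if len(pos) == 0 or len(neg) == 0:
            continue
        # fraction of (positive, negative) pairs ranked correctly; ties count 1/2
        diff = pos[:, None] - neg[None, :]
        aucs.append((diff > 0).mean() + 0.5 * (diff == 0).mean())
    return float(np.mean(aucs))
```

With perfectly separated per-class scores this returns 1.0; the surrogate objectives above replace the 0-1 pair comparison with a differentiable loss.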
We kindly refer you to the global response for more details. We further validate the effectiveness of the above multi-class DRAUC algorithm on CIFAR10-LT, and the experiment results are as follows:

**Empirical results of ResNet20 on Multi-class-CIFAR10-LT**

|Methods|0.01-C|0.05-C|0.10-C|0.20-C|0.01|0.05|0.10|0.20|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CE|89.48|90.95|91.58|92.07|95.95|97.98|**98.58**|98.92|
|AUCMLoss|86.77|88.66|90.88|91.29|95.31|97.69|98.21|98.56|
|FocalLoss|89.82|91.27|91.97|91.67|**96.42**|**98.15**|98.56|98.95|
|ADVShift|70.42|72.48|76.54|74.48|72.15|80.87|82.47|83.96|
|WDRO|90.01|91.30|91.22|91.68|96.35|98.04|98.57|**98.96**|
|DROLT|89.01|88.72|91.18|90.42|95.44|97.19|98.20|98.52|
|GLOT|90.44|91.88|91.63|92.49|96.29|97.85|98.57|98.88|
|DRAUC|**91.28**|**93.15**|**94.98**|**95.74**|94.83|96.47|97.87|98.12|
|CDRAUC|90.92|92.81|94.20|94.75|95.12|97.06|98.26|98.63|

Our method outperforms all competitors on the corrupted datasets in the multi-class setting. We acknowledge that some previous works have studied advanced algorithms for multi-class AUC learning. However, the current derivations and empirical results suffice to show that our method is applicable in multi-class settings, and we would like to leave further exploration of multi-class DRAUC as future work.

---

>**Q3:** In L97, why can the Wasserstein constraint in the optimization algorithms be removed with Theorem 2?

**R3:** Thank you for your valuable question! The Wasserstein constraint is not removed in Theorem 2; it is reformulated using the strong duality of Wasserstein DRO. A detailed proof can be found in [a].

---

>**Q4:** Typos in the definition of $\ell\_{0,1}$ and Eq. (2)

**R4:** Thank you for your detailed review. We will adjust our Eq. (1) to ensure consistency with the definition of $\ell\_{0,1}$, as follows:

$AUC(f\_{{\theta}}) = {\mathbb E\_{P\_+,P\_-}\left[{\ell\_{0,1}(f\_{{\theta}}({{x}^+}) - f\_{{\theta}}({{x}^-}))}\right]}$

---

[a]Gao, Rui, and Anton Kleywegt.
"Distributionally robust stochastic optimization with Wasserstein distance." Mathematics of Operations Research 48.2 (2023): 603-655. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! --- Reply to Comment 1.1.1: Comment: Thank you for your comment! --- Rebuttal 2: Comment: Dear reviewer u8fo, We would like to kindly inquire if our rebuttal adequately addresses your concerns. If you have further questions, please feel free to ask, and we will be glad to answer.
Summary: The paper proposes an end-to-end framework to optimize the distributionally robust AUC. The authors address several key challenges of this task, and robust generalization is achieved with a theoretical guarantee. Experiments on several corrupted long-tailed benchmarks demonstrate the superiority of the proposed method. Strengths: 1. The paper addresses several key challenges in integrating AUC optimization with DRO, including computational infeasibility, an intractable solution, and label bias. The proposed method is well-motivated and technically sound. The generalization risk is bounded with a theoretical guarantee. 2. Experiments on several corrupted long-tailed benchmarks demonstrate the superiority of the proposed method. 3. The paper is well-written and clearly organized. Sufficient preliminaries on AUC and DRO are provided. Weaknesses: 1. The proposed method is only evaluated on small-scale datasets such as MNIST and CIFAR-10/100. It remains unclear whether the proposed method can be scaled to large datasets. 2. It is well known that DRO suffers from the problem of yielding overly pessimistic models with low confidence [1]. This weakness might be inherited by the proposed method since Wasserstein DRO is adopted. [1] Hu, Weihua, et al. "Does distributionally robust supervised learning give robust classifiers?." International Conference on Machine Learning. PMLR, 2018. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the proposed method may lie in scalability and the issue of overly pessimistic models.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments! We would like to clarify the following questions:

>**Q1:** The proposed method is only evaluated on small-scale datasets such as MNIST and CIFAR-10/100. It remains unclear whether the proposed method can be scaled to large datasets.

**R1:** Thank you for your constructive question! To address your concern, we carried out several experiments on large-scale datasets, including **Melanoma** and **Tiny-ImageNet-200** (due to the length limit, kindly refer to the global response for the experiment settings). Additionally, to validate the adaptability of our proposed method across various network architectures, we incorporate the new backbones **EfficientNet** and **DenseNet**. Our method surpasses all competitors on these large-scale datasets and network architectures, demonstrating its effectiveness. For clarity of presentation, we use **bold** for the highest scores. Results appended with '-C' depict testing performance on corrupted dataset versions. You may also refer to the global response for experiment settings and better-presented results (**Tables 1 and 2 in the PDF of the global response**).
**Empirical results on Melanoma:**

|Methods|Mela-C(Enet-b0)|Mela-C(Dense-121)|Mela(Enet-b0)|Mela(Dense-121)|
|:-:|:-:|:-:|:-:|:-:|
|CE|69.65|72.05|83.27|83.37|
|AUCMLoss|73.95|75.67|85.98|84.33|
|FocalLoss|65.64|72.97|78.81|84.64|
|ADVShift|70.98|71.91|80.34|81.60|
|WDRO|75.88|74.71|86.29|**85.79**|
|DROLT|71.50|76.04|82.89|83.30|
|GLOT|71.52|69.83|83.95|81.59|
|AUCDRO|74.57|76.09|**86.77**|85.51|
|DRAUC-Df|77.82|**77.89**|85.06|81.44|
|DRAUC-Da|**78.54**|76.82|82.98|81.61|

**Empirical results on Tiny-ImageNet-200 (split by hyper-class):**

|Models|Methods|Dog-C|Bird-C|Vehicle-C|Dog|Bird|Vehicle|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|resnet20|CE|78.46|85.19|87.53|93.72|94.49|97.72|
|resnet20|AUCMLoss|77.35|85.98|82.37|93.35|94.11|97.34|
|resnet20|FocalLoss|78.34|81.48|86.55|93.25|92.87|97.66|
|resnet20|ADVShift|81.20|80.94|86.65|93.70|93.53|97.66|
|resnet20|WDRO|82.20|85.23|85.92|94.46|95.50|**98.19**|
|resnet20|DROLT|80.44|86.91|86.76|93.89|**96.40**|97.86|
|resnet20|GLOT|81.96|85.89|86.80|**94.67**|96.14|98.05|
|resnet20|DROAUC|75.97|83.26|79.46|92.58|93.04|96.29|
|resnet20|DRAUC-Df|**84.11**|87.30|88.67|93.39|95.58|97.50|
|resnet20|DRAUC-Da|83.96|**87.61**|**89.06**|93.76|95.94|97.25|
|resnet32|CE|82.55|84.64|86.26|94.31|94.49|97.76|
|resnet32|AUCMLoss|77.25|85.20|81.12|93.19|95.19|97.57|
|resnet32|FocalLoss|77.96|79.80|85.33|93.41|92.85|97.78|
|resnet32|ADVShift|84.30|84.56|86.43|92.92|94.71|97.59|
|resnet32|WDRO|80.08|85.58|86.94|94.39|95.51|97.67|
|resnet32|DROLT|79.25|85.75|86.79|91.68|**96.06**|97.82|
|resnet32|GLOT|81.70|83.09|88.24|94.08|95.16|**97.92**|
|resnet32|DROAUC|78.21|80.55|85.26|91.56|93.15|96.33|
|resnet32|DRAUC-Df|**85.79**|**88.00**|88.32|**94.43**|95.29|97.37|
|resnet32|DRAUC-Da|84.56|87.60|**88.46**|94.03|95.96|97.65|

---

>**Q2:** It is well known that DRO suffers from the problem of yielding overly pessimistic models with low confidence [a]. This weakness might be inherited by the proposed method since Wasserstein DRO is adopted.
[a] Hu, Weihua, et al. "Does distributionally robust supervised learning give robust classifiers?." International Conference on Machine Learning. PMLR, 2018.

**R2: Thank you for your valuable question! We believe such an overly pessimistic problem will not appear in our setting, for the following reasons:**

While both methodologies employ DRO techniques to enhance model robustness, the distinction between approaches leveraging $f$-divergence and those using Wasserstein DRO is essential. Specifically, DRO based on $f$-divergence generates a locally most adversarial reweighting to maximize the training loss, represented as:

$ \min\_{{\theta}}\sup\_{r\in\mathcal U\_f} \mathbb E\_{P}[r(z)\ell(g\_{{\theta}},z)] $

$\mathcal U\_f \equiv \\{r(z) |\mathbb E\_{P}[f(r(z))]\le\epsilon, \mathbb E\_{P}[r(z)]=1, r(z)\ge 0 ,\forall z \in \mathcal Z \\}$

In contrast, Wasserstein DRO is designed to produce a local adversarial perturbation of the input to maximize the training loss:

$\min\_{{\theta}}\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P)\le \epsilon} ~\mathbb E\_{\widehat Q}\left[\ell(g\_{{\theta}},z)\right]$

For a clearer comparison, akin to the approach in [a], we first present the empirically approximated formulations of ERM, $f$-divergence DRO, and Wasserstein DRO:

${(ERM)}\quad\min\_{{\theta}}\frac 1 N\sum\_{i=1}^N \ell(g\_{{\theta}},z)$

${(f-divergence \ DRO)}\quad\min\_{{\theta}}\sup\_{r\in\widehat{\mathcal U}\_f} \frac 1 N {\sum\_{i=1}^N}r\_i\cdot\ell(g\_{{\theta}},z)$

$ {(Wasserstein~DRO)}\quad \min\_{{\theta}}\sup\_{z'\in\widehat{Q}} \frac 1 N\sum\_{i=1}^N \ell(g\_{{\theta}},z')$

where $\widehat{\mathcal U}\_f = \left\\{ r |\frac 1 N\sum\_{i=1}^N f(r\_i) \le \epsilon, \frac 1 N\sum\_{i=1}^N r\_i = 1, r \ge 0 \right\\},\widehat Q =\\{Q:\mathcal W\_c(Q,\widehat P)\le \epsilon\\}$

Being overly pessimistic implies that a model trained by $f$-divergence DRO may end up at a stationary point of the original ERM.
This gives us an intuition that the adversarial reweighting may be useless if structural assumptions are not introduced. As our approach does not engage in example reweighting, we avoid this overly pessimistic issue. Additionally, from an empirical perspective, we compare our results with AUCMLoss, which can be considered the ERM version of our method. The experimental results demonstrate the enhanced robustness of our approach, which contradicts the proposition of over-pessimism. Nonetheless, whether Wasserstein DRO suffers from a similar degradation problem is still an open question, and we hope to investigate this in future work.

---

[a]Hu, Weihua, et al. "Does distributionally robust supervised learning give robust classifiers?." International Conference on Machine Learning. PMLR, 2018.

---

Rebuttal Comment 1.1: Comment: Thank you for adding more results, though Melanoma is still a small dataset. One related question here is: why Wasserstein DRO? Other techniques such as adversarial training and f-divergence DRO could also improve generalization. Do the authors make comparisons to these methods?

---

Reply to Comment 1.1.1: Comment:

>**Q:** Thank you for adding more results, though Melanoma is still a small dataset. One related question here is: why Wasserstein DRO? Other techniques such as adversarial training and f-divergence DRO could also improve generalization. Do the authors make comparisons to these methods?

**A:** Thank you for your feedback! Our apologies for not providing a clear introduction in our manuscript. We do indeed draw comparisons with methods that employ f-divergence. The competitors we refer to in our paper, specifically **ADVShift** ([a]) and **DROLT** ([b]), utilize KL-divergence, a particular instance of f-divergence, as their metric. Based on our experiments, our method outperforms these methods. Regarding adversarial training, we introduce PGD as an additional competitor.
Following the setting in [c], we generate adversarial examples within an $l_2$-norm ball with a radius of $128/255$ and a step size of $15/255$. We only tune the learning rate on the validation set, and the testing results are shown in the subsequent table. **The results of PGD-10:** |Datasets|Models|0.01-C|0.05-C|0.10-C|0.20-C|0.01|0.05|0.10|0.20| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|resnet20|60.86|72.02|79.59|84.82|62.31|75.58|83.54|88.86| |CIFAR100|resnet20|52.70|58.72|59.50|60.90|53.00|59.84|60.61|62.14| |CIFAR10|resnet32|60.01|69.13|77.84|83.62|61.14|72.05|81.54|87.31| |CIFAR100|resnet32|53.05|55.25|58.69|57.30|53.33|55.82|59.67|58.20| |MNIST|resnet20|96.71|98.17|98.07|98.78|99.59|99.97|99.98|99.99| |MNIST|small_cnn|94.42|98.21|98.83|98.90|99.53|99.93|99.97|99.98| From the results, it is evident that PGD narrows the generalization gap. However, in a long-tailed setting, PGD exhibits suboptimal performance in clean scenarios, which is also potentially attributable to the robust overfitting phenomenon detailed in [c]. Thus, its robust outcomes are not as high as those of our methods. [a]Zhang, Jingzhao, et al. "Coping with label shift via distributionally robust optimisation." arXiv preprint arXiv:2010.12230 (2020). [b]Samuel, Dvir, and Gal Chechik. "Distributional robustness loss for long-tail learning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [c]Rice, Leslie, Eric Wong, and Zico Kolter. "Overfitting in adversarially robust deep learning." International Conference on Machine Learning. PMLR, 2020. --- Rebuttal Comment 1.2: Comment: Thank the authors for the rebuttal. R1 addresses my concern about the scalability of the proposed method since the results on Tiny-ImageNet-200 are provided. However, I can't entirely agree with R2. Wasserstein DRO also suffers from overly pessimistic issues as indicated by [1]. 
And f-divergence DRO is an important baseline for evaluation, but the authors did not seem to provide the results of f-divergence DRO for comparison. [1] Frogner, Charlie, et al. "Incorporating unlabeled data into distributionally robust learning." arXiv preprint arXiv:1912.07729 (2019).

---

Reply to Comment 1.2.1: Comment:

>**Q:** Thank the authors for the rebuttal. R1 addresses my concern about the scalability of the proposed method since the results on Tiny-ImageNet-200 are provided. However, I can't entirely agree with R2. Wasserstein DRO also suffers from overly pessimistic issues, as indicated by [1]. And f-divergence DRO is an important baseline for evaluation, but the authors did not seem to provide the results of f-divergence DRO for comparison.

**A:** Thank you for your constructive comment! We are glad to hear that your concern regarding scalability has been sufficiently addressed. With respect to f-divergence, we regret the oversight in our manuscript. Notably, our competitors **DROLT** ([a]) and **ADVShift** ([b]) employ DRO based on KL-divergence to enhance model robustness, and our methods consistently outperform theirs across all experiments. To our knowledge, [c] reveals a phenomenon quite different from that described in [d]. The former suggests that in Wasserstein DRO the training process may collapse, leading the model to a trivial solution, even when the perturbation radius $\epsilon$ is relatively small compared to the distance between the training distribution and the true distribution. To check whether this anomaly manifests in our setting, we conduct a simple two-step experiment on the CIFAR datasets. In the first step, we estimate an upper bound on the Wasserstein distance between the training and testing distributions. In the second step, we set $\epsilon$ to be larger than the estimated distance and see whether a similar issue occurs in our context.
**Step 1:** Assume that both the training and test sets of the original CIFAR are sampled from the same distribution, and that our corrupted CIFAR datasets are sampled from the "real distribution". Directly computing the Wasserstein distance between these distributions is intractable. However, given that the corrupted datasets are generated from the original testing sets, a good estimate of the Wasserstein distance can be obtained by calculating the mean example-wise distance between the corrupted and clean datasets, resulting in:

**The estimated Wasserstein distance between CIFAR and different levels of CIFAR-C:**

|Datasets|Level1|Level2|Level3|Level4|Level5|
|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR10|2.80|4.07|4.67|5.58|6.63|
|CIFAR100|2.83|4.11|4.67|5.57|6.61|

**Step 2:** Train our model with $\epsilon \in \\{1,2,3,4,5,6,7,8\\}$, values much larger than those used in our experiments. Of the 16 groups of experiments we run on CIFAR10 and CIFAR100, only $\epsilon=7$ on CIFAR100 fails to converge. However, by choosing a smaller learning rate, our model can avoid collapsing into a trivial solution. This experiment shows that the training collapse problem is not severe in our setting, even when $\epsilon$ is substantially larger than the distance to the true data distribution, and can be avoided by simply tuning the learning rate. However, shifting the discussion of over-pessimism to a broader perspective beyond a specific setting, we must examine whether our methods encounter performance degradation under a large, unconstrained uncertainty set. This becomes similar to what we discussed in Prop. 2, i.e., if we place no constraint on the distributional attacker, it will tend to attack the tail class, potentially causing label noise. This might typify the over-pessimistic problem inherent in our setting. Nonetheless, we have proposed a solution by adding a structural constraint, namely DRAUC-Da, in our paper. [a]Samuel, Dvir, and Gal Chechik.
"Distributional robustness loss for long-tail learning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. [b]Zhang, Jingzhao, et al. "Coping with label shift via distributionally robust optimisation." arXiv preprint arXiv:2010.12230 (2020). [c]Frogner, Charlie, et al. "Incorporating unlabeled data into distributionally robust learning." arXiv preprint arXiv:1912.07729 (2019). [d]Hu, Weihua, et al. "Does distributionally robust supervised learning give robust classifiers?." International Conference on Machine Learning. PMLR, 2018.
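The mean example-wise distance used in Step 1 of the reply above upper-bounds the Wasserstein distance, since pairing each corrupted example with the clean example it was generated from is one feasible transport plan. A minimal sketch of that estimate, assuming `clean` and `corrupted` are index-aligned arrays of images (array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def wasserstein_upper_bound(clean: np.ndarray, corrupted: np.ndarray) -> float:
    """Mean l2 cost of the identity coupling clean[i] <-> corrupted[i].

    Any feasible coupling upper-bounds the type-1 Wasserstein distance,
    and the identity coupling is natural here because each corrupted
    example is generated from the clean example at the same index.
    """
    assert clean.shape == corrupted.shape
    diffs = (clean - corrupted).reshape(clean.shape[0], -1)
    return float(np.linalg.norm(diffs, axis=1).mean())
```

Running this over each corruption level would yield a table like the per-level distance estimates reported in the reply.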
Summary: The authors propose a Distributionally Robust AUC (DRAUC) optimization method that combines Wasserstein-distance-based DRO with AUC optimization. The combination is designed to be robust to data perturbations. The authors provide theoretical analysis from the perturbation and generalization perspectives, and justify the practical performance on perturbed versions of MNIST and CIFAR10. Strengths: 1) The motivation is clear: combining DRO (perturbation robustness) with AUC. 2) The authors provide both theoretical and practical justifications. Weaknesses: 1) The idea is straightforward and the contribution is limited. The perturbation robustness idea has been explored for adversarial defense (e.g. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning, pages 7472–7482. PMLR, 2019; Yin, Dong, Ramchandran Kannan, and Peter Bartlett. "Rademacher complexity for adversarially robust generalization." International conference on machine learning. PMLR, 2019). From my viewpoint, this paper directly extends perturbation robustness to AUC learning. 2) There are some issues with the presentation: i) the $\phi_\lambda(z,\mathbf\theta)$ in line 94 doesn't make sense because $z'$ will simply be optimized to the value $z$. The problem exists throughout the main content. ii) The '=' should be removed from Eq. 15 to be consistent with other presentations. iii) The $\mathcal W$ below line 144 should be $\mathcal W_c$. iv) The notations for $\hat Q_+$ and $\hat Q_-$ are abused in Eqs. 18 and 19, which is confusing. v) Algorithm 1 and Algorithm 2 are similar and can be further simplified or merged. 3) It might be better to include (D. Zhu, G. Li, B. Wang, X. Wu, and T. Yang. When auc meets dro: Optimizing partial auc for deep learning with non-convex convergence guarantee.
In International Conference on Machine Learning, pages 27548–27573. PMLR, 2022) as another baseline for experiments because it also combines DRO with AUC, although it is not designed for perturbation robustness. 4) The Figure 3 is not very informative. I can only observe CE loss is worse than AUC-type losses. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable questions! To address your concerns, we restate our contributions, fix the typos, and add AUCDRO as a new baseline. The details are as follows:

>**Q1:** The idea is straightforward and the contribution is limited. The perturbation robustness idea has been explored for adversarial defense. From my viewpoint, this paper directly extends perturbation robustness to AUC learning.

**A1:** Thank you for your inquiry. We want to clarify that the integration of DRO and AUC optimization is not trivial, for several reasons:

- **Identifying the strong duality of Distributionally Robust AUC is challenging.** To get rid of the intractable Wasserstein perturbation, one must employ strong duality, as in [a], to reformulate the original optimization problem. Nonetheless, the strong duality becomes much more complicated when assessing performance using AUC, which necessitates a pair-wise loss.
- **The direct integration of DRO with AUC optimization imposes unaffordable computational costs.** Wasserstein DRO utilizes an adversarial training framework to perturb examples, and the inner K-step PGD is computationally expensive. The pair-wise formulation requires a time complexity of $O(N^2dK)$ and a space complexity of $O(Nd)$, where $N$ is the size of the training set and $d$ is the input dimension. Such complexities make this problem intractable for large datasets. To this end, we conduct an instance-wise reformulation, successfully reducing the time complexity to $O(NdK)$ and the space complexity to $O(Bd)$, where $B$ is the batch size.
- **The reformulation introduces an intractable optimization problem.** Unfortunately, the original min-max-min-max formulation proves to be intractable. To address this, we propose a surrogate loss which serves as an upper bound of the original objective. This is also specifically designed for our algorithm.
- **The interplay between DRO and long-tailed optimization leads to intriguing outcomes.** Under the assumption of a neurally collapsed feature space, the distributional attacker tends to target the tail classes. In defense against this, we propose distribution-aware DRAUC and evaluate its effectiveness empirically and theoretically.
- **Typical Rademacher complexity theorems for DRO and adversarial training do not suit our setting.** The vital challenge is that the symmetrization scheme is not available in the pair-wise formulation of AUC. For instance, $\ell(f_{{\theta}}(x_1) -f_{{\theta}}(x_2))$ is interdependent with $\ell(f_{{\theta}}(x_1') -f_{{\theta}}(x_2'))$ if $x_1 = x_1'$ or $x_2 = x_2'$, which makes prior techniques unsuitable in our setting. To address this issue, we propose a Rademacher complexity based on the instance-wise reformulation and solve this problem entirely in the instance-wise setting. Besides, we bound the excess risk of DRAUC via this Rademacher complexity, which is simpler and easier to use than prior analyses of the Rademacher complexity of AUC.

---

>**Q2:** The $\phi_\lambda(z,\theta)$ in line 94 doesn't make sense because $z'$ will simply be optimized to the value $z$. The problem exists throughout the main content.

**A2:** Thanks for your careful review! The term $\ell(f_\theta,z)$ on the right-hand side should actually be $\ell(f_\theta,z')$, making the corrected equation $\phi_\lambda(z, \theta) = \sup_{z' \in \mathcal Z}\{\ell(f_\theta,z') - \lambda c(z,z')\}$. We have also corrected the typos in Line 127 and Line 170.

---

>**Q3:** The '=' should be removed from Eq. 15 to be consistent with other presentations. The $\mathcal W$ below line 144 should be $\mathcal W_c$.

**A3:** We have corrected the typos. Thanks a lot!

---

>**Q4:** The notations for $\hat Q_+$ and $\hat Q_-$ are abused in Eqs. 18 and 19, which is confusing.

**A4:** Thank you for pointing out the problem.
We will modify our description to define the domain of our perturbation, $\mathcal Q = \\{Q|~\mathcal W_c(\widehat Q_+,\widehat P_+)\le \epsilon_+, \mathcal W_c(\widehat Q_-,\widehat P_-)\le \epsilon_-\\}$, and write the maximization as $\max_{Q\in\mathcal Q}$.

---

>**Q5:** Algorithm 1 and Algorithm 2 are similar and can be further simplified or merged.

**A5:** Thank you for your valuable suggestion! We have merged Algorithm 1 and Algorithm 2 by vectorizing the parameters.

---

>**Q6:** It might be better to include AUCDRO as another baseline for experiments because it also combines DRO with AUC.

**A6:** Thank you for your suggestion! We add this method as a baseline (AUCDRO), and the results are listed as follows:

|Datasets|Models|0.01-C|0.05-C|0.10-C|0.20-C|0.01|0.05|0.10|0.20|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR10|resnet20|63.35|76.19|81.82|85.96|67.14|84.00|90.92|94.88|
|CIFAR10|resnet32|65.10|71.23|81.45|86.23|68.69|78.51|90.67|95.07|
|CIFAR100|resnet20|55.96|61.65|62.67|65.72|57.14|64.74|66.59|70.66|
|CIFAR100|resnet32|56.93|61.41|64.08|68.93|58.33|64.02|68.71|73.86|
|MNIST|small_cnn|94.00|97.80|97.76|98.54|99.12|99.82|99.92|99.95|
|MNIST|resnet20|89.11|94.40|95.71|96.73|98.65|99.87|99.89|99.95|

Though AUCDRO is a state-of-the-art method for partial AUC optimization, it is not designed to improve model robustness; thus, its results on corrupted data are lower than ours.

---

>**Q7:** Figure 3 is not very informative. I can only observe that the CE loss is worse than AUC-type losses.

**A7:** Thank you for pointing out this problem! In constructing the long-tailed versions of the datasets, we assign the first half of the classes as positive and the other half as negative. This results in a large intra-class variance, making it challenging for even a well-trained classifier to generate collapsed features. To prevent any misunderstanding, we have decided to move this figure to the supplementary materials.

---

[a]Gao, Rui, and Anton Kleywegt.
"Distributionally robust stochastic optimization with Wasserstein distance." Mathematics of Operations Research 48.2 (2023): 603-655.

---

Rebuttal 2: Comment: Dear reviewer 2tJ9, We would like to kindly inquire whether our response adequately addresses your concerns. If there are any questions, please feel free to ask and we will be glad to clarify.

---

Rebuttal Comment 2.1: Comment: It is not clear why the pairwise formulation requires $O(N^2)$ complexity. Since the algorithm can always be stochastic, based on mini-batch samples, it does not need all pairs in the data. For pairs in the mini-batch, it is fairly light compared with the model size. The method based on the pairwise loss formulation is a natural baseline.

---

Reply to Comment 2.1.1: Comment:

>**Q:** It is not clear why the pairwise formulation requires $O(N^2)$ complexity. Since the algorithm can always be stochastic, based on mini-batch samples, it does not need all pairs in the data. For pairs in the mini-batch, it is fairly light compared with the model size. The method based on the pairwise loss formulation is a natural baseline.

**A:** We appreciate your insightful feedback. The primary challenge associated with the pairwise formulation lies in generating the local worst distribution. The pairwise formulation of DRAUC is defined as:

${DRAUC}\_\epsilon(f\_\theta, \widehat P) = 1 -\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P) \le \epsilon} \mathbb E\_{\widehat Q} [{\ell(f\_\theta(x^+)-f\_\theta(x^-))}]$

Generating the local worst distribution $\widehat Q$ faces two main difficulties. First, as highlighted in A1 above, the strong duality becomes very complicated when dealing with a pairwise loss; consequently, the Wasserstein perturbation becomes nearly infeasible. Second, generating $\widehat Q$ is loss-dependent, implying that the attacker needs to know all the training details to deliver a malicious attack.
In our endeavor to secure a performance guarantee for our model, we cannot limit the scope of information accessible to an attacker. This results in a computational cost that remains $O(N^+N^-dK)$, even in a stochastic setting. We will provide further explanations regarding the unsuitability of the pairwise formulation in Sec 3.1. While the pairwise formulation might not be ideal for a distributionally robust setting, it can be considered a baseline under natural training. The results are presented below: **The results of pairwise AUC (PAUC):** |Datasets|Models|0.01-C|0.05-C|0.10-C|0.20-C|0.01|0.05|0.10|0.20| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|resnet20|56.31|75.75|82.69|84.07|57.43|84.35|90.94|94.30| |CIFAR10|resnet32|61.81|75.24|80.44|84.06|65.01|81.46|89.90|94.21| |CIFAR100|resnet20|52.15|61.19|64.98|69.64|52.58|63.24|67.52|73.40| |CIFAR100|resnet32|53.52|61.72|65.21|69.17|54.24|63.65|67.73|73.30| |MNIST|small_cnn|94.58|97.24|98.39|98.90|99.14|99.86|99.92|99.96| |MNIST|resnet20|90.28|92.68|90.35|98.65|98.65|99.87|99.89|99.95| --- Rebuttal Comment 2.2: Title: Thanks for your rebuttal Comment: The reviewer has read the rebuttal and appreciates the efforts made by the authors. However, the reviewer still holds the original evaluation score. Although my previous concerns about the writing issues can be fixed, I don't feel the paper is well prepared by the deadline of NeurIPS. Moreover, the experimental results don't include variance or standard deviation and might not be repeated independently multiple times. --- Reply to Comment 2.2.1: Comment: >**Q:** The reviewer has read the rebuttal and appreciates the efforts made by the authors. However, the reviewer still holds the original evaluation score. Although my previous concerns about the writing issues can be fixed, I don't feel the paper is well prepared by the deadline of NeurIPS.
Moreover, the experimental results don't include variance or standard deviation and might not be repeated independently multiple times. **A:** Thank you for your comment! We are glad to hear that our rebuttal has addressed your previous concerns. Thanks to the reviewers' valuable suggestions, our manuscript has become more complete and convincing; we really appreciate that! Regarding error bars, we would like to clarify that, due to the considerable computational expense, it is common practice in Wasserstein DRO and adversarial training to refrain from repeated experiments, as evidenced by references [a, b, c, d]. Nevertheless, to allay your concerns, we have rerun our method on CIFAR10 three times with different seeds and report the average results and their standard deviations: **Results over 3 runs with different seeds on CIFAR10 using ResNet20 (mean and standard deviation):** |Methods|0.01-C|0.05-C|0.10-C|0.20-C|0.01|0.05|0.10|0.20| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DRAUC|65.18(1.07)|79.87(0.64)|85.78(0.48)|89.35(0.13)|68.05(0.69)|86.19(0.16)|90.78(0.37)|94.38(0.20)| |CDRAUC|65.43(0.18)|79.89(0.55)|85.74(0.41)|89.62(0.31)|67.79(0.50)|84.37(0.65)|90.69(0.34)|94.30(0.19)| The results indicate that the variance of our method is small. [a]Sinha, Aman, et al. "Certifying some distributional robustness with principled adversarial training." arXiv preprint arXiv:1710.10571 (2017). [b]Bui, Tuan Anh, et al. "A unified wasserstein distributional robustness framework for adversarial training." arXiv preprint arXiv:2202.13437 (2022). [c]Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017). [d]Jiang, Ziyu, et al. "Robust pre-training by adversarial contrastive learning." Advances in neural information processing systems 33 (2020): 16199-16210.
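As an aside for readers, the mini-batch pairwise surrogate $\mathbb E[\ell(f(x^+)-f(x^-))]$ discussed in this thread can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the squared surrogate $\ell(t)=(1-t)^2$ and all names are our own choices.

```python
import numpy as np

def pairwise_auc_loss(scores_pos, scores_neg):
    """Mini-batch pairwise AUC surrogate: mean of ell(f(x+) - f(x-))
    over all positive/negative pairs, with ell(t) = (1 - t)^2."""
    # Broadcasting materializes the full n+ x n- matrix of score
    # differences, i.e., the quadratic pair count discussed above.
    diffs = scores_pos[:, None] - scores_neg[None, :]
    return float(np.mean((1.0 - diffs) ** 2))

# Positives scored roughly one above negatives give a small loss;
# an inverted batch gives a large one.
good = pairwise_auc_loss(np.array([1.0, 1.2]), np.array([0.0, 0.1]))
bad = pairwise_auc_loss(np.array([0.0, 0.1]), np.array([1.0, 1.2]))
```

The instance-wise reformulation used in the rebuttal avoids materializing this pair matrix, which is precisely what the complexity discussion above is counting.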
Rebuttal 1: Rebuttal: Dear reviewers, we sincerely appreciate your time and valuable comments. Thanks to your detailed, insightful reviews, we introduce the following changes: - **Additional Empirical Support:** We add AUCDRO[a] as a new baseline. We conduct experiments on **large-scale datasets** including Melanoma and Tiny-ImageNet-200. We also evaluate our proposed method on **modern network architectures** including EfficientNet and DenseNet. - **Method Expansion:** We extend our algorithm to multi-class classification scenarios. - **Improvement on readability:** We merge the DRAUC-Df/Da algorithms for a more streamlined presentation and fix several typos identified by your careful reviews. We appreciate the thorough reviews. **Empirical results:** - **New baseline:** - **AUCDRO([a]):** As mentioned by Reviewer 2tJ9, it is better to compare with AUCDRO due to its integration of DRO and AUC optimization (though it is not designed to defend against perturbations). - **New datasets:** - **Tiny-ImageNet-200**: This dataset comprises 110,000 images in 200 classes, with a resolution of 64x64x3. We construct binary long-tailed versions using the hyper-class information, yielding three subsets: Tiny-ImageNet-Dogs, -Birds, and -Vehicles. Model robustness is then assessed on Tiny-ImageNet-C([b]). - **Melanoma:** Melanoma is a large-scale medical image dataset that contains 32,542 positive examples and 584 negative examples. We split the dataset into training, validation, and testing sets in a 7:1.5:1.5 ratio. Since we use K-PGD to generate perturbed examples, running experiments on a large-scale dataset is computationally expensive; we therefore downsample all images to a resolution of 224x224 and train our model with a batch size of 32. To our knowledge, no corrupted version of the Melanoma dataset exists. To assess the robustness of our proposed algorithm, we created a corrupted version following [b].
Additionally, we excluded weather-based corruptions such as snow, frost, and fog, which are implausible in the context of medical images. As a result, we incorporated 12 corruptions across five perturbation levels in the testing split of the Melanoma dataset to generate Melanoma-C. - **Multi-class CIFAR10-LT:** Multi-class CIFAR10-LT is a long-tailed version sampled from the original CIFAR10. The degree of class imbalance is indicated by the imratio, defined as the ratio of the smallest class's size to the largest. We apply a reformulation similar to AUCMLoss to compute the multi-class AUC. - **New backbones:** - **EfficientNet, DenseNet121**. **Methodological results:** We expand our DRAUC-Df and DRAUC-Da to multi-class settings; the derivation is as follows: First, we define the multi-class AUC: $AUCM(f\_{{\theta}}) = \frac{1}{N\_C}\sum\_{i=1}^{N\_C}AUC\_i(f\_{{\theta}}) = \frac{1}{N\_C}\sum\_{i=1}^{N\_C} \mathbb E\_{\widehat P\_i, {\widehat P\_i'}}[\ell(f\_{{\theta}}(x\_i) -f\_{{\theta}}(x\_i'))]$ where $N\_C$ is the number of classes, $AUC\_i$ denotes the AUC score when the $i^{th}$ class is assigned as positive and the remaining classes as negative, $\widehat P\_i$ denotes the example distribution of the $i^{th}$ class, and $\widehat P\_i'$ denotes the distribution of the remaining classes. Consistent with the notation in our paper, we use $\widehat Q$ to describe the perturbed distribution.
Following the idea of optimizing the multi-class AUC under the local worst-case distribution, we now present the objective of Multi-class DRAUC (MDRAUC): $\min\_{{\theta}}\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P)\le \epsilon} ~\mathbb E\_{\widehat Q}\left[\frac 1{N\_C}\sum\_{i=1}^{N\_C}AUC\_i(f\_{{\theta}})\right]$ Upon applying the instance-wise reformulation, we obtain $\min\_{{\theta}}\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P)\le \epsilon} \frac 1{N\_C}\sum\_{i=1}^{N\_C}\min\_{{(a\_i,b\_i)\in \Omega\_{a,b}}}\max\_{{\alpha\_i \in \Omega\_{\alpha} }} ~\mathbb E\_{\widehat Q}\left[g\_i(a\_i,b\_i,\alpha\_i,{{\theta}},z)\right]$ where $g\_i(a\_i,b\_i,\alpha\_i,\theta,{z})= (1-p\_i)\cdot (f\_\theta({x}) - a\_i) ^ 2 \cdot \mathbb I\_{[y = i]} + p\_i \cdot (f\_\theta( x) - b\_i) ^ 2 \cdot \mathbb I\_{[y \ne i]} $ $\qquad\qquad\qquad\qquad\qquad+ 2\cdot(1+\alpha\_i)\cdot(p\_i \cdot f\_\theta( x) \cdot \mathbb I\_{[y \ne i]} - (1-p\_i)\cdot f\_\theta( x) \cdot \mathbb I\_{[y = i]} - p\_i(1-p\_i)\cdot\alpha\_i^2).$ Here, $p\_i = \frac{n\_i} N$ denotes the proportion of examples in the $i^{th}$ class. Moreover, with ${\theta}$ and $\widehat Q$ fixed, the variables $a\_i$ and $a\_j$ are decoupled for all $i \ne j$, as are the $b\_i$ and the $\alpha\_i$. Thus, we can safely interchange the summation and the inner min-max, resulting in $\min\_{{\theta}}\max\_{\widehat Q:\mathcal W\_c(\widehat Q,\widehat P)\le \epsilon} \min\_{{(\vec a,\vec b)}}\max\_{{\vec\alpha }} ~\mathbb E\_{\widehat Q}\left[\frac 1{N\_C}\sum\_{i=1}^{N\_C}g\_i(a\_i,b\_i,\alpha\_i,{{\theta}},z)\right]$ where $\vec a = \\{a\_1,...a\_{N\_C}\\},\vec b = \\{b\_1,...b\_{N\_C}\\},\vec \alpha = \\{\alpha\_1,...\alpha\_{N\_C}\\}$.
Using a technique similar to that of Sec 3.2, we obtain our formulation of MDRAUC-Df as follows ${(MDf\star)}\quad\min\_{{{\theta}},\vec a,\vec b} \min\_{\lambda > 0}\max\_{\vec \alpha} \left\\{\lambda\epsilon + \mathbb E\_{\widehat{ P}}[\widehat \varphi\_{{{\theta}}, \vec a, \vec b, \vec \alpha, \lambda}(z)] \right\\}$ and $\widehat \varphi\_{{{\theta}}, \vec a, \vec b, \vec \alpha, \lambda}(z) = \max\_{z' \in \mathcal Z}\left[\frac 1{N\_C}\sum\_{i=1}^{N\_C}g\_i(a\_i,b\_i,\alpha\_i,{{\theta}},z') - \lambda c(z,z')\right].$ --- [a]Zhu, Dixian, et al. "When auc meets dro: Optimizing partial auc for deep learning with non-convex convergence guarantee." International Conference on Machine Learning. PMLR, 2022. [b]Hendrycks, Dan, and Thomas Dietterich. "Benchmarking neural network robustness to common corruptions and perturbations." arXiv preprint arXiv:1903.12261 (2019). Pdf: /pdf/be374cffd5d3a5cb90c4cd4755c5092918132d04.pdf
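For concreteness, the per-example term $g_i$ of the instance-wise multi-class reformulation in the rebuttal above can be transcribed directly into code. This is a minimal sketch; the argument names are ours.

```python
def g_i(a_i, b_i, alpha_i, p_i, score, y, i):
    """Per-example term g_i of the instance-wise multi-class AUC
    reformulation: `score` is f_theta(x), `y` is the label, and
    `p_i` is the proportion of examples in class i."""
    pos = 1.0 if y == i else 0.0   # indicator [y = i]
    neg = 1.0 - pos                # indicator [y != i]
    return ((1 - p_i) * (score - a_i) ** 2 * pos
            + p_i * (score - b_i) ** 2 * neg
            + 2 * (1 + alpha_i) * (p_i * score * neg
                                   - (1 - p_i) * score * pos
                                   - p_i * (1 - p_i) * alpha_i ** 2))
```

Averaging this term over classes and examples recovers the inner objective of MDRAUC above.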
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Fast Optimal Transport through Sliced Generalized Wasserstein Geodesics
Accept (spotlight)
Summary: The paper introduces a variant of slicing of the Wasserstein distance that keeps the optimal transport map property (in particular, existence of an optimal transport map). The typical formulation of slicing loses the Monge map as each slice comes with its own map and there is no natural way to combine maps coming from different slices. As in the Wasserstein case, the variant of the sliced optimal transport distance introduced here is efficient to compute, and performs similarly (both theoretically and in practice) to the sliced Wasserstein distance. Strengths: The idea is very nice. A major advantage of sliced distances is their numerical efficiency, but the big drawback is the lack of maps. By being able to define optimal transport maps the authors have removed a significant disadvantage, whilst keeping the main advantage. Weaknesses: (1) More theoretical background/justification, such as when minimizers in Definition 4.1 exist, would be helpful. (2) The distance defined in eq. (13), since it can be written as the square root of \int_{R^d} \| T^{\nu\to\mu_1}(x) - T^{\nu\to\mu_2}(x)\|^2 d \nu(x), is the linearized Wasserstein distance. The authors should connect to the literature on linearized optimal transport distances. (3) There were MANY grammatical and spelling errors in the paper, including misspelling Wasserstein (line 48). The paper needs a very careful readthrough. This is my reason for scoring the paper as a 2 in presentation. I felt the explanations were otherwise quite good, and I would be happy to give a higher score once corrections have been made. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (a) Connected to (1) in the weaknesses section: can you give sufficient and/or necessary conditions for existence of minimizers in Definition 4.1? (b) What would you do if minimizers do not exist? Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There was no discussion on limitations of the method, although a few directions for further work was proposed in the conclusions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review and for the positive evaluation of our paper. We now provide detailed answers to your questions. - **Existence of minimizer in Definition 4.1.** The existence of $\mu_\theta^{1\to2}$ is always guaranteed, whenever $\mu_1,\mu_2$ are two empirical probability measures. This follows from the fact that the Wasserstein mean (aka McCann interpolant with $t=1/2$) is always explicit in this case (you can refer to [1] Remark 7.1 for example). We propose to include a reference to this property in the background section of the final version. - **Link with the linearized OT.** You are correct; min-SWGG is indeed a linearized version of the Wasserstein distance where the ground measure $\nu$ is chosen carefully. We provided a brief overview of this property in Section C ('Linear Optimal Transport') of the supplementary material. We propose to strengthen this discussion with more literature on linearization. It would be interesting to elaborate on the importance of the ground measure in approximating the Wasserstein distance using the $L^2(\nu)$ distance. For instance, when the ground measure is chosen to be the Wasserstein mean of the two distributions the distance obtained through linearization equals the true Wasserstein distance. - **Grammatical and misspelling errors.** We apologize for the English errors that appear throughout the paper. We are committed to ensuring thorough proofreading in the final version. We have already addressed some issues using online tools and intend to have the paper reviewed by a native English speaker if it is accepted. [1] Peyré, G., & Cuturi, M. (2019). Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6), 355-607.
--- Rebuttal Comment 1.1: Comment: I thank the authors for their response and I've increased my score as I'm happy with their response (I would particularly encourage the authors to be as thorough as possible with their grammar/spelling review, as scientifically I enjoyed reading the paper).
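As a side note, the explicitness of the Wasserstein mean (McCann interpolant at $t=1/2$) for two uniform empirical measures, invoked in the rebuttal above, can be sketched with a brute-force matching. This is a minimal NumPy illustration for tiny point clouds, not the authors' implementation.

```python
from itertools import permutations
import numpy as np

def wasserstein_mean(x1, x2):
    """Wasserstein mean of two uniform empirical measures with n points
    each: find the optimal matching under the squared Euclidean cost
    (brute force over permutations, so tiny n only), then average the
    matched points; the result is again an n-point empirical measure."""
    n = len(x1)
    best = min(permutations(range(n)),
               key=lambda p: sum(float(np.sum((x1[i] - x2[p[i]]) ** 2))
                                 for i in range(n)))
    return 0.5 * (x1 + x2[list(best)])

x1 = np.array([[0.0, 0.0], [1.0, 0.0]])
x2 = np.array([[0.0, 1.0], [1.0, 1.0]])
mid = wasserstein_mean(x1, x2)  # midpoints [[0.0, 0.5], [1.0, 0.5]]
```

In practice the matching would be computed with a dedicated assignment solver rather than by enumeration; the point is only that once the matching is known, the mean is explicit.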
Summary: This paper proposes an approximation to the squared Wasserstein distance between probability measures (typically required in high dimensional space) based on the sliced Wasserstein paradigm. The general idea is to project points onto a line where computing Wasserstein distances becomes significantly simpler (a typical case involves only permutations and sorting). The authors build up their main construction - min SWGG, by leveraging the properties of prior works like Max SW and projected sliced Wasserstein distance (PWD) and derive a tighter upper bound for the squared Wasserstein distance (W2) in comparison to PWD leading to the definition of min-SWGG. To put it very simply min-SWGG is closely related to the PWD, where the minimum is chosen instead of the average over all possible angles. The authors then relate the min-SWGG formulation to Wasserstein Generalised Geodesics - which involves a third measure (called pivot) in order to approximate the actual Wasserstein distance. By choosing the pivot measure to lie on a line, this connection enables a more tractable way to optimize for the best angle/plane to compute min-SWGG. Experiments are demonstrated for (1.) Image Color Transfer (2.) PointCloud Matching (3.) Empirical demonstration of min SWGG optimization. The results demonstrate a proof of concept (i.e. examples showing the min-SWGG can be computed with different options quite reasonably) and some cases of good accuracy and faster computation Strengths: - Overall, I found this paper to be very comprehensive in its core formulation of min-SWGG. This paper appears to rate highly in its theoretical contributions - (1.) The definition of a new OT approximation with interesting bounds to the actual WD and PWD (2.) Connecting min SWGG to the Generalized Wasserstein Geodesic using a pivot measure that lies on a line thereby showing a path for computational tractability (3.) Proofs of metricity, weak convergence, computational and topological properties etc. 
- The paper is written well with a good introduction and fairly self-contained background Weaknesses: - The practical aspect of this paper is sort of underwhelming. It would have been very convincing to demonstrate on truly high dimensional applications. Most experiments are very synthetic, and despite a modest experimental section, I am still grasping for situations where min-SWGG outperforms SWD and perhaps WD in terms of computability, quality of distances and transport maps, barycenters etc. - To this end, despite the commentary on the use of cumulative distributions in the supplementary (enabling transport plans instead of maps), I could not place any impactful demonstration. That being said, almost all of the narrative is based on matching uniform distributions with equal numbers of points, and it's unclear how to generalize from here. - Generally, most figures could do with a more descriptive caption. Especially Figure 4 which I am finding really hard to parse. What does the box with Gaussian 500d indicate? Which are the source and target distributions? What is the message of this experiment? (I suspect it is that with enough iterations, the optimization of min-SWGG provides lower W2 distances - please clarify) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I am curious to visualize the difference between the transport plan obtained using min-SWGG in comparison to the optimal transport plan (i.e. corresponding to WD). This could potentially be visualized on the Pointcloud example. - How do the other SW distances (SW, max-SW, PWD etc.) compare in figure 3 (left)? - What is the evaluation for some other metric in the point cloud example, for eg the chamfer distance, L2 error after alignment etc. Is there a specific reason to report only the sinkhorn divergence? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See Weaknesses for technical limitations. I am not an expert, but I think this paper has no direct negative societal impact. Overall, I am inclined positively on this paper. The new definition and approximation are derived nicely and interesting connections are made to the Wasserstein Generalised Geodesics framework. However, the practical impact (even amongst competing optimal transport techniques) is not equally impressive, and at this point unclear if it is as widely applicable. Put together, I vote a weak acceptance, with an intention to re-assess more carefully after the rebuttal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments; we provide answers to the questions and feedback on the reported weaknesses below. - **The practical aspect of min-SWGG is underwhelming.** In the paper, we conducted several experiments in a wide range of scenarios, with the aim of highlighting the interest of min-SWGG. We first aimed at illustrating some properties of min-SWGG, e.g. i) study of the random search and optimization scheme (Sec. 5.1), ii) study of the runtimes wrt competitors (Sec. 5.1), iii) study of its weak convergence property (Sec. 5.2). We also provided some results in different contexts, such as color transfer and pan sharpening (Sec. 5.3, E.4 and E.6), point cloud registration (Sec. 5.4) and computing distances between datasets (Sec. E.7). We showed that the optimization scheme is relevant in high dimensional settings (5.1 and 5.2), that its computation is fast wrt competitors (Sec. 5.1 and 5.4) and that it comes with a transport plan (which is not the case for SWD, Sec. 5.3 and 5.4). It is true that we did not elaborate on a flagship application, and we hope that future studies will illustrate the relevance of min-SWGG in a wider set of contexts. - **Visualize the transport plan of min-SWGG.** We propose to include examples of the transport plan in the ICP context in the final version. Meanwhile, Figure 1 (of the rebuttal's PDF) illustrates the transport plan of min-SWGG and plain OT on two examples of Gaussian or bimodal distributions. - **Evolution of SW, max-SW, PWD, etc. in Figure 3.** It is entirely feasible to incorporate the competitors into the figure. However, the differing orders of magnitude would degrade the readability of the figure. The attached Table 3 in the PDF accompanying the authors' main rebuttal offers an illustration of this magnitude difference in a distinct yet related setup.
- **Regarding the ICP experiment.** Tables 1 and 2 (of the rebuttal's PDF) provide additional results for the ICP experiment (with $n=500$ and $n=3000$) and show that min-SWGG performs favorably when considering the squared Frobenius or Chamfer distance.
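To make the ICP comparison concrete, the two correspondence rules in play (nearest neighbor vs. an OT matching) can be contrasted in a short sketch. This is a minimal NumPy illustration with a brute-force assignment for tiny clouds; the function names are ours, not the authors' code.

```python
from itertools import permutations
import numpy as np

def nn_match(src, tgt):
    """Nearest-neighbor correspondence: each source point picks its
    closest target, so several sources may share one target."""
    d = np.sum((src[:, None, :] - tgt[None, :, :]) ** 2, axis=2)
    return np.argmin(d, axis=1)

def ot_match(src, tgt):
    """OT (assignment) correspondence: a one-to-one matching that
    minimizes the total squared distance (brute force, tiny n only)."""
    n = len(src)
    return np.array(min(
        permutations(range(n)),
        key=lambda p: sum(float(np.sum((src[i] - tgt[p[i]]) ** 2))
                          for i in range(n))))
```

Nearest neighbor may collapse several sources onto the same target, whereas the OT matching is always a permutation, which is the transport-plan property the rebuttal emphasizes.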
Summary: This paper proposes a novel proxy of the squared Wasserstein distance called min-SWGG, which is based on the Sliced-Wasserstein distance (SWD) and the projected Wasserstein distance (PWD). This quantity is also shown to be related to Wasserstein Generalized Geodesics with a pivot measure, which makes it more interesting. The theoretical properties of min-SWGG have been discussed, including the topological properties and the relationship with Wasserstein Generalized Geodesics. The authors provide two efficient algorithms for computing min-SWGG. The experimental results indicate better computational efficiency of the proposed algorithm compared to state-of-the-art methods. Strengths: The paper is well written and clear, and the approach is well supported by the theoretical analysis. The experiments and applications seem strong and nice. The authors also made comparisons to other state-of-the-art methods, which makes the paper more complete. Weaknesses: - One question about the approach in line 140: In Remark 3.3 the authors mentioned the overall computational complexity to calculate min-SWGG by random search for L times. I wonder if there is any upper bound for it when the dimension d is high? Or are there any results showing the possible relation between L and the dimension d, or other quantities? - For the second row of Figure 2, I notice that for Gaussian 500d, min-SWGG (optim) performs the best, but for the left one, min-SWGG (optim) may behave worse. What might be the issue? Could you maybe elaborate on it a little bit? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have not found the limitations of the approach to be addressed in the paper. Maybe the authors can elaborate more on these. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comment regarding the behavior of min-SWGG when $d$ varies. It is indeed an interesting question, and we do believe that min-SWGG is a particularly interesting approximation in this context. First, we recall that, when $d > 2n$, equality between $W_2^2$ and min-SWGG holds (Prop. 3.2). - **Behavior of the number of random directions with the dimension.** We expect that, as $d$ increases, a larger number of projections $L$ will be required to achieve a decent approximation. Empirically, we also observe that within a broad range of scenarios, it is sufficient to take $\mathcal{O}(L^{d-1})$ directions, similar to SWD [2] or max-SW [1]. A deeper investigation of this behavior is deferred to future work. This behavior serves as motivation for designing an optimization scheme through gradient descent. - **Variability of min-SWGG optimization with dimension in gradient flow experimentation.** You are correct: when $d$ is small, it can be observed that min-SWGG (optim) converges a little more slowly than min-SWGG (random search). This is due to the fact that min-SWGG (optim) optimizes a smooth and non-convex surrogate of min-SWGG (see eq. 16). As such, the quality of the minimum found by descent depends on the choice of the initialization. Empirically, we find that solutions provided by this surrogate may differ from those of min-SWGG (random search), even though they are close. The gradient flow experiment is run for 2000 iterations, which accounts for the final difference. [1] Deshpande, I., Hu, Y. T., Sun, R., Pyrros, A., Siddiqui, N., Koyejo, S., ... & Schwing, A. G. (2019). Max-sliced wasserstein distance and its use for gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10648-10656). [2] Zhang, J., Ma, P., Zhong, W. and Meng, C. . Projection-based techniques for high-dimensional optimal transport problems. 
Wiley Interdisciplinary Reviews: Computational Statistics, 15(2):e1587, 2023. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their reply. My assessment remains inclined to the positive.
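The random-search variant of min-SWGG discussed in this thread admits a very short sketch (a minimal NumPy version, not the authors' implementation): each unit direction induces a matching by sorting the two projected clouds, and one keeps the direction whose matching gives the smallest cost back in $\mathbb{R}^d$.

```python
import numpy as np

def min_swgg_random_search(x, y, n_dirs=100, seed=0):
    """min-SWGG by random search over unit directions: sorting the 1-D
    projections of both clouds induces a matching; evaluate the matched
    squared cost back in R^d and keep the smallest value found.
    The result upper-bounds the true squared Wasserstein distance."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    best = np.inf
    for _ in range(n_dirs):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        sx, sy = np.argsort(x @ theta), np.argsort(y @ theta)
        cost = np.mean(np.sum((x[sx] - y[sy]) ** 2, axis=1))
        best = min(best, cost)
    return float(best)
```

As a sanity check, when `y` is a permutation of `x`, almost every direction recovers the exact matching and the estimate is zero.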
Summary: The paper proposes a new formulation and algorithm to approximate the Wasserstein 2-distance (i.e., the distance of two points is the squared Euclidean one) between two distributions. Though some of the development is general, the key parts and the algorithms apply only to distributions that are sums of delta functions with the same weights. The approach builds on methods that approximate the true WD using 1-D slices of the distributions, between which the distance can be computed cheaply as a pointwise sum of squared differences of sorted distributions. The basic Sliced WD approximates the true WD by the integral of the distance between such 1-D projections over all directions (eq. (5)). Its approximation is Max-Sliced WD, which replaces the integral by maximization (eq. (6)), leading to a lower bound on the true WD. Another existing sliced formulation is Projected WD, which approximates the true WD by the integral, over all directions, of the pointwise sum of squared differences between the two distributions in which the points are sorted as if the distributions were projected onto the direction (eq. (7)). The approach proposed in the manuscript replaces the integral here by minimization (eq. (9)), which is called Min-SWGG. It yields an upper bound on the true WD. This problem is non-convex, in fact discontinuous, because the objective jumps with every change of either permutation (the permutations sort the points on the projections). Therefore, it is smoothed and at the same time reformulated using the concepts from differential geometry of WD, generalized geodesics. This leads to a smooth non-convex problem, approximating the initial min-WDGG formulation. Its objective for a given direction can be evaluated cheaply: its most expensive part is the WD between a general point cloud and a point cloud on a line, which can be computed cheaply by just sorting 1-D distributions.
The experiments compare the Min-SWGG, optimized either by smoothing and gradient descent or by random search or by simulated annealing, with existing (approximate) WD methods on synthetic data. Then it is illustrated on colorization of gray-scale images by OT, where computing the exact OT is intractable. Then it is demonstrated on point cloud registration by the ICP algorithm, where the point correspondence in every ICP iteration is computed either by nearest-neighbor or OT. Strengths: The problem addressed is useful in a number of applications. Technically sound. Clearly enough written. Weaknesses: The formulation of min-SWGG (eq. (9)) is rather straightforward. Namely, approximating the integral (7) by the minimization (9) is analogous to approximating the integral (5) by the maximization (6). In this sense, I see the novelty of the paper as rather incremental. It is inconvenient that a lot of non-trivial information is in the supplement (I wonder if this format of presentation is desirable for NIPS). E.g., in the main paper the problem is restricted to the distributions from $P_2^n(R^d)$, i.e., unweighted averages of delta-distributions (see line 116), so that the exact WD (formula (1)) reduces to a linear assignment problem with the costs being squared Euclidean distances, for which the transport plan is a permutation. I.e., given two clouds of points in R^d with n elements, we want to find a permutation of one point cloud such that the sum of squared Euclid. distances between the points is minimized. In this restricted formulation, the min-SWGG (9) just optimizes over a subset of all possible permutations, given by sorting the projections of the distributions onto some direction. It is mentioned that this formulation can be extended to arbitrary distributions, but the details of this extension are just sketched in the supplement (section A.2). It is not clear from A.2 what would be the complexity of the algorithm in this general setting.
In particular, I do not see why the transport plan (eq. (4) in A.2) should always have only $n+m-1$ nonzeros. Moreover, the experiments are done only for the restricted setting. Details of some experiments are also often given only in the supplement. E.g., Section 5.3 has too little details to me, so that I do not understand how exactly image colorization is formulated. The additional info in the supplement is not helpful. In experiments, Table 1 (report on point cloud registration by ICP) might be unfair to the nearest-neighbor method because the table reports Sinkhorn divergence of the registered clouds, which is close to what is optimized in min-SWGG. A better criterion to report might be the difference between the estimated and the ground-truth transformation (assuming that the latter is available, which trivially is e.g. when the target cloud is constructed by transforming the source cloud). Moreover, NN might perform well in the regime when the point clouds are initially close to each other. Minor comments: - The formulas for $\mu_1,\mu_2$ on line 116 are repeated on lines 122. The text could be optimized such that they appear only once. - Denoting the distances (8) and (9) as (min-)SWGG is confusing because generalized geodesics (GG) are not needed for their definitions (they are needed only later to reformulate the definitions). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1) In the restricted setting of unweighted discrete distributions (i.e., from $P_2^n(R^d)$), the problem (1) reduces to linear assignment problem with costs being squared Euclidean distances. This is a classical problem, for which a great number of exact and approximate algorithms have been proposed. Unfortunately, I am not that familiar with existing approximate algorithms for this problem. 
Did you double check that there is no existing approximate algorithm for this problem that would compare in efficiency with your algorithm but which would however not generalize to arbitrary distribution (e.g., various primal heuristics, moving between permutations by local changes)? In other words, is the strength of the proposed approach only in its generalizability to arbitrary distributions, or already for the distributions from $P_2^n(R^d)$? Q2) In particular, the experiments do not convince me that min-SWGG is more efficient than Sinkhorn. The right-most graph in Figure 3 does not make clear to what accuracy the individual algorithms were run - in other words, it mixes accuracy and runtime (it does not compare the runtimes for the same achieved accuracy, or vice versa). Note, Sinkhorn keeps only dual variables and it need not keep the cost matrix (n-by-n) explicitly, so its memory complexity scales up well with n and d. Please comment. Q3) Please comment on parallelizability of the algorithm. I guess that the gradient descent to minimize (9) is not easy to parallelize. Of course, random search is. Q4) A suggestion: for small $d$, it might be interesting to try optimizing (9) (non-smoothed) by derivative-free methods, such as Nelder-Mead. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The experiments are done only for unweighted discrete distributions. The experiments do not convincingly show that min-SWGG is more efficient than, e.g., Sinkhorn. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
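The reviewer's point in Q2 — that Sinkhorn need not keep the dense n-by-n cost matrix and can store only dual variables — can be illustrated with a minimal sketch. This is our own illustration, not the submission's code; it assumes uniform weights on two point clouds and recomputes kernel rows on the fly, so memory stays O(n + m):

```python
import numpy as np

def sinkhorn_low_memory(x, y, eps=1.0, n_iter=200):
    """Entropic OT between two uniform point clouds, storing only the
    dual scalings u, v (O(n + m) memory); kernel rows are rebuilt on the fly."""
    n, m = len(x), len(y)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    u, v = np.ones(n), np.ones(m)

    def Kv(v):  # (K v)_i = sum_j exp(-||x_i - y_j||^2 / eps) v_j, row by row
        return np.array([np.exp(-np.sum((x[i] - y) ** 2, axis=1) / eps) @ v
                         for i in range(n)])

    def KTu(u):  # (K^T u)_j, column by column
        return np.array([np.exp(-np.sum((x - y[j]) ** 2, axis=1) / eps) @ u
                         for j in range(m)])

    for _ in range(n_iter):
        u = a / Kv(v)
        v = b / KTu(u)

    # Entropic transport cost <P, C>, with P_ij = u_i K_ij v_j, row by row.
    cost = 0.0
    for i in range(n):
        c_row = np.sum((x[i] - y) ** 2, axis=1)
        cost += u[i] * (np.exp(-c_row / eps) * c_row) @ v
    return cost
```

The trade-off, as the reviewer notes, is that avoiding the stored kernel means recomputing its entries each iteration, so the comparison of runtime at matched accuracy remains the relevant question.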
Rebuttal 1: Rebuttal: We thank you for your work in revising our paper and for the valuable comments and questions. - **The formulation of min-SWGG is straightforward**. We respectfully disagree. You are correct in stating that PWD and min-SWGG share some similarities. While the definition of min-SWGG might appear simple in eq. (9), it can also be reformulated thanks to Wasserstein Generalized Geodesics (eq. 14), which is far from being trivial. Additionally, we want to emphasize that min-SWGG's appeal lies in two key properties: i) its ability to provide a transport plan, and ii) its suitability for optimization via gradient descent. These two properties do not hold with the original PWD loss. - **The color transfer experiment lacks sufficient details.** Details on the experiments can be found in [1]. More details will be added to the supplementary material. - **Regarding the ICP experiment.** Tables 1 and 2 (of the rebuttal's pdf) provide additional results for the ICP experiment (with $n=500$ and $n=3000$) and show that min-SWGG performs favorably when considering square Frobenius or Chamfer distance. - **The transport plan has (at most) n+m-1 non-null values.** This is a general property of an OT plan (see [2], Proposition 3.4, for example). In our case, the transport plans are optimal between $\mu_i$ ($i=1,2$) and the pivot measure $\nu$. This ensures that they have at most $n+m-1$ non-null values. - **Existing approximations of the linear assignment problem.** We thank the reviewer for this question. To the best of our knowledge, there is no algorithm that solves approximately (with guarantees) the linear assignment (Monge) problem with a complexity lower than quadratic (compared to our superlinear one), without additional assumptions about the nature of the problem/distributions. A relevant reference can be found in [3]. 
The reviewer is also correct in stating that the strength of the proposed approach also lies in its generalization to arbitrary distributions. - **Comparison with the Sinkhorn algorithm.** In the right part of Figure 3, we ran all the algorithms until convergence. Specifically, Sinkhorn was computed in a favorable convergence setup ($\epsilon=1$). Indeed, Sinkhorn can avoid storing the dense cost matrix by keeping only the dual variables, which reduces the memory cost of the method. min-SWGG is more memory-efficient than Sinkhorn as it only stores the permutations ($n$ values). In the case of $\mu_1 \in \mathcal{P}_2^{n_1}(\mathbb{R^d})$ and $\mu_2 \in \mathcal{P}_2^{n_2}(\mathbb{R^d})$ with $n_1\neq n_2$, min-SWGG requires storing the quantile function, which consists of at most $n+m-1$ values. - **Parallelization of SWGG.** The reviewer is right. In terms of parallelization, min-SWGG is comparable to the Sliced Wasserstein methods: it is parallelizable for the random search but not for the gradient-based optimization scheme. - **Gradient-free and Nelder-Mead methods.** We appreciate the reviewer's suggestion. We believe that gradient-free optimization schemes for min-SWGG could be practically beneficial, and we will closely examine them in our future investigations. However, let us note that the Nelder-Mead method requires the function to optimize to be continuous, which is not the case in our specific setting. Furthermore, when the measures belong to $\mathcal{P}_2^n(\mathbb{R^d})$, we are currently exploring differentiable sorting methods [4] that optimize directly over the permutahedron (the convex hull of permutations). These methods allow us to maintain the $O(n \log(n))$ complexity of the approach, and might be more efficient than the ad-hoc, yet simple, solution presented in our paper. [1] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. (2014). Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3), 1853-1882. [2] Peyré, G., & Cuturi, M. (2019).
Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6), 355-607. [3] Duan, R., & Pettie, S. (2014). Linear-time approximation for maximum weight matching. Journal of the ACM (JACM), 61(1), 1-23. [4] Blondel, M., Teboul, O., Berthet, Q., & Djolonga, J. (2020). Fast differentiable sorting and ranking. ICML. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I like the paper and will be happy to see it published. Some comments on comments: In Q1, I referred to the linear assignment (LA) problem **with costs being squared Euclidean distances** (i.e., we have two sets of points and we seek to permute one of the sets so that the sum of squared distances of corresponding point pairs is minimized), not just to the general LA problem. I am surprised that I did not hear about this problem before - it is so simple and natural! I wondered how much easier (in complexity) this problem is compared to the general LA problem and whether some classical approximation algorithms (other than for the general LA problem) exist. (I apologize that this question is not very objective.) Along the same line, it might be interesting to see how the (relatively complicated) machinery of generalized geodesics would simplify when we restrict the problem only to the above special case (i.e., with distributions from $P_2^n(R^d)$). --- Reply to Comment 1.1.1: Comment: We thank the reviewer for his/her positive assessment of our paper. The question raised by the reviewer ("Is there an efficient approximation of the Monge problem when the cost is the squared Euclidean distance?") is indeed of interest and relevant for our problem. This Monge problem not only differs from a general LA problem by the fact that the considered graph is bipartite, but also because of the specific nature of the cost. It is at the heart of computational optimal transport problems.
Let us start with the case where one of the two measures is absolutely continuous (*w.r.t.* the Lebesgue measure): there, it is possible to characterize the mapping as the gradient of a convex function, thanks to the Brenier theorem. This has led to several popular approaches to approximate the transport mapping, with kernels for instance [1] or input convex neural networks [2]. However, when the two measures are discrete and with the same number of atoms, one needs to rely on approximate solutions to avoid the supercubic complexity of an exact solver (based on the network simplex). Methods such as the Sinkhorn strategy discussed in the paper can be used [3]. Ad-hoc methods, such as hierarchical/multi-scale approaches [4] or convolution-based ones [5], can be designed to lower the computational cost by leveraging the special structure of the squared Euclidean cost, but they mostly rely on specific assumptions about the structure of the samples (*e.g.* regular grids) to be very efficient. In the specific case of multiscale methods [4], it is also unclear if one can simply backpropagate through this type of solver, rendering it impractical for machine learning applications. We will complete the related work of our paper with those missing references. Thanks again for your work and time on this paper. [1] Perrot M., Courty N., Flamary R., and Habrard A. (2016) Mapping Estimation for Discrete Optimal Transport. In NeurIPS, Barcelone, Spain [2] Makkuva, A., Taghvaei, A., Oh, S., & Lee, J. (2020). Optimal transport mapping via input convex neural networks. In ICML (pp. 6672-6681). [3] Altschuler J, Weed J, Rigollet P (2017) Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In: Proceedings of NIPS 2017, pp 1961–1971 [4] Schmitzer B (2016) A sparse multiscale algorithm for dense optimal transport. J Math Imaging Vis 56(2):238–259 [5] Solomon J., de Goes F., Peyré G., Cuturi M., Butscher A., Nguyen A., Tao Du, and Guibas L.
(2015) Convolutional Wasserstein distances: efficient optimal transportation on geometric domains. ACM Trans. Graph. 34, 4, Article 66
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments and answer their questions below. We hope our clarifications address the reviewers' concerns. Reviewers 5WTB and CJnA discussed the fact that the min-SWGG formulation is derived in the restricted $\mathcal{P}_2^n(\mathbb{R^d})$ setting and that the extension to the general $\mathcal{P}_2(\mathbb{R^d})$ setting is not clear. We propose a general answer to clarify this aspect. We chose to derive the full article in the $\mathcal{P}_2^n(\mathbb{R^d})$ setting for the sake of readability. Solving OT for generic 1D distributions is based on the generalized quantile function (as stated in A.2; see also [1], remark 2.30). Since min-SWGG relies on solving the 1D OT problem between the projected distributions (see eq. 8), it can be straightforwardly extended to generic distributions. In the final version, we propose to make this more explicit by providing thorough details on the construction of the general version of min-SWGG (i.e. the setting where $\mu_1 \in \mathcal{P}_2^{n_1}(\mathbb{R^d})$ and $\mu_2 \in \mathcal{P}_2^{n_2}(\mathbb{R^d})$ with $n_1\neq n_2$), along with the corresponding code. Additionally, we provide some figures and tables regarding the questions and propositions of the reviewers in the following pdf file. Thank you again for the time and expertise put into reviewing this paper. [1] Peyré, G., & Cuturi, M. (2019). Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6), 355-607. Pdf: /pdf/19b88ab77b433842d677d46668515500ca619b44.pdf
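The 1D mechanism behind min-SWGG discussed above — solve a closed-form 1D OT problem on the projections, then evaluate the induced matching in the original space — can be sketched for equal-size unweighted point clouds. This is our own illustrative sketch (function names and the random-search budget are assumptions, not the authors' code):

```python
import numpy as np

def swgg_cost(x, y, theta):
    """Cost of the matching induced by a direction theta: project the two
    equal-size point clouds x, y (shape (n, d)) onto theta, sort the
    projections, and evaluate the matched squared cost in R^d."""
    theta = theta / np.linalg.norm(theta)
    sigma_x = np.argsort(x @ theta)  # permutation sorting the projections of x
    sigma_y = np.argsort(y @ theta)  # permutation sorting the projections of y
    # 1D OT matches sorted projections; the plan is lifted back to R^d.
    return np.mean(np.sum((x[sigma_x] - y[sigma_y]) ** 2, axis=1))

def min_swgg_random_search(x, y, n_dirs=500, seed=0):
    """Approximate the minimum over directions by random search on the sphere
    (the gradient-based scheme in the paper is an alternative)."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_dirs, x.shape[1]))
    costs = [swgg_cost(x, y, th) for th in thetas]
    best = int(np.argmin(costs))
    return costs[best], thetas[best]
```

Each direction costs O(n log n) for the sorts, and the resulting value always upper-bounds the squared Wasserstein distance since it is the cost of a feasible plan.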
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression
Accept (poster)
Summary: This paper proposes 2Direction, a new GD-based distributed training algorithm for convex optimization problems. The improvement of 2Direction over previous works that target communication complexity in the convex case is that 2Direction is both accelerated and allows for bi-directional compression. 2Direction is proven to offer better communication complexity than previous works in certain regimes and not to be worse otherwise. Strengths: 1. The scope of the paper and the main problem are well-defined and clear. 2. The paper considers bi-directional compression. So far, bi-directional compression has received less attention in the literature, which mostly considers only compressing uplink communication. 3. The paper’s theoretical analysis and results are mostly clear and solid. The paper theoretically compares 2Direction to several SOTA alternatives. 4. The design choices of 2Direction are well articulated. Weaknesses: 1. The communication complexity analysis and its optimization are not fully clear. Specifically, there is a dependency between the round cost and the number of communication rounds (i.e., via K and w). 2. 2Direction (Algorithm 1) requires more memory and compute resources in comparison to some previous methods such as DG, AGD and EF21-P + DIANA (e.g., 2Direction requires arrays for storing h, v and solving (11)-(12)-(23)). This overhead and its consequences, especially in federated learning contexts, are not discussed. 3. The experimental part is somewhat weak (appears in the supplementary material and contains a single logistic regression problem with two datasets). 4. The presentation can be improved. In particular, the paper is not self-contained and relies on the supplementary material to convey main ideas. Notably: (i) Section 4 points to Sections B and C to articulate design choices; (ii) Section 5.2 points to equations 58 and 63. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
Can the authors comment on weaknesses (1)(2)(3)? 2. Can the authors shed more light on why with small probability exact values must be sent? Clearly the proof requires this, but more intuition on whether this is inherent or just an artifact of the proof technique would be helpful. 3. The comparison/analysis is done with TopK (downlink) and RandK (uplink) as example compressors. However, these compressors have weaker worst-case MSE guarantees in comparison to SOTA compression techniques that provide asymptotically better worst-case tradeoffs between communication and MSE. Can the authors shed light on that? In particular, if compressors with stronger guarantees are used, can that affect the communication complexity analysis? E.g., since line 10 in algorithm 1 can be viewed as an instance of the distributed mean estimation problem, can the usage of techniques such as [1]-[3] lead to better communication complexity or alter the conclusions? (this also relates to weakness (1)) 4. Assumption 2.4 – while the independence between the workers in the uplink direction is indeed desired, why independence with the downlink direction is important? [1] Suresh, Ananda Theertha, et al. "Distributed mean estimation with limited communication." International conference on machine learning. PMLR, 2017. [2] Davies, Peter, et al. "New Bounds For Distributed Mean Estimation and Variance Reduction." International Conference on Learning Representations, 2021. [3] Vargaftik, Shay, et al. "Eden: Communication-efficient and robust distributed mean estimation for federated learning." International Conference on Machine Learning. PMLR, 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors clearly stated a limitation of their work in Section 8. I do have a few additional suggestions about the memory/computational efficiency aspect. Moreover, since federated learning (FL) is mentioned as motivation, it is worth discussing whether 2Direction can be potentially extended to common FL scenarios with partial participation and stochastic gradients. I did not identify any potential negative broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! > The communication complexity analysis and its optimization are not fully clear. Specifically, there is dependency between the round cost and the number of communication rounds (i.e., via K and w). * The round costs and the number of communication rounds depend on $K$ ($K_{\omega}$ and $K_{\alpha}$) and $\omega$ in all methods (see Table 1). The parameters $K$ and $\omega$ are defined by the chosen compressor. For instance, in the case of Rand$K$, $K_{\omega} = K$ and $\omega = \frac{d}{K} - 1.$ In the case of Top$K$, $K_{\alpha} = K$ and $\alpha = \frac{K}{d}.$ * One can find the communication complexities in (16) or in Table 1 (Comm. Compl. = # Communication Rounds × Round Cost) as functions of $\omega,$ $\alpha$ and $K.$ For the Rand$K$ and Top$K$ compressors, we can substitute the parameters into Table 1 [2Direction] or (16) and obtain the communication complexity Round Cost × \# Communication Rounds = $K \times \left(\sqrt{\frac{L}{\mu}} \frac{d}{K} + \sqrt{\frac{L_{\max}}{n \mu}} \frac{d}{K} + \frac{d}{K}\right) = d \sqrt{\frac{L}{\mu}} + d \sqrt{\frac{L_{\max}}{n \mu}} + d.$ Note that this communication complexity is never worse than the communication complexity of AGD. At the same time, it is very pessimistic because, when deriving it, we used that $\alpha = \frac{K}{d}$ in Top$K.$ However, in practice, $\alpha \gg \frac{K}{d}.$ * Let us illustrate that. For simplicity, let us take Top$K$ with $K = 1.$ Then, one can show that $||C(x) - x||^{2} = ||x||^2_2 - ||x||^2_{\infty}.$ One can see that the "effective" $\alpha = ||x||^2_{\infty} / ||x||^2_2.$ It depends on the distribution of the coordinates of $x$. Yes, we can be unlucky, and in the case when all coordinates are equal, we get the worst case $\alpha = 1 / d.$ But in practice \[6, 7\], the coordinates are typically far from being equal to each other.
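The Top$K$/Rand$K$ compressors and the "effective $\alpha$" identity discussed above can be made concrete with a short sketch. This is our own illustration (the function names are assumptions); it follows the standard definitions: Top$K$ keeps the $K$ largest-magnitude coordinates (contractive/biased), Rand$K$ keeps $K$ random coordinates rescaled by $d/K$ (unbiased, $\omega = d/K - 1$):

```python
import numpy as np

def top_k(x, k):
    """Contractive (biased) compressor: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Unbiased compressor: keep k uniformly random coordinates, rescaled by d/k."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

def effective_alpha(x, k=1):
    """Empirical contraction factor 1 - ||TopK(x) - x||^2 / ||x||^2.
    For k = 1 this equals ||x||_inf^2 / ||x||_2^2 (worst case: 1/d)."""
    err = np.sum((top_k(x, k) - x) ** 2)
    return 1.0 - err / np.sum(x ** 2)
```

On a vector with one dominant coordinate, `effective_alpha` is close to 1, matching the rebuttal's point that the worst case $\alpha = 1/d$ is rarely attained in practice.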
> 2Direction (Algorithm 1) requires more memory and compute resources in comparison to some previous methods such as DG, AGD and EF21-P + DIANA. * The vanilla AGD method also requires additional memory compared to the vanilla GD method due to the fact that AGD uses the momentum acceleration technique. For instance, look at the implementation of AGD in \[1, p.64\]. It also has additional variables compared to GD. * The same reasoning applies to 2Direction and EF21-P + DIANA. Indeed, 2Direction requires more memory. However, it requires more memory only by a **multiplicative constant factor**. In other words, the memory complexity of 2Direction, EF21-P + DIANA, AGD, and GD is $O(d),$ where $d$ is the dimension of the problem. All methods have the same $O(d)$ memory complexity. But we agree that the constant factors that the Big-O notation hides are different. > The experimental part is somewhat weak... Our work is theoretical, and we wish it to be judged as such. We agree that experiments are important to test the theoretical bounds. One can see that the behavior of the methods in the experiments in Section Q supports the theory. We analyze the logistic regression problem, which is the most popular convex optimization task in machine learning. Our method is designed only for convex problems, so we do not provide experiments with neural networks. Note that we provide the new theoretical SOTA communication complexities in the bidirectional convex setting. > Can the authors shed more light on why with small probability exact values must be sent? ... We believe that this is in the nature of all *variance-reduced accelerated* methods. If you look at all other methods \[1, 2, 3, 4\] that accelerate variance-reduced methods, they all require intermittent exact computations, via a double loop or probabilistic switching (as is done in our paper). > The comparison/analysis is done with TopK (downlink) and RandK (uplink) as example compressors. ...
One can use any compressor in line 7 of Alg. 1 if it is unbiased (see Def 2.3). Also, one can use any compressor in line 13 of Alg. 1 if it is *biased* (see Def 2.2). The theory will hold for any choice of them. One can take his/her favorite compressor and use it in our method. It is only necessary to find $K$ and $\omega.$ The parameter $K$ is the number of bits/coordinates that a compressor preserves. The parameters $\omega$ and $\alpha$ can usually also be easily estimated. For instance, Rand$K$ has $\omega = \frac{d}{K} - 1.$ Note that there are many more compressors \[5\]. The choice of TopK and RandK (as examples) is motivated by the fact that these compressors are the most popular in the literature, and they can be easily explained to a reader who is not from the community of distributed compressed methods. > Assumption 2.4 – while the independence between the workers in the uplink direction is indeed desired, why independence with the downlink direction is important? The independence of the uplink and the downlink directions is important. Also, it is important that the compression operators are independent across different iterations. Otherwise, we cannot use the conditional expectation trick in the proofs. For instance, we use independence in Lines 528 and 519. **We believe that we addressed all questions. We kindly ask the reviewer to reconsider the score.** \[1\]: Lan G. First-order and stochastic optimization methods for machine learning \[2\] Lan G. et al A unified variance-reduced accelerated gradient method for convex optimization \[3\] Zeyuan A. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods \[4\] Kovalev D et al Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop \[5\] Xu H. GRACE: A Compressed Communication Framework for Distributed Machine Learning \[6\]: Beznosikov, A., Horvath, S., Richtarik, P., and Safaryan, M.
On biased compression for distributed learning \[7\]: PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi --- Rebuttal Comment 1.1: Comment: Thank you for your answers. *W1:* based on your answer, is it correct that the reduction in communication complexity comes at the expense of more training rounds? For example, is it right that if K=1, 2Direction requires asymptotically more rounds (and by that gradient computations) than AGD? If this is the case, this should be discussed and conveyed as a legitimate tradeoff (at least for some reasonable choices of K). *W2:* Having previous works doing the same is evidence of a common difficulty in these sorts of algorithms, but it does not shed any light on whether this is an artifact of the proof method or the problem itself. *W3:* How to properly choose K or how different compressors with different guarantees affect the converge rate (and not only the communication complexity) is not discussed but left for the reader to deduce. This also relates to *W1*. *W4:* If I understand your answer correctly, is this assumption required to complete the proof? In that case, is there a simple example where dependence only in the downlink direction prevents convergence or asymptotically slows it down? --- Reply to Comment 1.1.1: Comment: > W1: based on your answer, is it correct that the reduction in communication complexity comes at the expense of more training rounds? For example, is it right that if K=1, 2Direction requires asymptotically more rounds (and by that gradient computations) than AGD? If this is the case, this should be discussed and conveyed as a legitimate tradeoff (at least for some reasonable choices of K). > W3: How to properly choose K or how different compressors with different guarantees affect the converge rate (and not only the communication complexity) is not discussed but left for the reader to deduce. This also relates to W1. 
Yes, if we take $K = 1,$ then our method will theoretically require more communication rounds than if $K$ were equal to $d$. **However, this is the case for all methods that reduce communication complexity,** just as any analysis of SGD with minibatch size 1 requires more iterations than the full-batch case. This is normal and to be expected. What matters is the total complexity. In our case, it is the product of the number of communication rounds and the bits transferred in each round. We discuss this at length in the paper. Indeed, it turns out that $K = 1$ may not be the best choice if we *also* care about the number of gradient computations. We can ask ourselves what is the maximum $K$ we can take that preserves the best complexity. Let us consider our complexity with the Rand$K$ and Top$K$ compressors. Then the communication complexity equals $$K \times \left(\sqrt{\frac{d}{K}}\sqrt{\frac{L}{\mu \alpha}} + \frac{d}{K}\sqrt{\frac{L_{max}}{\mu n}} + \frac{1}{\alpha} + \omega\right) = O\left(\sqrt{d K}\sqrt{\frac{L}{\mu \alpha}} + d\sqrt{\frac{L_{max}}{\mu n}} + d\right).$$ From the last formula, it is clear that we should take $K \leq d \alpha \min\left\[\frac{L_{max}}{L n}, \frac{\mu}{L}\right\]$ in order to preserve the best communication complexity. So instead of $K = 1,$ one can take $K = d \alpha \min\left\[\frac{L_{max}}{L n}, \frac{\mu}{L}\right\].$ We agree that it is not obvious how to calculate this formula in practice. However, we can try to find a reasonable bound for it.
For instance, we may know that our problem is well-conditioned: $L / \mu \ll n.$ Thus we can take $K = \frac{d \alpha}{n}.$ In practice, the parameter $\alpha \approx 1$ in the Top$K$ compressor, thus a reasonable choice is $K = \frac{d}{n}.$ With this choice, the number of rounds equals $$\left(\sqrt{\frac{d}{K}}\sqrt{\frac{L}{\mu \alpha}} + \frac{d}{K}\sqrt{\frac{L_{max}}{\mu n}} + \frac{1}{\alpha} + \omega\right) = \sqrt{\frac{n L}{\mu \alpha}} + \sqrt{\frac{L_{max}}{\mu}} + \frac{1}{\alpha} + n.$$ Assume that $\alpha \approx 1;$ then the number of rounds is not greater than $\sqrt{\frac{n L}{\mu}} + n.$ Unfortunately, this number is greater than the number of rounds $\sqrt{\frac{L}{\mu}}$ of AGD (in all previous methods from Table 1, this gap is even worse). However, our method reduces the communication complexity! One can do the same derivation with other compressors instead of Rand$K$ and Top$K$. We believe that we answered the reviewer's question. We agree that this is an interesting discussion; we will be happy to add it to the paper. However, the main scope of our paper is communication complexity. > W2: Having previous works doing the same is evidence of a common difficulty in these sorts of algorithms, but it does not shed any light on whether this is an artifact of the proof method or the problem itself. If this comment relates to "Can the authors shed more light on why with small probability exact values must be sent?", then we tried to explain it in as much detail as possible. The same issue arises in many related works, and it appears in our work as well. This is just the nature of accelerated variance-reduced methods. **Note that we show that this does not have an adverse effect on the communication complexity.** This is clearly not a weakness of our paper. > W4: If I understand your answer correctly, is this assumption required to complete the proof?
In that case, is there a simple example where dependence only in the downlink direction prevents convergence or asymptotically slows it down? Notice that this assumption was considered in all previous papers on bidirectional compression. We do not assume anything additional here. This is a standard assumption in the literature. Returning to the question, the reviewer can look at Lines 528-529. In order to use Definition 2.2, we have to use the fact that $C^P$ is independent of $u^{t+1}$ and $q^{t+1}.$ For instance, the vector $u^{t+1}$ depends on $C^{D,y}_i$ and on the realization of $C^P$ from the previous iteration. In this place, the independence of the compressors helps us to finish the proof. If you have more questions, we will be happy to answer them.
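The rounds-versus-total-complexity trade-off worked through in the reply above can be sketched numerically. This is our own illustration of the rebuttal's formula with Rand$K$-style $\omega = d/K - 1$; the parameter values in the usage below are hypothetical numbers, not from the paper:

```python
import math

def rounds(K, d, L, L_max, mu, n, alpha):
    """Number of communication rounds, following the rebuttal's formula:
    sqrt(d/K)*sqrt(L/(mu*alpha)) + (d/K)*sqrt(L_max/(mu*n)) + 1/alpha + omega,
    with omega = d/K - 1 for a RandK uplink compressor."""
    omega = d / K - 1
    return (math.sqrt(d / K) * math.sqrt(L / (mu * alpha))
            + (d / K) * math.sqrt(L_max / (mu * n))
            + 1.0 / alpha + omega)

def comm_complexity(K, **params):
    # Total communication complexity = coordinates sent per round (K) x rounds.
    return K * rounds(K, **params)
```

With, say, `d=1000, L=100, L_max=100, mu=1, n=100, alpha=0.9`, a small $K$ needs far more rounds than $K = d$ yet still yields a smaller total number of transmitted coordinates, which is the point made in the reply: what matters is the product, not the round count alone.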
Summary: This paper proposes a new algorithm for distributed convex optimization with bidirectional compression. This algorithm, named 2Direction, is claimed to be the first that improves upon the communication complexity of vanilla accelerated gradient descent. Strengths: 1. The 2Direction method seems novel 2. The studied problem is well-motivated Weaknesses: 1. The proof of the algorithm is rather complicated. I cannot verify the soundness of the established theorem given the very tight review timeline. Appendices I, J, K, L and O are very scary given the symbolically computed expressions. It is basically a computer-aided proof. I will discuss with other reviewers and chairs how to check this proof. 2. In Table 2, some terms are associated with $\Omega$ while the others are not. Please clarify the reason. 3. 2Direction uses an unbiased compressor and a contractive compressor in the algorithm design. Can 2Direction use unbiased compressors for both w2s and s2w compression? Can 2Direction use contractive compressors for both w2s and s2w compression? 4. Does 2Direction have the smallest communication complexity when r = 0 compared with existing uni-directional compression algorithms? Since I cannot verify the correctness of the established theorem, I would recommend borderline reject at this stage. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! > The proof of the algorithm is rather complicated. I cannot verify the solidness of the established theorem given the very tight review timeline. The appendix I, J, K, L and O are very scary given the symbolically computed expressions. It is basically a computer-aided proof. I will discuss with other reviewers and chairs on how to check this proof. > Since I cannot verify the correctness of the established theorem, I would recommend borderline reject at this stage. We really appreciate your time and effort, thank you! **But it is not fair to leave us with the comment saying you would recommend rejection because you "cannot verify the correctness of the established theorem." We know the proofs and results are correct.** **Please ask us questions regarding the proof. We will be happy to answer them.** The computer-aided part is only listed starting from page 45. Before that, do you have any concerns regarding our proofs? Note that we give a brief overview of our proof in Section 7. * We are not asking to check the generated formulas. It is sufficient to check that we generated the formulas correctly. So it is sufficient to check and understand the code on pages 64-71. The formulas are only listed for reproducibility purposes. * **The computer-aided proofs are very popular and becoming an essential instrument in modern optimization methods. Recent breakthroughs weren't possible without them. See, for instance, the celebrated SAG paper \[2\] (the authors were awarded the Lagrange prize in optimization for this work)! The modern era of variance-reduced methods (SVRG, Katyusha, SAGA, ...) would not be possible without this paper; as this work started this subfield.** We can list many more papers: \[3,4\]. Besides this, formal theorem provers are becoming popular as well, e.g., the Lean theorem prover developed by Leonardo de Moura. 
* Let us briefly give additional details on why we need symbolic computation in a part of the proof. The first time we use it is on Line 704 (formula (87)). We need it there to substitute $\nu$ in (86). In this place, we agree that the computation assistant is not necessary. But it becomes clear that it is necessary when we get to (89): in order to obtain (89), we have to multiply a large number of fractions. The situation is even worse when we substitute the parameters into (91); the number of fractions that we have to multiply is even bigger. The symbolic computations are simply used to multiply out the brackets for us. Note that we could do it without the assistant, but the probability of making mistakes would be much higher. Another place where we use symbolic computations is when we check the assertions in Section O. There, we could also do it by hand, but it would be more difficult without a computation assistant. * It is clear that the analysis of the method effectively *requires* a computation assistant, and there is nothing that we can do here. **We could write the proof without the computer assistant, but it would make the proof even larger and less reader-friendly. The probability of making a mistake would be higher without the assistant.** > In Table 2, some terms are associated with $\Omega$ while the others are not. Please clarify the reason. Because in some papers, the *full* convergence rate is not transparent. For instance, take a look at Theorem 1 from \[1\]. Just to be on the safe side, for all the methods with $\Omega$, we say that the complexity is not better than $\Omega(...).$ We agree that we should add a footnote to explain that. > 2Direction uses an unbiased compressor and a contractive compressor in the algorithm design. Can 2Direction use unbiased compressors for both w2s and s2w compression? Yes, it can!
Because one can obtain a contractive (biased) compressor from an unbiased one using the following transformation: $\frac{1}{\omega + 1} \times C$ (see Lines 75-76). > Can 2Direction use contractive compressors for both w2s and s2w compression? No, it can't. In the proofs, it is important that clients use an unbiased compressor. The use of contractive compressors for both w2s and s2w compression is a very challenging task. The known methods that work with contractive compressors in both directions cannot even match the communication complexity guarantees of the vanilla GD method (e.g., see the EF21-BC method and its complexity \[5\]). The obtained communication complexities of such methods are much worse than those of the same methods using the identity compressor (i.e., variants that do not compress anything). Before our work, even with unbiased compressors, an improvement over baselines not performing any compression had not been achieved, and the vanilla AGD method had the best communication complexity in the convex setting. > Does 2Direction have the smallest communication complexity when r = 0 compared with existing uni-directional compression algorithms? Yes, it does! We explain this in Lines 240-241 and prove it in Section S, where we show that the communication complexity of 2Direction is not worse than the communication complexity of ADIANA, which is the current SOTA method in the uni-directional (w2s) setting! **If you have any additional questions, please ask us. We believe that we addressed all questions. We kindly ask the reviewer to reconsider the score.** \[1\] Liu X. A Double Residual Compression Algorithm for Efficient Distributed Learning \[2\] Schmidt M. et al. Minimizing Finite Sums with the Stochastic Average Gradient \[3\] Drori Y. et al. Performance of first-order methods for smooth convex minimization: a novel approach \[4\] Grimmer B. Provably Faster Gradient Descent via Long Steps \[5\] Fatkhullin I. et al.
EF21 with bells & whistles: practical algorithmic extensions of modern error feedback --- Rebuttal 2: Title: Let us have some deep discussion here Comment: To Reviewer: Thanks for the questions. Could you check if the authors answered your concerns? To Authors: a follow-up question for 3. Can you explain why biased compression cannot be allowed, given that it is allowed when analyzing stochastic algorithms such as DoubleSqueeze [Tang et al. 2019]? --- Rebuttal Comment 2.1: Comment: > To Authors: a follow-up question for 3. Can you explain why biased compression cannot be allowed, given that it is allowed when analyzing stochastic algorithms such as DoubleSqueeze [Tang et al. 2019]? Note that biased compression *is allowed* on the *server's side*. Let us discuss the compression on the *workers' side*. First, notice that our goal was to design a method that attains communication complexity guarantees that are not worse than AGD (which does not compress). Our work showed that it is possible to accomplish this with unbiased compressors on the *workers' side*. However, the design of a method that uses biased compressors both on the *server's side* and on the *workers' side* is a very challenging task if we care about obtaining strong theoretical guarantees. Indeed, the theoretical communication guarantees of such methods are much worse than the communication guarantees of AGD (which does not compress). In particular, consider DoubleSqueeze [Tang et al. 2019]. It requires stronger assumptions (Assumption 1.3 or the bounded gradients assumption) than our work and AGD. Next, the worst-case theoretical communication complexity of this work is $\frac{d^2}{K} \times \frac{1}{\varepsilon^3}$ to find an $\varepsilon$-stationary point with Top$K.$ Another recent work \[1\] with only biased compressors guarantees the communication complexity $\frac{d^2}{K} \times \frac{L_{\max}}{\mu} \log\frac{1}{\varepsilon}.$ The same reasoning applies to Dore \[2\].
The communication complexities of such methods are clearly worse than the communication complexity $d \times \sqrt{\frac{L}{\mu}} \log \frac{1}{\varepsilon}$ of AGD (which does not compress!). In other words, we do not allow biased compression on the workers' side because we do not know how to design a method with communication guarantees that are not worse than those of AGD. As far as we know, nobody knows how to solve this challenging task. Our work can be a starting point to solve it. Another, orthogonal motivation behind using *unbiased compressors* is their scaling with the number of nodes $n$: the larger $n,$ the smaller the effect of the noise from the *unbiased compressors.* This can be seen in the complexities of EF21-P + DIANA and 2Direction in Table 1. The noise from *biased compressors* does *not* decrease with the number of nodes $n.$ \[1\]: Fatkhullin I. EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback \[2\]: Liu et al. A double residual compression algorithm for efficient distributed learning, 2019
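As a supplement to the transformation discussed in this thread (scaling an $\omega$-unbiased compressor by $\frac{1}{\omega+1}$ to obtain a contractive one), here is a minimal numerical sketch. Rand-$K$ is used as an illustrative unbiased compressor; the dimensions and sample counts are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Rand-K sparsification: an unbiased compressor with omega = d/k - 1."""
    d = len(x)
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]  # the d/k rescaling makes E[C(x)] = x
    return out

d, k = 10, 2
omega = d / k - 1  # = 4 for Rand-2 in 10 dimensions
x = rng.normal(size=d)

samples_C = np.array([rand_k(x, k) for _ in range(20000)])
mean_C = samples_C.mean(axis=0)  # close to x, verifying unbiasedness

# Scaling by 1/(omega + 1) yields a contractive compressor with
# contraction parameter alpha = 1/(omega + 1), i.e.
#   E || C(x)/(omega+1) - x ||^2 <= (1 - 1/(omega+1)) ||x||^2,
# with equality for Rand-K.
contracted = samples_C / (omega + 1)
mse = np.mean(np.sum((contracted - x) ** 2, axis=1))
bound = (1 - 1 / (omega + 1)) * np.sum(x ** 2)
```

For Rand-$K$ the contraction inequality holds with equality in expectation, so the simulated `mse` closely matches `bound`; for a general $\omega$-unbiased compressor the bound is an upper estimate.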
Summary: This paper studies 2Direction, a new method for compressed communications for the centralized distributed convex optimization problem, in the case where both worker-to-server and server-to-worker communications are compressed. Compared to previous work EF21-P+DIANA, it guarantees a total communication complexity no worse than Nesterov's accelerated gradient methods. Strengths: * The paper introduces a new objective of minimizing the *total communication complexity*, which is well defined through equation 4 and clearly explained by the problem statement in lines 118-122. * To alleviate some of the burden of understanding the seemingly cumbersome algorithm and proof, efforts were made to describe some of the steps of the research/thought process that led to the final method in Section 4, and to provide a sketch of the proof in Section 7. * For reproducibility purposes, I appreciated the presence of the SymPy code used for the symbolic computation part of the proof. * The final result is original, seems to have required more than a *"small tinkering"* of previous methods (Section 4), and displays both theoretical and practical improvements over previous state-of-the-art algorithms. Weaknesses: * While efforts were made to ease the understanding of the general ideas behind the algorithm and proof, they seem so cluttered that it deters from delving into the technical details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * line 130 claims a *"new error feedback mechanism"* but only compares it to the EF21-P method. What is the difference between equation 9 and other mechanisms such as MEM-SGD [1]? * In the experiments part (appendix Q), why use a single value of $K$? Would it be possible to display the evolution of the convergence rate at different degrees of compression? * line 1033, the stepsize $\bar L$ is said to be finetuned.
However, if we assume $L = L_{\max}$ (which we can find for logistic regression), wouldn't equation 44 give a closed form expression of $\bar L$ ? **Typos :** * The order of Equations 6 and 7 seems to have been reversed. * line 195: a *"and"* seems to have been left between *"general analysis"*. **References :** [1] Sebastian U Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. *Sparsified SGD with memory*. In NeurIPS, 2018. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation and comments! > While efforts were made to ease the understanding of the general ideas behind the algorithm and proof, they seem so cluttered that it deters from delving into the technical details. Unfortunately, this is the nature of all accelerated methods. In our method, everything is further complicated by the fact that we have bidirectional compression. This is the first method that has better communication guarantees than the vanilla accelerated method. We tried to explain the proof idea in Section 7. > line 130 claims a "new error feedback mechanism" but only compares it to the EF21-P method. What is the difference between equation 9 and other mechanisms such as MEM-SGD [1]? If we understood their work correctly, MEM-SGD is simply the EF mechanism from (Seide et al. (2014)). Their contribution seems to be not algorithmic but theoretical (Seide et al. (2014) did not provide any theory). In the abstract and in Lines 127-128, we explain that EF21-P is a reparameterization of the celebrated EF mechanism (see details in \[1, Section 2\]). So the difference between (9) and EF (MEM-SGD) is the same as the difference between (9) and EF21-P (under re-parameterization of variables). Note, however, that we had to use a new and modified version of the EF21-P mechanism (see Lines 167-176 where we explain the difference), and that unlike in [1], where classical EF is used on the workers' side, we use our new EF mechanism at the server side. > In the experiments part (appendix Q), why use a single value of $K$? Would it be possible to display the evolution of the convergence rate at different degrees of compression? There are many parameters, including $K,$ the # of workers $n,$ the dataset type... It is practically infeasible to experiment with all possible parameter combinations. In our experiments, we varied several key parameters to check that the dependences predicted by the theory hold in practice as well (they do).
Note that the accelerated nature of the method is independent of the chosen $K.$ But we can easily also add experiments where we study the choice of $K$. We do not think, though, that such experiments would add much value. > line 1033, the stepsize $\bar{L}$ is said to be fine-tuned... In general, for logistic regression, we can only find an *upper bound* for $L_{\max}.$ Thus, it is still not tight. Moreover, the reviewer can see that (44) is proportional to a large constant, which is just an artifact of the proof. It is possible to get a better constant, but the proof would become even larger. Note that the step size in EF21-P + DIANA is also fine-tuned, so the comparison is fair. \[1\] Gruntkowska K. et al. EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression
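For readers unfamiliar with the mechanism being compared in this exchange, the following is a sketch of the *standard* EF21-P primal error-feedback update, $w^{t+1} = w^t + \mathcal{C}(x^{t+1} - w^t)$, from Gruntkowska et al.; it is not the paper's modified variant (equation (9)), which is not reproduced in this discussion. Top-$K$ and all numeric parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(v, k):
    """Top-K: a contractive (biased) compressor keeping the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

d, k, steps = 20, 5, 50
x = rng.normal(size=d)    # the server's current model
w_server = np.zeros(d)    # shared estimate of the model, server's copy
w_worker = np.zeros(d)    # shared estimate of the model, a worker's copy

for _ in range(steps):
    x = x + 0.1 * rng.normal(size=d)  # the model drifts (stand-in for optimization steps)
    msg = top_k(x - w_server, k)      # compress the *difference* x - w, not x itself
    w_server += msg                   # server updates its copy of w ...
    w_worker += msg                   # ... and workers apply the identical message

# Server and workers hold exactly the same estimate w, which tracks x
# up to the residual compression error.
err = np.linalg.norm(x - w_server)
```

The point of the mechanism is that only compressed difference vectors are ever broadcast, yet every worker holds exactly the same estimate $w^t$ of the server's model, and that estimate tracks the moving model.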
Summary: The authors consider a distributed convex optimization setting in which both uplink and downlink communication must be taken into account. In this setting, the authors propose an accelerated method called 2Direction, which generalizes AGD to the bidirectional compressed communication setting, and prove its convergence rate and communication complexity. Finally, empirical results are provided. Strengths: For this new setting, the authors propose new strategies to make the adaptation of AGD possible, which is nontrivial. Meanwhile, the bounds seem OK to me and can be applicable in the smooth transition from uplink to downlink. Weaknesses: Such a generalization for the bidirectional case fills an important gap, but it lacks enough motivational examples for why such settings deserve study. The algorithm name 2Direction is confusing as we don't really know what it means. It is better to follow the tradition and use the abbreviation of the full algorithm name. The authors state that the motivation for the new strategy is that the existing approaches failed in the bidirectional case, but they don't show the concrete reason why they failed. Is the failure superficial or essential? The experiments should be included in the main body. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! > For this **new setting**, the authors propose new strategies to make the adaptation of AGD possible, which is nontrivial. Meanwhile, the bounds seem OK to me and can be applicable in the smooth transition from uplink to downlink. > Such a generalization for the bidirectional case fills an important gap, but it lacks enough motivational examples for why such settings deserve study. The bidirectional setting is not a new setting. The first methods that consider this setting can be traced back to at least 2019 \[3, 10, 11\]. This topic is increasingly popular in the community: \[6\] (ICML 2023), \[7\] (ICML 2023), \[8\] (NeurIPS 2021). See also the recent paper \[9\] (ICML 2023) on fine-tuning large language models, where bidirectional compression was essential to obtain the strong empirical results (btw: their theoretical communication complexity is worse than that of vanilla gradient descent; our results point to *better* rates than vanilla accelerated gradient descent). In short, this is a very important setting. Bidirectional compression is often overlooked because the analysis of unidirectional methods is much easier, and because of the assumption that broadcasting from the server to the workers **is free.** Clearly, the assumption that your phone/laptop can receive data from the server for free is an oversimplification. The communication speeds in both directions are limited in reality. Please try to visit speedtest(dot)net and check the download and upload speeds. Both speeds are finite, and it is quite unlikely that the download speed will equal $\infty.$ > The algorithm name 2Direction is confusing as we don't really know what it means. It is better to follow the tradition and use the abbreviation of the full algorithm name. 2Direction = TwoDirection $\approx$ Bidirectional. It is just a play on words. One can think of 2Direction as an abbreviation of "Bidirectional Method."
As authors, we have the right to choose a name for our method. Some authors use abbreviations of full algorithm names (when such abbreviations lead to "good" names), some choose playful names (e.g., Hogwild! - a test of time award winner; CocktailSGD \[9\]; or DoubleSqueeze \[10\] - our name is in some sense similar to this one), some authors choose female/male names (e.g., consider the celebrated Katyusha method \[1\], the celebrated Adam method, or Diana / Marina \[2\]), and so on. This is clearly not a weakness of our paper; we see it at best as a minor recommendation based on the personal preference of the reviewer. We happen to have a different preference in this particular case. As such, we believe that our choice of the algorithm name should not affect accept / reject recommendations. > The authors show the motivation of the new strategy is the existing approaches failed in the bidirectional case but don't show the concrete reason why they failed. Is the failure superficial or essential? The failure of the preceding methods seems essential: neither the authors nor we know how to prove that these methods have better communication complexity than vanilla GD (with the exception of \[6\]), let alone accelerated GD. This is not a proof of impossibility, but we believe the preceding methods are simply not designed well, and a major redesign of the algorithms was needed. In other words, we believe that our new strategy is essential. In Lines 167-176, we explain why. It is clear from the proof that the designed modification is required to obtain the convergence of a Lyapunov function in our method. If you compare the Lyapunov functions of non-accelerated \[4\] and accelerated methods \[5\], you will see that the control variables "converge" to non-fixed vectors (please see the explanation in Lines 167-176). > The experiments should be included in the main body.
Our work is theoretical; we believe that the theoretical insights are an order of magnitude more important than the experimental section. Having said that, if the paper gets accepted, we expect to get an extra page, and we plan to do as you advise. If you have any additional questions, please let us know. \[1\] Z. Allen-Zhu et al. Katyusha: The first direct acceleration of stochastic gradient methods \[2\] Gorbunov et al. MARINA: Faster non-convex distributed learning with compression \[3\] Liu et al. A double residual compression algorithm for efficient distributed learning, 2019 \[4\] Gorbunov et al. A unified theory of SGD: Variance reduction, sampling, quantization and coordinate descent \[5\] Z. Li et al. Acceleration for compressed gradient descent in distributed and federated optimization \[6\] Gruntkowska K. EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression \[7\] Dorfman R. DoCoFL: Downlink Compression for Cross-Device Federated Learning \[8\] Philippenko C. Preserved central model for faster bidirectional compression in distributed settings \[9\] Jue Wang, Yucheng Lu, Binhang Yuan, Beidi Chen, Percy Liang, Christopher De Sa, Christopher Re, Ce Zhang. CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks, 2023 \[10\] Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu. DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression, 2019 \[11\] Horvath et al. Natural Compression for Distributed Deep Learning, 2019 --- Rebuttal Comment 1.1: Title: It is worth discussing the relationship with [10, 9, 3] Comment: How is the 2Direction compression approach different from the "error compensated" compression in both directions in [3, 9, 10]? --- Reply to Comment 1.1.1: Comment: From the theoretical point of view, we can get substantially better theoretical guarantees.
The works [9,10] consider only the non-convex setting and obtain convergence guarantees on the norm of the gradients, so their results are much weaker in the convex case. Also, [9,10] require stronger assumptions (e.g., Assumption 3.3 in [9] and Assumption 1.3 in [10]). In the convex case, [3] is not ignored: it appears in Table 1 (the Dore method). This work requires $\frac{\omega}{\alpha n} \frac{L_{\max}}{\mu} \log \frac{1}{\varepsilon}$ rounds to converge. This complexity is strictly higher (= worse) than our guarantees (see 2Direction in Table 1). For example, the theory in [9] is strictly worse than the previously best-known theory for bi-directional compressed methods provided in [12] (also, there is a mistake in their proof: their final complexity is worse in its dependence on the contraction factor of the compressor). Their paper is in this sense not interesting from the theoretical point of view. However, it is an excellent empirical work. Our work massively improves on the previous theoretical SOTA from [6], which provides much better guarantees than [12]. So, in brief, **our theory >> theory from [6] > theory from [12] > theory from [9].** Such comparisons can be made with all other previous results, including [3], [8], [10], [11]... From the design point of view, we discuss the difference between our approach and the previous approaches in detail in Section 4. Let us briefly repeat it here: *On the workers' side*, we do *not* use the "error compensated" compression. Our compression technique is based on the works of \[5\], which is very different and only works with unbiased compressors. *On the server's side*, our compression technique is new. We discuss the difference in designs in Lines 167-176. Algorithmically, our approach and the "error compensated" compression are different. One can compare formulas (8) and (9). [12] Fatkhullin et al. EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback, 2021
Rebuttal 1: Rebuttal: Dear AC and Reviewers, Thank you for your work and effort. In this comment, we quickly clarify our main contribution to the community. The field of optimization methods with compressed communication is very popular, rapidly growing, and will only increase in importance (e.g., see Jue Wang, Yucheng Lu, Binhang Yuan, Beidi Chen, Percy Liang, Christopher De Sa, Christopher Re, Ce Zhang. CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks, ICML 2023, where the authors fine-tune a large language model in a distributed manner and bidirectional compression was essential). There are hundreds, and maybe even thousands, of papers that design optimization methods to reduce communication overhead. **However, as far as we know, none of the previous papers provided theoretical communication guarantees that would be no worse than the communication guarantees of the vanilla accelerated Nesterov's method (AGD), let alone guarantees that can in some regimes be substantially better. In short, we believe that our work is an important theoretical breakthrough in an important subfield of ML.** Before our paper, the best theoretical guarantees were obtained by AGD, which **does not compress**. The communication complexity of AGD is $d \times \sqrt{\frac{L}{\mu}} \ln \frac{1}{\varepsilon}$ (= # of sent bits/coordinates $\times$ # of iterations/rounds). **Our paper is the first one that broke this fundamental baseline in the bidirectional setting.** **We believe that the low scores given to us by some reviewers do not reflect the quality and importance of our work.** Thank you again, we really appreciate your reviews and comments! Please ask us any questions! authors
NeurIPS_2023_submissions_huggingface
2023
Optimality in Mean Estimation: Beyond Worst-Case, Beyond Sub-Gaussian, and Beyond $1+\alpha$ Moments
Accept (poster)
Summary: The paper studies one-dimensional mean estimation in a beyond-worst-case setting. The main contributions are twofold. First, it gives indistinguishability results which, given a distribution p of finite mean, construct a “partner” distribution q that preserves the moments yet has mean far from that of p. This implies that the sub-Gaussian rate achieved via median-of-means is optimal in general. Hence, this result can be viewed as an impossibility theorem against beyond-worst-case analysis for mean estimation. Second, they give a new framework of neighborhood optimality, and show that median-of-means is optimal up to a constant. Strengths: The question studied here is well motivated. I believe this is the natural next step following the recent work of Lee-Valiant (2020) and earlier papers on median-of-means. The paper essentially settles the beyond-worst-case analysis problem in this space (in the negative direction). This is a quite solid paper at a technical level. The main contribution is interesting and to me somewhat surprising. I did not check the proof details from the appendix, but reading the main body the main claims appear correct. Overall the paper is well-written. Weaknesses: I would be most interested in an extension of the result to the multivariate setting. Other minor comments below. By “worst case”, this paper refers to the worst over all distributions (under certain moment constraints). I think this should be remarked early in the introduction, since beyond-worst-case analysis in the TCS community typically refers to non-worst-case data. In our case, the data are drawn from a distribution already. So we are not using this notion in the classic sense.
Abstract line 17: The construction of q preserves the density of p up to a factor of 2 -> I would just state it formally: “the construction ensures dq / dp <= 2 at all points, thus [...]” Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: This indistinguishability claim reminds one of the sample amplification framework proposed by https://arxiv.org/abs/1904.12053. It might be worth a remark if there’s any formal connection. In the related work section, I would suggest citing this survey: https://arxiv.org/abs/1906.04280 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: It's addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our work and for your questions. We address your comments below. ----- **High dimensional setting**: This is an interesting question, and would likely need the full extent of the notion of neighborhood optimality to address, going beyond the "$p$ versus $q$" counterexamples ("singleton neighborhoods") analyzed in this paper. We conjecture that, even though in the low dimensional case, the $\sigma \sqrt{\left.\log\frac{1}{\delta}\right/n}$ term in sub-Gaussian estimation error is replaced by our error function $\epsilon_{n,\delta}(p)$ for instance-by-instance bounds, the $\sqrt{\mathrm{Tr}(\Sigma)/n}$ term in high-dimensional sub-Gaussian estimation will probably remain the same. This is because the term is an expectation/constant probability phenomenon, and independent of the failure probability $\delta$. Also, in contrast to the low-dimensional setting where constructing the lower bound examples is challenging, in the high-dimensional setting we expect the main challenge to be finding and analyzing a neighborhood-optimal estimator. **"Worst case"**: thank you for pointing this out; we will add a clarification in the paper. We also point out that, even within the TCS community, the terms "worst case" and "instance optimality" can be used to refer to distributional instances, in the same sense our paper uses. See for example [VV16,VV17]. **Sample amplification** (Axelrod et al.): that line of work is pursuing a rather different goal from ours. In both cases, we are trying to prove indistinguishability (which is a standard approach/argument), but: 1. in sample amplification, $q$ is essentially an output of the algorithm and not just a tool inside the analysis; the input $p$ is crucially *unknown* and only available via sample access. 2. 
in our paper, we crucially aim to construct a $q$ that has mean far from that of $p$ despite being indistinguishable, while the other paper has no particular restrictions on $q$. Thus while both works may use some techniques that are standard in the field (e.g. Hellinger distance as a proxy for indistinguishability), we think it is hard to formally connect the two frameworks. **Survey**: We agree that the Lugosi-Mendelson survey provides relevant context, and we will cite it. ----- We also refer the reviewer to our general response, that, despite our result essentially settling instance-by-instance mean estimation (in 1-d) in the negative direction, we view our paper more as a "call to arms" than as a negative result. We believe that there is more work to be done in identifying reasonable distributional assumptions (in addition to symmetry), such that the sub-Gaussian error bound barrier can be overcome. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I maintain my rating and recommend accept. It would be nice to add your conjecture about the high-dimensional case to the final version of the paper.
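For context on the median-of-means estimator whose sub-Gaussian rate this discussion revolves around, here is a standard textbook sketch, not the paper's implementation; the group count $\lceil 8 \log(1/\delta) \rceil$ and the Pareto test distribution are common illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def median_of_means(samples, delta):
    """Split the data into ~8 log(1/delta) groups, average each group,
    and return the median of the group means."""
    k = max(1, int(np.ceil(8 * np.log(1 / delta))))
    groups = np.array_split(np.asarray(samples), k)
    return float(np.median([g.mean() for g in groups]))

# Heavy-tailed data: a Pareto distribution with shape 2.5 has finite
# variance (so the sub-Gaussian rate applies) but no finite third moment.
shape = 2.5
true_mean = shape / (shape - 1)              # mean of Pareto(shape) on [1, inf)
samples = rng.pareto(shape, size=20000) + 1  # numpy's pareto is supported on [0, inf)
est = median_of_means(samples, delta=0.01)
```

Each group mean is an independent weak estimate, and the median is insensitive to the few groups that catch a huge sample; this robustness is what yields the $\sigma\sqrt{\log(1/\delta)/n}$-style high-probability guarantee, up to constants.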
Summary: This paper is about analyzing the optimal bounds for mean estimation, beyond the worst-case setting given in the recent breakthrough result of Lee and Valiant giving optimal subgaussian error rates for all distributions with unknown, finite variance. This prior work shows only that there exists a single "problematic" distribution class, for which the given error rates are best possible. Thus, a natural approach to make progress is to prove instance-dependent bounds for estimating the mean that are able to outperform the worst-case bounds at least for some distributions. The main result of this paper is that, assuming only that the distribution has finite mean, this natural approach cannot asymptotically improve over the worst-case optimal subgaussian error rates. In particular, given a distribution $p$, a failure probability $\delta$ and a number of samples $n$, the authors construct a distribution $q_{n,\delta}$ such that: 1. $q_{n,\delta}$ cannot be distinguished from $p$ with $n$ samples and success probability $1-\delta$. 2. The means of $q_{n,\delta}$ and $p$ differ by an amount that asymptotically approaches the optimal worst-case subgaussian bound. Together, these two properties show that instance-dependent bounds in this setting offer no improvement. The paper also introduces a framework called "neighborhood optimality" which interpolates between the (too weak) notion of admissibility and the (too strong) instance-optimality. Strengths: The paper is well-written and gives an intuitive, straightforward construction showing that it is not possible to improve on the worst-case optimal subgaussian bounds assuming only that the distribution has a finite mean. This is a valuable contribution by itself, and also has to deal with subtle technical difficulties arising in the case that the distribution has small mass very far from the mean, which nonetheless contributes significantly to the variance.
Further illustrating the utility of such a construction is the fact that follow-up work has already utilized the intuition behind it to prove improved bounds for symmetric distributions. In particular, the construction relies heavily on the ability to consider distributions with tails that are very skewed in one direction, and so symmetric distributions are a natural candidate for improved bounds guided by this construction. Weaknesses: The neighborhood-optimality framework seems interesting, but the results in the paper so far are not entirely convincing that this is the "right" way to go beyond the worst-case for mean estimation. However, the construction showing that subgaussian bounds are optimal when there is a finite, unknown mean is in my view the main contribution and quite significant on its own. So any potential limitations of the neighborhood-optimality framework are not much of a drawback in my view. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Minor comment: Line 203 states that $q$ satisfies all the properties of definition $N_{n,\delta}(p)$, but as far as I can tell $N_{n,\delta}(p)$ hasn't been defined yet. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and in-depth review of our paper. Regarding your comment about the neighborhood optimality definition, we want to emphasize that, prior to this paper, the community would basically just use the local minimax definition. As we show in Appendix A, local minimax allows meaningless bounds which should be rejected as absurd, and it is thus reassuring that our neighborhood optimality notion provides a principled framework with which to reject these bounds. Furthermore, for the common use case of indistinguishability between two distributions, neighborhood optimality is a stronger notion than local minimax. Thus, in this paper we are at least giving an improvement to a standard definition. As discussed by reviewer x13e, our paper gives a *flexible* framework, accommodating many different instantiations and interpretations, and our contribution to the mean estimation problem is philosophically different from much prior work. Even *if* neighborhood optimality doesn't turn out to be the "right" definition, we want to note that it is common for a community (e.g. in crypto or in various other parts of math) to "negotiate" definitions in a series of papers, until a good definition is ultimately found. We hope this paper at least initiates this conversation, by pointing out the shortcomings of local minimax and proposing an improved alternative. We welcome the continuing academic process of presenting a new definition which is then discussed by the community, and iterated on in future papers. --- Rebuttal Comment 1.1: Comment: Your response makes sense. I didn't realize that people were regularly using the local minimax definition to prove things. In that case, I would view the contribution of this part of your paper to be two-fold: (1) Pointing out that a previously used definition doesn't really make sense (2) Proposing an alternative that does not have the same flaw. 
That is, a big part of the contribution of the neighborhood optimality definition is as an existence proof showing that a reasonable definition along these lines is possible. Thank you for your response. I will increase my score to 7.
Summary: This paper studies the fundamental task of 1-dimensional mean estimation. There has been recent interest in the community to understand “beyond worst-case” analyses of statistical problems, where guarantees are able to encompass properties of instances that may make them more tractable than worst-case instances. This work motivates and initiates such analysis for mean estimation. The main result aims to show negative evidence for this perspective: proving that for any distribution $p$ with finite mean there is another distribution $q$ where it is hard to distinguish $p$ vs $q$ and their means are separated by the sub-Gaussian rate. Moreover, in many senses $p$ and $q$ are “similar”, so this indicates hardness to beat the sub-Gaussian rate even when just designing an algorithm to perform on $p$ and “similar” distributions. This perspective is further developed by the formalization of “neighborhood optimality” and median-of-means is shown to be approximately neighborhood optimal. Strengths: It is an extremely fundamental pursuit to understand mean estimation and what nice properties of distributions permit better rates. The perspective introduced in this work of analyzing the optimal algorithms that perform on some distribution $p$ and “similar” distributions is both novel and insightful for progress in this pursuit. While this paper mostly focuses on one (relatively well-motivated) notion of “similarity”, the landscape of analogous results clearly is dependent on what similarity notion is used. This is perhaps one of the most appealing aspects of the paper, as it gives a concrete framework within which to use notions of similarity as proxies for the properties that make distributions nicer in the beyond worst-case sense. 
As an example, the notion of similarity in this paper enforces $\frac{dq}{dp} \le 2$, indicating that the similar distributions which are still hard have similar moments and thus limiting how much moment-based guarantees can make distributions more tractable. On the other hand, the later work of [GLP23] (I believe) implies that analogous results cannot hold when similarity preserves both symmetry and Fisher information, highlighting how permissible similarity notions in this framework may help demarcate helpful properties. Weaknesses: Although likely outside the reasonable scope of this work, the existence of results such as [GLP23] indicate how this perspective would benefit from deeper investigations into various neighborhood functions. From the exposition, it is not immediately clear how conducive the neighborhood optimality perspective is towards thinking about tightness without constant-factor lossiness (the paper seems to hint that proving a guarantee without such lossiness may be a reasonable-sounding future direction). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you further explain your claim on line 70 that your paper shows introducing extra assumptions are necessary for the [GLP23] results? A naive reading of this claim feels strong in that worst-case lower bounds already show the need for additional assumptions, but perhaps the desired claim is that your work shows the assumptions must not be qualities that $q$ preserves in your transformation? Relatedly, does [GLP23] imply that it is open whether a result similar to your Theorem 2 is possible where $q$ additionally preserves (i) symmetry, or (ii) Fisher information, but it is impossible to (iii) simultaneously preserve symmetry and Fisher information? Do you have more intuition to share regarding whether it is a reasonable goal to naturally hope for $(1+o(1), 1+o(1))$-neighborhood optimality? 
As some of the motivation for neighborhood-optimality originates from failing the hardcoded estimator, are there known references that alternatively tackle this by restricting the estimator to have desirable properties (e.g., being translation invariant)? Minor details: * On line 380, is the mention of $\log(1-d^2_H(p,q)) \ge \frac{1}{2n} \log 4 \delta$ redundant from $q \in N_{n,\delta}(p)$? * The equation above line 391 has an extra pair of parentheses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are well-discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
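For orientation, the two error rates contrasted throughout this review thread can be stated up to constant factors (our paraphrase for context, not a quotation from the paper):

```latex
% Sub-Gaussian rate: n i.i.d. samples from p with variance \sigma^2,
% failure probability \delta, up to constant factors:
|\hat{\mu} - \mu| \;\lesssim\; \sigma \sqrt{\frac{1 + \log(1/\delta)}{n}}
% Fisher-information rate, achievable for symmetric p [GLP23],
% where I denotes the Fisher information of p:
|\hat{\mu} - \mu| \;\lesssim\; \sqrt{\frac{1 + \log(1/\delta)}{n \, I}}
```

Since $I$ can exceed $1/\sigma^2$ by an arbitrary factor, the second rate can be arbitrarily better than the first, which is why constant-factor results remain meaningful.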
Rebuttal 1: Rebuttal: Thank you for your insight on our work, and your questions. We particularly enjoyed reading your summary of the strengths of the paper. Here we respond to the questions raised in the review. **Q1**: What we meant is that, in order to beat the sub-Gaussian bound for *any* distribution, even asymptotically, we would need to introduce new assumptions that are not properties satisfied by the construction of $q$ in Theorem 2. And so the claim is not only about the worst case, but in fact essentially for *all* cases. We will clarify this in the paper. **Q1.5**: You are correct. If the construction of $q$ preserves both symmetry and Fisher information, then there can't be a Theorem 2 with a mean separation as large as the sub-Gaussian bound (even only asymptotically). The tight construction that yields a Fisher information rate lower bound, we conjecture, is probably $q$ being just $p$ shifted slightly, or some variant of that, without changing the shape from $p$ to $q$. The Fisher information rate bound comes from the parametric problem of location estimation. So far, there has been no proof of a finite-sample high-probability Fisher information rate lower bound for location estimation, as far as we know (from communication with the authors of [GLP23]). On the other hand, there is an essentially tight result if $p$ is sufficiently smooth (if $p$ is the convolution of some distribution with a small Gaussian), see [Gupta, Lee, Price, Valiant NeurIPS 2022]. It is indeed unclear what happens if we require $\frac{\mathrm{d}q}{\mathrm{d}p} \le 2$ yet only 1 of symmetry or Fisher information not blowing up, whether we can still get a construction with asymptotically sub-Gaussian mean separation. We do want to point out, however, that a hypothetical construction preserving symmetry yet getting sub-Gaussian mean separation is almost by definition "unreasonable", in that the resulting neighborhood Pareto bound is larger than the upper bound given by [GLP23]. 
This suggests that the neighborhood choice is too weak, that the "$q$" construction did not preserve enough properties of $p$ (e.g. the Fisher information, in this case). **Q2**: We draw the distinction between the potential *existence* of such $(1+o(1), 1+o(1))$-optimal estimators, and their *analysis*. Existence should be doable: if we relax the neighborhood enough, then we get back to admissibility, and existence is trivial (hardcoded estimators are $(1+o(1))$-admissible, which is easy to prove). We don't see any good reason why a $(1+o(1))$-optimal estimator shouldn't exist for a more reasonable neighborhood structure. Analyzing such estimators seems much more challenging: we believe that the literature doesn't have the right tools yet. To get a tight analysis (through indistinguishability between two distributions), we need to develop tight tools to calculate the $n$-sample TV distance between a pair of distributions. However, the only generic tools we're aware of are *asymptotic* results coming from the "large deviations principle" such as Cramer's theorem, or non-asymptotic bounds that are off by constants in the exponent of the failure probability, even in the Gaussian case---based on the high-probability Pinsker inequality (based on KL divergence), or based on squared Hellinger distance. None of these approaches can give tight finite-sample bounds on the estimation error. Therefore, before we could refine our "$q$" construction and give tight analysis, we would first need to find better tools to bound the $n$-sample TV distance. This is probably a medium/long-term project, and not something accessible in the immediate future. Nonetheless, we emphasize that constant-factor results (such as those in this work) are meaningful, since variance can be much more than a constant factor different from Fisher information. **Q3**: We are unaware of additional references. 
Stein's paradox shows (particularly for the multidimensional case) that in general, we shouldn't take translation-equivariance for granted - the right estimator might not be translation-equivariant, depending on the objective/loss. We agree that it is interesting to continue to develop and refine new frameworks and perspectives on the mean estimation problem, and on beyond-worst-case analysis in general. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. All of my questions have been properly answered.
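The squared-Hellinger route to $n$-sample TV bounds that the rebuttal calls lossy can be made concrete with standard formulas; the equal-variance Gaussian pair below is our illustrative choice, not the paper's construction:

```python
import math

def gaussian_hellinger_sq(mu1, mu2, sigma):
    """Squared Hellinger distance between N(mu1, sigma^2) and N(mu2, sigma^2):
    H^2(p, q) = 1 - exp(-(mu1 - mu2)^2 / (8 sigma^2))."""
    return 1.0 - math.exp(-((mu1 - mu2) ** 2) / (8.0 * sigma ** 2))

def n_sample_tv_bounds(h2, n):
    """Hellinger tensorizes over n i.i.d. samples:
    1 - H^2(p^n, q^n) = (1 - H^2(p, q))^n.
    Then H^2 <= d_TV <= H * sqrt(2 - H^2) sandwiches the n-sample TV,
    but only up to constants in the exponent, as the rebuttal notes."""
    h2_n = 1.0 - (1.0 - h2) ** n
    return h2_n, math.sqrt(h2_n * (2.0 - h2_n))
```

The gap between the lower and upper sandwich is exactly the constant-factor lossiness the rebuttal describes: neither side gives a tight finite-sample bound on the failure probability exponent.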
Summary: This paper studies 1-dimensional robust mean estimation from the perspective of "beyond the worst-case analysis". They show a somewhat negative result: for any distribution, we can always construct another distribution that is close in probability distance, but their means are well-separated. Furthermore, to analyze the fine-grained optimality of algorithms, they propose a new definitional framework called “neighborhood optimality”. They show that the classic MoM estimator is neighborhood optimal up to some constant. Strengths: - This paper studies 1-dimensional robust mean estimation from the perspective of "beyond the worst-case analysis", which is a very important and interesting view for this problem. - Their construction might be adapted to other settings. Weaknesses: - As discussed in the paragraph (lines 69-76), there exists a class of distributions (symmetric distributions) whose error rate can be better than the sub-Gaussian rate. My question is: can we modify the current construction to reflect this result by restricting the feasible domain of the perturbed distributions? Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
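The median-of-means (MoM) estimator mentioned in the summary is simple enough to sketch; this is a minimal illustrative implementation, not code from the paper:

```python
import statistics

def median_of_means(samples, k):
    """Split the samples into k groups, average each group, and return the
    median of the k group means. With k on the order of log(1/delta) groups,
    this matches the sub-Gaussian error rate up to constant factors."""
    m = len(samples) // k          # group size; any remainder is dropped
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(k)]
    return statistics.median(means)
```

A single extreme outlier corrupts at most one group mean, so the median of the group means stays near the true mean even when the empirical mean does not.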
Rebuttal 1: Rebuttal: Yes, one could use the framework of this paper to attempt to design a construction for symmetric distributions $p$ (constructing a symmetric alternative, $q$), and thereby obtain a Fisher information rate estimation lower bound. In fact, (as discussed in [GLP23]) the Fisher information rate bound of symmetric mean estimation really comes from the related problem of location estimation, which is the "parametric" version of mean estimation: suppose we know the density of the data distribution but not its shift; the goal is to estimate the shift from i.i.d. data. As such, the correct lower bound construction is likely just shifting $p$ slightly to form $q$, or slight variants of this. The algorithmic techniques of [GLP23] for symmetric mean estimation come from their study of the location estimation problem in previous papers. From our communication with the authors, they have also been trying to prove lower bounds for location estimation, but so far they have not been successful in getting a finite-sample and high-probability Fisher information rate bound. Thus, it remains an open question to show such a bound. On the other hand, if we only care about the error variance, then the well-known Cramer-Rao bound states that every shift-invariant location estimator must have variance at least $1/(nI)$, where $I$ is the Fisher information of the distribution. So there's already good (but not definitive) evidence that the Fisher information rate is optimal. We also refer the reviewer to our general response discussing how we view our paper more as a "call to arms" than as a negative result. We believe that there is more work to be done in identifying reasonable distributional assumptions (besides symmetry), such that the sub-Gaussian error bound barrier can be overcome. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thanks for your detailed response, from which I have learned a lot. 
I also agree that such an attempt at beyond-worst-case analysis is very meaningful. Hence, I have decided to increase my score from 5 to 6.
Rebuttal 1: Rebuttal: We thank the reviewers for the positive assessment of our work, and of course also for the constructive reviews and many interesting questions raised. We are particularly encouraged that the reviewers find our paper 1) well-motivated/important (Reviewers iso5, bzea), 2) an extremely fundamental pursuit (Reviewer x13e), 3) giving a surprising result (Reviewer bzea), 4) significant (Reviewer Z2th) and 5) ultimately, opening up a new research direction (paraphrasing Reviewer x13e). Here, in this overall response, we would like to re-emphasize the main conceptual message and contribution of the paper in light of the reviews. We address the research questions raised by the reviewers as responses to each review. This paper studies the problem of optimal mean estimation in the most basic and widely used setting of 1 dimension. We go beyond worst-case analysis, by considering optimality on an instance-by-instance basis. Motivating this from a different perspective: Gaussians are known to be the hardest instances for mean estimation under the finite but unknown variance assumption. We thus ask, are there "easier" distributions for which algorithms can beat the sub-Gaussian error bound? Can an algorithm leverage the beneficial structure of a distribution, without explicit knowledge of this structure? Our paper provides an unexpected and subtle answer: "yes in limited cases/regimes, but in general no". For some distributions, even standard algorithms can beat the sub-Gaussian bound, but only for a limited parameter regime *per distribution*---namely, if the number of samples is not too large (Proposition 14). In general, however, we show a strong and comprehensive negative result. 
Fix any distribution $p$; then, for a large enough number of samples $n$, no estimator can outperform the sub-Gaussian bound by more than a constant factor, unless it has essentially "hardcoded" knowledge of $p$, and thereby misestimates another "similar" distribution $q$ (see Theorem 2/Corollary 3, and the construction of $q$ in Definition 4). As reviewer x13e puts it, our paper provides a correspondence, mapping a neighborhood structure---a "similarity" notion---to a beyond-worst-case analysis. For the natural and minimalist neighborhood notion that "$q$ has tails not much larger than those of $p$", we show a strong negative result, essentially ruling out better-than-Gaussian performance. The key point---a **call to arms** in a sense---is that such negative results are to be circumvented, through identifying additional favorable distribution structure for the mean estimation problem. Inspired by our results and construction, [GLP23] already showed that if we additionally assume the data distribution to be *symmetric* about its mean, then it is possible to get substantially (and in fact arbitrarily) better-than-Gaussian performance, in fact achieving the Fisher information rate. We hope the perspective and framework introduced in this paper will inspire further work, investigating structures other than symmetry, towards understanding how to overcome worst-case performance barriers algorithmically.
NeurIPS_2023_submissions_huggingface
2023
Thinker: Learning to Plan and Act
Accept (poster)
Summary: This paper presents *Thinker*, a method for augmenting an MDP with a learned model so that model-free algorithms can perform “planning” on the augmented MDP. Specifically, Thinker creates a cross-product MDP between the “real world” and a learned world model. This augmented MDP has a union action set that enables an RL agent to either take an action in the real world at a given step, or take an action in the world model. Decision-making is broken down into K-1 steps of acting in the world model (“planning”), followed by a single step of acting in the real environment, potentially exploiting information from the K-1 steps of simulation. The authors argue that this allows model-free algorithms to learn how to execute various common planning algorithms. Evaluation is performed on Sokoban and 57 ALE games. The authors find that IMPALA applied to the Thinker-augmented MDP achieves a higher solve rate for a fixed number of world steps than IMPALA applied to the base MDP, as well as better results than one pre-existing baseline. Analogous results hold on the ALE benchmarks. Strengths: - **[S1]** The fact that this method merely augments the MDP with some extra transitions (and a learned world model) means that it is both conceptually simple and broadly applicable to many existing reinforcement learning algorithms. - **[S2]** Experimental evaluation is thorough. Sokoban is a genuinely difficult benchmark for planning (it’s PSPACE-hard in general), and the paper also evaluates across 57 Atari games, which greatly reduces the probability that results were cherry-picked. The ablations in the appendix were also quite thorough. - **[S3]** The paper is tackling a problem of significance for the community (planning in reinforcement learning) and was clearly written. 
Weaknesses: - **[W1]** It’s not clear how much the empirical benefits of Thinker come from actually doing search as opposed to just giving the model a more expressive feature space (since the policy can depend on the rollout history at each node), or something else of that nature. The fact that the algorithm’s benefits seem to plateau after K=10-20 planning steps is particularly surprising: traditional planning algorithms might take tens of thousands of node expansions to find a plan of tens to hundreds of steps. The ablations go some way towards resolving this issue, particularly the ablations that decrease K and the ablation that removes some of the redundant features from the state space. However, more analysis of what the algorithm is actually doing during the planning phase would be helpful. For instance, the paper could include a qualitative analysis of Sokoban rollouts that tries to explain what kind of planning the algorithm is doing. It could also include an extra ablation in which actions are randomly chosen during the planning phase, rather than being chosen by Impala—does learning the planner with Impala actually help? - **[W2]** As I understand it, the main contribution of the paper is a way of augmenting MDPs, as opposed to a specific algorithm. Indeed, the paper uses this as a justification for the fact that it only has one third-party baseline for Sokoban and one for Atari. However, the paper only applies the Thinker augmentation to one RL algorithm (Impala). This is not a fatal problem for the paper, but it would be valuable to see the Thinker augmentation applied to other model-free RL algorithms to verify that it actually does work well for algorithms other than Impala. - **[W3]** The paper appeals to the “generality” and “universality” of the Thinker augmentation, implying that it can express any (?) planning algorithm. 
I think this is over-claiming: the Gym-like interface available during planning steps makes it hard to implement even a simple algorithm like BFS, and I’m not sure how Thinker would express more complex algorithms like regression (backwards search from the goal) or partial-order planning, not to mention planning algorithms that require complicated heuristics. It would be helpful if the paper was more precise about what planning algorithms Thinker can and can’t express. Note that W1 is my biggest objection, since I feel it cuts against the core claim of the paper. I would really like to see this addressed in the rebuttal. W3 is a more minor objection but is pretty easy to address, so it would be good to see the paper updated to reflect that (or to see an argument in the rebuttal for why it can’t be done). W2 is a big request for more experiments & so I understand if the authors don’t have time or compute to do this; it’s more of a nice-to-have than something I consider essential to publication. I’m marking this a weak accept for now due to the issues listed above, but I’m overall pretty happy with the paper, and expect to upgrade my rating later so long as the rebuttal is reasonable & no other reviewers identify major flaws in the experiments or conceptual basis of the idea. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Please explain in more detail why you believe the benefits of this algorithm come from it doing planning (systematic search), as opposed to some other benefit of the Thinker augmentation (i.e. address W1 above). 2. Is there a crisp way of separating the class of planning algorithms that Thinker can emulate from those that it can’t? (W3) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I felt the limitations were pretty thorough, although I would have liked it if this section listed the expressiveness concern (W3) as well, or at least made more precise claims about the expressiveness of Thinker-augmented model-free RL algorithms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
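The cross-product construction described in this review can be sketched as a Gym-style wrapper; all class and method names below are illustrative assumptions, not the paper's actual API:

```python
class ThinkerAugmentedEnv:
    """Sketch of the Thinker augmentation: every K-th step acts in the real
    environment; the K-1 steps in between unroll a learned world model that
    is re-rooted at the current real state. A reset_rollout flag lets the
    agent return an imagined rollout to the root state."""

    def __init__(self, real_env, world_model, k):
        self.real_env, self.model, self.k = real_env, world_model, k
        self.t = 0  # position within the current K-step stage

    def reset(self):
        obs = self.real_env.reset()
        self.root = obs
        self.model.set_root(obs)
        self.t = 0
        return obs

    def step(self, action, reset_rollout=False):
        self.t += 1
        if self.t % self.k == 0:
            # real step: act in the underlying MDP and re-root the model
            obs, reward, done, info = self.real_env.step(action)
            self.root = obs
            self.model.set_root(obs)
            return obs, reward, done, info
        # imaginary step: unroll the learned model; no real reward accrues
        if reset_rollout:
            self.model.set_root(self.root)
        obs = self.model.predict_next(action)
        return obs, 0.0, False, {"imaginary": True}
```

Because the wrapper only changes the transition structure, any model-free algorithm with the union action interface can in principle be trained on it unchanged, which is the "broad applicability" point in [S1].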
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and detailed feedback! We appreciate the reviewer’s concise summary of the proposed algorithm and acknowledgement of our experiments. The weaknesses are addressed as follows: **(R1) Algorithm’s benefits seem to plateau after K=10-20 planning steps is particularly surprising** Please refer to (G3) in the global response for the ablation results on imaginary actions and section G in the appendix for the analysis of learned agents’ behaviour. We found that RL agents can learn to search much more efficiently compared to other handcrafted planning algorithms, as shown in Fig G5 in the appendix. [1] also shows that a trained neural network can search much more efficiently than MCTS in a supervised learning setting. Note that if the better performance of Thinker stems merely from the expanded feature spaces of the model, and the imaginary actions from the RL agent don't enhance performance, then one would expect the rollouts in runs without planning rewards to be completely random. This is because, if the planning reward is disabled, the RL agent would only be incentivized to maximize true returns, and the entropy regularization would push all non-useful actions towards a uniform distribution. However, we observe that the rollouts in runs without planning rewards remain specific and meaningful (e.g., at 8:35 timestamp of the video in the paper, the agent imagines solving the task in various ways). The new experimental results in Fig A3 provide more direct evidence of the benefits of the learned imaginary actions. **(R2) Thinker augmentation applied to other model-free RL algorithms** We aimed to demonstrate the general improvement offered by the augmented MDP by running a standard actor-critic algorithm across a wide range of domains (56 different environments). 
While it would be interesting to explore how the augmented MDP interacts with various RL algorithms, rather than different environments, the substantial computational cost associated with the algorithm prohibits us from doing so. We leave this for future research. Moreover, given that we employed a straightforward actor-critic algorithm without modification, it's reasonable to expect that the augmented MDP would be compatible with more advanced RL algorithms. We greatly appreciate the reviewer’s understanding on this matter. **(R3) Planning capacity under the augmented MDP** We recognize that our claim that an RL agent can express most planning algorithms is misleading, and we apologize for it. We will revise the paper (e.g., replace line 30 *any planning algorithm* with *common forward-based planning algorithms*) and include the following discussions in the appendix: *An RL agent under the Thinker-augmented MDP can implement any forward-based planning algorithms that do not involve heuristics beyond the value functions. Uninformed search strategies [2, Ch3.4] that do not require backward search, including Breadth-first search (BFS), Depth-first search (DFS), MCTS can be implemented by the RL agent. For example, to implement BFS with two actions available, the agent can select (action_1, reset), (action_2, reset), (action_1, not reset), (action_1, reset), (action_1, not reset), (action_2, reset), … as the imaginary actions\*. If we treat the values as heuristics, then informed search strategies [2, Ch3.5] that do not require backward search, can also be implemented. 
For example, A\* could be implemented by expanding the unexpanded node that has the highest rollout return in the tree, which can be achieved by keeping a record of unexpanded nodes along with the rollout return in the memories.* *Nonetheless, as an MDP does not necessarily involve a goal state as in planning problems and our trained model does not support backward unrolling, any backward search from a goal state, such as bidirectional search [2, Ch3.4] and partial-order planning [2, Ch10.4], cannot be implemented by the agent. Agents are also unable to implement planning algorithms that involve heuristics beyond value function, such as GRAPHPLAN [2, Ch10.3] that employs heuristics based on planning graphs.* *In practice, the learned planning algorithm is very different from the aforementioned handcrafted planning algorithms, as illustrated in Appendix G and the video visualization. This is because, unlike planning algorithms whose goal is to solve the task in the search, the RL agent's primary incentive is to provide useful information for selecting the next immediate real action.* \*We note that searching in this manner is not efficient, as the agent must begin expanding from the root node every time instead of directly starting from the unexpanded nodes (or the *frontier nodes*). Also, implementing BFS requires the agent to have memory, as whether one should reset at a specific node depends on whether all nodes at the same level have been expanded, which is beyond the auxiliary statistics. The questions raised are addressed as follows: > Please explain in more detail why you believe the benefits of this algorithm come from it doing planning (systematic search)... Please refer to our answer (R1) above. > Is there a crisp way of separating the class of planning algorithms that Thinker can emulate from those that it can’t? Please refer to our answer (R3) above. 
We hope that the answers provided above adequately address the issues raised by the reviewer, and we kindly ask the reviewer to consider adjusting the score based on our responses. We are grateful for the reviewer’s comments and welcome any further questions! **References** [1] Guez, A., Weber, T., Antonoglou, I., Simonyan, K., Vinyals, O., Wierstra, D., ... & Silver, D. (2018, July). Learning to search with MCTSnets. In International conference on machine learning (pp. 1822-1831). PMLR. [2] Stuart, R., & Peter, N. (2010). Artificial Intelligence A Modern Approach Third Edition. --- Rebuttal Comment 1.1: Title: Good rebuttal Comment: ### Overall take * The authors have addressed all concerns that I thought needed to be addressed before publication. * I'm in favor of acceptance and have updated my score. * I've read the other reviews and they didn't give me reason to change my score much either way given the author rebuttal. ### (R1) Algorithm’s benefits seem to plateau after K=10-20 planning steps is particularly surprising > Note that if the better performance of Thinker stems merely from the expanded feature spaces of the model, ... then one would expect the rollouts in runs without planning rewards to be completely random [because] entropy regularization would push all non-useful actions towards a uniform distribution. This is a good point that I hadn't considered. > The new experimental results in Fig A3 provide more direct evidence of the benefits of the learned imaginary actions. Thank you! A3 resolves my main concern here. ### (R2) Thinker augmentation applied to other model-free RL algorithms > the substantial computational cost associated with the algorithm prohibits us from doing so Fair enough. The existing experiments look pretty brutal, computationally. Well done for getting through all of them! 
### (R3) Planning capacity under the augmented MDP > We will revise the paper (e.g., replace line 30 any planning algorithm with common forward-based planning algorithms) and include the following discussions in the appendix: ... Thanks a lot, this is very precise and I think strengthens the paper.
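The BFS encoding from (R3) of the rebuttal can be sketched as a small enumerator that emits (action, reset) pairs in the order the rebuttal's example suggests; the exact convention (reset=True returns the rollout to the root after the step) is our reading of that example:

```python
from collections import deque

def bfs_imaginary_actions(num_actions, max_nodes):
    """Enumerate (action, reset) pairs that visit tree nodes in BFS order.
    Each new node requires replaying its path from the root (reset=False on
    interior steps), with reset=True on the final step to return to the root,
    which is exactly the inefficiency the rebuttal's footnote points out."""
    pairs = []
    queue = deque([()])            # paths whose children remain to expand
    visited = 0
    while queue and visited < max_nodes:
        path = queue.popleft()
        for a in range(1, num_actions + 1):
            child = path + (a,)
            for step in child[:-1]:
                pairs.append((step, False))   # replay prefix from the root
            pairs.append((child[-1], True))   # expand new node, then reset
            queue.append(child)
            visited += 1
            if visited >= max_nodes:
                break
    return pairs
```

With two actions, the first six pairs reproduce the sequence written out in the rebuttal: (a1, reset), (a2, reset), (a1, not reset), (a1, reset), (a1, not reset), (a2, reset).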
Summary: The authors propose Thinker, a model-based RL algorithm that transforms an MDP into one where at each step, the agent is required to generate plans in imagination before selecting a real-world action. Then, through the RL gradient, the agent learns to plan in order to act. The authors get good results on Atari and Sokoban benchmarks. Strengths: - An interesting way of combining planning and learning by integrating planning into the MDP itself. By viewing planning as part of the MDP itself, an RL agent maximizing this MDP will learn how to plan rather than relying on a given planning algorithm to maximize reward. - Good results on Sokoban, a hard planning task, and analysis (some issues, see below) Weaknesses: There are two major concerns I have, one in experimentation and one in writing. **Experimentation issues**. - **Lack of baselines** In Sokoban, the work currently compares to one baseline, DRC. DRC is a good choice for a baseline, since it also is a "learning to plan" baseline. - However, it is worth comparing the learning-to-plan strategy against hand-crafted planning and other ways of using the world model for RL. For example, Dyna and model-based value expansion are two alternative ways to leverage a world model for RL. I would like to see competitive baselines from these families of MBRL agents (e.g. DreamerV3 and STEVE). - The authors claim to omit common baselines since they want to investigate the benefit of the Augmented MDP. If so, I would like to see the Augmented MDP be applied to a few other RL algorithms (Rainbow, DreamerV3, etc.). Then the focus isn't on raw performance, but rather how much the Augmented MDP improves over the vanilla MDP. - **Low number of seeds (3)** reduces my confidence in the results. Please run at least 5 seeds before reporting metrics, and even better, use IQM and 95% bootstrapped confidence intervals. - **Dual network ablation:** There is no ablation on the dual network itself. 
What happens if we just use one big RNN with the same number of parameters to predict all quantities? - **Parameters for Thinker vs. baselines:** Is it possible the gains from Thinker are just from the increase in world model parameters compared to baselines? **Poor Writing and Presentation.** This paper unfortunately suffers heavily from writing and presentation issues, which I believe will make it inaccessible to readers. The authors tend to write dense paragraphs about the important ideas and mechanisms, rather than summarizing points succinctly and relying on visual figures. It seems like the paper is easy to understand for those deep in the subfields of "learning to plan" and MuZero-style MBRL. However, readers not in this particular area will find it hard to grasp. L88 - The paragraph explaining the planning stage is verbose and hard to understand. The provided Figure 2 hides a lot of the details in the planning stage. I was initially a bit confused between K and L. I thought K was the max length of one rollout. But instead, it seems like K is the # nodes of a planning tree, which is composed of several rollouts all starting from the same root node. Would have been nice to just have a figure of the planning tree to avoid this confusion, perhaps Figure 2 can be updated to have these details? Some more explanation of the planning reward is needed. How does this avoid penalizing searching for paths with low rollout returns? Is this because we are using max instead of mean return? Technical Quality: 3 good Clarity: 1 poor Questions for Authors: I think there is a lot of potential in this paper. It needs a good amount of polishing on the writing side to make the idea digestible to a general reader. Next, the experiment section can be improved in a variety of ways - more baselines, ablations, and seeds. Some additional questions: Why is the non-stationary network needed? 
Couldn't we get value and policy distributions directly by evaluating the policy and value functions on the predicted states? Why does K=1 ablation perform poorly? I thought this would be equivalent to the Raw-MDP baseline. What is the agent policy vs. the model policy? Is the agent policy the actor in IMPALA and the model policy the world model's predicted action probabilities? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The limitations section is already quite fair. However, there are some missing pieces. 1. Computational Cost. MBRL algorithms can be very slow to run in comparison to model free RL. I would like to see more information about the computational and resource cost of Thinker, and how it compares to the model-free baseline. How many GPUs does it require, how long does it take to run, and what are the main performance bottlenecks? 2. Hyperparameter tuning. There seems to be quite a bit of hyperparameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and detailed feedback! We appreciate the reviewer recognizing the potential of our paper while also raising valid concerns regarding the experiments. The weaknesses are addressed as follows: **Insufficient baselines** - Please refer to (G1) in the global response. **Number of seeds** - We have added two more seeds to the Sokoban run, bringing the total to five seeds. The results can be found in Fig A4. In general, Thinker's performance on Sokoban has a very low variance, so having more seeds won't affect the conclusion. Regarding Atari, we acknowledge that using more seeds can enhance result reliability. However, the significant costs associated with it (55 runs, with each run requiring 7 days of training on two A100 workstations) prevent us from incorporating a larger number of seeds. As our code is fully public, one can also reproduce our results easily. Moreover, we have not claimed that our results would achieve state-of-the-art performance in Atari, where a larger number of seeds might be necessary. Lastly, we kindly ask the reviewer to consider the limited budget we have. **Dual network ablation** - Please refer to (G2) in the global response. **Parameters for Thinker vs baselines** - Please refer to (G3) in the global response. **Poor writing** - We thank the reviewer for pointing out the unclear aspects of the paper, and we will improve the clarity of the paper as follows. **The paragraph explaining the planning stage** - The paragraph in line 88, along with Figure 2, is intended to give an overview of the planning stage. We acknowledge that it may be too verbose. We will start a new paragraph after *the new real state* on line 96 and remove the line *In each of the K steps, the agent sees....*, which should be better explained in Section 3.2. We also include a better version of Figure 2 that should be clearer, which can be found in Fig A1 of the attached pdf. 
**About K and L** - In our paper, K represents the total number of steps taken during a planning stage. Of these steps, K-1 correspond to the 'imaginary actions', which is the same as the number of nodes the agent traverses within a tree. It's worth noting that this doesn't equate to the total number of nodes present in the tree, because the agent can traverse the same node multiple times. On the other hand, L denotes the maximum depth the agent can reach during a rollout. If the agent's search hits this depth (L), it will be reset to the starting point or root node. Typically, the value of L (for instance, L=5) is smaller than K (for instance, K=20). **Planning reward** - Due to page constraints, we have detailed the planning reward in Section H of the appendix. The reviewer is right: using `max` over `mean` avoids penalizing rollouts with low returns. As elaborated in the appendix, the planning reward acts as a heuristic, guiding the agent to maximize the highest possible rollout return. This gives the agent incentive to try rollouts that have both a low expected return and high variance. If we were to use `mean` instead of `max`, the agent would not have incentives to try rollouts with low expected returns. The questions raised are addressed as follows: > Why is the non-stationary network needed? First, the predicted states may not be perfect, especially during early training, so directly applying a value or policy function to predicted states may lead to inaccurate outputs. As the non-stationary network itself is an RNN, one can recover the MuZero network's way of predicting future policies and values, even when the predicted states are all wrong. Secondly, the RL agent is an RNN that sees both the past imaginary transitions and real transitions. The RL agent receives the augmented state as input in each recurrent step instead of the raw state. Thus, the value and policy function from the RL agent cannot be directly applied to predicted states. 
Another way of seeing this is that the RL agent's policy acts like a system 2 policy that requires heavy planning, while the model's policy acts like a system 1 policy that only requires a single feedforward pass. The system 1 policy serves as the distilled version of the system 2 policy and guides the search in the model, so they cannot be replaced by one another. > Why does K=1 ablation perform poorly? Please refer to our response to Reviewer WUsk (the second question). > Is the agent policy the actor in IMPALA and the model policy the world model's predicted action probabilities? Yes. The limitations raised are addressed as follows: **Computational Cost** We acknowledge the need for a more thorough discussion on the computation cost and will add a discussion in the appendix. Please refer to our response to Reviewer WUsk regarding computation cost (the last weakness). The main bottleneck of the algorithm is training the actor-critic network, and that is why replacing the RNN in the actor-critic network with an MLP can significantly reduce training time. The bottleneck becomes training the model if we use an MLP actor-critic network. **Hyperparameter tuning** Yes, our method does introduce additional hyperparameters, specifically the number of planning steps, K, and the maximum search depth, L. However, we found that our selected hyperparameters are largely transferable across different tasks. As evidence, our Atari results were achieved by directly applying the hyperparameters from the Sokoban task, with the only modifications being an increase in network depth and a decrease in the learning rate. The same set of hyperparameters is used on 56 environments. This suggests that extensive hyperparameter tuning is not necessary for most environments. We hope that the answers provided above adequately address the issues raised by the reviewer, and we kindly ask the reviewer to consider adjusting the score based on our responses. 
We are grateful for the reviewer's comments and welcome any further questions! --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and new experiments. The additional baselines and evaluations clear up my concerns about experimentation. The new figure does explain the Thinker planning process more clearly. I see the point about K, L, and the fact that the tree can reuse nodes if there are repeat states. Could you also add a figure showing some example trees with varying K and L? That would make it very easy for readers to quickly get the idea without getting confused by the text alone. Updating my score to a 6. --- With my immediate concerns cleared up, I would like to add some more discussion. First, my opinion is that Thinker is tightly coupled with its model-based RL algorithm. However, conceptually, it seems like we can use other RL algorithms with the Thinker-augmented MDP. It would be very interesting to compare commonly used RL algorithms with their Thinker counterparts, like SAC, PPO, etc. Perhaps some RL agents will be better with Thinker, and others will not, depending on their design choices. At the very least, the authors should include this in the limitations and pose it as a question for future work. Next, planning with K steps every env step seems costly. Can we come up with a version of Thinker that does variable-length planning? --- Reply to Comment 1.1.1: Comment: Thank you for your comment and the updated score. In the main paper, we will add a figure showing an example search tree with the corresponding imaginary actions, to better clarify the concepts of planning steps (K), maximum search depth (L), and the reset action. > First, my opinion is that Thinker is tightly coupled with its model-based RL algorithm. However, conceptually, it seems like we can use other RL algorithms with the Thinker-augmented MDP. Throughout our study, we employed IMPALA, a standard model-free actor-critic RL algorithm, on the Thinker-augmented MDP. 
The preference for IMPALA over PPO was mainly due to its superior computational efficiency; IMPALA's design facilitates parallel threads for collecting self-play transitions. We postulate that the performance of PPO or SAC on the Thinker-augmented MDP would align closely with IMPALA's since all these are variants of actor-critic algorithms. We agree that it would be very interesting to experiment with how the Thinker-augmented MDP interacts with other model-free RL algorithms, especially value-based ones like Q-Learning. As cited in line 333 of the paper, *"Exploring how other RL algorithms perform within the Thinker-augmented MDP is an additional direction for future work."* > Next, planning with K steps every env step seems costly. Can we come up with a version of Thinker that does variable length planning? We have briefly experimented with introducing an extra action in the augmented MDP, so the RL agent can choose to stop planning and act. We found that the final performance at 5e7 frames is similar, and the number of planning steps is reduced by around half. We did not include this in the paper as (i) the focus of the paper is the performance of Thinker given a limited number of frames instead of wall time, (ii) as Thinker opens up a new way of using a learned model in model-based RL, we believe the algorithm should be as simple as possible so as to allow future work to build on it easily.
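The interplay of planning steps (K), maximum search depth (L), and the max-based planning reward discussed in this thread can be sketched in a few lines. This is a simplified, hypothetical illustration: `planning_stage` and the scripted `node_returns` stand in for the real tree search and learned model, and in the actual algorithm the imaginary actions are chosen by the actor-critic network rather than scripted.

```python
def planning_stage(node_returns, K=20, L=5):
    """One planning stage: K-1 imaginary steps, then a real action.

    The agent is reset to the root whenever a rollout reaches depth L.
    The planning reward at each step is the *increase* of the best (max)
    rollout return seen so far, so exploring a low-return rollout is never
    penalised (a mean-based reward would penalise such exploration).
    """
    depth, best = 0, node_returns[0]      # baseline: root's estimated return
    planning_rewards = []
    for step in range(1, K):              # K-1 imaginary actions; step K acts for real
        depth += 1
        g = node_returns[step % len(node_returns)]  # return of the node reached
        planning_rewards.append(max(g - best, 0.0))
        best = max(best, g)
        if depth == L:                    # max search depth hit: reset to root
            depth = 0
    return best, planning_rewards
```

With K=4 and L=2, only the step that discovers a better rollout earns a planning reward; revisiting worse rollouts costs nothing, which is the incentive to explore low-expected-return, high-variance rollouts described above.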
Summary: - This paper introduces a novel algorithm, Thinker, which learns how to plan by interacting with a learned world model using "imaginary actions", negating the need for hand-designing a planner. The algorithm produces SOTA results on Sokoban, and its efficacy on Atari 2600 is demonstrated as well. Strengths: - The algorithm design is natural given the motivation of the work. - The introduced algorithm has a number of novel components, including the planning stage over the augmented MDP, the augmented reward, and the dual network architecture. - Thinker produces state-of-the-art results on Sokoban (over the previous state of the art, DRC), and in Atari 2600, the Thinker-augmented approach outperforms the non-Thinker-augmented baseline. - The writing is clear and relatively concise, the paper is well-structured, the significance of the work is well-communicated and compelling, and the related work section appears to be complete. Weaknesses: - The authors note "as the goal of the [Atari 2600] experiments is to investigate the benefits of the augmented MDP, we do not include other common baselines here" (line 307). While this point is understood, the results on Atari would have been more compelling if Thinker had outperformed comparable learned planning approaches, e.g. those mentioned in related work. - Figure 1 is a bit difficult to read: it is hard to note the difference between each of the successive images. Better visualization to highlight the differentials (and zooming in on the level, maybe) could have been employed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - In line 234, various approaches using gradient updates to plan within a planning stage are cited. These approaches do not appear to be used in evaluation as baselines/ablations; why not? - Could some of the improved results be due to the novel dual network architecture, rather than the new algorithm? 
I don't see an ablation for this in the main body or supplementary material. - How were the output statistics chosen? Could alternatives have been used? - I am confused by the phrase "the real action at the first K-1 steps" (132), since I understand that the first K-1 actions are imaginary. Is it possible the authors are trying to say that we're not using real actions for the first K-1 steps? It seems like these real actions are not simply "not used", but they don't exist in the first place. The structure of the sentence also makes it unclear to me which "imaginary action" we are talking about (I would suggest maybe ditching the last comma for clarity, since this also corresponds to the last step, K). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - The authors note the limitations that (i) the algorithm carries a large computational cost, (ii) only rigid planning is supported, i.e. requiring the agent to roll out from the root state and restricting it to planning for a fixed number of steps, and (iii) a deterministic environment is assumed. I agree with the authors that these elements are appropriate for future work. - No other major limitations stand out to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive comments and detailed feedback! We appreciate the reviewer's recognition of our work. The weaknesses are addressed as follows: **Insufficient baselines** - Please refer to (G1) in the global response. **Clarity of Figure 1** - We will use a bordered window to emphasize the agent and enlarge the images. Additionally, we will update the border color of the rollout image to match the rollout summary on the left. The questions raised are addressed as follows: > In line 234, various approaches using gradient updates to plan within a planning stage are cited. These approaches do not appear to be used in evaluation as baselines/ablations; why not? This is because [1, 2] require perfect information about the MDP, meaning one needs a perfect simulator of the environment, while [3, 4] require a continuous action space. We focus on the conventional RL setting, where prior knowledge about the MDP is not available, and only consider MDPs with a discrete action space. > Could some of the improved results be due to the novel dual network architecture, rather than the new algorithm? I don't see an ablation for this in the main body or supplementary material. Please refer to (G3) in the global response for the ablation results on imaginary actions. If we replace the learned imaginary action with a random imaginary action, performance drops significantly. As such, the improved results are not only due to the novel dual network architecture but also the learning of imaginary actions in the augmented MDP. Section G in the appendix contains additional comparisons with MCTS. > How were the output statistics chosen? Could alternatives have been used? The most important output statistics are values and policies, inspired by the following rationale: values allow evaluating a rollout without the need to roll out until the end of the episode, while policies allow the search to be more efficient by providing hints to the agent. 
Both of these quantities are critical - see Fig F4 in the appendix, which shows learning cannot occur without these two quantities. We do not know if there are any alternatives that can replace these two quantities. > It seems like these real actions are not simply "not used", but they don't exist in the first place. Yes, one can interpret that the real actions at the first K-1 planning steps, or the imaginary action at the K-th planning step, do not exist at all. However, since the actor-critic network outputs both a real action and an imaginary action from different heads at each planning step, those actions do exist in the code (but they are discarded). We acknowledge that it is clearer if we state that the first K-1 actions are imaginary actions while the K-th action is a real action, instead of saying the other actions are not used. We will revise the paper to reflect this. Thank you for the suggestion. We hope that the answers provided above adequately address the issues raised by the reviewer. We are grateful for the reviewer's comments and welcome any further questions! **References** [1] Thomas Anthony, Robert Nishihara, Philipp Moritz, Tim Salimans, and John Schulman. Policy gradient search: Online planning and expert iteration without search trees. arXiv preprint arXiv:1904.03646, 2019. [2] Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, and Noam Brown. Scalable online planning via reinforcement learning fine-tuning. Advances in Neural Information Processing Systems, 34:16951–16963, 2021. [3] Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Universal planning networks: Learning generalizable representations for visuomotor control. In International Conference on Machine Learning, pages 4732–4741. PMLR, 2018. [4] Mikael Henaff, William F Whitney, and Yann LeCun. Model-based planning with discrete and continuous actions. arXiv preprint arXiv:1705.07177, 2017. 
--- Rebuttal Comment 1.1: Comment: The authors have addressed my questions adequately and I maintain my positive opinion of the paper.
Summary: This paper presents a model-based RL approach, Thinker, that learns a world model and plans to take actions by generating imaginary rollouts with the learned world model. The paper claims that Thinker is a general method for any RL algorithm and does not rely on any hand-crafted planning algorithm. Experiments were conducted on Sokoban and the Atari 2600 benchmark, showing better performance than model-free RL methods. Strengths: 1. The basic idea of learning world models and training model-based RL policies with the learned models is technically solid. Experiments indeed show better performance over model-free baselines. 2. The method description is clear and it is good to see performance in different domains. Weaknesses: 1. I am very surprised that Thinker is not compared against model-based RL baselines. 2. The contribution is unclear to me: a) I do not understand the claim about not having a hand-crafted planning algorithm. Based on the method described in the paper, you did introduce an MCTS-style planning algorithm. This claim needs to be explained and justified. b) What is the difference between your method and other model-based RL methods? Particularly, it looks similar to MuZero, except that you have a different network architecture for learning the world model. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please address my questions in the review. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: It is hard for me to judge whether limitations have been sufficiently addressed since I do not even understand what new technical advances have been proposed here. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. The questions raised are addressed as follows: **Insufficient baselines** - Please refer to (G1) in the global response. **The difference with MuZero** - Please refer to the first three paragraphs of the global response and reviewer #fWg3's summary of the algorithm. In particular, *we did not introduce any MCTS in our algorithm*. An actor-critic network is trained to select the imaginary action (instead of using the UCB formula, as in MCTS) and the real action (instead of using visit counts, as in MCTS) by maximizing the return in the augmented MDP. An RL agent can learn a variety of planning algorithms depending on its needs. Please refer to Appendix G for the analysis of the agents' learned planning algorithm, which differs significantly from hand-crafted algorithms such as MCTS. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: After reading the responses, I better understand the contribution. I also appreciate the additional results. I have increased my rating accordingly.
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive comments and detailed feedback. We appreciate that most reviewers recognize the novelty and value of our proposed algorithm, Thinker. We note that some reviewers seem to underestimate the novelty, viewing Thinker as merely a variant of MuZero, or focusing too much on the architecture as opposed to the core novelty of our approach, which is to unify planning and acting within a single augmented MDP (hence the paper's title). For reference, Reviewer #fWg3 provided a succinct summary of the algorithm, and a clearer illustration of the algorithm (replacing Fig 2 in the paper) is shown in Fig A1 of the attached pdf. To clarify the novelty with respect to MuZero, MuZero can be understood as (i) MCTS and (ii) a learned model, while Thinker can be understood as (i) an RL agent in an augmented MDP and (ii) a learned model with the dual network architecture. Thus, *Thinker does not employ MCTS in any capacity.* While the specific architecture of the model is not our central concern, we do note that the use of a dual network enhances performance, and we consider it a significant contribution, just not the main one. **Thinker is the first work showing that an RL agent can learn to plan with a learned world model in complex environments.** The area of 'learning to plan' is still nascent, largely due to its intrinsic challenges. Prior works either do not satisfy (i) learning to plan, i.e., training a neural network to select imaginary actions, (ii) using a learned model, or (iii) being evaluated in a complex environment. IBP [20] shares similar ideas with Thinker, but their models do not predict values and policies, which we found necessary in complex domains (see Fig F4 in the appendix). As such, their work has only been shown to work in simple domains, and (iii) is not satisfied. TreeQN [14] and I2A [15] do not satisfy (i). VIN [19] and DRC [10] do not satisfy (ii). 
Another line of work [21-24] corresponds to algorithms that perform gradient updates within a planning stage. This requires either a perfect model [21, 24] or simple environments [23], so either (ii) or (iii) is not satisfied. ([22] is an imitation learning algorithm instead of an RL algorithm.) Common issues raised by reviewers are addressed as follows: **(G1) Additional Baselines** We added six new baselines in the Sokoban domain, including Dreamer-v3 [52], MuZero [1], I2A [15], ATreeC [14], VIN [19], and IMPALA with ResNet [35]. We use an open-source implementation of Dreamer-v3, with the result being averaged over three seeds. MuZero's result is taken from [53]. Results of DRC (original paper), I2A, ATreeC, VIN, and IMPALA with ResNet are taken from [10]. The results are shown in Fig A2 and Table A1 in the attached pdf. In hard planning domains, planning-oriented RL algorithms, such as DRC, can generally achieve much better performance than other RL algorithms. Within planning-oriented RL algorithms, DRC usually outperforms the others by a wide margin [10]. As such, we only included DRC as the baseline in our submission, but we agree with the reviewers that more baselines would enrich our paper. As for Atari, we added the new baselines of UNREAL [54] and LASER [55] from previous papers, and the results are shown in Table A2. Readers who are interested in more baselines can refer to other papers, as Atari 200m is a common benchmark. Note that the goal of the Atari experiments is to evaluate the benefits of the Thinker-augmented MDP in domains beyond planning, instead of competing with baselines. **(G2) Ablation on the Dual Network** We conducted three additional ablation experiments: (i) Single network that predicts the relevant quantities but not future states, which is the same as MuZero's model. 
(ii) Same as (i), but with the addition of predicting states using L2 loss (though the reviewers did not request this run, we believe it provides valuable insights. The experiment is still running, and we expect to complete the training by Aug 13). (iii) Dual network with L2 state prediction loss (instead of feature loss). The result is shown in Fig A3. **(G3) Ablation on Imaginary Action** To address reviewers' concerns that the performance may be due to the model's increased parameter count or additional features, we included an ablation experiment by using random imaginary actions while training the RL agent only on real actions. The result is presented in Fig A3. We also note that the IMPALA with ResNet shown in Fig A2 has 52.5m parameters, whereas Thinker has 8.5m parameters, so increasing the parameter count does not necessarily improve performance. All aforementioned figures will be incorporated into the revised appendix. With the new experiments included, *our paper now encompasses over 70 distinct runs, spanning evaluations across 56 environments and 16 ablation studies, in addition to the extensive analysis in the appendix.* Most related works on learning to plan have fewer experiments. For example, [24], published in NeurIPS 2021, evaluated their algorithm on two environments with a single baseline. We hope that our new experiments, along with consideration of the computational cost, could address the reviewers’ concerns regarding experiments. **References** [1-51] See the main paper’s references. [52] Hafner, D., Pasukonis, J., Ba, J., & Lillicrap, T. (2023). Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104. [53] Hamrick, J. B., Friesen, A. L., Behbahani, F., Guez, A., Viola, F., Witherspoon, S., ... & Weber, T. (2020). On the role of planning in model-based deep reinforcement learning. arXiv preprint arXiv:2011.04021. [54] Jaderberg, Max, et al. "Reinforcement learning with unsupervised auxiliary tasks." 
arXiv preprint arXiv:1611.05397 (2016). [55] Schmitt, S., Hessel, M., & Simonyan, K. (2020). Off-policy actor-critic with shared experience replay. In ICML (pp. 8545-8554). PMLR. Pdf: /pdf/862ab80cbf0f418ecf7d6b3ee2b3165f127eec58.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work presents Thinker, a new model-based reinforcement learning algorithm that achieves state-of-the-art performance on the puzzle video game Sokoban. The main contribution made by the authors is the dual network architecture, which addresses the sample inefficiency problem that other algorithms such as MuZero have. This dual-network setup consists of two sub-networks, namely a stationary and a non-stationary network. The stationary network takes as input the current state and a sequence of raw actions and predicts future states, rewards, and episode termination probabilities. The non-stationary network takes as input both the stationary network's inputs and its predicted next states. In addition to the supervised training loss over the four predicted quantities, the authors propose another L2 loss that encourages the encoded representation of the stationary network's predicted state to match the encoded representation of the true state observed. The encoder helps the non-stationary network focus on only encoding task-specific information. The stationary network encodes static, policy-independent information, while the non-stationary network encodes information that changes as the policy changes. Lastly, the encoder is only updated using the loss of the non-stationary network. The stationary network uses this encoder, without updating it, and updates its future state predictions to minimise the squared L2 distances in latent space. Strengths: The authors show that their proposed system achieves a new state-of-the-art result on Sokoban. Furthermore, they propose using an L2 distance in a latent representation instead of using the full state. This makes the entire system much more scalable. They further provide a detailed breakdown of the algorithm in the appendices and open-source their code. Weaknesses: The main weakness I see is that the use of these dual networks is not well motivated. 
Along with better motivation, an additional experiment or two is needed to demonstrate that the dual networks outperform a single-network setup. Furthermore, it would be helpful to directly compare against open-source versions of models such as DreamerV3 and MuZero. This can help motivate in what ways their approach is superior. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. On line 245 you state, "In contrast to these methods, our proposed dual network fits the next-state prediction with a feature loss that enables visualization and prioritizes the learning of task-relevant features." How does a feature loss enable visualisation? Especially if the agent only needs to remember task-relevant information. When the stationary network predicts the next state, would it not be able to ignore parts of the state that the encoder does not use? Therefore the latent-space distance is still close to zero, but the predicted state might have some missing information. 2. On line 210 you state, "It is important to note that g is not being optimized when minimizing this loss, and ŝ_{t+l} does not receive gradients from the non-stationary network, as the two sub-networks are separately optimized. Further details regarding the model's training can be found in Appendix B." If it uses the encoder from the non-stationary network, would that not make the stationary network's updates non-stationary as well? As the policy changes, the encoder g's parameters might change. This in turn changes the next-state representation that gets generated. 3. On line 192 you state, "Nonetheless, this approach suffers from sample inefficiency since it discards information from the future state, which carries a rich supervised learning signal." In what way does the dual network approach address the inefficiency problem that you state other methods such as MuZero and DreamerV3 have? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This proposed solution currently only works for environments with deterministic dynamics. Furthermore, the planning component is quite computationally expensive, which might make it too expensive to use in many scenarios. The main limitation I see is that there has not been enough evidence provided that the dual network setup performs better than alternatives. Especially since the stationary network actually uses components from the non-stationary setup. It would be of great benefit to have additional experiments that show the superiority of this architecture when compared to alternatives (such as predicting every value using a single network). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
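The latent-matching objective described in the summary above (minimising the squared L2 distance between the encodings of the stationary network's predicted state and of the observed state, with the encoder held fixed from the stationary network's perspective) can be sketched as follows. This is a minimal numpy illustration; the linear encoder, dimensions, and step size are assumptions rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder" g: trained only by the non-stationary network, so the
# stationary network treats its parameters as constants (stop-gradient).
W_enc = rng.normal(size=(8, 16))           # frozen from the stationary net's view

def encode(state):
    return W_enc @ state                    # linear stand-in for the real encoder

# Stationary network's predicted next state and the true observed next state.
s_pred = rng.normal(size=16)                # predicted s_{t+1}
s_true = rng.normal(size=16)                # observed s_{t+1}

def feature_loss(s_pred, s_true):
    """Squared L2 distance in latent space, not in raw state space."""
    diff = encode(s_pred) - encode(s_true)
    return float(np.sum(diff ** 2))

# Gradient flows only into s_pred (and hence the stationary network), never
# into W_enc: d/ds_pred ||W s_pred - W s_true||^2 = 2 W^T (W s_pred - W s_true)
grad_s_pred = 2.0 * W_enc.T @ (encode(s_pred) - encode(s_true))

# One small gradient step on the prediction reduces the latent loss.
s_better = s_pred - 0.005 * grad_s_pred
assert feature_loss(s_better, s_true) < feature_loss(s_pred, s_true)
```

The point of the sketch is the asymmetry: the encoder's parameters are updated elsewhere, so the state-predicting network only adjusts its predictions to match in latent space.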
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive comments and detailed feedback! The summary gives a clear description of the proposed dual network, but we would like to point out that the main focus of the paper is the augmented MDP and learning to plan (thus the paper’s title), and the dual network is only one part of the overall algorithm. The weaknesses are addressed as follows: **Insufficient baselines** - Please refer to (G1) in the global response. **Ablation on the dual network** - Please refer to (G2) in the global response. The questions raised are addressed as follows: > How does a feature loss enable visualisation? Previous work [1] has shown that autoencoder training can rely solely on a feature (or perceptual) loss, omitting pixel-space losses, while still achieving good pixel-level reconstruction. We conjecture that the reasons are (i) the feature loss retains the spatial structure, as convolutional layers are used, and (ii) activations from early layers (1-5) are usually employed, so the features are still close to the image itself. It is common in computer vision for the network used to compute the feature (or perceptual) loss to be pre-trained and fixed. However, Thinker deviates from this norm by utilizing features derived from a network that is concurrently trained to predict policy and values. Nevertheless, we believe the intuition is the same. Moreover, because the feature loss must be satisfied under the dynamics of a concurrently trained network, it arguably incentivizes the network to also predict well in image space. Finally, it’s worth noting that the domain of games is much simpler than the domain of natural images, possibly another reason why the feature loss reconstructs images so well in our experiments. We agree that the feature loss does not necessarily lead to accurate reconstruction. Indeed, one can see some artefacts in the video in the main paper, especially in Atari games. We believe these artefacts are typical of feature losses. 
For instance, Fig 6 in [1] shows that by using L2 loss, the network omits the purple ball in the reconstructed image, but the feature loss reconstructs it with a dotted grid-like object. The same dotted grid-like object can be seen at the 30:44 timestamp of the video in the paper. Nonetheless, we observe that reconstructed images from the video are of high quality and can be easily interpretable. > If it uses the encoder from the non-stationary network would that not make the stationary network’s updates non-stationary as well? Yes, the loss function is non-stationary, which changes as the non-stationary network is updated. But we did not find any issues with the non-stationary loss. An ablation analysis of directly using L2 loss on state space is also included in Fig A3, showing that the non-stationary loss yields better performance than the vanilla L2 loss on the raw state space. We agree that the name *stationary network* may be confusing as it uses a non-stationary loss. As such, we will revise the naming from *stationary network* to *state-reward network* and *non-stationary network* to *value-policy network* to better reflect their nature. > In what way does the dual network approach address the inefficiency problem that you state other methods such as MuZero and DreamerV3 have? We include the feature loss such that the learned model has to predict future features of the environment. This provides an additional supervised learning signal for the model. The stationary network (or the state-reward network) is designed to predict future states, which then act as intermediary steps or hints for estimating future output statistics. Take, for instance, the task of predicting the last reward after making the moves (up, up, up, right) in Sokoban. Predicting the reward becomes easier based on projected future frames. For example, if the character is anticipated to be on the left of a box after three upward moves, then the likely reward prediction would be one. 
The primary role of the stationary network (or the state-reward network) is to make these future frame predictions. While, in theory, a neural network could autonomously learn this intermediary step, doing so would necessitate significantly more data. As seen in Figure A2, MuZero generally requires a much larger number of frames to reach competitive performance. As for DreamerV3, we believe its model is data-efficient (it performs well on the Atari 100k benchmark) as it also includes an image reconstruction loss in training the model. However, DreamerV3 still performs a lot worse than Thinker in Sokoban - this is likely due to the extensive planning imposed on the Thinker algorithm. Again, Thinker differs from both MuZero and DreamerV3 in both how the model is used (augmented MDP in Thinker vs. MCTS in MuZero vs. Dyna-like simulation in DreamerV3) and the model itself (dual network in Thinker vs. single network in MuZero vs. VAE in DreamerV3, which does not predict the future values or policies that are necessary for an agent to learn in the augmented MDP). We hope that the answers provided above adequately address the issues raised by the reviewer, and we kindly ask the reviewer to consider adjusting the score based on our responses. We are grateful for the reviewer’s comments and welcome any further questions! **References** [1] Gustav Grund Pihlgren, Fredrik Sandin, Marcus Liwicki. Improving Image Autoencoder Embeddings with Perceptual Loss, arXiv preprint arXiv:2001.03444, 2020 --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their response and for the proposed updates to the paper. I think the updated network names are much better. Furthermore, it is now easier to verify the necessity of using dual networks over just a single network. The results also seem to indicate that Thinker significantly outperforms Dreamer-v3 and MuZero on Sokoban when using environment frames as a reference. I have updated my score to 5. 
One question I still have is how Thinker compares against the other algorithms when considering wall-clock time. Do the other algorithms perform better on this metric? If they perform better here, what is the main advantage of Thinker? Would it maybe perform better in environments with computationally intensive/slow environment steps? --- Reply to Comment 1.1.1: Comment: Thank you for your comment and the updated score. Thinker does indeed outperform both Dreamer-v3 and MuZero on Sokoban when using environment frames as a reference. While Dreamer-v3 isn't specifically tailored for the planning domain, its performance understandably lags behind RL algorithms designed for planning. Regarding MuZero, its model's sample inefficiency means it needs considerably more frames than Thinker (it requires 20 billion frames for Atari). Moreover, substituting MCTS with an RL agent introduces greater flexibility in both planning and acting. For example, if planning proves non-essential, the RL agent can bypass the search and behave like a typical plan-free RL agent; or, as Appendix G demonstrates, an RL agent might choose to stick to an existing plan unless encountering an unfavorable leaf node, which is not possible with MCTS. When we take wall time as a reference, the same conclusion holds: Thinker outperforms both Dreamer-v3 and MuZero on Sokoban. As we mentioned in our response to Reviewer WUsk, Dreamer-v3 and MuZero require similar or longer training times. Wall time is not the primary focus of our paper, and various modifications and implementation details can improve Thinker's wall time. For instance, substituting the RNN in the actor-critic with an MLP and reducing the planning steps from 20 to 10 could decrease the training time by over 70% while maintaining similar performance levels. 
Our interest mainly lies in evaluating the performance of an RL algorithm based on a fixed number of frames, a more typical reference point as seen in the Atari-200m benchmark.
Summary: This paper presents the Thinker algorithm, a method that augments an MDP such that an agent takes 'imaginary actions' in a learned model prior to executing actions in the environment. This allows the agent to incorporate planning strategies into a policy learned with RL. State-of-the-art performance is achieved on Sokoban and competitive results are shown on Atari. Strengths: - The paper is well-written and polished. - The appendix is thorough and documents implementation details and architecture details. Source code is provided for reproducibility. - The proposed algorithm for augmenting an MDP to induce planning behavior in an agent seems to be original, as is the architecture of the dual network for learning the model. The results on Sokoban and certain Atari tasks also appear to be quite strong. - An analysis of the learned behavior of agents (e.g., when resetting happens during planning, what types of imaginary and real actions are taken) is provided in the Appendix. This is useful for characterizing the behaviors of the learned policies. Weaknesses: - Comparison is given on Sokoban to DRC and to Rainbow on Atari. Most significantly, I think this work would benefit from comparison to model-based RL baselines to help contextualize how well the Thinker algorithm performs. - The method has multiple moving parts and several involved design choices (e.g., what elements to include in the augmented state, non-stationary planning reward, dual network architecture), though ablations are provided for some of these choices in Appendix F. - Related to the above point, regarding the dual network. The paper notes that "one could adopt the same architecture and training methodology used by MuZero" but that "[MuZero] suffers from sample inefficiency since it discards information from the future state." 
It would be useful to provide a quantitative comparison/ablation here, where the encoder/unrolling function/prediction function are learned in the manner of MuZero, as described, in order to justify the proposed architecture. - The augmented MDP seems to increase the complexity of the MDP (in the state-action space as well as the time-horizon). An expanded discussion on the computational cost of both training and evaluation, compared to using the raw MDP and also to other model-based methods, would be useful. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would appreciate clarification on the items described in weaknesses; some additional questions: - Is it correct that the augmentation increases the cardinality of the action space from |A| to 2|A|^2? Does this adversely affect the difficulty of learning a policy? - Is there an intuition for why the K=1 case does worse than in the raw MDP in Figure 5? - An ablation on the unroll length L is provided in Figure F.3 in the Appendix. Have the authors tested additional values <5 or >10 -- i.e., is there a clear trend here? I am also curious how sensitive the values of L and K are to a particular task. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The authors have noted some limitations in Section 6, including computational cost and rigid planning for a fixed number of steps, including determinism of the MDP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive comments and detailed feedback! The weaknesses are addressed as follows: **Insufficient baselines** - Please refer to (G1) in the global response. **Multiple moving parts** - Please refer to (G2) and (G3) in the global response for more ablation analysis. **Ablation on the dual network** - Please refer to (G2) in the global response. **Computation cost** - We acknowledge the need for a more thorough discussion on the computation cost, and will add the following discussion in the appendix when discussing the new baselines: *As discussed in the main paper, Thinker augments the MDP by adding K-1 steps for every step in the underlying MDP. Consequently, there is an increased computational cost due to the extra K-1 imaginary steps, in addition to the added computational cost of training the model. On our workstation equipped with two A100s, the training durations for the raw MDP, DRC, and Thinker on Sokoban are around 1, 2, and 7 days, respectively.* *However, when benchmarked against other model-based RL baselines, Thinker's training time is similar or even shorter. For instance, Dreamer-v3 (the open-sourced implementation) requires around 7 days for a single Sokoban run on our hardware, primarily due to its high replay ratio. MuZero, when conducting a standard number of simulations (e.g., 100), requires 10 days of training. It is noteworthy that despite the Thinker's additional training overhead for RL agents, the algorithm compensates by requiring significantly fewer simulations. Furthermore, if one replaces the RNN in the actor-critic network with an MLP (as detailed in Appendix F), without compromising the performance, the computational time for a single Thinker run can be reduced to 3 days.* The questions raised are addressed as follows: >Is it correct that the augmentation increases the cardinality of the action space from |A| to 2|A|^2? 
Does this adversely affect the difficulty of learning a policy? The cardinality of the action space is 2|A|, as we have (real action / imaginary action, reset) as the new action set. Note that the real actions on imaginary steps (the first K-1 steps in a planning stage) and the imaginary actions on a real step (the K-th step in a planning stage) are not used (or, as Reviewer U2R9 points out, they do not exist), so they can be considered to be the same type of action. As the new action space’s cardinality is still linear in |A|, this shouldn’t adversely affect the difficulty of learning a policy. > Is there an intuition for why the K=1 case does worse than in the raw MDP in Figure 5? The reason for this is that, instead of providing the raw image as input to the actor-critic network for all K steps, we feed in the model's hidden state. While this hidden state might be more effective for predicting future states and rewards of the MDP, it can make learning a good policy more difficult. This is particularly the case because the model undergoes updates throughout the training process. This potentially explains why, in a small number of Atari games, the actor-critic applied to the raw MDP occasionally outperforms the one applied to the Thinker-augmented MDP, as highlighted in Appendix E3: *Interestingly, the actor-critic applied to the raw MDP occasionally outperforms the one applied to the Thinker-augmented MDP. This could be attributed to the fact that we provide the agent with the model’s hidden state instead of the true or predicted frames. It is conceivable that the actual frame is simpler to learn from, given that the model learns the encoding of frames by minimizing supervised learning loss rather than maximizing the agent’s returns.* Future research could consider providing both the raw image and the model's hidden state as inputs to the RL agent to potentially address this issue. 
>Have the authors tested additional values <5 or >10 -- i.e., is there a clear trend here? We have not conducted tests for scenarios where L < 5. However, we have tested the case of L=20 (implying no restriction on search depth given that the number of planning steps, K, is 20) and conducted a brief behavioural analysis. Notably, the learned maximum search depth within a planning stage averages between 16 and 17. This suggests that RL agents typically prefer to unroll a single, extended rollout. In particular, for some simpler Sokoban levels, the RL agent can imagine solving the level in the first real step. Nevertheless, the initial learning speed is notably slower, while the final performance observed is marginally worse. Our conjecture for this trend is that longer training is required for the model to learn to unroll over a span of 20 steps. We did not include this result in the paper as the results are from the early version of Thinker, where the MDP was augmented in a slightly different manner. We hope that the answers provided above adequately address the issues raised by the reviewer, and we kindly ask the reviewer to consider adjusting the score based on our responses. We are grateful for the reviewer’s comments and welcome any further questions! --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: Thank you for the response. I think that the additional ablations (including on the dual network, since it is considered a significant contribution) as well as the additional model-based baselines are necessary additions to the paper, and am glad the authors have included these in their rebuttal. I continue to lean towards accepting the paper.
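The planning-stage structure discussed throughout this thread (K-1 imaginary steps inside the learned model, with an optional reset back to the root state, followed by one real environment step) might be sketched generically as below. The toy model, environment, and random policy are stand-ins under assumed interfaces, not the Thinker implementation.

```python
import random

random.seed(0)

K = 5  # planning steps per stage: K-1 imaginary steps, then 1 real step

def model_step(state, action):
    """Stand-in for the learned model: predicts the next state (here trivially)."""
    return state + action

def env_step(state, action):
    """Stand-in for the real environment."""
    return state + action

def policy(state, step_in_stage):
    """Stand-in actor-critic: returns an (action, reset) pair."""
    return random.choice([-1, 1]), random.random() < 0.2

def planning_stage(real_state):
    imagined = real_state
    for k in range(K - 1):                      # imaginary steps in the model
        action, reset = policy(imagined, k)
        imagined = model_step(imagined, action)
        if reset:                               # agent may rewind to the root
            imagined = real_state
    real_action, _ = policy(imagined, K - 1)    # final step acts in the real MDP
    return env_step(real_state, real_action)

state = 0
for _ in range(3):                              # three real environment steps
    state = planning_stage(state)
```

In this toy version each real step is chosen after a short burst of imagined rollouts, mirroring the augmented MDP's stage structure at the coarsest level.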
Reliable Off-Policy Learning for Dosage Combinations
Accept (poster)
Summary: The paper proposes a novel method for off-policy learning for continuous dosage combinations that aims to be robust to action-space sparsity. To account for drug-drug interactions, the authors propose a dosage combination network (DCNet), which leverages a tensor product basis to estimate smooth interaction effects in the dose-response surface, incorporating all dosages simultaneously into a parameterization with only one head. The DCNet is updated by minimizing the MSE between the individualized dose-response function (the expected patient outcome of a given treatment given the patient covariates) and the observed patient outcome. They then use conditional normalizing flows to model the stochastic and multimodal behaviour policy, which they call the generalized propensity score (GPS). Finally, they learn a parametric policy in a constrained optimization problem that maximizes the individualized dose-response function under the learned policy, subject to the probability of the proposed treatment under the GPS being larger than a threshold. This is approximately solved as an unconstrained Lagrangian problem through stochastic gradient descent-ascent with random restarts. Experiments were conducted on two real healthcare datasets. Comparisons to ablated versions (MLP instead of DCNet, without overlap constraint) suggest that the proposed components reduce regret and improve stability. Strengths: 1. proposes a novel method for off-policy learning for continuous dosage combinations 2. a viable design for constrained policy optimization Weaknesses: 1. dosage dimensionality in the experiments is just 2 or 3; it is hard to say whether the obtained results can generalize to much higher dimensionalities or to treatments intended for different diseases simultaneously 2. the individualized weights in the proposed DCNet are only trained in an unsupervised fashion, i.e., the training objective does not reflect individual dosages 3. 
illegible fonts in figures Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. do you have to learn an analytical form of the GPS if it is only to indicate overlap? 2. what is the harm of assuming independent dosage dimensions, which is mentioned several times throughout the paper but never clearly explained 3. why the single-head individualized dose-response function trained against the overall treatment outcome can account for drug-drug interactions? Is it possible that one drug dominates the others in the model training, or one common combination of dosages dominate less frequent combinations that actually entail more subtle drug-drug interactions? As I understand, the tensor product basis of the proposed DCNet is all inner work of the model, whereas the input, output and training objective are all subject to non-individualized information. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: please see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
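The constrained objective summarised in this review (maximise the estimated dose-response under the policy, subject to the GPS of the proposed dosage exceeding a reliability threshold) is solved via stochastic gradient descent-ascent on a Lagrangian. A minimal one-dimensional sketch of that mechanism follows; the toy dose-response, the Gaussian-shaped GPS, the threshold value, and taking the overlap constraint in log space are all assumptions for illustration, not the paper's models.

```python
import math

# Toy stand-ins (assumptions, not the paper's estimators):
mu_hat = lambda t: -(t - 2.0) ** 2       # estimated dose-response, peak at t=2
gps    = lambda t: math.exp(-t * t)      # behaviour-policy density, peaked at 0
eps_bar = 0.1                            # reliability (overlap) threshold

# Constraint in log space: log gps(t) >= log eps_bar  <=>  log(eps_bar) + t^2 <= 0
t, lam = 0.0, 0.0
lr_t, lr_lam = 0.05, 0.05
for _ in range(3000):
    # Lagrangian: L(t, lam) = -mu_hat(t) + lam * (log(eps_bar) - log(gps(t)))
    grad_t = 2.0 * (t - 2.0) + lam * 2.0 * t                     # descend in t
    t -= lr_t * grad_t
    lam = max(0.0, lam + lr_lam * (math.log(eps_bar) + t * t))   # ascend in lam

# The unconstrained optimum t=2 has gps(2) ~ 0.018 < eps_bar, so the constraint
# binds and the solution sits on the overlap boundary gps(t) = eps_bar.
assert gps(t) >= eps_bar * 0.99
assert abs(t - math.sqrt(math.log(1.0 / eps_bar))) < 0.02
```

The random restarts mentioned in the summary would simply rerun this loop from several initial values of `t` and keep the feasible solution with the highest estimated dose-response.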
Rebuttal 1: Rebuttal: Thank you for your review and for giving us the opportunity to clarify certain aspects of our paper. ### Response to “Weaknesses” 1. Thank you for raising this important point. Our framework can be easily extended to much higher dimensions. To demonstrate this, we added new results for higher dimensions (see **PDF, Fig. 2**). We observe that our framework scales well with higher dimensions and clearly outperforms the other baselines. As such, our framework is directly applicable to multi-disease settings, each requiring different treatments. 2. We would like to clarify that our DCNet is indeed trained in a **supervised** fashion. We would further like to clarify that the training objective _does_ reflect individual dosages. We consider an observational dataset for $n$ patients with i.i.d. observations $\{(x_i, t_i, y_i)\}_{i=1}^{n}$ sampled from a population $(X, T, Y)$, where $X \in \mathcal{X} \subseteq \mathbb{R}^{d}$ are the patients' covariates, $T \in \mathcal{T} \subseteq \mathbb{R}^{p}$ is the assigned $p$-dimensional dosage combination per patient (i.e., multiple continuous treatments), and $Y \in \mathbb{R}$ is the observed outcome per patient. Each patient $x_i$ is then assigned one individualized dosage combination $t_i$, consisting of one dosage for each of the $p$ drugs, leading to the outcome $y_i$, which reflects individual dosages and supervised data. The challenge here is that we only observe _one_ individual dosage combination per patient, but we want to estimate the individualized dose-response function for _all_ potential dosage combinations in $T$ for an individual patient $x$, i.e., $\mu(t, x) = \mathbb{E}[Y(t) \mid X=x]$. However, when the standard assumptions of causal inference hold (i.e., consistency, ignorability, and overlap), we can estimate the function for _all_ potential dosage combinations by regressing $\mu(t, x) = \mathbb{E}[Y \mid T=t, X=x]$. 
Furthermore, our DCNet will then **generalize** to different values of $t$ and $x$, that is, to different dosages and different patients. As a result, we thus estimate $\mu(t, x)$ for individualized regions. This is a major advantage of our method, and it is the reason why we build upon causal inference. Still, our framework avoids regions of the policy with large uncertainty to ensure reliable decisions. 3. We will increase the font size in our plots. ### Responses to “Questions” 1. We need to learn the GPS, i.e. the conditional density $f(T, X)$, over the whole covariate-treatment space for our Lagrangian optimization objective. The reason is that the learned GPS $\hat{f}\left( \hat{\pi}(x_i), x_i \right)$ needs to be evaluated at every training step for every proposed dosage combination $\hat{\pi}(x_i)$, so that we can calculate the training loss $\mathcal{L}_{\pi}(\theta, \lambda)$ and its gradients. Intuitively, it is necessary to have a fully learned form of the GPS in order to update the policy network parameters to shift the suggested dosage combinations into regions with sufficient overlap, when the previous suggestions target regions with an estimated GPS below the reliability threshold $\overline{\varepsilon}$. To clarify this, we will change the wording from “analytical form” to “fully estimated form”. 2. We thank you for the opportunity to spell out the importance of modeling the _dependency_ of different drugs. We proceed in two ways: \ (i) **Numerical explanation.** We added a comparison against VCNet where drug-drug interactions are **not** explicitly modeled but rather taken as **independent**. The results are in **PDF, Table 2**. Evidently, our proposed framework _with dependency_ outperforms the ablation _without dependency_ by a clear margin. \ (ii) **Theoretical explanation.** Let us assume a framework _without_ dependency. 
Then, non-linear interaction effects between the drugs can *not* be captured and the dose-response function can *not* be estimated sufficiently but will be lost. In medical applications, this can have severe negative and even harmful consequences and should be avoided. For example, the VCNet baseline (as opposed to our DCNet) will underestimate synergistic effects of different drugs. Instead, a framework is needed where drug-drug interactions are carefully modeled. To the best of our knowledge, our DCNet is the first framework for causal inference in such setting. 3. Our proposed DCNet can successfully model drug-drug interactions. The reason is that the dosage information is fused through a tensor product basis, which allows different drugs to interact in complex, non-linear, and non-additive ways. \ We carefully checked whether certain drugs may dominate others but did not find such evidence. Instead, we performed a simulation where we simply set certain drug dosages to 0 for some patients and then examined the performance. Here, we find that our results are robust. Still, we find that some drugs that are highly influential in changing patient health, also explain larger parts of the treatment effect, in line with what is expected from medical practice. This demonstrates the effectiveness of our framework. \ Finally, we would like to emphasize again that our data, our model, and our training objective carefully consider _individualized_ information from patients. Importantly, we do neither learn aggregated nor average effects; instead, we learn individualized effects at the individual, patient-level, so that the estimated effects vary from patient to patient (we kindly refer to our second answer from our response to “Weaknesses” for technical details). If there is still something unclear, we are happy to clarify this in the discussion period. References: [1] Nie, Lizhen, Mao Ye, and Dan Nicolae. 
"VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments." International Conference on Learning Representations. 2020. --- Rebuttal Comment 1.1: Comment: Thank you authors for your response and additional experiment results. 1. By unsupervised I meant that the training objective does not differentiate between dosages, i.e., there is no 'p' in the MSE loss in L188. However, I understand that the weights reflect individual dosage effects. If by this the authors mean the capability of addressing drug-drug interactions, please indicate this in the paper. 2. It would be good if the observations made in what the authors 'carefully checked' in the response to question 3 could be shown in the paper. I would like to increase my overall rating to 5. The paper plus the new PDF has demonstrated that the proposed method can scale to high action dimensionality, but it still lacks the maths/experiments to live up to the claim of accounting for drug-drug interactions. --- Reply to Comment 1.1.1: Title: Additional evidence for modeling drug-drug interactions Comment: Thank you very much for your response and for updating your overall rating! We would like to respond to both of your points: 1. We would like to clarify that our training objective **does** differentiate between the dosage combinations. In the MSE loss in L188, the observed dosage combination per patient $t_i = \left(t_i^{(1)}, \ldots, t_i^{(p)}\right)$ consists of the observed dosages for each of the $p$ drugs (i.e., $\mathcal{L} = \frac{1}{n}\sum_{i=1}^n \left[ \hat{\mu}\left(\left(t_i^{(1)}, \ldots, t_i^{(p)}\right), \left(x_i^{(1)}, \ldots, x_i^{(d)}\right)\right) - y_i \right]^2$), where $d$ is the dimension of the patient features. Hence, we differentiate between dosages and incorporate all of the dosages in the training objective. We will improve the writing of our paper at this point to make it more comprehensible. 
Further, we do **not** want to decompose the loss into separate, independent terms per dosage (e.g., having a separate MSE loss for each of the dosages while neglecting the other dosages), because then we would **not** be able to capture drug-drug interactions anymore. Nevertheless, we performed an ablation where we instead used an independent loss for training (**Response PDF, Table 1&2**; see VCNet). Thereby, we show that, in comparison, our novel DCNet is clearly superior and demonstrate the importance of **explicitly incorporating the dependence of the dosages** in our training objective. 2. We will report the results in our revised paper. Nevertheless, we can also summarize how we ensured that our training is successful: - We plotted the true dose-response surfaces against the predicted ones for single patients (we kindly refer to **Appendix D**). Here, we observe that, by using our DCNet, both dosage dimensions are modeled appropriately and neither is dominated by the other. Furthermore, we observe that DCNet accounts well for drug-drug interactions. - We performed additional ablation studies using VCNet as a baseline (**Response PDF**, Table 1&2). VCNet is most similar to our architecture but can **not** explicitly model drug-drug interactions, which thus offers a powerful ablation. The reason is that VCNet models the effects of the dosages independently and is therefore focused on learning marginal effects, even if they are subtle (i.e., to put it in your words, VCNet is not prone to neglect non-dominant effects, even subtle ones). As we see in our results, our DCNet outperforms VCNet by a clear margin. This has two important implications: (1) DCNet is able to capture even subtle effects. (2) DCNet is able to model drug-drug interactions sufficiently, which is the reason for the large performance improvements. 
- We additionally evaluated the **SD-ISE**, that is, the __standard deviation__ of the integrated squared error whose mean is the MISE ($\mathrm{MISE} = \frac{1}{n}\sum_{i=1}^n\int_{[0, 1]^p}(\mu(t, x_i)-\hat{\mu}(t, x_i))^2 \, dt$), approximated by grid evaluation. The SD-ISE is listed below for areas with sufficient overlap ($\hat{f}(t, x)>\overline{\varepsilon}$):

| Model | MIMIC-IV | TCGA |
| :---------------- | :------: | ----: |
| MLP | 0.06889 | 0.74508 |
| VCNet | 0.74899 | 0.47407 |
| **DCNet** | **0.03196** | **0.00957** |

Intuitively, the SD-ISE is the variation of the prediction error of the dose-response estimator over the covariate-treatment space with sufficient overlap. Hence, a low value implies that **all possible dosage combinations** are estimated **properly** and that certain **less frequent or non-dominant** combinations are **not** neglected. We observe that our DCNet also performs **clearly best** w.r.t. the SD-ISE, underscoring its ability to successfully **account for the different drug-drug interactions**. We hope that this addresses your questions satisfactorily, especially by providing further clear evidence that we can successfully account for drug-drug interactions, and that you can recommend acceptance of our paper.
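For illustration, the grid-evaluation approximation behind the MISE and SD-ISE discussed above could look roughly like this; the one-dimensional dosage, the toy true and estimated response surfaces, and the grid resolution are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy true and estimated dose-response surfaces mu(t, x) (assumptions).
mu_true = lambda t, x: np.sin(3 * t) + 0.5 * x
mu_hat  = lambda t, x: np.sin(3 * t) + 0.5 * x + 0.1 * t * x   # small, patient-dependent bias

n, grid = 50, np.linspace(0.0, 1.0, 101)    # patients and a dosage grid on [0, 1]
X = rng.uniform(size=n)                      # toy one-dimensional patient features

# Per-patient integrated squared error, approximated on the dosage grid.
ise = np.array([np.mean((mu_true(grid, x) - mu_hat(grid, x)) ** 2) for x in X])

mise   = ise.mean()    # mean integrated squared error over patients
sd_ise = ise.std()     # its standard deviation: a low value means all dosage
                       # combinations are estimated uniformly well, none neglected
```

In the multi-dosage case the grid would simply range over $[0, 1]^p$ instead of $[0, 1]$, with the same mean-over-grid approximation of the integral.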
Summary: (I have more background in offline RL than causal inference, and my description below may include some RL terminologies as the two are closely related.) This submission studies the problem of off-policy learning of dosage recommendations in personalized medical decision making, using a causal inference framework. The proposed approach operates in three stages: the first two stages aim to learn the dose-response function (similar to the Q-function) and the logging policy (used to identify poor overlap), whereas the third stage combines the two to learn a reliable policy using constrained optimization (max performance while avoiding poor overlap), solved via a minimax gradient optimization of the Lagrangian form of the objective. Experiments are conducted using semi-synthetic datasets for ventilation (MIMIC-IV) and gene expression (TCGA). Strengths: - Writing is clear with very organized descriptions of each stage of the proposed approach. - Well-motivated problem by real-world medical applications where one needs to determine multiple continuous treatments with dependent effects. - Code is provided as part of the supplement (I did not check or run the code). Weaknesses: - The contribution of this work should be further clarified. As "several methods for off-policy learning aim at discrete treatments" (L27), it may be worth pointing out whether solutions exist for multiple *discrete* treatments (instead of continuous dosages) and explain why those approaches don't directly apply, and how well do we expect those methods to perform for dosage combinations if we discretize the dosages. Furthermore, specific architectures are used for estimating dose-response and GPS, and I'm not entirely convinced why simpler neural networks do not work. - Furthermore, there are many recent works on offline RL and OPE that make use of value functions and logging policies to achieve reliable policy learning (e.g., references below, including many for continuous actions). 
Some discussion about how this work is related and/or different from those approaches would make the paper more complete and resonate with a wider audience. - In the experiments, the baselines considered may be more accurately described as ablation studies. These are certainly important for understanding the sources of performance gains, but I think it is important to include more direct baselines. Since "there are no existing methods tailored to off-policy learning for dosage combinations under limited overlap" (L329), how well does a standard causal effect estimation method work here? References: [1] Off-Policy Deep Reinforcement Learning without Exploration. https://arxiv.org/abs/1812.02900 [2] Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction. https://arxiv.org/abs/1906.00949 [3] Conservative Q-Learning. https://arxiv.org/abs/2006.04779 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For medical applications, how common is it that practitioners use exact dosages vs dosage levels? What's the practicality of an exact dosage specification in real-world problems? More discussion about this is appreciated. - L131 Why is ε = 0 considered weak overlap instead of no overlap? - L183 mentions “polynomial spline basis functions”, a possibly unfamiliar concept to many readers. Can you provide the precise definition? - Sec 4.1 is very dense, I still don't quite understand it after several read throughs. It requires a lot of thinking to understand p-dimensions, may be easier to illustrate using a simple example with two dimensions. - L274 training stability: "as there is no guarantee for global convergence in the non-convex optimization problem", do you observe any training instability in the experiments and how much variation is there? When selecting the best run over K random restarts, what is K set to in the experiments? 
- L95, L247, Sec 4.3: a neural network policy is trained to approximate the individualized optimal dosage, and is claimed to have a computational advantage. However, no empirical results (runtime of two implementations) or analysis (big-O complexity) is shown for compute scalability. Also, is there any performance loss from using this neural network policy compared to an exact argmax? Minor: - Figures: fonts are too small in all figures - L95 "unfeasible" -> "infeasible" - L116 “Ω denotes the set of all possible policy classes” is confusing, I think Ω should be the set of all policies. - L246 I think "max" in the equation should be argmax - L289 "when filtering for T" -> what does this mean? --- **Updates after rebuttal and discussion:** Thank you authors for the response, it resolved most of my questions. I have raised my rating from 5 to 6. I would like to see the final version incorporating the updated limitation section and the details on dataset characteristics. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Sec 6 Conclusion separately discusses limitations and broader impacts. Though the proposed approach aims to avoid unreliable treatments, I think it is still necessary to acknowledge potential failure scenarios of the proposed approach and discuss what are the possible downstream impacts, especially since this work is focused on the high-stakes decision making of treatment dose recommendation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Response to “Weaknesses”

- **Importance of modeling _continuous_ treatments.** Many applications in medicine rely on continuous treatments where discretization would often miss the optimal solution for patients (e.g., chemotherapy).\
We performed additional experiments to demonstrate the gain of our continuous setting over discretized settings **(Response PDF, Table 4&5)**. Specifically, we demonstrate the shortcomings of discretization by evaluating an oracle performance of different grid discretizations, assuming perfect knowledge of the dose-response function and perfect treatment assignment. We thus interpret our “oracle” as an upper bound for all policy learning methods with discretization. We find that discretization itself leads to a higher regret than our proposed method. Thus, our results confirm the advantages of modeling continuous dosages. \
**Simple neural network baseline.** To show the necessity of our specifically developed network architectures, we performed **additional experiments (Response PDF, Table 1&2)**. We find that our proposed framework is consistently superior: Ablation (1) compares a simple MLP in Step 1 of our method, which is especially inferior for settings with high-dimensional confounders (as in the TCGA dataset). (Intuition is given in Appendix D.) Ablation (2) compares VCNet, the SOTA network for estimating dosage effects, which is designed for *independent* dosages and not for *dependent* dosage combinations. Different from VCNet, our DCNet is able to model non-linear interactions between the $p$ dosages. This results in clearly superior performance. Ablation (3) varies the estimation in Step 2 of our framework. Therein, we seek to model the GPS, which requires modeling a multidimensional conditional density rather than a simple conditional mean prediction. Hence, a simple neural network is *not* applicable. Instead, we compare our CNFs to mixture density networks (MDNs).
Evidently, MDNs lead to a very large standard deviation across runs and thus fail to achieve reliable policy learning. As such, our CNFs are preferred.
- **Difference to other literature streams.** Thank you for the suggestion to compare our setting against ORL for continuous actions. The main **differences** are: ORL assumes a Markov decision process and sequential decision-making. In contrast, we focus on non-sequential, off-policy learning from observational data. Also, unlike ORL or standard OPE, we leverage the causal structure of treatment assignment and the dose-response function to return **causal** estimands. This allows us not only to learn decisions but also to obtain causal estimates of potential outcomes. This is crucial for medical practice, where physicians would like to reason over different treatments rather than blindly following a fully automated system.
- **Additional naive baselines.** We would like to emphasize that the ablation studies (first learning a dose-response estimator like the MLP and then optimizing over it) are **also** standard causal baselines for off-policy learning, often referred to as the “direct method”. To assess our method even better, we added two further causal baselines to our results (**Response PDF, Table 1&2**): (1) VCNet as part of the direct method; (2) DR Policy Forest as a SOTA doubly-robust method for discretized dosage combinations (5x5), using logistic regression for propensity score modeling and a weighted lasso regression for outcome modeling. Note that both baselines ignore specific characteristics of our setting (e.g., the dependency among dosages). Altogether, we observe that our framework consistently performs best.

**Action:** We will include the additional experimental results above in our final paper. We will also discuss the proposed literature streams more extensively.
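The oracle-discretization argument from this response can be illustrated with a toy example (the dose-response below is a purely hypothetical stand-in, not the paper's): even with perfect knowledge of the dose-response function, restricting dosages to a grid leaves a strictly positive regret relative to the continuous optimum.

```python
import numpy as np

# Hypothetical known dose-response for one patient; the continuous optimum
# sits at t = (0.37, 0.73), where mu attains its maximum value of 0.
mu = lambda t1, t2: -((t1 - 0.37) ** 2 + (t2 - 0.73) ** 2)

def oracle_grid_value(k):
    """Best achievable value when each dosage is restricted to a k-point grid."""
    g = np.linspace(0, 1, k)
    t1, t2 = np.meshgrid(g, g)
    return float(mu(t1, t2).max())

v_cont = mu(0.37, 0.73)                     # continuous optimum: 0.0
regret_5 = v_cont - oracle_grid_value(5)    # 5 x 5 grid
regret_10 = v_cont - oracle_grid_value(10)  # 10 x 10 grid
print(regret_5, regret_10)  # regret shrinks with a finer grid but stays > 0
```

Since the oracle already knows the dose-response perfectly, this regret is attributable to discretization alone; any learned discretized policy can only do worse.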
## Answer to “Questions”

- Continuous dosages are common, for instance, in chemotherapy, where current guidelines are based on different multiplicative formulas using continuous patient features. **Action:** We will expand our motivation to highlight the need for modeling dosages as a continuous variable.
- We define overlap as $f(t, x) > \varepsilon$, meaning that $\varepsilon=0$ still implies a non-zero density (“weak overlap”).
- We will add a more detailed background about tensor product splines to our revised paper.
- We will add an intuitive example to our revised paper.
- We set $K=5$ in our experiments. The variation within the $K$ runs is displayed in the column “std”. We also included additional results for different restarts of our method with different seeds and train/val/test splits in **PDF, Table 4&5**. Overall, our framework has the lowest variation and is thus highly **stable**.
- The complexity of a direct optimization is non-trivial: it depends on the cost of a forward pass through DCNet and the CNFs and on the number of patients to evaluate. With solver cost $\sigma$, it is at least $O(n \times k \times \sigma)$. This becomes costly for large-scale datasets. In contrast, our policy network is trained only once. Hence, upon deployment, the cost per patient is constant (i.e., $O(\sigma)$ when measured in terms of $\sigma$).
- The argmax cannot be determined analytically, because it depends on the learned nuisance parameters. However, when comparing our predicted policy against directly optimized dosing, we observed barely any differences. One reason could be that the policy network introduces additional regularization for observations in low-overlap regions.

## Response to “Minor”

Thank you! We will add the suggested changes to our paper.
“Filtering for T (L289)”: MIMIC-IV is a large dataset of patients in intensive care units for different causes, where we filter for patients who are ventilated.

## Response to “Limitations”

Thank you! We will expand our discussion around further potential failure scenarios and possible downstream impacts when applying our method in clinical practice.

---

Rebuttal Comment 1.1:
Comment: Thank you authors for the response, it has resolved most of my questions. I'm happy to raise my overall rating from 5 to 6.
Feedback on the response:
- Thank you for clarifying my "weak overlap" question, sorry I missed that.
- Please include the compute complexity analysis (e.g., a short paragraph in the appendix) to support the claim that the policy network is more efficient at inference.

I have some further clarifications:
- **PDF Table 2**: it seems that "MLP+reliable" led to competitive performance on MIMIC but not TCGA.
  - What's different about the two datasets? The response mentions "high-dimensional confounders" but I'm still a little confused. Can you provide a small table summarizing the characteristics and dimensions? Currently it is hard to locate this information; I see MIMIC dosages are respiratory rate and tidal volume, but I can't seem to find what the dosages are for TCGA.
  - This indicates perhaps the proposed approach is not a one-size-fits-all solution. When might one consider using the simpler MLP as opposed to DCNet?
- Re **Response to Limitations**: I suggest the authors elaborate on what specific limitations they plan to discuss, e.g., how the proposed approach might fail, and when it might not be the best approach.
- One of the questions by Reviewer 5Gdr is "what is the harm of assuming independent dosage dimensions". I think this is very problem dependent. For example, [Tang et al.
2022](https://openreview.net/forum?id=Jd70afzIvJ4) found that for certain problems it is beneficial to assume independent treatments, as it allows for extrapolation into "underexplored regions" aka regions with "limited overlap". This includes not only when the treatments/dosages are truly independent, but also when there is positive interaction (as opposed to negative interaction, e.g. adverse drug-drug interaction). More discussions about this and what assumptions or domain knowledge is available for the applications of interest should help better clarify the paper contributions. Tang et al. "Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare". NeurIPS 2022. I'm keeping my confidence score at 3, given my somewhat incomplete expertise in causal inference (cf. other reviewers), but personally I believe this paper does bring some good contributions.

---

Reply to Comment 1.1.1:
Title: 1/2: Discussing Limitations
Comment: Thank you very much for your response and for updating your overall rating! As you suggested, we are going to include the complexity analysis of our previous response in the Appendix of our accepted paper to further support the efficiency of our method at inference time compared to direct optimization. We are happy to respond to your further questions:
- **PDF Table 2:**
  - Please find the table summarizing the characteristics and dimensions of our standard setting below. Please note that we are in a semi-synthetic data setting (see also **Appendix B**) so that we can vary the dimension of modeled dosages of $T$ in our ablation studies (as in **Main Paper, Fig. 3**; **Appendix, Fig. 2**; **Response PDF, Fig. 2**). Here, we can give meaning to the additional dosages by additional parameters to control for mechanical ventilation, and additional drugs for chemotherapy, respectively.

| Variable | MIMIC: Dim. | MIMIC: Meaning | TCGA: Dim. | TCGA: Meaning |
| :------- | ----------: | :------------- | ---------: | :------------ |
| $X$ | 33 | patient covariates (age, sex, respiratory and cardiac measurements) | 4000 | gene expression measurements |
| $T$ | 2 | respiratory rate, tidal volume | 2 | chemotherapy drugs (e.g., doxorubicin, cyclophosphamide) |
| $Y$ | 1 | chance of patient survival | 1 | chance of patient survival |

  - It has been shown that using a naïve neural network with $T$ and $X$ concatenated as inputs (called “MLP” in our experiments) can lead to diminishing effect estimates of $T$ when $X$ is high-dimensional (see reference Shalit et al. 2017). Instead, our DCNet enforces the expressiveness of the effects of the dosage combinations through its custom prediction head and is therefore highly powerful in high-dimensional settings (as in TCGA with 4000-dimensional features $X$). Intuition is also provided in **Appendix D**. In lower dimensions (as in MIMIC with 33-dimensional features), this benefit is simply smaller, and the MLP shows competitive performance wrt. the selected policy. Nevertheless, when evaluating with multiple runs over different train/val/test splits, DCNet also shows better performance than the MLP on MIMIC, especially wrt. the variation (**Response PDF, Table 3&4**). Hence, our DCNet should also be preferred in lower-dimensional settings. In contrast, a possible setup where the MLP could outperform DCNet is in settings with extremely unsmooth, stepwise dose-response surfaces. While this setup is highly unlikely for dosing problems in medicine, we could imagine this might be the case when applying our method to different modalities, e.g., personalized advertisement in marketing.
- **Response to limitations:**
  - Thank you for the additional input regarding the potential harms of assuming independence between the dosages.
While we show the advantages of modeling dependencies in our setting empirically (see our response **Additional evidence for modeling drug-drug interactions** to Reviewer 5Gdr), we would also like to state the importance from a domain-knowledge perspective. As also implied in Tang et al. 2022, while modeling independence can allow for better extrapolation in regions with limited overlap in the case of positive interactions, this does **not hold for non-monotonic or negative (adverse) interactions**. Especially in chemotherapy, negative drug-drug interactions occur frequently (e.g., a combination of doxorubicin and cyclophosphamide can increase the risk of heart problems (Ismail et al. 2020)). Hence, even if it might result in limited extrapolation, in line with the Hippocratic Oath ("First, do no harm"), **modeling potential interactions is fundamental for reliable decision-making in medicine**. To incorporate your feedback, we present a first draft of our revised **Limitations** section in the **following comment**:

References: \
Shalit, Uri, Fredrik D. Johansson, and David Sontag. "Estimating individual treatment effect: generalization bounds and algorithms." International Conference on Machine Learning. PMLR, 2017. \
Ismail, Mohammad, et al. "Prevalence and significance of potential drug-drug interactions among cancer patients receiving chemotherapy." BMC Cancer 20 (2020): 1-9. \
Tang et al. "Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare." NeurIPS 2022.
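The interaction argument above can be made concrete with a toy least-squares sketch (purely illustrative; all coefficients are hypothetical): an outcome containing a drug-drug interaction term $t_1 t_2$ cannot be fit by a model that treats the dosages independently (additively), while a joint model that includes the interaction fits it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = rng.uniform(0, 1, size=(n, 2))  # two dosages per (hypothetical) patient
# Outcome with a negative drug-drug interaction term t1 * t2.
y = 0.5 * t[:, 0] + 0.3 * t[:, 1] - 1.0 * t[:, 0] * t[:, 1]

def fit_mse(features, y):
    """Least-squares fit; returns the mean squared residual."""
    beta, *_ = np.linalg.lstsq(features, y, rcond=None)
    return float(np.mean((y - features @ beta) ** 2))

ones = np.ones((n, 1))
# "Independent" model: additive in the dosages, no interaction feature.
mse_indep = fit_mse(np.hstack([ones, t]), y)
# Joint model: additionally includes the interaction feature t1 * t2.
mse_joint = fit_mse(np.hstack([ones, t, (t[:, 0] * t[:, 1])[:, None]]), y)
print(mse_indep, mse_joint)  # the additive model has irreducible error
```

The residual of the additive fit is exactly the interaction component it cannot represent, which is what drives misestimated (and potentially harmful) dosage recommendations under adverse interactions.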
Summary: The paper deals with an important and interesting problem in personalized decision making. In particular, the paper considers the setting where multiple continuous treatments are available and models the joint effect of dosage combinations. A novel off-policy learning algorithm is proposed to solve the problem using a well-constructed network architecture. Specifically, the paper also considers the case where the (strong) positivity assumption on the propensity score does not hold. To solve this problem, the paper modifies the proposed algorithm to achieve a reliable estimation of the policy value by avoiding regions with limited overlap. Finally, several experiments and real data analyses are provided to demonstrate the performance of the proposed method.
Strengths: 1. The paper has a good presentation that illustrates the target problem, the underlying causal framework, the proposed methods, and their advantages. The authors have skillfully laid out the paper, providing a clear and concise overview of the problem at hand. The methods proposed by the authors are well-described and thoroughly explained, ensuring that readers can easily grasp the technical aspects of the approach. Furthermore, the paper showcases the advantages of the proposed methods, demonstrating how they address the limitations of existing approaches and offer novel contributions to the field. The well-designed format of presenting the problem, framework, methods, and advantages in a cohesive and coherent manner enhances the overall quality of the paper. 2. The paper provides real data analysis in multiple dosage settings. This empirical analysis adds substantial value to the research as it validates the proposed methods and demonstrates their effectiveness in practical scenarios. Moreover, the multiple dosage settings explored in the analysis indicate the robustness of the proposed methods, making the findings applicable to a wider range of scenarios.
Weaknesses: 1.
The paper addresses a specific issue related to limited overlap, which can lead to unreliable estimations of policy value. However, it is worth noting that similar and related problems have been discussed in the literature. To the best of our knowledge, the following papers have explored these problems: (1). Ma et al. (2023) Learning Optimal Group-structured Individualized Treatment Rules with Many Treatments. Journal of Machine Learning Research. (2). Ma et al. (2022) Learning Individualized Treatment Rules with Many Treatments: A Supervised Clustering Approach Using Adaptive Fusion. NeurIPS. Although the above papers primarily focus on settings involving discrete treatments, they have also carefully analyzed the limited overlap problem. Specifically, they examine cases where the number of treatments approaches $+\infty$, which can be treated as one of the special cases of continuous treatment, and propose methods for identifying latent treatment grouping structures (combine treatments or combine levels of doses) to address issues of unbalanced treatment assignment and limited overlap. It would be beneficial to include further discussion and comparisons with these methods in the paper, as they may offer valuable insights and contribute to the overall understanding of the problem. 2. The paper mentions an important term, $\bar{\epsilon}$, which acts as a pre-specified threshold controlling the minimum overlap required for the policy estimation when shrinking the treatment-covariate space. It is interesting and important to investigate and provide guidelines for selecting an appropriate value for $\bar{\epsilon}$ in the experimental analysis. This could involve sensitivity analysis or exploring different threshold values to evaluate their impact on the estimated policies and their performance metrics. 
Assessing the robustness of the proposed method to the choice of $\bar{\epsilon}$ would provide insights into its practical applicability and generalizability across various scenarios.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. In contrast to the approaches mentioned in the provided references, which focus on combining similar treatments and estimating treatment structures, this paper tackles the limited overlap problem in the context of continuous dosages by constraining and shrinking the search space. While the methods may differ in their specific techniques, there could be interesting connections between the two approaches. Exploring and discussing these potential connections could provide valuable insights into how different strategies can address the limited overlap problem across different treatment settings. Such a discussion would contribute to a broader understanding of the problem. In particular, are there any connections between these two methods? That would be interesting to discuss. 2. When reducing the search space to address the limited overlap problem, there may be concerns about whether the estimated policy leads to sub-optimal values at the population level. Understanding the potential trade-offs between reducing the search space and achieving optimal population-level outcomes would be valuable for both theoretical considerations and practical applications. That is, if the search space is reduced, will the estimated policy of the proposed method lead to a sub-optimal value at the population level?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed, constructive, and positive feedback!

### Response to “Weaknesses”

1. **Importance of continuous vs. discretized treatments.** Thank you for introducing this very interesting and related additional literature to us!\
We understand the importance of benchmarking our framework for continuous treatments against methods that instead work on discretized settings. In principle, translating our continuous setting into a discretized setting would allow us to use a variety of established methods (such as the works that you cited above). We thus added experiments where we discretize the setting and then evaluate the solution quality (see **Response PDF, Table 4&5**). We find that the discretized approaches are consistently inferior to our approach, which directly operates on continuous values. In particular, we also benchmark an “oracle” discretized baseline (where we assume perfect knowledge of the dose-response function and perfect individual treatment assignment). This serves as an upper bound for all policy learning methods with discretization. Even then, discretization leads to a higher regret than our proposed method, which confirms the advantages of leveraging continuous dosage information. \
**Action:** We will cite the papers. We will add further discussions and comparisons between our method and the proposed papers on learning individualized treatment rules with many treatments. Specifically, we will discuss the connections and differences of the methods for tackling the challenges of limited overlap and biased treatment assignment, as well as the role of discrete and continuous treatments. We will also add the numerical results from the discretization baselines to our revised paper.
2.
**Guidelines to select the reliability threshold.** We thank the reviewer for raising the important point of how to select the reliability threshold $\overline{\varepsilon}$ and for giving us the opportunity to elaborate on the approach we used in our work. We offer a detailed description in **Appendix C** and summarize the key points in the following. For our framework, we developed a heuristic guideline to choose $\overline{\varepsilon}$ based on the $x\%$-quantile of the estimated GPS on the training dataset. As a result, we aim to learn optimal dosages that have at least the same estimated reliability as $x\%$ of the observed data. In our experiments, we set $x=5$. Furthermore, we performed a series of sensitivity analyses where we explore different threshold values and then evaluate the impact of $\overline{\varepsilon}$ on the regret of the estimated policies. We show the results in **Response PDF, Figure (1)**. We find that, depending on the dataset, different quantiles for selecting $\overline{\varepsilon}$ can lead to small performance differences. However, we find that lower values tend to lead to less variability. Also, we find that, even when there is higher variability within the $k$ runs, a run with low regret is selected for every setting. This demonstrates that, by using our proposed guideline, our method yields **robust results across different values of the reliability threshold**.

### Responses to “Questions”

1. We fully agree that discussing potential connections between the provided references and our method will contribute to our paper in terms of providing an even more extensive background on reliable off-policy learning under complex treatments with limited overlap. \
**Action:** We will cite the papers and add a discussion to our revised paper. We will also benchmark the original methods and our “oracle” approach from above to show the importance of dealing with continuous treatments in medicine.
2.
We thank you for the interesting question and the opportunity to discuss the effects of reducing the search space on the population-level outcomes. Importantly, even though our framework reduces the search space, it does so in a way that we still optimize against the optimal solution.
- **Theoretical explanation.** We would like to emphasize that our method reduces the search space of the treatments during policy learning on an individualized level (conditioned on the patient covariates), but not necessarily on the population level. This ensures that the treatment recommendation is still flexible across different patients and thus enables optimal individualized policy learning. Formally, recall that the empirical policy value $\hat{V}(\pi) = \frac{1}{n}\sum_{i=1}^{n} \hat{\mu} \left( \pi(x_i), x_i \right)$ is a metric for the population-level performance of a policy. It is then the same as the mean of the individual predicted outcomes. Hence, optimizing over the individual level and the population level leads to the same optimal policy. Our framework, which constrains the search space, therefore still targets the optimal policy at the population level.
- **Empirical confirmation.** We also demonstrate the above empirically in our experiments. Here, we measure the regret, which is the difference between the policy value of the optimal policy and the true policy value of our learned policy. We observe that our framework consistently outperforms the policies with unrestricted search spaces (called “naive” in our paper) wrt. the regret. As a result, this provides empirical evidence of the strong performance of our framework at the population level.

---

Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing the clarifications for my review. It would be good if the paper could add the comparison between discrete and continuous treatment combinations. I have raised my score.
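The quantile heuristic for $\overline{\varepsilon}$ described earlier in this rebuttal can be sketched in a few lines (the GPS values below are hypothetical placeholders for the estimates from Step 2):

```python
import numpy as np

def reliability_threshold(gps_train, x_pct=5.0):
    """Heuristic from the rebuttal: set eps-bar to the x%-quantile of the
    estimated GPS on the training data, so that learned dosages are at
    least as reliable as x% of the observed ones."""
    return float(np.quantile(gps_train, x_pct / 100.0))

# Hypothetical estimated GPS values on the training set.
gps_train = np.linspace(0.01, 1.0, 100)
eps_bar = reliability_threshold(gps_train, x_pct=5.0)
print(eps_bar)  # the 5%-quantile of the stand-in GPS values
```

Varying `x_pct` reproduces the kind of sensitivity analysis described above: larger quantiles shrink the admissible treatment-covariate region more aggressively.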
--- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for increasing your overall rating! We will include all suggested points in the paper. Should you have any further questions or comments that should be addressed before a final decision, feel free to further post them here.
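The population-level argument from this rebuttal thread (the empirical policy value is the mean of per-patient predicted outcomes, so per-patient optimization also maximizes the population-level value) can be checked on a toy example; the estimator $\hat{\mu}$ and the one-dimensional dosage are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0, 1, n)                  # one covariate per patient
grid = np.linspace(0, 1, 201)             # candidate dosages
mu_hat = lambda t, x: -(t - x ** 2) ** 2  # hypothetical dose-response estimator

def policy_value(t_assigned):
    """Empirical policy value: mean predicted outcome over all patients."""
    return float(np.mean(mu_hat(t_assigned, x)))

# Individualized policy: per-patient argmax of mu_hat over the dosage grid.
t_star = grid[np.argmax(mu_hat(grid[None, :], x[:, None]), axis=1)]

# No other grid policy achieves a higher population-level value, since the
# mean decomposes into independent per-patient terms.
for _ in range(50):
    t_other = rng.choice(grid, size=n)
    assert policy_value(t_other) <= policy_value(t_star) + 1e-12
```

The same decomposition holds when the per-patient search space is constrained (e.g., by the overlap condition): the constrained per-patient argmax still maximizes the constrained population-level value.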
Summary: The paper introduces a new methodology to derive a policy for assigning individualized dosage combinations, relying on retrospectively collected observational data. The method relies on off-policy learning and tries to account for the joint effect of multiple dependent dosages. A neural network is used to estimate individualized dose-response functions that capture this joint effect. The policy is then trained while avoiding regions with limited overlap to ensure reliable policy evaluation. The developed method is evaluated using semi-synthetic data based on MIMIC-IV and TCGA. The robustness of the methodology is also assessed by testing different levels of overlap for patients under different treatment regimens. The policy derived using the proposed method achieves low regret and low variance in all the experiments performed.
Strengths: The paper combines the ideas of several recently published papers to derive a global method accounting for joint effects of dosage combinations.
- A tensor product basis is used to estimate interaction effects and is integrated into a model similar to [VCNet](https://arxiv.org/abs/2103.07861);
- The generalized propensity score is estimated using conditional normalizing flows, increasing reliability and avoiding discretization of the problem;
- A NN is used for policy learning, relying on the estimated dose-response function and GPS.
A nice protocol is then performed, assessing the performance of the model in the presence of dose combinations with a joint effect.
Weaknesses: In the optimization problem of equation (6) (line 237), the objective function and constraints can be non-convex. When applying the method of Lagrange multipliers, there is no reason for the resulting function to be concave in the multipliers $\lambda$. The method you rely on, [gradient descent ascent](https://arxiv.org/pdf/1906.00331.pdf), explicitly requires concavity in $\lambda$.
This can lead to suboptimality, and all the nice properties of the method are lost. Given this, we can expect the methodology to induce a lot of variability. Even if part of it is captured by the proposed random restarts, we would still expect it to be highly unstable. The results don't reflect that: the standard deviation of the derived policy is a surprising 0, whatever the task. This might be due to the overly simple ground-truth dose-response function and GPS. Elements to assess this hypothesis are provided in the Questions. It might also be due to seed tuning. It would be interesting to repeat the split into train, validation, and test sets and take the variation in the best underlying regret into account. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As mentioned above, the results are displayed for only one split and are therefore potentially prone to seed tuning; it would be interesting to repeat the split into train, validation, and test sets and take this variation into account. Given that you have access to the ground truth for the dose-response function and GPS, could you also provide the estimation error for them in your experiments? This could provide insights into why the observed standard deviation is 0 and motivate a more realistic dose-response function. In the TCGA and MIMIC-IV datasets, the outcome Y is taken as the patient's survival. It's very uncommon to get a dataset with full observation of Y; the outcome is often right-censored. Did you think of a way to take this into account? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: T is the assigned p-dimensional dosage combination. In practice it is most often a sequence of treatments, with varying dosages and numbers of cycles. 
The setting in the paper is an advance over even simpler settings but is far from reflecting the data accessible in clinical routine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Answer: Thank you for your detailed and constructive feedback. We improved our paper in the following ways: ## Responses to “Weaknesses” - Thank you for giving us the opportunity to clarify the optimization part of our paper. We agree that our final objective in Eq. (7) is non-convex in the neural network parameters $\theta$. However, for fixed $\theta$, Eq. (7) is a linear function of the Lagrange multiplier $\lambda$ and thus **concave**. We therefore fulfill the assumption of the mentioned paper on gradient descent-ascent (Lin et al. 2021). Moreover, **we follow established literature** (e.g., Bui et al. 2022) on adversarial learning that uses gradient descent-ascent for non-convex concave objectives. The fact that Eq. (7) is non-convex in $\theta$ means that we may converge to local minima, yet this should be attributed not to the optimization scheme but to the use of neural networks. In our paper, we thus address this by re-running the optimization with different initializations of the learnable parameters and find that the performance remains stable (next bullet item). - We thank you for the opportunity to explain why the performance of our framework across the $k$ different re-runs is stable and of low variability. To improve our paper, we have thus added new results: (1) We show the performance across different splits and find that there is low variability (**PDF, Table 3**). (2) We added an additional semi-synthetic experiment with a more complex setting. Therein, we again find that the performance is stable and of low variability (**PDF, Table 4**). Thus, our performance is **not** the result of seed tuning or overly simplistic nuisance functions. Rather, we exploit the properties of our medical setting (see Appendix B), as we have a complex multimodal dose-response function, where the dosage assignment is already informed to a certain degree by domain knowledge but must then be further optimized. 
This means, with increasing dosage bias $\alpha$, the probability mass around the optimal dosage combinations increases, while, in areas further away from the optimum, overlap becomes more limited. This is a reasonable assumption in medicine, as clinical practice is based on expert knowledge for deploying drug-dosage prescriptions, which minimizes the risk of observing harmful treatment assignments frequently. **Action:** We will add to our paper additional numerical results demonstrating that our performance is stable. As shown above, the results confirm the effectiveness of our framework. ## Responses to “Questions” We thank the reviewer for the suggestions and added the following experiments. - **Performance across different splits (PDF, Table 3&4)**: We find that our novel method consistently outperforms the baselines, yielding a consistently lower regret and lower variability. Note that this holds *between* the different seeds as well as *within* each seed across the k=5 restarts. As such, we conclude that the performance of our framework is robust. - **Results for nuisance estimators (PDF, Table 1)**: We further report the performance of our nuisance estimators. Here, for the dose-response estimators we show: (1) the performance on the test dataset measured by the mean integrated squared error (MISE) on the whole covariate-treatment space; and (2) the MISE only in regions with sufficient overlap. Our results demonstrate that our DCNet is **highly effective** in learning the dose-response function. Compared to the baselines, we observe that our DCNet does not try to learn the whole complex dose-response surface (which includes low-trust regions) but focuses strategically on the areas with sufficient overlap (as desired). This is key for our framework as it ensures accurate targeting of the optimal policy. 
We provide further intuition in Appendix D.\ We also report results from a more complex setting **(PDF, Table 4)** to show that our method generalizes to other complex scenarios. For this, we model the GPS as a multimodal conditional density and ensure that the optimal dosage combination is not necessarily located in regions around the maximal density. Importantly, our framework outperforms the baselines by a clear margin (i.e., lower regret and lower variability). This confirms the robustness of our framework. - **Extensions of our experimental setup.** Thank you for this important question. We designed our analysis around patient survival as it is often used for benchmarking in medical practice. Needless to say, our method can be adapted to other metrics of interest and even to right-censored data. Here, an interesting direction is to extend our framework with, for example, the approach in Curth et al. (2021). Importantly, our framework consists of three steps (dose-response estimation, GPS estimation, constrained optimization) which are not restricted to our specific setting of observed outcomes but are general. In particular, our framework can thus be extended to survival analysis with right-censored data in a straightforward manner. **Action:** We will add the above experiments to our revised paper. We will also add a discussion on how to handle right-censored data. ## Response to Limitations We agree that medical data often is even more complex than in our scenario, and we will add this to our discussion of limitations. Nevertheless, we would like to emphasize that our paper makes important contributions over existing literature in order to advance personalized treatment design in real-world clinical applications. ## Remark to Contribution Since the contribution was evaluated as “1 Poor”, we would like to emphasize the novelty of our work and refer to our General Author Rebuttal. Bui, Tuan Anh, et al. 
"A unified Wasserstein distributional robustness framework for adversarial training." arXiv (2022)\ Curth, Alicia, et al. "SurvITE: Learning heterogeneous treatment effects from time-to-event data." NeurIPS (2021) --- Rebuttal Comment 1.1: Comment: Thanks a lot for the answers and the additional experiments you performed. My main concern was observing a standard deviation of 0 for the derived policy, whatever the task. Table 3 in the new PDF seems to address this. It is quite unusual, however, to have a confidence interval for the std and to have the std reported twice (in the mean column and in the std column). Maybe it would be clearer to just compute the mean and the std over the 5 runs and the 5 restarts (over the 25 results). Given the new experiments, extending to a more complex setup and better assessing the robustness of the global framework, I updated my overall rating to 5. --- Reply to Comment 1.1.1: Title: Interpretation of Response PDF, Table 3&4 Comment: Thank you for your response and for raising your overall rating! We are happy we could address your concerns sufficiently. We agree that **Response PDF, Table 3&4** may seem unusual at first sight. Here, we want to elaborate on the advantages of our presentation and give a short intuition. 1. The column “**selected**” displays the final performance of our method when using the finally recommended policy (chosen by our validation loss in **Main Paper, Eq. 9**). Hence, its $mean \pm sd$ can be considered the expected regret with Monte-Carlo-CV confidence intervals (including retraining the nuisance functions) and should be primarily used for evaluating the performance and for comparison with different policy learning methods. We observe that our method significantly outperforms the baselines. 2. The column “**mean**” represents the means within the $k$ restarts of the policy learning (step 3 in our method). 
As such, its values can indicate how a randomly selected policy out of the $k$ restarts would perform compared to the “selected” one (e.g., for the “naive” baselines, the naive MSE-loss selection often picks suboptimal policies, whereas our method shows robust behavior). 3. The column “**std**” shows the standard deviation within the $k$ restarts of the policy learning (step 3 in our method). As such, its values indicate how consistently similar-performing policies are learned; it thus reflects the robustness within the $k$ runs. As a consequence, lower values imply that (i) probably fewer restarts $k$ are needed to find the targeted optimum, and (ii) fewer local optima are learned, which reduces the risk of selecting a suboptimal policy. Hence, in combination with summarizing multiple re-runs over different splits, these three metrics give valuable insights into the robustness of our method. In contrast, if we were to just calculate the mean of the different selected runs (or the mean of the means) and display the standard deviation over all 25 restarts, this would lead to misleadingly wide confidence intervals, especially in very complex data settings where there might be a high standard deviation within the $k$ policy restarts but still low variation between the finally selected policies of the split re-runs. Hence, we suggest using the “**selected**” column ($mean \pm sd$) for comparing the performance to all baseline methods (e.g., including the ones using discretization); and the other two columns (“**mean**”, “**std**”) to additionally show the internal robustness of our method compared to ablation baselines. We hope we could address all your concerns adequately and that you can recommend acceptance of our paper. If you still have any open questions or further suggestions on how we could improve our paper, we would be happy to address all of them during the remaining discussion period.
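The gradient descent-ascent scheme debated in this thread (descent on the non-convex network parameters, projected ascent on the multiplier in which the Lagrangian is linear and hence concave, plus random restarts) can be sketched in a few lines. Everything below is a toy stand-in, not the paper's actual objective:

```python
import numpy as np

# Toy non-convex-concave Lagrangian: L(theta, lam) = f(theta) + lam * g(theta)
# with constraint g(theta) <= 0 and lam >= 0. For fixed theta, L is linear
# (hence concave) in lam, so gradient descent-ascent applies.

def f(theta):                       # non-convex "training" loss (toy)
    return np.sin(3 * theta) + theta ** 2

def g(theta):                       # toy constraint, violated when theta < 0.5
    return 0.5 - theta

def grad_theta(theta, lam):         # d/dtheta [f(theta) + lam * g(theta)]
    return 3 * np.cos(3 * theta) + 2 * theta - lam

def gda(theta0, lr_theta=0.05, lr_lam=0.5, steps=2000):
    theta, lam = theta0, 0.0
    for _ in range(steps):
        theta -= lr_theta * grad_theta(theta, lam)   # descent on theta
        lam = max(0.0, lam + lr_lam * g(theta))      # projected ascent on lam
    return theta, lam

# Random restarts (cf. the k restarts in the rebuttal) guard against the
# non-convexity in theta; the best feasible run is kept.
candidates = [gda(t0)[0] for t0 in np.linspace(-2.0, 2.0, 5)]
best_theta = min((t for t in candidates if g(t) <= 0.05), key=f, default=None)
```

The ascent step stays valid precisely because, for fixed theta, the objective is linear in the multiplier; the restarts only address the non-convexity in theta.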
Rebuttal 1: Rebuttal: Thank you very much for the constructive and positive evaluation of our paper and your helpful comments! We addressed all of them in the comments below and uploaded additional results as a PDF file. Our main improvements are the following: - We provide further **extensive experiments**: \ (1) We report the performance of our nuisance estimators and show our framework is superior. \ (2) We report results for multiple train/val/test splits for (a) our existing setting and (b) an even more complex, multimodal setting. Here, we demonstrate that the performance of our framework is robust across all settings. \ (3) We include additional causal baselines for (a) dose-response estimation, (b) GPS estimation, and (c) policy learning. We observe that all components of our framework lead to performance gains. \ (4) We show that our framework for continuous treatments outperforms baselines based on discretized treatments. \ (5) We compare against additional standard causal baselines and find that our framework still performs best. \ (6) We demonstrate the scalability of our framework to higher dimensions. - We discuss the **connections of our method to further related work** (e.g., offline reinforcement learning, causal inference for discrete treatments) and clarify the necessity and advantages of using our method for reliable off-policy learning for dosage combinations. - We elaborate on the **role of the reliability threshold** $\overline\varepsilon$ for reliable policy learning in our method. We further propose a guideline for selecting $\overline\varepsilon$. We also show that our framework is robust to different choices of $\overline\varepsilon$. In summary, we would like to emphasize our contribution again: To the best of our knowledge, we are the first to tackle the problem of learning optimal individualized dosage combinations from a causal perspective. 
For this, we adjust for the naturally occurring but non-trivial problems of drug-drug interactions and limited overlap. Crucially, our study setting is highly relevant in medical practice and presents the foundation for personalized decision-making in cancer therapy and critical care. Moreover, we implemented additional baselines and show that our method clearly outperforms existing methods. This shows that our problem cannot be solved with off-the-shelf methods; rather, it requires a tailored, non-trivial framework such as ours. As such, we are confident that we provide a valuable contribution toward learning reliable policies for dosage combinations in medicine. We will incorporate all changes (labeled with **Action**) into the camera-ready version of our paper. Given these improvements, we are confident that our paper will be a valuable contribution to the machine learning for healthcare literature and a good fit for NeurIPS 2023. Pdf: /pdf/a5d8856d56f857eaa369fef1da44da3b87632fba.pdf
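The three-step structure described in the rebuttal (dose-response estimation, GPS estimation, constrained policy optimization restricted to the overlap region) can be illustrated with deliberately crude placeholder estimators. The data-generating process and every estimator below are toy assumptions, not the paper's implementation:

```python
import numpy as np

# Schematic of the three-step framework:
#   (1) estimate a dose-response surface mu(x, t),
#   (2) estimate a generalized propensity score (GPS) p(t | x),
#   (3) optimize the dosage only where the GPS clears a reliability threshold.

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)                                     # covariate
t = np.clip(0.5 + 0.3 * x + 0.1 * rng.normal(size=n), 0, 1)   # observed dose
y = -(t - (0.5 + 0.25 * x)) ** 2 + 0.05 * rng.normal(size=n)  # outcome

grid = np.linspace(0, 1, 101)

def policy(x0, eps=0.05, h=0.1):
    near = np.abs(x - x0) < h
    mu, gps = [], []
    for tg in grid:
        w = near & (np.abs(t - tg) < h)
        mu.append(y[w].mean() if w.any() else -np.inf)  # (1) local average
        gps.append(w.mean() / (2 * h))                  # (2) crude density
    mu, gps = np.array(mu), np.array(gps)
    ok = gps >= eps                                     # (3) overlap region
    return grid[ok][np.argmax(mu[ok])]

# The true optimal dosage at x0 is 0.5 + 0.25 * x0; the placeholder pipeline
# should land near it wherever overlap around the optimum is sufficient.
```

The reliability threshold `eps` plays the role of the rebuttal's $\overline\varepsilon$: dosages whose estimated GPS falls below it are simply excluded from the optimization, which is what keeps the learned policy out of low-overlap regions.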
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DP-SGD Without Clipping: The Lipschitz Neural Network Way
Reject
Summary: - The paper investigates the use of Lipschitz-constrained networks to replace clipping and limit gradient sensitivity in DP-SGD. - Lipschitz-constrained networks are utilized as an alternative to clipping in order to address clipping's detrimental impact on the convergence and performance of DP-SGD. Strengths: - The idea of removing clipping as an alternative to clipping itself is promising, as clipping is known to have detrimental effects on the convergence and performance of DP-SGD, even without noise addition [1]. - The paper introduces the replacement of the Vector-Jacobian product with a Scalar-Scalar product to reduce computational complexity. The proposed methods outperform existing SGD approaches by a significant margin in terms of speed, which is crucial as memory usage and time inefficiency are major drawbacks of DP-SGD. [1] Differentially Private Sharpness-Aware Training (ICML’23) Weaknesses: - Please refer to the questions. - (Minor) There are several typos, such as the use of "cotangeant vector", which sounds a little awkward, and inconsistencies in figure references (e.g., Fig 4 vs. Figure 5). Please carefully review the grammar and correct the typos. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The main concern is whether "Clipless DP-SGD" is truly effective compared to "Clip DP-SGD" in terms of training stability and performance. While it is understood that the main idea is to address the detrimental effects of clipping by proposing a clipping-free approach, the proposed method still relies on approximations or has drawbacks such as the Lipschitz-constrained condition or the gradient approximation in scalar-scalar form. Thus, (i) Can you provide mathematical bounds to support the strength of your methods? or (ii) Can you provide additional experimental results beyond Figure 5 to further demonstrate the performance of your approach? 
- Recent papers have explored different ways of using DP-SGD, particularly in fine-tuning tasks, to mitigate memory usage and time inefficiency [2,3]. Considering that their approaches may differ from training-from-scratch tasks, it would be worthwhile to investigate whether your proposed networks are well-suited for fine-tuning tasks to improve generalization performance. - Can you provide more details on the implementation of GNP networks in DP-SGD? The implementation of the 1-Lipschitz constraint seems naive and detrimental to training, similar to selecting a norm-clipping value in clip DP-SGD. However, the authors mention a problem in this implementation, stating, "With Gradient Norm Preserving (GNP) networks, we expect to mitigate this issue" (line number 236). Could you further explain how your methods address this problem? Or let me know if my understanding is incorrect. I’d be happy to raise my score if the authors can address the weaknesses/questions in the rebuttal. [2] Large language models can be strong differentially private learners (ICLR’22) [3] Scalable and efficient training of large convolutional neural networks with differential privacy (NeurIPS’22) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The paper provides a detailed discussion of its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading and your challenging questions. #### **Question (i)** Note that the scalar-scalar form is not an approximation: it is a valid upper bound (the “=” symbol is a typo; it is actually “<=”) coming from applying the Cauchy-Schwarz inequality to a matrix-vector product. Therefore all the reported guarantees are correct, and this has been verified empirically. Could you detail the nature of the mathematical bounds you wish to see? If it concerns the expressiveness of Lipschitz-constrained networks, please see our common response. For more details on the computation of gradient bounds, and on the variance of the gradient, you can take a look at the proofs in Appendix E.2 and E.3. For an analysis of the consequences of gradient clipping in Lipschitz networks, see Appendix B.5. #### **Question (ii)** We ran additional experiments on two tabular datasets (Adult and Breast Cancer), where our approach is competitive with vanilla DP-SGD for a comparable hyperparameter optimization budget. Please see the general response and attached PDF for detailed results. > Can you provide more details on the implementation of GNP networks in DP-SGD The whole network can be constrained to be 1-Lipschitz with respect to the input and will remain able to fit arbitrary decision boundaries (see Bethune et al.), so there is no need to tune the Lipschitz constant. Moreover, the parametrization is not so naive, as discussed in Appendix B.1. In short, we rely on spectral normalization to enforce the 1-Lipschitz constraint. The leading eigenvector (corresponding to the maximum singular value) is cached from one gradient step to the next to speed up computations. Moreover, the Bjorck algorithm (see Anil et al. (2019)) projects matrices onto the Stiefel manifold (i.e., close to orthogonal), which ensures that the gradient norm remains unchanged after one gradient step. 
Indeed, if the matrix is orthogonal, then the Jacobian is orthogonal too, which ensures that VJP products are norm-preserving (see Li et al. (2019)). *Béthune, L., Boissin, T., Serrurier, M., Mamalet, F., Friedrich, C. and Gonzalez Sanz, A., 2022. Pay attention to your loss: understanding misconceptions about Lipschitz neural networks. Advances in Neural Information Processing Systems, 35, pp.20077-20091.* *Anil, C., Lucas, J. and Grosse, R., 2019, May. Sorting out Lipschitz function approximation. In International Conference on Machine Learning (pp. 291-301). PMLR.* *Li, Q., Haque, S., Anil, C., Lucas, J., Grosse, R.B. and Jacobsen, J.H., 2019. Preventing gradient attenuation in Lipschitz constrained convolutional networks. Advances in Neural Information Processing Systems, 32.* > it would be worth to investigate whether your proposed networks are well-suited for fine-tuning tasks to improve generalization performance Although this paper focuses on training networks from scratch, it is worth noting that it is compatible with fine-tuning in two different ways: 1) By using a 1-Lipschitz backbone (recent work of Serrurier et al. trained a backbone that reached 70% accuracy on ImageNet-1k); 2) Our algorithm is compatible with partial training of the last layers: one can fine-tune a standard backbone with the last layers replaced by their Lipschitz equivalents. The only non-trivial change is that the input of this last layer must be bounded, which can be enforced easily. We leave this for future work. --- Rebuttal Comment 1.1: Comment: Thank you for the responses and clarifications the authors provided. - One of my remaining questions is about the performance aspect. As my first question was a little ambiguous, I would like to clarify: my original point was to question whether GNP is inherently more effective than gradient clipping in the context of deep learning optimization, even though the proposed methods theoretically have a smaller upper bound. 
My concern arose after observing the CIFAR-10 accuracies depicted in Figure 5. Although the authors mention the potential for future integration, it seems, in my view, that employing GNP remains a concern compared to using gradient clipping. At least in my opinion, using GNP is better than gradient clipping when $\epsilon$ is small. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful question. > At least in my opinion, using GNP is better than gradient clipping when $\epsilon$ is small. Your remark raises interesting questions about the dynamics of DP training in neural networks. This reminds us of the following excerpt from De et al. (p. 6 of their work) that we’d like to share here: "*[...] reducing the variance introduced by noise may be more important than reducing the bias introduced by clipping.*" We note that their concern was attaining high accuracy in higher $\epsilon$ regimes. Gradient clipping reduces variance at the cost of increased bias. Clipless DP-SGD removes the bias but suffers from higher variance if the gradient bounds are too loose. If your hypothesis is correct, it means that low bias is more important in low $\epsilon$ regimes, and reduced variance is more important in higher $\epsilon$ regimes. Intuitively, that seems natural. We tested your hypothesis by tuning hyper-parameters in the small $\epsilon$ regime, and we obtain the following results on CIFAR-10:

| **Epsilon** | 3.5 | 4.1 | 4.9 |
| ---- | ---- | ---- | ---- |
| **Val. Acc.** | 38.9 | 40.4 | 42.1 |

With smaller epsilon constraints:

| **Epsilon** | 0.75 | 0.96 | 1.14 |
| ---- | ---- | ---- | ---- |
| **Val. Acc.** | 31.5 | 34.2 | 35.2 |
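The scalar-scalar (Cauchy-Schwarz) bound and the gradient-norm-preservation property of orthogonal Jacobians discussed in this thread can be checked numerically on toy matrices. This sketch is illustrative only and is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(64, 32))   # toy stand-in for a layer Jacobian
v = rng.normal(size=64)         # toy stand-in for a cotangent vector

def spectral_norm(W, iters=200):
    """Power iteration for the largest singular value (as in spectral
    normalization; in practice the leading vector is cached across steps)."""
    u = np.ones(W.shape[1]) / np.sqrt(W.shape[1])
    for _ in range(iters):
        u = W.T @ (W @ u)
        u /= np.linalg.norm(u)
    return np.linalg.norm(W @ u)

sigma = spectral_norm(J)

# Scalar-scalar bound: ||v^T J|| <= ||v|| * sigma_max(J) by Cauchy-Schwarz /
# the operator norm -- an upper bound, not an approximation.
assert np.linalg.norm(v @ J) <= np.linalg.norm(v) * sigma + 1e-8

# GNP case: an orthogonal Jacobian preserves the norm exactly, so the
# bound is tight instead of loose.
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
assert np.isclose(np.linalg.norm(v @ Q), np.linalg.norm(v))
```

The last two lines mirror the bias-variance trade-off above: for a generic Jacobian the bound is loose (more noise than necessary), while for near-orthogonal GNP layers it is tight.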
Summary: Differentially Private (DP) Deep Neural Networks (DNNs) face challenges in estimating tight bounds on the sensitivity of the network’s layers. Instead, they rely on a per-sample gradient clipping process (as argued by the authors). This process not only biases the direction of the gradients but also proves costly in both memory consumption and computation. To provide sensitivity bounds and avoid the drawbacks of the clipping process, the authors provide a theoretical analysis of Lipschitz-constrained networks and uncover a previously unexplored link between the Lipschitz constant with respect to the input and the one with respect to the parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, the authors argue that DP training of these networks is guaranteed. Strengths: The paper is well-structured and clearly written. The theoretical part is simple and easy to follow. Weaknesses: Estimating Lipschitzness with respect to parameters may not be necessary. If the network is Lipschitz continuous with respect to the input, its gradient will be bounded, and thus the weight update will also be bounded. So, the motivation may not be rational. Experimental results do not support the arguments. The validation accuracy of the DP-SGD is lower than in several referenced works. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Adam uses a normalized gradient between the first-order momentum and the square root of the second-order momentum, which makes it robust to Lipschitz variations in different layers. Does it still need gradient clipping? 2. Regarding per-sample gradient clipping: in the current deep learning framework, we usually do not use per-sample gradient clipping. Instead, we apply gradient clipping after obtaining the averaged gradients. In the abstract, the authors mention the difficulty of per-gradient clipping, but I don't understand the reason? 3. Also, gradient clipping is a fast operation. 
I don't understand why it results in higher memory consumption and computation. 4. Regarding the experiments, the results do not show that the DP-SGD achieves better performance (validation accuracy) than the referenced works. How should we evaluate the effectiveness of this work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses and my questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts in engaging in this discussion. From what we understand, there are a few points which we need to clarify further. As for the summary, our main contribution is to make the DP training process quicker and more memory efficient by **eliminating** the need for per-sample gradient clipping. Our new algorithm is named **Clipless DP-SGD**, while the “DP-SGD” you refer to is the seminal work of Abadi et al. (2016). > in the current deep learning framework, we usually do not use per-sample gradient clipping. Instead, we apply gradient clipping after obtaining the averaged gradients. [...] In the abstract, the authors mention the difficulty of per-gradient clipping, but I don't understand the reason? There is a common confusion between the *per-sample gradient* clipping of the DP training literature and the one used in non-private training (where people usually work on the *averaged gradient*). In Differential Privacy, the noise must be calibrated to the sensitivity of the function (here, the gradient computation) to the change of one data point (see the seminal paper of the field, Dwork et al. (2006)). For a traditional neural network it is not possible to compute bounds on the sensitivity of the parameter-wise gradient (which is the source of the privacy leakage), as observed in Abadi et al. (2016). This is why they proposed DP-SGD: a variation of SGD in which the per-sample gradients are clipped, which ensures they are bounded. This idea is at the core of all DP libraries for deep learning (Opacus, tf_privacy, jax_privacy, etc.), except ours. In order to grasp the difficulty of implementing per-sample clipping, please take a look at this implementation of per-sample gradient computations in the Linear layers of the Opacus library: [it can be cumbersome and costly](https://github.com/pytorch/opacus/blob/main/opacus/grad_sample/linear.py). *Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K. 
and Zhang, L., 2016, October. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (pp. 308-318).* *Dwork, C., McSherry, F., Nissim, K. and Smith, A., 2006. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3 (pp. 265-284). Springer Berlin Heidelberg.* > If the network is Lipschitz continuous with respect to the input, its gradient will be bounded, and thus the weight update will also be bounded [...] Estimating Lipschitzness with respect to parameters may not be necessary. Your intuition is correct, but this requires additional hypotheses (such as a Lipschitz loss, bounded inputs, and bounded biases): our whole framework is about introducing those necessary hypotheses (Section E in the appendix) and automating the computation of the bound. Without those hypotheses, a conventional neural network **is not** Lipschitz with respect to its parameters over the whole domain. Knowing that the gradient is bounded is not sufficient: to report valid $(\epsilon,\delta)$ guarantees we **need** to calibrate the noise, which requires knowing in advance the exact upper bounds on the gradient. Without this knowledge, no DP guarantees can be given. > Also, gradient clipping is a fast operation. I don't understand why it results in higher memory consumption and computation. With private training, we have to clip the gradients on a per-sample basis, which significantly increases runtime and memory consumption. The clipping itself is not the most expensive operation; rather, the cost comes from the computation of per-example gradients. This has been observed since the early work of Abadi et al. (2016), and has proven to remain a difficult challenge to this day (please read the excellent survey “How to DP-fy ML?” and references therein for more details). 
It can also be seen in the runtimes of competing frameworks (Figure 4 of our paper). While efficient implementations exist, as discussed in the early work of Goodfellow (2015) or the recent paper of Lee and Kifer (2021), it has proven challenging to overcome for frameworks like TensorFlow or PyTorch. To the best of our knowledge, only JAX natively supports these operations efficiently. *Goodfellow, I., 2015. Efficient per-example gradient computations. arXiv preprint arXiv:1510.01799.* *Ponomareva, N., Hazimeh, H., Kurakin, A., Xu, Z., Denison, C., McMahan, H.B., Vassilvitskii, S., Chien, S. and Thakurta, A.G., 2023. How to dp-fy ml: A practical guide to machine learning with differential privacy. Journal of Artificial Intelligence Research, 77, pp.1113-1201.* *Lee, J. and Kifer, D., 2021. Scaling up differentially private deep learning with fast per-example gradient clipping. Proceedings on Privacy Enhancing Technologies, 2021(1).* > Adam [...] Does it still need gradient clipping? Yes! For example, [tf_privacy has its own implementation](https://github.com/tensorflow/privacy/blob/fafa69b65c2a20c8e06cab7032e8b25e8dbcea43/tensorflow_privacy/privacy/optimizers/dp_optimizer_keras.py#L600). > Experimental results do not support the arguments This is not true. There are no overclaims in the paper. We claimed that our framework was among the fastest, and this can be verified in Figure 4. We did not claim that our framework yielded SOTA results in Figure 5. Hence, the experimental results support the claims. Once again, thank you for engaging in the discussion. We will use your feedback to improve the introduction and the context. Please do not hesitate to tell us which sections and parts of the paper you would like further clarifications on. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer Coyf Comment: Thank you for your responses and clarifications. - I remain unconvinced by the argument that per-sample gradient clipping significantly enhances speed. 
Consider the computational complexity of a convolution operator, which stands at $O(C_{in} \times C_{out} \times K \times K \times W \times H)$. In contrast, gradient clipping necessitates only $O(C_{in} \times C_{out} \times K \times K)$ comparison operators. Similarly, for a linear layer, the computational complexity is $O(C_{in} \times C_{out} \times D)$, while gradient clipping requires only $O(C_{in} \times C_{out})$ comparison operators. In both convolutional and linear layers, the reduction in computation appears negligible. - Numerous studies delve into the realm of local or global Lipschitz continuity. I remain skeptical about the challenge in guaranteeing bounded inputs, loss, and bias terms. Thus, I prefer to keep my previous rating. --- Reply to Comment 1.1.1: Comment: > I remain unconvinced by the argument that per-sample gradient clipping significantly enhances speed I think there is a misunderstanding here: **per-sample gradient clipping is the solution of the literature, not the one proposed in our paper**. Besides, as we explained above, the computational cost comes from per-sample gradient computation. We are confused by your review. It feels like you're criticizing the whole field of research rather than our contribution. You say you remain unconvinced, but did you read the references we sent? If you believe that speed is not an issue in DP training, are you suggesting that existing libraries like Opacus or tf_privacy are not implemented correctly? Are you suggesting that the survey "*How to dp-fy ml*" (that was presented in ICML 2023 as a tutorial) is ill-informed on the challenges of the field? How do you explain the speed improvement of our framework over concurrent frameworks if you think the problem does not exist at all? > Similarly, for a linear layer, the computational complexity is [...] convolutional and linear layers, the reduction in computation appears negligible.
You omitted the batch size $B$ from your complexity evaluation. Your computations are only valid for gradient clipping on the *average gradient*, whereas the field of DP training relies on *per-sample gradient* computations. Therefore, it is not relevant for DP training. We suggest you take a look at Section 4 of the seminal paper of Abadi et al. We hope their analysis can convince you. > Numerous studies delve into the realm of local or global Lipschitz continuity. I remain skeptical about the challenge in guaranteeing bounded inputs, loss, and bias terms. Would you mind citing a *single one* of those "numerous studies" that provides formal Lipschitz guarantees over the parameter space as we do? Can you show us a *single neural network* (beside ours) for which you know the global upper bound of the Lipschitz constant over the parameter space?
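To make the batch-size factor explicit, here is a back-of-the-envelope count of tensor entries involved in the two operations for a single linear layer (an illustrative sketch with hypothetical sizes):

```python
def clipping_costs(batch_size, c_in, c_out):
    """Entries touched when clipping the *average* gradient vs. the
    per-example gradients DP-SGD must materialize and clip, for one
    linear layer."""
    avg_clip = c_in * c_out                  # one gradient tensor to rescale
    per_example = batch_size * c_in * c_out  # B gradient tensors to build and rescale
    return avg_clip, per_example

# e.g. a 512x512 layer with batch size 256:
avg, per_ex = clipping_costs(256, 512, 512)  # 262144 vs 67108864 entries
```

The ratio is exactly the batch size $B$: the extra cost lies not in the comparison operators but in building and storing the $B$ per-example gradients in the first place.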
Summary: The paper addresses the problem of efficiently bounding the sensitivity of gradients in DP-SGD by using special architectures the layers of which can be proven to be Lipschitz with respect to the parameters, and hence have bounded gradients. They show how to recursively calculate the sensitivity of a sequence of layers, and incorporate the method in an algorithm to perform private SGD without clipping. Strengths: The writing is exceptionally lucid. The concept is original and potentially significant, although there are at present many limitations. Weaknesses: There are a lot of constraints on the architecture that severely limit the potential of the method for short-term impact. I'm torn, because introducing the concept at this stage is of value, but far more work must be done -- both theoretical, in establishing the requisite bounds for popular architectures -- and experimental, in demonstrating that the approach achieves good points on the privacy/utility/efficiency Pareto frontier -- before we can assess the significance of the work. I'm not convinced that it isn't a major problem that the gradients can vanish during training. This is the reason for the success of adaptive (layer-wise) clipping strategies. In particular see "EXPLORING THE LIMITS OF DIFFERENTIALLY PRIVATE DEEP LEARNING WITH GROUP-WISE CLIPPING" which would seem to enjoy the efficiency of your approach without the drawbacks of vanishing gradients or restricted architecture class. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: What is meant by "the activation is centered in zero"-- in particular, is ReLU really centered? Experimental results. Figure 4 is too small. I can't see tensorflow_privacy? Also it appears that the advantage over optax is negligible. In Figure 5, the other algorithms get one or a few points, while their own algorithm is given the benefit of thousands of hyperparameter choices to establish the Pareto frontier.
Moreover the result is negative: for EMNIST the other algorithms match yours, and for the other datasets the other algorithms have higher accuracy. Probably the DP definition should be using add/remove-one neighborhoods, in accordance with the use of (pretend) Poisson sampling for accounting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Limitations are honestly and adequately discussed. No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Despite your rating, we note that you recognize the originality of our work and its potential impact, and that you did not find any major issue in our method or our methodology. Your concern seems to be about the short-term impact of the paper. We are not convinced that beating SOTA results is necessary for a novel idea to be interesting to the scientific community, especially when it is accompanied by a Python package to ease reproducibility and future evaluations. Below, we detail why the theoretical and empirical constraints you pointed out are not so severe. ### Theoretical work > There are a lot of constraints on the architecture that severely limit the potential of the method for short-term impact. The `DP_Layer` blocks we implemented are flexible enough to accommodate most popular neural network architectures used in computer vision or tabular data (e.g. convolutions, Lipschitz pooling, Lipschitz activations, layer centering in Appendix C.2.3, etc.). These blocks also allow residual connections. For instance, we easily re-implemented Lipschitz-constrained ResNets (see Appendix B4 p17, and Figure 8 p18), VGGs, and even the recent MLP Mixers [54] (Appendix C.2.4), and we report their individual results in Figure 10 p26. You can also take a look at the folder "experiments" in the Python package, or the documentation of the Python package. Note that Theorem 1 is also quite general and covers most feed-forward architectures. See our common response for more details about the expressiveness of this family of architectures. ### Experimental work Please see our general response regarding the positioning of the paper, the empirical comparison to prior work, and additional experiments against a vanilla DP-SGD baseline. In Lipschitz network training, vanishing gradients can be a serious issue (see Li et al. (2019)), and they yield an unfavorable signal-to-noise ratio during backprop for private training with our clipless method.
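To give an idea of the kind of constrained layers involved (an illustrative numpy sketch with helper names of our choosing, not the lip-dp implementation): a dense layer can be kept gradient-norm-preserving by projecting its weights onto the orthogonal matrices, and combined with a 1-Lipschitz activation such as GroupSort.

```python
import numpy as np

def project_orthogonal(W):
    """Project W onto the nearest (semi-)orthogonal matrix: all singular
    values become 1, so the layer is 1-Lipschitz and norm-preserving."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

def groupsort2(x):
    """GroupSort with groups of 2: a 1-Lipschitz, norm-preserving
    activation satisfying sigma(0) = 0 (it is 'centered in zero')."""
    pairs = x.reshape(-1, 2)
    return np.sort(pairs, axis=1).reshape(x.shape)
```

Because every singular value of the projected weight equals 1, such a layer neither amplifies nor attenuates gradient norms, which is the property at stake in the vanishing-gradient discussion above.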
We are grateful for your suggestion regarding the use of group-wise gradient clipping. Nonetheless, the method you mention does not intend to get rid of per-sample clipping, hence it is orthogonal to ours. The two methods could, however, be complementary, and their combination investigated in future work. *Li, Q., Haque, S., Anil, C., Lucas, J., Grosse, R.B. and Jacobsen, J.H., 2019. Preventing gradient attenuation in lipschitz constrained convolutional networks. Advances in neural information processing systems, 32.* ### Questions > What is meant by "the activation is centered in zero"-- in particular, is ReLU really centered? "The activation is centered in zero" means that $\sigma(0) = 0$. We define it formally in Assumption 2 p28 in the appendix. This includes ReLU, LeakyReLU, tanh, GroupSort, and a few others. This hypothesis allows tighter bounds in Theorem 1 than just assuming "bounded activations". We will clarify the meaning of this hypothesis. > Figure 4 is too small. I can't see tensorflow_privacy? Thank you for your comment. We will make the figure bigger. The tensorflow_privacy library seems inefficient in terms of runtime and memory usage; therefore, we had to cap the plot at batch sizes no bigger than 256. > In Figure 5, the other algorithms get one or a few points, while their own algorithm is given the benefit of thousands of hyperparameter choices to establish the Pareto frontier The baseline results we report for traditional DP-SGD are those specifically selected by previous papers, which typically required extensive tuning across multiple studies to yield SOTA results on these datasets. These papers often do not provide a comprehensive figure of the Pareto frontier, and our aim was to be more transparent about our privacy/utility values over a larger spectrum of privacy budgets.
Also, the green lines appear because we log the (epsilon, val_accuracy) tuple corresponding to every epoch of every run: an individual point is not yielded by one hyperparameter choice; rather, it is the status of one of our experiments at one point in time. Thanks to the efficiency of our framework, these experiments have been conducted on a single GPU in a few days. We see this exhaustiveness as further proof that our method scales well. *Since you are not the only reviewer to have misinterpreted this figure, we will modify the caption.* > Probably the DP definition should be using add/remove-one neighborhoods Your remark is correct; we will modify the definition accordingly. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I appreciate that it should not be necessary to beat all SOTA results as a precondition for initial publication. But I feel there is a high potential for this method to produce poor models in terms of accuracy, and there are not sufficient experimental results to demonstrate that this is not occurring. My concern is the same as reviewer 6vik's weakness #1. Without clipping, if you have very small gradient norms, your signal-to-noise ratio is low, and the trained model will have poor performance. Clipping lets you control this signal-to-noise ratio more effectively, particularly with adaptive clipping, or better yet layer-wise adaptive clipping, which also solves the efficiency problems around per-example clipping. So I feel there needs to be clearer empirical evidence that the method does not suffer compared to those methods in terms of accuracy. Another idea that might help to alleviate this concern would be to actually measure the empirical gradients and report their distribution: what is the empirical signal-to-noise ratio? But that would be outside of the scope of this review process. I maintain that it is not fair to show your Pareto frontier compared to a single point from another algorithm.
I see your point that ideally we would all be publishing our Pareto frontiers, but since we are not, the honest comparison is to run your own hyperparameter tuning and to compare it to the other published result. Thank you for clarifying that the architectural constraints are not as severe as I had thought. I will increase my score to 3 and decrease my confidence. --- Reply to Comment 1.1.1: Comment: Thank you for opening this discussion. > Another idea that might help to alleviate this concern would be to actually measure the empirical gradients and report their distribution We did this in **Figures 6.b) and 7.b) in the appendix, p17**. We see that the gradient norms for GNP networks are *higher* than those of conventional networks. > Without clipping, if you have very small gradient norms, your signal-to-noise ratio is low, and the trained model will have poor performance. The signal-to-noise ratio is not the only metric that influences the final utility: the gradient also needs to be unbiased for the signal to actually be useful. By getting rid of clipping, we also get rid of the induced bias. > I appreciate that it should not be necessary to beat all SOTA results as a precondition for initial publication [...] I see your point that ideally we would all be publishing our Pareto frontiers We agree; that is indeed a fairer experimental procedure, and our work pushes towards this ideal. > I feel there is a high potential for this method to produce poor models in terms of accuracy [...] Note that in the rebuttal we provided two new datasets on which the final utility is competitive with (or exceeds) that of vanilla DP-SGD. > I maintain that it is not fair to show your Pareto frontier compared to a single point from another algorithm [...] the honest comparison is to run your own hyperparameter tuning and to compare it to the other published result When offering a library, average behavior becomes more indicative of typical performance than seeking out the extremes.
In a scenario with an unlimited computational budget, even a random baseline holds a nonzero chance of outperforming the current state of the art (SOTA). Hence, when we emphasize the highest score in our reporting, what are we truly measuring apart from the computational budget? However, we understand your overall concern; please see this additional experiment with Clipless DP-SGD that uses the standardized CIFAR-10 dataset (assuming no privacy loss in computing the mean and std, just like the Opacus tutorial): | **Epsilon** | 4.05 | 5.0 | 8.01 | 11.1 | 15.03 | | ----- | ------ | ------ | ------ | ----- | ---- | | **Val. Acc.**| 37.6 | 39.3 | 42.0 | 43.2 | 44.5 | We see that we close the gap with the results of De et al., who are among the top performers on CIFAR-10.
Summary: The paper studies the question of how to do differentially private optimization without using per-sample gradient clipping, in order to simplify the iterations and reduce their cost. The paper proposes to restrict the class of functions to feed-forward neural networks for which it is feasible to compute a bound on the gradient norm (Lipschitz constant), and proposes to adaptively compute the bound on the gradient norm (layer-wise) at every step of DP-SGD depending on the current iterate point. The paper provides a description of the algorithm, as well as an evaluation of its practical behavior. Strengths: - An efficient implementation of the algorithm is provided. - Experiments show that the per-iteration runtime of the proposed algorithm is indeed faster. - Overall the paper is interesting and novel and provides a new direction for future research. Weaknesses: 1. No clear comparison of the proposed algorithm to the baseline method (DP-SGD) is given in terms of the final accuracy. When restricting to the same architecture, it is unclear if the proposed algorithm can still reach good accuracy compared to the classical DP-SGD with gradient clipping. Without clipping the gradients, the amount of DP noise added to each gradient is larger than if you clip the gradients, which might hurt the final performance. 2. From the experiments on CIFAR10 one might conclude that for the same privacy $\epsilon$ the final accuracy of the baselines is much better than that of the proposed algorithm, which makes the proposed algorithm not applicable. 3. In the "local" strategy (line 201), how exactly did you calculate the amount of the noise to be added? I did not find a clear description of the "local" strategy, and how it is different from the "global" strategy. 4. Some parts of the paper are not very clearly written (see questions below). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What do the green lines in the Pareto front in Figure 5 represent? 2.
Remark 1 says that the paper proposes a more efficient way to compute X_d; however, I did not find the algorithm in the paper. 3. It is unclear from the current presentation what the connection of Theorem 1 with Algorithm 1 is; is it only for easier computation of step 6 of Algorithm 1? What about steps 2 and 7? 4. I did not understand the paragraph in lines 258-267. However, some hybrid approach might indeed be a good solution: if the estimated worst-case gradient norm is large, the training might benefit from clipping in order to reduce the amount of noise added to the gradients; however, when the estimated worst-case gradient norm is small, the training would benefit from using the estimates and have little added noise. 5. Many experimental details are missing: which architecture/hyperparameters did you use, etc. 6. A very minor comment: when printing the paper, formulas in green and yellow are not very visible; try using a different color. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed questions and careful reading. #### **Remark 1** We agree with your observation that a direct comparison to vanilla DP-SGD is valuable. Please see our general response for preliminary results on CIFAR-10 and on two tabular datasets. Note that there isn't exactly a 1-to-1 mapping between conventional networks and Lipschitz networks, hence some architectures can be more advantageous for one algorithm or the other (see Appendix A1). #### **Remark 2** Note that the competing results we report are not baselines, nor vanilla DP-SGD, but rather the SOTA in the DP training literature. They were obtained using different architectures and relied on ad-hoc pre-processing and algorithmic techniques on top of vanilla DP-SGD. For instance, in the case of CIFAR-10: * Abadi et al. rely on a differentially private PCA embedding of the images' central pixels. * Feldman & Zrnic use a Rényi filter to account for individual privacy leakages. * Bu et al. leverage the properties of f-Differential Privacy. * Chen & Lee use the Armijo condition with noisy gradients and function estimates to find the optimal step size. * Papernot et al. use tempered sigmoids as activation functions. * Nasr et al. encode the gradient and appeal to denoising techniques, whereas we learn raw CIFAR-10 from pixels. * Yu et al. appeal to the definition of concentrated Differential Privacy for tighter bounds and also implement a dynamic privacy budget allocator to improve model quality. We note that our framework is theoretically compatible with most of these improvements. On MNIST, they are not used as extensively, and our approach is on par with the SOTA results. #### **Remark 3** In the "local" strategy, the sensitivity is computed on a per-layer basis, and the added noise is calibrated to each layer individually. In the global strategy, the noise is calibrated to the sensitivity of the whole gradient vector.
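Schematically, the two strategies differ only in how the noise scale is derived from the per-layer sensitivity bounds (a toy sketch with hypothetical values and function names of our choosing, not Algorithm 3 itself):

```python
import numpy as np

def add_noise_local(grads, sens, sigma_mult, rng):
    """Local strategy: each layer's gradient receives noise calibrated
    to that layer's own sensitivity bound."""
    return [g + rng.normal(0.0, sigma_mult * s, size=g.shape)
            for g, s in zip(grads, sens)]

def add_noise_global(grads, sens, sigma_mult, rng):
    """Global strategy: a single noise scale calibrated to the l2
    sensitivity of the concatenated gradient vector."""
    delta = np.sqrt(sum(s ** 2 for s in sens))  # sensitivity of the whole vector
    return [g + rng.normal(0.0, sigma_mult * delta, size=g.shape)
            for g in grads]
```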
Please see Appendix A2 p13 (and Algorithm 3 p14) for more details. #### **Question 1** Each semi-transparent dot on a line is a (val_acc, epsilon) pair measured at the end of an epoch, and over several epochs they correspond to the privacy/accuracy curve of a given set of hyper-parameters. The green line materializes the Pareto front (convex hull of all measurements). #### **Question 2** The papers we cite in Remark 1 propose formal certification methods (e.g. bounding boxes, bounded polytopes, or mixed-integer nonlinear programming) to bound the output of the network as a function of the bounds given on the input. Our Algorithm 1 propagates balls, which is significantly faster than formal certification methods, but also less tight. We will clarify this remark. #### **Question 3** Theorem 1 gives an analytical bound on the global sensitivity $\Delta$, as a function of various parameters (maximum bias, smoothness of the loss, maximum input bound, etc.). This is useful to get a better understanding of the behavior of the algorithm. Of course, in practice we use Algorithm 1, as it is much more practical and versatile, since it back-propagates the bounds that have been manually derived in Theorem 1. In particular, the main steps of Algorithm 1 are: * Step 2: the input bound is computed with a forward pass. * Step 6: computes the layer sensitivity from the gradient bounds and from the characteristics of the layer. * Step 7: scalar-scalar backpropagation of the gradient bounds starting from the last layer. #### **Question 4** We agree that this paragraph could be written more clearly. In a nutshell, we introduce a clipping layer as a final layer in our model that clips the gradients of the loss (w.r.t. the logits) during backpropagation. We gain several advantages: * As you said, we benefit from a better "gradient-to-noise" ratio: please see Appendix B5 p18 for a discussion of the bias it induces.
* As opposed to standard (per-sample) clipping, this process does not slow down the algorithm. Indeed, it is applied only to the last layer, which has a small dimension (and not to each weight of the full network). #### **Question 5** The architectures used and the range of hyper-parameters are given in Appendix D p25. Please see Figure 10 p26 to see the results broken down per architecture type. Finally, the exact architectures for each task can be found in the folder "experiment" of the Python package. If you believe some important information is missing, we will be glad to add it to the supplementary. #### **Question 6** Thank you for the heads up! --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their clarifications that addressed most of my concerns, as well as for the extra DP-SGD experiments. My only remaining question is: Remarks 1 & 2. Would it be possible to fix the architecture to Lipschitz networks and compare to DP-SGD on these networks? As well as to fix the data preprocessing, and compare the algorithms in exactly the same fixed setting (same data preprocessing & same architecture)? --- Reply to Comment 1.1.1: Comment: Thank you for your continued effort in engaging in the discussion. > Would it be possible to fix the architecture to Lipschitz networks and compare to DP-SGD on these networks? Could you clarify your question? From what we understand, you are curious about the performance of DP-SGD on GNP networks? If so, take a look at Figure 11 in Appendix D3, p26. > in exactly the same fixed setting (same data preprocessing & same architecture)? There isn't an exact 1-to-1 mapping between GNP and conventional networks. For example, residual connections are argued to mitigate the vanishing gradient phenomenon, but this is redundant with the orthogonality condition of GNP networks.
Other difficulties exist: the manifold dimension of orthogonal matrices is half that of general matrices, so at equal width the number of degrees of freedom is actually smaller. We attempted a run of Clipless DP-SGD on CIFAR-10 with an MLP Mixer, on standardized CIFAR-10 (assuming no privacy cost in computing the mean and std, just like in the optax tutorial), and we obtain: | **Epsilon** | 4.05 | 5.0 | 8.01 | 11.1 | 15.03 | | ----- | ------ | ------ | ------ | ----- | ---- | | **Val. Acc.**| 37.6 | 39.3 | 42.0 | 43.2 | 44.5 |
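The scalar bound propagation sketched in our answer to Question 3 (forward pass for activation-norm bounds, backward pass for gradient bounds, combined into per-layer sensitivities) can be illustrated as follows. This is a toy sketch with hypothetical per-layer Lipschitz constants, not the lip-dp code:

```python
def backprop_bounds(lipschitz, input_bound, loss_grad_bound):
    """Toy scalar bound propagation for a chain of layers with given
    Lipschitz constants `lipschitz[k]`.

    Forward pass: propagate a bound on the activation norms.
    Backward pass: propagate a bound on the gradient w.r.t. activations,
    combining both into a per-layer sensitivity bound.
    """
    # forward: activation-norm bounds entering each layer
    acts = [input_bound]
    for L in lipschitz:
        acts.append(L * acts[-1])
    # backward: gradient-norm bounds, starting from the loss
    grad = loss_grad_bound
    sens = []
    for k in reversed(range(len(lipschitz))):
        # for a linear-like layer, ||dL/dW_k||_F <= ||grad out|| * ||act in||
        sens.append(grad * acts[k])
        grad *= lipschitz[k]
    return list(reversed(sens))
```

For instance, a chain of two 1-Lipschitz layers with input bound 2 and loss-gradient bound 1 yields a sensitivity bound of 2 for each layer.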
Rebuttal 1: Rebuttal: We are happy to see that the reviewers appreciated the originality and potential of our contributions. According to the reviewers, "the concept is **original and potentially significant**" (reviewer mBgX), "the paper is **interesting** and **novel** and **provides direction for new research**" (reviewer 6vik), and "**the idea** of removing clipping as an alternative to clipping itself **is promising**" (reviewer vJyY). They also found that "**the writing is exceptionally lucid**" (reviewer mBgX). We address common concerns below and reply to specific comments in the individual response to each reviewer. ## About experimental results and comparisons to prior work A concern shared by all reviewers is about the empirical performance of our approach. We introduce a new baseline approach for DP training, clipless DP-SGD, where the costly per-sample clipping operation used in all current implementations of DP-SGD is replaced by an automatic method that computes upper bounds. Hence, **our core objective is to ensure a fair evaluation** of the idea. We report what can be achieved by this approach in end-to-end training, out of the box, against the SOTA results found in the literature. We believe there are some pitfalls when it comes to evaluating DP algorithms. * Some SOTA results used **ad-hoc pre-processing**. For example, the seminal work of Abadi et al. (2016) trained the network on the PCA embeddings of CIFAR-10. While this yields excellent results, they also set a score that has not been beaten to this day by other "training from scratch" methods (it is known that handcrafted features or pre-training on public data can boost performance in some regimes, see Tramer and Boneh (2020)). This hides the true effectiveness of vanilla DP-SGD on this task.
* Previous papers usually **do not report the Pareto front**: instead, they often report their final accuracy for a few different values of *epsilon* in a table, *after* the extensive fine-tuning of their hyper-parameters. We believe our methodology is more exhaustive and transparent: for example, Figure 4 reports all the runs (including the ones that "failed") over the broad range of hyper-parameters (see Appendix D p25 for details). * Oftentimes, a Lipschitz-constrained network and its conventional counterpart may yield completely different results, since they leverage different implicit biases. Hence, it is hard to compare the two algorithms: some architectures may be more beneficial or detrimental to one than the other. While reaching the SOTA with a new approach is an interesting question per se, we consider that it is outside the scope of this work. To improve accuracy, we could combine our approach with the many tricks introduced in recent literature (our approach is compatible with many of them, see answer to reviewer **6vik**), and even consider hybrid approaches (as discussed in the paper, and with reviewer **vJyY**). But this would at the same time tend to obfuscate the true nature of our contribution, threaten reproducibility, add complexity, and make the evaluation harder. **As we want to foster reproducibility and explorative research, we provide the lip-dp package with reference implementations and documentation.** We keep the framework **as simple as possible**, and as close as possible to the original idea, **so that additional methodological improvements can be evaluated fairly** on top of this baseline. By providing a Python package, we allow every researcher to contribute on the topic and speed up research advancements.
## Additional experiments As relevantly noted by reviewer **vJyY**, to complement our original experimental results, which compare our approach to the current SOTA in DP training, it is interesting to compare our approach to the fairer baseline of vanilla DP-SGD (with clipping) under a similar computational budget. In the attached PDF, **we have included some preliminary experiments** in this direction: * A comparison to vanilla DP-SGD with Opacus, which uses a ResNet-18 with weights pretrained on ImageNet-1k, performing a grid search over (learning rate / maximum gradient norm / batch size) triplets. **We see that our Clipless DP-SGD** (which uses a smaller network trained from scratch) **is close to DP-SGD with a pretrained ResNet-18**. * A comparison to vanilla DP-SGD with Opacus on two tabular datasets (Adult and Breast Cancer). In this case, the architectures are the same except for the activation (see caption of the figure for details). Again, **the results show that our approach is competitive in terms of classification performance.** In the final version of the paper, we will add the complete Pareto fronts, as done in the original experiments. ## Building Lipschitz networks The Lipschitz constraints are actually a mild limitation: universal approximation in the set of 1-Lipschitz functions has been proven by the work of Anil et al., and thanks to Bethune et al. we know that 1-Lipschitz NNs do not lack expressiveness in classification tasks. The main constraint is that we must rely on projections, and not on re-parametrizations (see the discussion in Appendix A.1.1). We agree that building Lipschitz-constrained networks is not easy (see the discussion in Appendix B.1). Settling the question of the best Lipschitz architecture is still an active research field (see the recent works of Wang and Manchester, Meunier et al., Araujo et al.), beyond the scope of our work.
We expect our work to evolve together with this field: it will benefit from the progress in Lipschitz networks. *Wang, R. and Manchester, I., 2023, July. Direct Parameterization of Lipschitz-Bounded Deep Networks. In International Conference on Machine Learning (pp. 36093-36110). PMLR.* *Araujo, A., Havens, A.J., Delattre, B., Allauzen, A. and Hu, B., 2022, September. A Unified Algebraic Perspective on Lipschitz Neural Networks. In The Eleventh International Conference on Learning Representations.* Pdf: /pdf/072e7654ccd777b8d1e9097b531ed59c0eb02029.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models
Accept (poster)
Summary: This article investigates the approximation of the Fokker-Planck equation and the fractional Fokker-Planck equation using positive semi-definite (PSD) models. The authors demonstrate that for a given accuracy $\epsilon$, there exists a PSD model with a dimension not exceeding $\epsilon^{-(d+1)/(\beta - 2s)}(\log 1/\epsilon)^{d+1}$ that approximates the corresponding partial differential equation (PDE). By utilizing this result in conjunction with the PSD model-based sampling method proposed in [1], the authors are able to provide samples of the stochastic differential equation (SDE) corresponding to the Fokker-Planck equation or fractional Fokker-Planck equation at any given time $t$. [1] Marteau-Ferey, U., Bach, F. and Rudi, A., 2022. Sampling from arbitrary functions via PSD models. In International Conference on Artificial Intelligence and Statistics (pp. 2823-2861). PMLR. Strengths: 1. This article demonstrates clear and logical writing, making it easy to understand the problem formulation considered by the authors as well as the fundamental concepts and properties of the PSD model adopted. 2. As a newly proposed model, this paper presents a novel application scenario for the PSD model, which has not been previously explored in related works. This contributes to its significant novelty. 3. From the perspective of sampling based on SDEs, the authors provide a new design approach for sampling by combining existing works, offering a fresh perspective on sampling techniques. Weaknesses: 1. Although the author mentions it in the paper, the influence of initial conditions on the solution of the PDE can sometimes be significant. It would enhance the completeness of the theoretical analysis in this article if the author could provide an analysis of whether the PSD model can satisfy the initial conditions or, in cases where exact satisfaction is not possible, characterize how approximating the initial conditions impacts the approximation of the final solution. 2.
Personally, I have limited knowledge of non-log-concave sampling and numerical modeling of PDEs. However, it would be beneficial if the author could compare the effectiveness of the PSD model-based sampling method with other traditional methods (such as Langevin Monte Carlo) in terms of sampling performance. Additionally, comparing the approximation methods of the PSD model for PDE solutions with other existing approaches, either from the perspective of complexity analysis or purely experimental results, would provide valuable insights. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In line 180, should it be $E e^{j\langle u, L_{t-s}^\alpha\rangle} = \exp\{-(t-s)||u||_2^\alpha\}$? 2. In line 216, should it be $Y \in \mathbb{R}^{n\times d^\prime}$? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Despite the theoretical guarantees provided by the author, I believe the biggest limitation of this article lies in the lack of clear demonstration of the advantages of the PSD model. It would be beneficial if the author could explain how the PSD model stands out compared to other similar models in terms of approximating FPE and subsequent sampling tasks. This would significantly enhance the persuasiveness of the article. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for positively assessing our paper. Below we address the concerns raised by the reviewer. **Approximation Error with Initial Condition**: We bound the $L_2$ error of the difference between the true value and the approximated value of the (fractional) Fokker-Planck equation. However, to achieve this, we need to get the approximation error of all three terms in the equation individually, and that also includes approximating $p(x,t)$. Please refer to Eqns 39, 40, 41, and 42, and the block of equations after 47 in the appendix, where we bound each term individually. This continues in the rest of the proofs. We get the same guarantees for $p(x,t)$ that we get for our PDE, which is similar to (Rudi and Ciliberto, 2021, Theorem 6). Hence, the initial distribution will not change any approximation result in the paper. It will only act as a constraint in the optimization problem to obtain the parameter of the PSD model. Also, please see our response to reviewer T6Bn regarding the algorithm. **Weakness, Comparison, and Limitations**: Previous papers about Langevin Monte Carlo and projected Langevin Monte Carlo algorithms relied on assumptions such as dissipativity, the log-Sobolev inequality, or the Poincare inequality, which are not always satisfied; we do not require any such assumptions. Furthermore, we are not aware of an analysis that covers as broad a class of solutions as we cover in this paper. We only assume that the density function is $\beta$ times differentiable. This assumption is much weaker. Existing results for the fractional Fokker-Planck equation are even scarcer. We are not aware of an approach that provides approximation guarantees for the solution of the fractional Fokker-Planck equation that we can sample from. Since the assumptions about the density function are quite different in the sampling literature and in our paper, it is hard to do a one-to-one comparison of the results.
We have provided worst-case guarantees under a much weaker assumption ($\beta$ times differentiable), and it is not fair to compare them to methods that make stronger assumptions (dissipativity, log-Sobolev, Poincare) to prove convergence results. Our sample complexity results for approximating the solution of the (fractional) Fokker-Planck equation in Theorems 2 and 5 are better than the approximation guarantees in [LCL+21]. On top of that, our method comes with one more advantage over existing machine learning models for approximating the solution of a (fractional) Fokker-Planck equation. Generally, the approximate solution of a PDE obtained by ML models [LCL+21, RPK19, Y+18] is not bound to stay positive and hence cannot act as a candidate to approximate the solution of a (fractional) Fokker-Planck equation, whose solution is a probability density. In previous work [RC21], it was shown that the PSD model can be used for the effective representation of probability densities. Hence, a PSD model-based representation acts as a good candidate to approximate the solution of a (fractional) Fokker-Planck equation. We theoretically show the validity of the PSD model in approximating the solution of the (fractional) Fokker-Planck equation and provide a sample complexity guarantee that was not known before. This guarantee can immediately be used to provide a sample quality guarantee in Corollary 1. A sampling result of such generality was not known before. We will take care of the minor comments in the revision. In line 180, $(t-s)^\alpha$ is correct. We will fix the dimension of $Y$ in line 216. Also, please look at our response to reviewer oceV regarding contributions and experiments. Please look at our response to reviewer biEz for general remarks. We hope that we have answered all your concerns and that the score will be raised further.
--- Rebuttal Comment 1.1: Title: Start of the author-reviewer discussion Comment: We hope that our response answers the questions the reviewer asked. Please let us know of any further questions or confusion about this work so that we can address them. If we have answered all the questions from the reviewer, then in light of all the novel contributions we have made in this paper, we would like to request the reviewer to reevaluate the score assigned to this paper. --- Reply to Comment 1.1.1: Title: Reminder: Start of the author-reviewer discussion Comment: Please let us know if you have further questions or confusion about this work that we can address. If we have answered all the questions from the reviewer, then in light of all the novel contributions we have made in this paper, we would like to request the reviewer to reevaluate the score assigned to this paper.
Summary: The authors introduced a new method of sampling solutions of the Fokker--Planck equation (FPE) using a positive semidefinite (PSD) model. While the algorithm used for the PSD model is not new, the authors created a framework to sample from the FPE, which is quite original based on my knowledge. While I believe the results are very neat, I would like to clarify several questions with the authors before providing a confident score. For now I will recommend borderline accept, and will raise the score once my questions are adequately addressed. Strengths: The approach to drawing samples for the Fokker--Planck is very unique and original. I believe alternative methods that hold promise should be welcomed and more carefully studied, regardless of how practical it is at the moment. Weaknesses: There are a few clarifications I would like to have regarding the implementation of the algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. To start, while I understand the algorithm of [MFBR22] is not part of the contribution for this work, can the authors provide a high level explanation of how the algorithm works, and comment on its computational complexity? I believe all readers can benefit from understanding the core algorithm, and plus I believe readers should be able to implement the algorithm presented in this paper without having to reference other works. 2. Can authors clarify the role that $q$ plays in this problem? In particular, is $q$ given or chosen by the user, and how does the value of $q$ affect the computational complexity of this algorithm? 3. Under the discussion of "Efficient sampling method" the authors mentioned a step of finding the best model $\hat{A}_m$ by minimizing an error function. Suppose I am an applied scientist who wants to sample from a given FPE, how would I compute this model $\hat{A}_m$ from the PDE? 
On a high level, I would like to understand how to implement this sampler end to end, and I don't think I fully understand this part. Once again, I would be happy to raise my score once my questions are adequately answered. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for assessing our work positively. Below we address the major concerns raised by the reviewer. **Sampling Algorithm**: We will add the details of the sampling algorithm in the revision. Below, we describe the methodology of [MFBR22] for sampling from PSD models on a finite hyper-rectangle. The algorithm is an iterative sampling procedure. Given a bounded hyper-rectangle $Q$, $p_Q$ denotes the PSD model-based density on the hyper-rectangle $Q$. Details are in [MFBR22]. Given the function $f$, the algorithm takes three inputs $(Q, N, \varepsilon)$: the hyper-rectangle $Q$ (with sides parallel to the axes) from which we would like to sample, the number of i.i.d. samples $N$ which we would like to obtain, and a parameter $\varepsilon$ which defines the quality of the approximation. First, $Q$ is cut in half along its longest direction, forming two sub-rectangles $Q_1$ and $Q_2$. If $X_Q$ were a random variable following the law of $p_Q$, then $X_Q \in Q_i$ with probability $p_i$ (which can be computed in closed form), and $X_Q | \{X_Q \in Q_i\}$ follows the law of $p_{Q_i}$. Therefore, when looking for a sample from $p_Q$, one of the two smaller sub-rectangles $Q_i$ is chosen randomly with probability $p_i$, in which we look for the sample, and the algorithm is then called recursively to get a sample from $p_{Q_i}$. By applying appropriate stopping criteria that depend on $\varepsilon$, we get a random sample in $Q$. More details about the algorithm are provided in [MFBR22]. The computational and statistical guarantees are provided in Theorems 1 and 2 of [MFBR22]. **Effect of q**: From Lemma 2 in the appendix, it is clear that the approximation error grows linearly with $q$. Hence, the sample complexity will increase linearly with $q$. Intuitively, as $q$ grows, the complexity of the target density function increases, and hence more samples are required to approximate the target density using the PSD model.
However, we are not allowed to choose $q$, as it is a property of the target density. For example, if the probability is bounded away from 0, then $q=1$. (For further details, please see "*Second order conditions to decompose smooth functions as sums of squares*" by Ulysse Marteau-Ferey, Francis Bach, and Alessandro Rudi.) **Algorithm**: The algorithm to sample consists of two parts. In the first part, we get the approximate solution of the corresponding (fractional) FPE using the PSD model, and in the second part, sampling can be done using the method described in [MFBR22]. There are various ways to obtain the PSD model-based approximation for the solution of the (fractional) FPE. Below we describe a couple of them. \\ (1). If the drift $\mu$ satisfies the regularity condition as in [BRS16, Theorem 1.1], then the drift function $\mu$ can be approximated by a function $\hat{\mu}$ in the Gaussian RKHS, and the approximation guarantees are due to the interpolation result in Lemma 4. Once we have $\hat{\mu}$, we can solve the following constrained SDP to obtain $A$: $\min_{\tilde{p}(x,t)} {\left\\| \sum_{i =1}^d \sum_{j =1}^d D_{ij}\frac{\partial^2}{\partial x_i \partial x_j} \tilde{p} (x,t) - \frac{\partial \tilde{p}(x,t)}{\partial t} - \sum_{i =1}^d \frac{\partial }{\partial x_i}(\hat{\mu}_i(x,t) \tilde{p}(x,t) ) \right\\|}$ such that $\tilde{p}(x,0) = p_0(x)$, where $p_0(x)$ is the initial density and the norm is the $L^{2} (\tilde{\mathcal{X}})$ norm. The above optimization problem is clearly a constrained semidefinite program because of the representation of $\tilde{p}(x,t)$ as a PSD model. Given $m$ base point pairs $(\tilde{x}_i,\tilde{t}_i) \in \tilde{\mathcal{X}}$ for $i \in \{1,2, \cdots , m\}$ of space and time (selected uniformly over the ball of radius $R$), $\tilde{p}(x,t)$ is given in equation 6 of the paper. Hence, the above optimization problem optimizes over a PSD matrix $A$.
Now, the only remaining question to answer is whether the norm $L^{2} (\tilde{\mathcal{X}})$ in the above optimization problem can be computed in closed form. Since the representation of $\tilde{p}(x,t)$ is in the span of products of two functions in the Gaussian RKHS, and $\hat{\mu}$ lies in the Gaussian RKHS by approximation, $\\|\cdot\\|_{L^{2} (\tilde{\mathcal{X}})}$ can be computed in closed form. The result in Corollary 1 of the paper is presented under this scheme. (2). As a second procedure, we can follow the approach described in section 3.2 of [RC21] and optimize an empirical version of the problem, that is, $\min_{A \geq 0} \frac{1}{n} \sum_{k=1}^n {\left\\| \sum_{i =1}^d \sum_{j =1}^d D_{ij}\frac{\partial^2}{\partial x_i \partial x_j} \tilde{p} (x_k,t_k) - \frac{\partial \tilde{p}(x_k,t_k)}{\partial t} - \sum_{i =1}^d \frac{\partial }{\partial x_i}(\hat{\mu}_i(x_k,t_k) \tilde{p}(x_k,t_k) ) \right\\|}^2$ such that $\tilde{p}(x,0) = p_0(x)$, where $\tilde{p}(x,t)$ is given in equation 6 of the paper. In the above optimization problem, $(x_k,t_k)$ for $k \in \{1,\cdots , n\}$ are i.i.d. samples from the hyper-rectangle of side $R$, and $(\tilde{x}_i,\tilde{t}_i) \in \tilde{\mathcal{X}}$ for $i \in \{1,2, \cdots , m \}$ are $m$ base point pairs of space and time (selected uniformly over the ball of radius $R$). As long as $(x_k,t_k)$ for $k \in \{1,\cdots , n\}$ remain i.i.d., the sampling procedure for $(x_k,t_k)$ would not change our theoretical result. Hence, we can also choose them uniformly at random. A small amount of regularization can be added to control the norm of $A$. However, this procedure would require us to study the error arising from the empirical approximation of the problem (Theorem 7, [RC21]). Since the focus of this work is to study the approximation properties of the PSD model in approximating the solution of the (fractional) FPE and sampling from it, we do not discuss the empirical approximation method in the paper.
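As a rough illustration of the bisection sampler described under **Sampling Algorithm** above, here is a minimal sketch. The names `rect_mass` and `sample_rect` are ours, and the sub-rectangle masses are computed here by generic numerical integration for illustration only; an actual implementation for PSD models, as in [MFBR22], computes these masses in closed form.

```python
import numpy as np
from scipy import integrate

def rect_mass(density, lo, hi):
    # Mass of an (unnormalized) 2D density over the rectangle [lo, hi].
    # For a PSD model this integral has a closed form; here we simply
    # integrate numerically for illustration.
    val, _ = integrate.dblquad(lambda y, x: density(np.array([x, y])),
                               lo[0], hi[0], lo[1], hi[1])
    return val

def sample_rect(density, lo, hi, eps, rng):
    # Recursive bisection: cut the rectangle in half along its longest
    # side, descend into one half with probability proportional to its
    # mass, and stop once the longest side is below eps.
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    while np.max(hi - lo) > eps:
        ax = int(np.argmax(hi - lo))        # longest direction
        mid = 0.5 * (lo[ax] + hi[ax])
        hi1 = hi.copy(); hi1[ax] = mid      # sub-rectangle Q1
        lo2 = lo.copy(); lo2[ax] = mid      # sub-rectangle Q2
        m1 = rect_mass(density, lo, hi1)
        m2 = rect_mass(density, lo2, hi)
        if rng.random() < m1 / (m1 + m2):   # choose Q1 with probability p1
            hi = hi1
        else:
            lo = lo2
    return 0.5 * (lo + hi)                  # representative point of the final cell
```

For instance, calling `sample_rect(lambda z: np.exp(-z[0]**2 - z[1]**2), [-3, -3], [3, 3], 0.1, np.random.default_rng(0))` returns a point inside $[-3,3]^2$; the stopping threshold `eps` plays the role of the approximation-quality parameter $\varepsilon$.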
We hope that we have answered all your concerns and that the score will be raised further. --- Rebuttal Comment 1.1: Title: Start of the author-reviewer discussion Comment: We hope that our response answers the questions the reviewer asked. Please let us know of any further questions or confusion about this work so that we can address them. If we have answered all the questions from the reviewer, then in light of all the novel contributions we have made in this paper, we would like to request the reviewer to reevaluate the score assigned to this paper. --- Reply to Comment 1.1.1: Title: Reminder: Start of the author-reviewer discussion Comment: Please let us know if you have further questions or confusion about this work that we can address. If we have answered all the questions from the reviewer, then in light of all the novel contributions we have made in this paper, we would like to request the reviewer to reevaluate the score assigned to this paper.
Summary: This paper considers numerical methods for the Fokker-Plank equation and its "fractional Laplacian" version. These partial differential equations describe the evolution of densities of certain random processes: in the case of standard FP, the process is a regular diffusion (an SDE driven by Brownian motion), whereas the fractional version corresponds to a process driven by $\alpha$-stable noise. Accordingly, the authors focus on initial conditions given by probability densities. The idea behind the paper is to adapt the fairly recent methodology of positive semidefinite models (PSD) to this PDE problem. PSD models estimate nonnegative functions via feature maps $\phi:X\to H$, where $X$ is the domain of the function and $H$ is a Hilbert space. Given $\phi$, the nonnegative function of interest is estimated as $\langle \phi, M\phi\rangle$, where the PSD linear operator $M$ is chosen from the data. The whole method is based on the same idea as the well-known kernel trick: there is no need to explicitly deal with the Hilbert space $H$; only the kernel matrix over data points will matter. In a nutshell, the paper gives a method for approximately solving the two aforementioned PDEs via PSD models. The salient features of this approach are as follows. 1) The PSD estimator can be obtained via a semidefinite program (as shown in previous work). 2) The method will output probability densities, and one can sample from them via methods from previous work. 3) The complexity of the problem grows like $\approx \epsilon^{-d/\beta}$, where $\epsilon$ is the target accuracy, $d$ is the dimension and $\beta$ measures the degree of smoothness of the solution. The exponent is not quite right, but the point is that sufficient smoothness will avoid the curse of dimensionality. One may note that this method has rigorous error guarantees. This sets it apart from other PDE solvers based on machine learning ideas, such as Physics Inspired Neural Nets (PINNs). 
Strengths: The paper gives strong convergence guarantees for a new PDE solving method. Although several ingredients come from previous work, they are combined with new ideas into an impressive package. As mentioned above, this is an ML-based PDE solver with rigorous guarantees. Importantly, these guarantees do not require coercivity or hypercontractivity properties that are usually required from Langevin samplers for solutions. Weaknesses: * No experiments are performed to compare the method with other PDE solvers (though this is understandable in a heavily mathematical paper). * The writing is a bit sloppy in places: I found quite a few typos and have several minor comments on writing (I could probably find some more typos if I kept looking). See the list below. NB: were it not for the sloppiness, I would have given this paper a higher score. --- _List of typos and comments on writing_ (Line numbers refer to the full paper in the SM.) Lines 146/7: "It was" and "It has" should be "They were" and "They have". Line 180: why define $j=\sqrt{-1}$ if you could have just written $\sqrt{-1}$ in the exponent? (You never use this notation again, do you?) Line 185: Shouldn't the condition for infinite $p$-th moment be $p\geq \alpha$? Line 201 and elsewhere: I find it a bit odd that the transpose notation is used for vectors in Hilbert space. If you are going to use it, at least explain here what is going on. Line 251: why is this sentence needed here? Line 205: $\eta$ was treated as a vector before, and will be treated as a vector later in the text. Here, however, it seems to be a scalar. Line 223: "The cost of the algorithm is..." -- per sample point, right? Line 254: "Hence, $p^\star(x,t)$" Line 255 should end with ":". Line 270: the definition of $\psi$ should be in a numbered display for easier reference. Display (8): if $p^*$ is the true solution, why do we have it in here? (Its contribution vanishes.)
Indeed, the definition of the loss later does not involve $p^*$; see e.g. (9). (On the other hand, $p^\star$ shows up again in the display right after line 989.) Line 294 and elsewhere: when you mention a numbered theorem or assumption, start the corresponding word with a capital letter. Line 295: $\mathcal{H}_X\otimes\mathcal{H}_T$ Line 318: "by Gaussian kernel approximation" -- should this be here? Line 365: "Similarly to the previous section, two steps are required to obtain" Line 327 and elsewhere: I think $\mathbb{E}_t$ was not defined anywhere. Line 352: "If the kernel satisfies the Bochner..." Line 355 and elsewhere: this is a matter of taste, but couldn't you replace "utilize" with "use". Line 376: "The major difficulty" Line 391 (taste): "does require" could be "require". Line 395: I think "approximated" should be "approximating". Proposition 4 is also in Evans's book, which was cited previously (I am saying this from memory, so please check. $W_2$ is replaced by $H$ in that book.) Many spots of the SM: the inequality $\|f\star g\|_p\leq \|f\|_p\|g\|_1$ should be mentioned, as it is used in several steps. Lemma 2: it seems that $\tilde{p}$ is replaced by $f$ in the statement of the Lemma and at several steps of the proof. (E.g. the display right after 665.) This also seems to take place in the next Lemma. Line 680: Since you are repeating a definition from the preceding proof, please say so. Line 682: "We use the result of Lemma 1" Line 746: Should "The following result holds" be here? Display (34): this "fill distance" is the smallest value of $h$ such that $\tilde{X}$ is an $h$-net of $[-R,R]^d$. Display below line 777: how should I interpret the norm in the RHS? Lines 873/4: why are you speaking of conditional probabilities here? Line 891: what is $f(\delta)$? Line 910: "redefine" should be "define again" (they are not the same). Line 928: capitalize "Holder".
Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: * Have you attempted experiments against other ML approaches, or other methods from the literature? * Methods have been proposed recently to speed up SDPs, including randomized sketches (https://math.paperswithcode.com/paper/scalable-semidefinite-programming). Could these methods be applicable in the present setting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: They have not addressed limitations explicitly, but I do not think that that would have been necessary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for assessing our work positively. Below we address the major concerns raised by the reviewer. **Typos and Presentation**: Thank you very much for pointing us to the typos. We will fix the typos and improve the presentation. We will also include the suggested citations in our paper. Line 251: If a probability density function has a limit at infinity, that limit is always zero. We mention this because the PSD model-based representation of the density vanishes at infinity. We will clarify. The confusion regarding the transpose in the RKHS and $\eta$ will be clarified. Line 777: For our case, this is just the supremum of $f$ in the set $\mathcal{X}$. Line 873: We will fix that. That would be just "probability". Line 891: $f(\delta)$ is $k(\delta)$. Other minor confusion regarding $\tilde{p}$ and $f$ will be fixed. **Contributions and Experiments**: There is a significant lack of literature on solving the (fractional) Fokker-Planck equation and sampling from the solution. Our framework not only addresses this problem but also allows for the approximation of the fractional Laplacian acting on a density, which is a distinct research problem. We believe that our work represents a valuable contribution to the field and has the potential to inspire further research in this area. The theoretical results presented in this paper are non-trivial and come with worst-case guarantees, which makes them stand out from methods that rely on stronger assumptions. Our findings are particularly significant in situations where traditional assumptions, such as log-Sobolev and Poincare, fail. Despite the absence of real-world experiments, we hope that the reviewer could evaluate our theoretical contributions from the perspective of not requiring strict assumptions. Thank you very much for pointing out a reference for faster SDPs via sketching. This method can certainly be utilized to solve the SDP arising in the PSD model faster.
--- Rebuttal Comment 1.1: Title: Start of the author-reviewer discussion Comment: We hope that our response answers the questions the reviewer asked. Please let us know of any further questions or confusion about this work so that we can address them.
Summary: This paper studies the problem of efficiently sampling from a SDE given a drift function and diffusion tensor. It proposes a solution based on computing a PSD model that satisfies the Fokker-Planck solution associated with the SDE, and then sampling from the resulting PSD model. Strengths: The paper seems to have a lot of technical content, and it is clear that the authors have put in a lot of effort. Weaknesses: This paper is extremely difficult to read. The theorem statements (and the paper overall) need to be drastically simplified to be understandable. The problem doesn't seem to be well motivated, and it's not clear from reading the paper how the solution compares to others in the literature. For instance, the [LCL+21] paper is mentioned in related work, but it's not at all clear how the solution proposed of learning a PSD model compares to their work. There are no experiments to support this idea of fitting a PSD model. Overall, I think this paper needs to be rewritten to be more easily understandable. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Can you provide experimental evidence for the theoretical claims? - Can you provide simplified theorem statements? - Can you provide a more clear comparison with related work? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for spending time reviewing our work. We agree that our paper is on the technical side; however, we suspect that the reviewer might have found the paper difficult to read if the subject is not close enough to their field of expertise. In such a case, we believe that a strong rejection would be an unfair suggestion, as the paper would be clear and understandable, and the contributions would be significant, for its intended audience at NeurIPS. We now address the concerns raised by the reviewer. We also request the reviewer to read our responses to the other reviews for further clarification. We hope that the reviewer could reconsider their decision based on our explanations. **Theorem Statements**: Theorems 2 and 5 are about the approximation error guarantees for approximating the solution of the Fokker-Planck and fractional Fokker-Planck equations with the PSD model. More specifically, the statements say that we get $O(\varepsilon)$ approximation error given enough samples, $m=\tilde{O}(\varepsilon^{-(d+1)/(\beta-2s)})$, where $\beta$ and $s$ are described in the paper. Corollary 1 provides a guarantee on the quality of samples obtained from the PSD model, and the error guarantee is provided in the Wasserstein-1 distance. Theorems 1 and 4 are intermediate approximation results for the infinite-dimensional PSD model. These results are useful in obtaining the final approximation results in Theorems 2 and 5. Finally, the result of Theorem 3 is about approximating the fractional Laplacian operator acting on a probability density represented by a PSD model. Theorem 3 clearly states that while using a PSD model-based representation for a probability density, one can approximate the fractional Laplacian operator (a non-local operator) acting on the density for a wide choice of kernel functions.
Approximating non-local operators is an independent research problem where we show the effectiveness of the PSD-based representation (Theorem 3). **Motivation**: As we described in the introduction of our paper, the main motivation behind this work is to propose an approach that can provably sample from a stochastic differential equation. We take an altogether different approach to address this problem that, to the best of our knowledge, was not explored in the literature before. We propose a two-step procedure. First, we approximate the solution of the (fractional) Fokker-Planck equation (using the PSD model) that corresponds to the stochastic differential equation. In the next step, a sample is drawn from the PSD model. Under our proposed approach, we managed to prove the sampling and approximation results under much weaker conditions on the solution density functions ($\beta$ times differentiable) than what exists in the literature. **Advantage and Comparison**: To the best of our knowledge, we are not aware of any algorithm which can provably sample from the solution of a Fokker-Planck equation in a bounded domain under as weak a condition on the solution ($\beta$ times differentiable) as ours. The convergence bounds for existing sampling algorithms like Langevin require dissipativity, log-Sobolev constants, or similar quantities. For example, very recent works ([Lam21]; [ZL22]) are able to provide convergence only under dissipativity-like assumptions. We make no such assumption and still show that there exists an algorithm that can be used to sample from complex solutions of the (fractional) Fokker-Planck equation in a ball, given the solution is regular enough. If the solution is $d$ times differentiable or more (which is often the case), the dependence on the dimension vanishes up to logarithmic factors. It is clear from the results in Theorems 2 and 5 that the sample complexity result in our work is better than the one reported in [LCL+21].
Also, the solution functions obtained using *Physics-informed neural networks* (PINNs) [RPK19], the *Deep Ritz method* [Y+18], and other existing machine learning methods are not bound to stay positive in the domain and are often not integrable. Since the solution function is a probability density in our case and the PSD-based model satisfies the properties of a density function (section 3.2 and [RC21]), a comparison with results for methods like PINNs and the Deep Ritz method [LCL+21] is not fair, as those methods are not applicable in our case and cannot be used to sample from the solution. We have tried our best to include existing related results from PDEs and sampling in the related work section of our paper. **Experiment and Contribution**: There is a significant lack of literature on solving the (fractional) Fokker-Planck equation and sampling from the solution. Our framework not only addresses this problem but also allows for the approximation of the fractional Laplacian acting on a density, which is a distinct research problem. We believe that our work represents a valuable contribution to the field and has the potential to inspire further research in this area. The theoretical results presented in this paper are non-trivial and come with worst-case guarantees, which makes them stand out from methods that rely on stronger assumptions. Our findings are particularly significant in situations where traditional assumptions, such as log-Sobolev and Poincare, fail. Despite the absence of real-world experiments, we hope that the reviewer could re-evaluate our theoretical contributions from the perspective of not requiring strict assumptions and the novel contributions we make in multiple research problems. --- Rebuttal Comment 1.1: Title: Start of the author-reviewer discussion Comment: We hope that our response to the reviews (all the reviews) provided more clarification on the contributions we made in this work.
Please let us know of any further questions or points of confusion about this work so that we can address them. Please also look at the other reviews and our responses to the questions asked by other reviewers. In light of all the novel contributions we have made in this paper, we would like to request that the reviewer reevaluate the score assigned to this paper. --- Reply to Comment 1.1.1: Title: Reminder: Start of the author reviewer discussion Comment: Please let us know if you have further questions or points of confusion about this work that we can address. If we have answered all of the reviewer's questions, then in light of all the novel contributions we have made in this paper, we would like to request that the reviewer reevaluate the score assigned to this paper.
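The rebuttal above leans on the fact that a PSD model is automatically a valid nonnegative function, unlike PINN or Deep Ritz outputs. A minimal sketch of that property, in our own notation following the general PSD-model construction cited as [RC21] (the feature map $\phi$ and matrix $M$ here are generic placeholders, not the paper's exact objects):

```latex
% PSD model: a feature map \phi and a positive semidefinite matrix M
p(x) \;=\; \phi(x)^{\top} M \,\phi(x), \qquad M \succeq 0 .
% Writing M = (M^{1/2})^{\top} M^{1/2} gives
p(x) \;=\; \bigl\Vert M^{1/2}\phi(x) \bigr\Vert_2^{2} \;\ge\; 0
\quad \text{for every } x,
% so nonnegativity holds by construction, and dividing by
% \int p(x)\,dx (when finite) yields a probability density.
```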
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction
Accept (spotlight)
Summary: The paper brings NeRFs to a new dimension of imaging at transient timescales, enabling the rendering of transient imagery. It incorporates the underlying image formation model of the lidar and recovers improved geometry and conventional appearance when training on a few input viewpoints. The paper also proposes a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar and tests the transient NeRF on the dataset. Strengths: The paper brings NeRFs to a new dimension of imaging at transient timescales and proposes a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar. To deal with the new data type, an HDR-informed loss function and space carving regularization are proposed to further improve performance. The experiments are sound compared with other NeRF-based methods. The full paper has a clear structure and a clear narrative. Weaknesses: The method is only tested in a dark environment because of the single-photon lidar, which limits its use in broader application scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Did you do an ablation study of the HDR-informed loss function and space carving regularization? Can the method work under natural light? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. Please see the above response to all reviewers where we address questions related to the ablation study of the HDR-informed loss and the space carving regularization. We discuss effects of natural lighting below. **kaKw: Scanning under natural light.** Our hardware setup is relatively robust to natural light because we place a laser-line spectral filter (Thorlabs FL532-10) in front of the SPAD, which attenuates ambient illumination by a factor of 10,000. Under indoor lighting, we observe roughly 300–3,000 photon counts per second, depending on the albedo of the target, which is <2% of the detected laser photons (~150,000 counts per second). However, operation in much brighter environments (e.g., outdoors under direct sunlight) would result in non-negligible background counts. In that case, it is possible to use other laser wavelengths (e.g., 1550 nm), where sunlight is heavily attenuated because of absorption by the atmosphere. We will note this in the paper. --- Rebuttal 2: Comment: Thank you for the response. I am satisfied.
Summary: The paper describes a variant of NeRF specially designed for the image formation model of a time-of-flight imaging sensor. In contrast to earlier works using ToF sensors in a neural radiance field setting, the method is not designed to merely take depth maps or point clouds from the ToF sensors as input. Instead, the formulation goes one level deeper and presents a NeRF and image formation model on the basis of the time-resolved photon count histograms that are used as the basis for the final depth measurement by the ToF camera. To this end, the authors propose a time-resolved version of the volume rendering equation for image formation, as well as adapted HDR-aware loss functions for training the neural radiance fields on photon histograms. The approach is tested on simulated data as well as real data captured with a ToF prototype setup. The authors show that, in particular in a sparse-view setting, the new photon-count-based formulation has advantages over depth-supervised or image-based NeRFs. Overall, I think this is a strong paper showing very innovative ideas. I enjoyed reading it. Strengths: Overall, I think this is a strong paper. It introduces quite a few clever ideas on how to adapt the neural radiance fields concept to the image formation model of a time-of-flight camera. This ranges from adapting the image formation model to a time-resolved version, to introducing an HDR-aware loss and a space-carving regularization tailored to the peculiarities of the imaging properties of the employed sensor. To my knowledge, this is the first paper approaching neural radiance fields in this way. As the authors suggest in their discussion, the formulation on the basis of time-resolved photon histograms may enable additional improvements in 3D reconstruction, e.g. of complex shapes, material properties, etc. Weaknesses: Not much to say here.
One point that could have been discussed more is at what number of viewpoints the traditional or depth-supervised approaches catch up. At the moment, advantages of the approach are mostly shown in the sparse-view setting, which is fine and showcases an advantage of the approach. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: see the weaknesses section. One issue with time-of-flight sensors in general is their often non-trivial noise characteristics. Would explicitly incorporating the noise model into the formulation help here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The discussion in the paper is adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. Please see the above response to all reviewers where we discuss training on more viewpoints. We also discuss noise modeling below. **vJ74: Forward model choices–noise modeling**. As we note in the paper, the SPADs used in our hardware prototype follow a Poisson noise model (Eq. 2). Previously, we experimented with a loss function based on the corresponding negative log-likelihood (see, e.g., the formulation in O’Toole et al. [13]). Empirically, we found the L1 loss of the log transients (Eq. 5) to work well while also being simple to implement. However, we think incorporating additional reconstruction priors motivated by Poisson statistics could be a very promising direction for future work (see, e.g., Rapp and Goyal [17]).
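For illustration, the two loss choices discussed in this rebuttal (the L1 loss on log transients that the authors say they use, versus a Poisson negative log-likelihood they say they experimented with) can be sketched as follows. The function names, the log offset, and the synthetic counts are our assumptions, not the authors' implementation.

```python
import numpy as np

def l1_log_loss(pred, meas, eps=1.0):
    """L1 distance between log-transformed transients.

    A sketch of the loss described in the rebuttal (their Eq. 5);
    the offset `eps` for handling zero counts is our assumption.
    """
    return float(np.abs(np.log(pred + eps) - np.log(meas + eps)).mean())

def poisson_nll(pred, meas, eps=1e-8):
    """Negative log-likelihood of measured counts under a Poisson model
    with rate `pred` (constant log-factorial terms in the counts dropped)."""
    rate = pred + eps
    return float((rate - meas * np.log(rate)).mean())

# Synthetic SPAD-like histogram: Poisson counts around a true rate
rng = np.random.default_rng(0)
true_rate = np.full(128, 5.0)
meas = rng.poisson(true_rate).astype(float)
```

Both losses are minimized near the true photon rate; the L1 log version is simpler and, per the rebuttal, worked comparably well in practice.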
Summary: NeRF is a method that has recently become popular for view synthesis. It allows estimation of a 3D scene density and 5D radiance map using a few intensity images and their camera poses. These can then be used to render intensity and depth images from any novel 2D view. This paper extends NeRF to allow estimation of scene density and radiance from “scene transients” instead of intensity/depth images. Previous methods have tried to incorporate depth information into NeRF in the form of point clouds, but the proposed method directly uses raw scene transients (histograms of photon counts) captured by a SPAD sensor. This method can then be used to render scene transients given a novel 2D view. The authors' main contribution is to extend the volume rendering equation of NeRF to produce time-resolved measurements instead of measurements integrated over time. Additionally, their training/rendering takes into account the shape of the laser pulse, the spatial footprint of the laser spot, etc. And lastly, they add modifications to the NeRF objective to ensure their method works well for the high-dynamic-range SPAD data (predict radiances in the double-log space, and apply the loss in the log space). The authors also apply a regularization (space carving) to encourage the density estimates to be sparse (similar to what other NeRF variants have done, e.g. Urban NeRF). The authors also capture a real-world multiview lidar dataset with their hardware prototype. Using this dataset and another simulation-based dataset, they provide quantitative and qualitative comparisons of their method with other baselines for intensity and depth reconstruction from novel views. The results on simulated data show their method’s superior performance. The results on captured data also show that their method is superior, but by a smaller margin (which they attribute to alignment imperfections in the captured dataset). Strengths: 1.
This is a nice extension to the NeRF setup that allows directly leveraging the raw lidar measurements to learn a 5D representation, instead of learning from intensity/depth images, which won't contain all the information available in the raw measurements. The improvement in estimation accuracy is evident in the results/experiments shown. 2. The paper is written very well, with good attention to detail (e.g. modeling assumptions, experiment setup). Background work is explained well. 3. The authors perform exhaustive experiments with baseline methods using both simulated and captured data, and the results look impressive. Weaknesses: From a novelty-of-algorithm standpoint, the contributions of the paper are limited. From what I understood, it is a straightforward application of the NeRF framework to a new domain, i.e., the authors represent the scene density/radiance just like in NeRF; the only difference is that they modify the rendering logic to produce 3D histograms instead of 2D images. The image formation model of SPADs (which is used in the rendering equation) is also not novel. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. You mention that you model the spatial footprint of the laser/sensor spot when rendering the transients. Is that necessary? Does performance deteriorate if you just assume an ideal spot? 2. I'm not convinced about the utility of rendering raw lidar measurements from novel viewpoints. Other than generating simulation data, are there any practical applications? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. Please see the above response to all reviewers where we address questions related to the spatial footprint of the laser/sensor spot. We address other questions related to novelty and applications below. **Zi1e: Novelty.** Our method is the first to apply a NeRF-type approach to multiview rendering of single-photon lidar data. Algorithmically, we extended the NeRF method to properly model a single-photon lidar system, including time-resolved rendering and modeling the temporal response of the system (i.e., laser pulse width and sensor jitter) as well as the footprint of the laser and detector. Further, we demonstrated our approach in practice with a first-of-its-kind multiview single-photon lidar setup and captured dataset. We believe the novel hardware prototype and dataset will be appreciated by the community and inspire follow-on work. **Zi1e: Transient rendering applications**. One compelling application of the method could be generating novel views of lidar measurements for downstream tasks in training autonomous driving or robotics. We also believe that our work is an important first step for the task of transient neural rendering, which could enable advances in material segmentation, recovery of reflectance functions (e.g., BSSRDFs) from sparse views, non-line-of-sight imaging, and free-viewpoint rendering of light propagation and optical phenomena. --- Rebuttal Comment 1.1: Comment: Thanks for your response. It would be great if you could include the motivation for spatial footprint (the depth discontinuity failure case that you mentioned) in the final revision. --- Reply to Comment 1.1.1: Comment: Thank you, reviewer Zi1e. We will include the above motivation in the paper as requested.
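As a rough illustration of the kind of time-resolved rendering this thread describes, the sketch below renders a photon-count histogram for a single lidar ray with two-way (squared) transmittance, inverse-square intensity falloff, and convolution with a temporal pulse shape. This is a schematic under our own assumptions, not the paper's exact Eq. 3, and all names are ours.

```python
import numpy as np

def render_transient(sigma, radiance, z, pulse, dt=1.0):
    """Schematic time-resolved volume rendering for one lidar ray.

    sigma, radiance : density and radiance sampled at distances z along the ray
    pulse           : discretized temporal impulse response (laser + sensor jitter)
    Returns a photon-rate histogram indexed by round-trip time bin.
    """
    dz = np.diff(z, append=z[-1] + (z[-1] - z[-2]))
    alpha = 1.0 - np.exp(-sigma * dz)                           # per-sample opacity
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # one-way transmittance
    # Two-way transmittance (squared) and 1/z^2 falloff for active illumination
    w = (T ** 2) * alpha * radiance / np.maximum(z, 1e-6) ** 2
    # Bin contributions by round-trip time, then convolve with the pulse shape
    bins = np.round(2.0 * z / dt).astype(int)
    hist = np.zeros(int(bins.max()) + len(pulse))
    np.add.at(hist, bins, w)
    return np.convolve(hist, pulse)[: len(hist)]
```

A single opaque surface at distance `z0` produces a histogram peak near time bin `2*z0/dt`, which is the round-trip structure the lidar measures.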
Summary: This work proposes a novel-view synthesis method for active sensors (single-photon LiDAR sensors) based on the Neural Radiance Field formulation. To this end, the volume rendering formulation for active sensors is derived, properly taking into account the measurement formation process (two-way distance, intensity fall-off) of LiDAR sensors. Along with that, a new loss function for empty-space carving is proposed. The proposed method is implemented based on the INGP NeRF model and evaluated both on a synthetic dataset of transient multiview scans (one of the contributions of this work) and on real-world data that was captured using a prototype single-photon LiDAR system. In these experiments, the proposed method outperforms SoTA NeRF methods (also with depth supervision) across all metrics. Strengths: - Original and important problem formulation. Novel-view synthesis for active sensors is an under-researched but very important problem. - A volume rendering formulation for active sensors that properly takes at least some of the properties of the measurement formation process into account. These include the two-way path when computing the transmittance, intensity falloff, and beam divergence. - A simulated dataset of transient multiview scans that will be made publicly available (along with the scripts used to generate it) - A dataset of real-world captures that were acquired using a prototype single-photon LiDAR which, if I understand correctly, will also be made publicly available Weaknesses: There are two main weaknesses in my view: - **Clarity**: The clarity of the paper could in my opinion be improved, and the main confusion stems from including the RGB images from simulated data in Figure 1 and using **c** to denote number of photons (?) in Figure 2.
I might have misunderstood something, but if I am not wrong, the actual LiDAR system can only measure the photon count and depth (time-of-flight), and the volumetric rendering formulation should be tied to that. However, in Equation 3 the radiance is a vector quantity (RGB color?), same as in L150? The simulated data seems to be full RGB (what is the reason for this?); does that mean that for the simulated data the evaluation is actually similar to the original NeRF (PSNR and LPIPS computed over the RGB images?). - **Experimental evaluation**: In my perception, the experimental evaluation seems somewhat biased. The proposed method is compared to the following baselines *INGP*, *DS-NeRF* and *UrbanNeRF*, which are supervised through color/intensity supervision (and *DS-NeRF* and *UrbanNeRF* also using the depth). However, the depth supervision only constrains the integral over the density, and the supervision signal might even be very noisy (log-matched filter outputs in the case of real-world data are usually very noisy). On the other hand, the proposed method is supervised with the full histograms per ray (before integration), which provides additional supervision and helps to constrain the empty space. This is especially important in the low-view setting that was for some reason selected in the evaluation (2-5 views on synthetic data?). I am wondering why the low-view setting was selected for simulated data? In my opinion this accentuates the bias, as it emphasizes the differences in the supervision signal between the methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What exactly does **c** denote in Equation 2 and L150? - I like that the formulation tries to follow the measurement formation process, but currently only the intensity fall-off and round-trip are considered. Would it make sense to add at least the effects of the incidence angle?
- Similar to the comment above: why does the simulated data simulate RGB, and why does the evaluation consider the low-view setting? - I also like that the beam divergence is modeled through sampling multiple rays (L180), but it would be good to clarify at which point in the volume rendering formulation the contributions are averaged. - The depth of the proposed method is obtained as the argmax across the histogram; however, for the baselines the integral along the ray is used. Does that yield better results than taking the argmax of the volume density profile? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed in the conclusion section, while broader societal impacts are not (but I also don't believe that there are any significant ones). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. Please see the above response to all reviewers where we address questions related to the experimental evaluation, effects of incidence angle, consideration of the low-view setting, and the depth evaluation. We address other questions below. **tsWB: Will the dataset be made public?** Yes, both the captured and simulated datasets will be made public upon publication of the paper.  **tsWB: Clarification on dimensionality of symbol “$\mathbf{c}$” in rendering equations.** The lidar system measures photon count histograms. We simulated RGB photon count histograms, which could be captured in practice using an RGB laser. However, in our experimental setup, we only had access to a single-wavelength picosecond laser (green; 532 nm), and so our captured results have a single color channel. Hence, in our model, the dimension for $\mathbf{c}$ (i.e., the radiance output by the neural representation at each point along a ray) is different when applied to simulated and captured data. Specifically, in the simulations we use $\mathbf{c}\in\mathbb{R}^3$, whereas in the captured data $\mathbf{c}\in\mathbb{R}^1$. **tsWB: Clarification on evaluation metrics**. For evaluation on simulated data, we chose to use three color channels (RGB) and selected the evaluation metrics (LPIPS, PSNR) to be consistent with previous work (Instant NGP, DS-NeRF, Urban NeRF). To generate 2D images from the ground truth photon count histograms (or histograms rendered by the proposed method), we integrate the histograms over the time dimension and apply normalization and gamma correction (L259–262, L270–282). We also evaluated the depth estimates produced by our method as described on L182–185. **tsWB: Clarification on spatial footprint modeling.** The weighted summation is done as the last rendering step by averaging the corresponding time bins within the photon count histograms produced by each ray for a given pixel. 
We will clarify this in the revision. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I would like to thank the authors for providing a detailed response. Some of my concerns were addressed, but I would still like to note a couple of things. **Clarification on dimensionality of symbol in rendering equations.** It would be good to make this clear in the paper (for other scalars you use non-bold font, so this might be confusing to others as well). If I understand things correctly (it might be that I don't, I am not an expert on the photon lasers), the photon count output of your network is strongly correlated with the density (modulo the differences in material and translucence). It would be good to discuss this in the paper and provide some histograms of both density and photon count ($\textbf{c}$) outputs of your network. **Comparison to baselines.** I would respectfully disagree that the comparison to the baselines is completely apples-to-apples. While the raw data is indeed the same, the baseline methods only have access to the data after the log-matched filter, which reduces a histogram to a single value representation (obviously a lot of information is lost here). The histogram data is especially useful in the sparse-view setting as it provides **spatial supervision** that helps to constrain the solution (empty space supervision) which other methods don't have access to. I would even go as far as to argue that the main insight ***raw measurements provide better supervision*** was "known/suspected" before. Baseline methods do not use spatial data as it is simply not available from most consumer-grade LiDAR systems (full-waveform LiDARs are rare in robotics, AV and other typical use cases). In fact, due to the missing spatial data, previous methods try to reason about it using heuristic observations, e.g.
UrbanRadianceFields introduce line-of-sight priors about empty space and center a Gaussian at the measured depth to obtain additional supervision. Do not get me wrong, I think that this work is good, and the insights are interesting, but I would wish that they would be presented in a bit more "humble" (for lack of a better term) way, and that the availability of such data to prior methods and its effect on the evaluation would be discussed. I am convinced that the previous works would also try to use full histogram data if it were available to them (their incorporation of manually defined spatial priors supports this). Nevertheless, the proposed formulation is interesting and might spark new ideas in the future. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for taking the time to respond. **Clarification on dimensionality of symbol “$\mathbf{c}$” in rendering equations.** We will clarify the dimensionality of $\mathbf{c}$ (the radiance predicted by the network) in the paper. Indeed, the time-resolved output of the network depends on both the density and the radiance, and we will show some plots in the revision to visualize how these quantities compare to the rendered output along the ray. **Comparison to baselines.** We agree with the reviewer that our method differs from the baselines in terms of using point cloud data vs. photon count histograms. In the rebuttal we intended the “apples-to-apples” wording to emphasize that the evaluation of the proposed method uses the same photon count histograms as are used to estimate the point clouds used for the baselines. Since previous methods do not have access to histogram data, we used the “Urban-NeRF-M” baseline to explore performance of point cloud-based methods with additional spatial priors—here we augmented previous work with ground truth segmentation masks to facilitate space carving.
While it’s perhaps intuitive that using the raw measurements should provide some improvement over using point cloud data alone, it was not obvious to us a priori how significant the benefits would be. Moreover, implementing time-resolved NeRF supervision using photon count histograms was not trivial and required accounting for additional factors (e.g., laser/sensor footprint, system temporal response, building the hardware prototype, etc.). Additionally, we believe the proposed work provides value because it quantifies the improvements from using raw lidar data in the context of NeRF reconstruction. The reviewer is also correct that most conventional lidars do not output full waveform data—while they typically use fast avalanche photodiodes to capture measurements, and they initially capture the full lidar waveform, the data are preprocessed to point cloud format before output. We will discuss the fact that raw lidar data were not readily available to previous methods. We hope that our dataset of multiview photon count histograms will provide more opportunities to investigate 3D reconstruction using raw lidar data.
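The histogram-to-image evaluation pipeline the authors describe (integrate the photon-count histograms over the time dimension, normalize by the maximum across all views, then gamma-correct) can be sketched as follows; the function name, array layout, and gamma value are our assumptions.

```python
import numpy as np

def histograms_to_images(transients, gamma=1.0 / 2.2, eps=1e-8):
    """Collapse per-view photon-count histograms of shape (V, H, W, T) to images.

    Integrates over the time dimension, normalizes by the maximum
    *across all views* (per the rebuttal's corrected protocol),
    then applies gamma correction.
    """
    images = transients.sum(axis=3)            # integrate over time bins
    images = images / (images.max() + eps)     # normalize across all views
    return np.clip(images, 0.0, 1.0) ** gamma  # gamma correction
```

With RGB histograms, an extra trailing channel axis would pass through unchanged; the key point is that normalization is shared across views rather than computed per view.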
Rebuttal 1: Rebuttal: We thank all reviewers for their helpful suggestions and insightful questions.  Our work brings NeRFs to a new dimension of imaging at transient timescales, enabling rendering of photon count histograms from novel views for the first time. Further, we show that supervision on photon count histograms enables improved reconstruction of geometry and novel view synthesis compared to point cloud-based supervision when training on few input viewpoints. Our first-of-its-kind multiview dataset of photon count histograms (including both simulated and captured data) will be made publicly available upon publication. We address most reviewer concerns in this shared section of the rebuttal and respond to other individual concerns as separate responses to each reviewer. Please see the attached rebuttal PDF for Tables R1–R6. **Adjustment to simulated results.** After submission, we noticed a slight inconsistency with the pre-processing of the simulated dataset which we have now corrected. Specifically, we created the RGB images used for training by summing the simulated photon count histograms across the time dimension and normalizing by the maximum value _per view_. We reprocessed the simulated results to use normalization _across all views_ (consistent with the captured results, L230-231). We include the updated results in Table R1 of the rebuttal PDF (compare to Table 1 of the paper). The overall trends and qualitative results are still consistent with what we previously reported, and our method still outperforms all baselines in terms of novel view synthesis and depth estimation. We will use these results in the revision unless reviewers object. Tables R2 and R3 also use normalization across all views. **tsWB, vJ74: Number of views.** The paper focused on results for sparse views (i.e., 2, 3, and 5 views) since this is the regime where depth supervision improves the most over only using 2D supervision.
We tried using 10 simulated training views sampled at equal angles around the Lego scene and our approach still outperforms the baselines when evaluated on the same test views (see Table R2). If reviewers request, we can include results with even more training views in the revision. **tsWB: Clarification on experimental evaluation.** The reviewer notes that the depth supervision used by previous methods (DS-NeRF, Urban NeRF) only constrains the integral of the density based on point cloud data (though Urban NeRF also includes point cloud-based space carving losses, see their Eq. 15).  It turns out that supervision using the raw lidar data (i.e., photon count histograms) provides better performance, especially for reconstruction from few input views; **this is a key insight of the paper**. The evaluation uses fair, apples-to-apples comparisons based on the same input photon count histograms. That is, for DS-NeRF and Urban NeRF, we estimate point clouds by applying the constrained maximum likelihood estimate (i.e., log-matched filter) to the photon count histograms. The proposed method uses those same histograms, and we show that supervision with raw data outperforms using pre-processed point clouds in the sparse-view setting considered by DS-NeRF and Urban NeRF. **Importantly,** **this result improves over the standard practice of preprocessing raw lidar data to point clouds.** **tsWB: Depth evaluation.** DS-NeRF and Urban NeRF calculate depth differentiably using the integral along the ray (i.e., the expected ray termination distance) and incorporate this in their loss functions. Since the baseline methods are supervised with this depth estimate, we opted to use the same approach for evaluation. For completeness, we provide an updated version of the L1 depth in Table R3 calculated for all methods using the maximum ray termination probability (i.e., using argmax; see L183). 
Baseline performance using argmax is indeed improved for most of the methods, though Transient NeRF still performs best.  **tsWB: Normals in forward model**. Estimating the normals requires a noisy finite-difference operation on the predicted density values, which we found results in speckle-like artifacts in the novel views. Instead, we model the effects of incidence angle by conditioning the neural representation on view direction. See Table R4, which ablates using the normals to add cosine factors to the rendering equation (Eq. 3) for the Cinema scene. **Zi1e: Ablation–laser spatial footprint.** When the laser spot and sensor footprint pass over a depth discontinuity, we observe two peaks in the resulting photon count histogram (corresponding to the two depths across the discontinuity). Assuming an ideal spot (i.e., using a single ray) completely fails to model this effect. Training and rendering the Lego scene (which has many depth discontinuities) with the ideal spot model results in much worse performance across novel views (see Table R5). **kaKw: Ablation–HDR-informed loss function.** We provide an ablation study of the HDR-informed loss function in Table R6 on the Chef and Food scenes. Incorporating this loss provides some improvement by preventing very bright regions from dominating the loss. Moreover, in the case of 5 input views, performance without the HDR-informed loss drops due to specular highlights appearing in one view, but not an overlapping nearby training view (e.g., on the globe in the Chef scene or the bag of chips in the Food scene). Without the HDR-informed loss, the network has difficulty modeling the large variation in radiance between views, and so the optimization produces spurious patches of density to model this view-dependent effect (despite regularization with the space carving loss). **kaKw: Ablation–space carving.** An ablation study of the space carving regularization on the Cinema scene is in Supp. 
Table 2 (reproduced in Table R4). Without space carving regularization we find that spurious clouds of density appear in empty regions, worsening performance across most metrics. Pdf: /pdf/44ce32abe828133d5caa768c5538ea6df36040c3.pdf
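The two depth estimators compared in this global rebuttal (the differentiable expected ray-termination depth used to supervise DS-NeRF/Urban NeRF, and the argmax over the ray-termination distribution used for the Table R3 evaluation) can be sketched as follows; the names and the normalization are ours.

```python
import numpy as np

def expected_depth(weights, z):
    """Differentiable depth: expected ray-termination distance (baseline style)."""
    w = weights / (weights.sum() + 1e-12)
    return float((w * z).sum())

def argmax_depth(weights, z):
    """Depth at the maximum ray-termination probability (argmax style)."""
    return float(z[np.argmax(weights)])
```

The difference matters for multimodal termination distributions (e.g., at depth discontinuities): argmax snaps to the dominant mode, while the expectation averages across modes and can land between surfaces.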
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Method
Accept (oral)
Summary: The paper presents a thorough consideration of the parametrization of optimizable solution generators that allow using gradient descent to find solutions for combinatorial optimization problems. They find a set of assumptions that enable complete, compressed, and efficiently optimizable representations, in particular the existence of feature maps of solutions and instances onto bounded (norm, diameter, and dimensionality) and variance-preserving feature spaces which allow for a bilinear cost oracle $c(I,S)=f_I^T M f_S$ for instance/solution features $f_I/f_S$ and a matrix $M$ s.t. $\Vert M \Vert_F$ is bounded by a constant $C$. The implications of this result (including the existence of such parametrizations for TSP) are discussed, notably that the existence of such generators does not mean P=NP, since sampling from the generator might still take exponential time. The theory is developed on max/min cut, max-k-CSP, max-weight BP matching, and TSP, and experimentally evaluated on max-cut. Strengths: Originality: I love this paper for taking the obvious-in-hindsight question seriously and actually asking what types of representations make combinatorial optimisation via first-order methods work on a *principled* basis. Clarity: The paper is superbly written and conveys its ideas and limitations well. Quality: while I did not have the time to carefully check every proof, on a cursory reading they appear to make sense and align well with intuitions. Significance: I think this paper will completely change how the deep learning community thinks about dealing with combinatorial optimization (or at least I hope so). Weaknesses: I cannot point to any. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: I think everything was addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very encouraged by the very positive feedback of the reviewer and their excitement about our work and its potential impact. We are also very excited about our work and believe that it will initiate a principled theoretical study of neural combinatorial optimization whose theoretical insights will be of practical significance. --- Rebuttal Comment 1.1: Title: Acknowledging rebuttal Comment: Commenting to confirm I have read the rebuttals and other reviews and will digest and engage in discussion.
Summary: The paper provides a theoretical analysis of policy-gradient methods for combinatorial optimization problems. It defines a set of three desirable properties one might wish such a method to fulfill, namely being able to generate an approximately optimal solution, being small in size, and enabling efficient optimization via SGD. The main result of the paper is to prove that there exists a policy gradient method satisfying all three properties simultaneously. The proposed method is general enough to capture many well-known combinatorial optimization problems. A small empirical study is provided, too. Strengths: I think the idea behind this work is great. Theoretical foundations for reinforcement learning methods applied to CO problems are highly needed. Defining a set of desirable properties and analyzing whether and how they can be satisfied simultaneously seems to be the right approach. Weaknesses: Since I believe that the contributions of this paper are great, I feel really sorry that I cannot give a more positive evaluation at the moment. I try to give as much constructive feedback as possible and encourage the authors to revise the manuscript and, if not accepted here, resubmit to another top venue. My main concern is: - I have a few mathematical issues / questions, where I do not understand the paper. I think one source of these issues is that the model of computation is not well-defined. The authors wildly switch between bit-representations and real models of computation. To solve these issues, the authors should clearly define in which model of computation they work and make all assumptions / requirements / statements consistent with this model. There are a few more unrelated issues, see my more specific questions in the "Questions" section. 
Apart from that, I have a whole bunch of secondary comments, see below: - I think the paper (implicitly and sometimes explicitly) oversells the contribution of [BPL+16] to amplify the motivation for their own work. So far, reinforcement learning is NOT a state-of-the-art method for combinatorial optimization problems like TSP. The work of [BPL+16] is great as a proof of concept, but statements like "[BPL+16] [...] generate very good solutions for (Euclidean) TSP instances" without mentioning that these are very small toy problems should be avoided. I think the fact that NNs for CO are still in their beginnings should be pointed out more clearly and the motivation of this paper should build upon a larger variety of prior work than only [BPL+16]. - line 103: I suppose you want to refer to Def. 2 and not to Qu. 1 here (in particular, because otherwise you refer to Qu. 1 before even stating it). - line 143: I find it more natural to write this with $L(s,I)$ instead of $\mathcal{O}(s,I)$. I understand that this is the same by definition, but here the focus is more on the cost structure and less on the fact that such an oracle exists. Maybe one could even drop the notation $\mathcal{O}(s,I)$ everywhere in the paper and use always $L$? In Definition 1, one could instead just write in words that such an oracle exists. - line 196: I feel that the split into supervised and unsupervised models here is very artificial because papers in both categories apply sophisticated algorithms mixing very different paradigms. I doubt that the negative result [YGS20] is directly applicable to the settings of the three papers you cite for the supervised case. I suggest not to artificially split the related work into these two categories. - Related work: even though of a very different flavor, I think the following two papers about the theoretical ability of neural networks to solve CO problems should be discussed in the related work section: Hertrich, C., & Skutella, M. (2023). 
Provably good solutions to the knapsack problem via neural networks of bounded size. INFORMS Journal on Computing. AND: Hertrich, C., & Sering, L. (2023). ReLU neural networks of polynomial size for exact maximum flow computation. In International Conference on Integer Programming and Combinatorial Optimization. - line 219: please either provide a proof or a reference for the reformulation of the MaxCut problem via the Laplacian. - line 241: say that Thm. 4 is in the appendix. - line 246: a unknown -> an unknown. Also the whole sentence is a bit hard to read, maybe split into two? - line 286: "the point $\bar{W}=-\tau M$ when $\tau\rightarrow+\infty$" is not a point. What you write here is not mathematically precise, even though I understand what you mean. Consider revising. - lines 323/324: I recommend a comma between "fast" and "making". - line 344: there is a "to" too much (between "we" and "pick"). - line 382: you should not assume that everyone knows what $G(n,p)$ is. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - line 73: why is this the gradient? Please provide some explanation. In particular: where does the logarithm come from? - line 75: what do you mean by "using only access to a solution cost oracle"? Do you want to point out that something particular is not used? If so, what? I suppose you also need to be able to sample $I$ and $s$ according to their respective distributions, and you need to be able to compute the gradient of $log(p(s;I;w))$ w.r.t. $w$. - line 96: what is the "description size of the parameter space $\mathcal{W}$"? Is it the number of bits required to represent any parameter $w\in \mathcal{W}$? But then, $\mathcal{W}$ is a finite set, so how can you ever perform gradient descent on it? - line 99: why do you allow running time polynomial in $1/\epsilon$, but the parameter space must be polynomial in $\log(1/\epsilon)$? 
- line 107: I do not understand what "the full parametrization of all distributions over the hypercube" is. There are uncountably many such distributions, so how can you ever represent them in a set $\mathcal{W}$ of finite description size? - Remark 5: Is this a formal statement or an intuitive statement? I really would like to see more details about how exactly the cited paper implies a negative result for neural network solution samplers in the setting of your paper. - Remark 6: What if I WANT to encode the weights into the instances? I suppose you would call them "known" weights then. But it is less clear how to define the feature maps then and still preserve bilinearity. I do not see why what you call "unknown" weights is the more general / difficult case. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: In principle, the authors provide all details required to understand the scope of their results. The discussion on limitations could be improved by answering the following two questions: - Are there natural combinatorial optimization problems to which the framework is not applicable? If so, which problems, and why? - It seems like the proposed method is theoretically superior to what people use in practice. Is this true? And if so, why is it not the standard method in practical settings nowadays? Do you expect it to become that? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
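The reviewer's question about line 73 ("where does the logarithm come from?") concerns the standard log-derivative (REINFORCE) identity behind policy gradients, $\nabla_w \mathbb{E}_{s \sim p_w}[c(s)] = \mathbb{E}_{s \sim p_w}[c(s)\,\nabla_w \log p_w(s)]$. A minimal numerical check on a toy Bernoulli sampler (our own illustration; the function names and cost values are assumptions, not from the paper):

```python
import numpy as np

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def reinforce_gradient(w, c1, c0, n_samples=200_000, seed=0):
    """Estimate d/dw E_{s ~ Bernoulli(sigmoid(w))}[cost(s)] via the
    log-derivative trick: E[cost(s) * d/dw log p_w(s)]."""
    rng = np.random.default_rng(seed)
    p = sigmoid(w)
    s = (rng.uniform(size=n_samples) < p).astype(float)
    # For p_w(s) = p^s (1-p)^(1-s) with p = sigmoid(w):
    #   d/dw log p_w(s) = s*(1-p) - (1-s)*p = s - p   (the "score")
    costs = np.where(s == 1.0, c1, c0)
    return np.mean(costs * (s - p))

# Closed form for comparison: E[cost] = p*c1 + (1-p)*c0 and dp/dw = p*(1-p),
# so the true gradient is p*(1-p)*(c1 - c0).
w, c1, c0 = 0.3, 3.0, 1.0
p = sigmoid(w)
analytic = p * (1.0 - p) * (c1 - c0)
estimate = reinforce_gradient(w, c1, c0)
```

Only black-box evaluations of the cost enter the estimator, which is the point of the "solution cost oracle" remark in line 75.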
Rebuttal 1: Rebuttal: We would like to thank the reviewer (i) for finding the contribution of our paper great and for providing extensive constructive feedback, (ii) for pointing out various typos in the current draft, which we will fix in our first revision, and (iii) for proposing additional references and suggestions for the Related Work section; we have added these papers for a more complete and detailed exposition. We continue by explicitly addressing the reviewer's questions. > I think the paper (implicitly and sometimes explicitly) oversells the contribution of [BPL+16] ... The work of [BPL+16] is the (experimental) work that introduced the framework of Neural Combinatorial Optimization. Our work is the first theoretical approach to design a rigorous and principled way to study and argue about the fundamental questions behind this problem. Hence, we chose to design our theoretical framework on top of this (arguably) well-accepted work. We do not claim that the experimental results of [BPL+16] are SOTA. However, we find that this work is the simplest and most natural setting to go from a completely practical to a rigorous theoretical framework. Hence, our work does not build on the experimental contribution of [BPL+16] (as the reviewer correctly mentions, there are various follow-up works that improve on this front) but builds on the conceptual contribution of [BPL+16], providing the first formal theoretical guarantees. We will clarify the fact that the results of [BPL+16] are for smaller instances. > Line 219 This is standard, see e.g., Spielman (2010). We will add a reference, as the Reviewer proposes. [Spielman, Algorithms, graph theory, and linear equations in Laplacian matrices, 2010] > Line 73 This is the standard expression for policy gradient; for the reference see e.g., Sections 2 and 5 in the paper of Kakade (2001) as we mention in our manuscript, and for a proof see the book by Szepesvári (2022). 
[Kakade, A natural policy gradient, 2001] [Szepesvári, Algorithms for reinforcement learning, 2022] > Line 75 One does not need access to an explicit representation of the cost function (e.g., the Laplacian matrix of the Graph in the Max-Cut problem) but only black box access to the cost oracle (i.e., the values of the function). > Model of Computation > > Line 96 Gradient descent is not explicitly performed on a finite set – it is just the rounding of the iterates (due to finite memory in its implementation) that implicitly makes the set discrete. Our description size captures exactly this finite precision issue. Say we require $d$ parameters for the parameter space of gradient descent; then a description size of $O(d \log(1/\epsilon))$ would mean that when you implement gradient descent you should use $O(d \log(1/\epsilon))$ bits for the representation of its iterates (roughly $\log(1/\epsilon)$ bits for each parameter). Since the loss that we use is $\mathrm{poly}(d)$-Lipschitz with respect to the parameter $w$ (assuming that $w$ belongs in the continuous space), rounding the iterates of Gradient descent to $O(d\log(1/\epsilon))$ bits does not introduce substantial error. We performed the analysis of gradient descent in the real-valued model for simplicity; we will add a remark on this in our updated manuscript. > Line 99 As it is standard in the optimization literature (see e.g., Bubeck (2015)), we cannot hope for truly polynomial dependence on $1/\epsilon$ unless the objective enjoys some special structure such as strong convexity. To this end, we settle for the natural $\mathrm{poly}(1/\epsilon)$ dependence. In contrast, we stress that the parameter space should be of truly polynomial size, i.e. polynomial in $\log(1/\epsilon)$. [Bubeck, Convex optimization: Algorithms and complexity, 2015] > Line 107 Each point of the hypercube corresponds to a cut in the graph. 
The full parametrization corresponds to assigning a single parameter to each possible cut (of which there are exponentially many). The description is finite due to the finite precision requirement (see also our response to the previous question regarding finite precision). > Remark 5 As our wording of Remark 5 shows, this is a high-level/intuitive statement. The fact that optimizing (even simple) two-layer neural networks is computationally intractable indicates that end-to-end optimization of deep solution samplers that can generate samples efficiently (in polynomial time) is most likely impossible. For more evidence we remark that having an efficient (and "small", i.e., with a polynomial number of parameters) neural network sampler that can also be provably optimized via gradient descent in polynomial iterations would imply a polynomial-time algorithm for the problems considered in this work (e.g., for Max-Cut), which is a well-known NP-hard (even to approximate) problem. > Remark 6 Let us consider the weighted Max-Cut problem, with known weights. In this case, one can take the weighted Laplacian matrix and again express the loss function in the exact same form as in the unweighted case. Then the feature mappings are exactly the same. > Are there natural combinatorial optimization problems to which the framework is not applicable? If so, which problems, and why? We kindly refer to our response to Reviewer VMa2. > It seems like the proposed method is theoretically superior to what people use in practice. Is this true? And if so, why is it not the standard method in practical settings nowadays? Do you expect it to become that? Yes, our method is theoretically superior to the vanilla objective as the additional regularization and the fast/slow mixture generators help avoid minimizers at infinity and vanishing gradients (see Section 3 of our manuscript). 
Our preliminary experimental evaluation indicates that some of our theoretical insights (entropy regularization, fast/slow mixing) lead to better performance. We also kindly refer to our response to Reviewer jXn4. --- Rebuttal Comment 1.1: Comment: I thank the authors for their extensive response to my review and the raised questions. Trusting that the authors will incorporate the promised changes, I am increasing my score from 4 to 6.
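The Laplacian reformulation of Max-Cut referenced in the rebuttal (the response to line 219) is the standard identity that, for a cut encoded by $x \in \{-1,+1\}^n$, the cut weight equals $x^\top L x / 4$, since $x^\top L x = \sum_{i<j} w_{ij}(x_i - x_j)^2$. A minimal numerical check (our own sketch; the helper names are not from the paper):

```python
import itertools
import numpy as np

def laplacian(W):
    # Graph Laplacian L = D - W for a symmetric weight matrix W
    return np.diag(W.sum(axis=1)) - W

def cut_value(W, x):
    # Total weight of edges crossing the cut encoded by x in {-1,+1}^n
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# Random weighted graph on 5 nodes; verify cut(x) == x^T L x / 4 for all cuts
rng = np.random.default_rng(0)
W = np.triu(rng.uniform(0.0, 1.0, size=(5, 5)), 1)
W = W + W.T
L = laplacian(W)
for bits in itertools.product([-1.0, 1.0], repeat=5):
    x = np.array(bits)
    assert np.isclose(cut_value(W, x), x @ L @ x / 4.0)
```

The same check works verbatim for the weighted case mentioned in the response to Remark 6, since only the entries of `W` change.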
Summary: This paper proposes a theoretical framework for analyzing the effectiveness of deep models trained by gradient-based methods as solution generators for combinatorial problems. The authors first investigate the complete, compressed and efficiently optimizable properties of the solution generator on many combinatorial tasks. To address the challenges of minimizers at infinity and vanishing gradients, the authors devise an entropy regularization and a fast/slow mixture generation scheme. Experiments demonstrate that the proposed method helps address vanishing-gradient issues and escape bad stationary points. Strengths: 1. The paper gives a positive answer to the existence of complete, compressed and efficiently optimizable solution generators for combinatorial tasks, and it designs a family of such solution generators. 2. The paper addresses the challenges of minimizers at infinity and vanishing gradients by the proposed entropy regularization and fast/slow mixture generation scheme. 3. A general and solid foundational theorem (Theorem 1) is clearly presented. Weaknesses: 1. The authors may want to conduct experiments on more combinatorial problems with larger scales to demonstrate the effectiveness of the proposed method. 2. The authors analyze the existence and properties of the feature mappings for the solutions and instances, but how to learn such mappings remains unexplored. 3. The authors apply an MLP to combinatorial problems with a fixed number of graph nodes. However, MLPs fail to process instances with varying scales. The authors may want to consider more suitable models such as the graph neural network (GNN). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Could you please give some examples of combinatorial problems that do not satisfy Assumption 1? 2. Could you please provide more details on the representations of input data, such as the input features and data format? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Assumption 1 might be difficult to check for a general combinatorial problem, which may limit the applicability of the theoretical results in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive feedback and the provided questions. > *The authors may want to conduct experiments on more combinatorial problems with larger scales to demonstrate the effectiveness of the proposed method.* *Response:* We kindly refer to our general response about experimental evaluation on large instances for Max-Cut. As a direct future direction we aim to extend our experiments to other interesting combinatorial problems. > *The authors apply MLP to combinatorial problems with a fixed number of graph nodes. However, MLPs fail to process instances with varying scales. The authors may want to consider more suitable models such as the graph neural network (GNN).* *Response:* We agree with the reviewer that GNNs are more suitable for such tasks and plan to use them in future experimental evaluation of our work. Since our current work is primarily theoretical, we tried to keep our simulations as simple as possible (we do not aim to improve over SOTA methods for NCO). > *Could you please give some examples of combinatorial problems that do not satisfy Assumption 1?* *Response:* An interesting combinatorial problem not captured by our framework is SAT. Given a combinatorial problem, Assumption 1 essentially asks for the **design** of feature mappings for the solutions and the instances that satisfy desiderata such as boundedness and variance preservation. Max-Cut, Min-Cut, TSP and Max-$k$-CSP and other problems satisfy Assumption 1 because we managed to design appropriate (problem-specific) feature mappings (see Section 2) that satisfy the requirements of Assumption 1. There are interesting combinatorial problems for which we do not know how to design such good feature mappings. 
For instance, the "natural" feature mapping for the Satisfiability problem (SAT) (similar to the one we used for Max-$k$-CSPs) would require a feature dimension exponential in the size of the instance (we need all possible monomials of $n$ variables and degree at most $n$) and would therefore violate item 4 of Assumption 1. > *Could you please provide more details on the representations of input data, such as the input features and data format?* *Response:* At each iteration, the input is an instance (e.g., a graph). A potential representation of the graph is the Laplacian matrix (e.g., in Max-Cut). Hence, the input features are a collection of Laplacian matrices. We note that the input features depend on the combinatorial problem at hand. > - *The authors analyze the existence and properties of the feature mappings for the solutions and instances, but how to learn such mappings remains unexplored.* > > - *Assumption 1 might be difficult to check for a general combinatorial problem, which may limit the applicability of the theoretical results in this paper.* *Response:* In this work, we present a general framework for establishing the first theoretical understanding regarding challenging and fundamental combinatorial problems. Assumption 1 allows for our framework to be quite general and expressive. The mildness of our Assumption is justified by the number of problems it captures (Max-Cut, TSP, and Max-$k$-CSP). We emphasize that the feature mappings for the instances and the solutions correspond to a design problem; in our work, we designed appropriate feature mappings for Max-Cut, TSP and other combinatorial problems. Hence, it is not exactly the case that Assumption 1 is hard to check; the challenge is to design good feature mappings, which currently is a problem-specific task. Designing principled ways to find these mappings is an interesting direction.
Summary: This paper deals with a significant question at the intersection of combinatorial optimization and continuous-based optimization: is it possible to design solution generators for combinatorial problems that are (1) expressive enough to generate approximately optimal solutions, (2) tractable so that their parameterization is only polynomial in the number of inputs, and (3) efficiently optimizable so that only a polynomial number of stochastic gradient descent (SGD) steps is required to learn a parameterization with almost optimal performance? The paper answers this question in the affirmative. It develops a general framework that can be used to describe complete, compressed and efficiently optimizable solution generators, which can be instantiated to accommodate a wide range of NP-hard problems. The framework requires feature mappings for both the instances and the solutions that have a number of desired properties. Assuming this is the case, the authors provide a very general positive result. The paper highlights two challenges related to the optimization. First, the loss function may admit a minimizer at infinity, so that gradient descent may get stuck at sub-optimal configurations. The authors address this by introducing an entropy regularization term. Second, vanishing gradients may slow down optimization, rendering it inefficient. This is addressed by introducing a mixture of a fast and a slow solution generator: the fast component helps to reach the optimal solution while the slow component helps to keep a non-zero variance throughout the optimization process, hence avoiding the problem of vanishing gradients. The authors provide experimental evidence in favor of their entropy regularizer and the fast/slow mixing scheme. Strengths: 1. The paper is extremely well-written. The authors have done an excellent job presenting in a simple yet rigorous manner the main insights behind their framework and the various design choices. 
Every single choice is adequately justified, which makes it very easy even for non-familiar readers to grasp the main messages of this work. I also liked the fact that the authors took extra care to elucidate some potentially confusing aspects, e.g., with respect to the P vs. NP question. 2. The main result requires very reasonable assumptions and is general enough to encompass several standard hard problems. 3. The two proposed tricks (entropy regularization and fast/slow mixing scheme) are extremely well motivated but also justified theoretically. The ablation study at the end confirms their usefulness empirically. 4. The framework is novel and a major part of its novelty precisely stems from its generality. 5. Even though I was not able to check all math details, the part that I was able to check was free of problems. Weaknesses: 1. I understand that the main focus of this work is to provide a generic framework with reasonable assumptions and strong performance guarantees. I feel that the authors have delivered on that premise. That said, readers may be left with the question: what is the true empirical potential of this framework, especially when we compare it to state-of-the-art combinatorial neural solvers? After reading the paper, it was not really clear to me whether this work is mainly about introducing a generic framework, or if it was also about proving its practical value on various benchmarks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Is this work intended only for theoreticians or for practitioners as well? If the latter is the case, then the authors would need to provide evidence for that claim, e.g., performance of their solution generators compared to other continuous solvers. I feel that this question is not answered in the current paper. 2. On a similar note, the number of nodes n=15 used in the ablation study is small - could these methods scale to larger instances? 
Do the authors have any take-away messages for practitioners who may be interested in trying out such methods? Is it perhaps fair to say that the proposed framework mostly serves as an abstraction but without necessarily powerful practical implications? 3. In the discussion of P vs. NP, the authors explain that sampling from their solution generators may be computationally expensive but add that techniques based on Langevin dynamics may help to circumvent this issue in practice. It would be worthwhile for the authors to see whether such techniques would be of any help, especially with large instances. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive feedback and insightful questions. > - *I understand that the main focus of this work is to provide a generic framework with reasonable assumptions and strong performance guarantees. I feel that the authors have delivered on that premise. That said, readers may be left with the question: what is the true empirical potential of this framework, especially when we compare it to state of the art combinatorial neural solvers? After reading the paper, I was not really clear whether this work is mainly about introducing a generic framework, or if it was also about proving its practical value on various benchmarks.* > > >- *Is this work only intended to theoreticians or to practitioners as well? If the latter is the case, then the authors would need to provide evidence for that claim, e.g., performance of their solution generators compared to other continuous solvers. I feel that this question is not answered in the current paper.* *Response:* In this work, we present a general framework for establishing **theoretical** understanding regarding challenging and fundamental combinatorial problems such as Max-Cut, TSP, and Max-$k$-CSP. Hence, our work is mainly intended for theoreticians and we leave more extensive experimental evaluation of our work as an interesting question for future work. However, our simulations (see also the additional experiments on larger instances provided in this rebuttal) show that some of the insights we obtain as a byproduct of our theoretical proof of convergence may be of practical significance. Moreover, we would like to point out an interesting connection between our theoretical work and a prior experimental paper. The work of Kim et al. (2021) uses an entropy maximization scheme in order to generate diversified candidate solutions. This experimental heuristic is quite close to our theoretical idea for entropy regularization. 
In our work, entropy regularization allows us to design quasar-convex landscapes, while the fast/slow mixing scheme yields diversification of solutions. [Kim, Park, Kim, Learning Collaborative Policies to Solve NP-hard Routing Problems, 2021] >- *On a similar note, the number of nodes $n=15$ used in the ablation study is small - could these methods scale to larger instances? Do the authors have any take-away messages for practitioners who may be interested in trying out such methods? Is it perhaps fair to say that the proposed framework mostly serves as an abstraction but without necessarily powerful practical implications?* >- *In the discussion of P vs. NP, the authors explain that sampling from their solution generators may be computationally expensive but add that techniques based on Langevin dynamics may help to circumvent this issue in practice. It would be worthwhile for the authors to see whether such techniques would be of any help, especially with large instances.* *Response:* We refer to our previous answer and the additional simulations in the general response. In our additional simulations, we implemented our method using approximate samplers based on Langevin dynamics to circumvent this issue. Given that even on larger instances our method outperforms the vanilla objective, we are optimistic that our theoretical insights are going to be of practical significance. --- Rebuttal Comment 1.1: Title: thank you for the responses Comment: I thank the authors for their detailed response. I encourage them to include these clarifications in the final paper version.
Rebuttal 1: Rebuttal: **General Response** We thank all reviewers for taking the time to read our manuscript carefully and for providing constructive and insightful feedback. We are very encouraged by the positive comments of the reviewers on the novelty and significance of our theoretical framework for Neural Combinatorial Optimization (Reviewers jXn4, dPn4), theoretical contributions (Reviewers m1vy, dPn4) and the writing quality and the clarity of the presentation of the ideas (Reviewers jXn4, VMa2, dPn4). We provide detailed responses to each reviewer separately. We look forward to engaging in further discussion with the reviewers, answering questions, and discussing improvements. Before that, we start with a general comment where we provide detailed additional experimental results on large instances. **Additional Simulations - Larger Instances/Approximate Samplers** Our experiments in the paper (which do not need Langevin dynamics for sampling) showed that even for very small instances of Max-Cut (i.e., with 15 nodes), optimizing the vanilla objective is not sufficient and the iteration gets trapped in local optima. In contrast, our method using entropy regularization always manages to find the optimal cut. A natural question raised by the reviewers is whether this improvement is also apparent in larger graphs. We focus on the case of random $d$-regular graphs with $n$ nodes. It is well-known that for this family of graphs, with high probability as $n \to \infty$, the size of the maximum cut satisfies $\mathrm{MaxCut}(G(n,d)) = n(d/4 + P_\star \sqrt{d/4} + o_d(\sqrt{d})) + o(n)$, where $P_\star \approx 0.7632$ is a universal constant [Dembo et al., 2017]. We aim to find a good approximation for the normalized cut-value, defined as $(\mathrm{cut\_value}/n - d/4)/\sqrt{d/4}$, which (roughly speaking) takes values in $[0,P_\star]$. We obtain approximate random samples from the density $e^f$ using the Metropolis-Adjusted Langevin Algorithm (MALA). 
In particular, an approximate sample from this density is obtained by the Euler–Maruyama method for simulating the Langevin diffusion: $x_{k+1} = x_k + \tau \nabla f(x_k) + \sqrt{2\tau} \xi_k$, where $\xi_k$ is an independent Gaussian vector $\mathcal{N}(0,I)$. MALA incorporates an additional step based on the Metropolis-Hastings algorithm (see [Besag, 1994, Song and Kingma, 2021]). In our case, the score function $f$ is a simple 3-layer ReLU network. **In our additional experiments for 3 larger random regular graphs (600 nodes), using the fast/slow mixing technique along with entropy regularization, we see that our method leads to improvements over the vanilla objective.** Plots of the trajectories of the vanilla and our method can be found in the figures provided in the pdf of the rebuttal. On the horizontal axis we plot the iterations and on the vertical axis we plot the normalized cut score of each method (higher is better) -- we stop the plot of the vanilla trajectory after 200 iterations because we observed that its output has fully converged and is stuck. [Dembo, Montanari and Sen, Extremal cuts of sparse random graphs, 2017] [Besag, Comments on “Representations of knowledge in complex systems” by U. Grenander and MI Miller, 1994] [Song and Kingma, How to train your energy-based models, 2021] Pdf: /pdf/0b9ee3c7a0885e797d9bc1356efdeb577a64f9ce.pdf
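The MALA procedure described in the rebuttal can be sketched in a few lines. Below is our own minimal illustration, using a toy quadratic score $f(x) = -\|x\|^2/2$ (so the target $e^f$ is a standard Gaussian) in place of the 3-layer ReLU network the authors used; this is not the authors' implementation:

```python
import numpy as np

def mala_sample(f, grad_f, x0, tau=0.1, n_steps=500, rng=None):
    """Metropolis-Adjusted Langevin Algorithm targeting p(x) proportional to e^{f(x)}.

    Each proposal is one Euler-Maruyama step of the Langevin diffusion,
        x' = x + tau * grad_f(x) + sqrt(2*tau) * xi,   xi ~ N(0, I),
    followed by a Metropolis-Hastings accept/reject correction.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)

    def log_q(x_to, x_from):
        # Log density (up to a constant) of the Gaussian proposal
        # centred at one deterministic Langevin step from x_from
        diff = x_to - x_from - tau * grad_f(x_from)
        return -np.sum(diff ** 2) / (4.0 * tau)

    for _ in range(n_steps):
        prop = x + tau * grad_f(x) + np.sqrt(2.0 * tau) * rng.standard_normal(x.shape)
        log_alpha = f(prop) - f(x) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
    return x

# Sanity check: with f(x) = -||x||^2 / 2, the target is a standard Gaussian
f = lambda x: -0.5 * np.sum(x ** 2)
grad_f = lambda x: -x
samples = np.stack([mala_sample(f, grad_f, np.zeros(2), tau=0.2,
                                n_steps=300, rng=i) for i in range(200)])
```

The Metropolis-Hastings correction removes the discretization bias of the bare Euler-Maruyama step, so the chain targets $e^f$ exactly; for the Max-Cut application, `f` would instead be the learned score network and samples would be rounded to $\{-1,+1\}^n$ cuts.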
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
A Causal Framework for Decomposing Spurious Variations
Accept (poster)
Summary: The paper proposes a novel theoretical framework for decomposing spurious causal effects. For this purpose, the authors introduce the concept of partial abduction and prediction. Here, only a subset of the latent variables in a given structural causal model (SCM) is updated when conditioning on observed evidence. The authors use this concept to derive nonparametric decompositions of spurious effects and provide identifiability results both for Markovian and semi-markovian SCMs. The paper ends with an empirical example using the COMPAS dataset. Strengths: - The paper addresses a relevant problem: decomposing the spurious effect in the presence of multiple confounders. This has potential applications in e.g., causal algorithmic fairness. - Novelty: I am not aware of any other work done in this regard. The authors develop a novel theory for defining such decompositions based on Pearl's SCM framework. - The paper is technically solid and rigorous. Definitions are clear and precise, all assumptions are stated transparently, and proof and illustrative examples for theoretical results are provided. Weaknesses: - My only main concerns are regarding the applicability of the proposed framework in practice. To my understanding, the full (semi-markovian) causal graph needs to be known (including the spurious arrows between the observed confounders) in order to apply the decomposition result from Theorem 3. This may be the case for simple examples like the COMPAS dataset. However, in many applications, confounders are potentially high-dimensional and the causal graph is unknown. - I think the paper would benefit from expanding the experiment section, i.e., by including a second example dataset. Minor points: - Eq. (7): I suggest using the notation $p(Y_x = y)$ so that $y$ on the right-hand side of the equation is well-defined. - Typo in Definition 7: path causal path - Proposition 2: I suggest moving the definitions of TV and TE from Appendix A1 to the main paper. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - It seems like the decomposition in Theorem 1 depends on the chosen topological ordering of the observed confounders $Z_i$. Consequently, would the "spurious contribution" of confounder $Z_i$ change when selecting a different ordering? This would seem counterintuitive for a framework that aims at decomposing the overall spurious effect into individual "spurious contributions". Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I suggest including a section on limitations. Here, the authors should particularly stress the assumptions necessary for the identification of spurious decompositions from observational data. Furthermore, a discussion on these assumptions in the context of the empirical example in Section 5 should be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in sharing his/her thoughts and comments. We respond point-by-point to each of the concerns raised. --- [W1: Required causal assumptions] Indeed, you are correct that a fully specified graph is needed for Theorem 3. We also agree that this may be challenging to obtain in practice. However, one also needs to consider that fine-grained spurious (or causal / mediated) effects can be inferred from such a fully specified graph (and, of course, data). In particular, our tools decompose the spurious effect, and give a fine-grained quantification of what the main confounders are. As is common in causal inference, the granularity of the obtained knowledge needs to be matched with the strength of the causal assumptions. Conversely, in the absence of such assumptions, fine-grained knowledge usually cannot be obtained. For instance, the causal hierarchy theorem [1, Theorem 1] shows that no claim can be made about higher-level counterfactuals from lower-level data (observational or experimental) without causal assumptions, which we specify through the language of causal graphs. Having said that, there is a possible relaxation of our approach, based on the recent development of cluster diagrams [2], which offers some formal way of trading off knowledge versus quantitative predictions. For cluster DAGs, one can consider groups of confounders, and thus the specification of causal assumptions becomes easier and less demanding. At the same time, causal decompositions can still be applied to cluster diagrams. This provides a way to choose a different level of granularity for settings where domain knowledge may not be specific enough to elicit a full causal diagram. [1] Bareinboim, Elias, et al. "On Pearl's Hierarchy and the Foundations of Causal Inference." In Probabilistic and Causal Inference: The Works of Judea Pearl, ACM, Special Turing Series, pp. 
507-556, 2022. [2] Anand, Tara V., et al. "Effect identification in cluster causal diagrams." Proceedings of the 37th AAAI Conference on Artificial Intelligence. 2023. --- [W2: Adding experiments] Thanks for the suggestion; please see the global response [G1]. We now add experiments with a known ground truth. --- [W3: notation $p(Y_x = y)$] The reviewer is indeed correct that this is a slight abuse of notation. However, this notation is standard in the graphical approach to causal inference, so we worry that deviating from it may hurt legibility for readers familiar with it. (For example, this is the notation in place at least since the Causality book, Pearl, 2000.) Still, to address the concern, we added an explicit remark saying there is a slight abuse of notation, and explaining that $Y = y$ is sometimes just replaced with $y$. --- [W4: Minor points] Thanks for all of these. We have fixed all three! --- [Q1: Topological Ordering] Thanks for raising this interesting point. In fact, the decomposition may change when the topological ordering is changed. However, this should not come as a major surprise, since the decomposition for path-specific causal effects has exactly the same property (i.e., depends on the ordering). Moreover, in our setting, the topological ordering can also be seen as the "natural" ordering, and it is also the only identifiable one. Additionally, one may also consider that the decomposition is unique (independent of ordering) for the additive case (this is again similar to the path-specific causal effects). Let us know if we can elaborate further on this note. --- [L1: Limitations] Thanks, please see global response [G2] which describes the discussion we added. Furthermore, regarding the necessary assumptions for identification, our key proposal is to bring some of these examples from the appendix into the main text. 
In particular, in Example 7 (currently Appendix B.4), we ground the definitions of an anchor set, and also anchor set exogenous ancestral closure. We believe that adding this will increase the transparency of the proposed definitions, and make more explicit the types of assumptions that are needed for decomposing spurious effects in the Semi-Markovian setting. --- Rebuttal Comment 1.1: Title: Thank you for your clarifications Comment: I appreciate the detailed response and additional experiments. I still have some concerns regarding practical applicability, but as pointed out by the authors, strong assumptions are to be expected and are in line with previous literature. Overall the paper is a valuable contribution to the causal inference literature. I raise my score accordingly and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our response, and providing swift feedback. We also acknowledge the strength of assumptions needed for using the tool, but remark that we are in particular optimistic about the usage of cluster diagrams in this context. In particular, we are grateful that the reviewer sees our framework as a valuable contribution to the causal inference literature, which is quite encouraging. Also, thank you for adjusting the score!
Summary: This paper considers the problem of decomposing spurious variations in both Markovian and Semi-Markovian models, which are potentially useful to many real applications. A non-parametric decomposition of spurious effects is given, and the identification issue is also discussed under suitable conditions. Strengths: - decomposing spurious variations can be potentially useful to many real applications - Presentation is very good with well-structured preliminaries and examples - Proof is complete and concise Weaknesses: It seems that there is no discussion on limitations of the developed theory. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The paper is very clear in both motivation and technical part. As the first work on this problem with a general setting, the content of the present version is sufficient for an acceptance. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: It seems that there is no discussion on limitations. Developing more identification theories is worth future investigation, as also mentioned by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in providing his/her feedback. We were encouraged by the fact that the reviewer appreciated the main strengths of the manuscript. We respond point-by-point to the raised concerns. [W1: Limitations of the developed theory] Thanks for noting this; Please see global response [G2] for a detailed discussion of the limitations, which is now added to the Discussion in the paper!
Summary: The authors tackle the problem of decomposing the source of variation in causal analysis. In many real-world problems, there are many factors that can introduce spurious correlation between a treatment and outcome, and breaking down the contribution of each variable to that spurious correlation can be useful in many fields where an explanation is important. The authors propose a methodology for both Markovian and semi-Markovian models, based on the 'abduction-action-prediction' method. It allows for the application of evidence to only a subset of exogenous variables, which then allows us to separate the variation from each exogenous variable. Strengths: This paper is well-written and well-motivated. The problem it seeks to address is important, and the introduction does a good job at setting the theoretical foundation and describing the problem. For the most part, the terminology is introduced at a good pace, making the overall narrative easier to follow. I appreciate the authors interspersing examples with figures, since without them, the paper would likely bog down with all the math. Overall, I found this a compelling approach to an important problem. Weaknesses: I think, for such an equation-heavy paper, the notation could be made clearer/more explicit in sections. For example, Proposition 1 is the first time that we see the $e^{U_1}$ notation, which it seems like means 'this evidence is applied to everything except for $U_1$'. That's a fairly counter-intuitive notation for such a thing, since generally a subscript or superscript implies that we're using those variables in someway, rather than explicitly excluding them (I realize you need to use them to exclude them, but conceptually, the notation is non-obvious for me). If I'm not misunderstanding what this notation means, then a more explicit definition before Proposition 1 would be helpful, given how central it is to the rest of the paper. 
As another notation point, Theorem 2 uses the notation $z_{-[i]}$, but until now, the $Z_{[i]}$ notation has meant, as defined in Definition 2, "all $Z_i$'s, from 1 to i", and it's not obvious how this translates to a negative. The COMPAS demonstration is great, but since it clearly lacks ground truth, additional synthetic experiments would be helpful to help demonstrate both the correctness of the theory and how this method can be used in practice on other types of data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there a list of all of assumptions required for your method? In Section 4.1, you say that the decomposition needs to follow a topological order of the variables U_i, but are there others? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In terms of social impact, this method is more likely to have a positive social impact, since causal inference is often applied in domains where spurious variation is present but where the decisions made can have significant effects on people's lives (e.g., the COMPAS example) Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in sharing his/her thoughts and comments. We were quite glad and encouraged by the fact that the reviewer appreciated the main strengths of our work. We respond point-by-point and hope to have addressed the raised concerns. --- [W1: Clearer notation] Thanks for this suggestion; the reviewer’s summary “evidence is applied to everything except for $U_x$” precisely captures the intuition behind the operator (i.e., there is no misunderstanding). However, regarding the choice of superscript / subscript – we have little flexibility here, as we see it. The subscript operator in the causal inference literature is already taken, and it has been used heavily to indicate (and index) interventions. Therefore, the superscript notation is the only option available to us (note that, in general, one may apply different parts of evidence to different latent variables, and thus the indicator for latent variables needs to be specific for each part of the evidence). Still, we definitely agree on the centrality of such an operator, and thus re-write the proposition slightly to make explicit how the operator is defined: > Let $P(Y = y \mid E = e^{U_1})$ denote the probability of the event $Y = y$ conditional on evidence $E = e$, defined as the probability of $Y = y$ in the PA submodel $\mathcal{M}^{U_1, E=e}$ (i.e., the exogenous variables $U_1$ are not updated according to the evidence). Then, we have that: \begin{align} P(Y = y \mid E = e^{U_1}) = \sum_{u_1} P(U_1 = u_1)P(Y = y \mid E = e, U_1 = u_1). \end{align} We hope this addresses your concern but, please, let us know. Additionally, we wonder if the notation $$ x^{\underline{U_{[i]}}}$$ would be better (the underline indicating that the evidence is not updated). We are happy to introduce such a change if the reviewer feels strongly about this (although the reviewer will, just like us, note a slight trade-off between making the notation more bushy vs. 
more informative). --- [W2: $Z_{-[i]}$] Thanks for spotting this glitch, which is an omission on our side and not defined anywhere. We now mention this explicitly in Definition 2. Specifically, the $-[i]$ notation corresponds to the complement, that is: $$Z_{-[i]} = \{Z_{i+1}, \dots, Z_{k}\}.$$ --- [W3: Synthetic Experiments] Please, see global response [G1]. We now add synthetic experiments with a known ground truth! --- [Q1: Assumptions for the Method] Thanks for asking this, it is indeed a good question. We distinguish three different considerations to try to clarify this point: (i) Firstly, a fully specified causal diagram is needed for the procedure. This is now also discussed in the limitations paragraph of the Discussion. (ii) Regarding the topological ordering of variables $U_i$ (which follows from the topological ordering of the $Z_i$) – this part is necessary for the identifiability of the spurious decomposition. However, for defining the decomposition itself, it is not strictly necessary. In other words, there may be a decomposition that does not follow a topological ordering, but we will not be able to compute it from the data. However, the one that does follow a topological ordering will be computable, as shown in Theorem 2. We now add a remark on this in the text! (iii) A similar consideration is also present in the Semi-Markovian case; here, however, we do not invoke a topological ordering, since there is no one-to-one correspondence with the observed variables. However, the very same intuition is behind the condition called anchor set ancestral closure (Definition 7). That is, for a specific order, as shown in Theorem 4, we will be able to compute the decomposition (otherwise, the decomposition can still be defined, but it cannot be uniquely computed from the data). We appreciate the thoughtful review and hope these answers help clarify the required assumptions and the provided results. Please let us know if there are any further questions!
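The partial-abduction identity restated in the rewritten proposition can be checked by direct enumeration on a small example. The SCM below (binary $U_1, U_2$, with $X = U_1 \lor U_2$ and $Y = X \land U_2$) is a hypothetical toy model chosen only to illustrate the computation, not one from the paper:

```python
# Toy Markovian SCM (hypothetical): U1 ~ Bern(0.3), U2 ~ Bern(0.6),
# X = U1 or U2, Y = X and U2. The evidence E is an observation of X.
P_U1, P_U2 = 0.3, 0.6
x_fn = lambda u1, u2: int(u1 or u2)
y_fn = lambda x, u1, u2: int(x and u2)

def prior(u1, u2):
    # P(U1 = u1, U2 = u2): exogenous variables are independent (Markovian)
    return (P_U1 if u1 else 1 - P_U1) * (P_U2 if u2 else 1 - P_U2)

def p_y_given_e_u1(y, x_obs, u1):
    # P(Y = y | X = x_obs, U1 = u1): only U2 is updated by the evidence
    num = den = 0.0
    for u2 in (0, 1):
        if x_fn(u1, u2) == x_obs:
            den += prior(u1, u2)
            if y_fn(x_fn(u1, u2), u1, u2) == y:
                num += prior(u1, u2)
    return num / den if den > 0 else 0.0

def partial_abduction(y, x_obs):
    # P(Y = y | E = e^{U1}) = sum_{u1} P(U1 = u1) P(Y = y | E = e, U1 = u1):
    # U1 is kept at its prior, as in the PA submodel
    return sum((P_U1 if u1 else 1 - P_U1) * p_y_given_e_u1(y, x_obs, u1)
               for u1 in (0, 1))

def full_abduction(y, x_obs):
    # Standard abduction: both U1 and U2 are updated by the evidence
    num = den = 0.0
    for u1 in (0, 1):
        for u2 in (0, 1):
            if x_fn(u1, u2) == x_obs:
                den += prior(u1, u2)
                if y_fn(x_fn(u1, u2), u1, u2) == y:
                    num += prior(u1, u2)
    return num / den
```

Here `partial_abduction(1, 1)` evaluates to 0.88 while `full_abduction(1, 1)` is 0.6/0.72 ≈ 0.833: keeping $U_1$ at its prior changes the answer, which is exactly the extra degree of freedom the PA submodel provides.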
Summary: This paper provides a new framework for decomposing spurious effects and its identification conditions. In addition, they proposed a nonparametric method for decomposing the spurious effect under sufficient conditions. Strengths: - In general, this paper is well-motivated. - It provides a procedure to decompose spurious effects. - The theoretical results in this paper are sound. Weaknesses: - The paper could clarify the difference between "spurious effect" and "spurious variations", or state if they are used interchangeably. More explanation of how these two terms are defined would help the reader understand if they refer to the same concept. - More details could be provided on how the procedure for Semi-Markovian spurious decomposition is implemented. In particular, explaining the steps for removing the effects of all exogenous variables would make the approach clearer. - Comparing the proposed method against existing methods on estimating causal effects could highlight an advantage. Since the proposed method can estimate redundant effects, the resulting causal effect estimates may be more accurate compared to current approaches. This comparison could emphasize the improved performance enabled by accounting for spurious variations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in providing feedback. We respond to each point in the sequel and would be happy to provide further clarification if you find it helpful. --- [W1: Spurious Effect vs. Variation] Thanks; we acknowledge that in the paper the distinction between the two is not entirely clear. We now add the following clarification in the Intro: “A spurious effect is a quantification of spurious variations (or a subset thereof)”. Thus, for a spurious effect, we usually have some kind of quantity in mind, for example, the experimental spurious effect (Exp-SE) in the l.h.s. of Eq. (2). Spurious variations are a broader concept and represent co-variations induced by the latent variables in the SCM, for example, a specific value of a latent $U_z$ that affects both $X$ and $Y$ and induces a correlation between them. Hence, an effect is usually tied to a quantity, whereas variations are a broader term. In the text, in most places, these two notions could indeed be used interchangeably (although we do not think of them as identical, as described above). Please let us know if this helps clarify the distinction. --- [W2: Details of Semi-Markovian procedure in practice] Thanks for mentioning this. Indeed, some of the examples related to Theorem 4 have been pushed into the appendix due to space limitations. Our key proposal here is to bring some of these examples into the main text. In particular, in Example 7 (currently Appendix B.4), we ground the definitions of an anchor set, and also anchor set exogenous ancestral closure. We believe that adding this will increase the transparency of the proposed definitions, and make more explicit the types of assumptions that are needed for decomposing spurious effects in the Semi-Markovian setting. We thank you for this suggestion! --- [W3: Comparing against existing methods] Thanks for bringing up this point. 
In fact, for estimating the Exp-SE, one needs to estimate $P(y \mid x)$ (which is easy for a binary $x$), and also $P(y_x)$, which is a causal effect. Therefore, the approach in the paper builds on methods for estimating causal effects, and is not really a competitor in this sense. It would indeed be nice to have a result showing that a decomposition of the Exp-SE quantity can improve the estimation of causal effects. For concreteness, Pearl was able to show that in a special case, a causal (not spurious) effect can be estimated in pieces and then composed so as to obtain a more efficient estimator of this effect [1] ([2] may also be interesting in this context). However, there are challenges in extending this to spurious effects and to more general causal diagrams, which we hope to investigate further in the future. Again, here our goal is mostly to introduce spurious effect quantities that were not previously defined, and also to understand the decomposability and other key properties of this quantity. The natural subsequent question after having this foundational step solidified is to investigate estimation properties, both from a statistical and computational perspective. [1] J. Pearl, "What is Gained from Past Learning" UCLA Cognitive Systems Laboratory, Technical Report (R-472), March 2018. Journal of Causal Inference, Causal, Casual, and Curious Section, 6(1), Article 20180005, March 2018. https://doi.org/10.1515/jci-2018-0005 [2] J. Hahn and J. Pearl "Precision of Composite Estimators" UCLA Cognitive Systems Laboratory, Technical Report (R-387), September 2011. Working paper. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: Thank you for the clarifications. I’m happy to increase my evaluation score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for a constructive review process and for acknowledging our clarifications. We are quite encouraged by the grade increase from the reviewer, thanks!
Rebuttal 1: Rebuttal: The authors would like to sincerely thank all the reviewers for this paper. The main strengths and novelty were clearly appreciated, and the questions raised were quite useful for us to revise and improve the paper. In our response, we index all weaknesses with W, questions Q, and limitations L. We do not fully cite the reviewers' questions (due to character limit), but try our best to provide a caption for each W/Q/L. Here, we would like to provide two global responses, which are then cited in the individual responses as well. --- [G1: Extending the Evaluation] As several reviewers suggested, synthetic experiments where the ground truth is known could be a useful addition. We have conducted such experiments, and we provide an example in the pdf that accompanies our review response (an additional experiment for the Semi-Markovian case will also be added). Explanation of synthetic experiment A (see pdf): For this example, we set the parameters $\lambda_1 = \lambda_2 = \lambda_3$ in the described SCM. We then vary each parameter $\lambda_i \in [0, 0.2]$. This changes the value of the effect associated with latent variable $U_i$. We compute the ground truth values for the spurious effects based on the true SCM, using rejection sampling. We also estimate the values using the estimator described in Theorem 2. The plot in the accompanying pdf shows the result. In particular, we see that the effects associated with each $U_i$ change as $\lambda_i$ increases. Furthermore, we see that the associated estimates for the effect are correct, and the ground truth values fall within the 95% confidence interval (CI). Therefore, this confirms the correctness of our approach. 
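The rejection-sampling step used for the ground-truth values can be sketched as follows. Since the $\lambda$-parameterized SCM is not fully specified in this response, the model below is a hypothetical stand-in with a single confounder $U$; only the rejection-sampling mechanics are the point.

```python
import random

# Hypothetical toy SCM standing in for the lambda-parameterized model:
#   U ~ Bern(0.5);  X | U ~ Bern(0.5 + lam*(2U-1));  Y | U ~ Bern(0.5 + lam*(2U-1))
# U confounds X and Y, so P(Y=1 | X=1) differs from P(Y=1): a spurious association.
def sample(lam, rng):
    u = int(rng.random() < 0.5)
    p = 0.5 + lam * (2 * u - 1)
    return u, int(rng.random() < p), int(rng.random() < p)

def rejection_estimate(lam, x_target, n=200_000, seed=0):
    # Estimate P(Y=1 | X=x_target): draw from the SCM, discard draws
    # whose X does not match the conditioning event
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        _, x, y = sample(lam, rng)
        if x == x_target:
            kept += 1
            hits += y
    return hits / kept

# Exact value for comparison at lam = 0.2: P(Y=1 | X=1) = 0.7**2 + 0.3**2 = 0.58
est = rejection_estimate(0.2, 1)
```

The same discard-and-average pattern extends to conditional quantities over several confounders, at the cost of a lower acceptance rate as the conditioning event becomes rarer.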
--- [G2: Discussion of the Limitations] The following discussion is added to the main text, to clarify the possible limitations of the proposed approach: > “The main limitation of the approach proposed in the paper is the need for a fully-specified causal diagram, including all the bidirected edges that indicate confounding. Specifying such a graph may be challenging in practice. However, one also needs to consider how much knowledge can be ascertained from a fully specified graph and the data. In particular, our tools decompose the spurious effect, and give a fine-grained quantification of what the main confounders are. Such knowledge can be quite powerful and informative in practice. As is common in causal inference, the granularity of the obtained knowledge needs to be matched with the strength of the causal assumptions (in this case, specifying the causal diagram). Conversely, in the absence of such assumptions, fine-grained quantitative knowledge about these effects cannot be obtained in general. For instance, the causal hierarchy theorem [1, Theorem 1] shows that claims about counterfactuals from observational or interventional data cannot be made without causal assumptions, as usually specified through causal diagrams. We hypothesize a similar result, that is, a precise quantification of spurious effects is not attainable in the absence of a causal diagram. >Furthermore, another technical solution is possible that may alleviate some of the difficulty of causal modeling. Recently, cluster diagrams have been proposed [2], in which one can consider groups of confounders (instead of considering each confounder separately), and thus the specification of causal assumptions becomes easier and less demanding (i.e., due to clustering, the number of nodes in the graph is smaller). At the same time, causal decompositions as described in this paper can still be applied to cluster diagrams. 
This offers a way to choose a different level of granularity for settings where domain knowledge may not be specific enough to elicit a full causal diagram, while a causal analysis still needs to be undertaken. >Finally, we also mention that the identification criteria for Semi-Markovian spurious decompositions provided in Theorem 4 are not complete (i.e., they are sufficient but not necessary). We hope to address this question in future work, and provide an even stronger identification criterion, or prove the completeness of the current one.” Pdf: /pdf/587728c1bdddf6e9bb878e10b095dd925b7e3ab8.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The manuscript studies the decomposition of spurious variations. Specifically, a tool is developed for the non-parametric decompositions in both Markovian and Semi-Markovian models. Strengths: 1. The considered problem is practical. As stated in the introduction, the related tools are almost entirely missing in the literature (though I'm not totally sure about it). 2. The theoretical framework is rather complete. Weaknesses: 1. The empirical evaluation seems to be not strong enough. The proposed tool has only been applied in one specific case. The generality and stableness of the method could be better illustrated by more types of evaluation, perhaps including some simulations. 2. Since causal inference in the presence of latent confounders is a well-studied problem, it will be helpful if more discussion on the considered task and the other related ones can be conducted. This can provide additional insight and broaden the understanding of the subject matter. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What could be the limitations of the proposed tool? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I didn't find a discussion on the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort in providing the comments. We respond point-by-point to the concerns raised. In the strengths, the reviewer states we are solving a practical and important problem, and provide a complete theoretical framework. We kindly ask the reviewer to reconsider the gap between the current literature and the tools proposed in our paper, based on the clarifications provided below. --- [W1: Empirical Evaluation] Thanks for the suggestion, and for giving us a chance to clarify this further. We would also like to emphasize the generality of the theoretical framework proposed. We handle both the Markovian and the Semi-Markovian cases in the non-parametric setting. Furthermore, we now also added two synthetic experiments, to test the method on a setting where the ground truth is known. Please also see global response [G1]. --- [W2: More discussion on considered task] We would like to clarify the issue. While there are indeed methods in the literature for handling latent confounders, the estimation target is almost invariably a causal effect or its conditional / mediated versions. That is, no method addresses the specific challenge investigated in this manuscript, that is, how to decompose spurious variations. Notwithstanding, the problem of understanding spurious variations is quite pervasive in applied sciences, such as epidemiology, and in other contexts such as explainable/fair AI. For instance, exposure to a harmful toxin and the development of a disease may be confounded by other work-related hazards. The newly proposed method allows one to quantify which hazards confound this relationship, and how strong they are, which may be quite important for epidemiologists. The types of questions our manuscript addresses have not been explicitly considered before. 
However, the type of reasoning about the strength of the confounders has mostly been done for linear models (where confounding can be associated with the estimated coefficients) or through variable importance. Therefore, our work can be seen as providing the first, non-parametric generalization of these basic types of reasoning, which decomposes the spurious effect quantities. In this sense, we believe it fills in an important gap, complementing the existing literature on the estimation of causal effects, and providing another tool for more precise and fine-grained causal analyses. --- [Q1: Limitations] Please see global response [G2]. In particular, the discussion on cluster DAGs shows how some of the causal assumptions may be relaxed. --- Rebuttal Comment 1.1: Comment: Thank the authors for the clarification. I will keep my score.
Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment
Accept (poster)
Summary: This study focuses on the translation of images into a sequence of discrete language tokens, aiming to enhance their understandability for language models such as GPT-3. By achieving this, vision-language tasks can be seamlessly translated into LLM tasks using zero-shot or few-shot learning techniques. Strengths: 1. The paper has a very clear presentation, and the writing is good; it demonstrates the method and experimental results well. 2. The paper has good motivation: it attempts to solve image understanding tasks by translating images into quantized tokens for LLM understanding. Weaknesses: 1. The experimental results do not look strong enough. The best performance only reaches 54% top-1 accuracy for 2-way image classification, and the model's performance does not scale up with more training data: training on 75% of the data performs worse than training on 50%. 2. The idea does not seem very natural. Converting the vision features into quantized tokens through a frozen RoBERTa codebook will lose a lot of visual information. In contrast, models such as MiniGPT-4 or LLaVa translate images into soft prompts that an LLM can understand more easily, via a CLIP vision encoder. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be better to also evaluate on other vision-language tasks such as VQA or captioning, and to compare with MiniGPT-4 to see if your method has better visual encoding performance. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns below. > Q1: Top-1 accuracy is only best at 54%, and the model does not scale with more training data We believe that there may be a misunderstanding. If you are referring to Figure 5, the x-axis is the masking ratio during training, not the percentage of training data. We apologize for the confusion, and will update the figure caption to clarify. These results reflect trends similar to those observed in other related papers (e.g., MAE), where there is generally a "sweet spot" in terms of optimal masking ratio. Regarding the 54% top-1 accuracy, the value for this metric is computed as an average over several different evaluation settings. Table 1 shows a more fine-grained breakdown of the 54%, where we can see the average is greatly pulled down by outliers. Generally, LQAE can achieve 65-70% accuracy on the task. > Q2: The idea does not seem very natural compared to models such as MiniGPT-4 or LLaVa Although LQAE diverges from the standard paradigm of learning vision-language models, it introduces several key benefits over existing approaches. Firstly, LQAE is far more sample-efficient (in text-image pairs) compared to prior methods, which are generally trained on millions or hundreds of millions of paired examples. LQAE's pretraining phase only requires unpaired data, and only a few examples (1-5) are needed during inference. This property is especially useful in domains where paired data may be difficult to collect or curate. Secondly, LQAE presents a more computationally efficient method than prior works, as we do not require backpropagation through an LLM that could have potentially 10-100Bs of parameters. Promising future directions may involve scaling up our model, or investigating sample-efficiency trade-offs by incorporating small amounts of paired data during training. 
Overall, we believe that our work provides a different methodology for vision-language learning that would be of interest to the NeurIPS community as a whole. > Q3: Comparison against MiniGPT-4 or LLaVa While we do discuss both of these works in the Related Work section, per NeurIPS guidelines we consider these two papers concurrent work, as both were released on April 17 and April 20, within the specified 2-month period prior to the NeurIPS submission deadline. In addition, we would like to clarify the difference in data and computational requirements between LQAE and LLaVa / MiniGPT-4 as outlined in our response to Q2, which makes them less meaningful baselines to compare against. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification of the Figure 5 results. I have resolved my concern on this issue. Another point: you claim that "This property is especially useful in domains where paired data may be potentially difficult to collect or curate". Could you show some results in which your model works much better in situations where paired data may be difficult to obtain? I am not quite convinced by this claim. You may compare the results generated by your model with the results from MiniGPT-4, LLaVa, or any other good open-sourced vision-language models. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We would like to thank the reviewer for their detailed assessment of our work. We found the reviewer's follow-up questions and suggestions insightful, and we list our plans to incorporate them below. Please let us know if our answers address your questions. > Comparison with MiniGPT-4, LLaVA, or any other good open-sourced vision-language models. We conducted experiments to compare LQAE and MiniGPT-4 on the few-shot image classification and FastVQA benchmarks, with results presented in the tables below. 
For both tasks, LQAE outperforms baseline methods that lack access to text-image pairs, and slightly underperforms MiniGPT-4 (53.97 vs. 59.29 average on 2-way classification). As expected, LQAE performs much worse zero-shot (as it does not have access to any text-image pairs), and significantly closes the gap to only a few percentage points with **as few as 2 few-shot examples (1 example per class)**. In contrast, MiniGPT-4 requires 100s of millions of text-image pairs when considering all stages of pretraining, as MiniGPT-4's vision component is initialized from BLIP-2 (https://arxiv.org/abs/2301.12597), which is initialized from EVA-CLIP (https://arxiv.org/abs/2303.15389). Therefore, we strongly believe that LQAE presents a promising direction for future work in better leveraging unsupervised learning methods for more data-efficient multimodal learning. We thank the reviewer for suggesting the experiments, and we plan to incorporate them in the revision.

| Few-shot Image Classification Setting | Task Induction | no | yes | yes | yes | yes | yes | yes | Avg |
|---|---|---|---|---|---|---|---|---|---|
| | Inner Shots | 1 | 1 | 3 | 5 | 1 | 1 | 1 | |
| | Repeats | 0 | 0 | 0 | 0 | 1 | 3 | 5 | |
| No image or text | ASCII (64x64 img) | 0 | 5.2 | 5.9 | 6.5 | 4.5 | 4.8 | 5.2 | 4.59 |
| Image pretrain + Image-text finetune | MAE + Linear | 0 | 8.9 | 11.4 | 13.5 | 12.8 | 15.6 | 19.8 | 11.71 |
| Image-text pretrain | Frozen | 1.7 | 33.7 | 66.0 | 66.0 | 63.0 | 65.0 | 63.7 | 51.30 |
| Image pretrain | untrained LQAE | 0 | 8.2 | 13.8 | 14.5 | 10.4 | 12.7 | 15.6 | 10.74 |
| Image pretrain | LQAE | 1.5 | 35.2 | 68.2 | 69.8 | 68.5 | 68.7 | 65.9 | 53.97 |
| Image pretrain + text pretrain + Image-text finetune | MiniGPT-4 | 20.8 | 44.8 | 70.9 | 71.3 | 70.8 | 67.9 | 68.5 | 59.29 |

| Few-shot FastVQA Setting | Inner Shots | 0 | 1 | 3 | 5 | Avg |
|---|---|---|---|---|---|---|
| No image or text | ASCII (64x64 img) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Image pretrain + Image-text finetune | MAE + Linear | 0.0 | 0.0 | 0.5 | 1.4 | 0.5 |
| Image-text pretrain | Frozen | 3.7 | 7.8 | 10.1 | 10.5 | 8.0 |
| Image pretrain | untrained LQAE | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Image pretrain | LQAE | 0.0 | 8.5 | 11.9 | 12.8 | 8.8 |
| Image-text finetune | MiniGPT-4 | 7.2 | 10.2 | 13.7 | 15.2 | 11.5 |

> Could you show some results in which your model can work much better for the situation where the paired data maybe potentially difficult? Our work primarily focuses on vision-language alignment due to its ubiquitous nature in the community, as well as the availability of open- or closed-source models that can be leveraged. However, more generally, LQAE seeks to align two modalities in an unsupervised manner by training an autoencoder (VQ-VAE) in one modality with a denoiser (BERT) in another modality. As such, we believe that LQAE is a promising direction for future work in other domains where paired data may be harder to collect and unimodal data is more readily available. For example, consider the domain of audio-language understanding. Unpaired text and audio data is fairly easy to collect: text is well studied, and unpaired audio data can be extracted from video datasets such as ACAV (100M audio-video pairs). However, although text-audio datasets exist, far fewer paired samples exist compared to the number of unpaired samples. AudioSet (https://research.google/resources/datasets/audioset/) is a large dataset that contains ~3M pairs, but a large fraction (~2M) are music / speech related, and only ~1M relate to natural sounds. This is much fewer than the current standard of utilizing hundreds of millions, or billions, of text-image pairs to train current vision-language models.
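The quantization mechanism discussed in this thread, mapping vision-encoder outputs onto a frozen LM token-embedding table so that an image becomes a sequence of token ids, can be sketched in a few lines. This is a hypothetical illustration with made-up sizes and random data, not the authors' implementation; `token_embeddings` and `patch_embeddings` are stand-ins for the frozen codebook and the encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a frozen LM token-embedding table (the
# "codebook") and vision-encoder outputs for a few image patches.
# Sizes are illustrative, not the paper's actual dimensions.
vocab_size, dim, num_patches = 50, 8, 4
token_embeddings = rng.normal(size=(vocab_size, dim))   # frozen
patch_embeddings = rng.normal(size=(num_patches, dim))  # encoder output

# Quantize each patch embedding to its nearest token embedding,
# yielding a sequence of discrete token ids that an LM can consume.
dists = ((patch_embeddings[:, None, :] - token_embeddings[None, :, :]) ** 2).sum(-1)
token_ids = dists.argmin(axis=1)
quantized = token_embeddings[token_ids]
```

The resulting `token_ids` sequence is what makes few-shot prompting of a text-only LLM possible: the image is literally rendered as text tokens.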
Summary: The paper proposes the Language-Quantized AutoEncoder (LQAE), a representation learning approach that quantizes latents to the codebook/vocabulary of a language model, here a BERT model. During training, the quantized latents are masked and decoded by the frozen BERT model to implicitly impose language structure onto the learned latents. Once the vision encoder is learned on unpaired image-only data, the latent representation can be used for few-shot prompting of LLMs, since the latent can be directly translated to text. Strengths: - The proposed idea of using a BERT model to guide the latent representation to have a language-like structure is novel and interesting. - Extensive ablations evaluate the model from different perspectives and provide insights into the design decisions. - LQAE is competitive and performs better on downstream tasks like few-shot classification and VQA compared to Frozen [23]. Weaknesses: - One key motivation is that paired data is more costly to obtain than single-modality data. While true, large-scale paired data such as LAION is cheap to collect. Downstream task performance is far below that of models trained on paired data, such as (Open)CLIP. - Frozen is portrayed as the image-text pretrained model to beat, although more recent models such as Flamingo [A] outperform it by a large margin. - The task generalization that can be achieved with the proposed model could have been highlighted more by evaluating on more tasks (see [A] for a reference). - Some details of the method are not clear, specifically around the loss function and gradient propagation. See questions below. References: [A]: Alayrac et al., Flamingo: a Visual Language Model for Few-Shot Learning, NeurIPS 2022 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How do gradients propagate through the network? Are straight-through operations applied when an embedding is replaced by a codebook entry? 
E.g., how does the gradient of the reconstruction loss travel back to the image encoder? Similarly, how does it work for the MLM loss? Since there seems to be no stop-gradient applied on the last term of the loss equation, I presume the gradient travels both to the "target" $z$ as well as to the inputs $z_m$, with a straight-through estimator applied to reach the image encoder? Please correct me if this understanding is wrong and elaborate. 2. Intuitively it makes sense that BERT should guide the embeddings $h$ to follow a language structure. In theory, however, what exactly prevents the embeddings from getting stuck and always collapsing to the same codebook entries? In VQ-VAE this problem is solved by re-initializing unused codebook entries to be close to frequently used ones, but this is not possible here. What guarantees that a diverse set of tokens/codebook entries is being used? 3. At first sight, Figure 3 looks like two duplicate figures. Switching the example might help better illustrate the difference between the two task configurations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: - In the end, it comes down to the practical application and efficacy of the proposed LQAE. On paper, it has the nice property of not requiring paired data. In practice, this can instead become a limitation, as models that make use of said large-scale paired data can learn relations more effectively and ultimately outperform this approach (e.g. [A]). It would be interesting to know to what extent the scale of the model is the limiting factor here. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review. We address some of your questions and concerns below. > One motivation of LQAE is that paired data is more costly to collect, but paired data is still easy to collect at a large scale, such as LAION. While one can source paired data from the internet, such as LAION, it suffers from low text quality and needs extensive cleaning to obtain well-aligned text-image pairs. Unpaired image data is much more widely available. Finally, future work can consider combining paired and unpaired data in LQAE, in a semi-supervised manner, to leverage both costly paired data and unpaired data. > Flamingo outperforms both Frozen and LQAE, and has more experimental evaluations As shown in Flamingo's Figure 1, the two main experiments there are VQA and few-shot prompting. In this paper, we include both experimental evaluations. While Flamingo is capable of doing more tasks than LQAE, we note that it is significantly more costly to train than LQAE, as Flamingo finetunes LLM weights, while Frozen and LQAE keep LLM weights frozen, so it would not be a fair comparison. > How does the straight-through estimate work with the reconstruction loss? Yes, straight-through estimates are applied during the quantization step. The reconstruction loss back-propagates through the decoder, the last-layer BERT features, the quantization / straight-through step, and the encoder. The MLM loss back-propagates through BERT, the quantization / straight-through step, and the encoder. > How does LQAE prevent the codebook from collapsing? Intuitively, our high masking ratio helps prevent codebook collapse during training. In some sense, it is similar to dropping out codebook vectors, which prevents over-reliance on specific codes. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My questions have been answered to some extent. I would like to leave the following remarks. 1. 
While LAION has been criticized for containing noisy data, the efficacy of both OpenCLIP and StableDiffusion is evidence of its utility. Whether extensive cleaning (apart from what has already been done by the research community) is required would need to be validated experimentally. And since LAION was collected the same way unpaired data usually is, I would disagree with the authors' notion of "costly" when referring to such paired datasets. This perspective could be better represented in the paper. 2. It is not true that Flamingo fine-tunes LLM weights. Flamingo keeps both LLM and vision encoder weights frozen while training new cross-attention layers from scratch. LQAE keeps the LM frozen and trains the vision encoder and decoder. In both cases new parameters are introduced and gradients are propagated through the language model (in the case of Flamingo, not through the vision encoder). Hence, apart from different architectural and motivational choices, I do not see a large difference that would make for an unfair comparison. Even if there existed significant differences between models, stating them transparently and reporting the state of the art would make the paper more compelling. This way it is easier for the reader to fit the paper into the existing literature. 3. It is still not clear how the reconstruction loss reaches the encoder. What does it mean for the gradient to go through the last-layer BERT features if the weights of BERT are frozen? There is no direct connection between the last BERT layer and the encoder. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We would like to thank the reviewer for their detailed assessment of our work. We found the reviewer's follow-up questions and suggestions insightful, and we list our plans to incorporate them below. Please let us know if our answers address your questions. 
> While LAION has been criticized to contain noisy data, the efficacy of both OpenCLIP and StableDiffusion are evidence for its utility. Whether extensive cleaning (apart from what has already been done by the research community) is required, would need to be validated experimentally. And since LAION was collected the same way as unpaired data usually is, I would disagree with the authors' notion of "costly" when referring to such paired datasets. This perspective could be better represented in the paper. We thank the reviewer for the insightful point. We acknowledge that LAION provides a large, useful source of aligned image-text, and will better clarify our perspective in the revised version of our paper. In particular, we believe a promising direction of future work is to apply our method on other domains (autoencoder in one modality with a denoiser in another modality) where paired data is less readily available compared to unpaired data (e.g. text-audio). We primarily focused on vision-language to demonstrate strong data efficiency properties due to the general availability of pretrained models, and evaluation methods. > It is not true that Flamingo fine-tunes LLM weights. Flamingo keeps both LLM and vision encoder weights frozen while training new cross-attention layers from scratch. LQAE keeps the LM frozen and trains vision encoder and decoder. In both cases new parameters are introduced and gradients are propagated through the language model (in the case of Flamingo not through the vision encoder). Hence, apart from different architectural and motivational choices, I do not see a large difference that would make for an unfair comparison. We would like to clarify that LQAE only backpropagates through a small (relative to LLMs) BERT encoder, whereas Flamingo requires backpropagation through an LLM (10s of billions of parameters, or more). LQAE only uses an LLM for inference.
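The straight-through estimator debated in this thread has a simple generic form, independent of any particular model: the forward pass performs a non-differentiable nearest-codebook lookup, and the backward pass copies the incoming gradient past that lookup as if it were the identity. The sketch below is a hand-rolled illustration with a toy one-dimensional codebook, not the authors' code.

```python
import numpy as np

def quantize(z, codebook):
    """Forward pass: nearest-codebook lookup (the non-differentiable step)."""
    dists = (z[:, None] - codebook[None, :]) ** 2
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

def straight_through_backward(grad_wrt_quantized):
    # Backward rule of the straight-through estimator: the gradient
    # arriving at the quantized output is copied unchanged onto the
    # encoder output, as if quantization were the identity map.
    return grad_wrt_quantized

codebook = np.array([-1.0, 0.0, 1.0])  # toy codebook, purely illustrative
z = np.array([0.9, -0.2, 0.1])         # toy encoder outputs
z_q, ids = quantize(z, codebook)       # z_q == [1.0, 0.0, 0.0]
grad_z = straight_through_backward(np.array([0.5, -0.5, 1.0]))
```

In autograd frameworks the same effect is usually obtained with the `z + stop_gradient(quantize(z) - z)` trick, which keeps the forward value quantized while routing gradients directly to `z`.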
Summary: This work introduces a method for unsupervised image-text alignment named LQAE. The central concept revolves around utilizing a VQ-VAE framework, replacing the conventional codebook with frozen token embeddings extracted from a Language Model (LM). This approach aims to establish meaningful correspondences between images and their associated textual descriptions. An additional step of masking and filling after decomposing the image into token embeddings is added to ensure consistency and improve the likelihood of the generated texts. There is a comprehensive range of experiments, including an insightful ablation study that thoroughly analyzes each component's importance and respective roles in the alignment process. Notably, the results are better than alternative solutions, even in challenging few-shot settings. By harnessing the power of the GPT model to extract patterns from the text associated with each image, the proposed method achieves remarkable performance in accurately classifying the images. Strengths: - One of the notable strengths of this paper is its comprehensive and detailed ablation study, which consistently enhances its soundness. The authors meticulously analyze and dissect the different components of their proposed method, providing valuable insights into the architectural choices. The thorough investigation of the encoding process and its subsequent integration with the language model is particularly commendable. - Additionally, the paper is well-crafted and exhibits a diverse range of experiments that effectively validate the proposed method. The method itself showcases architectural innovation, setting it apart from its competitors in the field, but since I’m not very familiar with the related literature, my confidence in this regard is not high. Weaknesses: Regarding the experimental setup, a weakness I see is that the results don’t have statistical significance reported, given that only a single run for each setting is shown. 
This somewhat limits the assessment of the robustness of the method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) During the analysis of the generated text, it appeared to me that most of the generated words were named entities. Do the authors have any insights into why the model tends to use named entities more frequently than standard, more common words? Is the model possibly relying on named entities due to their higher specificity, allowing for greater diversity and (possibly) orthogonality compared to other words, thus enabling more detailed descriptions? 2) There’s a typo on line 163, an extra "is". 3) Do the authors plan on releasing the code upon acceptance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns and questions below. > Are results statistically significant with one run? We have conducted multiple runs of the few-shot prompting experiments. Across three independent runs with temperature = 0, the average scores are 29.039, 29.038, 29.039, indicating the evaluation is reliable with very low statistical variance. Training LQAE should be consistent across runs on such large datasets; due to time constraints, we were not able to finish multiple runs of LQAE training. However, we plan to incorporate such results in the final version. > Do the authors have any insights into why the model tends to use named entities more frequently than standard, more common words? As the reviewer has hypothesized, we believe that the tendency to use named entities is indeed due to the long-tailed nature of the named-entity subset of the vocabulary, which allows access to a greater number of more diverse codebook vectors for better reconstruction. > Do the authors plan on releasing the code upon acceptance? Yes, we plan to release all code on acceptance. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the rebuttal. It addressed the comments I've raised in my review. Considering that and the other reviews/author responses, I confirm my acceptance recommendation. --- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We would like to thank the reviewer for their insightful response to our rebuttal, and for recommending acceptance. If the reviewer intended to raise their score to 7, we would kindly ask the reviewer to update the score in the original review.
Summary: The proposed approach, LQAE, addresses the lack of visual grounding in large language models by aligning text and image data in an unsupervised manner. It encodes images as sequences of text tokens by quantizing image embeddings using a language codebook and reconstructing the original input from a masked version of the quantized embeddings with BERT. LQAE enables few-shot multi-modal learning, outperforming baseline methods on tasks like image classification and visual question answering with as few as 1-10 image-text pairs. Strengths: 1. The paper proposes a simple but effective method to align text and image representations with pretrained models. 2. The proposed method allows images to be used in the same way as text, which could be adopted for downstream tasks. 3. The method doesn't require image-text pairs, as it unidirectionally aligns images to text embeddings. Weaknesses: 1. The proposed method may lack explainability compared to methods that use contrastive learning to train on text-image pairs. 2. I'm concerned about the technical novelty, as there have been many works trying to align and mix visual and text tokens, including BEiT, Parti, etc. 3. The trained representation cannot transfer between models, e.g., between RoBERTa and BERT. 4. The downstream tasks are focused on few-shot settings. How does LQAE perform when doing text-image fine-tuning? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns below. > The proposed method may lack explainability compared to methods that use contrastive learning to train on text-image pairs. Interpretability is indeed a limitation of the method, as we do not use text-image pairs. However, we believe that a promising direction for future work is to train LQAE models with mixed unpaired and paired data, for learning that is more sample-efficient (in paired data) than existing methods. In addition, LQAE shows promise as a more general framework for leveraging single-modality denoisers to align arbitrary multi-modal pairs, and can be useful in scenarios where unpaired data is much easier to acquire than paired data. > Technical novelty over prior works that try to align and mix visual and text tokens and do text-image finetuning, including BEiT, Parti, etc. We would like to emphasize that the novelty of our work over prior methods centers around enabling vision-language capabilities without requiring text-image pairs - in contrast to nearly every prior vision-language work (Frozen, Flamingo, GPT-4, Parti, BLIP, etc.). In addition, our method does not require fine-tuning or backpropagating through an LLM, which can be expensive (e.g., a 175B LLM) or impossible (no weights available, only API access / prompting). Although BEiT does not require text-image pairs, the method is unimodal in images and does not contain any language information, unlike LQAE. > The trained representation cannot transfer between models, e.g., between RoBERTa and BERT. Yes, this is a limitation of our model. However, to the best of our knowledge, this limitation is not unique to our method: in most vision-language models, such as Frozen or GPT-4, both vision and language representations are model- / finetuning-specific. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I've carefully read it and decided to keep my original recommendation. 
--- Reply to Comment 1.1.1: Title: Thank you for your response Comment: We would like to thank the reviewer for their detailed and positive assessment of our work. We found the reviewer's questions and suggestions insightful, and we thank the reviewer for responding to our rebuttal.
NeurIPS_2023_submissions_huggingface
2023
Unleashing the Power of Randomization in Auditing Differentially Private ML
Accept (poster)
Summary: This work presents a methodology for auditing DP-trained models through the introduction of Lifted DP and canary-based auditing techniques. Strengths: The use of canaries is a very intuitive strategy to evaluate the DP mechanism (which can subsequently attract a broader non-DP-specialist audience to this area). The paper is structured well, has comprehensive details on both the method and the corresponding experimental results, and shows clear motivation and conclusions. Weaknesses: While the method shows promise, the evaluation results are not very representative: FMNIST is a rather small dataset, and so is Purchase-100. Given that the models associated with these are also rather small, I do not see how well this method can scale to larger architectures and more complex datasets (especially as the canary-generation process can heavily depend on them). It is not very clear to me what the advantage is over the work of https://arxiv.org/abs/2305.08846. There, model auditing can be performed in a single training run, rendering other methods of auditing significantly less efficient. I would like the authors to comment on why their work is a substantial contribution given the existence of this method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the dependency on the epsilon level? There does not seem to be a clear trend in how well the method performs as more noise is added. I would like the authors to discuss this further and hypothesise what the trend is and why. Could the authors elaborate on what they mean by ‘poisoning’ canaries? What is the methodology like on non-image modalities? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I described most of the limitations in the weaknesses section, but to summarise: A) it is not clear how the method performs in non-trivial settings (e.g. ImageNet), B) the improvements compared to the work I linked above are not clear. Overall, this work shows promise, but I would like the authors to clarify the points I raised above before I can recommend acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. > **I would like the authors to comment on why their work is a substantial contribution given the existence of the method [of Steinke et al.]** We make **4 key contributions**: (1) the framework of lifted DP, (2) a novel randomized leave-one-out hypothesis test for auditing lifted DP, (3) adaptive confidence intervals, (4) detailed numerical results of the framework, including the bias-variance trade-offs it involves. The independent and concurrent work of Steinke et al. also considers auditing with randomized canaries that are Poisson-sampled, i.e., each canary is included or excluded independently with equal probability. **Their recipe is different from ours**: it involves computing an empirical lower bound by comparing the accuracy (rather than the full confusion matrix as in our case) from the possibly dependent guesses with the worst-case randomized response mechanism. This allows them to use multiple dependent observations from a single trial. Our **confidence intervals** are adaptive to the level of correlation between the canaries, while theirs are non-adaptive and worst-case. Our work gives a **deeper analysis of the bias-variance tradeoff** from utilizing randomized canaries. Further, we develop two **novel technical tools**, LiDP and adaptive XBern confidence intervals, which could be of general interest to the community. We are excited about the possibilities of combining our adaptive confidence intervals (the key to our improvements) with the framework of Steinke et al. (the key to their improvements) to get the best of both worlds. We leave this for future work. Finally, we note that the work of Steinke et al. was submitted to arXiv on 15 May 2023, i.e., **2 days before the NeurIPS deadline** (17 May 2023). Per the **[NeurIPS policy](https://nips.cc/Conferences/2023/CallForPapers)**, "[a] submission will not be rejected on the basis of the comparison to contemporaneous work". 
We request that the reviewer assess the merits of our substantial theoretical and methodological contributions independently. > **The evaluation results are not very representative… larger architectures and more complex datasets ... non-trivial settings (e.g. ImageNet)** We opted for small datasets and models so we could test our theoretical predictions *comprehensively* and compare them with *previous* baselines in the literature. We run **150K experiments for each dataset-model pair** (6 epsilons * 6 values of K * 2 types of canaries * 2000 seeds per setting), which would be infeasible at larger scales. Small datasets and models have also been the norm for the privacy auditing literature, see e.g. [Jagielski et al. NeurIPS 2020](https://arxiv.org/abs/2006.07709), [Nasr et al. IEEE S&P 2021](https://arxiv.org/abs/2101.04535), [Lu et al. NeurIPS 2022](https://arxiv.org/abs/2210.08643). We note that the purpose of auditing is to check the sanity of the training algorithm, which (in principle) is independent of the model size and the dataset size. One could in practice use a smaller model and a smaller dataset to audit an algorithm, and, if it passes, deploy a larger model on a larger dataset. We agree that this is not ideal, but perhaps is a practical compromise to speed up the adoption of these tools. Further, demonstrating rigorous privacy auditing at ImageNet scale for the *first time* would in itself be a *significant milestone* in the field. This is outside the scope of our paper, whose key contributions are methodological, foundational, and theoretical. > **Question: What is the dependency with epsilon level?** Auditing DP grows exponentially harder with increasing epsilon. Consider, for instance, distinguishing randomized response $\text{RR}(\infty)$ vs. $\text{RR}(\varepsilon)$, which is equivalent to distinguishing between $\text{Bern}(1)$ vs. $\text{Bern}(\sigma(\varepsilon))$ where $\sigma(\cdot)$ is the sigmoid function. 
By known lower bounds in hypothesis testing, this requires at least $n \ge O(1/(1 - \sigma(\varepsilon))^2) \approx e^{2\varepsilon}$ samples, which is exponential in $\varepsilon$. Thus, we can only effectively audit smaller $\varepsilon$. Empirically, we see from **Figure 5 of the rebuttal PDF** that LiDP auditing greatly outperforms the baseline at small $\varepsilon$ (by a multiplicative factor of 2x to 4x). At large $\varepsilon$, both methods are constrained by the fundamental difficulty described above and perform similarly. > **Why do we refer to ‘poisoning’ canaries?** The phrase “poisoning” comes from the literature on adversarial machine learning where an adversary can “poison” the model training by adding specific (outlier) data points. Much of the prior work on designing canaries for auditing is inspired by this literature. See e.g. [Jagielski et al. (NeurIPS 2020)](https://arxiv.org/abs/2006.07709) for more historical context behind this nomenclature. > **What is the methodology (to design data poisoning canaries) on non-image modalities? + Canary designs for complex datasets** We run our experiments on image and tabular data. Prior work has considered designing canaries for text/speech in the context of memorization analysis (rather than DP auditing). For the text domain, prevailing approaches insert random sequences possibly based on templates as canaries [e.g. [Carlini et al.](https://arxiv.org/abs/1802.08232), [Mireshghallah et al.](https://arxiv.org/abs/2103.07567)]. For the speech modality, prior work uses a text-to-speech engine to convert such text canaries to the speech modality [[Jagielski et al.](https://arxiv.org/abs/2207.00099) p.16]. Our framework of LiDP auditing is flexible enough to incorporate recent advances in designing canaries for more complex datasets by lifting as illustrated in Section 5 and Appendix D. The aforementioned strategies can be lifted to audit LiDP as they are random by construction. 
This would be an interesting exploration for future work. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I would like to thank the authors for responding to my comments. A lot of their responses have really clarified certain aspects of the work (particularly the part on different levels of epsilon and the comparison to the method I have linked above). You are right in saying that this work was overlapping with this submission, so I will remove this issue from consideration. I am still not entirely happy with the comment on larger datasets, as in my mind, any canary-based method could suffer significantly from more complex datasets, potentially rendering these methods infeasible. I will, nonetheless, raise the score in light of my first comment, but would ideally still like an extended discussion (if not experimental evidence) on how well the method scales with some examples. Score has been raised accordingly and happy to update it further if this comment is addressed in detail. --- Reply to Comment 1.1.1: Title: Thank you for your constructive feedback. Comment: We would like to thank the reviewer for their constructive feedback. We agree that as the dataset (and the task) gets larger and more complex, the kind of canary tests we do might become weaker, thus making the lower bound smaller. It is an important challenge not just for our paper but for the community of researchers working on privacy auditing. We will add such discussion in the revision. We will get back with a more detailed response soon. --- Reply to Comment 1.1.2: Title: Thank you for the response! Comment: We thank the reviewer for the response! There are two major research questions in privacy auditing: (i) designing stronger canaries, and (ii) developing tighter confidence intervals. One needs both to achieve a tight lower bound on epsilon. In this manuscript, we focus on the latter question. 
Our approach inherits the strengths (and weaknesses) of existing canary designs but provides a better way to reduce the width of the confidence interval with a principled use of randomness. We try to make it clear that this paper does *not* attempt to design stronger canaries. We note that the gain of our approach still depends on the canary design via how correlated the randomly drawn canaries are. That said, there is no reason to believe that the canaries will have a larger correlation when applied to complex tasks (and datasets). At the same time, we agree with the reviewer that if one uses weak canaries, any canary-based method will fail. Ours is no exception here. However, this issue is orthogonal to the main contributions of our paper. In our view, it is outside the scope of a theoretical paper to (i) check the strengths of the canaries for larger and more complex datasets, and (ii) design stronger canaries for such tasks. These are timely and important research directions for the privacy auditing community. We will add a discussion on this subject. Thank you for an excellent suggestion!
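(Addendum to the ε-dependence answer earlier in this thread: a quick numerical illustration, not from the paper, of the $e^{2\varepsilon}$ scaling. A confidence-interval-based audit must resolve a gap of $1 - \sigma(\varepsilon)$ with a $1/\sqrt{n}$-wide interval, so roughly $n \sim 1/(1-\sigma(\varepsilon))^2 \approx e^{2\varepsilon}$; the helper `trials_needed` below is a hypothetical name for this back-of-the-envelope calculation.)

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def trials_needed(eps):
    """Trials for a 1/sqrt(n)-wide interval to resolve the gap 1 - sigmoid(eps),
    i.e. n ~ 1/(1 - sigmoid(eps))^2, which scales like e^(2*eps)."""
    gap = 1 - sigmoid(eps)
    return math.ceil(1 / gap ** 2)

# Compare the exact count with the e^(2*eps) approximation.
for eps in (1, 2, 4, 8):
    print(eps, trials_needed(eps), round(math.exp(2 * eps)))
```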
Summary: The authors propose a principled improvement over existing privacy auditing techniques. Privacy auditing through the binary hypothesis testing formulation implies gathering samples on the success of the adversary when trying to correctly guess the membership game against a mechanism $M$. All auditing techniques involve switching to a high-probability argument via confidence intervals to be able to provide lower bounds. These CIs converge with a rate of $O(\frac{1}{\sqrt{N}})$, where $N$ is the number of auditing samples. The goal of this paper is to improve this convergence rate by testing multiple (possibly correlated) samples at a time, providing an improvement for the convergence of up to $O(\frac{1}{\sqrt{NK}})$, where $K$ is the number of inserted canaries. Their proposed technique is as follows: 1. Run the mechanism $M$ on $D \cup \{ c_1, \ldots, c_K \}$ and $D \cup \{c_1, \ldots, c_{K-1}\}$, where the canaries are iid samples drawn from a fixed canary distribution. Through a clever randomisation argument, using the property that the canaries are iid from the same distribution, the authors propose exchanging the missing $c_K$ with other iid canaries, so that multiple iid canaries can be tested. By doing this, the authors can compute $K$ test statistics for the alternative hypothesis and $m$ test statistics for the null hypothesis. Now, the goal is to integrate this new information into the confidence interval. The authors do not make an independence assumption as in previous work and derive a tailored confidence interval that accounts for the possible correlation between the $K$ tests. Their analysis shows how their confidence interval converges at a rate ranging from $O(\frac{1}{\sqrt{N}})$ when the statistics are fully correlated to $O(\frac{1}{\sqrt{NK}})$ when the statistics are independent. 
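For concreteness, here is my own toy rendering of this recipe (a simplified sketch, not the authors' code: a noisy-sum mechanism stands in for DP training, a thresholded dot product stands in for the membership test, and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sum_mechanism(points, sigma):
    # Toy stand-in for a DP training pipeline: a noisy sum of the inputs.
    return points.sum(axis=0) + sigma * rng.normal(size=points.shape[1])

def sample_canaries(k, dim):
    # IID random canaries drawn uniformly from the unit sphere.
    c = rng.normal(size=(k, dim))
    return c / np.linalg.norm(c, axis=1, keepdims=True)

def one_trial(data, k, sigma, tau=0.5):
    # Alternative hypothesis: run the mechanism with all K canaries inserted.
    canaries = sample_canaries(k, data.shape[1])
    theta1 = noisy_sum_mechanism(np.vstack([data, canaries]), sigma)
    x = (canaries @ theta1 > tau).astype(float)  # K correlated membership guesses
    # Null hypothesis: run with K-1 canaries and test fresh iid canaries c'.
    theta0 = noisy_sum_mechanism(np.vstack([data, canaries[:-1]]), sigma)
    fresh = sample_canaries(k, data.shape[1])
    y = (fresh @ theta0 > tau).astype(float)     # K guesses on absent canaries
    return x.mean(), y.mean()

data = rng.normal(size=(100, 50)) / 50           # small, nearly neutral base dataset
tpr, fpr = zip(*[one_trial(data, k=8, sigma=1.0) for _ in range(200)])
print(f"mean TPR ~ {np.mean(tpr):.2f}, mean FPR ~ {np.mean(fpr):.2f}")
```

Each trial reuses a single trained model to score all $K$ (respectively $m$) canaries at once, which is exactly where the correlated test statistics come from.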
Strengths: * The paper has a nice structure; it solves a clearly stated problem known in the privacy auditing community, for which the authors introduce all the needed formalism and conduct the necessary experiments to back up their results. * The authors propose an algorithm that could be used as a drop-in replacement for other auditing techniques. * Formally accounting for the correlation between hypotheses fills an existing gap in auditing literature when multiple samples are used for auditing purposes. All the literature I am aware of makes an independence assumption, which might be strong in some cases. This work recovers that case but accounts for others as well, framing and formalising in a principled way the multisample testing problem. Weaknesses: * In the experimental section, the authors show a bias/variance tradeoff when auditing a mechanism $M$ that computes a noisy sum/mean. The same argument does not hold against differentially private machine learning models, and it would be interesting to show if the proposed technique can achieve tightness when auditing a (possibly simple) machine learning pipeline. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * The proposed auditing technique is not tailored for the Gaussian mechanism but for any mechanism $M$. For example, one could audit a Laplace Mechanism or a Subsampled Gaussian Mechanism with it. I think the authors could stress this for the reader, as improving auditing sample complexity in the general case is a truly desirable property, and this work does that. Would the authors want to add some details or employ their technique to audit more diverse mechanisms? * The bias/variance argument in section 4 holds against a mechanism that averages over a list of gradients, as it is easy to observe how inserting noisy samples will yield privacy amplification in that scenario. 
Yet, when we audit a machine learning model with input poisoning (so, actual samples need to be inserted, not the gradients), the relationship is not so clear anymore. How many sample canaries could we use for auditing purposes against a machine-learning model? Is there any scenario in which we can hope to audit correctly by training only two models, for example? (n=1, k=?) What is the performance in that scenario? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a detailed review and thought-provoking questions. > **Bias-variance for ML pipelines + auditing with input poisoning** We agree with the reviewer that for the Gaussian mechanism, we have all the information to (numerically) compute the bias and variance exactly for our estimator of the TPR and FPR (and consequently $\varepsilon$). This clearly shows the bias-variance tradeoff that we discuss in the paper. For an ML pipeline, things are more complicated for obvious reasons as the reviewer insightfully pointed out, but we still believe that a similar bias-variance trade-off holds. **Bias increases (i.e., lower bound on $\varepsilon$ decreases) with larger $K$**: This is challenging to show formally, but intuitively we believe this is true for the following reason. Let us consider the actual theoretical $\varepsilon$ of the mechanism with $K$ random canaries. The privacy (treating the canary $c_K$ as the one that is absent/present in neighboring datasets) is governed by the pair of distributions $P_{A,c_1^K}(\theta|c_K)$ and $P_{A,c_1^{K-1}}(\theta)$, where $A$ accounts for the randomness internal to the training mechanism and $c_1^K$ accounts for the randomness in the canaries $1$ to $K$. We argue that the bias appears because the theoretical $\varepsilon$ for $K>1$ should be smaller than that of $K=1$. This is suggested by the fact that the pair $P_{A,c_1^K}(\theta|c_K)$ and $P_{A,c_1^{K-1}}(\theta)$ when $K>1$ has more *randomness* coming from the other canaries $c_1^{K-1}$, which act as if they are part of the privacy mechanism and hence give a higher-variance output (which is more private, and hence smaller $\varepsilon$). We agree that this is hand-wavy and non-rigorous, and probably can only be confirmed empirically. 
**Variance decreases with larger $K$**: This refers to the variance of our estimators of the TPR and FPR, and the dominant term in the derivation (for example for the Bernstein bound) decreases with larger $K$. This is regardless of the mechanism that is being considered and should hold for general ML pipelines. However, the lower-order terms can again increase marginally with $K$, so for this to be absolutely true, we need large enough $n$ and small enough correlation between the canaries. We will add more details about this to the paper. > **Would the authors want to add some details or employ their technique to audit more diverse mechanisms (Laplace, etc.)?** We thank the reviewer for the fantastic suggestion. Indeed, our recipe can be used to audit any mechanism, including a general class of mechanisms built from a composition of additive noise mechanisms and subsampling. Note that the DP-SGD we audit in Section 6 includes sub-sampling and amplification by sampling in the mechanism and privacy accountant (for the theoretical upper bound). In the original submission, we only tried DP-SGD due to its overwhelming popularity in practice. Here, as a proof of concept, we additionally audit a Laplace mechanism using canaries drawn uniformly from the $L_1$ sphere. The results are qualitatively similar to those of the Gaussian case, with a near 8x gain in the sample complexity. We refer to **Figure 4 of the rebuttal PDF** for detailed results. > **How many sample canaries could we use for auditing purposes against a machine-learning model?** In our experiments with real data training, we observed that our lower bound improves with $K$ up to $K=64$ for Purchase data and $K=256$ for FashionMNIST. Afterward, we believe the lower bounds should start to get worse (get smaller) as $K$ increases. Choosing the right $K$ in practice is a difficult task, although using $K=\sqrt{N}$ seems to give reasonable results. 
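Since several questions above turn on how interval estimates become lower bounds on $\varepsilon$, here is a simplified first-order sketch of that final step (standard Wilson score intervals only, not the adaptive higher-order intervals of the paper; the helper names are ours for illustration):

```python
import math

def wilson_lower(p_hat, n, z=2.576):
    # Lower Wilson score bound for a Bernoulli mean (z = 2.576 ~ 99% two-sided).
    denom = 1 + z * z / n
    center = p_hat + z * z / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return max(0.0, (center - half) / denom)

def wilson_upper(p_hat, n, z=2.576):
    # Upper Wilson score bound, mirroring wilson_lower.
    denom = 1 + z * z / n
    center = p_hat + z * z / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return min(1.0, (center + half) / denom)

def eps_lower_bound(tpr_hat, fpr_hat, n, delta=1e-5):
    # Detection-based bound: eps >= log(p1_lower - delta) - log(p0_upper),
    # clipped to 0 when the intervals are too wide to certify anything.
    p1 = wilson_lower(tpr_hat, n)
    p0 = wilson_upper(fpr_hat, n)
    if p1 - delta <= 0 or p0 <= 0:
        return 0.0
    return max(0.0, math.log(p1 - delta) - math.log(p0))

# n = 1000 trials with a strong canary test certify a nontrivial epsilon;
# an uninformative test (TPR = FPR = 0.5) certifies nothing.
print(eps_lower_bound(0.9, 0.1, 1000), eps_lower_bound(0.5, 0.5, 1000))
```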
> **Is there any scenario in which we can hope to audit correctly by training only two models, for example? (n=1, k=?) What is the performance in that scenario?** Great question! Our method currently requires $n \ge K^{\ell / (\ell - 1)}$ for an $\ell$th order confidence estimator (Proposition 4). No lower bounds are known here, and whether this can be reduced to $n=1$ is an open question. The independent and concurrent work of [Steinke et al.](https://arxiv.org/abs/2305.08846) proposes privacy auditing with $n=1$ (this paper appeared on arXiv only 2 days before the NeurIPS deadline). It is not yet clear how their method compares to ours as the experimental settings are not comparable. There is a potential to harness the benefits of both approaches, which we leave as a future research direction. --- Rebuttal Comment 1.1: Comment: Thank you for your great and detailed answers, looking forward to the discussions with other reviewers!
Summary: This work studies the sample complexity (in terms of sampled models) of auditing differential privacy. It proposes a new approach to usage of canaries in auditing via hypothesis test, breaking an existing sample complexity barrier by making use of multiple, randomized canaries. These randomized canaries are used to audit a new, but equivalent formulation of approximate differential privacy called lifted DP, which states a privacy condition for neighboring datasets and rejection sets sampled from a distribution. Sampled models are reused to compute test statistics for each of the multiple canaries included in the dataset, and confidence intervals for the average test statistic are computed using techniques introduced in the paper, tailored to the distribution of test statistics produced by the new auditing method. Empirical evaluation of the proposed method is also given. Strengths: The paper makes progress in improving the sample complexity of DP auditing by introducing new techniques that should be compatible with future progress in strengthening canaries. The usage of multiple, randomized canaries is innovative and novel tools were used to adapt their usage to existing hypothesis testing frameworks for auditing. The empirical results in Section 4 and 6 were quite helpful in interpreting the expected sample complexity improvements from various choices of K and order of confidence interval. Weaknesses: The sample complexity improvements seem notable, but I'm curious about how large of a bottleneck sample complexity is to practical auditing efforts. I'd expect it to be a significant obstacle, but additional discussion of the extent to which sample complexity impedes privacy auditing could help further motivate this work. 
Notes: pg 5 - "in practice, it depends on how correlated the K test are" In the caption of figure 3 - "provides significant gain in the require number of trials to achieve a desired level of lower bound" pg 9 - "For deployment in production, it worth further studying approaches with minimal computational overhead" Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See comments in weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors adequately addressed potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a constructive review. **Additional discussion on the bottleneck of sample complexity for privacy auditing:** Auditing DP is fundamentally constrained by the sample size $n$, where each sample corresponds to one trained model. Rigorous lower bounds on privacy require $1/\sqrt{n}$ confidence intervals. To get convincing results, [Tramer et al.](https://arxiv.org/pdf/2202.12219.pdf) train $n=10^5$ models, but auditing generally requires $n=O(10^3)$ [[Jagielski et al.](https://proceedings.neurips.cc/paper_files/paper/2020/file/fc4ddc15f9f4b4b06ef7844d6bb53abf-Paper.pdf)]. This makes it unusable for all but the smallest models and datasets. Past approaches use heuristics without rigorous justification to avoid this large computational cost, e.g. [[Zanella-Béguelin et al.](https://arxiv.org/pdf/2206.05199)]. We will add this quantitative discussion in the motivation section. We introduce LiDP, which is equivalent to regular DP, but lets us use randomized canaries. We present a novel recipe based on a randomized leave-one-out hypothesis test to audit LiDP. We propose adaptive confidence intervals to leverage multiple correlated observations from each experiment. Altogether, we obtain the same rigorous lower bounds as the baselines with up to 16x smaller sample size for synthetic simulations and up to 5x for real data experiments. This directly translates into a 5x gain in the run-time of auditing, enabling the use of rigorous privacy auditing with a reduced computational cost. This, we believe, will speed up and broaden the adoption of these important tools for auditing private mechanisms. Further, the novel technical tools we develop in this paper — LiDP and adaptive XBern confidence intervals — might be of general interest beyond auditing. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response detailing to practical implications of their work. I will amend my confidence upwards.
Summary: This paper presents a methodology for calculating lower bounds for the parameter $\varepsilon$ of an $(\varepsilon,\delta)$-differentially private (DP) mechanism. When applied to machine learning algorithms, the methodology can be used to audit the privacy guarantees of popular training algorithms such as DP-SGD. The paper builds on existing methods that audit DP guarantees by testing the presence of a canary in the input of the mechanism. Given an $(\varepsilon,\delta)$-DP mechanism $\mathcal{A}$, two neighboring datasets $D_0$ and $D_1 = D_0 \cup \{c\}$ and a measurable set $R$, a lower bound for $\varepsilon$ is given by $$ \varepsilon \geq \log (\mathbb{P}(\mathcal{A}(D_1) \in R) - \delta) - \log \mathbb{P}(\mathcal{A}(D_0) \in R) . $$ A lower bound for $\varepsilon$ can be derived from a lower bound $\underline{\boldsymbol{p}}_1 \leq \mathbb{P}(\mathcal{A}(D_1) \in R)$ and an upper bound $\overline{\boldsymbol{p}}_0 \geq \mathbb{P}(\mathcal{A}(D_0) \in R)$ estimated from samples. This lower bound can be made tighter by choosing a canary $c$ whose presence in the training dataset can be easily tested with a rejection set $R$. However, using Bernoulli confidence intervals computed from $n$ samples, the bounds $\underline{\boldsymbol{p}}_1, \overline{\boldsymbol{p}}_0$ are loose by a factor of $1/\sqrt{n}$. Since each sample requires evaluating $\mathcal{A}$ on a different dataset $D_0$, decreasing this factor is expensive. The key idea of the paper is to instead audit a probabilistic lifting of DP (LiDP) using counterexamples sampled from a joint distribution $(\boldsymbol{D_0}, \boldsymbol{D_1}, \boldsymbol{R})$. 
While entirely equivalent to DP, thanks to the exchangeability of random i.i.d. canaries, LiDP allows reusing samples $\mathcal{A}(D \cup \{c_1,\ldots,c_K\})$ and $\mathcal{A}(D \cup \{c_1,\ldots,c_{K-1}\})$ to gather multiple correlated test statistics $\boldsymbol{x}_k$, one for each canary $c_k$. A lower bound for $\varepsilon$ can then be calculated similarly as for DP from bounds on the mean of these statistics. The paper analyzes the correlation between the statistics to derive higher-order exact (Bernstein) and asymptotic (Wilson) confidence intervals. Importantly, the authors prove that $\ell$-th order bounds are loose by a factor of $n^{(1-2\ell)/(2\ell)}$ when choosing $K = \lceil n^{(\ell-1)/\ell} \rceil$ and so e.g. second-order bounds reduce the number of samples required to attain a given confidence because their looseness decreases as $1/n^{3/4}$ rather than $1/n^{1/2}$ for first-order bounds (and previous methods auditing DP). The authors evaluate the methodology on a Gaussian mechanism showing a 4x gain in sample efficiency when using second-order confidence intervals with $n=1000$ compared to a baseline using Bernoulli confidence intervals. They also evaluate the method on linear and 2-layer MLP classifiers trained with DP-SGD on FMNIST and Purchase-100 using either random or clipping-aware poisoned canaries [31]. This evaluation shows an average improvement in sample efficiency of up to 3x also with $n=1000$. The supplemental material includes proofs of the results in the paper, algorithmic descriptions of the methodology, a derivation of asymptotic Wilson intervals, and comprehensive ablation studies varying the privacy budget $\varepsilon$, the number of canaries $K$, samples $n$, and dimension (for the Gaussian mechanism). Strengths: 1. A novel method for auditing differential privacy that can be combined with existing canary design strategies and improve sample efficiency. 2. 
First formal analysis of previously used heuristics reusing trained models to obtain multiple samples. 3. Technically solid theoretical foundations. Detailed proofs. Derivation of exact and asymptotic confidence intervals. 4. Great high-level intuition for why the method improves sample efficiency and the reasons for the bias/variance trade-off in selecting the number of canaries to use. 5. Extensive algorithmic descriptions of the method in the supplemental material that make the paper fairly self-contained and enable reproducibility. Authors promise to open-source their code to replicate results. Weaknesses: 1. The evaluation on ML scenarios uses very simple models: a linear model and MLPs with 2 hidden layers with 256 units (269k and 245k parameters for FMNIST and Purchase-100, respectively). 2. Evaluation baselines are limited to approaches using Bernoulli confidence intervals and not to more recent approaches using credible regions [66] which also claim improved sample efficiency. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Using the form of Bernstein's bound I am familiar with (Dudley, Richard M. Uniform central limit theorems. Vol. 142. CUP, 2014) would give a slightly tighter bound than in Equation (6). The bound in the paper seems to use $\sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$ to simplify it, making it looser. Can you clarify whether you use this simplification, or else provide a detailed derivation? 2. In your evaluation, do you set $m = K$ in Algorithm 1? 3. There seems to be a qualitative difference between $\underline{\boldsymbol{p}}_1$ and $\overline{\boldsymbol{p}}_0$ in Algorithm 1 that is not discussed in the paper. 
The former is calculated from statistics $\boldsymbol{x}^{(i)}$ using canaries $c_1,\ldots,c_K$ inserted into the training set while the latter is calculated from statistics $\boldsymbol{y}^{(i)}$ depending on canaries $c'_1,\ldots,c'_m$ not in the training set that are independent of the output of the mechanism (the statistics themselves are correlated because they all use the same model). The bias-variance tradeoffs that you discuss in §4 seem to apply only to $c_k$. Are there tradeoffs that make increasing the number of out-canaries $m$ not always beneficial? ### Details - The paper presents exact Bernstein confidence intervals but then uses asymptotic Wilson intervals exclusively in the evaluation. The two are only compared in Figure 2. I would have liked to see an additional comparison in at least some selected empirical results for bounds on $\varepsilon$. - In Eq.1 $D,D'$ should be $D_0,D_1$. This has been fixed in the PDF in the supplemental material. - l.119: "the canary has the freedom" => "the **adversary** has the freedom"? - l.197: "applying probabilistic method" => "applying **a heuristic** method"? - Figure 1 is not referenced. I think it is supposed to support the analysis in the paragraph starting at line 271. - In Figure 3 caption: "require number" => "required number" - l.243: "find §4 that" => "find **in** §4 that" - l.346: "it worth" => "it **is** worth" - l.710: $k$ => $K$ - In Algorithm 3 in steps 4 and 5, and in Algorithm 4 in steps 5 and 6, I believe that there is a missing $\frac{K-1}{K}$ multiplying $\overline{\boldsymbol{\mu}}_2$ as in Equation (17). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss some limitations of the auditing methodology throughout the paper and discuss the need to balance the tightness/computation trade-off in practice. It could be a good idea to complement this with a discussion of how the lower bounds for $\varepsilon$ depend on the canary design and detection strategy, how to interpret the gap between lower bounds and the theoretical guarantee of a mechanism, and the impact of deciding whether a mechanism provides enough privacy based on a lower bound alone. I also believe that given the size and variety of the ML models used in the evaluation, the paper should discuss how the results may extrapolate to other architectures and larger numbers of parameters. Figure 4 (right) shows the method may benefit as the dimension increases; it would be great if this effect could be backed with experimental results on ML models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a thorough review and thought-provoking questions! > **Weakness 1: ... very simple models and small datasets ...** These are standard benchmarks in privacy auditing literature, due to computational limitations. We refer to our response at the end of this rebuttal for a discussion. We note that the purpose of auditing is to check the sanity of the training algorithm, which (in principle) is independent of the model size and the dataset size. In practice, one could use a smaller model and a smaller dataset to audit an algorithm, and, if it passes, deploy a larger model on a larger dataset. We agree that this is not ideal, but perhaps is a practical compromise to speed up the adoption of these tools. > **Weakness 2: Evaluation with Bayesian credible intervals as baselines** Following the reviewer’s suggestion, we obtained the lower Bayesian credible interval from the approach of [Zanella-Béguelin et al.](https://arxiv.org/pdf/2206.05199.pdf) Our approach generally provides _improved estimates_ when compared to this baseline. Please see **Figure 1 of the rebuttal PDF** for detailed results. This suggests an interesting future research direction: adapt Bayesian credible intervals for LiDP auditing to get the best of both worlds. > **Question 1: Bernstein Inequality** We use the analytically simpler bound since our purpose of deriving the Bernstein bound is to understand the trade-offs and their asymptotic dependence. The standard form of Bernstein’s inequality states that the empirical mean $\hat\mu_n$ from $n$ samples of i.i.d. 
$b$-bounded random variables with mean $\mu$, variance $\sigma^2$ is bounded with probability at least $1-\beta$ as $$ \mu - \hat \mu_n \le \sqrt{\frac{2\sigma^2}{n} \log\frac{1}{\beta} + \frac{b^2}{9n^2} \log^2 \frac{1}{\beta}} + \frac{b}{3n} \log\frac{1}{\beta} \le \sqrt{\frac{2\sigma^2}{n} \log\frac{1}{\beta}} + \frac{2b}{3n} \log\frac{1}{\beta} $$ where the last inequality used $\sqrt{a+b} \le \sqrt{a} + \sqrt{b}$ as the reviewer rightly guessed (and similarly for the other tail). > **Question 2: Do you set $m=K$?** Yes, all the simulations and experiments in the paper use $m=K$ throughout. > **Question 3: Are there tradeoffs that make increasing the number of out-canaries $m$ not always beneficial?** Thank you for pointing out this subtlety – you are right! There is no bias-variance tradeoff associated with the number $m$ of canaries used for the null hypothesis (which we refer to as “test canaries” below) and larger $m$ never hurts. We show the plots for the synthetic Gaussian setting in **Figure 2 of the rebuttal PDF**. We find a near-monotonic improvement in the lower bound as $m$ grows larger but the improvement quickly saturates at or before $m = 2K$. The bias-variance trade-off w.r.t. $K$ still holds though, irrespective of the value of $m$. > **Details: Numerical comparison of Bernstein and Wilson intervals in the evaluation** We refer to **Figure 3 of the rebuttal PDF**. While the Bernstein intervals have slightly worse performance than Wilson intervals as expected, LiDP auditing with Bernstein intervals still leads to improvements over DP auditing with Bernstein intervals. > **Typos, particularly the missing factor of $\frac{K-1}{K}$** Great catch, we fixed this and other typos. Thanks! > **Limitations and suggestions on topics to discuss** These are great topics worthy of a detailed discussion. We add a brief note here and a more detailed discussion in the paper. 
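As a quick numerical sanity check of the relaxed Bernstein bound above, here is a minimal Monte Carlo sketch. This is our illustration only, not code from the paper; the Bernoulli distribution, $n$, and $\beta$ are arbitrary choices:

```python
import numpy as np

# Relaxed one-sided Bernstein bound from above:
#   mu - mu_hat <= sqrt(2*sigma^2/n * log(1/beta)) + 2*b/(3n) * log(1/beta)
# The event that the bound fails should have probability at most beta.
rng = np.random.default_rng(0)
n, beta, trials = 500, 0.05, 2000
p = 0.3                                   # Bernoulli(p): b-bounded with b = 1
mu, sigma2, b = p, p * (1 - p), 1.0

bound = np.sqrt(2 * sigma2 / n * np.log(1 / beta)) + 2 * b / (3 * n) * np.log(1 / beta)
means = rng.binomial(1, p, size=(trials, n)).mean(axis=1)
failure_rate = np.mean(mu - means > bound)

assert failure_rate <= beta               # holds comfortably; Bernstein is conservative
print(f"bound = {bound:.4f}, empirical failure rate = {failure_rate:.4f}")
```

The empirical failure rate is well below $\beta$, reflecting that the (relaxed) Bernstein inequality is a conservative tail bound.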
> **Discussion of how the lower bounds depend on the canary design and detection strategy** We provide a detailed discussion on lower bounds in Appendix A. The lower bound for auditing depends on making the right side of Eq. (1) small. This can be done by canary and rejection set design so that the canary is easy to detect. In other words, $\mathbb{P}(\mathcal{A}(D_1) \in R)$ is large and $\mathbb{P}(\mathcal{A}(D_0) \in R)$ is small. Much previous work has focused on this (particularly on designing the canary); we review this literature in Appendix A.1. On the other hand, our work focuses on improving the statistical dependence on the number of trials (as reviewed in Appendix A.2). > **Size and variety of models** Our analysis in Figures 3 & 4 (and Figures 9 & 11 in the appendix) indicates that random gradient canaries in particular would scale well to larger models. The current bottleneck to larger-scale adoption of rigorous privacy auditing is the prohibitive cost of training multiple models (see also the size of datasets/models in the previous literature [[1](https://arxiv.org/pdf/2006.07709.pdf), [2](https://arxiv.org/pdf/2202.12219.pdf), [3](https://www.computer.org/csdl/proceedings-article/sp/2021/893400b183/1t0x9402gY8), [4](https://openreview.net/pdf?id=AKM3C3tsSx3)]). Our work alleviates this cost to some extent but making privacy auditing feasible at larger scales still requires further research. --- Rebuttal Comment 1.1: Title: Thank you for your review! Could you please check the rebuttal? Comment: Thank you again for your thorough review! As the discussion period draws to a close, could you please take a look at the rebuttal and make sure that we have answered all your questions? Thank you!
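To make the detection-probability condition from the rebuttal above concrete: a standard auditing step is to lower-bound $\mathbb{P}(\mathcal{A}(D_1) \in R)$ and upper-bound $\mathbb{P}(\mathcal{A}(D_0) \in R)$ with binomial confidence intervals (e.g., Wilson intervals) and read off an empirical $\varepsilon$ lower bound. The sketch below is our own hedged illustration with made-up counts, covering only the pure-DP ($\delta = 0$) case:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 2.576) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion (z = 2.576 ~ 99%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical audit outcomes: over 1000 runs each, the rejection set R was hit
# 900 times with the canary present (D1) and 50 times without it (D0).
p1_lo, _ = wilson_interval(900, 1000)     # lower bound on P(A(D1) in R)
_, p0_hi = wilson_interval(50, 1000)      # upper bound on P(A(D0) in R)

# Pure DP implies P(A(D1) in R) <= e^eps * P(A(D0) in R), so any eps below
# log(p1_lo / p0_hi) is refuted (at the intervals' confidence level).
eps_lower = math.log(p1_lo / p0_hi)
assert eps_lower > 0
print(f"audited lower bound: eps >= {eps_lower:.3f}")
```

Tighter detection (larger hit-rate gap) or more runs (narrower intervals) both push this lower bound closer to the mechanism's true $\varepsilon$.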
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed and thoughtful comments. We are delighted that the reviewer appreciated the novelty, principled formal analyses, and potential impact of our work. We are excited that the reviewers recognize that our work "solves a clearly stated problem known in the privacy auditing community" with "innovative and novel tools" that are "compatible with future progress in strengthening canaries." We are pleased that the reviewers appreciate the "first formal analysis of previously used heuristics" and "technically solid theoretical foundations" that are supported by "the necessary experiments." Below, we respond to each reviewer individually. All additional numerical comparisons requested by the reviewers can be found in the attached PDF. We would be happy to answer any further questions or comments! Pdf: /pdf/5378e7e6fc713ef3c676cb57a03a3e00d4be404c.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Robust Mean Estimation Without Moments for Symmetric Distributions
Accept (poster)
Summary: This paper studies outlier-robust location estimation for symmetric(-like) distributions. For "semi-product" distributions, the authors show that one can achieve $O(\epsilon \sqrt{\log 1/\epsilon})$ asymptotic error with a polynomial number of samples (and time), and $O(\epsilon)$ asymptotic error when given quasipolynomially many samples (and time poly in the sample set size). For elliptical distributions, the authors show that as long as the scatter matrix has $\Omega(\log d)$ effective rank (hiding $\epsilon$-dependence here), then in quasipolynomial time we can yield asymptotic error $O(\epsilon \sqrt{\log 1/\epsilon})$. Crucially, for the last result, the scatter matrix is not known to the algorithm. Strengths: Prior works in (algorithmic) robust statistics get "stuck" at the $\sqrt{\epsilon}$ asymptotic error when the covariance of the distribution is not known, even when the underlying distribution is guaranteed to have higher moments or might even be sub-Gaussian. This paper identifies semi-product distributions and elliptical distributions as special classes for which further progress can be made. The author(s) adapts the filtering framework and proposes using variants of the Huber loss as the score, as opposed to the basic quadratic score, in the filtering step. This allows the author(s) to beat the $\sqrt{\epsilon}$ error even when covariance/scatter matrices are unknown to the algorithm. Weaknesses: I find that the clarity of some parts of the writing can be improved. In particular, I'm still a bit confused about how the guarantees of this work compare with prior works, re: knowledge assumptions on the algorithms. Some of the technical claims are also a bit over-sold (unless I misread or am misunderstanding). See the "Questions" section for more details. Another weakness, for me, is the motivation: the practical relevance of semi-product and elliptical distributions seems to be rooted in mathematical finance. 
However, at least from the cited works, the relevance of these distributions appears to more or less be "because we can write down theorems". I hope the authors will consider adding a more self-contained discussion on why we should care about these distributions; I think it will make the paper more convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: More important: - Knowledge assumption on algorithm in Theorems 1.4 -- 1.6: one of the selling points from the introduction is that the algorithm doesn't need to know the covariance or scatter matrix. However, it was unclear to me whether this means the algorithm knows absolutely nothing about the scatter, or if it only knows very little (such as an upper bound). The theorems can be a lot more explicit in saying exactly what the algorithm needs to know (or explicitly say that it needs to know absolutely nothing about the scatter matrix). For example, does the algorithm in Theorem 1.4 need to know the parameter $\rho$, which is essentially an upper bound to the covariance, at least in the Gaussian case? This distinction seems pretty important to me. - Result statement of Theorems 1.4 -- 1.5: the statements are of the form "if $n$ is sufficiently large, then we get this almost-sub-Gaussian error rate plus the claimed asymptotic error". However, it seems that the required $n$ lower bound is large enough that the almost-sub-Gaussian error is already dominated by the asymptotic error. Yet the prose following the theorems seem to sell how the theorem gets close to the sub-Gaussian rate (for Theorem 1.4, and for constant $\delta$ and large $C$ for Theorem 1.5). Am I misunderstanding something? - Theorem 1.6: Line 170 claims that Theorem 1.6 is only slightly sub-optimal in the no-corruption part of the error. But that error is multiplicative in terms of the effective rank and $\log d/\delta$, which is much worse than the sub-Gaussian rate of *additive* error. 
This error is equal to the naive analysis of entry-wise median-of-means. In fact, entry-wise median-of-means does not actually need the $d$ in $\log d/\delta$ from the naive union bound, and can get error $\sqrt{\mathrm{Tr}(\Sigma)\log\frac{1}{\delta}/n}$, so the error in Theorem 1.6 is even worse than entry-wise median-of-means. Am I missing the point here? - It is unclear to me that even the sub-Gaussian rate is the correct error rate for the non-robust part of these theorems. For 1-d symmetric distributions, and without corruptions, it was shown by Stone (1975, Maximum Likelihood Estimators of a Location Parameter) that asymptotically one can achieve Fisher information rates, which can be much better than the variance-based sub-Gaussian rate. - The appendix was hard to follow. There are three appendices with filtering algorithms, but Appendix I was titled only "Filtering Algorithm". It also seemed a bit weird that Appendices H and I appeared last, even though they prove Theorem 1.4, which was introduced earlier than Theorems 1.5 and 1.6. In terms of the actual content, there doesn't seem to be much discussion on how the techniques in the appendices relate to each other. I could see that Appendix E generalizes Appendix D, but how does Appendix I (and relatedly, Appendix H) relate to Appendix E (and C)? Less important: - Notation is slightly confusing, with the bold and capital letters. There is no distinction between random scalars and random vectors (e.g. in defining elliptical distributions). - Page 3, it was claimed that the first 2 properties of semi-product distributions allow for accurate non-robust (i.i.d.) location estimation by using entry-wise median. This seems true only when $\rho$ is small? - Footnote 1, does $\alpha$ need to be known by the algorithm? - Footnote 2, there is an $O(\log 1/e)$ which I assume is a typo? Is "$e$" supposed to be "$\epsilon$"? 
- CTBJ22 reference: they show how to do mean estimation robustly when only $1+\alpha$ moments exist, but (1) they do not handle the full adversarial corruption model because of sample-splitting issues and (2) they require an upper bound to the $1+\alpha$ moment in every direction. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your feedback. We hope to address your concerns in this note. We believe your questions and this discussion form a valuable addition to our paper and we will add this. ### Knowledge of Problem Parameters In the setting of the introduction, e.g., Theorems 1.4-1.6, our algorithms only require the corrupted samples as input. In particular, the parameter $\rho$ for semi-product distributions does not need to be known to the algorithm. All relevant parameters can be estimated from the corrupted samples. The case of unknown $\rho$ is described in Appendix G.1. Note that Appendix G.1 is written for elliptical distributions (it corresponds to the case of unknown trace of the scatter matrix in the terms we used in the introduction), but the same argument applies to semi-product distributions (we can find a good upper bound on $\rho$ by working separately with each entry, since the distributions of the entries are one-dimensional symmetric, hence elliptical). In the setting of a Gaussian distribution with covariance matrix $\rho^2 \cdot I_d$, this implies that we do not need to know an upper bound on the covariance matrix. For the more general setting of Definition A.1 (with $\alpha$), either $\alpha$ or $\rho$ needs to be known, but not both. E.g., if we assume $\alpha$ is known, we can estimate $\rho$ from the input. We agree that our writing with respect to this is unfortunate in some places; we will fix this and add a discussion to the main theorems. ### Sub-Gaussian Error Rates and Sample Complexity (Semi-Product Distributions) You are correct that, since we have a lower bound on the number of samples, the non-robust error term in Theorems 1.4 and 1.5 is always dominated by the robust error. We wrote the error in this way to make the dependence on all problem parameters explicit. We acknowledge that this might be confusing and consider omitting it. 
We agree that the discussions in lines 139-140 and 155-156 are indeed confusing. We remark that for Theorem 1.4 we achieve nearly optimal (up to a log factor) sample complexity (necessary to achieve error $\varepsilon\sqrt{\log(1/\varepsilon)}$), and Theorem 1.5 matches what is known for the Gaussian case -- achieving error $O(\varepsilon)$ with quasi-polynomially many samples and in time polynomial in the number of samples. ### Non-Robust Error of Theorem 1.6 (Elliptical Distributions with Unknown Scatter Matrix) We agree that in the non-robust setting, when there are no corruptions, there exist simple algorithms achieving better error than the "non-robust error" of Theorem 1.6. The main challenge is to design algorithms which (nearly) match the error of the non-robust setting and are *also* robust against adversarial corruptions (note that the entry-wise median-of-means estimator suffers from error $\Omega(\varepsilon \sqrt{d})$). This turns out to be a significant challenge. As mentioned in the paper, specialised to Gaussian distributions with unknown covariance matrix, the optimal (robust) rate is $O(\sqrt{\lVert \Sigma \rVert} \cdot (\sqrt{\tfrac {r(\Sigma) + \log(1/\delta)} n} + \varepsilon))$. An (inefficient) estimator achieving this was only recently found [MZ23]. To the best of our knowledge, there is no efficient estimator matching this rate (even ignoring log factors in the "robust error"). We would like to clarify that it is not known how to achieve this rate for general elliptical distributions with unknown scatter matrix, even inefficiently. Note that in [CGR18] the error scales with $d$ instead of the effective rank. ### Sub-Gaussian Error Rate vs Fisher Information Rate For simplicity, consider the 1-D setting. Let $I_D$ denote the Fisher information of a distribution $D$. It is correct that, asymptotically, the "right" error rate for a given symmetric distribution $D$ scales with its Fisher information rate $I_D$. 
In particular, this asymptotic result could let us hope for an "instance-optimal" algorithm, i.e., when run on distribution $D$, its error scales with $I_D$ -- instead of $\inf_{D \in C} I_D$ for some class of distributions $C$. Note that for the classes of distributions we consider the infimum above corresponds to the sub-gaussian rate. Unfortunately, it seems unlikely that such instance-optimal results can be achieved in the finite sample regime, even non-robustly. Consider the following mixture distribution: $(1-\gamma)N(0,1) + \gamma \delta_0$, where $\delta_0$ is a Dirac Delta at 0. Then $I_D = \infty$, but when the number of samples $n \ll 1/\gamma$, we only see samples from $N(0,1)$ and thus, the best rate we can hope for is $1/\sqrt{n}$ -- instead of 0. The example above is taken from the very recent work [GLP23]. They provided an algorithm (in one dimension) using finitely many samples, whose error scales with a quantity related to the Fisher Information but subject to the same constraint outlined above. It would be interesting to obtain similar results in the robust setting. We remark that their techniques rely on likelihood maximization and it is not clear how to make them robust. ## Ordering of Appendices and Other Comments Appendices H and I prove Theorem 1.4, while Appendices D-G prove Theorems 1.5 & 1.6. We chose this ordering since the proofs in H & I are technically more complex and easier to understand if one has read D-G, although this is not necessary. We will also address the minor comments raised by you and add additional motivation for our distributional assumptions. 
## References [CGR18]: Chen, Gao, Ren, "Robust covariance and scatter matrix estimation under huber’s contamination model" [MZ23]: Minasyan, Zhivotovskiy, "Statistically Optimal Robust Mean and Covariance Estimation for Anisotropic Gaussians" [GLP23]: Gupta, Lee, Price, "Finite-Sample Symmetric Mean Estimation with Fisher Information Rate" --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Before I consider changing my score, I have the following further remarks and questions/requests. - Knowledge assumptions: thank you for the clarification. This is important to add to the main paper and to discuss very clearly and explicitly. - Sub-Gaussian error and sample complexity: I'm ok with the error being written explicitly in terms of all the parameters, but I think the "close to sub-Gaussian error" claim does need to be dropped and changed to a weaker claim instead about almost-optimal sample complexity. - Theorem 1.6: I agree that's the algorithmic challenge. I'm just saying that I disagree with the claim that the result is almost optimal, and I think the claim needs to be dropped. - Fisher information rate: I believe this warrants a discussion in the paper (even if it's in the appendix for space reasons). [GLP23] shows that it's possible to achieve smoothed Fisher information rates for the non-robust (and 1-d) setting, which asymptotically converges to the true Fisher information rate. I'm just pointing out that we can perhaps aim for smoothed Fisher information rate for the non-robust part while retaining a good robust-error, and so the optimality claims in the paper needs to be toned down, given the symmetry assumption. - Motivating the distributional assumptions: I understand that space was limited in the rebuttals, but can the author respond with what they plan to say about the concrete motivations? --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your additional comments. 
We will add a detailed discussion about what is known to the algorithm to the paper. We will rephrase the statements regarding the "non-robust error-rate" and its optimality, in particular, (a) changing them to statements about sample complexity and (b) mentioning that it is an exciting open question to go beyond the "Gaussian sample complexity/error rate", and hence, also beyond the guarantees our results achieve, and to aim for a sample complexity/error rate depending on the (smoothed) Fisher Information (similar to [GLP23]). We will also include a discussion about Fisher information similar to the one we had above. Regarding motivations for our distributional assumptions: We would like to recall that the motivation for our paper is mostly theoretical. In particular, our work is guided by the question in lines 66-67: "Do there exist classes of heavy-tailed distributions for which we can (efficiently) achieve the same robust error as for the Gaussian distribution?" A weaker version of this question is: "For what classes of distributions can we (efficiently) achieve robust error $o(\sqrt{\varepsilon})$?" We believe characterizing these distributions is an important foundational question in algorithmic robust statistics. We believe a natural approach to this question (and to the first one) is considering classes of distributions satisfying the following criteria: - they generalize the Gaussian distribution - in the non-robust setting, they behave as nicely as the Gaussian distribution - they are general, i.e., contain many distributions for which the question is still open - they are theoretically well-studied One such class that has been studied before is that of sub-Gaussian distributions. However, additional assumptions (identity covariance or "certifiable" sub-Gaussianity) are needed to go below error $\sqrt{\varepsilon}$ in this case. 
As stated in our submission, our work identifies two other classes satisfying the criteria above, for which error $o(\sqrt{\varepsilon})$ is achievable. 1. Semi-product distributions: This class generalizes the standard Gaussian distribution, and we show how to match the robust error of the Gaussian setting with nearly optimal sample complexity. This was previously known only for sub-Gaussian distributions with identity covariance. 2. Elliptical distributions: This class generalises the Gaussian distribution with unknown covariance, a setting in which error $o(\sqrt{\varepsilon})$ was previously only known to be achievable for the Gaussian distribution itself (due to the algebraic structure of its moments) and for "certifiably" sub-Gaussian distributions. We find it very appealing that there exist classes of distributions that are well-studied in the statistical literature for which this is possible. Kind regards, The Authors
Summary: This paper considers the sample complexity of (robust) mean estimation, possibly without moments. Two typical examples are product distributions and elliptical distributions. The main technical contribution of this paper is to adapt the filtering techniques to the setting with less restrictive moment assumptions. Strengths: The results of the paper are interesting and new, while the idea may not be. The basic argument relies on filtering, which was previously proposed by [DKK+17] and [DKK+19]. The authors of the paper make a good observation on how the idea of filtering (and coupling) can be used with less restrictive moment assumptions (but it is finally replaced by some concentration). The paper is well written, and I enjoyed every minute reading it. Weaknesses: As mentioned, the main concern is that the authors replace the moment conditions with "concentration", i.e. $P(|\eta| \le \rho) > \frac{1}{100}$ and $P(R \le \sqrt{2d}) \ge \frac{1}{100}$. Moreover, from a technical viewpoint, the main components, filtering and coupling (identifiability), are not novel. The time complexity is not explicit (though polynomial). Also, I think the paper lacks (synthetic or real) experiments. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have several questions and comments. (1) In most "useful" applications, some moment assumptions may be needed. For instance, as the authors mentioned, covariance structure may be helpful. I may also suggest using "location" parameter instead of "mean". One case is the product Cauchy, which does not have a mean. (2) p.1, line 16: It may be better to say SQ refers to statistical query (if I don't miss it). (3) p.3, Def 1.2: If I understand correctly, the variables $(|\eta_j|)_{j = 1}^d$ can be dependent. If so, this is worth commenting, since it is (apparently, maybe only slightly) stronger than the usual sense of product measure. 
(4) Theorems 1.4-1.6: In all theorems, it is stated that the time complexity is "polynomial", i.e. $n^{\mathcal{O}(1)}$. However, it is not clear whether it is $n$ or $n^{100}$ (this makes a difference in high dimensions). I understand that an exact value or a bound may be difficult to give. But I hope the authors may clarify this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We thank you for your feedback and are happy that you enjoyed reading our paper. ### Probability Mass Around the Location Parameter vs. Concentration While the condition that some mass is present around the location parameter, e.g., that $\mathbb{P}(\lvert \eta \rvert \leq \rho) \geq 1/100$, may suggest that some concentration phenomena are occurring, we would like to remark that in our setting that need not be the case. In particular, the kind of concentration results needed to apply known filtering results from the bounded moment setting *cannot* be deduced from our assumptions. As a consequence, we need new algorithmic ideas (see also the next section). As an example: In the bounded-moment setting, it is necessary that the empirical moments of the uncorrupted samples concentrate around their population counterparts. In our setting, such concentration may not even occur for the location (let alone for larger powers): Consider the standard Cauchy distribution. This satisfies our definition with $\rho = O(1)$, but averaging $n$ such independent variables again results in a standard Cauchy distribution. Hence, it is unclear how to use the filtering approach for the bounded moment setting to obtain any non-trivial guarantees. We show that by carefully integrating the Huber loss into this approach, non-trivial and, in some cases, nearly optimal guarantees can be obtained. ### Novel Filtering and Identifiability Using the Huber loss As you correctly pointed out, the filtering paradigm is not new and has previously been used, e.g., for robust mean estimation under bounded moment assumptions. The same holds for the proof of identifiability based on moment assumptions (which is, further, based on the squared loss). Our work builds on these ideas and shows how to extend them to the setting where no moments exist, but a symmetry condition holds. 
In particular, we would like to emphasize that, to the best of our knowledge, the combination of these ideas with loss functions different from the quadratic loss is novel to our submission. As explained in the previous section, these new ideas are necessary to achieve non-trivial guarantees under our assumptions. Using this combination, we can (among other things) achieve the following: - Nearly achieve the same guarantees as for the standard Gaussian distribution for two natural symmetric generalizations of the standard Gaussian: Product and spherically symmetric distributions. - For elliptical distributions, achieve the same guarantees as for certifiably sub-Gaussian distributions (under a mild assumption on the effective rank). - Do this in the absence of any assumptions related to sum-of-squares on the underlying distribution. In particular, previously the filtering paradigm, as well as other algorithms, failed to achieve error $o(\sqrt{\varepsilon})$ without assuming the distribution has *certifiably* bounded moments in sum-of-squares (the only exceptions are the Gaussian distribution and distributions with known covariance matrix). ### Correlation and Location Regarding your first question: In the paper we mentioned that the class of elliptical distributions allows for more complex "correlation structures", which are useful in modelling, e.g., financial data. We used this term informally to mean that the coordinates of the vectors might have complex dependencies, indicated by the scatter matrix. Note that the covariance does not need to exist for this, nor do any other moments. We will rephrase to make this clearer. Similarly, we tried our best to distinguish between mean and location (cf. Theorems 1.4 and 1.5) but will thoroughly proof-read our submission again. ### Other Comments We would also like to address your other comments/questions: Yes, Definition 1.2. 
allows for distributions with dependent coordinates, one example (mentioned in the paper) being spherically symmetric distributions. We will be happy to highlight this more. Regarding the running time and practicality: As noted in lines 146-148, we do expect our algorithm to be practical, since it only uses one-dimensional smooth convex minimization $O(nd)$ times and top-eigenvector computations $O(n)$ times. We remark that for the filtering algorithm under bounded moment assumptions, there are versions achieving nearly linear running time for which the code is also available [DHL19]. These works use additional ideas beyond the standard filtering idea. We could imagine that similar techniques might work in our setting. ### References [DHL19]: Yihe Dong, Samuel Hopkins, Jerry Li, "Quantum Entropy Scoring for Fast Robust Mean Estimation and Improved Outlier Detection" --- Rebuttal Comment 1.1: Comment: Many thanks for the detailed explanations. The score remains unchanged.
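The Cauchy example from the rebuttal above (the average of $n$ standard Cauchy variables is again standard Cauchy, so empirical means never concentrate) is easy to check by simulation. This is our illustration, not code from the paper:

```python
import numpy as np

# For a standard Cauchy distribution the quartiles are at -1 and +1, so its
# interquartile range (IQR) is 2. Since the mean of n iid standard Cauchy
# variables is again standard Cauchy, the IQR of the sample mean stays near 2
# no matter how large n is -- averaging buys no concentration.
rng = np.random.default_rng(0)
trials = 10_000

def iqr_of_means(n: int) -> float:
    means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)
    q1, q3 = np.percentile(means, [25, 75])
    return q3 - q1

iqr_single, iqr_avg = iqr_of_means(1), iqr_of_means(500)
assert abs(iqr_single - 2) < 0.2 and abs(iqr_avg - 2) < 0.2
print(f"IQR, n = 1: {iqr_single:.2f}; IQR of mean of 500: {iqr_avg:.2f}")
```

For a distribution with finite variance, the second IQR would shrink by a factor of about $\sqrt{500}$; here it does not shrink at all.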
Summary: This paper studies the robust mean estimation problem without any moment assumptions. Instead, they consider a class of symmetric distributions, that is, semi-product and elliptical distributions. They develop a method based on the Huber loss and the classic filtering technique that can achieve the same error rate as if the underlying distributions were sub-Gaussian. Their sample complexities are nearly optimal (with additional log factors). Strengths: - The idea to use the Huber loss and develop a similar result for the classic filtering method is very smart. I think this idea can be generalized to other settings and can be of independent interest. - The paper is well-structured. The techniques part clearly provides the motivation and is very readable. Weaknesses: - The title is a bit of an overclaim. This paper studies the robust mean estimation problem without moments, but with the constraint that the distribution should be symmetric. I think it would be better to reflect this constraint in the title. - The notation style is not consistent. Based on my understanding, the authors use bold font for random variables. However, in some cases, like line 95, some symbols are not in bold font. - A quick question: I can understand that the Huber loss has many fantastic properties. Compared with the l2-loss, it is more robust against outliers; compared with the l1-loss, it is differentiable everywhere. However, the l1-loss is only not differentiable at 0. Hence, my question is: can we replace the Huber loss with the l1-loss? If not, can you briefly mention what the difficulty is? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see the weakness part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your feedback. Regarding the title of our submission: Reflecting already in the title that we exploit symmetry is a good idea. For example, a candidate for the title could be "Robust Mean Estimation Without Moments for Symmetric Distributions". ### Notation We will also make sure to check that our notation is consistent; thank you for pointing this out. We would like to remark that we use regular (non-bold) font for random variables, some of which have been adversarially corrupted. E.g., in line 95, in Definition 1.1, $Z_1, \ldots, Z_n$ refer to an adversarial corruption of some iid (purely random) sample $\boldsymbol X_1, \ldots, \boldsymbol X_n$. We find this distinction between adversarially corrupted and purely random (uncorrupted) samples particularly useful in our proofs of identifiability. We do acknowledge that we did not specify this and will add clarification to the paper. ### $\ell_1$- instead of Huber loss An approach based on the $\ell_1$-loss should also work, but it requires slightly different assumptions than ours. Concretely, in the semi-product case, one needs to assume that the density is $\Omega(1/\rho)$ at the location. For product distributions we can achieve this by adding Gaussian noise of standard deviation $\Theta(\rho)$ to each entry. For elliptical distributions, a similar density assumption is also required; however, the approach of adding Gaussian noise to each entry does not work: the resulting distribution might not be elliptical anymore. To illustrate why the condition on the density is necessary, consider the one-dimensional mean estimation problem for the distribution that is uniform over the (discrete) set $\{\mu-1, \mu + 1\}$. Every $x\in [\mu -1,\mu + 1]$ minimizes the $\ell_1$-loss; to find the correct value we need an "averaging". The Huber loss does this automatically. If we have $\Omega(1)$ density at $\mu$, then the $\ell_1$ loss minimizer is close to $\mu$. 
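The two-point example above can be made concrete with a short script. This is our illustration, assuming a Huber transition point $\delta = 2$ on the order of the spread (with $\delta$ too small, the Huber loss would also be flat on part of the interval):

```python
import numpy as np

def huber(r: np.ndarray, delta: float = 2.0) -> np.ndarray:
    """Huber loss: quadratic for |r| <= delta, linear beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

# Balanced sample from the uniform distribution over {mu - 1, mu + 1}, mu = 0.
samples = np.array([-1.0, 1.0] * 50)

# The l1 loss is flat on [-1, 1]: every point of the interval is a minimizer ...
l1 = lambda x: np.abs(samples - x).sum()
assert abs(l1(0.0) - l1(0.7)) < 1e-9

# ... while the Huber loss is quadratic near the data and picks out mu = 0.
xs = np.linspace(-2.0, 2.0, 4001)
huber_losses = np.array([huber(samples - x).sum() for x in xs])
x_star = xs[np.argmin(huber_losses)]
assert abs(x_star) < 1e-3
print(f"l1 minimizers: all of [-1, 1]; Huber minimizer: {x_star:.4f}")
```

On this sample the per-pair Huber loss is $1 + x^2$ for $x \in [-1, 1]$, so the minimizer is unique at $\mu = 0$, matching the "automatic averaging" point in the rebuttal.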
--- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Dear authors, Thanks for your detailed response, which addresses all of my concerns. I especially thank your explanation of l1-loss. I will increase my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We are happy that our response addressed all of your concerns and thank you for raising your score.
Summary: This work studies robust mean estimation for symmetric distributions. There may possibly be no moments (not even the first moment, hence the term location instead of mean), so previous efficient algorithms that rely on strong distributional assumptions may not work in the heavy-tailed setting. The main result is that statistical guarantees similar to those in the Gaussian setting can be obtained for $\rho$-semi-product distributions, where the semi-product distributions form a class that covers slightly more than the product distributions. For instance, semi-product distributions include elliptical distributions, which are not product distributions. For the case of a known covariance matrix (or scatter matrix for elliptical distributions), the guarantee obtained nearly matches the best-known guarantees for Gaussians. For the unknown case, they obtain an error guarantee of $O(\epsilon^{1-\frac{1}{2k}})$ using $\tilde{O}(d^k)$ samples. The key approach is to generalize the filtering idea for robust mean estimation with a Huber loss instead of a quadratic loss in order to be able to handle distributions without second moments. Once the Huber loss is properly incorporated into the filtering technique, the necessary certificate for filtering can be obtained algorithmically without using sum-of-squares proofs for symmetric distributions. Strengths: - This work studies a setting which (partially*) generalizes previous works to handle heavy-tailed distributions in which first or second moments may not exist. Here, they incorporate the Huber loss into the common filtering technique that has been used in much of the recent robust learning literature. This in itself is novel. Furthermore, by exploiting symmetry, the filtering technique is made more algorithmically feasible without relying on SoS approaches. 
- Through studying heavy-tailed symmetric distributions, such as elliptical distributions, in the case of unknown covariance, the paper obtains error bounds of $o(\sqrt{\epsilon})$ that were not known for general subgaussian distributions. - The main result obtains strong guarantees that nearly match those for Gaussians. *refer to Weakness 1 Weaknesses: 1. Only a minor weakness: though the abstract motivates the work by stating that previous efficient estimation algorithms assume strong distributional conditions, the symmetry assumption also seems possibly strong. While it is able to incorporate heavy-tailed distributions, it also limits itself in generality (as ever-so-slightly altered Gaussians are not symmetric but have strong concentration properties to exploit). 2. While I enjoyed the content of the paper, the main body seems abruptly cut off without a conclusion or final algorithmic overview. The presentation would be greatly improved with a final algorithmic description along with a retrospective conclusion. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - For the main theorems, there is an upper bound on $\epsilon$. What is the breakdown point here? If C < 1/2, what is the reason the proposed approach breaks down? - In Theorem 1.6, the intuitive definition of $k$ is unclear. Some exposition on $k$ would be helpful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: No limitation addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We thank you for your feedback. We are happy that you enjoyed reading the content of our paper. We agree that including a succinct description of our algorithm, e.g., as an algorithm box, as well as a conclusion at the end of the paper would improve the presentation. We will add this. ### Breakdown Point The breakdown point of our algorithm depends on how much probability mass is located near the location parameter. To illustrate this, let us consider semi-product distributions (Definition 1.2). We claim that the fraction of corruptions cannot exceed $1/100$. Specifically, consider the mixture distribution $(1-\tfrac 1 {100}) N(\mu,\sigma^2) + \tfrac 1 {100} N(\mu,1)$, for $\sigma^2$ very large (like $2^n$ or larger). Note that this distribution satisfies the assumptions of Definition 1.2 with $\rho = \Theta(1)$. Then by replacing all samples from $N(\mu,1)$ with new independent samples from $N(\mu,\sigma^2)$, the input distribution becomes $N(\mu,\sigma^2)$. In this case, even the information-theoretic error scales with $\sigma$ and hence can be unbounded. We remark that in the appendix, we consider a more fine-grained version of Definition 1.2 (Definition A.1). In particular, it states that a distribution is $(\alpha, \rho)$-semi-product if Definition 1.2 holds but with the second item replaced by $\mathbb{P}(\lvert \boldsymbol \eta_j\rvert \leq \rho) \geq \alpha$. In this case, the above discussion shows that the fraction of corruptions cannot exceed $\alpha$. For example, in this setting, the analysis of our polynomial time algorithm (Theorem A.1, generalizing Theorem 1.4) requires $\varepsilon \leq \alpha^3 / C$ for some large enough constant $C$. We did not attempt to optimize this dependence on $\alpha$. ### On the Role of $k$ in Theorem 1.6 $k$ can be seen as a parameter which allows a tradeoff between (a) the number of samples and running time and (b) accuracy guarantees. 
It is analogous to the number of moments used in the bounded-moment setting, and these guarantees are similar to the guarantees of the SoS-based filtering algorithm for distributions with certifiably bounded $2k$-th moment. We will clarify it in the final version of the paper. --- Rebuttal Comment 1.1: Title: Reply to Authors Comment: My questions have been answered. Thank you for the response.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Memory-Constrained Algorithms for Convex Optimization
Accept (poster)
Summary: The main result of the paper is a theorem that establishes a trade-off between oracle complexity and memory usage for convex optimization problems. More specifically, they show that given a convex optimization problem in R^d, for any p between 1 and d there is a deterministic first-order algorithm that solves the feasibility problem for accuracy $\epsilon \leq 1/\sqrt{d}$ using $O((d^2/p) \ln(1/\epsilon))$ bits of memory with $O((C (d/p) \ln(1/\epsilon))^p)$ calls to the separation oracle. In other words, by decreasing the number of bits of memory by a factor of p, one needs to call the oracle a number of times that is exponential in p. The base of the exponential function is roughly $(d/p)\ln(1/\epsilon)$. Strengths: The paper seems to make relevant advances in the understanding of tradeoffs between memory and number of oracle calls in convex optimization algorithms. Weaknesses: The technique is heavily based on Vaidya's cutting-plane method. It is not entirely clear what the advancements really are with respect to the original technique. There are some minor presentation issues, such as statements of results with undefined parameters or parameters that are defined very far away. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Please make Theorem 3.2 more self-contained by briefly describing two parameters before the statement of the theorem: d and omega. It should not be necessary to read a big portion of the paper to be able to understand the meaning of the main statement (Theorem 3.2). For instance, you could add a line before the theorem stating that you are dealing with optimization problems in R^d. Same thing for omega. I only see it defined in the footnote of page 4. Although the notation is standard, it took some time to find the precise meaning of omega. So it would be good to write its meaning either in the theorem or in the text just before it. 2) What are the differences between your method and Vaidya's method? 
Is it the case that the only difference lies in a recursive application of Vaidya's method? Do you modify the method somehow? This should be better discussed in the introduction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In my opinion the paper discusses the limitations of the techniques in a fair way. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading our paper carefully and for the positive views! For your first point, we agree that our algorithm relies heavily on Vaidya's cutting-plane method [1] and its variant proposed in [2]. We will provide more detailed comparisons between our algorithm and (the variant of) Vaidya's methods in the introduction. Here, we would like to emphasize advancements from the following two perspectives. - We use Vaidya's methods as dimension reduction techniques. In particular, our algorithm divides the variables into blocks and applies Vaidya's methods recursively to construct approximate separation or subgradient oracles for problems with reduced dimensions. We believe that there is some novelty in this recursive dimension reduction step. - An important and novel technical step in the convergence analysis is to ensure that the precision of the computed approximate separation oracles is sufficient, which is crucial given the recursive nature of our algorithms (the errors accumulate at each layer of recursion). A natural approach to having a separation oracle for a parent problem is to ensure that Vaidya's method on the sub-problem converges to the ``minimizer'' of the sub-problem. In fact, with this approach, the precision required is multiplicative in the depth of the recursion and in the end, the required memory to store such high-precision sub-gradients (for low-level sub-problems) blows up and does not yield improvements over standard cutting-plane methods. Instead, Algorithm 2 performs Vaidya's method to high accuracy on all sub-problems (even though the sub-gradients used are of lower quality): on the sub-problem, the iterates effectively do not converge to the true minimizer of the sub-problem, but the analysis shows that the corresponding aggregated subgradient at the higher level is still of sufficient quality (the deterioration becomes additive in the number of recursive levels). 
We did not emphasize this in the main body for the sake of simplicity, but this is the reason the proof requires estimating approximation errors along the complete computation path, instead of using a level-by-level recursive approach. - We take into account memory throughout computations. To be precise, for the original Vaidya's method in [1], only the weaker notion of memory constraint (Definition 2.1) applies. By adopting the variant of Vaidya's method in [2] (which has more penalty terms in the potential function), our algorithm is proven to satisfy the stronger notion of memory constraint (Definition 2.2), i.e. the memory is constrained not just between oracle calls but also throughout computations. Thank you for pointing out the presentation issues, we apologize for any confusion! We will clarify $d$ (the dimension of the problem) and $\omega$ (the exponent of matrix multiplication, $\omega<2.373$) in Theorem 3.2. [1] Kurt M Anstreicher. "Towards a practical volumetric cutting plane method for convex programming". In: SIAM Journal on Optimization 9.1 (1998), pp. 190–206. [2] Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong. "A faster cutting plane method and its implications for combinatorial and convex optimization". In: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science. IEEE. 2015, pp. 1049–1065.
Summary: The paper studies the problem of memory-constrained convex optimization. In this setting, the algorithm is given oracle access to the gradients of an unknown convex function and is tasked with finding an approximate minimizer. The additional constraint is that the algorithm is constrained to use as few bits of memory as possible. Though convex programming is a classical area, understanding the memory requirements of algorithms has only recently been considered and has become an active area of research. When considering memory, two standard algorithms witness a tradeoff between memory and oracle complexity: gradient descent makes $1/\epsilon^2$ queries and uses $d \log(1/\epsilon)$ bits of memory, while the center-of-gravity method makes $d \log(1/\epsilon)$ queries and uses $d^2 \log^2(1/\epsilon)$ bits of memory. A recent line of study has been trying to understand the tradeoff between these complexities. The paper presents a recursive implementation of a cutting-plane method which uses $O((d^2/p) \log(1/\epsilon))$ memory while making $O(((d/p) \log(1/\epsilon))^p)$ queries. Here $p \in [d]$ is a parameter of the algorithm. In particular, the paper shows that for $\epsilon = d^{-d}$ the algorithm presented has better query complexity than gradient descent while using the same amount of memory. Strengths: The question considered by the paper is very interesting, as memory constraints are a very natural "simplicity" condition on algorithms. To the best of my knowledge this is the first algorithm that improves on the memory complexity over the two benchmark algorithms. Furthermore, the framework of recursive partitioning seems very interesting and could lead to improvements to other optimization algorithms. Weaknesses: The regime of parameters in which the algorithm improves over gradient descent is rather quaint. In this regime both algorithms make $d^{O(d)}$ calls to the oracle and the current algorithm improves the constant in $O(d)$ in the exponent. 
While this is of mathematical interest and does show the non Pareto optimality of gradient descent, justifying the interestingness of this range of parameters seems difficult. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - More thorough description of the algorithm along with the intuition for why it works would be very helpful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive view and we really appreciate your suggestions! We agree that our algorithm improves over gradient descent only in the regime $\epsilon\leq \frac{1}{d^{\mathcal O(d)}}$, which might be smaller than the regimes considered in some previous works on the oracle complexity of convex optimization/feasibility problems. To justify studying this regime, we would like to point out here that optimization problems in low-dimensional regimes have a growing literature [1][2]; also, in the specific literature on memory/oracle-complexity tradeoffs, some papers specifically study super-polynomial accuracy regimes, e.g. [3] considers accuracies $\epsilon\leq \exp(-\log^4 d)$ (to appear at FOCS 2023 according to the author's website). In addition, we emphasize that our algorithm shows a non-trivial memory-query trade-off in the regime $\log(\frac{1}{\epsilon})\gg\log(d)$, and outperforms the center-of-mass method in the standard regime $\epsilon\leq\frac{1}{\sqrt{d}}$, which are novel results that might help understand the memory-query landscape better. Following your suggestions, we will provide a more detailed description of and intuition behind the algorithm. Here in Figure 1 (in the rebuttal pdf), we use a $2$-dimensional feasibility problem to illustrate the geometry of the recursive step, where an approximate separating hyperplane to the problem ``projected'' to the $x$-axis is constructed using a separation oracle for the $2$-dimensional problem. To be precise, the target is $\boldsymbol p^* = (p_x^*,p_y^*)$, and we use two blocks (i.e. $p=2$). Suppose at a step of the algorithm, the current value of the $x$ coordinate is $c$. We then aim to find an approximate separating hyperplane between $x = p_x^*$ and $x = c$: Algorithm 2 first runs Algorithm 1 (i.e. the memory-constrained Vaidya method) to find two separating hyperplanes (the two blue hyperplanes). 
Lemma 4.1 then guarantees the existence of a convex combination of the 2 blue hyperplanes (the red hyperplane) which is approximately parallel to the $y$-axis and thus can serve as an approximate separating hyperplane between $x = p_x^*$ and $x = c$. [1] Vavasis, Stephen A. "Black-box complexity of local minimization." SIAM Journal on Optimization 3.1 (1993): 60-80. [2] Bubeck, Sébastien, and Dan Mikulincer. "How to trap a gradient flow." Conference on Learning Theory. PMLR, 2020. [3] Chen, Xi, and Binghui Peng. "Memory-Query Tradeoffs for Randomized Convex Optimization." arXiv preprint arXiv:2306.12534 (2023). --- Rebuttal Comment 1.1: Comment: Thanks for the response. I maintain my positive score.
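The regime discussed in this rebuttal can be checked with a quick back-of-the-envelope computation. The sketch below is our own illustration (not code from the paper); the constant `C = 10` is a hypothetical stand-in for the unspecified absolute constant in the bound. At accuracy $\epsilon = d^{-d}$ and $p = d$ (i.e. optimal memory), it compares the logarithms of the query counts of gradient descent, $1/\epsilon^2$, and of the recursive method, $(C\ln(1/\epsilon))^d$:

```python
import math

# Compare log query counts at eps = d^{-d}, p = d (illustrative only;
# C = 10 is a hypothetical stand-in for the absolute constant).
def log_queries_gd(d):
    ln_inv_eps = d * math.log(d)   # ln(1/eps) for eps = d^{-d}
    return 2.0 * ln_inv_eps        # ln(1/eps^2)

def log_queries_recursive(d, C=10.0):
    ln_inv_eps = d * math.log(d)
    return d * math.log(C * ln_inv_eps)  # ln((C * ln(1/eps))^d)

# d * ln(C * d * ln d) ~ d ln d eventually beats 2 * d * ln d,
# so for large d the recursive method makes exponentially fewer queries.
gap = log_queries_gd(1000) - log_queries_recursive(1000)
```

Both counts are of the form $d^{O(d)}$, which matches the reviewer's observation: the improvement is in the constant in the exponent, visible here as a gap between the two logarithms that grows with $d$.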
Summary: This paper studies solving feasibility problems, and hence optimization problems, focusing on the trade-offs between the number of oracle calls and use of memory. The precise problem is to find a point in a convex set inside the unit cube, given access to a separation oracle, which reports if a query point is in the convex set, or otherwise returns a hyperplane separating the input query point and the convex set. By segmenting the variables/dimensions and working on them sequentially, and using the variant of Vaidya's cutting-plane method by Lee, Sidford, and Wong, this paper presents a new recursive algorithm with better trade-offs between the number of oracle calls and use of memory. In particular, this algorithm uses the same optimal memory but makes fewer oracle queries than gradient descent in the low-error regime $\epsilon \le \frac1{\sqrt d}$. In addition to algorithms, this paper slightly improves on existing lower bounds for the trade-offs between accuracy $\epsilon$ and dimension $d$ with a more careful analysis. Strengths: This paper adapts an existing algorithm with a recursive decomposition to improve the trade-offs between the number of oracle calls and memory usage, beating gradient descent in the very low error regime. Weaknesses: The improvement appears incremental, and does not give new insights or understanding for solving feasibility or optimization problems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there geometric intuition (better than the repeated convexity argument applied to blocks of variables) behind the recursive application of the cutting-plane method? Arguably, breaking up variables into blocks, while improving bounds, does not give new geometric understanding. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This theoretical paper does not have broader societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive view and the questions! Regarding the weakness, although our work does not provide a full understanding of the memory-query landscape, we emphasize the following three contributions. - To the best of our knowledge, prior to this work, no positive results or algorithms were known to improve (in the memory-query landscape) over the two most foundational algorithms for optimization: gradient descent (GD) and cutting-planes (CP). Our results demonstrate that in super-polynomial accuracy regimes, it is indeed possible to improve over these algorithms (dark green region in Figure 1). This is the first result making some progress on the algorithmic side, as opposed to lower bounds, for which the literature is better established [1,2,3]. - We further provide a class of algorithms giving a positive trade-off between memory and oracle complexity. This enables an optimizer to specify a desired memory usage (at the price of oracle complexity), through the parameter $p$, instead of restricting the choice to either linear or full (quadratic) memory. - A major question was whether one can Pareto-improve over GD and/or CP. Lower bounds [2,3] have already shown that one cannot improve over CP, but in this work, we show somewhat surprisingly that GD does not Pareto-dominate in some regime (exponential accuracy). Additionally, our algorithms work in the face of a stronger form of memory constraint than described in the literature: they are limited in memory not only between iterations, but also for within-iteration computations. For your question about the geometric intuition, we agree that the description of the algorithms and their performance is a bit abstract, and we will provide more explanations in the paper. 
We give in Figure 1 from the rebuttal pdf a geometric illustration in dimension 2 of the recursive step, where an approximate separating hyperplane to the problem ``projected'' to the $x$-axis is constructed using a separation oracle for the $2$-dimensional problem. To be precise, the target is $\boldsymbol p^* = (p_x^*,p_y^*)$, and we use two blocks (i.e. $p=2$). Suppose at a step of the algorithm, the current value of the $x$ coordinate is $c$. We then aim to find an approximate separating hyperplane between $x = p_x^*$ and $x = c$: Algorithm 2 first runs Algorithm 1 (i.e. the memory-constrained Vaidya method) to find two separating hyperplanes (the two blue hyperplanes). Lemma 4.1 then guarantees the existence of a convex combination of the 2 blue hyperplanes (the red hyperplane) which is approximately parallel to the $y$-axis and thus can serve as an approximate separating hyperplane between $x = p_x^*$ and $x = c$. [1] Marsden, Annie, et al. "Efficient convex optimization requires superlinear memory." Conference on Learning Theory. PMLR, 2022. [2] Blanchard, Moïse, Junhui Zhang, and Patrick Jaillet. "Quadratic memory is necessary for optimal query complexity in convex optimization: Center-of-mass is Pareto-optimal." Conference on Learning Theory. PMLR, 2023. [3] Chen, Xi, and Binghui Peng. "Memory-Query Tradeoffs for Randomized Convex Optimization." arXiv preprint arXiv:2306.12534 (2023). --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, and for the geometric explanation and illustration in 2-dimension. It does help me understand better some of the intuition (though I still find it harder to geometrically understand partitioning of variables, in that the segmenting of dimensions may seem somewhat arbitrary, as a way to make the algorithm and analysis go through, but not something "forced by nature".) I keep my score.
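The combination step in this 2-D picture can be sketched numerically. The toy below is our own illustration, not the paper's algorithm (and where the paper's combined hyperplane is only approximately axis-parallel, the toy makes it exact): given two separating hyperplanes whose normals have $y$-components of opposite sign, a suitable convex combination of the normals cancels the $y$-component, leaving a hyperplane that separates along the $x$ coordinate alone.

```python
# Toy version of the convex-combination step (illustrative only): choose
# the weight lam so the y-component of lam*n1 + (1-lam)*n2 vanishes.
def combine_normals(n1, n2):
    assert n1[1] * n2[1] < 0, "y-components must have opposite signs"
    lam = n2[1] / (n2[1] - n1[1])  # lies in (0, 1) by the sign condition
    return (lam * n1[0] + (1 - lam) * n2[0],
            lam * n1[1] + (1 - lam) * n2[1])

# Two separating-hyperplane normals with opposite-sign y-components
# (hypothetical values, standing in for the two "blue" hyperplanes).
combined = combine_normals((0.8, 0.6), (0.9, -0.4))
```

Because the weight is a convex combination, the resulting hyperplane is still a valid separator whenever both inputs are, which is the property the recursion relies on.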
Summary: The authors propose an algorithm to solve a feasibility problem with constrained memory while minimizing the number of calls to the separation oracle. The memory complexity and the number of calls to the separation oracle are parametrized by a parameter $p$. They get $O((d^2/p) \log(1/\epsilon))$ bits of memory with $O(((d/p)\log(1/\epsilon))^p)$ calls to the separation oracle. Their result basically improves the case where $\epsilon$ is much smaller than $1/d$. They also provide a lower bound that shows the dependence on $\log(1/\epsilon)$ for several settings. The algorithm is based on Vaidya's cutting-plane algorithm. Strengths: Contains new contributions for the regime where $\epsilon$ is much smaller than $1/d$. The lower bound is also an interesting and new contribution and potentially of independent interest. Overall, the paper is well written. Weaknesses: For me, the only weakness is that as you improve the memory, you increase the number of calls exponentially, which does not sound so intuitive to me. But still, the results are interesting. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: no limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive view of the paper. The number of oracle calls indeed grows exponentially with the parameter $p$, due to the recursive nature of the algorithm. We emphasize however that this is the first class of algorithms providing a positive result on trading off memory and oracle complexity in convex optimization, while the question of understanding the memory/oracle-complexity trade-off was well-identified in the literature (in its strong form, this was formulated in a COLT 2019 open problem [1]). Our results do not fully describe the memory/oracle-complexity landscape but provide significant clarifications. We first show that it is in fact possible to improve in some regime over the two fundamental algorithms in optimization: gradient descent and cutting-planes. Second, while it was known that cutting-planes are Pareto-optimal, we show somewhat surprisingly that in some exponential regime, gradient descent is not. Our algorithms allow an optimizer to specify a memory usage (at the cost of oracle complexity) instead of being restricted to either linear or full (quadratic) memory. Prior to this work, results were only obtained for lower bounds, i.e., impossibility results [2,3,4], and novel ideas seemed to be required to make progress on the algorithmic side. In particular, to the best of our knowledge, previous approaches for optimization in low dimensions always showed a significantly stronger curse of dimensionality [5] than, say, our $(d/p\ln\frac{1}{\epsilon})^d$ oracle dependence. Here we propose a recursive approach to solve low-dimensional sub-problems together with a careful analysis, which we believe are novel. [1] Woodworth, Blake, and Nathan Srebro. "Open problem: The oracle complexity of convex optimization with limited memory." Conference on Learning Theory. PMLR, 2019. [2] Marsden, Annie, et al. "Efficient convex optimization requires superlinear memory." Conference on Learning Theory. PMLR, 2022. 
[3] Blanchard, Moïse, Junhui Zhang, and Patrick Jaillet. "Quadratic memory is necessary for optimal query complexity in convex optimization: Center-of-mass is Pareto-optimal." Conference on Learning Theory. PMLR, 2023. [4] Chen, Xi, and Binghui Peng. "Memory-Query Tradeoffs for Randomized Convex Optimization." arXiv preprint arXiv:2306.12534 (2023). [5] Ma, Yi-An, et al. "Sampling can be faster than optimization." Proceedings of the National Academy of Sciences 116.42 (2019): 20881-20885.
Rebuttal 1: Rebuttal: In this document, we provide in Figure 1 a geometric illustration of the recursive step of our algorithm, where an approximate separating hyperplane to the parent problem (in the recursive hierarchy) is computed (Reviewers U1uo and H4w2). Pdf: /pdf/6a9cadf2970d5355796ade9aa4a6555513894a79.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper discusses a parameterized family of memory-constrained algorithms for convex optimization and compares the memory and oracle call complexity of this family with existing algorithms. The authors show that with this parameterized family, one can trade between oracle call and memory complexity, and they claim that this is the first work in this direction. Strengths: The paper is very well written and accessible even to readers slightly unfamiliar with the topic (but see also below for some comments). The clarity is particularly facilitated by the excellent setup in Section 2, and also the overview of related work in Section 2.1 is exemplary. The paper seems theoretically quite strong (although I admittedly did not check the proofs in the supplementary material) and the algorithms are easy to understand. Figure 1 excellently summarizes the significance of the paper in the sense that the known upper and lower bounds are non-trivially improved upon. Weaknesses: One weakness of the paper is the introduction, which requires (in my opinion) more prior knowledge than what may be available from a non-expert. For example, the concept of a separation oracle in line 33 did not become clear to me until line 99, and similarly for the feasibility problem in line 42. In line 36 it is said that center-of-mass methods are "quadratic" in memory, from which I deduced quadratic in $d$ (but not in $-\log\epsilon$); then the statement that the algorithm improves upon center-of-mass for $p=1$ in line 54 is confusing. This is not a critical weakness, though, as the rest of the paper is quite accessible. Another concern is whether the content of the paper fits well the scope of the venue. This is not immediately clear to me and shall be discussed in the discussion phase. ## Minor: - line 115: "can also be carried OUT"? 
- line 122: "all known lower boundS" - line 139: "one needs to store $O(d)$ cuts instead OF $O(...$ Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In Def. 2.2, you write that the "the contents of $Q$ and $N$ are $q$ and $n$, respectively" and that "$R$ must contain at least $n$ bits". Is the latter the same $n$ as the former? - What is the meaning of the notation $\vee$ and $\wedge$ in Corollary 3.1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I do not see any immediate negative societal impacts, hence it is fine that the authors did not mention any. The limitations of the study are clearly discussed in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive view of our paper and its contributions, and for your useful remarks. We will take care to resolve all minor issues in the revised manuscript. We will also make sure to clarify the parts of the exposition mentioned. With respect to the conference, NeurIPS has a tradition of publishing optimization papers that provide theoretical insights. This paper contributes to the overall goal of understanding resource constraints for optimization algorithms and designing efficient algorithms for these tasks, which we believe is of interest to the NeurIPS community in general, and its optimization sub-community in particular. In Def 2.2, the two $n$ are indeed the same: roughly speaking, before making a query to the oracle, the algorithm first needs to specify a bit precision for the response. That precision $n$ is stored in the placement $N$, which is read by the oracle. The oracle then writes the subgradient rounded to a precision of $n$ bits within the response placement $R$, which should therefore contain at least $n$ bits. Thank you for noting that we did not define the notations $\land$ and $\lor$, which act as minimum and maximum operators, respectively. We will add a formal definition in the revised manuscript. For clarity, Corollary 3.1 states that choosing $p\leq \mathcal O(\min(d,\frac{\ln\frac{1}{\epsilon}}{\ln d}))$ yields a tradeoff with cutting-plane methods, achieving memory $\mathcal O(\max(d^2\ln d,d\ln\frac{1}{\epsilon}))$ (we recall that $d\ln\frac{1}{\epsilon}$ memory is necessary) and oracle complexity $\mathcal O(\min(\frac{1}{\epsilon^2},(C\ln \frac{1}{\epsilon})^d))$. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Dear authors, Thanks for your answers to my questions. Whether the paper fits the venue is anyway a decision that has to be made by the area/program chairs -- but, liking your paper, I would be happy to see that their decision is positive!
Summary: The authors study the memory vs. oracle-calls trade-off for convex optimization. The examples of problems they consider are: To find a point in the $d$-dimensional unit ball up to error $\epsilon$, or to minimize $1$-Lipschitz convex functions over the unit ball up to error $\epsilon$. They provide a family of algorithms parametrised by $p \in [d]$ that solve the problem using $O(d^2 \log (1/\epsilon) / p)$ bits of memory and make $(C d \log (1/\epsilon)/p)^p$ oracle calls, where $C$ is some absolute constant. More precisely, the first result (the case $p=1$) is the proof that a memory-constrained Vaidya’s method has (optimal) oracle complexity $O(d\log(1/\epsilon))$ and memory complexity $O(d^2\log(1/\epsilon))$, which is a $\log(1/\epsilon)$-factor improvement over the state of the art. For $p=d$, their algorithm has oracle complexity $(C\log (1/\epsilon))^d$ and (optimal) memory complexity $O(d\log(1/\epsilon))$, which improves over the state-of-the-art oracle complexity $O(1/\epsilon^2)$ if $\log(1/\epsilon) \gtrsim d\log d$. Strengths: The paper provides new algorithms with a better memory vs. oracle-calls trade-off than the state of the art. The paper is well-written, and the contribution is clear. The algorithms are non-trivial and their analyses are sophisticated. Weaknesses: My main concern is that the results only slightly improve over the state of the art. The first result improves over the state of the art only by a $\log(1/\epsilon)$-factor. It is not surprising, since there are lower bounds that (for the feasibility problem) imply that $d^{2-\Omega(1)}$ memory complexity can only be achieved with $> d^{1+\Omega(1)}$ queries, but I am not sure if this result itself is strong and/or interesting enough to justify acceptance in NeurIPS. The importance of the second result (with $p>1$) is not very clear to me. Let the memory complexity of the memory-constrained Vaidya’s method be $M$.
In order to achieve memory complexity $o(M)$ (even $M/\log \log M$), one needs to increase the oracle complexity by a factor that is super-polynomial in $M$. I can hardly imagine settings when it might be reasonable. For comparison, there are some problems for which information-computation trade-offs are known (e.g. planted clique), and for those problems even a minor improvement over the state-of-the-art might be important, since there are basically only two options: Either we solve the problem (and perhaps spend a lot of computational resources), or do not solve it at all. Here it is not the case: the algorithm designer can always choose to use the memory-constrained Vaidya’s method, or to slightly decrease memory complexity and significantly increase the number of oracle calls (and hence the running time), and it seems to me that the huge price of memory complexity is not adequate here. One could argue that the trade-off that is not useful in practice can be interesting if it clarifies memory vs oracle complexity picture of the problem. But the upper bounds from the paper are very far from the currently known lower bounds, and it does not improve a high-level understanding of the complexity picture. For example, it is not clear if super-polynomial oracle complexity is necessary for sub-quadratic memory complexity. Considering these strengths and weaknesses, I recommend borderline reject. UPDATE: After reading the rebuttal I decided to increase the score from 4 to 6. Please see the comments below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have a question related to non-efficient algorithms: Since to match the optimal memory complexity your algorithm requires $p=d$ (and so the running time is exponential in $d$), it makes sense to compare your algorithms with inefficient approaches. 
Naive brute force for the feasibility problem (that only uses a weaker oracle for an indicator of the set, not a separation oracle) seems to require $(C/\epsilon)^d$ oracle calls, which is too large, but maybe there exists some other simple brute force algorithm that uses the separation oracle and has comparable guarantees to your algorithm. Did you by chance think about such an algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
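To make the trade-off summarized in this review concrete, here is a tiny numerical sketch of the complexity formulas quoted above (illustrative only: the constant `C`, the dimension, and the accuracy are arbitrary choices, not values from the paper, and all big-O constants are dropped):

```python
import math

def memory_bits(d, eps, p):
    # O(d^2 log(1/eps) / p) bits of memory (constants dropped)
    return d * d * math.log(1 / eps) / p

def oracle_calls(d, eps, p, C=2.0):
    # (C d log(1/eps) / p)^p oracle calls; C stands in for the
    # unspecified absolute constant from the reviewed result
    return (C * d * math.log(1 / eps) / p) ** p

d, eps = 20, 1e-6
# p = 1 is the memory-constrained Vaidya regime: quadratic memory,
# optimal-order oracle complexity; p = d gives optimal-order linear
# memory at the price of (C log(1/eps))^d oracle calls.
for p in (1, d):
    print(f"p={p}: memory ~ {memory_bits(d, eps, p):.0f} bits, "
          f"oracle calls ~ {oracle_calls(d, eps, p):.2e}")
```

The two endpoints of the loop correspond to the $p=1$ and $p=d$ cases discussed in the review: moving $p$ up trades a factor of $p$ in memory for an exponent of $p$ in oracle calls.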
Rebuttal 1: Rebuttal: Thank you for your comprehensive summary and your positive view of our algorithms and analysis. We are very happy to provide some answers to the concerns you raised. 1. We believe that this work provides some important clarifications to the memory/oracle complexity landscape. We emphasize that, to the best of our knowledge, prior to this work, no positive results or algorithms were known to improve in any way over the two most foundational algorithms for optimization: gradient descent (GD) and cutting-planes (CP) (including ellipsoid methods). The specific question of whether it is possible to improve these algorithms in the memory/oracle complexity picture was also present in the literature and stated in its strong form as a COLT 2019 open problem [1]. Although our work does not provide a full understanding of the picture: - We show that in super-polynomial regimes, it is indeed possible to improve over these algorithms (dark green region in Figure 1). To the best of our knowledge, this is the first result making progress on the algorithmic side, as opposed to lower bounds, for which the literature is better established [2,3,4]. We further provide a class of algorithms giving a positive trade-off between memory and oracle complexity. This enables an optimizer to specify a desired memory usage (at the price of oracle complexity) through the parameter $p$, instead of restricting the choice to either linear or full (quadratic) memory. - A major question was whether one can Pareto-improve over GD and/or CP. Lower bounds [3,4] have already shown that one cannot improve over CP, but in this work, we show, somewhat surprisingly, that GD can be Pareto-improved upon in some regime (exponential accuracy). Additionally, our algorithms work in the face of a stronger form of memory constraint than described in the literature: they are limited in memory not only between iterations, but also for within-iteration computations. 2.
To answer your question, we are not aware of any brute-force (or other) algorithms that have comparable guarantees to our algorithms (say for $p=d$) for non-smooth convex optimization. Closest to brute-force methods are some works that consider smooth optimization in low-dimensional settings. - Vavasis [5] showed that, thanks to smoothness, by combining brute-force search with gradient approaches one can achieve $1/\epsilon^{\frac{2d}{d+2}}$ oracle complexity. This slightly improves over what gradient descent would achieve in our setting, but only if $\log \frac{1}{\epsilon} \gg d$. However, in that regime $\log \frac{1}{\epsilon} \gtrsim d\ln d$, the oracle complexity $(C\log\frac{1}{\epsilon})^d$ from our method with $p=d$ is much lower than any polynomial in $\frac{1}{\epsilon}$, and does not require smoothness. - The polynomial exponent for the oracle complexity from [5] was then improved in dimensions 2 and 3 by Bubeck and Mikulincer [6] with clever ideas to side-step brute-force approaches. In high dimensions, their algorithms achieve $(\frac{\log 1/\epsilon}{\epsilon})^{\frac{d-1}{2}}$, which exhibits a strong curse of dimensionality (at least $\epsilon^{-d/4}$). These two works considered harder non-convex problems but heavily used the smoothness assumption. It is worth noting that, using smoothness, Monte-Carlo approaches may outperform optimization in terms of oracle complexity (e.g. [7], though the suboptimality measure considered is somewhat different), and these always show some form of dimensionality curse [7]. The above-mentioned approaches do not seem to give successful results for our non-smooth convex optimization setup, and ideas beyond those in the existing literature seemed necessary to improve over gradient descent. Instead, we use different techniques based on a recursive reduction to lower-dimensional subproblems, which we believe are novel. 3. Last, the study of super-polynomial accuracies is not uncommon in optimization.
Following the seminal paper [5], there has been a growing literature that considered optimization in low-dimensional (or even constant as in [6]) settings---for instance, in dimension $d$, the improvement from [5] in smooth-nonconvex optimization over $\frac{1}{\epsilon^2}$ is significant when $\log\frac{1}{\epsilon}\gg d$. In that asymptotic in $\epsilon$ perspective, our bounds in the exponential accuracy regime are (a high-degree) polynomial in $\log\frac{1}{\epsilon}$ instead of $\frac{1}{\epsilon^2}$. Also, in the specific literature for memory/oracle-complexity tradeoffs, some papers specifically study super-polynomial accuracy regimes, e.g. [4] which consider accuracies $\epsilon\leq \exp(-\log^4 d)$ (to appear at FOCS 2023 according to the author's website). [1] Woodworth, Blake, and Nathan Srebro. "Open problem: The oracle complexity of convex optimization with limited memory." Conference on Learning Theory. PMLR, 2019. [2] Marsden, Annie, et al. "Efficient convex optimization requires superlinear memory." Conference on Learning Theory. PMLR, 2022. [3] Blanchard, Moïse, Junhui Zhang, and Patrick Jaillet. "Quadratic memory is necessary for optimal query complexity in convex optimization: Center-of-mass is Pareto-optimal." Conference on Learning Theory. PMLR, 2023. [4] Chen, Xi, and Binghui Peng. "Memory-Query Tradeoffs for Randomized Convex Optimization." arXiv preprint arXiv:2306.12534 (2023). [5] Vavasis, Stephen A. "Black-box complexity of local minimization." SIAM Journal on Optimization 3.1 (1993): 60-80. [6] Bubeck, Sébastien, and Dan Mikulincer. "How to trap a gradient flow." Conference on Learning Theory. PMLR, 2020. [7] Ma, Yi-An, et al. "Sampling can be faster than optimization." Proceedings of the National Academy of Sciences 116.42 (2019): 20881-20885. --- Rebuttal Comment 1.1: Comment: Dear Authors, I apologize for such a late reply. I would like to thank you for your detailed response, it was very interesting to read. 
After reading your reply and having another look at the paper, I increase my score to 6 (weak accept), and the contribution to 3 (good). Before, I did not find your result with optimal memory complexity very interesting, since it is better than gradient descent only in the regime $\log(1/\varepsilon)\gtrsim d\log d$, and I didn’t find this regime natural since it seemed to me that prior works only focused on the regime $\varepsilon = poly(1/d)$. However, since the recent FOCS paper [4] you mentioned focused on super-polynomial accuracy, it somehow changes the story: Super-polynomial accuracy is for some reason needed in [4] to show a lower bound against (randomized) algorithms that use a linear number of oracle calls. You work with super-polynomial accuracy to show a new upper bound among algorithms that use linear memory complexity, which indeed sounds nice in the context of the result of [4]. Actually, the accuracy used in [4] (in their theorem formulation) is $\varepsilon = \exp(-\log^5 d)$, while you wrote $\varepsilon \le \exp(-\log^4 d)$. I don’t care so much about 4 or 5, but an important thing is the inequality you wrote: if they need only an upper bound for $\varepsilon$, that sounds good, and I believe you here that they actually have $\varepsilon \le \ldots$ and not only $\varepsilon = \ldots$, but please confirm this if you have time (I understand that my reply is very late, and you might not be able to respond…). And of course please add a reference to [4] to the final version of your paper, and perhaps even some discussion on why they need super-polynomial accuracy (I believe they didn't just choose to work in this regime; there is perhaps a reason why they don't work with $\varepsilon = poly(1/d)$). That might make the motivation for studying the regime you work with clearer. Another thing is the paper [5] you mentioned. I think it would be nice if you also cite it in your paper.
As I understood (I didn’t have a look at that paper), they have a minor but asymptotic $1/poly(d)$ improvement over gradient descent in the regime $\log(1/\varepsilon)=\Theta(d\log d)$. Still, if we denote the oracle complexity of gradient descent by $T$, prior to your work no algorithms with oracle complexity $T^{1-\Omega(1)}$ were known, and given the lower bounds [3,4] one could have expected that they might not exist. You show that this is not the case, and I think it is good to write it explicitly (otherwise your current text might be wrongly interpreted as saying that gradient descent was asymptotically the best in terms of oracle calls). --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you very much for your thoughtful comments and positive view of our contributions! Indeed, the super-polynomial accuracy seems to be important in [4] in order to get lower bounds for a communication game that they then encode into a convex optimization problem. As suggested, we will make sure to add a discussion of [4] in the revised manuscript and more details about why they also need super-polynomial accuracy. The rebuttal indeed had a typo. Their accuracy is indeed $\exp(-\log^5 d)$. As you had anticipated, we wrote an inequality $\epsilon\leq ...$ to emphasize that their arguments hold for that whole regime (not just the equality): taking their same class of hard functions, an algorithm solving the problem to accuracy $\epsilon\leq \exp(-\log^5 d)$ solves it in particular to accuracy $\exp(-\log^5 d)$. As you kindly suggested, we will also of course add a discussion of [5], which indeed gave some improvements over gradient descent in the regime $\log 1/\epsilon = \Theta(d\log d)$.
We would like to point out, however, that this result relies heavily on smoothness and as such would not give improvements in our non-smooth convex optimization setting (the idea is to run gradient descent starting from all initial points in a grid: smoothness ensures that in the neighborhood of the initial point closest to the optimum, the Lipschitz constant of the gradients is also reduced, which allows one to adapt the gradient descent parameters by taking longer, more aggressive steps). To the best of our knowledge, it was unknown in prior work whether, using the same $\Theta(d\log 1/\epsilon)$ memory (or in fact anything strictly less than quadratic memory, say $d^{2-\Omega(1)}\log 1/\epsilon$), gradient descent was asymptotically best in the non-smooth case. Thank you again for your time and efforts in reviewing the paper!
Beyond Average Return in Markov Decision Processes
Accept (poster)
Summary: The paper investigates general functionals of the distribution over returns in three situations: 1) For policy evaluation, the authors prove some error bounds for a given distributional RL algorithm. 2) For planning, the authors exhibit the functional form that is optimizable via dynamic programming. 3) For reinforcement learning, they provide a Q-learning algorithm when the functional is linear or exponential Strengths: The paper investigates an interesting question, which is useful for sequential decision-making with more sophisticated decision criteria. The writing is generally quite clear. Weaknesses: The obtained error bound for policy evaluation seems to be very loose, as also suggested by the experiments. Can we say anything about the tightness of this bound? The novelty of some of the obtained results seems to be limited: 1) How different is the known result about Bellman closedness and the new result about Bellman optimizability (Theorem 2)? Isn't the former property a necessary condition for the latter? 2) Regarding the proposed Q-learning algorithm, the linear case seems to be trivial. For the exponential case, how different is the proposed update compared to Borkar's? The paper should be checked for typos (see below for some of them). Technical Quality: 3 good Clarity: 3 good Questions for Authors: See questions above. Some minor issues: - line 52: p_h -> p_h^a(x, \cdot) and R_h -> R_h(x, a) - line 74: p_{h+1}^a(s, \cdot) -> p_{h+1}^a(x, \cdot) - line 95: the definition of \mathcal P_Q(\mathbb R) should include the condition in (5) - line 96: W_1 should be defined here - line 209: m - line 222: \eta vs \eta^* - line 227: missing /extra word? - line 239: the other von Neumann-Morgenstern axioms are not needed? - line 269: sections addresses - line 271: linear of exponential Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
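As a concrete companion to point 2) of the summary above, here is a minimal sketch (toy code written for illustration, not the paper's algorithm) of the quantile projection of a return distribution and its Wasserstein-1 error, the per-step quantity that evaluation bounds of this kind accumulate:

```python
import numpy as np

def quantile_projection(samples, m):
    # Project an empirical return distribution onto m equally weighted
    # atoms placed at the midpoint quantile levels tau_i = (2i - 1)/(2m).
    taus = (2.0 * np.arange(1, m + 1) - 1.0) / (2.0 * m)
    return np.quantile(samples, taus)

def w1_to_projection(samples, m):
    # W1 distance between the empirical distribution (n atoms, weight 1/n)
    # and its m-atom projection, computed by sorted matching after
    # expanding each projected atom into n/m copies (needs m | n).
    n = len(samples)
    assert n % m == 0
    expanded = np.repeat(quantile_projection(samples, m), n // m)
    return float(np.mean(np.abs(np.sort(samples) - np.sort(expanded))))

rng = np.random.default_rng(0)
z = rng.normal(size=10_000)   # stand-in for a return distribution
errs = {m: w1_to_projection(z, m) for m in (5, 10, 50)}
print(errs)
```

With more atoms the projection error shrinks, mirroring how such per-step bounds tighten as the number of quantiles grows.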
Rebuttal 1: Rebuttal: Thank you for your supportive and insightful feedback. Please see our response addressing your concerns and questions: - The obtained error bound for policy evaluation seems to be very loose, as also suggested by the experiments. Can we say anything about the tightness of this bound? First, as explained in the common response, we found a mistake, which led to an inconsistent Wasserstein distance computation and made our bound look much worse than it actually is. This was solved, and the corrected graphics can be found in the global rebuttal. Concerning the tightness of the bound, this was investigated in more detail after the submission and added to the appendix later. Our first finding is that this bound is never reached exactly for $H \geq 1$. Yet, we believe it might not be possible to refine it in general. Indeed, the bound is proven through one main idea: summing the successive error bounds due to the projection operator. We have made two important observations: First, it is possible to attain the projection error bound at each step (meaning, at every step $h$, the error due to the projection operator matches the bound of Eq. 6). This implies that the projection bounds cannot be improved, even after successive steps in general. Second, the experiment of the paper with the corrected computation shows that, in a simple environment, the sum of the successive projection errors very closely matches the overall error between the true distribution and the approximated one. This implies that the method of proof is relevant and likewise may not be improvable. Combining these two findings, we could not find an immediate improvement, and we are ready to conjecture that the bound cannot be significantly improved in general. - How different is the known result about Bellman closedness and the new result about Bellman optimizability (Theorem 2)? Isn't the former property a necessary condition for the latter?
The idea of Bellman Closedness is to find a Bellman Equation. We want to compute the values of the statistics on some state and action recursively, from the values on the other states and actions. That is, we try to find a general formula to write $s(r + PV)$ as a function of $r$ the immediate reward, $P$ the transitions, and $s(V)$, where $V$ is the value function and $s$ the statistic. In short, Bellman Closedness is all about computing the values by themselves, by recursion. With Bellman optimizability, we check instead whether a policy improvement algorithm works and finds the optimal policy, without the concern of computing such values. We assume that we can compute the distribution exactly, and thus have access to the exact values of the statistics. We assume that, for each state and a current policy, we can determine which action maximizes the chosen statistic of the value function. The question we answer is: “Assuming we have access to the value functions for the statistic, will a policy improvement algorithm converge to an optimal policy?” - Regarding the proposed Q-learning algorithm, the linear case seems to be trivial. For the exponential case, how different is the proposed update compared to Borkar's? In this section, we do not propose any new algorithm; both already exist and are mentioned here to remind the reader that our theoretical finding matches the known algorithms, and to complete the full picture addressed by the paper, that is, statistics in different MDP problems: Policy Evaluation, Planning, and finally Reinforcement Learning. This is also addressed in the global rebuttal. We also thank you for pointing out typos and writing errors; we have now fixed them all and made a thorough grammatical pass on the paper. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the rebuttal. I am still not clear about the relation between Bellman closedness and Bellman optimizability. Should a Bellman closed functional necessarily be Bellman optimizable?
--- Reply to Comment 1.1.1: Comment: Thank you again for the review. No, Bellman closed functionals are not necessarily Bellman optimizable. An example would be the variance, or even the utility function $x\exp (x)$, which are both in Bellman closed sets but not Bellman optimizable. This has now been clarified in the paper to emphasize the difference between the two concepts.
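The special role of exponential utilities in this exchange comes from a factorization over deterministic reward shifts, which is exactly what a Bellman-style recursion on the statistic alone requires. A minimal numerical check (a toy illustration, not code from the paper):

```python
import numpy as np

# For U_beta(Z) = E[exp(beta * Z)], a deterministic immediate reward r
# factors out multiplicatively: U_beta(r + Z) = exp(beta * r) * U_beta(Z).
# This is the structure that lets exponential utilities be optimized by
# dynamic programming without tracking the whole return distribution.

rng = np.random.default_rng(0)
beta, r = 0.7, 1.3
z = rng.normal(size=100_000)   # samples of the return-to-go Z

lhs = np.mean(np.exp(beta * (r + z)))              # U_beta(r + Z)
rhs = np.exp(beta * r) * np.mean(np.exp(beta * z))
print(lhs, rhs)   # equal up to floating point: the identity is algebraic
```

The variance, by contrast, requires tracking the mean jointly, consistent with the reply above that it is Bellman closed but not Bellman optimizable.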
Summary: This paper considers the problem of using distributional reinforcement learning to perform policy evaluation and planning with more general classes of reward functionals than those typically considered in standard Markov decision processes and RL. The paper focuses in particular on dynamic programming methods for obtaining distributional Q functions based on distributional Bellman operators, as well as how to use these methods for planning with the resulting distributional Q functions. The main contributions of the paper are twofold: (1) when performing distributional policy evaluation on general utilities using a certain kind of distributional Q function approximation scheme, worst-case bounds in terms of Wasserstein distance of the approximation from the true distributional Q function are provided; (2) when performing planning using distributional Q functions, a new notion of "Bellman optimizability" of reward functionals is given, and it is shown that the only Bellman optimizable reward functionals are the class of exponential utilities. It was previously known that standard (i.e., not distributional) dynamic programming methods (see, e.g., lines 213-214) can be applied to solve planning problems with general utilities, so contribution (2) provides a negative result suggesting that distributional RL methods, though providing a very general framework for performing evaluation and planning via dynamic programming, are not strictly necessary to solve the class of problems to which they are theoretically suited. Strengths: Overall, the paper is well-written and the results add clarity to the distributional RL landscape that is likely to be of interest to the community. Regarding significance, contribution (1) outlined in the summary given above is potentially useful for a wider range of utilities than previously considered, but the results appear to be straightforward extensions of existing results (see C.1, C.2 in the appendix). 
The main result of the paper is instead contribution (2), provided in the form of Theorem 2 (lines 248-250). This result delimits the applicability of distributional dynamic programming to classes of utilities that can be handled using non-distributional techniques, suggesting that the primary usefulness of distributional RL methods may be experimental instead of theoretical. It is refreshing to see an ML paper making an effort to soberly clarify the mathematical limits of a particular subfield of ML research. The proof of this result (see D.2) appears to boil down to first constructing a differential equation based on the integrand of the utility, then solving to show that the integrand must correspond to an exponential utility. The proof is not long, but the idea is clever. Weaknesses: I have two main concerns regarding the paper: * Contributions (1) and (2) are somewhat disjointed. On the one hand, (1) provides reassurances that Algorithm 1 enjoys approximation bounds for a wide variety of different utilities. At the same time, (2) undermines this result by showing that planning using the resulting distributional Q functions is only theoretically meaningful for a much-restricted subset of those utilities. Some clarification of the implications of (2) for (1) would be helpful. * In the proof of Theorem 2, it is assumed that the utility integrand $f$ is twice-differentiable and $f'(x) \neq 0$, for all values of $x$. This assumption is critical to Theorem 2, thus to contribution (2), and thus to the overall contribution of the paper. It is stated on lines 516-517 that this assumption "could be proven through long and fastidious analysis that is beyond the scope of this article", yet no indication of how this might be accomplished is provided. Given its importance to the paper, I argue that this is within scope of the article, so that a proof sketch or at least a convincing explanation of why this assumption holds is needed. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Given Theorem 2, what is the main usefulness and impact of the approximation bounds provided in section 3? * Why is Bellman optimizability (Definition 2, lines 221-222) the right notion of what it means for a problem to be solvable using distributional RL? Are there incompatible notions of optimality that might invalidate Theorem 2? If not, why not? * Why does the assumption regarding the utility integrand $f$ on lines 516-517 in the appendix hold? * It is mentioned that policy stationarity is disrupted in the discounted setting on lines 289-290. Can you provide references ensuring sufficiency of stationary policies in the setting you consider? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Aside from the weaknesses and questions above, the authors have adequately addressed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
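The differential-equation step mentioned in the strengths above can be sketched as follows (a standard heuristic under the twice-differentiability assumption; not necessarily the paper's exact derivation). If the recursion forces a translation relation $f(x+c)=\alpha(c)f(x)+\beta(c)$ for all $x,c$, with $\alpha(0)=1$ and $\beta(0)=0$, then differentiating in $c$ at $c=0$ gives

```latex
f'(x) = k\, f(x) + b, \qquad k := \alpha'(0),\; b := \beta'(0),
\qquad\Longrightarrow\qquad
f(x) =
\begin{cases}
A e^{kx} - \dfrac{b}{k}, & k \neq 0,\\[4pt]
b\,x + A, & k = 0,
\end{cases}
```

so the only candidates are affine transforms of exponentials, with the linear utility as the $k=0$ case, matching the conclusion of Theorem 2.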
Rebuttal 1: Rebuttal: Thank you for your supportive and insightful feedback. Please see our response addressing your concerns and questions: > About the relevance of section 3 related to theorem 2: This matter was addressed in the global rebuttal, as it was raised by several reviewers > About the definition of Bellman Optimizability: This is also addressed in the global rebuttal for the same reasons > Are there incompatible notions of optimality that might invalidate Theorem 2? If not, why not? The question on the existence of other notions of ‘optimality’ is unclear to us. An optimal policy is always one that has the maximal value (regardless of the goal, mean reward, utility, etc.). Perhaps you meant ‘notions of optimizability’? > About the loss of stationarity in the discounted setting on lines 289-290: This is a misunderstanding due to the ambiguity of our phrasing. Another way to write it is the following: “there exists a method to optimize the exponential utility through dynamic programming in discounted MDPs [Cheung and Sobel, 1987]. This approach requires modifying the functional to optimize at each step. A difference with the risk-neutral objective in discounted MDPs is that the optimal policy may not be stationary”. We never assumed stationary policies in our undiscounted setting. In discounted MDPs, however, it is known that the policy optimizing the expected reward can be chosen to be stationary, and we found it relevant to highlight that this is no longer the case when we optimize the exponential utility. > Assumptions in the proof of theorem 2: Differentiability and non-zero derivative. We agree with you that Theorem 2 uses strong assumptions without explanation. In fact, the properties that we claim for the utilities can be proved, and we have done so for this response. All details have been added to the appendix: 1. We can reduce the problem to differentiable utilities thanks to an approximation argument. 2.
Using continuity and the translation property, we can prove that $f$ is either monotone or constant. Similarly, we prove that $f'$ is monotone. This implies that the derivative is always non-zero. We will try to convey the main intuition of the proofs of these results here, although the full proof does not fit. - For **the differentiability assumption**, a way to look at it is that any integrable function can be arbitrarily well approximated by infinitely differentiable functions (see [1] p. 119, about the density of differentiable functions in the $L_1$ space). Thus, for any function, there exists a differentiable one that is close enough that optimizing the initial function becomes the same as optimizing this differentiable one. Hence, in the same way that the von Neumann–Morgenstern theorem is used to reduce the study to utilities, this approximation allows us to reduce the study to differentiable utilities. - **The monotonicity and convexity/concavity properties** are proven using the translation property to turn a local inequality into a global one. Start by considering two points, for instance $0$ and $1$. We assume that $f(0) < f(1)$ and show that $f$ is increasing. To do so, we use the translation property with $\nu_1 = \delta_0, \nu_2 = \delta_1$, which gives the following: $\forall c, f(0 + c) \leq f(1 + c)$ (a local inequality becomes a global inequality). In particular, by choosing $c$ to be successively $1, 2, \dots, n, \dots$, we obtain the chain of inequalities $f(0) \leq f(1) \leq f(2) \leq \dots \leq f(n)$. Now we do the same starting with $0$ and $1/2$ and $c$ a multiple of $1/2$, to get the chain $f(0) \leq f(0.5) \leq f(1) \leq f(1.5) \leq \dots$. Halving infinitely by recursion, we obtain that $f$ is monotone on a dense set of numbers. With density and continuity, $f$ is globally monotone. Note that in case of equality at two points, this reasoning proves that $f$ is globally constant. Hence, $f$ is either strictly monotone or constant.
For the monotonicity of $f'$, we use the same idea and show that for any two points, $f((x_1 + x_2)/2) \leq (f(x_1) + f(x_2))/2$ (or $\geq$). This is called midpoint convexity and, together with continuity, is equivalent to convexity. (Here, for instance, we start with $\nu_1 = \delta_{1/2},\ \nu_2 = (1/2) (\delta_0 + \delta_1)$.) - The strict monotonicity implies that the derivative has a constant sign, while the concavity or convexity implies the monotonicity of the derivative. A function that is both strictly monotone and of constant sign can never reach 0. [1] Functional Analysis, Sobolev Spaces and Partial Differential Equations, Haim Brezis, 2010 --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Regarding Theorem 2, I get the general outline of your argument and am reassured. You mention that you have added the details to the appendix -- have you uploaded it? I am currently unable to find it in the supplementary material. --- Reply to Comment 1.1.1: Title: We cannot submit a revised version Comment: Unfortunately, as per NeurIPS' CFP: "Authors may not submit revisions of their paper or supplemental material, but may post their responses as a discussion in OpenReview." Unless we are missing something, we are afraid we can only do our best to convey the main ideas of the proof on OpenReview. Do you have a question on our proof idea? We apologize if it is not clear enough, but we are happy to try our best to clarify any concern you may have.
Summary: Classic RL maximizes the (discounted) cumulative reward in Markov Decision Processes (MDPs). This paper studies more general functionals of rewards, such as generalized means. These more general functionals are useful for certain applications, such as those with safety concerns. Furthermore, the study of these more general functionals has not been well understood in the literature. For finite-horizon prediction problems, this paper 1) argues that only generalized means of rewards can be obtained exactly using dynamic programming and there is no need to resort to learning the entire distribution of the return (I am confused about this part, see Questions), and 2) provides an upper bound on the estimation error for utility functionals, which are more general than generalized means, when approximated with Quantile Distributional RL. Both 1) and 2) mirror Rowland et al.'s (2019) results, which were developed for the discounted setting. The most important result is for finite-horizon control problems. This paper shows that the only Bellman optimizable functionals (those that can be optimized by applying Dynamic Programming to the distribution of the return) are generalized means. Combined with a previous result showing that generalized means can be solved using dynamic programming (no distribution involved), this result shows that the only functionals that are optimizable by Distributional RL can be optimized by Dynamic Programming, which is a much more efficient algorithm. Strengths: The problem that this paper studies is an important one. The main result, which shows that the only functionals that are optimizable by Distributional RL can be optimized by Dynamic Programming, is remarkable. Weaknesses: The writing of the paper needs improvement. Specifically: 1) right now, the presentation of the paper is driven by the technical results that it wants to show, rather than the messages that it wants the readers to know. 
The paper presents an important result about Distributional RL: the only functionals that are optimizable by Distributional RL can be optimized by Dynamic Programming. As the authors highlighted in line 260, this result is the most important one and it narrows down the avenues to explain distributional RL's empirical success (line 310). However, this result is not made explicit throughout the paper, until an inconspicuous discussion on page 8. Why not state your most important result in the Abstract and the Introduction sections? 2) The notations in the background section need to be clearly defined: the notation (z_i)_i needs to be defined. \delta is not defined (line 96). \mu is not defined (line 97). F_\mu is not defined (line 98). what is \eta in line 100? \Delta_\eta is not defined in line 101. Line 124: U_f is not defined. 3) typos: extra m at the end of line 209 line 184: should be "see Figure 1 (left)" line 199: should be "Figure 2 (left)" the order of the two subfigures in Figure 2 would better be reversed in accordance with the text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: "Much of the existing theory is based of discounted MDPs, but many recent efficient RL algorithms with strong guarantees are for finite-horizon, undiscounted setting" What kind of theory do you mean for discounted MDPs? Could you give examples that illustrate this argument? Could you point to the result by Rowland et al. (2019) that gives equation 6? Note that other parameterizations exist but are less practical. Could you explain why this is true or refer to other works that show that this is true? Line 122-123: when lambda > 0, the utility is higher for a Gaussian distribution with a larger sigma (higher variance), so isn't this case risk-seeking? Line 124: you don't need to mention this axiom at this point. I kept wondering why I needed to know this until I found it being used in section 4. 
Line 152 is confusing to me: do you mean that for any L >= 1, statistics corresponding to the first L+1 moments form a Bellman closed set? Line 152: Theorem 4.3 by Rowland et al. (2019) only mentioned moment functionals but not exponential functionals. Furthermore, although you prove in A.3 that sets of products of moment functionals and exponential functionals also form a Bellman closed set in the finite-horizon setting, you didn't show that these are the only families of functionals that form Bellman closed sets in this setting. The footnote on page 5: how can the support of the return be -h + 1 given that all rewards are non-negative? Line 215: is the question open for utilities other than exponential ones for control problems? Line 221: why define Bellman optimizability instead of extending the definition of Bellman closedness to control problems? In addition, it is weird to define a quantity, which you want to optimize, using an algorithm that optimizes it. Do you see other ways to make this definition without resorting to a particular algorithm? Line 241: why does this result narrow down the family of Bellman optimizable functionals to utilities? I mean, literally. Section 5: why would we want to see a new algorithm without any theoretical/empirical study of the algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your in-depth review of the paper and the relevant questions that will help clarify it. We now address each of the raised points: - About the undiscounted setting theory: Thank you for pointing out this sentence, we agree it was unclear. The goal of this sentence and the following was to justify our choice of studying the finite-horizon setting, which has an impact on our results (discussed in the last section). We rephrase it as follows: “Historically, the theory of RL has been established for discounted MDPs (e.g. [Sutton & Barto, 2020; Watkins & Dayan, 1992; Szepesvári, 2010; Bellemare et al., 2023]) but recently more attention has been drawn to the undiscounted, finite-horizon setting [Auer, 2002, Osband et al., 2013, Jin et al., 2018, Ghavamzadeh et al., 2020], for which fundamental questions on the theory of MDPs remain open. In this paper...” We also tried to clarify the impact of this choice a bit earlier in the paper and refer to discussions for completeness. - Precise pointers to Rowland’s work (Eq 6 and Th 4.3 (line 152)): Those are actually intermediate results that can be found in the appendix of their paper ([2]). Equation 6 corresponds to Lemma B.2. We also want to point out that [1] contains a similar result with an additional ½ factor in the bound (exercise 5.20). For Theorem 4.3, the result is part of the proof, which is divided into two parts. First, they start by reducing the study to the set of functions we discuss (equation 14 in their appendix), and then they show that, because of the discount, the exponential part has to be discarded. In our case, without discount, only the first part still holds and that is what we use. We just need to verify that the set is indeed Bellman closed in this setting, which we do in appendix A.3. 
- “Line 152 is confusing to me: do you mean that for any L >= 1, statistics corresponding to the first L+1 moments form a Bellman closed set?” Yes, this set of statistics is a Bellman closed set. This was already true and known in the discounted setting considered by Rowland [2]. The difference in our setting is the addition of the exponential term. - Relevance and discussion of other parametrizations: The only other parametrization studied with theoretical grounding is the categorical projection, described and discussed in Appendix B. We omitted it in the main text because it is not essential to our argument. Among its issues are the need for a fixed and known support of the distributions under study and the use of a different metric for the bounds [3]. Moreover, algorithms using the quantile approach display better results [4][5]. - "Is the question open for utilities other than exponential ones for control problems?" To be precise, whether any statistical functional other than the exponential utility, utility or not, can be optimized efficiently in general is still an open question. We focused on dynamic programming algorithms and thus on the existence of a Bellman equation to support them. Our claim is that for utilities other than the exponential one, a policy-improvement type of strategy cannot work. We do not claim that there exists no other algorithm that can compute it; this question remains open and would possibly require a complexity-theoretic analysis, such as proving that it is NP-hard. This is outside the scope of the paper but is an interesting question. We will clarify that again. In RL, however, the predominance of dynamic programming is fundamental, and we believe this justifies that our theorem remains significant and impactful. - "Line 241: why does this result narrow down the family of Bellman optimizable functionals to utilities?" The result does not prevent the existence of non-utilities verifying the property. 
However, the result implies that studying utilities is enough: if another statistical functional can be optimized by such an algorithm, then there exists a utility that yields the exact same results. Hence, if we know the form of all the utilities that verify this property, we know the form of all such statistical functionals in general. - "Line 221: why define Bellman optimizability instead of extending the definition of Bellman closedness to control problems?" This question was raised by several reviewers, so we address it in detail in the global rebuttal. - "Section 5: why would we want to see a new algorithm without any theoretical/empirical study of the algorithm?" This is also addressed in the global rebuttal: the algorithm is neither new nor ours, and we mention it in the paper for completeness of our argument that utilities can be learned. We also thank you for pointing out typos and writing errors; we have now fixed them all and made a thorough grammatical pass over the paper. [1] Distributional Reinforcement Learning, Bellemare et al., 2023 [2] Statistics and Samples in Distributional Reinforcement Learning, Rowland et al. 2019 [3] An Analysis of Categorical Distributional Reinforcement Learning, Rowland et al. 2018 [4] Distributional Reinforcement Learning With Quantile Regression, Dabney et al. 2018 [5] Implicit Quantile Networks for Distributional Reinforcement Learning, Dabney et al. 2018
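The exponential-utility objective discussed above admits a multiplicative Bellman backup, which is what makes the Q-learning variant referenced in the paper possible. Below is a minimal illustrative sketch of a tabular, finite-horizon version on a toy deterministic MDP; the environment, names, and hyperparameters are our own assumptions for exposition, not the authors' or Borkar's code.

```python
import math
import random

# Sketch (illustrative only): tabular finite-horizon Q-learning for the
# exponential utility U = E[exp(lam * sum of rewards)], using the multiplicative
# backup Q_h(s,a) = E[exp(lam*r) * max_a' Q_{h+1}(s',a')], with Q_H = 1.
H, S, A, lam, alpha = 2, 2, 2, 0.5, 0.1
rng = random.Random(0)
Q = [[[1.0] * A for _ in range(S)] for _ in range(H + 1)]  # Q[H] stays at 1

def step(state, action):
    # toy deterministic dynamics: reward equals the action, next state is the action
    return float(action), action

for _ in range(3000):
    s = 0
    for h in range(H):
        a = rng.randrange(A)                      # uniform exploration
        r, s_next = step(s, a)
        target = math.exp(lam * r) * max(Q[h + 1][s_next])
        Q[h][s][a] += alpha * (target - Q[h][s][a])
        s = s_next

# Always playing a=1 collects total reward 2, so the optimal utility is exp(2*lam).
print(max(Q[0][0]), math.exp(2 * lam))
```

The update is the usual stochastic-approximation step, but the target multiplies the downstream value by exp(lam*r) instead of adding the reward, which is exactly why the Bellman equation survives for the exponential utility and not for other non-linear utilities.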
Summary: This paper addresses Strengths: 1. The method is very well-presented, with notations, terms and algorithms put in a very clear and understandable fashion. They are introduced without fancy names and terms, which is great. 2. The questions are well-stated and addressed respectively from a theoretical perspective, and they are fundamental problems. 3. The significance of the contribution is good. It provides a good starting point to address the statistical functional evaluation problem. Weaknesses: 1. Some statements in the propositions lack references to earlier works, such as quantile functions and CVaR (metric). 2. The description of the learning environment is lacking. 3. The limitations, potential societal impacts and ethical concerns, and future work are lacking. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My suggestions have been listed in the weaknesses, which I will paraphrase here: 1. Please fix grammatical errors and add proper references to earlier works, especially for the sections with propositions. 2. Please connect your work to more practical applications. I know it might be hard to conduct experiments as additions to this paper, but some discussions would be helpful. 3. Please give more details about the experiments on functional distance calculation. 4. Most importantly, please add descriptions of potential limitations, concerns, and societal impacts of the work at the very end of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Although since it is a theoretical paper, I do not think an external ethical review is needed, the limitations, potential societal impacts and ethical concerns, and future work are lacking. 
Authors may address it later if applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our paper. We respond below to your specific concerns, but please also check the common response, which contains clarifications that were common to all reviewers and will be added to the final version. - Discussion of limitations, societal impact and ethical concerns. We have added a paragraph in the discussion about the limitations of our work, potential future work and connections to more practical applications. Thank you for pointing out these points. Regarding societal impact and ethical concerns, given the nature of the paper -- a theoretical result limiting the extent of Distributional RL -- we do not believe it can have any such impact. The NeurIPS CFP says “The checklist is filled out in the OpenReview submission form, but you may provide additional information in your paper to support your answers (e.g., a limitations section).” and we believe no additional information is needed. If you have a specific concern about the societal impact of our work that we have overlooked, please let us know and we’ll try to address it. - More details about experiments. Thank you for the question. As other reviewers have also raised questions about our experiments, we address this in the common response. First, we were able to identify a bug in the computation of the Wasserstein distance in our code and we now have correct figures (see pdf) that show a much tighter bound. Second, we have explored the tightness of our bound and we are able to comment on it. We will add this discussion and more details about the experiments in the final version. - Lack of references. We are not sure what you are referring to. The subsection “Beyond Expected Rewards” introduces alternative statistical functionals and for CVaR we cite [Rockafellar, 2000]. Our paper has more than 2 pages of references and we are not sure what is missing. Do you have a pointer to relevant papers we may have missed? 
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the reply. I do not have further questions. The lacking-references problem from my side was mainly about Section 2, but after re-reading I think I was mistaken about this. Thanks to the authors for the sharp response correcting me.
Rebuttal 1: Rebuttal: **We first want to thank all the reviewers** for their constructive feedback and relevant questions that will help clarify the article. There are a few questions raised by several reviewers that we wish to address here. ### Experiments and tightness of our bound We first want to point out **a minor but impactful bug discovered in the code**, which invalidated the experiments of section 3 on the approximation of distributions and statistics. **The corrected results support our bounds and proof method even more strongly**. Please see the attached pdf for the corrected figures. The method that was supposed to return the distribution function returned the density function instead. In the notebook code sent as additional material, this can be fixed by changing line 176 of the 2nd cell to `probabilities = list(np.cumsum([self.distrib[atom] for atom in atoms]))` instead of `probabilities = [self.distrib[atom] for atom in atoms]`. The experiment was also changed, using a Binomial with $p=0.5$ as parameter, which is more natural. ### Organization of the paper. A main objective of the paper was to depict a somewhat complete picture of what can be done in Markov Decision Processes with statistics other than the expectation of the return. To do so, we wanted to address the three main problems, namely Policy Evaluation, Control/Planning, and Reinforcement Learning, in that specific order, which seemed most natural. Our central result is Theorem 2 in the Planning section (Section 4), about Bellman Optimizability. We agree that it is not highlighted enough in the abstract and introduction, which is now corrected. Section 3’s role is twofold. First, we address the question of Policy Evaluation in the undiscounted setting, and additionally provide error bounds in the case of finite parametrization of the distribution. 
Doing so, we are led to introduce the notion of Bellman Closedness from the literature, which is relevant to contrast with the upcoming novel definition of Bellman Optimizability. We do not see how we could naturally move Section 4 before Section 3. ### Q-Learning in Section 5 is not a contribution **In Section 5, Algorithm 3 is from [Borkar 2002, 2010], and is not a contribution**. The reason why we report it is to complete the argument of the paper by addressing the learnability of utilities, which is the most important problem in RL. We believe that Q-learning for exponential utilities isn’t well known and we wanted to recall its existence. It is true that we do not discuss its theoretical properties, and we have added more references pointing to them. We also emphasized again that this algorithm is not a contribution. ### Definition of Bellman Optimizability. The choice of definition was made according to where the idea came from and what it first implied: the impossibility of optimizing other statistics, even with distributional reinforcement learning. Hence the definition using a theoretical distributional algorithm. We acknowledge that this definition is cumbersome and makes it harder to understand all the implications of our main result, Theorem 2. We take your feedback into account, and a clearer definition, more closely related to Bellman Closedness, will be provided in the final version. Pdf: /pdf/37bda4b2b1039fbde7f1426ec9663f5ae3589003.pdf
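The bug fix described in the global rebuttal above (returning a PMF where a CDF was expected, corrected with `np.cumsum`) can be illustrated with a small self-contained sketch; the atom grid and variable names below are our own toy reconstruction, not the authors' notebook code.

```python
import numpy as np

# Toy reconstruction of the bug: a method returned the probability mass function
# (density) where the cumulative distribution function was expected.
# The fix is the cumulative sum, as in the corrected line of the notebook.
atoms = np.array([0.0, 1.0, 2.0, 3.0])
pmf   = np.array([0.1, 0.2, 0.3, 0.4])   # buggy return value: mass per atom
cdf   = np.cumsum(pmf)                   # fixed return value: distribution function

# With correct CDFs F and G on a common grid, the 1-Wasserstein distance is the
# integral of |F - G|: CDF gaps weighted by the spacing between consecutive atoms.
pmf2 = np.array([0.4, 0.3, 0.2, 0.1])
gaps = np.abs(cdf - np.cumsum(pmf2))[:-1]
w1 = float(np.sum(gaps * np.diff(atoms)))
print(cdf[-1], w1)
```

Feeding the raw PMF into such a Wasserstein computation inflates the distance, which is consistent with the corrected figures showing a much tighter bound.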
NeurIPS_2023_submissions_huggingface
2023
Large Language Models Are Zero-Shot Time Series Forecasters
Accept (poster)
Summary: The paper discusses a novel approach for time series forecasting by encoding time series as numerical digits and treating it as next-token prediction in text. Surprisingly, it is found that large language models (LLMs) like GPT-3 can effectively extrapolate time series even without being specifically trained for this task. The success of LLMs in time series forecasting is attributed to their ability to extrapolate deterministic patterns, such as seasonality, and model uncertainty by using expressive distributions on continuous variables. It is argued that LLMs excel in predicting numerical sequences due to biases introduced by their text pre-training, which favors simple explanations. Additionally, LLMs demonstrate excellent representation of multimodal distributions by decomposing the distribution over digits. This combination of qualities allows LLMs to achieve strong performance on time series data and enable new capabilities such as integrating non-numerical text and handling missing data. Furthermore, the study shows that the performance of LLMs in time series prediction improves with increased scale, suggesting significant potential for this unconventional approach. Strengths: 1. The paper proposes a method that utilizes pretrained language models to solve time series forecasting problems. 2. The paper proposes a novel time series encoding approach that can help LLMs understand time series data. 3. Also, the authors identify why language models are able to perform well on the time-series prediction task. Weaknesses: 1. The experimental results are not enough to support the viewpoint put forward in the paper. For example, most experiments are conducted on virtual datasets, and the effects on real datasets are not sufficiently demonstrated. 2. For some of the methods proposed in this paper, their effectiveness has not been fully analyzed, and there is a lack of analysis of tokenization, scale, etc. 
3. As can be seen from the results in Figure 4, the proposed method is effective only for some simple time series data (such as data with obvious periodicity or trend), but not for some more complex time series data (such as Wooly or HeartRate). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The contributions are not very clear, and it seems that the authors only utilized the GPT models. 2. The authors claim they explain why LLMs can be used for time series prediction, but there is not enough analysis in the paper. 3. Why are most experiments conducted on virtual datasets and not on more real-world datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. While we appreciate your feedback, we believe that based on the concerns you raise, some of the experiments we performed may have been overlooked. **"most experiments are conducted on virtual datasets"** We do not believe this is an accurate assessment. For time series extrapolation, we have performed comparisons on $8$ real datasets from the Darts project (shown in figures 4 and 6), the $10$ most impactful real datasets from the Monash archive (figure 5), and $3$ additional real-world datasets collected after the GPT-3 cutoff date (figure 9 and appendix C). With figure 1 in the attached PDF of the general comment, we have extended these further with an additional $5$ Autoformer datasets. This is in comparison to the $8$ synthetic datasets that we evaluate in figure 3. **"For some of the methods proposed, their effectiveness has not been fully analyzed, and there is a lack of analysis of tokenization, scale, etc."** In section 4 and figure 4 we analyze precisely these design details, considering tokenization with and without spaces and varying the base of the encoding of the numbers. We evaluate these choices both qualitatively and quantitatively. **"... proposed method is effective only for some simple time series data"** We have evaluated on a large variety of real time series datasets and on these datasets our method consistently ranks among the top-performing methods. Keep in mind that we are showing the median and 10-90th percentiles, which for time series with high variance will look quite different from realizations of the time series or samples from the model. On Wooly ours is indeed not the best-performing method, but on HeartRate, due to the considerable variance in the underlying process, our method achieves the lowest NLLs among the competing methods. 
**"authors are only using GPT-3 models"** We would like to point to figure 7 (right) in the main text where we compare against publicly accessible Cohere, Forefront, and Aleph-Alpha models as the base LLM used in our method (with additional details in appendix B.3). Since the reviews, we have made an effort to extend this even further, with the powerful LLaMA-2 models. We have added additional comparisons using LLaMA-2 70B as the base LLM in the response PDF figures (1, 2) and the other sized LLaMA-2 models in figure (3). Altogether, we hope that in light of these clarifications about experiments in the text and the additional ones that we have run, you may consider reevaluating the paper. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I would like to first thank the authors for their informative and active response, especially the additional experiments on some other forecasting tasks. After reading the author response, I found some of my concerns have been addressed. But one question still remains: the ability of the LLM is a mystery, so why would it perform forecasting so well in a zero-shot manner? I agree with Reviewer AFnc that the authors seemed not active in discovering how this happens and were only showcasing great performance in simple experimental settings. Another thing I am wondering: LLMs lack reasoning ability, so how could they perform this task in tokenized space? As a result, I'm very curious about the experimental details and the results in your rebuttal response PDF, i.e., the forecasting evaluation with Autoformer, etc. What are the detailed experimental settings, such as the number of runs, the evaluation protocol including datasets, forecasting horizons, evaluation methods, etc.? What are the detailed results of all the compared methods? On what datasets? Significance tests? 
I know it is not recommended but I really would like to see case studies on how they forecast on those long real-world datasets. I think it is an interesting paper. I really believe that the authors would give a thorough analysis of the above discussion. But in its current version, I think the paper lacks experiments against existing SoTA works like PatchTST, Autoformer, TimesNet on simple datasets, and lacks detailed discussion of complicated datasets such as those in the authors' general response. It also lacks deep analysis of the mystery of LLMs on these numerical tasks. I am inclined to maintain my rating. --- Reply to Comment 1.1.1: Title: Follow-up clarifications (part 1) Comment: Thank you for engaging in the rebuttal discussion! **Why LLMs can perform forecasting** We understand why you are surprised by the strong zero-shot performance of LLMs. We were also surprised. As we explore in the submission, however, the ability of LLMs is not a complete mystery. We show that LLMs perform well because of two key properties: 1. LLMs prefer simple sequences over more complex sequences. For numerical sequences generated from a simple algorithm (e.g. a linear or exponential function) LLMs can perform zero-shot extrapolation because completions that are consistent with the pattern in the past observations are simpler than completions that are not consistent with the pattern. Therefore LLMs’ preference for simple sequences leads to good extrapolations. Because time series data often consists of noisy observations of a deterministic exponential (e.g. viral spread) or periodic (e.g. weather) trend, the ability to extrapolate basic numerical sequences also leads to good forecasting performance. 2. LLMs can represent uncertainty over continuous variables. Time series data is often very noisy, and therefore a good time series forecaster must be good at both extrapolating deterministic patterns and estimating the distribution over possible outcomes. 
We show that LLMs implicitly represent a hierarchical softmax distribution over the input space by outputting a distribution over each digit in a number. It is able to represent complex distributions (e.g. with multiple modes or heavy tails) more effectively than several popular alternatives, as we show in Figure 2 of the submission. It is therefore not a complete mystery why LLMs can perform well. Just as Autoformer can perform well on time series data because it has inductive biases that are well-suited to time series data (e.g. extrapolating periodic features with Fourier decompositions), we show that LLMs have inductive biases that are also well-suited to time series data (e.g. a bias towards simple patterns and flexible uncertainty representation). Compared with many other papers that propose time series architectures, for example PatchTST, which you mention in your comment, we actually provide more explanation for the performance of our method. PatchTST is a vanilla Transformer model trained on time series broken into independent channels and chunks of those channels [1]. It has no biases specifically designed for time series (as the Transformer was originally designed for NLP), and it is therefore not clear *a priori* why it performs so much better than alternative approaches. Yet, like LLMs, PatchTST consistently works well. [1] Nie, Yuqi, et al. "A time series is worth 64 words: Long-term forecasting with transformers." arXiv preprint arXiv:2211.14730 (2022).
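The encoding discussed throughout this thread (numbers rendered as digit strings, with spacing choices that affect tokenization) can be sketched in a few lines. The fixed precision and separators below are our own assumptions for illustration, not necessarily the paper's exact choices.

```python
# Minimal sketch of encoding a time series as text for an LLM: each value is
# rendered at fixed precision, the decimal point is dropped, digits are
# separated by spaces (so each digit tends to get its own token), and values
# are separated by commas. Precision/separator choices here are assumptions.
def encode(series, prec=2, sep=" , "):
    out = []
    for x in series:
        digits = f"{x:.{prec}f}".replace(".", "")  # fixed precision, drop the point
        out.append(" ".join(digits))               # one token per digit
    return sep.join(out)

print(encode([1.23, 45.6, 7.0]))  # "1 2 3 , 4 5 6 0 , 7 0 0"
```

Forecasting then amounts to sampling continuations of such a string and decoding the digits back into numbers, which is why tokenization details (spaces, number base) matter as much as the rebuttal's figure 4 ablations suggest.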
Summary: This paper investigates to what extent an off-the-shelf large language model like GPT-3 can be used for time series forecasting. Its main claim is that forecasting performance can be very good, provided some care is taken in correctly encoding the input. Some limited experiments are proposed to support the claim. Strengths: - The paper is definitely timely in the sense that large language models (LLMs) are very trendy and that many other works have already shown that they are good at transfer learning. In this respect, attempting to use them for forecasting makes perfect sense and this paper could possibly have a good impact. - It cannot be denied that the reported performance is good, which supports the claim in a fascinating way. There are some other limited contributions, notably regarding how the input should be encoded before using the LLM for forecasting, but I see them as very secondary when compared to the two mentioned above. Although I am recommending rejection for the reasons that I will describe below, I still must acknowledge that this paper asks a relevant and interesting question and is bound to have some good impact, even if it is not presented at NeurIPS. Weaknesses: - If I had to summarize what I feel is my biggest concern with this paper, I would say that I feel uncomfortable with its _attitude_. As I mentioned above, it asks a question that I definitely think is interesting, but as it currently reads, it basically claims that forecasting is solved by the proposed method, which outperforms the state of the art in time-series forecasting. In my view, the authors should rather have said that using an LLM for forecasting interestingly seems to bring some very good performance in the experiments they did, but should have insisted much more that, at this stage, this should be understood as something of a curiosity. 
Indeed, there is something that is completely forgotten here: those LLMs just cannot reasonably be used for forecasting in any realistic setup that requires handling hundreds of thousands or millions of samples. What the paper convincingly shows is only that LLMs can yield very fascinating performance for forecasting in the toy setups that were considered, and this is already a big deal. - the paper contains many arguable statements that I will detail below. As a researcher in time-series, I must say that many statements there read pretty pejorative towards research in time-series and sound like "look how NLP will solve this problem that you have been struggling with for years". Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - L19 time series has - L19 "time series are relatively unstructured": this is arguable. There is a host of research precisely leveraging structure in time-series. For a very clear example, just consider speech, audio or music processing, where structure is everywhere. - L30 "pretraining has not impacted time series forecasting to the same degree": pretty arguable and gratuitous statement - L35: in a natural way - L52 "instead of building towards a foundation model paradigm". I would say that this statement is not only arguable, it is very wrong and can even be felt as pretty pejorative for the host of researchers working in the field. The authors could for instance check out the many foundational books or papers that were written on modelling stochastic processes or in developing information theory and that all emerged in a time-series setup. Regarding recent foundational approaches, one could for instance mention neural processes or related fundamental research. - L54-64: "uncertainty quantification for time series also tends to be domain-specific and ad hoc": I wonder what makes the authors think that NLP is free of any such bias. 
The whole paragraph has a strange pejorative feel to it that I cannot understand and that I think is inappropriate. - L64: limited expressive - L65 "by directly numbers as strings of digits" ? - Section 3.1: there is no reference whatsoever to source coding or arithmetic coding and their host of deep-learning based recent variants, which appear to me as a clear _foundational_ setup for the _ad hoc_ method proposed here. - L130-L139: Evaluation. It doesn't really make sense to me to compare the proposed RNN-method for quantization with parametric distribution models like GMM or Laplace, etc. The authors should have compared instead with a quantile-based model where the number of quantiles corresponds to the number of parameters of the RNN they fitted. - The "Occam's razor" discussion from section 3.2 is interesting, where the authors show that in some toyish cases, an LLM is able to extrapolate a numerical sequence. Still, they then generalize this interesting feature they observed on toyish examples as a general feature of LLMs, which is not supported in any way but is only a "working assumption" and should be mentioned as such. - L149: "time series generated according to a deterministic function can be considered as only a small generalization of these numerical sequences": This is of course not true and the authors should check the host of different ways that were proposed in the past to model deterministic signals and their incredible complexity. - L157: "maps on precisely to extrapolating linear trends" ? - L168: it is important that the authors detail how the method is extended to autoregressively predict digit sequences to multiple numbers, since this is a very important ingredient of the approach. - Figure 7: I do not understand where history length appears here - Figure 8: As a transformer, I would expect TCN to actually easily handle missing values. 
- In the conclusion: I would not say that the toy experiments proposed are enough to write that "LLM can act as extremely capable time series forecasters" in a _general_ manner. Once again, the paper makes a very nice case for very small sequences and asks a relevant question, but this remains only a funny curiosity for practitioners at this stage, considering how hard scaling to realistic time series appears. For instance, one second of audio sampled at 44.1kHz means 44100 samples. If each one ends up as a word, this would mean something like 50 pages of text, and this is only for mono. People routinely process hours of multichannel audio. Of course, other time-series practitioners have similar or even much more challenging scales to handle. - The authors are apparently unaware of the fact that very fruitful research has long been done on multimodal signal processing in time-series contexts. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: What I wonder after reading this paper is whether time-series models could be useful for text processing. -- After reading the rebuttal, I don’t think the authors can be trusted to make the huge amount of changes it would take to address my concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for your detailed review. We appreciate your recognition of the paper's relevance, fascinating results, and potential for impact. We apologize if the tone of our paper comes off as dismissive of prior work. We never intended this to be the case. In several places we believe this perception may come about from a difference of terminology and perspective employed in the two research communities, for example the term “foundation models” which is commonly used to refer to large pretrained models [1], not to suggest that they are more foundational in nature than other research. We assure you that we respect the contributions and nuances of the time-series community and any perceived negative tone was unintended. While we acknowledge some of your concerns and will adjust the manuscript accordingly, including certain uses of language, there are other points we feel require further clarification. **Handling missing values with TCN:** We note that TCN stands for Temporal Convolutional Networks and is not a transformer. On the other hand, actual transformer based models may be able to more easily accommodate missingness, by using a designated token or omitting the missing entries. However, for trained transformer time series models, if missing entries are not encountered at training time, then with missing values the statistics inside the model will be different and the model will not be able to handle the missing entries gracefully. However, as our LLM-based method has not adapted to the statistics of fully observed time series (because it is only using in-context learning) it has a significant advantage in this scenario. **Uncertainty Quantification:** In our description of prior work in time-series uncertainty we agree that "ad hoc" was a poor choice of words (and commonly overused in machine learning papers). 
What we meant was that there is no standard approach that resolves the problem of parametrizing aleatoric uncertainty in time series models. Instead, there are many different approaches, each with its own set of shortcomings, and no satisfactory singular resolution of this problem. We will revise this sentence on line 54 to express this viewpoint more clearly. **Time Series Structure:** Our remarks on time series being relatively unstructured primarily pertain to popular domains of applications like finance and climate. While these series have inherent structure, they are typically more stochastic in nature than data types like images, where deep learning has seen much more substantial success. Our aim wasn't to downplay the complexity of time series modeling. On the contrary, the lack of easily identifiable structures beyond basic ones like periodicity can itself be a challenge when developing forecasting models. We will ensure that our revised manuscript provides a clearer context to avoid any misinterpretations. **Pre-training in Time Series:** While pre-training has its applications in the time-series domain, its prevalence in NLP or vision is evidently much more pronounced. We don’t think it is fair to call our statement “arguable and gratuitous” without any specific evidence or references. **LLM’s Low-complexity Bias:** Our demonstration of LLM’s low-complexity bias in simple deterministic settings was to provide insight into why LLMs can perform well on real-world time series, which also contain low-complexity deterministic patterns, rather than to establish it as a universal phenomenon. Nevertheless, recent work such as [17] also supports our finding that LLMs exhibit a preference for simple completions with low Kolmogorov complexity. **Significance of Our Findings:** We strongly disagree that our LLM-based time series prediction method is “only a funny curiosity”. 
While it’s true that it is not suitable for time series with a large number of samples, there are plenty of real prediction problems with short lengths, as demonstrated by the DARTS and Monash datasets, on which LLMs perform among the best models. **Figure 7 History Length:** In the univariate setting, N-HITS and N-BEATS can learn a model by chunking up the time series into segments and training on the batched segments. When the history is short, few chunks are available, which is insufficient for effectively training a neural network. Smaller models with fewer learnable parameters (e.g. ARIMA) tend to do better in this scenario, and LLMs also do very well because no parameters are being trained on the data (the inference is zero-shot). **L168:** For the autoregressive prediction on the sequence, the tokenized numbers are just concatenated with comma separators, as presented in Section 4. To compute the continuous NLL of the time series, the total NLL is just the sum of the NLLs from each conditional distribution over a given number. Mathematically, $\mathrm{ContinuousLogLikelihood} = \log p(x_{1:T}) = \sum_{i=1}^T \big(\log p(x_i\in U_k \mid x_{1:i-1}) + n\log B\big) = \mathrm{DiscreteLogLikelihood} + Tn\log B$, where $U_k$ is bin $k$. [1] Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).
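The discrete-to-continuous conversion stated in the formula above can be sketched as follows (a hedged illustration; the function name and the uniform-within-bin interpretation are ours):

```python
import math

def continuous_log_likelihood(discrete_log_likelihood, T, n, B=10):
    """Recover the continuous log-likelihood from the discrete one.

    Each number is discretized to n digits in base B, i.e. into bins of
    width B**-n. Spreading probability mass uniformly within each bin turns
    mass into density, adding n*log(B) per observation, hence T*n*log(B)
    in total for T observations.
    """
    return discrete_log_likelihood + T * n * math.log(B)

# Sanity check: a uniform distribution on [0, 1) discretized with n=2
# decimal digits puts mass 0.01 in each bin, so for T=5 observations the
# discrete log-likelihood is 5*log(0.01) and the continuous one should be
# 0 (density 1 everywhere).
ll = continuous_log_likelihood(5 * math.log(0.01), T=5, n=2)
```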
Summary: This paper proposes a method to use LLMs for time series prediction. Precisely, they develop a way to tokenize and encode numerical digits and prompt LLMs to generate future numbers given the past. Their method is comparable to or exceeds the state-of-the-art time series models. Strengths: The paper proposes a very novel method to use LLMs for time series prediction. The problem is a high-impact problem, and their way of using LLMs is creative. I was excited when reading this paper. The method is very carefully designed and is very convincing. Each component of the method (e.g., digit representation, tokenization) has been carefully thought through and demonstrated with insightful experiments and analysis. The results are positive. Weaknesses: I would like to recommend this paper for acceptance, and I gave a positive score. The main reason that I didn't give a higher score is its presentation and the limitation of the proposed method. For presentation, I think the current structure needs improvement and some terminologies need clarification. The current presentation is a little bottom-up: 3.1 explains how to represent numbers and 3.2 discusses how LLMs process and generate sequences of numbers; then 4 explains how these things are put together to form the method. Honestly, I was lost until I read the end of page-5 (of a 9-page paper!) since I did not know why I had to read section 3 and I didn't know how 3.1 and 3.2 are connected. Maybe a better way is top-down: give a high-level overview of the method first and then dive into each phase/component for their details. If the authors do not want to change the structure dramatically, maybe at least add a paragraph explaining the overall picture of the method at the beginning. I would like the authors to give a precise clarification whenever they mention a term that has multiple meanings. Maybe I am wrong but "multimodality" can mean "the distribution has multiple modes" or "the data has multiple forms". 
In this paper, it seems that the authors have used it for both cases (3.1 and 5.5) but that made me confused. For limitation of the method, it seems that the proposed method requires something that modern blackbox LLMs do not like to offer: i.e., tokenization and log-likelihood computation. This reliance will restrict the impact of this work: this is fine, since I think the method is already very good and it is a reasonable limitation. But I would appreciate it if the paper has an aggregated section/paragraph discussing this limitation explicitly, instead of mentioning it here and there scattered across the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my questions and suggestions in the Weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see limitations in the Weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and supportive review. Your comments about improving the readability of the paper through section 3 are well received. We have added a paragraph at the start of section 3 giving a high-level overview of the method, and we have revised the experiments in Figure 2 to use the decimal encoding. Hopefully these changes will make it clearer how the findings in 3.1 and 3.2 connect to the method we propose. On multimodality, your interpretation is correct and we have somewhat overloaded the usage of multimodal. We will retitle section 5.5 “Connecting Time Series and Language Understanding” to avoid confusion. Regarding the limitations of using black-box LLMs, we have added a complete limitations section in the general response discussing this and other issues that we will add to the main text of the paper. As you point out, tokenization and log-likelihood are often not accessible for the models exposed only through APIs (for example GPT-3 provides log-likelihoods but GPT-4 does not), and this can be a hindrance to the method and make comparison more difficult. However, the proliferation of highly performant open source models like LLaMA-2 provides reason for optimism on this front. Given that your main concern, and the reason for your weak accept rating, was the limitations of black-box LLM APIs, we hope you will consider raising your score in light of our new LLaMA results, which do not suffer from the same limitations. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal Comment: Thank you for your response and new results. And thank you for considering my presentation suggestions. I would like to raise my rating to 7, with my trust that you will improve the presentation as you promised.
Summary: This paper studies the usage of large language models such as GPT-3 for zero-shot time series forecasting task. To construct the LLMs input sequence in line with how they are pre-trained, it proposes to encode the continuous time series as a string of numerical digits with spaces added between single digits. As a result, time-series forecasting can be transformed into a next-token prediction task suitable for LLMs. Results on standard univariate time-series datasets show that LLMs can perform comparably to time-series models such as ARIMA, etc. in a zero-shot fashion i.e., without any finetuning. Strengths: 1. The paper is well-written and most of the sections are easy to comprehend. 2. The studied problem is straightforward and interesting considering the capabilities exhibited by large language models. Weaknesses: 1. Comparison to other recent transformer-based time-series forecasting methods (listed in the references below, but not limited to) is missing. 2. The paper considers simple univariate time-series forecasting datasets, it would be helpful to consider more complex multivariate time-series datasets and how they should be encoded. 3. In Appendix C, details about how other existing time-series forecasting models/methods perform compared to GPT-3 are missing on the time-series data that was recorded after GPT-3's training data cutoff date. References: [1] Informer: Beyond efficient transformer for long sequence time-series forecasting, Zhou et al., 2021 [2] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting, Wu et al., 2021 [3] FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting, Zhou et al., 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Minor: 1. On page 4, I believe the figure references should be attributed to Figure 2 instead of Figure 3. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: I could not find a section addressing the limitations and negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review and comments on the paper. **Transformer based time series methods** As we mentioned in the general response, we have added an additional evaluation of our model using the LLaMA-2 LLM, which includes comparisons against Informer, Autoformer, and FEDformer, as you suggested. We find that our LLM-based time series prediction method compares favorably, with aggregate MAE values on par with or better than these transformer methods, but with completely zero-shot predictions. **Complex multivariate time-series** We have added a limitations section to the paper and we list this section in the general response. Our paper is focused on univariate time series, and as we discuss in the limitations section, multivariate time series are challenging to accommodate given the short context windows afforded by the LLMs. In the additional experiments on Autoformer benchmark datasets, we demonstrate that the LLaMA-2 LLM can perform well on multivariate data by considering it as a collection of univariate series. Developing methodology to accommodate all non-temporal correlations from the multivariate time series with LLMs is also a worthwhile endeavor, but we leave it to future work. **GPT-3 training cutoff date evaluation** We would like to direct you to Figure 9 in Appendix C, where we evaluate the LLM predictions on three time series datasets recorded after the GPT-3 training data cutoff date. We evaluate the predictions both qualitatively (showing forecasts and prediction intervals) and quantitatively (using the negative log likelihood). Measured by NLL, GPT-3 performs strongly on the three datasets, and the forecasts are consistent with performance on the other time series we evaluate in the paper. **Limitations** We have added a limitations section in the general response that we will be adding to the main text. As you mention, we believe the question of performing time series predictions with LLMs is of significant interest. 
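The channel-independent treatment of multivariate series mentioned above can be sketched as follows (a hypothetical helper; `forecast_univariate` stands in for any single-series forecaster, LLM-based or otherwise):

```python
def forecast_multivariate(series_by_channel, horizon, forecast_univariate):
    """Forecast each channel of a multivariate series independently.

    As noted in the rebuttal, this ignores cross-channel correlations,
    but it lets any univariate forecaster handle multivariate data by
    treating it as a collection of univariate series.
    """
    return {name: forecast_univariate(values, horizon)
            for name, values in series_by_channel.items()}
```

For example, plugging in a naive last-value forecaster (`lambda vals, h: [vals[-1]] * h`) produces an independent persistence forecast per channel.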
We hope that, given the new experiments performed and the clarification of your concerns, you will consider raising your score.
Rebuttal 1: Rebuttal: General Comment to All Reviewers We are thankful for the thoughtful and supportive feedback. We were happy to see many of the reviewers think that the goals and results of the paper are interesting, relevant, and useful. Few ML researchers would confidently assert that language models are zero-shot time series forecasters. This finding is simultaneously surprising and supported by diverse experiments on real-world datasets. To unpack this result, we perform a comprehensive set of experiments to understand why language models are well-suited to forecasting and what factors impact the performance. Because there were a few recurring themes in reviewer questions, we would like to make a few general points before responding individually. **Evaluation of Other Large Language Models** Several reviewers mentioned the need for evaluating other LLMs to better demonstrate the generality of the method. We would like to highlight Figure 7 (right) in the original submission’s main text, where we compare GPT-3 against many other LLMs that are accessible via open APIs, evaluated on the Darts datasets. Additionally, we are happy to share new results with the recent open-source LLaMA-2 models. These results, shown in Figure 1,2,3 of the attached PDF, demonstrate very strong performance, with metrics comparable to GPT-3 on Darts and significantly exceeding GPT-3 on Monash, making it the best performing model overall on those datasets. These results help demonstrate that our method is not limited to GPT-3, that more powerful LLMs will likely yield continued improvements on time series prediction, and that the limitations of working with LLMs locked behind APIs can be alleviated with open source models. **Additional Comparisons Against Transformer Time Series Models** Following suggestions from the reviewers, we have added comparisons against the recent Autoformer, Informer, Reformer, and FEDFormer methods. 
LLaMA-2 demonstrates strong performance against these models across the board. **Additional Limitations Section** Several reviewers suggested that it would be helpful to have a unified limitations section. We present the following limitations section below that we will add into the revised version of the paper: “While we have demonstrated that text-pretrained LLMs are surprisingly capable at time series extrapolation, our approach has a few notable drawbacks, resulting from limitations of the underlying LLMs. Most notably the relatively short context window of LLMs limits the amount of information that can be processed. Our approach is therefore best suited for time series that are short in length, univariate, and consist of a single instance. Longer time series are currently truncated in order to fit into the context window (though subsampling could also be used), and multivariate time series pose an additional challenge, as the length is multiplied by the number of covariates. One possible solution for multivariate time series is to make predictions on the distinct dimensions separately, ignoring the correlations between the channels, but this independence assumption is less than ideal. Computational cost can also be a limiting factor. While most time series models are lightweight and fast to run over a large number of series, LLMs are significantly more expensive, even when used solely for inference. Beyond the properties of LLMs themselves, relying on API access to black-box LLMs can lead to its own set of drawbacks, as we lack full control over tokenization and model alignment methods. As we show in the paper, correct tokenization is essential to good forecasting performance and safety-driven prompts and RLHF may directly interfere with forecasting ability. Black box models also increasingly do not provide likelihood values (GPT-3 does, GPT-4 does not) that are necessary for computing the continuous probability density functions for time series. 
While these are significant obstacles when using LLM APIs, we hope that open source LLMs such as LLaMA can alleviate many of these issues, enabling direct control over prompting, tokenization, and likelihood calculations.” Pdf: /pdf/5e729c23331255a3c34be1c608a2a403fb32b969.pdf
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Summary: The authors propose a way to use autoregressive large language models to solve time series problems. Several tricks make the time series inputs amenable to LLMs (e.g., separating digits by a space to circumvent tokenization problems). GPT-3 outperforms existing methods on several time series benchmarks. LLMs can also perform imputation on missing data in the time series. Strengths: The paper considers the interesting setting of applying pre-trained LLMs to "classical" (i.e., 1-D) time series problems. The tricks introduced to format the time series as input to an LLM are interesting in their own right, though I am not fully sure if they are novel or not. The authors also conduct thorough ablations on these tricks to see how they might affect the performance of GPT-3 on these types of problems. Weaknesses: **Costs of using LLMs** One crucial discussion missing from this paper is the inference cost. If using GPT-3 or GPT-4, then this is a literal payment. Even for open-sourced LLMs, I would imagine that the inference cost is much higher than that of other time series models. Similarly, using LLMs for imputation is much more expensive than standard methods (e.g., interpolation). The authors should acknowledge this weakness clearly. **No comparison to transformer time series models** The authors ought to include comparisons to other transformer time series models, such as the Informer and Autoformer, which were clear leaps forward in the field. It's hard to tell if the gains observed in this paper are from the massive amounts of pre-training data (as suggested by Section 3.2?) or from the self-attention mechanism, which does seem well-adapted to time series. Please also compare the size of GPT-3 to those models. This is my main reason to not give it an accept rating. **Writing**: Please define MAE, CRPS, etc before using them in the text. The colors used for "Ground Truth" and "GPT-3" are too close to each other in Figure 3, so it is hard to read. 
The authors should include a discussion of how positional embeddings make it easier for an LLM to interpret the input data and operate across the time domain. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: **I don't understand the argument in Section 3.2: LLMs extrapolate patterns in numerical sequences.** The discussion in Section 3.2 is confusing to me. The authors suggest that LLMs can handle sequences with linear growth, exponential growth, etc. What's confusing is that these are all fixed functions that have no dependence on the previously occurring values. Something that may be more interesting is e.g. the Fibonacci sequence, where the next value is a function of past values. I don't really get the point of Figure 3 + Table 1 because these are just simple functions, and if I wanted to fit them, I would directly fit them using the appropriate function class (i.e., not huge, overparametrized neural networks). I also don't really get why it is a given that LLMs can do "simple arithmetic operations". Although these are simple to us, they may be complicated functions to learn in the embedding space, and they depend on the tokenization used. Note that the cited paper [17] is focused on low-complexity sequences in the sense of Kolmogorov complexity, which doesn't necessarily correspond to simple arithmetic operations. **Can the authors motivate their dataset choices?** The transformer-related time series models don't use the DARTS dataset. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: **Not enough LLMs** The writing indicates that the authors are trying to make the broader claim that all LLMs (that can perform in-context learning) can serve as time series models. 
To actually substantiate this claim, they need to test other LLMs besides GPT-3 and GPT-4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thoughtful and detailed review. In our understanding, your main concerns about the paper are the need for testing additional LLMs, the lack of comparison against Informer and Autoformer, and the purpose of the argument that we present in section 3.2. **Evaluating other LLMs** In fact, we did evaluate other available LLMs, shown in Figure 7 (right) and in Appendix B.3, though to further address your question, we have added the recent LLaMA-2 70B model to our investigations and find that in many cases it outperforms GPT-3 Davinci. These comparisons are shown in Figures 1, 2, and 3 of the attached PDF. Regarding your point about transformer methods like Informer and Autoformer, we have added comparisons against these methods shown in Figure 1 in the attached PDF. We compare LLaMA-2 70B with Autoformer, Informer, Reformer, and a transformer model on 5 of the Autoformer benchmark datasets with two different prediction lengths. We find that LLaMA-2 70B performs on par with Autoformer and better than Informer in aggregate. We were surprised to find that the model performed this well out of the box without any hyperparameter tuning or modification of the sampling temperature. While we think this performance is quite impressive, it’s worth noting that the LLaMA model is about 10,000 times larger than the Autoformer models it is compared against and appears to experience degrading performance with longer sequence lengths; on the other hand, it is predicting completely zero-shot and is capable of performing all kinds of text-generation tasks in addition to forecasting. **Inference costs** Regarding inference costs, because we do not perform any training of the base LLM, these are actually quite manageable. A single forecast with maximum history length costs only 4 cents with the largest GPT-3 Davinci model. 
In general, these costs are very low, except when one needs to draw a very large number of samples from the distribution over trajectories. In total, we have spent less than 1000 dollars on GPT-3 queries for prototyping our method and evaluating on the many datasets. With the ability to run models like LLaMA-2 locally, this cost decreases even further. **Section 3.2** Regarding Section 3.2 and the deterministic synthetic time series, these experiments provide a controlled environment for uncovering which patterns are easier for the LLMs to identify and extrapolate, and provide an understanding of why this is the case. Ultimately, we do not care about the model’s performance on these deterministic signals for their own sake, and of course one could fit these functions very well by directly parameterizing a function in the right family. Although these functions are easy to express as a mathematical formula, as you allude to, that does not mean that the functions are easy to express at a token level inside an LLM, and even less so when the task is combined with identifying the recurring pattern from the history. Hence, the ability to identify and extrapolate these signals is a nontrivial capability of the language models and we try to make sense of how this is possible in simple cases through the arithmetic operations that are entailed. While the discussion is focused on Kolmogorov complexity in [17], the experiments on the low complexity bias of LLMs actually measure complexity with respect to a reduced language of expressions that can be formed from an expression tree with constant, addition, multiplication, and integer division primitives. These primitives coincide with those we discuss in Section 3.2 for simple pattern extrapolation. Given that we have performed the necessary experiments to specifically address your concerns, we hope that you will consider raising your score. 
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the additional experiments as well as the limitations section in the general response. My original concerns about inference cost have proven true, and the limited context window of LLMs does indeed make them impractical for real-world use. I took some time to read through the other reviewers' comments and realized I was not familiar with PromptCast (mentioned by Reviewer URMj). After reading that paper, I feel that the interesting insights from this paper have already been shown in that work. Namely, I think the main contribution of this work is to show that LLMs can perform time-series forecasting. The authors do not cite this work and instead imply theirs is the first to consider using LLMs for this setting. PromptCast was posted >6 months before submission, so I am curious to hear why the authors did not discuss it. I found their response to Reviewer URMj to be unsatisfactory, because it primarily focused on the idea that this paper advances some understanding of why LLMs can do time series forecasting, which I take issue with (see below). My concern with Section 3.2 remains. I don't think that studying LLM behavior on these highly simple primitives is indicative of their behavior on more complex data (like the Fibonacci sequence I suggested). Section 3.2 gave me little to no additional understanding of why LLMs can operate on time series. If the authors would like to make this point, I suggest they study more complex settings. Given the concerns with PromptCast and Section 3.2, I won't be raising my score on this paper. --- Reply to Comment 1.1.1: Title: Addressing PromptCast and Section 3.2 (part 1) Comment: **PromptCast** We did not discuss PromptCast in our original submission because we were not aware of it at the time. From a methodological standpoint, it is worth noting that our approach to forecasting with LLMs is very different from the method introduced in PromptCast. 
Let’s use the series of numbers [1,2,3] as example inputs to the model and [4,5] as example outputs. PromptCast takes raw values and inserts them into a text prompt that is specifically designed for a given dataset. For example, the following input and output prompts (taken verbatim from the paper) are used for the “City Temperature” dataset: Input: “The average temperature of was 1, 2, 3 degree on each day. What is the temperature going to be on tomorrow?” Output: “The temperature will be 4 degree.” Using our method with precision 3 would instead have the following inputs and outputs: Input: “0 , 5 0 , 1 0 0” Output: “1 5 0” As you can see, the two methods are quite distinct. We used the very specific formatting in our paper because it led to the best results. We did not find additional prompting necessary or helpful, and we found that rescaling and paying careful attention to tokenization was absolutely essential to good performance. Our method also has the additional benefit of being directly applicable to any input dataset without dataset-specific prompt engineering, which is evidently necessary in the PromptCast framework. Our method is also more robust to the scale of the input numbers and doesn’t require any knowledge of their associated units. Beyond clear methodological differences, the approaches to evaluation are very distinct between our work and PromptCast’s. Firstly, their metrics are purely deterministic and therefore do not effectively capture whether language models have learned a proper distribution over the continuous values. Deterministic metrics are known to be problematic in time series analysis, because incredibly naive forecasts can often perform well when considering only point estimates [1,2]. For example, simply predicting the last value in the input or the mean of the input values can perform better than many “state-of-the-art” methods under deterministic predictions. 
For this reason, we spent a significant fraction of our submission assessing the ability of language models to fit the continuous time series distributions. This analysis, and the associated strong performance of LLMs on distributional metrics like negative log likelihood (NLL) and continuous ranked probability score (CRPS), is unique to our work and is absent from PromptCast. In addition to the lack of probabilistic evaluation, PromptCast also provides much weaker evidence of generalization across datasets, examining only 3 datasets, while we evaluate our method on well over 20 unique datasets, including standard benchmark datasets like DARTS, Monash, and the Informer dataset. We also anticipated concerns about data leakage (potential memorization of test data by the LLM) and provided additional experiments demonstrating strong LLM performance on datasets collected after the training cutoff, while PromptCast performs no such analysis and therefore offers no reassurance that its results are not simply a side-effect of data leakage. It’s notable that Reviewer URMj was the one who mentioned PromptCast as an alternative method and still gave our submission a positive rating. Despite surface-level similarities, we think there are fundamental differences between PromptCast and our submission and that the unique contributions of our submission make it worthy of acceptance.
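For readers unfamiliar with the serialization contrasted with PromptCast above, a minimal sketch of the rescale-then-digit-tokenize idea follows. The function names and the target range are illustrative choices, not the paper’s exact implementation; they simply reproduce the [1,2,3] → “0 , 5 0 , 1 0 0” example from the earlier comment.

```python
def rescale(values, target_max=100.0):
    """Affine-rescale a series to [0, target_max] so digit counts stay small.

    Illustrative: maps [1, 2, 3] to [0.0, 50.0, 100.0].
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * target_max for v in values]


def serialize(values):
    """Space-separate the digits of each value, with ' , ' between values,
    so a tokenizer that would merge multi-digit numbers sees one digit per token."""
    return " , ".join(" ".join(str(int(round(v)))) for v in values)


def deserialize(text):
    """Invert serialize: drop intra-value spaces, then split on commas."""
    return [int("".join(chunk.split())) for chunk in text.split(",")]
```

With these helpers, `serialize(rescale([1, 2, 3]))` yields the digit string shown in the example above, and `deserialize` maps a sampled completion such as `"1 5 0"` back to 150 (before undoing the rescaling).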
Summary: The paper presents an analysis of large language models for the task of time series prediction. The paper claims that these large language models, being very generalizable, are able to work very efficiently with numbers in these tasks, since each digit prediction is conditioned on the previously predicted digits and hence acts in a way similar to hierarchical softmax to predict the final output. Strengths: Even though a lot of the work presented here is known to be useful, less is known about how these models work, and performing a closer analysis with simplified datasets and well-defined examples gives a much better understanding of what works and what does not, helping to figure out areas of improvement. For example, consider the approach of splitting numbers by digits: previous works have achieved similar results by removing multi-digit entries from the vocabulary of the model, which ultimately has the same impact as this paper. But this paper presents intuition around how this might help the model to predict one digit at a time and be more performant, using the analogy to hierarchical softmax. The well-defined set of experiments shows that this approach works very well to improve the ability of LLMs to work with numbers, and the analysis on time series datasets confirms the same. Weaknesses: There are already papers showing that: 1. Mathematical ability of LLMs improves significantly if we process digit by digit (approached generally by removing multi-digit entries from the vocab). 2. Application of LLMs on time series datasets which generalizes better than regular approaches has also been shown. See "PROMPTCAST: A NEW PROMPT-BASED LEARNING PARADIGM FOR TIME SERIES FORECASTING" So there isn't much novelty provided by this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some typos - line 59: "from from" -> "from" - line 63-64: "without any major changes by directly numbers as strings of digits." 
--> needs to be reworded Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Since we depend on the LM to perform the forecasting, it is much harder to understand why a particular forecast was chosen. There are always multiple ways to interpret a series for completion, and relying on the LM to follow Occam’s razor is dangerous. On the other hand, traditional time series forecasting methods are much more interpretable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the supportive review and your comments on related work. **Comparison to related work** Though other papers have investigated the use of LLMs in basic arithmetic (e.g Goat [1]) and forecasting (e.g. PromptCast), we believe the findings of our paper are unique in their scope and focus. We delve deep into understanding why LLMs can extrapolate on time series, emphasizing their bias for low-complexity completions, ability to identify and execute arithmetic operations on numerical sequences, and superior representation of uncertainties. We also uncover scaling results and the effect of alignment methods which are not present in related work (e.g. PromptCast). **Following Occam’s Prior** Regarding your comment on Occam’s razor, of course there are many consistent completions of a time series (just as there can be many consistent extrapolations of a numerical pattern). Following the Occam’s razor prior explored in [17] does not mean that we should _select_ the simplest trajectory consistent with the observed history, but rather that we should form a predictive distribution and _sample_ according to their posterior likelihoods using this prior conditioned on the observations, resulting in simpler completions being sampled more often. This process is formalized through Solomonoff induction, whereby the Occam’s razor prior (also known as Solomonoff prior) $p(x) = 2^{-K(x)}/Z$ (where $x$ is the complete time series and $K$ is the prefix Kolmogorov complexity) is used as a prior to form a posterior $p(x|x_{[:t]}) = p(x_{[:t]}|x)p(x)/p(x_{[:t]}) \propto 2^{-K(x)}$ for all series $x$ that match the history. We believe that LLMs capture a very rough and approximate version of this Solomonoff prior. 
With the deterministic time series in Figure 3, these signals can be explained by a generating program of short length, and other programs which correctly fit the previous history but make substantially different predictions on the future require many more bits to express. These solutions are still represented in $p(x|x_{[:t]})$ but are just exponentially less likely to get sampled (if we sample enough we should recover them). On the other hand, for real time series that have noise, the shortest generating program for the data is still very large since it must encode the noise. Other explanations of the data are therefore close in size and sampled frequently in the posterior distribution under the Occam prior, which reflects what we see in the LLM predictions (e.g., Figure 6), which have considerably more variation. [1] Liu, T., & Low, B. K. H. (2023). Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks. arXiv preprint arXiv:2305.14201. --- Rebuttal Comment 1.1: Comment: Thanks for your clarifications! The paper is structured in a way that it focuses on presenting LLMs as a viable solution for time series forecasting, and not what is mentioned above. The overlap with PromptCast is significant and IMO needs to be better acknowledged in the paper. In #2, I wanted to focus on the fact that when we compare the advantages and disadvantages of traditional and LLM-based approaches for time series forecasting, I agree that LLMs are better in many aspects, but we also need to point out that the interpretability of the outputs generated through traditional methods is a major advantage for these methods, one where LLMs fall significantly short. --- Reply to Comment 1.1.1: Title: Follow-up notes Comment: Thank you for your engagement and thoughtful suggestions throughout the review process. 
**Overlap with PromptCast** We have provided a more in-depth comparison between PromptCast and our method in our latest general comment (entitled “Our Submission’s Contribution”), which we hope you will read over and consider. We show that PromptCast performs much worse than our approach in practice, and we detail many important differences both at a high level and in the methodological details. **Interpretability of LLMs vs traditional methods** We agree that interpretability is a major appeal of traditional methods with a small number of parameters (e.g. ARIMA). It’s worth noting that many popular deep methods for forecasting (e.g. DeepAR, TCN, PatchTST) already sacrifice this interpretability for predictive performance and yet have been widely adopted. Our question-answering experiments in Section 5.5 also suggest another mechanism for achieving interpretability with LLM predictions. We show that LLMs can be queried in English to recognize deterministic trends. It is therefore also possible that LLMs can be effectively prompted (or fine-tuned) to provide useful explanations of their predictions or deconstruct high-level trends.
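As a side note, the Solomonoff-style posterior $p(x \mid x_{[:t]}) \propto 2^{-K(x)}$ discussed earlier in this thread can be illustrated with a toy hypothesis set. The complexities below are made-up stand-ins for description length (true Kolmogorov complexity is uncomputable); the point is only that consistent-but-contrived completions are exponentially down-weighted rather than eliminated.

```python
def occam_posterior(hypotheses, history):
    """Posterior over full series: weight each hypothesis consistent with the
    observed history by 2^-K and renormalize.

    hypotheses: list of (bits, series) pairs, where `bits` stands in for the
    length of the shortest program generating `series`.
    """
    consistent = [(k, x) for k, x in hypotheses if x[: len(history)] == history]
    z = sum(2.0 ** -k for k, _ in consistent)
    return [(2.0 ** -k / z, x) for k, x in consistent]


# Two programs fit the history [1, 2, 3]; the simple one dominates the posterior.
hyps = [
    (3, [1, 2, 3, 4, 5]),     # "start at 1, add 1": short description
    (10, [1, 2, 3, 100, 0]),  # also fits the history, far longer description
    (5, [9, 9, 9, 9, 9]),     # inconsistent with the history: excluded
]
posterior = occam_posterior(hyps, [1, 2, 3])
```

Here the simple completion receives weight $2^{-3}/(2^{-3}+2^{-10}) = 128/129 \approx 0.992$, so sampling usually, but not always, returns it, matching the rebuttal's point that alternative explanations remain in the posterior.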
Summary: This paper proposes that LLMs like GPT-3 can be employed as time series forecasters due to the similarities between text and time series such as periodicity. The authors show how an LLM can be used to model multimodal distributions through a hierarchical discretization of the output space and a uniform distribution in each discrete bucket, enabling training of the LLM with negative log-likelihood. The hierarchical discretization also allows the LLMs to circumvent the requirement of a fixed output range with binned values and a large number of bins for an expressive output space. Experiments show that GPT-3 fitted with the proposed modifications can forecast time series (e.g. linear, periodic, exponential) better than baselines such as TCN and ARIMA. The authors also explore optimal ways to encode time series data in LLMs and find that using base 10 and using spaces to individually tokenize digits in numbers results in better performance. When benchmarked against existing models, the proposed approach performs competitively with top baselines in the DARTS and Monash datasets. Further experiments were conducted to show that LLMs as time series forecasters are sample efficient, have performance that scales with model size, and handle missing data well. Strengths: The approach to using an LLM as a time series forecaster is innovative, and the authors found ways to elegantly unlock this capability without the need for extensive effort (e.g. retraining or finetuning). This finding will be interesting and relevant to a wide audience in this community. Experiments were extensively conducted to show optimal approaches for LLMs’ time series forecasting ability and other aspects such as sample efficiency etc. 
Weaknesses: Limitations of the proposed approach were not discussed in the paper Technical Quality: 3 good Clarity: 3 good Questions for Authors: a) The experiments seem to be mostly in a zero-shot setting, have the authors studied the few-shot setting where few-shot examples are in the prompt and how would that affect forecasting performance? b) What prompts, if any, are given to the LLM, other than the initial time series data and what effects would the choice of prompts have on its performance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: No, a separate section addressing the approach's limitations and potential societal impact would be recommended. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
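The hierarchical discretization the summary above describes can be made concrete: each successive digit refines the value’s bin by a factor of the base, so for a value scaled into $[0,1)$ the continuous density is the product of per-digit probabilities times $\text{base}^{n}$, assuming a uniform distribution inside the final bin. A toy sketch of the resulting NLL follows; the per-digit probabilities are illustrative inputs, not outputs of a real model.

```python
import math


def digit_nll(digit_probs, digits, base=10):
    """NLL of a value in [0, 1) written as the string `digits` after the point.

    digit_probs: one dict per digit position mapping digit -> probability.
    Each extra digit narrows the bin by a factor of `base`, hence the base**n
    factor converting a bin probability into a density (uniform in the last bin).
    """
    log_density = sum(math.log(p[d]) for p, d in zip(digit_probs, digits))
    log_density += len(digits) * math.log(base)
    return -log_density


# Sanity check: uniform digit probabilities give the uniform density on [0, 1),
# i.e. density 1 and NLL 0, regardless of how many digits we keep.
uniform = {str(i): 0.1 for i in range(10)}
nll = digit_nll([uniform, uniform, uniform], "425")
```

With uniform digit probabilities the density is $0.1^{3} \cdot 10^{3} = 1$, so `nll` is 0; sharper digit distributions yield densities above 1 (negative NLL) near likely values.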
Rebuttal 1: Rebuttal: Thank you for your supportive review! To answer your questions, we did not study the few-shot setting, but it would be a great direction for future work. We suspect that both few-shot prompting and fine-tuning could substantially improve forecasting performance. One challenge of few-shot prompting, however, is the limited context window of current LLMs. In the zero-shot setting, we can use the full context window to extend the number of past observations used by the model. In the few-shot setting, this same context window might get consumed by the few-shot examples, limiting the number of past observations. All the results in the paper that use GPT-3 or LLaMA models are obtained without any natural language prompting. We experimented with prompting with a description of the dataset, but the initial results showed no improvement in performance. For ChatGPT and GPT-4, we used the following prompt (which was necessary for good forecasts): "You are a helpful assistant that performs time series predictions. The user will provide a sequence and you will predict the remaining sequence." We appreciate the question about limitations. There are several key limitations that are worth noting, and we have added them into a unified section shown in the general response, which will be integrated into the paper. The limitations of our method are largely the limitations of LLMs in general, specifically limited input sequence length (small context window), subsequent challenges in handling multivariate data, and computational cost. Additionally, black-box APIs offer limited control over system prompts, RLHF, tokenization and likelihoods. As each of these limitations is an active area of research in itself, we expect that all progress should also improve the practicality of our method. 
By connecting LLMs with time series forecasting, we have also opened new avenues in evaluating progress in LLM performance on a task that requires understanding complex, quantitative patterns. --- Rebuttal 2: Title: Acknowledgement of Rebuttal Comment: I have read the authors' rebuttal and decided to keep my original score.
Summary: This paper discusses a novel approach to time series forecasting using large language models (LLMs) like GPT-3. The authors encode time series as a string of numerical digits and view time series forecasting as next-token prediction in text. They find that LLMs can extrapolate time series at a level comparable to or exceeding the performance of several classic time series models. The authors argue that LLMs excel at extrapolating deterministic numerical sequences because of biases for simple explanations introduced by their text pretraining. The authors suggest that this approach has potential and enables new capabilities, such as integrating non-numerical text into the input or output. Overall, the paper studies an interesting problem setting that directly utilizes the in-context learning ability of Transformers to conduct zero-shot forecasting. In particular, the pre-trained transformer model (e.g., LLMs) can give promising performance. However, the analysis in the paper is not comprehensive enough. First of all, the paper fails to answer why pre-trained LLMs can yield better performance. The arguments in the paper either focus on RNNs (e.g., L121) or directly assume LLMs can perform extrapolation on time series sequences (e.g., L144 and L154). Based on the current presentation, it looks like the authors design a specific feature engineering method (digit representation) that can properly utilize the LLMs' extrapolation or in-context learning ability. If that is the case, the most important research question, why pretrained LLMs can conduct arbitrary time series extrapolation better than other models, is not answered or well discussed. From my perspective, if the major innovation of the paper is a feature engineering method, the current numerical performance would not be strong enough to lead to a publication in top machine learning conferences like NeurIPS. 
Moreover, the experimental settings in the paper contain learnable parameters that are tuned based on the forecasting error. From my perspective, I would not treat it as a zero-shot learning setting, since the parameters can be viewed as trained. The authors also don't provide code to help reviewers verify the results of experiments. In summary, this paper studies an interesting problem: utilizing pretrained LLMs to help time series forecasting. However, based on the current presentation, the analysis and the numerical performance are not strong enough. Strengths: This paper studies an interesting problem: utilizing pretrained LLMs to help time series forecasting Weaknesses: Please see my comments in the Summary section. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see my comments in the Summary section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The authors don't directly discuss the limitations of the proposed methods in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your review. **Understanding why LLMs can perform extrapolation on time series:** We have demonstrated empirically that LLMs indeed can perform extrapolation on both deterministic and stochastic time series in Sections 3.1 and 3.2. We explicitly pose the ability to extrapolate simple deterministic trends as the ability to identify and implement simple arithmetic operations on the numbers in the sequence, for example addition by a constant for linear growth, multiplication for exponential growth, or copying for periodicity. Prior works [4,5] have shown that larger LLMs are increasingly capable of performing addition and multiplication at the token level, and the repetition bias is also a well-studied phenomenon [3]. We also relate these well-known phenomena with recent findings that LLMs have a preference for numerical sequences generated by a small number of arithmetic operations [2]. While we devoted a significant portion of the paper to investigating why LLMs are well suited to time series prediction, ultimately a good theory will hinge on a deeper understanding of the inductive biases of the transformer architecture and the effects of extensive generative pretraining. These questions are among the most important open problems in the field of deep learning at large and extend beyond the scope of our paper. **Comprehensiveness of the analysis** RNN vs LLM for Figure 2 experiments (fitting continuous distributions): For Figure 2 we chose to use an RNN as the autoregressive model over the digits in order to disentangle uncertainty representation from a model’s ability to perform in-context learning. While we can train a small RNN or transformer directly to fit the distribution via gradient descent, the practical time series setting requires adapting to the distribution purely through in-context learning, a considerably more challenging task. 
In Section 3.1 we wanted to evaluate this parametrization of continuous densities in isolation. We have previously run GPT-3 on this identical problem with in-context learning and the results are similar (though slightly worse due to the burden of only using in-context learning). **Zero-shot setting:** In general we find that there is little value in tuning the hyperparameters of our LLM time series prediction method on the data at hand. Using a fixed set of hyperparameters reproduces very nearly the same performance, and in the additional LLaMA-2 experiments we always use the fixed sampling parameters temperature (T) = 1.0, nucleus size (top_p) = 0.9, and 3 decimals of precision. Regardless, by zero-shot we simply mean the setting without additional data besides the given series, for example no other instances of completed or uncompleted time series, and we feel that this usage is consistent with how zero-shot is used in computer vision and NLP (e.g., a zero-shot image classification method that ingests a test image and performs test-time fine-tuning [1]). **Code:** We fully intend to release the code for our method upon acceptance of the paper. **Limitations:** While we do discuss several limitations within the paper (such as the limited context window, and the challenge of multivariate series), we recognize that it would be better to have these limitations aggregated into a dedicated limitations section. We have listed this new limitations section in the general response, and we will add it into the revised version of the paper. [1] Shu, Manli, et al. "Test-time prompt tuning for zero-shot generalization in vision-language models." Advances in Neural Information Processing Systems 35 (2022): 14274-14289. [2] Goldblum, Micah, Marc Finzi, Keefer Rowan, and Andrew Gordon Wilson. "The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning." arXiv preprint arXiv:2304.05366 (2023). [3] Holtzman, Ari, et al. 
"The curious case of neural text degeneration." arXiv preprint arXiv:1904.09751 (2019). [4] Yuan, Z., Yuan, H., Tan, C., Wang, W., & Huang, S. (2023). How well do Large Language Models perform in Arithmetic tasks?. arXiv preprint arXiv:2304.02015. [5] Liu, T., & Low, B. K. H. (2023). Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks. arXiv preprint arXiv:2305.14201. --- Rebuttal Comment 1.1: Title: Thanks for your comments. Comment: I would like to express my gratitude to the authors for acknowledging and taking into account my concerns. After carefully evaluating the responses and considering the feedback provided by other reviewers, my rating remains unchanged for now. As the discrepancy between the authors and myself lies in a subjective aspect of novelty/contribution, I would like to leave it to the reviewer-reviewer/AC discussion in order to align my understanding with the review team. A final decision will be made following further discussions with the other reviewers and the AC.
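As background for the fixed sampling parameters mentioned in the rebuttal above (temperature 1.0, top_p 0.9): a temperature of 1.0 leaves the model’s output distribution unchanged, while top_p applies nucleus sampling, which truncates the distribution to the smallest set of tokens whose cumulative probability reaches 0.9. A minimal sketch over an illustrative next-token distribution (the probabilities are invented for the example, not tied to any particular model or API):

```python
def nucleus_filter(probs, top_p=0.9):
    """Keep the most probable tokens until their cumulative mass reaches
    top_p, then renormalize; sampling proceeds from the truncated set."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cum += p
        if cum >= top_p:
            break
    z = sum(p for _, p in kept)
    return {token: p / z for token, p in kept}


# Illustrative next-digit distribution: the low-probability tail ("4") is cut.
dist = {"1": 0.5, "2": 0.3, "3": 0.15, "4": 0.05}
filtered = nucleus_filter(dist, top_p=0.9)
```

Truncating the tail this way suppresses low-probability digit sequences that would otherwise occasionally derail an autoregressive forecast.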
Neural Lighting Simulation for Urban Scenes
Accept (poster)
Summary: This paper proposed LightSim, a lighting-aware camera simulation system for improving robot perception. This system built relightable digital twins from real-world raw sensor data and enabled applications, e.g., actor insertion, modification, removal, and re-rendering. Strengths: 1. Propose a complete system for outdoor illumination estimation of urban driving scenes and its application. 2. Leverage physics-based rendering to enable controllable simulation of the dynamic scene. 3. Better results than the baseline. Weaknesses: 1. The workload of the paper is large, but it seems to be an integration of previous works and lacks innovation. e.g., [25] for scene reconstruction, DeepFillv2 for panorama image inpainting, [17] for neural deferred rendering. 2. The motivation for neural deferred rendering needs to be clarified. In the right of Fig. 3, I^src, E^src, and E^tgt are known, I_buffer, S^src, and S^tgt are generated by Blender, and digital twins are estimated, then I^tgt can be directly rendered by Blender. Why train an extra network for rendering? 3. What is the material map in L235? Is it Blender's default Principled BSDF, with the vertex color linked as the base color? 4. Digital Twins is described differently in Fig. 2 and L126-127. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see the Weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: see the Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful reviews and comments. We address the concerns as follows. **Q1: Novelty - Integration of previous works** \ **A1:** Our paper’s novelty lies in developing a neural lighting simulation system for self-driving, which is critical for thorough evaluation and more robust training of robot autonomy before safe deployment. To our knowledge, we are one of the first to perform relighting for dynamic urban scenes (SDV and other actors moving in the scenario). LightSim significantly outperforms the state of the art, producing high-quality photorealistic driving videos under a wide range of lighting conditions. We strongly believe LightSim is a critical and innovative step towards realistic and scalable lighting simulation for robotics. Through this paper, we also hope to convey the importance of leveraging available real data (digital twins) and propose a new regime of exploiting digital twins for lighting simulation. While some individual components (reconstruction for dynamic scenes [75], environment map in-painting [59] and LDR-HDR lifting [69], G-buffers to aid learning [52]) have been studied before, why they are used and how they are used are all carefully designed in the context of self-driving. Specifically, different from existing works that conduct lighting estimation from a single image (limited FoV), which is ill-posed and challenging, we fuse all available data (i.e., all six cameras and sun angles based on time and GPS) to reduce ambiguity (see Table A4). To overcome the lack of real-world driving scenes captured under different lightings, we propose a “novel data pair training scheme” [Reviewer 6GXP] that leverages the digital twins built from the real world to generate synthetic paired images under different lighting conditions. These diverse synthetic lighting-variation data are then combined with real data to train the neural deferred rendering network. 
The resulting framework is generic and interpretable and has various capacities including actor insertion, removal, modification, and rendering from new viewpoints, all in a lighting-aware manner. We believe LightSim is not just a simple extension or integration of previous works. Also, exploiting existing algorithms to realize a novel idea does not mean there is no technical contribution [Reviewers 5Bwf and 6GXP]. We hope the reviewer can acknowledge this. We look forward to follow-up discussions. **Q2: Clarification of motivation for neural deferred rendering. Why not directly render using Blender?** \ **A2:** Recovery of perfect geometry, material and lighting in urban driving scenes is a very challenging task which requires strong priors and real-world data under different lighting conditions. To mitigate this issue, we propose to generate synthetic paired lighting variations from our imperfect digital twins and use a neural deferred renderer to generate relighting videos in a more realistic manner. Specifically, the neural deferred rendering network learns to relight (guided by coarse Blender renderings) while maintaining realism by taking the original real image as input. The direct Blender rendering results have noticeable artifacts as shown in **Rebuttal Figure R3 and R6**. **Q3: Material map in L234 (default Principled BSDF and link vertex color as base color?)** \ **A3:** Yes, we use the default Principled BSDF in Blender and link the vertex color as the base color. For all dynamic assets, we set metal=0.5, roughness=0.2, specular=0.5, clearcoat = 1.0, clearcoat roughness = 1.0. For the background asset, we set metal=0.0, roughness=0.7, clearcoat = 0.0, specular=0.5. The other material parameters are initialized with Blender defaults. We will include the details in the revision. **Q4: “Digital Twins” is described differently in Fig. 2 and L126-127.** \ **A4:** Both Fig. 2 and L126-127 describe lighting-aware digital twins as containing geometry, material and lighting. 
We hope the above responses address the reviewer’s concerns, and we look forward to follow-up discussions if additional clarification is required. --- Rebuttal Comment 1.1: Title: Looking forward to follow-up discussions! Comment: We thank the reviewer for taking the time to check our responses. We hope our answers and additional results address your concerns well. Specifically, - Q1/A1: We clarified the novelty of LightSim. - Q2/A2: We clarified the motivation of neural deferred rendering and explained why we do not directly render using Blender (see **Rebuttal Figure R3 and R6**). - Q3/A3: We provided details of the material map and will include them in the revision. - Q4/A4: Both Fig. 2 and L126-127 describe lighting-aware digital twins as containing geometry, material and lighting. Please let us know if you have any additional or follow-up questions. We will be more than happy to clarify them. Any follow-up discussions are highly appreciated!
Summary: Generating training data for self-driving cars is a challenging task due to the difficulty of capturing real-world scenarios. While video games have been used to generate training data, there exists a domain gap between virtual and real-world environments. To address these challenges, the authors of this paper propose an approach that generates composable and relightable scenes. The method involves a multi-stage process aimed at training a dynamic Neural Radiance Fields (NeRF) model, which decomposes a scene into static and dynamic components. Furthermore, the illumination in the scene is learned as a high dynamic range (HDR) sky dome. The proposed approach tackles several sub-problems to achieve its goal. Firstly, panorama reconstruction is performed from the input data. Additionally, in-painting techniques are employed to fill unobserved areas caused by occlusions. An LDR-to-HDR estimation is carried out to enhance the illumination information. Supervision signals are provided by leveraging known sun angles and intensities from GPS data and timestamps. These signals aid in training the model to accurately estimate the scene illumination. To generate lighting-relevant data, a non-differentiable rendering step is performed, producing essential information such as surface normals, depth, position, and ambient occlusion, along with a render using a single base material. A 2D U-Net utilizes the render buffer and a target illumination to guide the relighting or editing of a source image. This process allows for consistent relighting of the scene and offers flexibility to insert new objects, as well as move or remove dynamic objects. To train the system, the authors introduce a novel data pair training scheme. Strengths: - I like the fusing of all available data, especially the supervision from sun angles based on time and GPS data. This is a really strong regularization loss in outside scenes, as the illumination is dominated by sunlight. 
- The paired learning scheme for the neural renderer is a good idea and seems to provide good results. - The editing produces plausible relighting and edits that surpass the quality of purely synthetic data from common game engines. Weaknesses: - Judging illumination without any specific reference is hard for humans. This is apparent in Fig. 4, where the relighting is hard to judge for plausibility. I suggest taking two images: Source 1 under illumination 1 and Source 2 under illumination 2. Then generate Source 1 under Illumination 2 and vice versa. This way, there exists a reference point for each illumination, and due to the similarity in scenes, it is easier to judge if the illumination is plausible. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The geometry from marching cubes often has obvious artifacts, which are especially apparent when lighting the mesh (random shadowed triangles). Have the authors noticed these effects? Is the neural rendering removing them? Is this due to the I_render|E_src -> I_real step? - The authors propose to only model view-independent diffuse colors instead of the view-dependent typical NeRF one. Did the authors notice any artifacts around reflective surfaces, which degrade the meshes severely? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful reviews and comments. We are glad that the reviewer appreciates the novelty and contribution of our work. We address the concerns as follows. **Q1: Judging illumination is hard for humans without a reference** \ **A1:** Thanks for the suggestion. We agree that it is challenging for humans (even experts) to judge illumination without ground-truth data or another reference. In Figure 4, the first and third rows are relit with estimated lighting conditions from other PandaSet log snippets. We added the reference real image in **Rebuttal Figure R5**. We note that the lighting variations of public driving datasets (e.g., PandaSet and nuScenes) are limited, therefore we also use third-party HDRs for evaluation (without a limited-FoV real-image reference). As discussed in Appendix E, it is an important future direction to collect extensive data from real-world urban scenes under diverse lighting conditions (e.g., repeating the same driving route under varying lighting conditions), which would be beneficial to the community (reducing the ambiguity and more helpful for lighting evaluation). **Q2: Random shadowed triangles when relighting due to artifacts in marching-cubes-extracted geometry? (Have the authors noticed these effects? Is neural rendering removing them? Is this due to the $\mathbf{I}_{\mathrm{render}|\mathbf{E}^{\mathrm{src}}} \rightarrow \mathbf{I}_{\mathrm{real}}$ step?)** \ **A2:** Yes, we noticed that random shadowed triangles are very common in $\mathbf{I}_{\mathrm{render}|\mathbf{E}^{\mathrm{src}}}$ due to non-smooth extracted meshes. This is more obvious for the nuScenes dataset, where the LiDAR is sparser (32-beam) and the capture frequency is lower (2Hz); there are many holes and random shadowed triangles. 
However, thanks to our image-based neural deferred rendering pass, which is trained with mixed sim-real data, our relighting network takes the original image and modifies the lighting, which removes many of those artifacts in the final relit images. We show two nuScenes examples in **Rebuttal Figure R6**. **Q3: Any artifact in the meshes around reflective surfaces due to simple modelling of view-independent diffuse colors?** \ **A3:** Yes, due to the simplification of view-independent diffuse color and base materials, we cannot accurately simulate the lighting effects around reflective surfaces (e.g., car windows). Because we leverage both LiDAR and camera data to build the meshes, we did not notice large degradation in the mesh quality for these reflective surfaces. We hope LightSim can leverage better intrinsic decomposition in the future to further enhance the performance. --- Rebuttal Comment 1.1: Title: Post-rebuttal Update Comment: Thanks for the detailed rebuttal and the inclusion of the source illuminations. I have some comments regarding some answers: Q1: There was a paper from Wieschollek et al. - "Learning Robust Video Synchronization without Annotations" which synchronized video of a commute over the duration of a year. I didn't find a direct download link, but maybe for future works, this might be helpful. Q2: My question was more in line with the loss formulation. Which term is responsible for removing these artifacts? It needs to learn to ignore the guidance of the rendered base material image. Is it the I_render|E_src -> I_real data pair? I think Sim-Real Training in the ablation Fig. 5 of the main paper refers to this supervision data pair. --- Reply to Comment 1.1.1: Title: Response to Post-rebuttal Update Comment: Thanks for the further discussions. We address the comments as follows. Q1: Thanks for the suggestion and reference! 
We believe leveraging ideas from Wieschollek et al. is helpful for future works to synchronize collected multi-pass lighting data (e.g., the same driving route under varying lighting or weather conditions, as shown in Wieschollek et al.). The synchronized paired data can be used as an evaluation reference and training supervision. We will add more discussions in Appendix E. Q2: Yes, to enhance the realism and remove artifacts in simulated data, we train the network to map $\mathbf{I}_{\mathrm{render}|\mathbf{E}^{\mathrm{src}}} \rightarrow \mathbf{I}_{\mathrm{real}}$, mapping any relit synthetic scene to its original real-world image given the estimated environment map. This reduces artifacts (holes and shadowed triangles), encourages the network to be physically grounded, and produces more realistic images (lower FID in Table A.2 and qualitative ablation in Fig. 5). Apart from sim-real training, the content-preserving loss $\mathcal{L}_{\mathrm{reg}}$ also plays an important role in reducing the effects of sim artifacts, as shown in Fig. 5. This is because we enforce the edges of the relit image to be consistent with the edges of the original real image. This helps the network to ignore/suppress the artifacts in the G-buffers, as those artifacts introduce undesired edges. We provided more ablation studies in Fig. A8 and Table A.2 (Supplementary D.2) to better understand the effects of each component in the neural deferred shading network.
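For intuition, the edge-consistency idea behind $\mathcal{L}_{\mathrm{reg}}$ described above (relit-image edges should match real-image edges, so spurious G-buffer edges get suppressed) can be sketched as a simple gradient-magnitude penalty. The paper's exact formulation is not reproduced here, so this NumPy snippet is only an illustration of the principle.

```python
import numpy as np

def edge_consistency_loss(relit, real):
    """Illustrative content-preserving term: L1 distance between the
    image-gradient magnitudes of the relit image and the original real
    image. A global lighting change (e.g. uniform brightening) leaves
    gradients almost untouched, while artifact edges that are absent
    from the real image are penalized."""
    def grad_mag(img):
        gy = np.diff(img, axis=0)[:, :-1]   # vertical finite differences
        gx = np.diff(img, axis=1)[:-1, :]   # horizontal finite differences
        return np.sqrt(gx ** 2 + gy ** 2)
    return float(np.abs(grad_mag(relit) - grad_mag(real)).mean())
```

Note that uniformly brightening an image changes its pixel values but not its gradients, so such a relight incurs a near-zero penalty, while a hole or a randomly shadowed triangle introduces edges the real image does not have and is penalized.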
Summary: This paper presents a system for decomposing urban outdoor driving scenes into estimated geometry and lighting components, which are then used as inputs to their deferred neural rendering workflow. The geometry is represented as a mesh that is extracted using Marching Cubes, from an optimized SDF volume, while the lighting is represented as an inferred HDR sky dome. A physically based renderer (Blender) is used to render deferred shading passes, which are then provided to a U-Net based "neural renderer" to produce the final output images. The target application is realistic relighting, to improve diversity of training data for vision-based perception systems in the driving domain. Strengths: The system presented in this paper involves significant engineering effort, successfully combining multiple learning-based modules. Their design of the neural deferred renderer module presents some novel extensions beyond prior work, such as the choice of conditioning the U-Net upon the environment map, as well as their loss formulation that includes terms for perceptual loss and edge loss. Qualitatively, results generally appear to be of a high quality, on par with or marginally better than existing and concurrent works. Weaknesses: The paper presents a system that draws inspiration from existing works [i.e. FEGR], which makes its own novelty/contribution hard to discern. It would be helpful for the authors to clarify what the unique contributions of this work are that distinguish it from other related works. Some terminology is inconsistent and/or using non-standard terms, e.g. use of all of the following terms "physically based rendering", "physics based rendering", and "physics rendering" which all share the same meaning, the latter two of which are not commonly used in existing literature, and therefore may either cause confusion, and in the worst case, be inaccurate. Another such example is "camera simulation". 
I recommend proofreading to improve this particular aspect of the writing. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Section 3.1, for the base material that is assigned to reconstructed objects, what are the specific material parameters used? In Section 3.1, how exactly is the separation between static background and dynamic actors achieved? On L240, why is it beneficial to use a U-Net to generate the final relit image from the deferred rendering passes? Given that the deferred rendering passes contain imperfections, does the U-Net …? Could you provide more details about the feature grids? These are mentioned, but not elaborated upon as to their importance, or what purpose they serve. Do these bear some relation to the multi-resolution hash grids from [Instant-NGP, Müller et al., 2022]? If so, I suggest including the proper citation, and if not, further elaboration would be ideal. Could you elaborate on why BEVFormer was chosen for the downstream perception training analysis? Could you clarify which components of the system are optimized per-scene, and which parts are optimized from a larger dataset? It seems that the geometry and initial LDR panorama are optimized per-scene, while the other modules are learned from large-scale data -- is this accurate? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The relighting results seem to be limited in which aspects of the appearance they affect. For example, re-rendered shadows look convincing, but the reflections on the cars themselves appear to largely be unaffected in terms of lighting direction / appearance of specular highlights. Estimation of materials is left as future work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful reviews and comments. We also appreciate the detailed suggestions to improve the presentation quality. We address the concerns as follows. **Q1: Discern novelty/contribution with FEGR** \ **A1**: As discussed in the related work section, FEGR is a **concurrent and independent** work (Sec 2 Line 98 to Line 106). It was first made publicly available on arXiv on April 6, **less than two months before the NeurIPS submission deadline of May 11**. According to NeurIPS policy, “Authors are not expected to compare to work that appeared only a month or two before the deadline.” Therefore, the existence of FEGR should not weaken the novelty of LightSim. Moreover, as stated in related work, while FEGR and LightSim are dealing with a similar task, the methods are different. FEGR aims to conduct inverse rendering (intrinsic decomposition) from a single scene with strong regularization and priors. In contrast, LightSim proposes a “novel data pair training scheme” [Reviewer 6GXP] and learns on many driving scenes to bypass the challenges of inverse rendering. We believe these two approaches are complementary and the combination of both can lead to a better system. Unlike FEGR, LightSim also produces more realistic relighting videos for **dynamic scenes** (reconstruction and relighting of dynamic actors in the original urban scenes, with inter-object lighting effects) and we also demonstrate the effectiveness of our approach for downstream detection tasks. We plan to move the discussion in related work into a separate paragraph to make it clearer. Any follow-up discussions are highly appreciated. **Q2: Unify the terminology** \ **A2:** Thanks for the suggestion. We will unify the terminology (“physically based rendering” and “camera simulation”) to avoid confusion. **Q3: Base material parameters used for reconstructed objects?** \ **A3:** Thanks for pointing it out. We use Blender PBR materials. 
Specifically, we use a Principled BSDF with vertex color as the base color. For all dynamic assets, we set metal=0.5, roughness=0.2, specular=0.5, clearcoat=1.0, clearcoat roughness=1.0. For the background asset, we set metal=0.0, roughness=0.7, clearcoat=0.0, specular=0.5. The other material parameters are initialized with Blender defaults. We will include the details in the revision. **Q4: Separation between static background and dynamic actors?** \ **A4:** We decompose the dynamic scenes into static background and dynamic objects (assumed to be rigid). Specifically, we use 3D bounding box annotations and separate feature grids to model the background and foreground. The neural scene reconstruction is a modified version of [75]. Please refer to Sec A.1 (supp. material) for more details. **Q5: Why use a U-Net to generate the final relit image from the deferred rendering passes?** \ **A5:** The U-Net learns to relight from large-scale generated synthetic data under different lighting conditions (low resolution and containing imperfections), meanwhile maintaining the context and quality of the original RGB image. By carefully designing the training scheme (sim-to-sim, sim-to-real pairs) and learning from large-scale data, our neural deferred rendering pass takes the original RGB image / rendered buffers as inputs and produces high-quality rendering results in a lighting-aware manner. **Q6: Details about feature grids and relation to Instant-NGP, details, citation?** \ **A6:** We adopt the Instant-NGP feature grids with hash encoding. Specifically, we set 16 levels of multi-level feature grids with a hash table size of $2^{19}$. The dimensionality of the feature vector stored in each level's entries is set to 2, and the resolution of the coarsest level is set to $16^3$. We apologize for missing the citation. We will add the citation and details in the revision. 
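For concreteness, the A6 hyperparameters can be laid out in a short sketch. Only the level count, hash-table size, feature dimension, and coarsest resolution are stated above; the finest resolution and the geometric per-level growth rule are assumptions borrowed from Instant-NGP's usual parameterization.

```python
import math

N_LEVELS = 16                 # multi-resolution levels (stated in A6)
HASHMAP_SIZE = 2 ** 19        # hash table entries per level (stated)
FEATURES_PER_ENTRY = 2        # feature dimension per entry (stated)
BASE_RESOLUTION = 16          # coarsest level, per axis (stated)
FINEST_RESOLUTION = 2048      # assumption; not given in the rebuttal

# Instant-NGP grows the per-level resolution geometrically between the
# coarsest and finest levels.
growth = math.exp(
    (math.log(FINEST_RESOLUTION) - math.log(BASE_RESOLUTION)) / (N_LEVELS - 1)
)
resolutions = [round(BASE_RESOLUTION * growth ** level) for level in range(N_LEVELS)]

# Upper bound on trainable encoding parameters across all levels.
total_params = N_LEVELS * HASHMAP_SIZE * FEATURES_PER_ENTRY  # 16,777,216
```

Under these assumed settings the encoding stays compact (about 16.8M parameters in total), which is one reason hash-grid encodings are attractive for per-scene reconstruction.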
**Q7: Why was BEVFormer chosen for the downstream perception training analysis?** \ **A7:** BEVFormer is a state-of-the-art camera-only 3D detection model for self-driving scenes, which has been used for comparison and autonomy evaluation in prior works [1, 2, 3, 4]. [1] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation. Liu et al., ICRA 2022. \ [2] RoboBEV: Towards Robust Bird’s Eye View Perception under Corruptions. Xie et al., ICCV 2023. \ [3] Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. Dong et al., 2023. \ [4] On the Adversarial Robustness of Camera-based 3D Object Detection. Xie et al., 2023. **Q8: Which components of the system are optimized per-scene, and which parts are optimized from a larger dataset? Geometry and initial LDR panorama are optimized per-scene, while the other modules are learned from large-scale data?** \ **A8:** Yes. The geometry (baked with view-dependent color) and LDR/HDR panorama are optimized / predicted per-scene. The other modules, including the panorama completion network, LDR2HDR lifting network, and neural deferred rendering network, are learned from large-scale data. **Q9: Reflections on the cars themselves appear to largely be unaffected in terms of lighting direction / appearance of specular highlights. Estimation of materials left as future work.** \ **A9:** We agree that better handling of specular highlights is an exciting research direction that can enhance LightSim. As depicted in Figure R1, this is an open problem for large-scale scene relighting. We hope the above responses address the reviewer’s concerns. We look forward to any follow-up discussions. --- Rebuttal Comment 1.1: Title: Looking forward to follow-up discussions! Comment: We thank the reviewer for taking the time to check our responses. We hope our answers and additional results address your concerns well. 
Specifically, - Q1/A1: We clarified that FEGR is a *concurrent and independent* work (made public less than two months before the deadline). - Q2/A2: We will unify the terminology. - Q3/A3, Q4/A4, Q6/A6: We explained the technical details and will clarify them in the revision. - Q5/A5: We explained why we need neural deferred rendering and the proposed training scheme. - Q7/A7: We justified the choice of BEVFormer. - Q8/A8: We clarified which components are optimized per-scene and which parts are learned from large-scale data. - Q9/A9: We agree better handling of specular highlights is an open, challenging, and exciting research direction (**Rebuttal Figure R1**). Please let us know if you have any additional or follow-up questions. We will be more than happy to clarify them. Any follow-up discussions are highly appreciated!
Summary: The paper proposes a method, LightSim, for recovering geometry, appearance and scene lighting for driving scenes, which enables downstream applications of scene editing and lighting editing. The method incorporates a learned sky dome estimator for hallucinating the original lighting from limited observations, as well as an image-based rendering module for rendering with novel lighting, using rendering proxies as input. The paper demonstrates more realistic light editing results compared to baseline methods in qualitative evaluations, as well as quantitative improvement in scores of perceptual quality and downstream perception tasks where the proposed method is used for training data augmentation. Strengths: [1] The method is generally novel, and presents reasonable improvements in results. The method takes advantage of the full array of sensor data including LiDAR, RGB, as well as GPS to facilitate reconstruction of scene geometry and lighting, including the sun. The paper also introduces a learned rendering model with rendering proxies and lighting cues as input, which is able to alleviate artifacts in rendering. [2] The paper demonstrates noticeable qualitative improvements when compared to baseline methods. More importantly, given the lack of ground-truth images under novel lighting, the paper is able to include indirect quantitative evaluation with perceptual scores and downstream tasks, to demonstrate that the method not only produces visually convincing results, but also benefits downstream tasks. Weaknesses: [1] Clarity. The paper is in general well-written and easy to follow. However, clarity and additional details have to be enhanced for a polished version. For example, - (L143) details on the representation of base materials for all assets; - (Sec. 3.2) details on how to acquire geometry for dynamic scenes? Is geometry estimated per-frame? If so, how to deal with temporal consistency? 
If not, how is the dynamic scene modeled in a NeRF-like framework for geometry reconstruction? - (L246-) What synthetic data is used? What are the specs of the training datasets? What is the training scheme? - In learning the image-based renderer on real scenes, the estimated sky dome lighting is needed from the previous stage in the pipeline. What if the estimated sky dome lighting is not perfect? Does that affect the learning of the renderer? [2] Evaluation. The paper is unique in that it leverages a collection of sensor data besides RGB for the task. In this sense, comparing the method to baselines which leverage RGB data only may not be fair. Moreover, is there any reason why the method is not compared against the SOTAs of [69] and [70]? Is it because of the lack of source code and difficulty to reproduce? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the above section for questions to be addressed. Without answers to those questions, it is difficult to fully evaluate the soundness of the method and results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact of the work is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful reviews and comments. We are glad that our work is recognized as “novel” with “noticeable qualitative improvements”. We address the concerns as follows. **Q1: Clarifications and additional details** \ **A1:** We thank the reviewer for the feedback to improve the presentation quality. We will revise the statements clearly and move more details to the main paper for clarity. **(1) Base materials for all assets:** \ We use Blender PBR materials. Specifically, we use a Principled BSDF with vertex color as the base color. For all dynamic assets, we set metal=0.5, roughness=0.2, specular=0.5, clearcoat=1.0, clearcoat roughness=1.0. For the background asset, we set metal=0.0, roughness=0.7, clearcoat=0.0, specular=0.5. The other material parameters are initialized with Blender defaults. We will include the details in the revision. **(2) Details on how to acquire geometry for dynamic scenes:** \ We decompose the dynamic scenes into static background and dynamic objects (assumed to be rigid). Specifically, we use 3D bounding box annotations and separate feature grids to model the background and foreground. The neural scene reconstruction is a view-independent version of [75]. Please refer to Sec A.1 (supp. material) for more details. **(3) Synthetic data used (Line 246). What are the specs of the training datasets? What is the training scheme?** \ We use our reconstructed digital twins to generate synthetic data under different lighting conditions as the paired supervision for neural relighting network training. The training dataset configuration and training scheme are described in detail in the supplementary pdf (Line 110 to 121). **(4) What if the estimated sky dome lighting is not perfect? 
Does that affect the learning of the renderer?** \ If the estimated sky dome is not perfect, the inference and training of the neural deferred rendering network will be affected, since the estimated sky dome is taken as the input during inference and we also use the estimated lighting to generate synthetic data. **Q2: Comparison to baselines that only leverage RGB information?** \ **A2:** We tried our best to compare with existing works for outdoor relighting and lighting estimation. Existing works [1, 2], however, are not designed for self-driving and do not fully leverage the available data (e.g., LiDAR, time and GPS data). We noticed that [1, 2] are also important baselines compared in the concurrent works FEGR [3] and UrbanIR [4]. Furthermore, we enhance the baseline [5] with the digital twins built by LightSim, so that it also leverages all available data. [1] Neural Radiance Fields for Outdoor Scene Relighting. Rudnev et al., ECCV 2022. \ [2] Self-supervised Outdoor Scene Relighting. Yu et al., ECCV 2020. \ [3] Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes. Wang et al., CVPR 2023. \ [4] UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video. Lin et al., arXiv 2023. \ [5] Enhancing Photorealism Enhancement. Richter et al., PAMI 2021. **Q3: Comparison with NLFE [69] and FEGR [70]** \ **A3:** Thanks for the suggestion. We compared with NLFE on the lighting estimation task and provided results in the supplementary material (Table A4 and Figure A9 in Sec D.4). In summary, LightSim achieves more accurate lighting estimation compared to NLFE. LightSim can also model the inter-object lighting effects compared to NLFE in the actor insertion application. For FEGR, we note it is a *concurrent and independent* work (Sec 2 Line 98 to Line 106). It was first made publicly available on arXiv on April 6, **less than two months before the NeurIPS submission deadline of May 11**. 
According to NeurIPS policy, “Authors are not expected to compare to work that appeared only a month or two before the deadline.” Additionally, since FEGR does not offer its source code, reproducing it within a tight deadline becomes challenging. We hope the above responses address the reviewer’s concerns. We look forward to any follow-up discussions. --- Rebuttal Comment 1.1: Title: Looking forward to follow-up discussions! Comment: We thank the reviewer for taking the time to check our responses. We hope our answers and additional results address your concerns well. Specifically, - Q1/A1: We clarified the technical details. - Q2/A2: We tried our best to compare with existing works for outdoor relighting and lighting estimation. We enhanced EPE to leverage all available data, as LightSim does. - Q3/A3: We compared with NLFE (Table A4 and Figure A9 in Sec D.4). For FEGR, we note it is a *concurrent and independent* work (less than two months). Please let us know if you have any additional or follow-up questions. We will be more than happy to clarify them. Any follow-up discussions are highly appreciated!
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful reviews and valuable comments. We are excited that the reviewers found our approach “novel” [**Reviewer 5Bwf, Reviewer 25CV, Reviewer ENZ6, Reviewer 6GXP**], and acknowledged our evaluation is “thorough” [**Reviewer 5Bwf**], the results are “good/high-quality” or have “noticeable improvements” over prior arts [**Reviewer 25CV, Reviewer ENZ6, Reviewer 6GXP, Reviewer G49E**]. In the following, we briefly summarize a few points. Please see individual responses for more details. **Novelty [Reviewer G49E, Reviewer ENZ6]** Our paper’s novelty lies in developing a neural lighting simulation system for self-driving, which is critical for thorough evaluation and more robust training of robot autonomy before safe deployment. To our knowledge, we are one of the first to perform relighting for dynamic urban scenes (SDV and other actors moving in the scenario). LightSim significantly outperforms the state of the art, producing high-quality photorealistic driving videos under a wide range of lighting conditions. We strongly believe LightSim is a critical and innovative step towards realistic and scalable lighting simulation for robotics. Through this paper, we also hope to convey the importance of leveraging available real data (digital twins) and propose a new regime of exploiting digital twins for lighting simulation. While some individual components (reconstruction for dynamic scenes [75], environment map in-painting [59] and LDR-HDR lifting [69], G-buffers to aid learning [52]) have been studied before, why they are used and how they are used are carefully designed for self-driving simulation. Specifically, different from existing works that conduct lighting estimation from a single image (limited FoV), which is ill-posed and challenging, we fuse all available data (i.e., all six cameras and sun angles based on time and GPS) to reduce ambiguity (see Table A4). 
To overcome the lack of real-world driving scenes captured under different lighting conditions, we propose a “novel data pair training scheme” [**Reviewer 6GXP**] that leverages the digital twins built from the real world to generate synthetic paired images under different lighting conditions. These generated diverse synthetic lighting variation data are then combined with real data to train the neural deferred rendering network. The resulting framework is generic, interpretable, and has various capabilities including actor insertion, removal, modification, and rendering from new viewpoints, all in a lighting-aware manner. We believe LightSim is not just a simple extension or integration of previous works. Also, exploiting existing algorithms to realize a novel idea does not mean there is no technical contribution [**Reviewer 5Bwf and 6GXP**]. We hope the reviewers, in particular **Reviewer G49E**, can acknowledge this. **Comparison to FEGR [Reviewer ENZ6, Reviewer 25CV]** FEGR is a *concurrent and independent* work (Sec 2 Line 98 to Line 106). It was first made publicly available on arXiv on April 6, **less than two months before the NeurIPS submission deadline of May 11**. According to NeurIPS policy, “Authors are not expected to compare to work that appeared only a month or two before the deadline.” Therefore, the existence of FEGR should not weaken the novelty of LightSim. Moreover, as clearly stated in related work, while FEGR and LightSim are dealing with a similar task, the methods are very different. FEGR aims to conduct inverse rendering (intrinsic decomposition) from a single scene with strong regularization and priors. In contrast, LightSim proposes a “novel data pair training scheme” [Reviewer 6GXP] and learns on many driving scenes to bypass the challenges of inverse rendering. We believe these two approaches are complementary and the combination of both can lead to a better system. 
LightSim also produces more realistic relighting videos for **dynamic scenes** and demonstrates the effectiveness in downstream detection. **Please see the attached pdf for all rebuttal figures (Figure R1-6)**. We hope our responses and additional results can address the concerns. We look forward to follow-up discussions. Pdf: /pdf/0db0df1d600e28b1ab66ed2d35aeee5b4843275b.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work extends a recent novel view synthesis approach for autonomous driving scenes with relighting and virtual object insertion with properly cast shadows. The proposed method is a system that consists of the following steps: i) geometry and albedo reconstruction from sensor data, ii) estimation of the environment map (inpainting the sensor data and lifting LDR to HDR), iii) neural deferred rendering that takes in the source image and relights it based on a new environment map. The proposed system enables relighting the original source images or novel views rendered from step (i). Apart from this, it also allows virtual object insertion of either reconstructed or synthetic assets. The proposed approach is thoroughly evaluated on the tasks of relighting and virtual object insertion on the PandaSet and KITTI datasets. Furthermore, it is used to simulate data for a downstream task (3D object detection), where it boosts performance when compared to using only real-world data (a subset of all scenes). In most experiments/evaluation metrics, the proposed method outperforms the selected baselines. Strengths: - This work is a well-designed system that efficiently builds upon prior work (UniSim). While the individual components are not very special, their combination is technically sound and leads to good results. - The experimental evaluation is really thorough, with several ablation studies and qualitative/quantitative results including evaluations on downstream tasks. - I agree with the authors that trying to recover perfect materials and geometry in AV scenes is a very challenging task that requires strong priors. Therefore, I really like the use of the neural deferred renderer. - The paper is well written and easy to follow, the motivation for the approach is very clear, and the design choices are well supported by the results/ablation studies. 
Weaknesses: From my perception, the following are the most important weaknesses of this work: - **Simplification of the reconstruction process**: During the reconstruction stage ((i) above) two strong simplifications are made which in my opinion prevent obtaining good results under challenging illuminations (strong directional light). First (if I understood this part correctly), the method aims to reconstruct the albedo of the scene by simply removing the directional dependence of the color MLP, but the supervision still comes from the full RGB images. Second, a single material (no information which?) is used for all the assets after the reconstruction. However, material differences in AV scenes are quite large (e.g. asphalt compared to metallic cars). The reconstructed albedo images are not shown in the submission (if they are, and I missed them, I am sorry), but I assume that in case of strong directional light the shadows simply get baked into the "albedo" representation. - **Temporal aspect**: The prediction of the relighted frames is done "independently" for each frame using a deferred neural renderer. This means that the temporal aspect is not considered and there is no guarantee of multi-view consistency. - **Recovering the env map**: The env map is recovered in two steps: i) the sensor data is projected onto a panorama and inpainted, ii) the LDR image is lifted to HDR. However, if the sun is not observed in the sensor data, I assume that it is very difficult for the inpainting network to predict its location (this is actually very ill-posed as set up). While GPS information can help with the sun location, it cannot predict the occlusion by clouds. I think that integrating the optimization of the env map into the first step and guiding the sun location/intensity with the shadows in the scene would be more principled. - **Somewhat limited novelty**: This is actually not a major weakness in my mind, but I still wanted to bring it up. 
The individual components of the proposed system are in my mind not very novel (e.g. the reconstruction is from UniSim, env in-painting and lifting to HDR have often been done before). This being said, I do think that the whole system and the combination of these modules are sufficiently novel, but probably not too far from the border of acceptable/expected novelty. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Comments**: L7: I would suggest softening the claim that the reconstructed assets are *"relightable digital twins"*. The two simplifications mentioned above are in contradiction with this statement. I think that it would be very beneficial to show some results of the UniSim albedo reconstruction. Do the shadows get baked into the albedo, and can the neural renderer then recover from this? On a similar note, it would be good to see some results of the physics-based rendered images with target env maps that are used to supervise the neural renderer (Eq. 5). Also, the I_rendered|E_src would be interesting to see. Most of the results that are shown are based on source images captured on cloudy days or without strong directional light. While I realize that the results on very sunny days or even with sun glare will be worse, I would like to see how gracefully the proposed method degrades. Including some failure cases is always a plus in my mind. **Questions** - L137: relating to the comment in the *weaknesses*, how is the diffuse color supervised? Simply using the full RGB of the source images? - In the environment modeling, why is the sky intensity a vector quantity? Is it a full image representation? - In the physical rendering the buffers are mentioned to be 12-dim, but there is only position (3), normal (3), depth (1), ambient occlusion (1?). What do the other dimensions represent? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have extensively described the limitation and societal impacts of the proposed work in the supplementary material. I also appreciated the information about the GPU hours and proper acknowledgment of the data sources. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are excited that LightSim is recognized as a “well-designed system” which is “technically sound” with “really thorough experimental evaluation” and “the design choices are well supported”. We are also glad that the reviewer believes the motivation is “very clear” and likes the idea of the “neural deferred renderer”. We now address the comments. **Q1: Simplification of reconstruction (i.e., diffuse color, material)** \ **A1:** As mentioned in our limitations (supp. L343-345), we agree with the reviewer that LightSim makes simplifications in the reconstruction process and that improvements in intrinsic decomposition can further enhance performance. Recent concurrent works make steps towards better decomposition (FEGR [1], UrbanIR [2]), but it is still a challenging open problem to recover a perfect decomposition of materials and light sources for large urban scenes (see **Figure R1-Right** for intrinsic decompositions of recent works - the recovered materials bear little resemblance to semantic parts in the original scene). Additionally, these recent relighting works [1-4] also have shadows baked into the recovered albedo (**Figure R1-Left**). **Figure R2** includes examples of the recovered albedo for four scenes from LightSim, and we also have baked shadows (mentioned in limitations, supp. L339-342). As discussed in L221-226, our novelty is in leveraging neural deferred rendering to overcome the limitations of purely physics-based rendering when the decomposition is not perfect. This allows us to generate better relighting results than prior works that have imperfect decompositions. It is an exciting future direction to incorporate better material decomposition along with neural deferred rendering for improved relighting. **Q2: Temporal consistency** \ **A2:** We refer the reviewer to the relighting / shadow editing videos (3min 08s to 3min 51s) in our supplementary video. 
While we do not guarantee temporal consistency, LightSim can produce temporally consistent lighting simulation videos in most cases; therefore we did not explore other techniques to enforce temporal consistency explicitly. We believe the temporal consistency in neural deferred rendering comes from temporally and multi-view consistent inputs during inference (real image, G-buffers) as well as our combination of simulated and real paired relighting data during training. We believe further improving the temporal consistency is an interesting direction for future work. **Q3: Recovering the env map** \ **A3:** Thanks for the suggestion. We believe optimization of the environment map would be an exciting direction to improve relighting. We explain our design choice in the following two aspects. (a) We use a feed-forward network for lighting estimation, which is more efficient and can benefit from learning from a larger set of data. In contrast, the optimization paradigm is more expensive since it requires per-scene optimization, but it might be more accurate, as mentioned by the reviewer. (b) The ill-posed nature of lighting estimation and the extreme intensity range make inverse rendering challenging for outdoor scenes. Optimization of the environment map requires a differentiable renderer and high-quality geometry/material to achieve good results. The existing / concurrent state-of-the-art works [1, 3] still cannot solve the problem accurately (e.g., the cloud occlusion example mentioned by the reviewer), as shown in Rebuttal Figure R1 (right bottom). **Q4: Novelty** \ **A4:** See **General Response**. **Q5: Examples of view-independent reconstruction, I_rendered | E_src, strong directional light - failure cases** \ **A5:** Thanks for the suggestions. We included those examples in Rebuttal Figure R2 (view-independent reconstruction), R3 (blender renderings) and R4 (failure cases). We will include those results and more analysis in the next version. 
**Figure R2:** \ We provide examples of view-independent reconstruction for LightSim. Since we adopt the original RGB as the supervision to train the neural fields that map 3D location to the diffuse $\mathbf{k}_d$, the diffuse reconstruction results have the shadows baked in. **Figure R3:** \ The original RGB, diffuse reconstruction, $\mathbf{I}_\mathrm{render} | \mathbf{E}^{\mathrm{src, tgt}}$, relit image and lighting reference are provided. We generate paired synthetic data under different lighting conditions and design a mixed sim-real training scheme. **Figure R4:** \ We highlight two examples of LightSim applied to scenes with strong directional lighting and high sun intensity. Each row shows the shadow editing / relighting results under 4 different sun angles of the target environment map. In Row 1, LightSim cannot fully remove source shadows in bright and sunny conditions (as mentioned in the limitations) due to the baked shadows in the view-independent reconstruction, but with neural deferred rendering, LightSim still generates reasonable results. In Row 2, we depict a source image with high sun intensity and glare relit to a new target lighting. It is challenging to remove the sun glare and alter the over-exposed regions in this setting, but we can still apply some relighting effects to the cars and buildings in the scene (see arrows). **Q6: Why is the sky intensity a vector quantity?** \ **A6:** The sky intensity contains 3 channels for R, G, B. **Q7: G-buffer dimension: position (3), normal (3), depth (1), ambient occlusion (1)** \ **A7:** Yes, the buffers are 8-dimensional instead of 12 (in the implementation, we loaded depth and ambient occlusion as 3-dim. images for simplicity). We will revise it in the next version. **Q8: "*relightable digital twins*" claim** \ **A8:** We will revise the term to “*lighting-aware digital twins*” instead to indicate that the reconstructions also include scene lighting estimations. [1] FEGR. Wang et al., 2023. \ [2] UrbanIR. 
Lin et al., 2023. \ [3] NeRF-OSR. Rudnev et al., 2022. \ [4] Self-OSR. Yu et al., 2020. --- Rebuttal Comment 1.1: Title: Looking forward to follow-up discussions! Comment: We thank the reviewer for taking precious time to check our responses. We hope our answers and additional results address your concerns well. Specifically, - Q1/A1 and Q3/A3: Thanks for the suggestion. We explained the reasons for the simplifications of the digital twin reconstruction. Better inverse rendering (base color, material, and lighting decomposition) is a promising direction that merits future study (see **Rebuttal Figure R1**). - Q2/A2: While we do not guarantee temporal consistency, LightSim can generate temporally consistent lighting simulation videos in most cases. - Q4/A4: We clarified the novelty of LightSim. - Q5/A5: We provided additional results in **Rebuttal Figure R2 (view-independent reconstruction), R3 (blender renderings) and R4 (failure cases)**. - Q6/A6 and Q7/A7: We clarified the details of the sky intensity vector and the G-buffer dimension. - Q8/A8: We will change the term "*relightable digital twins*" to "*lighting-aware digital twins*". Please let us know if you have any additional or follow-up questions. We will be more than happy to clarify them. Any follow-up discussions are highly appreciated!
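As a side note on the G-buffer dimension clarified in Q7/A7 (position 3 + normal 3 + depth 1 + ambient occlusion 1 = 8 channels), the layout amounts to a simple channel concatenation. The sketch below is illustrative only; the array names, the dummy zero data, and the tiny resolution are assumptions, not taken from the LightSim implementation.

```python
import numpy as np

H, W = 4, 4  # tiny illustrative resolution

# Per-pixel geometry buffers rendered from the reconstructed scene (dummy data).
position = np.zeros((H, W, 3))  # 3D location, 3 channels
normal = np.zeros((H, W, 3))    # surface normal, 3 channels
depth = np.zeros((H, W, 1))     # 1 channel
ao = np.zeros((H, W, 1))        # ambient occlusion, 1 channel

# The 8-dimensional G-buffer input to a neural deferred renderer.
gbuffer = np.concatenate([position, normal, depth, ao], axis=-1)
print(gbuffer.shape)  # → (4, 4, 8)
```

Loading depth and ambient occlusion as 3-channel images, as the authors mention doing for simplicity, would make the same concatenation 12-dimensional.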
SEENN: Towards Temporal Spiking Early Exit Neural Networks
Accept (poster)
Summary: This paper considers adaptively determining the inference time steps of spiking neural networks to improve the tradeoff between accuracy and time. Two methods are proposed. The first one uses confidence score thresholding. The second one introduces an additional policy network to predict the number of timesteps via reinforcement learning. Experiments show the effectiveness of the proposed methods for direct SNN training and ANN-SNN conversion methods. Strengths: 1. This paper considers early exit of spiking neural networks, which can effectively improve the efficiency of SNNs. The idea of dynamically determining inference time steps based on the difficulty of inputs is interesting. 2. Extensive experiments on static and neuromorphic datasets as well as qualitative assessment are conducted. Weaknesses: 1. Dynamic inference time steps of SNNs, especially based on confidence scores, were also explored by (possibly concurrent) recent works [1,2], which could be discussed. The idea of confidence scores is very simple and straightforward. 2. For SEENN-II, an additional policy network is required for inference, and it is unclear how this can be deployed. From the descriptions (and code), the policy network is an ANN rather than an SNN, so it is not compatible with the main SNN for hardware deployment. This poses challenges and also raises the question of why such hybrid architectures are considered. Additionally, SEENN-II seems not to be flexible, i.e., after training it cannot trade off between accuracy and time/energy. 3. From the descriptions, it is not clear enough whether the energy consumption estimation of SEENN-II considers the policy network. This should be included and discussed in more detail. For example, on ImageNet, is the policy network an ANN ResNet-34 (as shown in the code)? Then it may consume more than the SNN part. The energy result on ImageNet is missing. [1] Wu, D., Jin, G., Yu, H., Yi, X., & Huang, X. (2023). 
Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff. arXiv preprint arXiv:2301.09522. [2] Li, C., Jones, E., & Furber, S. (2023). Unleashing the Potential of Spiking Neural Networks by Dynamic Confidence. arXiv preprint arXiv:2303.10276. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Can the authors discuss in more detail, conceptually, how the inference of (deep) SNNs (with small time steps) is determined by confidence signals from the top layer? Note that the common time steps of SNNs simply refer to time steps for each layer and do not consider propagation across layers. However, if we consider asynchronous neurons, e.g. on neuromorphic hardware, it also takes time to propagate signals across layers, and the actual delay may be the layer number plus the time steps per layer. For deep SNNs (e.g. >18 layers in this paper), it takes considerable time to obtain the confidence score from the top layer, which may be longer than the small time steps per layer. How, then, can it effectively control such small time steps? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our article and providing constructive feedback. We’d like to reply to your concerns in detail. Q1: Dynamic inference time steps of SNNs were also explored by (possibly concurrent) recent works [1,2], which can be discussed. A1: Thank you for pointing out the concurrent works [1] and [2]. We were unaware of these papers during our manuscript preparation, and we appreciate the opportunity to clarify the distinctions between our work and theirs. While [1] and [2] concentrate on converted SNNs with early exit using post-training methods, our SEENN approach is applied to both conversion and direct training of SNNs. Their methods rely on metrics like spike differences and dynamic confidence scores, whereas we utilize confidence score metrics. Beyond the post-training approach, we introduce a training-aware method (SEENN-II) that leverages reinforcement learning to optimize early exit performance. This innovation enhances accuracy within the same number of timesteps. Our work provides a comprehensive empirical comparison with the existing SNN literature, encompassing both conversion and direct training methodologies. We acknowledge the relevance of [1] and [2] and will incorporate references to these papers in our revised manuscript. This inclusion will enrich the context and highlight the unique contributions of our SEENN approach. Q2: For SEENN-II, an additional policy network is required for inference, and it is unclear how this can be deployed. Additionally, SEENN-II seems not to be flexible. A2: Thanks for the question. The choice of the policy network architecture is open to discussion. Technically, any neural network would suffice for this role. Indeed, if we used two SNNs it might have better hardware compatibility. Then, the question becomes whether we should use early exit on the policy network as well, and so on. 
We have avoided this recursive paradox and used an extremely small ANN (ResNet-8 with far fewer channels), which only occupies 0.5% of the FLOPs of ResNet-19. Meanwhile, we would also like to point out existing work that involves both ANNs and SNNs to enhance representation ability and efficiency. It is possible to use the two types of neural networks on edge devices collaboratively, e.g. [3]. As for the tradeoff problem, we agree with this notice. Here, we want to clarify that training-aware approaches for SNNs have always been fixed to certain timesteps. To be more specific, in the direct training of SNNs, all existing works report the test accuracy when using the same $T$ as in training. Even though they can test under fewer timesteps, their reported accuracy for different numbers of timesteps always uses different networks trained with the corresponding numbers of timesteps. We argue that this is a common problem for all training-aware approaches. Post-training algorithms, like our SEENN-I and conversion methods, can flexibly trade off between accuracy and time. In conclusion, our SEENN-I and SEENN-II align with existing SNN methods in terms of practicality. Q3: From the descriptions, it is not clear enough whether the energy consumption estimation of SEENN-II considers the policy network. The energy result on ImageNet is missing. A3: We included all costs from the policy network, and in line 277 we described that “Meanwhile, the policy network in SEENN-II only brings marginal effect and does not impact the overall inference speed and energy cost”. This is due to the extremely efficient design of the policy network architecture. To measure the energy, we used the same energy per operation times the number of operations as in [27], where the ANN policy network architecture uses multiplications and additions as well. 
In our code, we use ResNet-8 with 8x fewer channels (compared to ResNet-19) for the policy network, occupying 0.5% of the FLOPs of ResNet-19. Hence, the actual cost induced by the policy network has been made as minuscule as possible. Here, we report the energy results on the ImageNet dataset in the rebuttal PDF file. Q4: Can the authors discuss more about conceptually how to determine the inference of (deep) SNNs (in small time steps) with confidence signals from the top layer? How can it effectively control such small time steps? A4: This is a good question. To the best of our understanding, the reviewer is asking about an asynchronous pipelined inference implementation, where later timesteps start getting processed before the current timestep reaches the top layer, causing unnecessary energy waste. First, we would like to clarify that not all hardware devices implement this technique, since it requires a lot of area to map the full network to the hardware. For example, Loihi2 [4] only has 8192 neurons in one core, which is not enough for an 18-layer network. Second, if we have to use SEENN on this type of hardware, we can avoid unnecessary energy waste with several methods. (1) Instead of sending the next timestep of the current input to the pipeline, we can send the next input data. The next timestep of the current input will be processed only when the confidence score is computed. We put a figure in the rebuttal PDF to show this scheme. (2) Determine the confidence score at early layers. This can be done by training a classifier on the early layers (similar to ANN early exit) and using the signal from early layers to determine the exit timesteps. (3) Predict timesteps before the inference, as we did in SEENN-II. Q5: The authors do not discuss limitations and societal impact. A5: We apologize for not mentioning the limitations and broader impact. Please check our general response. Reference: [3] Zhao, Rong, et al. 
"A framework for the general design and computation of hybrid neural networks." Nature communications 13.1 (2022): 3427. [4] Davies, Mike. "Taking neuromorphic computing to the next level with Loihi2." Intel Labs’ Loihi 2 (2021): 1-7. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses. Most of my questions are addressed and I will raise my score. Some additional comments: 1. Directly trained SNNs can be tested under different timesteps for trade-off and some previous works reported such accuracy [1,2]; 2. The design to send the next input data to the pipeline requires frequent memory exchange and additional memory (e.g., for membrane potentials), which can cause much additional energy consumption. [1] Temporal efficient training of spiking neural network via gradient re-weighting. ICLR 2022. [2] Online training through time for spiking neural networks. NeurIPS 2022. --- Reply to Comment 1.1.1: Title: Authors reply Comment: We'd like to thank reviewer Sp8z for his appreciation and genuine discussion of our work. We'd like to discuss the two comments raised by reviewer Sp8z. (1) *Directly trained SNNs can be tested under different timesteps for trade-off and some previous works reported such accuracy [1,2]* Reply: Yes, technically directly trained SNNs can be tested with other numbers of timesteps. Nonetheless, we emphasize that these two works report did not report those accuracies when they are compared against other paper. For example, TET[1] has shown its ability to adjust $T$ in Figure 4, but the model trained with $T=4$ only gets 70.5% accuracy evaluated at $T=2$. However, they report 72.87% accuracy in their main table (Table 3). OTTT[2] also has a huge accuracy gap (7%) between $T=2$ and $T=6$, and they did not compare $T=2$ results with their current state-of-the-art methods like tdBN, TET. 
Having said that, our emphasis is that the results we presented in Table 1 are evaluated under a fair setting relative to existing methods. (2) *The design to send the next input data to the pipeline requires frequent memory exchange and additional memory (e.g., for membrane potentials), which can cause much additional energy consumption.* Reply: We agree with the reviewer on this matter. Let's formulate this entire question from the beginning. In such a pipelined inference architecture, we can describe the overall latency as $$Latency(T) = C + \Delta (T-1),$$ where $C$ is a constant representing the latency to finish all layers' computation and $\Delta$ is the latency interval between any two timesteps. As the reviewer suggested, if we have a very deep network (i.e. $C$ is large) with a low number of timesteps (i.e., max $T$ is small, for example 4), SEENN-I cannot save much energy. This is true. However, the problem is that in such cases, any timestep reduction is not meaningful, since $Latency(4)-Latency(1)$ is only of magnitude $3\Delta$, which is a lot lower than $C$. One may just use the full timesteps to utilize the full performance of SNNs. If the max $T$ is a lot higher, for example $>500$, which is often the case for this type of hardware, timestep reduction can bring a significant energy reduction. The ratio of energy saving is $$Ratio = \frac{C + \Delta (\hat{T}-1)}{C+\Delta(T-1)},$$ where $\hat{T}$ is the average number of timesteps over all test samples in SEENN-I. To summarize, the relationship between $C$, $\Delta$, and max $T$ will impact the overall saving ratio. The choice of $C$ is a hardware design problem and decides the upper limit of the acceleration ratio in SEENN-I or other timestep reduction work. In a few extreme cases ($C \gg \Delta \times T$), timestep reduction brings a limited advantage. 
This hardware design problem is somewhat beyond the scope of our work as it impacts the common algorithmic aspects in SNNs like network architecture and the choice of $T$.
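The latency model discussed above can be made concrete with a small numeric sketch. The values of $C$, $\Delta$, and the timestep counts below are illustrative placeholders, not measurements from the paper or any hardware.

```python
def latency(C, delta, T):
    """Pipeline latency model: constant fill time C plus an interval
    delta between consecutive timesteps, Latency(T) = C + delta*(T-1)."""
    return C + delta * (T - 1)

def saving_ratio(C, delta, T_avg, T_max):
    """Fraction of the full-timestep latency that remains when the
    average exit timestep under early exit is T_avg."""
    return latency(C, delta, T_avg) / latency(C, delta, T_max)

# Deep pipeline with few timesteps (C >> delta * T_max):
# early exit barely changes the total latency.
print(saving_ratio(C=100.0, delta=1.0, T_avg=2.0, T_max=4.0))

# Long-horizon setting (T_max >> 1), where C is amortized:
# early exit removes most of the latency.
print(saving_ratio(C=100.0, delta=1.0, T_avg=100.0, T_max=500.0))
```

The two printed ratios (close to 1 in the first case, roughly a third in the second) illustrate the rebuttal's point that the relationship between $C$, $\Delta$, and max $T$ bounds the achievable saving.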
Summary: This paper proposes an accuracy-efficiency tradeoff method based on adjusting the number of timesteps in SNNs, which is new and interesting. The authors accomplish this idea with two methods that use confidence score thresholding and reinforcement learning. These methods can be applied to both direct SNN training and ANN-SNN conversion, and the results show that these methods can save energy greatly with negligible accuracy decrease. I like this work; it has potential and would provide a new direction for following SNN work. Strengths: 1. Well-written, easy to read. 2. The idea of adjusting the number of timesteps to balance accuracy-efficiency is new and interesting. 3. The method is simple and easy to follow. 4. Experimental results are really good. The method can keep similar performance while reducing cost. Weaknesses: To show the effectiveness of the method, similar timesteps for other methods should be provided, for example, 2 or 3 timesteps for TET on ImageNet. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Can this method be used with other backbones, like transformers? 2. What is the performance of the method using the vanilla CE loss? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: 1. I find no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback on our work. Please check our response to your questions and concerns. Q1: To show the effectiveness of the method, similar timesteps for other methods should be provided, for example, 2 or 3 timesteps for TET on ImageNet. A1: Thanks for the suggestion. We report the TET SEW-ResNet-34 (denoted as Static SNN) accuracy from T=1 to T=4 on the ImageNet dataset as well as our SEENN accuracy. The results are shown in the table below.

| Method | T | Accuracy |
|------------|------|----------|
| Static SNN | 1 | 60.78 |
| Static SNN | 2 | 65.74 |
| Static SNN | 3 | 67.11 |
| Static SNN | 4 | 68.00 |
| SEENN-I | 1.66 | 66.20 |
| SEENN-I | 2.35 | 67.99 |
| SEENN-II | 1.79 | 67.48 |

Q2: Can this method be used with other backbones, like transformers? A2: Yes, our method can be seamlessly integrated into any SNN backbone with any number of timesteps. For example, we train a SpikeFormer [1] on the CIFAR10 dataset and demonstrate the improvement from our SEENN-I in the following table.

| SpikeFormer | Acc. | SEENN SpikeFormer | Acc. |
|-------------|-------|-------------------|-------|
| T=1 | 89.39 | T=1.23 | 93.47 |
| T=2 | 93.96 | T=1.35 | 93.91 |
| T=3 | 94.30 | T=1.82 | 94.28 |
| T=4 | 94.51 | T=2.11 | 94.49 |

Q3: What is the performance of the method using the vanilla CE loss? A3: This is a good question. When using the vanilla CE loss, the performance at early timesteps will inevitably decrease, thus harming the early exit performance. We refer you to our reply to Reviewer ZThW, where we show the SEENN performance on a pretrained SNASNet checkpoint, which is trained with the vanilla CE loss function. It can be found that the acceleration of SEENN is still effective. However, the reduction in timesteps is relatively smaller, presumably caused by a higher AET value (i.e., Eq. 4 in the main manuscript).
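The confidence-score thresholding underlying SEENN-I can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the `step_logits` toy model, the threshold value, and the accumulation-then-softmax scheme are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax for a 1-D logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def seenn_i_exit(step_logits, threshold=0.95, t_max=4):
    """Accumulate per-timestep outputs and exit as soon as the max
    softmax probability of the accumulated logits crosses the
    confidence threshold (SEENN-I-style early exit)."""
    acc = np.zeros_like(step_logits(1), dtype=float)
    for t in range(1, t_max + 1):
        acc += step_logits(t)      # output produced at timestep t
        conf = softmax(acc).max()  # confidence of the running prediction
        if conf >= threshold:
            break                  # easy input: stop early
    return t, int(np.argmax(acc))

# Toy deterministic model whose class-2 evidence accumulates over time.
toy = lambda t: np.array([0.1, 0.2, 2.0])
print(seenn_i_exit(toy))  # → (2, 2): exits at timestep 2, predicts class 2
```

Raising the threshold (e.g. to 0.9999) makes the same toy input run for all `t_max` timesteps, mirroring how the threshold trades accuracy against average timesteps in SEENN-I.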
Summary: This paper introduces a new inference scheme for spiking neural networks (SNNs): early exit on the time dimension. To make sure that relatively easy images can be predicted with a smaller number of timesteps, this work builds two frameworks to identify the appropriate timestep to minimize latency while maintaining decent performance. The first method is an on-the-fly approach, and the second one is more complicated and requires finetuning. The authors test their SEENN on various recognition benchmarks and obtain quite good accuracy with an even lower number of timesteps compared to existing papers. Strengths: 1. Compared to other SNN papers that focus on either conversion or training, what this work studies is a universal approach (i.e., time) that can be applied to any type of SNN. 2. The methodology is technically sound and covers different user resources (whether finetuning or just plug-and-play). 3. The experimental results are thorough and solid and verify the effectiveness of this approach. Weaknesses: 1. There is no empirical evidence showing that the confidence score will increase along with the number of timesteps. The authors are suggested to add a figure to show the confidence score evolution. The above limitation has been explained in the rebuttal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can SEENN-II be applied to the conversion approach? If not, why is that? Please elaborate. I suggest that the response to this question be discussed in the revision. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see above. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback on our work. Please check our response to your questions and concerns. Q1: There is no empirical evidence showing that the confidence score will increase along with the number of timesteps. The authors are suggested to add a figure to show the confidence score evolution. A1: Thanks for your suggestion. We agree with the reviewer that a visualization of the confidence score would help readers understand our SEENN-I mechanism. Here, we draw the confidence score distribution on test images. We split the range into [0, 0.9999] and [0.9999, 1.0]; otherwise, the figure would be too distorted. We put the figure into the rebuttal PDF file; please refer to it there. It can be clearly observed that the number of test samples moving into [0.9999, 1.0] increases as we increase the number of timesteps. This means that the network prediction becomes more confident as we increase the number of timesteps. Moreover, we also measure the mean/variance of the confidence scores over the test batches, as shown below. We find that the mean/variance continues to increase/decrease as we increase the number of timesteps. | Timesteps | 1 | 2 | 3 | 4 | |-------------|-------|-------|-------|-------| | CS Mean | 0.816 | 0.923 | 0.953 | 0.964 | | CS Variance | 0.059 | 0.024 | 0.014 | 0.010 | Q2: Can SEENN-II be applied to the conversion approach? If not, why is that? Please elaborate. A2: Thanks for this question. In SEENN-II, we jointly optimize the policy network and the SNN. If we apply SEENN-II to converted SNNs, we can focus on training the policy network only and keep the converted SNN frozen. However, substantial computational resources and the whole training dataset are required to train the policy network. This setup breaks the assumption that ANN-SNN conversion is used when computational resources and training data are limited. 
Therefore, we did not apply SEENN-II to ANN-SNN conversion methods in our initial draft because we think the comparison is unfair. To demonstrate that SEENN-II can work effectively with a converted model, we’d like to provide a result for a QCFS-based (Bu et al., 2022b) converted ResNet-18 on the CIFAR-10 dataset. We train a policy network, and the average predicted number of timesteps is 1.35, achieving 94.43% accuracy. The improvement is consistent with the results shown in our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the responses; my concerns have been resolved. I appreciate the additional experiment and the discussion in the above response; the corresponding content should be added to the revision if accepted.
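The SEENN-I mechanism described in A1 (run the SNN timestep by timestep and exit once the max-softmax confidence crosses a threshold) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation; the `accumulate_logits` interface and the threshold name `alpha` are assumptions for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def early_exit_predict(accumulate_logits, max_T, alpha):
    """SEENN-I-style loop (hypothetical interface): `accumulate_logits(t)`
    returns the SNN's running logits after t timesteps; exit as soon as the
    max-softmax confidence reaches the threshold alpha."""
    for t in range(1, max_T + 1):
        probs = softmax(accumulate_logits(t))
        if probs.max() >= alpha or t == max_T:
            return int(probs.argmax()), t

# toy accumulator whose prediction grows more confident with t:
# confidence is ~0.79 at t=1 and ~0.96 at t=2, so with alpha=0.95 it exits at t=2
pred, t_used = early_exit_predict(lambda t: np.array([0.0, 2.0 * t, 0.0]),
                                  max_T=4, alpha=0.95)
```

The threshold plays the role of the $\alpha$ hyperparameter discussed in the global rebuttal: raising it trades more timesteps for higher confidence.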
Summary: This paper proposes a novel manner of improving the efficiency of spiking neural networks. Specifically, SEENN is proposed to determine the appropriate number of timesteps, thereby reducing the inference time cost. The proposed method is evaluated on CIFAR-10 and ImageNet, achieving good performance. Strengths: + The main contribution of this paper is to treat the number of timesteps as a variable in the SNN model. Accordingly, several variations of the early-exit manner are designed for a better accuracy-efficiency tradeoff. + The paper writing and organization are good. Weaknesses: - The proposed method for determining the best timesteps is not novel. The first manner simply sets the right number with the confidence score, while the second introduces a policy network for better prediction. Although the above two manners could work, they would inevitably introduce extra computational cost or a human-based prior, which is not desirable. In addition, the proposed SNN model still relies heavily on ANN backbones, e.g., ResNet or VGGNet. - As for hardware efficiency, the adopted NVIDIA V100 may not be appropriate, as it is not designed for SNN applications. - The ablation study is not clear; it seems SEENN adopts different backbones from the SNN baselines. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your efforts in reviewing our article and providing constructive feedback. We’d like to reply to your concerns in detail. Q1: The proposed method for determining the best timesteps is not novel. The first manner simply sets the right number with the confidence score, while the second introduces a policy network for better prediction. Although the above two manners could work, they would inevitably introduce extra computational cost or a human-based prior, which is not desirable. In addition, the proposed SNN model still relies heavily on ANN backbones, e.g., ResNet or VGGNet. A1: Thanks for your question. Our proposed methodology aims to minimize the extra computational cost of SEENN. For SEENN-I, calculating the confidence score is quite fast, involving only a softmax and a max function (Eq. 5), which is negligible compared to the whole network inference. For SEENN-II, we use an extremely tiny network (ResNet-8) with 8x fewer channels than ResNet-19 (see details in our Appendix C), which occupies only 0.5% of the FLOPs of ResNet-19. In our experiments, we compared the energy/latency with static SNN baselines and found that SEENN incurs much lower hardware costs even with these extra computations. As for the human-prior concern, we should clarify that this is a problem shared by **all** SNN works. Under the static-timestep setting, the number of timesteps of the SNN must be pre-set before inference. As such, a human prior is needed to trade off efficiency against accuracy by changing the number of timesteps. In our SEENN, this is analogous to setting $\alpha$ or $\beta$ to balance efficiency and accuracy. For the architecture concern, we’d like to emphasize that our work adjusts the timesteps of any SNN, agnostic to the SNN architecture used. For example, we can use SNASNet [1] on CIFAR10, a work that searches for unique SNN network architectures. 
We apply our early-exit mechanism to it. The results are shown below: | SNASNet | | SEENN SNASNet | | |---------|-------|----------------|-------| | T=1 | 73.69 | T=1.39 | 84.59 | | T=2 | 84.79 | T=1.99 | 90.85 | | T=3 | 90.87 | T=2.36 | 92.21 | | T=4 | 92.66 | T=2.76 | 93.05 | | T=5 | 93.39 | T=3.14 | 93.37 | In addition to NAS-based networks, we refer you to our reply to Reviewer DV4t, where we provide results on a Transformer-based network architecture. Q2: As for hardware efficiency, the adopted NVIDIA V100 may not be appropriate, as it is not designed for SNN applications. A2: Thanks for your question. It is true that NVIDIA GPUs are not optimized for SNNs; therefore, most SNN acceleration work cannot be tested on GPUs. However, in this paper, we show that reducing the number of timesteps of SNNs leads to acceleration on any inference platform, including GPUs that are not even optimized for SNNs. Our SEENN can be inherently accelerated on other platforms like neuromorphic hardware and in-memory computing (IMC) architectures. Here, we show the implementation of SEENN on a publicly available IMC architecture simulator [2]. The experiments use the ResNet-19 architecture on the CIFAR10 dataset. Besides the accuracy metric, we use the relative Energy-Delay Product (EDP) with respect to the static SNN at 4 timesteps as the hardware metric. The results are shown in the table below. It can be found that our SEENN also achieves significant acceleration on other hardware platforms. | Method | T | Accuracy | Relative EDP (%) | |------------|------|----------|------------------| | Static SNN | 1 | 95.01 | 9.6% | | Static SNN | 2 | 95.64 | 30.0% | | Static SNN | 3 | 96.26 | 57.3% | | Static SNN | 4 | 96.46 | 100% | | SEENN-I | 1.09 | 96.07 | 11.8% | | SEENN-I | 1.20 | 96.38 | 13.8% | Q3: The ablation study is not clear; it seems SEENN adopts different backbones from the SNN baselines. A3: We apologize for any confusion in the ablation study. 
We do use identical backbones when comparing SNNs and SEENNs; therefore, the acceleration effect brought by our proposed method is clear and the comparison is fair. We will revise our description to clarify this in the revised version of the manuscript. *References* [1] Kim, Youngeun, et al. "Neural architecture search for spiking neural networks." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [2] Moitra, Abhishek, et al. "Spikesim: An end-to-end compute-in-memory hardware evaluation tool for benchmarking spiking neural networks." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023). --- Rebuttal Comment 1.1: Title: Authors reply Comment: Thank you again for your time in reviewing our article. Given that the deadline for the author-reviewer discussion is approaching, we want to kindly remind you to let us know if you have further questions or concerns about our rebuttal. We welcome any further feedback, comments, or questions, and we are open to discussion.
Rebuttal 1: Rebuttal: We’d like to thank all reviewers for their constructive feedback and suggestions on our work. We will address each reviewer’s questions and concerns point by point, and we welcome any further discussion of our paper. We have attached a PDF file containing four figures; a detailed description can be found in each rebuttal thread. Here, we'd like to reply to a concern that we did not include limitations and potential negative societal impact. Our work focuses on reducing the inference cost of SNNs, which can potentially benefit their deployment on edge hardware. We believe it does not bring potential negative impacts on society. As for limitations, SEENN uses the hyperparameters $\alpha, \beta$ to control the accuracy-efficiency tradeoff, which is less straightforward to users and may require some trial and error. Pdf: /pdf/23b0c210cf81e9dd841a061fc8693da04b1b3e46.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Bridging Discrete and Backpropagation: Straight-Through and Beyond
Accept (oral)
Summary: The presented paper tackles the optimization of the parameters of the distribution of a discrete random variable through stochastic gradient descent by addressing the problem of gradient estimation given samples from the discrete distribution. The authors consider three popular estimators as a starting point for their investigation, namely the REINFORCE estimator, the straight-through estimator, and the straight-through Gumbel-Softmax estimator. The authors first establish that the straight-through estimator is a first-order approximation of the true gradient, since the difference $f(I_i) - f(I_j)$ in the true gradient is replaced by $\frac{\partial f(I_j)}{\partial I_j}(I_i - I_j)$. The authors thus replace $\frac{\partial f(I_j)}{\partial I_j}(I_i - I_j)$, which gives a first-order approximation of $f(I_i) - f(I_j)$, by $\frac{1}{2} (\frac{\partial f(I_j)}{\partial I_j} + \frac{\partial f(I_i)}{\partial I_i})(I_i - I_j)$, which yields a second-order approximation. The authors then empirically evaluate both the baselines and the proposed estimator on a synthetic task (polynomial programming), unsupervised modeling (unsupervised sequence parsing), and structured output prediction (generative modeling). Strengths: - The theoretical analysis is well written and intuitively explained. The interpretation of the straight-through estimator as a first-order approximation of the true gradient allows the authors to use numerical-analysis results to improve the theoretical behavior of their proposed solution. - The experiment section is decent, with qualitatively different setups addressing different application domains. - The choice of the baseline for the baseline-subtraction method as well as the tuning of the temperature parameter is appropriately discussed. Weaknesses: - Equation (6) does not seem to be valid. 
I would expect $\sum_i (f(I_i) - E[f(D)]) \frac{d \pi_i}{d\theta} = \sum_i \sum_j \pi_j(f(I_i) - f(I_j)) \frac{d \pi_i}{d\theta}$, thus I don't understand why the term $\sum_i E[f(D)] \frac{d \pi_i}{d \theta}$ vanishes. - The derivation of Remark 4.1 as provided in Appendix C does not seem to be valid. I don't understand how $\sum_i \sum_j \pi_j \frac{\partial f(I_j)}{\partial I_j}(I_i - I_j) \frac{d \pi_i}{d\theta} = \sum_i \frac{\phi_D}{\pi_D} \pi_D \frac{\partial f(I_j)}{\partial I_j} \sum_i I_i \frac{d \pi_i}{d\theta}$. Furthermore, there seems to be a typo at line 155, since it should be $\pi_D$ that is the output of the softmax and could take very small values. This would be consistent with the fact that it is the denominator of the fraction $\frac{\phi_D}{\pi_D}$. - I have no idea how equation (8) has been derived. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It is said that the derivation of equation (8) leverages the derivative of the softmax function, so there may be a handy formula about these derivatives that I am not aware of which justifies all the equations that do not seem valid to me. Could the authors elaborate on my concern about the validity of their equations? I highly doubt that the equations are wrong, given that they coincide with previous work on the topic and the consistent improvements provided in the experiment section. I would be willing to increase my score if the authors correctly address my concerns. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is a decent discussion of the limitations of the proposed method throughout the paper. 
However, given that high variance is the main limiting factor of the REINFORCE estimator, and given that the other estimators are biased, a focus on the bias-variance tradeoff would have been welcome. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
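The review's framing of ST as a first-order approximation can be checked numerically in a special case: when $f$ is linear, the first-order Taylor term $\frac{\partial f(I_j)}{\partial I_j}(I_i - I_j)$ is exact, so the expected ST gradient should coincide with the true gradient of $\mathcal{L}(\theta)=\sum_i \pi_i f(I_i)$. A small self-contained sketch (toy sizes; the variable names are illustrative, not from the paper):

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n = 4
theta = rng.normal(size=n)
W = rng.normal(size=n)          # linear objective f(x) = W @ x, so f(I_i) = W[i]
pi = softmax(theta)

# softmax Jacobian J[m, k] = pi_m * (delta_mk - pi_k)
J = pi[:, None] * (np.eye(n) - pi[None, :])

# ST backward pass: grad_theta = (df/dy at y = D) @ J; for linear f the
# sampled one-hot D drops out, so the expected ST gradient is simply W @ J
st_grad = W @ J

# true gradient of L(theta) = sum_i pi_i f(I_i) = pi @ W, via central differences
eps = 1e-6
exact = np.zeros(n)
for k in range(n):
    tp = theta.copy(); tp[k] += eps
    tm = theta.copy(); tm[k] -= eps
    exact[k] = (softmax(tp) @ W - softmax(tm) @ W) / (2 * eps)

assert np.allclose(st_grad, exact, atol=1e-6)
```

For nonlinear $f$ the dropped higher-order Taylor terms reintroduce bias, which is exactly the gap the paper's second-order estimator targets.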
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address your concerns regarding the correctness of the derivation, with further elaborations to be included in the final paper. **Reply to weakness argument 1:** Since $\sum_i \pi_i = 1$, we have $\sum_i E[f(D)] \frac{d \pi_i}{d \theta} = E[f(D)] \cdot \frac{d \sum_i \pi_i }{d\theta} = E[f(D)] \cdot \frac{d 1}{d\theta} = 0$. In the revision, we will add a short explanation of this in Eq. (6). **Reply to weakness argument 2:** Thanks for pointing out the typo; we will fix it and add more elaboration in the revision. Please find the detailed derivation of the second-to-last equation in Appendix C below. $\sum_i \sum_j \phi_j \frac{\partial f(I_j)}{\partial I_j} (I_i - I_j) \frac{d \pi_i}{d \theta}$ $= \sum_j \Big( \phi_j\frac{\partial f(I_j)}{\partial I_j} \sum_i (I_i - I_j) \frac{d \pi_i}{d \theta} \Big)$ $= \sum_j \Big( \phi_j\frac{\partial f(I_j)}{\partial I_j} \big(\sum_i I_i \frac{d \pi_i}{d \theta} - \sum_i I_j \frac{d \pi_i}{d \theta} \big)\Big)$ $= \sum_j \Big( \phi_j\frac{\partial f(I_j)}{\partial I_j} \big(\sum_i I_i \frac{d \pi_i}{d \theta} - I_j \frac{d \sum_i \pi_i}{d \theta} \big)\Big)$ $= \sum_j \Big( \phi_j\frac{\partial f(I_j)}{\partial I_j} \big(\sum_i I_i \frac{d \pi_i}{d \theta} - I_j \frac{d 1}{d \theta}\big) \Big)$ $= \sum_j \phi_j\frac{\partial f(I_j)}{\partial I_j} \sum_i I_i \frac{d \pi_i}{d \theta}$ $= \sum_j \frac{\phi_j}{\pi_j}\cdot \pi_j \cdot \frac{\partial f(I_j)}{\partial I_j} \sum_i I_i \frac{d\pi_i}{d\theta}$. **Reply to weakness argument 3 and question 1:** In the revision, we will mention the derivative of the softmax around L171, i.e., for $\pi = \mathrm{softmax}(\theta)$, we have $\partial \pi_i / \partial \theta_k = \pi_k (\delta_{ik} - \pi_i)$. Please find the detailed derivation of equation (8) below. 
$\frac{\partial \mathcal{L}}{\partial \theta_k}= \frac{\partial \sum_i \pi_i f(I_i) }{\partial \theta_k}$ $=\sum_i f(I_i) \frac{d \pi_i}{d \theta_k}$ $= \sum_i f(I_i) \pi_k (\delta_{ik} - \pi_i)$ $= \sum_i f(I_i) \pi_k \delta_{ik} - \sum_i f(I_i) \pi_k \pi_i$ $= f(I_k) \pi_k - \sum_i f(I_i) \pi_k \pi_i$ $= f(I_k) \pi_k \sum_i \pi_i - \sum_i f(I_i) \pi_k \pi_i$ $= \pi_k \sum_i \pi_i (f(I_k) - f(I_i))$. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to carefully write out each step of their derivation for me. As other reviewers suggested, some details needed for the full derivation should be mentioned in the main text; alternatively, I would suggest including the derivations in the appendix if the main body runs out of space. Since the proposed contribution consistently outperforms alternative methods in the experiments provided in the paper as well as the additional experiments provided in the global rebuttal, I am increasing my score accordingly. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thank you for your feedback. We appreciate your acknowledgment of our work's performance in the experiments, and we will include the discussions and clarifications here.
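The closed form at the end of this derivation, $\frac{\partial \mathcal{L}}{\partial \theta_k} = \pi_k \sum_i \pi_i (f(I_k) - f(I_i))$, is easy to sanity-check numerically against a finite-difference gradient of $\mathcal{L}(\theta)=\sum_i \pi_i f(I_i)$. A small illustrative sketch (toy values, not code from the paper):

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

rng = np.random.default_rng(0)
theta = rng.normal(size=4)
f_vals = rng.normal(size=4)       # f(I_i), the objective at each one-hot outcome

pi = softmax(theta)

# closed form from Eq. (8): dL/dtheta_k = pi_k * sum_i pi_i * (f(I_k) - f(I_i))
#                                       = pi_k * (f(I_k) - E[f(D)])
grad_closed = pi * (f_vals - pi @ f_vals)

# central-difference gradient of L(theta) = sum_i pi_i f(I_i)
eps = 1e-6
grad_fd = np.zeros(4)
for k in range(4):
    tp = theta.copy(); tp[k] += eps
    tm = theta.copy(); tm[k] -= eps
    grad_fd[k] = (softmax(tp) @ f_vals - softmax(tm) @ f_vals) / (2 * eps)

assert np.allclose(grad_closed, grad_fd, atol=1e-6)
```

Note the comment in the block: using $\sum_i \pi_i = 1$, the closed form collapses to $\pi_k (f(I_k) - E[f(D)])$, the baseline-subtracted expression discussed in the paper.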
Summary: This paper works on gradient approximation for discrete latent variables. The problem is challenging because the discreteness hinders direct backpropagation through the neural networks; hence, an approximation is needed. The authors address the issue from the perspective of a second-order approximation of the gradient. Specifically, the authors propose ReinMax and show that ReinMax achieves second-order accuracy with negligible computational overhead. The experiments on Polynomial Programming, MNIST-VAE, and ListOps demonstrate the superiority of ReinMax over prior methods such as Straight-Through Gumbel-Softmax, Gumbel-Rao Monte Carlo, and Gapped Straight-Through. Strengths: The problem has been challenging for a long time. Most prior work addresses the issue from the perspective of Straight-Through Gumbel-Softmax (STGS). To the reviewer's best knowledge, the perspective of second-order approximation is novel in this field, and the performance improvement is believable. The following highlights the strengths of the paper. a) The use of second-order approximation is novel, and the theoretical derivation is believable. b) The performance gain is clear, with similar computational overhead as in prior work. c) The experimental analysis covers relevant topics such as temperature scaling and the choice of baseline. The conclusions are clear from the analysis. Weaknesses: a) The design choice of ReinMax is unclear. Although ReinMax is well motivated by the second-order approximation, the reader might be confused by the form of ReinMax in Eq. (7). For example, why mix $\pi$ and $D$ by introducing $\pi_D=(\pi+D)/2$? Why subtract $\nabla_{ST}$? Where do the coefficients (2 and 1/2) come from? It would be nice if the authors could elaborate more on these. b) The implication of the experiments remains unclear. For a fair comparison with prior work, the authors follow the literature and run ReinMax on Polynomial Programming, MNIST-VAE, and ListOps. 
However, these are toy problems with limited implications for applications. For example, does the good performance of ReinMax on Polynomial Programming/MNIST-VAE/ListOps imply any possible applications? Conversely, if we have a reinforcement learning problem with discrete actions or a VQ-VAE model, can ReinMax be helpful? It would be nice if the authors could evaluate more on the application side so that other practitioners can follow. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is the proposed method called ReinMax? For the other questions, see the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address the concerns regarding experimental design and presentation in this rebuttal, with further elaborations to be included in the final paper. **Reply to weakness 1:** We understand that the complexity of ReinMax's exact form may not be immediately intuitive, as it is the result of specific derivations. To alleviate confusion, we will dedicate additional space in the final version to elaborating on the exact form of ReinMax. **Reply to weakness 2:** In our submission, we adhered to the experiment design of existing studies, focusing on small-scale problems for controllability and resource efficiency, as our experiments were mainly conducted on P100/P40 GPUs. To further demonstrate the generalizability of ReinMax, we conducted additional experiments (detailed in the general rebuttal), including (1) comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, specifically RODEO (Shi 2022), and (2) applications to a real-world scenario, i.e., differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. ReinMax maintained outstanding performance throughout these expanded tests, showing consistent improvements over the baseline. These additional experiments should provide a more comprehensive understanding of ReinMax's potential applications. **Reply to question 1:** We used the paper title and the abstract as the prompt and queried ChatGPT for several names, among which we selected ReinMax, since it concisely implies the method's connection to softmax and REINFORCE, and we believe it is a fitting and appealing term. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. 
--- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for this feedback, authors. This will be taken into account.
Summary: The paper proves that the straight-through (ST) gradient estimator for the categorical distribution can be viewed as a first-order approximation of the true gradients. Based on this point of view, a new gradient estimator, ReinMax, is proposed based on Heun's method, which is a second-order method. Experiments are conducted which show that ReinMax improves upon state-of-the-art methods. Strengths: - **The paper has good writing quality.** The background is introduced in a well-organized manner and contributions are stated clearly. Mathematical formulae are also well laid out and easy to understand. Relationships with existing work (ST, REINFORCE, baseline subtraction) are also expanded upon in good detail and nuance. - **Solid contribution with a comprehensive set of experiments.** The proposed new method has solid theoretical underpinnings (Heun's method being more accurate than Euler's method), is still very efficient (not requiring the Hessian), and is proven to be effective under multiple different settings (polynomial programming, MNIST, etc.). The new method is compared against many commonly used methods (STGS, DisARM, ...), and there is also a discussion of using a different baseline for subtraction. Extensive hyperparameter tuning is performed and the advantage of the proposed method is demonstrated well. Weaknesses: My main concern with this paper is that it **oversells its theoretical contribution**. The view of the straight-through estimator as a first-order Taylor expansion of the unbiased gradients is not news to people who study gradient estimators. The authors are fair in pointing out that previous work (Tokui & Sato 2017) only dealt with the Bernoulli case, but one might argue that the categorical distribution is just a natural extension of the Bernoulli case. Regardless, I think it is **potentially misleading to say that "ST as a first-order approximation to the gradient" is a novel perspective**. 
Despite this, I still consider the proposed approach to be novel enough that the contribution of the paper as a whole is still significant. Another concern is the lack of comparison with RODEO (Shi 2022), which, as far as I know, is the state of the art for REINFORCE methods with variance reduction. The paper only reported numbers for RLOO and DisARM in Table 3, but these methods are known to be weaker than RODEO, which is reported to outperform DisARM significantly in terms of gradient variance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - What is the effect of temperature scaling on the proposed method? How does it influence the relative performance of ST-type methods when compared with unbiased REINFORCE-based methods? - Is there a complexity-efficiency tradeoff here? For example, would using 4th-order Runge-Kutta instead of Heun's method yield much better gradient estimations and improve the overall result? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not explicitly discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address the concerns regarding the novelty of our study and experiment design, with further elaborations to be included in the final paper. **Reply to weakness argument 1:** While it may be intuitive to some that the straight-through estimator functions as a gradient approximation, no prior work has formally established this for the general multinomial case. Also, the derivation of Theorem 3.1, while not overly complicated, is more than a mere expansion of existing results: - In Tokui & Sato (2017), the authors positioned $\hat{\nabla}_{ST}$ as a first-order approximation, but their analysis is exclusively rooted in the properties of Bernoulli variables. As an example, let us consider a Bernoulli random variable $D \in \{ I_1, I_2 \}$. Their approach depends on the property that $\nabla = (f(I_2) - f(I_1)) \frac{d\pi_1}{d \theta} = (f(I_1) - f(I_2)) \frac{d\pi_2}{d \theta}$, and thus is not applicable to multinomial variables. - On the other hand, the analyses in Gregor et al. (2014) and Pervez et al. (2020) are applicable to multinomial variables but resort to adding the term $\frac{1}{n \cdot \pi_D}$ in $\hat{\nabla}_{ST}$, an alteration that we believe could induce unwanted instability. This concern is discussed in Section 4.1 and Section 6.4. In the revision, we will modify our claim to "our study is the first to formally establish that $\hat{\nabla}_{ST}$ works as a first-order approximation in the general multinomial case". **Reply to weakness argument 2:** In the general rebuttal, we provide additional experimental results on (1) comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, specifically RODEO (Shi 2022), and (2) applications to a real-world scenario, i.e., differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. 
ReinMax maintained outstanding performance throughout these expanded tests, showing consistent improvements over the baseline. **Reply to question 1:** In our experiments, we observed that temperature scaling enhances the stability of both the ST and ReinMax algorithms. We conjecture that this scaling acts as a variance reduction method. Across all six VAE settings discussed in the general rebuttal, ReinMax achieved the best performance when the temperature was set to 1.1. Adjusting the temperature to 1.0 or 1.2 leads to a slight performance drop (with an ELBO difference of approximately 1), while ReinMax still outperformed RODEO with these sub-optimal temperatures. **Reply to question 2.1:** As discussed in Section 6.4, the computation overhead of ReinMax is negligible. **Reply to question 2.2:** Although it is possible to apply higher-order ODE solvers, they require more gradient evaluations, leading to undesirable computational overhead. To illustrate this point: - The approximation used by ReinMax (as described in Definition 3.2) requires $N$ gradient evaluations, i.e., $\{\frac{\partial f(I_i)}{\partial I_i}\}$. - In contrast, the approximation derived by RK4 needs $N^2+N$ gradient evaluations, i.e., $\{\frac{\partial f(I_i)}{\partial I_i}\}$ and $\{\frac{\partial f(I_{ij})}{\partial I_{ij}}\}$, where $I_{ij} = \frac{I_i + I_j}{2}$. Therefore, while higher-order solvers are applicable, they may not be suitable in our case. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and for providing further insights on the theoretical aspects of their work. 
The authors added a comparison with the state-of-the-art method for REINFORCE variance reduction and provided insight on the strengths of each method, which I found helpful for my understanding of their work. I am also happy to know that the authors have acknowledged my point on the novelty of the theoretical perspective and are willing to adjust the writing. With the above concerns addressed, I am bumping up my score to a 7. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thank you for your constructive feedback. We're glad our clarifications were helpful. We will include the discussions and clarifications in the revision.
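The first- vs. second-order distinction discussed in this thread mirrors Euler vs. Heun in numerical analysis: approximating $f(b)-f(a)$ by $f'(a)(b-a)$ has $O(h^2)$ error, while the Heun/trapezoid form $\frac{1}{2}(f'(a)+f'(b))(b-a)$ has $O(h^3)$ error, which is why halving the step shrinks the errors by roughly 4x and 8x respectively. A quick numerical illustration on a scalar function (generic sketch, unrelated to the paper's code):

```python
import numpy as np

f, df = np.sin, np.cos   # test function and its exact derivative
a = 0.3
err1, err2 = [], []
for h in (0.1, 0.05):
    b = a + h
    true_diff = f(b) - f(a)
    first = df(a) * h                    # Euler-style (ST-like) term
    second = 0.5 * (df(a) + df(b)) * h   # Heun/trapezoid-style term
    err1.append(abs(first - true_diff))
    err2.append(abs(second - true_diff))

# halving h shrinks the first-order error ~4x (O(h^2)) and the
# second-order error ~8x (O(h^3))
r1 = err1[0] / err1[1]
r2 = err2[0] / err2[1]
assert 3.5 < r1 < 4.7
assert 7.0 < r2 < 9.0
```

The RK4 point in the thread then follows: pushing to fourth order buys more accuracy per step, but in the estimator setting each extra stage costs extra gradient evaluations ($N^2+N$ vs. $N$), so the payoff diminishes.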
Summary: This work develops a new method, ReinMax, to compute gradients of parameters used to generate discrete random variables; specifically, the parameters of a multinomial's softmax parameterization. It provides a new perspective on the existing straight-through (ST) gradient estimator as a first-order approximation. The new method is a second-order approximation and, in contrast to ST, relies on gradient differences rather than a single gradient. The ReinMax gradient estimator is compared to other approaches on polynomial programming toy problems, unsupervised parsing on ListOps, and ELBO training of a VAE on MNIST. Strengths: - (S1) The paper provides a new perspective on the straight-through (ST) technique, which simply replaces backpropagation through a non-differentiable operation by applying the identity. This adds a new theoretical justification for why such heuristics work. Framing ST as a first-order approximation also naturally suggests an extension to second order, as presented in the paper. - (S2) The new method does not seem to add significant overhead over existing approaches. The analysis includes a sensitivity study over different problem hyperparameters, which suggests that the estimation works consistently. Weaknesses: - (W1) Limited to the softmax parameterization of the multinomial: The main text focuses on computing gradients for the parameters of a multinomial parameterized by a softmax. The authors should comment on the (im)possibility of treating other parameterizations or distributions. - (W2) Experiments: In the presented experiments, ReinMax seems to consistently work well and perform comparably to or more favourably than the competitors. However, I do not have the expertise to judge whether the experiments represent state-of-the-art tasks. I am willing to adapt my score and confidence during the discussion phase with the authors and other reviewers. 
- (W3) Clarity: Some steps could be easier to follow by providing the required mathematical properties. The figures and tables should be moved closer to where they are referenced in the text. Here is a list of actionable suggestions which I think would improve clarity: - In Eq. (4), explicitly add the term $\partial D / \partial \pi$ that is set to the identity in ST - In Eq. (6), mention that $\sum_i \partial \pi_i / \partial \theta = \partial \sum_i \pi_i / \partial \theta = \partial 1 / \partial \theta = 0$. - I think it would be helpful to move parts of appendix E to the main text to have a more formal description of first- and second-order approximation in the specific context. - Fix '2rd-order' into '2nd-order' in various places in the main text and the appendix. - Mention the derivative of a softmax around L171, that is $\partial \pi_i / \partial \theta_k = \pi_k (\delta_{ik} - \pi_i)$ - Clarify the notation $\delta_{\mathbf{D} k}$ in the appendix - Minor suggestions: Use consistent symbol $\mathcal{R}$ in L51, 'softmax' in caption of Figure 1, remove 'on' in L84, 'computational efficiency' rather than 'computation efficiency', 'approximation' in L185, use bold symbol for $\theta_i$ in L210 and add $\in \mathcal{R}^2$, 'phenomena' rather than 'phenomenon' in L218), add 'that' between 'one' and 'uses' in L264 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why is the method called ReinMax? - ReinMax is a second-order extension of ST. Can this be extended to even higher orders? Would this be beneficial or are there diminishing returns in including higher orders? - What is $\phi_i$ in L150 and what are its constraints? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See (W1) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
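Regarding the review suggestion above to state the softmax derivative explicitly: the identity $\partial \pi_i / \partial \theta_k = \pi_k (\delta_{ik} - \pi_i)$ can be checked numerically against central finite differences. The sketch below is ours, not from the paper (helper names are illustrative):

```python
import math

def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    s = sum(exps)
    return [e / s for e in exps]

def jacobian_closed_form(theta):
    # d pi_i / d theta_k = pi_k * (delta_ik - pi_i)
    pi = softmax(theta)
    n = len(theta)
    return [[pi[k] * ((1.0 if i == k else 0.0) - pi[i]) for k in range(n)]
            for i in range(n)]

def jacobian_finite_diff(theta, eps=1e-6):
    n = len(theta)
    J = [[0.0] * n for _ in range(n)]
    for k in range(n):
        tp, tm = list(theta), list(theta)
        tp[k] += eps
        tm[k] -= eps
        pp, pm = softmax(tp), softmax(tm)
        for i in range(n):
            J[i][k] = (pp[i] - pm[i]) / (2.0 * eps)
    return J

theta = [0.2, -1.0, 0.7]
Jc = jacobian_closed_form(theta)
Jn = jacobian_finite_diff(theta)
err = max(abs(Jc[i][k] - Jn[i][k]) for i in range(3) for k in range(3))
print(err)  # tiny: the closed form matches finite differences
```

Each column of the Jacobian sums to zero, which is exactly the $\sum_i \partial \pi_i / \partial \theta = 0$ fact suggested for Eq. (6).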
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address the concerns regarding experimental design, presentation, and limitations of ReinMax in this rebuttal, with further elaborations to be included in the final paper. **Reply to limitation 1:** While our main text focuses on multinomial distributions, by reparameterizing other categorical distributions into a multinomial distribution, ReinMax can be generally applied to all categorical random variables. It is worth mentioning that ReinMax is not applicable to continuous random variables. While extending ReinMax to these cases is possible, it may not be necessary, given that many commonly used distributions can be reparameterized. For example, the normal distribution $z \sim \mathcal{N}(\mu, \sigma)$ can be re-written as $z = \mu + \sigma \cdot \mathcal{N} (0, 1)$, making it trivial to compute $\partial z/\partial \mu$ and $\partial z/\partial \sigma$. Furthermore, ReinMax differs from REINFORCE in that ReinMax requires both the $\frac{\partial f(D)}{\partial D}$ and $\frac{\partial p(D)}{\partial \theta}$, while REINFORCE only requires $\frac{\partial p(D)}{\partial \theta}$. Thus, REINFORCE can be applied in cases where $f(D)$ is not differentiable. **Reply to weakness 2:** In our submission, we adhered to the experiment design of the existing study, focusing on small-scale problems for controllability and resource efficiency, as our experiments were mainly conducted on P100/P40 GPUs. To further demonstrate the generalizability of ReinMax, we conducted additional experiments (detailed in the general rebuttal), including (1) comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, specifically RODEO (SHI 2022), and (2) applications to a real-world scenario, i.e., differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. 
Our findings showed that ReinMax performs strongly and consistently in these settings. We will add more discussions and elaborations about these experiments in the final version of the paper. **Reply to weakness 3:** We appreciate your suggestions for improving clarity, and we will incorporate them into our revision. **Reply to question 1:** We used the paper title and the abstract as the prompt to query ChatGPT for several names, among which we selected ReinMax since it concisely implies the method's connection to softmax and REINFORCE, and we believe it is a fitting and appealing term. **Reply to question 2:** Although it's possible to apply higher-order ODE solvers, they require more gradient evaluations, leading to undesirable computational overhead. To illustrate this point: - The approximation used by ReinMax (as described in Definition 3.2) requires N gradient evaluations, i.e., $\{\frac{\partial f(I_i)}{\partial I_i}\}$. - In contrast, the approximation derived by RK4 needs $N^2+N$ gradient evaluations, i.e., $\{\frac{\partial f(I_i)}{\partial I_i}\}$ and $\{\frac{\partial f(I_{ij})}{\partial I_{ij}}\}$, where $I_{ij} = \frac{I_i + I_j}{2}$. Therefore, while higher-order solvers are applicable, they may not be suitable in our case. **Reply to question 3:** $\phi$ is a distribution over $\{I_1, \cdots, I_n\}$, i.e., $\sum_i \phi_i = 1$ and $\phi_i = P(I_i)$. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. --- Rebuttal Comment 1.1: Title: Rebuttal follow-up Comment: Dear authors, thanks for the rebuttal. I am satisfied with your discussion of ReinMax's applicability/limitations (W1) and the possible extension to higher-order estimators. **Please make sure to include them in the draft**.
You should also include a footnote that explains the algorithm's name, as this was brought up by multiple reviewers. Based on the overall positive feedback on your experiments (W2) from the other reviewers, as well as the additional results you provided in the rebuttal, I have decided to raise my score. --- Reply to Comment 1.1.1: Title: Author response Comment: Thanks again for your timely response and detailed suggestions! We will include these discussions and additional results in the revision.
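The first- vs. second-order trade-off raised in question 2 can be seen in a toy numeric sketch: approximating the change $f(b) - f(a)$ from endpoint gradients only. A forward-Euler-style step uses one gradient (as in ST); a Heun-style trapezoidal step averages two gradients (as in ReinMax). This is our illustrative analogy, not code from the paper:

```python
import math

def first_order_step(f_prime, a, b):
    # forward-Euler style: a single gradient at the start point
    return f_prime(a) * (b - a)

def second_order_step(f_prime, a, b):
    # Heun style: average the gradients at both endpoints
    return 0.5 * (f_prime(a) + f_prime(b)) * (b - a)

f = math.exp          # test function with a known derivative
f_prime = math.exp
a, b = 0.0, 0.5
truth = f(b) - f(a)
err1 = abs(first_order_step(f_prime, a, b) - truth)
err2 = abs(second_order_step(f_prime, a, b) - truth)
print(err1, err2)  # the two-gradient estimate is roughly 10x more accurate here
```

Higher-order solvers like RK4 would shrink the error further but, as the reply notes, at the cost of extra gradient evaluations.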
Rebuttal 1: Rebuttal: # General Rebuttal We thank all reviewers for their thoughtful feedback. In this work, we tackle critical challenges in gradient estimation for discrete variables, and our contributions are notable in two primary areas: - we formally established that the straight-through estimator is a first-order approximation of the gradient, - we proposed ReinMax, offering a second-order approximation with negligible computational overhead and consistent performance improvements. In this general rebuttal, we provide additional experiment results on: - Comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, namely RODEO (SHI 2022). - Applications of ReinMax to real-world problems, specifically differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. ## Comparisons with RODEO ### Bernoulli VAEs We utilized ReinMax to train Bernoulli VAEs on MNIST, Fashion-MNIST, and Omniglot, adhering closely to the experimental settings of RODEO (SHI et al., 2022), including pre-processing, model architecture, batch size, and training epochs. As summarized in Tables A and B, ReinMax consistently outperforms RODEO across all settings. **Table A: -ELBO of 2 x 200 VAE on MNIST, Fashion-MNIST, and Omniglot when K=3 (i.e., three evaluations per image). \* Baseline results are referenced from SHI et al. (2022).**

||ARMS$^*$|DoubleCV$^*$|RODEO$^*$|RELAX$^*$|ReinMax|
|-|-|-|-|-|-|
|MNIST|100.84±0.14|100.94±0.09|100.46±0.13|101.99±0.04|97.83±0.36|
|Fashion-MNIST|237.05±0.12|237.40±0.11|236.88±0.12|237.74±0.12|234.53±0.42|
|Omniglot|115.32±0.07|115.06±0.12|115.01±0.05|115.70±0.08|107.51±0.42|

**Table B: -ELBO of 2 x 200 VAE on MNIST, Fashion-MNIST, and Omniglot when K=2 (i.e., two evaluations per image). \* Baseline results are referenced from SHI et al.
(2022).**

||DisARM$^*$|Double CV$^*$|RODEO$^*$|ReinMax|
|-|-|-|-|-|
|MNIST|102.75±0.08|102.14±0.06|101.89±0.17|98.05±0.29|
|Fashion-MNIST|237.68±0.13|237.55±0.16|237.44±0.09|234.86±0.33|
|Omniglot|116.50±0.04|116.39±0.10|115.93±0.06|107.79±0.27|

### Polynomial Programming To better understand the difference between RODEO and ReinMax, we conducted more experiments on polynomial programming, i.e., $\min_\theta E_{X} \left[ \frac{\|X - c\|_p^p}{L} \right]$. Specifically, we consider polynomial programming under two settings that define $c$ differently (the difference between these two settings is elaborated at the end of this part): - Setting A: $c = [0.45, \cdots, 0.45]$. This is the setting we used in the submission. - Setting B: $c = [\frac{0.5}{L}, \frac{1.5}{L}, \cdots, \frac{L-0.5}{L}]$. We visualized the training curves of polynomial programming in the attached pdf (Figure 13), together with the training curve of the Bernoulli VAE (Figure 12). ReinMax achieves better performance in more challenging scenarios, i.e., smaller batch size, more latent variables, or more complicated problems (Setting B or VAEs). Meanwhile, REINFORCE and RODEO achieve better performance in simpler problem settings, i.e., larger batch size, fewer latent variables, or simpler problems (Setting A). This observation matches our intuition: - REINFORCE-style algorithms excel as they provide unbiased gradient estimation but may fall short in complex scenarios, since they only utilize zeroth-order information, i.e., a scalar $f(\cdot)$ for each training instance. - ReinMax, using more information (i.e., a vector $\frac{\partial f(D)}{\partial D}$ for each training instance), handles challenging scenarios better. Meanwhile, as a consequence of its estimation bias, ReinMax leads to slower convergence in some simple scenarios.
As to the difference between Setting A and Setting B, we would like to note: - In Setting A, since $\forall i, c_i=0.45$ and $\theta_i \sim \mathrm{Uniform}(-0.01, 0.01)$ at initialization, $E_{X_i \sim \mathrm{softmax}(\theta_i)} \left[ \frac{\|X_i - c_i\|_p^p}{L} \right]$ would have similar values. Therefore, the optimal control variates for $\theta_i$ are similar across different $i$. - In Setting B, we set $c_i$ to different values for different $i$, and thus the optimal control variates for $\theta_i$ are different across different $i$. Therefore, Setting A is a simpler setting for applying a control variate to REINFORCE. ## Differentiable Neural Architecture Search We demonstrate the applicability of ReinMax as a drop-in replacement in differentiable neural architecture search. GDAS (Dong & Yang, 2019) is an algorithm that employs STGS to estimate the gradient of neural architecture parameters with a temperature schedule (decaying linearly from 10 to 0.1). We replaced STGS with ReinMax as the gradient approximator and changed the minimal temperature from 0.1 to 1.1 (as discussed in Section 5 and Section 6.2, temperature scaling plays a different role in ReinMax). We evaluate the resulting algorithm with the official implementation under the topology search setting in the NATS-Bench benchmark (Dong et al., 2020), and summarize the results in Table C below. ReinMax brings consistent performance improvements over the baseline across all three datasets, demonstrating the great potential of ReinMax. We will add more analyses and discussions in the revision. **Table C: Performance in NATS-Bench. \* Baseline results are referenced from Dong et al. (2020).**

||CIFAR-10 DEV|CIFAR-10 TEST|CIFAR-100 DEV|CIFAR-100 TEST|ImageNet-16-120|
|-|-|-|-|-|-|
|GDAS-Straight-Through Gumbel-Softmax$^*$|89.68±0.72|93.23±0.58|68.35±2.71|68.17±2.50|39.55±0.00|
|GDAS-ReinMax|89.92±0.27|93.47±0.35|69.40±1.63|69.61±1.71|41.11±2.09|

Dong, X. and Yang, Y.
Searching for a robust neural architecture in four GPU hours. *CVPR*, 2019. Dong, X., Liu, L., Musial, K., and Gabrys, B. NATS-Bench: Benchmarking NAS algorithms for architecture topology and size. *TPAMI*, 2020. Pdf: /pdf/5ef08e647facd01a0391b10c5f7f1b844212125c.pdf
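The role of the control variate discussed above (Setting A vs. Setting B) can be illustrated with a minimal single-variable REINFORCE sketch; using $b = E[f]$ as the baseline is our illustrative choice and is far simpler than RODEO itself:

```python
import random

random.seed(1)
p, c = 0.6, 0.45
f = lambda x: (x - c) ** 2  # one Bernoulli coordinate of Setting A
score = lambda x: 1.0 / p if x == 1 else -1.0 / (1.0 - p)  # d log P(x) / dp

def reinforce_samples(baseline, n=50_000):
    # single-sample REINFORCE estimates of d/dp E[f(x)], x ~ Bernoulli(p)
    out = []
    for _ in range(n):
        x = 1 if random.random() < p else 0
        out.append((f(x) - baseline) * score(x))
    return out

def mean_var(vals):
    m = sum(vals) / len(vals)
    return m, sum((v - m) ** 2 for v in vals) / len(vals)

m_plain, v_plain = mean_var(reinforce_samples(0.0))
m_base, v_base = mean_var(reinforce_samples(p * f(1) + (1 - p) * f(0)))  # baseline = E[f]
print(m_plain, v_plain)  # unbiased estimate of the true gradient 0.1, high variance
print(m_base, v_base)    # same mean, far lower variance
```

In Setting A all coordinates share a similar optimal baseline, so one control variate serves every $\theta_i$; in Setting B each coordinate would need its own, which is why Setting A is the easier case for REINFORCE-style methods.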
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper designs a new approach to approximating the gradient of parameters used in generating discrete random variables. The work first establishes the connection between the Straight-Through (ST) estimator and the first-order approximation of the true gradient. From this insight, the authors aim to improve the ST estimator by applying a second-order approximation of the true gradient. To obtain a second-order approximation without actually calculating second-order derivatives, the authors use Heun's method, which builds the approximation from two first-order derivatives; thus no expensive calculation is needed. The proposed estimator is coined ReinMax. Through mathematical analyses, the effectiveness of using the expected value of function outputs is proven. In evaluations, the authors empirically show that the ReinMax method outperforms other estimators, while also presenting other properties and insights, e.g., sensitivity to the number of dimensions, batch size, convergence speed, memory usage, and running time. Strengths: - The paper is well-organized and easy to follow - Mathematical background is given thoroughly - The method can be applied with minimal change of code Weaknesses: - Equation 6 seems to be one of the core findings of the paper; the derivation process could be written more comprehensibly (like in Appendix A) - More evaluations on benchmark datasets close to real-world distributions would be beneficial. For instance, how much improvement will be made when we apply the ReinMax estimator to other models for NLI or sentiment analysis instead of ListOps? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - In experiments using the most basic REINFORCE algorithm, not RLOO or DisARM-Tree, is there any baseline subtraction used? If not, can the $E[f(I_i)]$ baseline also be used in REINFORCE? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: As mentioned in the checklist, I suppose no potential negative societal impact will arise by this work. Experiments are done quite fairly, including standard deviations of scores and implementation code. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address the concerns regarding experimental design in this rebuttal, with further elaborations to be included in the final paper. **Reply to weakness 1:** Thanks for the suggestions! We will add more elaborations in the revision. **Reply to weakness 2:** In our submission, we adhered to the experiment design of the existing study, focusing on small-scale problems for controllability and resource efficiency, as our experiments were mainly conducted on P100/P40 GPUs. To further demonstrate the generalizability of ReinMax, we conducted additional experiments (detailed in the general rebuttal), including (1) comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, specifically RODEO (SHI 2022), and (2) applications to a real-world scenario, i.e., differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. ReinMax maintained outstanding performance throughout these expanded tests, showing consistent improvements over the baseline. **Reply to question 1:** In the general rebuttal, we detailed comparisons between ReinMax and RODEO, a REINFORCE variant employing a state-of-the-art variance reduction method. RODEO outperforms ReinMax on simple scenarios (e.g., large batch size, small number of latent variables, Setting A). Meanwhile, ReinMax achieves better performance on complex scenarios (e.g., small batch size, large number of latent variables, VAE, and Setting B). We will add more elaborations and make corresponding revisions in the final version. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for this feedback authors. This will be taken into account.
Summary: The paper shows that the straight-through estimator is a first-order approximation of the gradient. The authors then propose a method, called ReinMax, which provides a second-order approximation with negligible computational overhead. Experiments are performed in several settings involving discrete variables (polynomial programming, structured output prediction, and discrete latent variable generative models), showing that ReinMax is more accurate and stable. Strengths: **Originality** The originality of the paper is fairly strong. As I see it, the main contributions of the paper are in 1) identifying the straight-through estimator as an instance of the forward Euler method, which is a first-order approximation of the gradient, and 2) using Heun’s method to derive a second-order gradient approximation. To someone mostly outside of the field of gradient approximation, these are non-trivial insights that, to the best of my knowledge, are unique to this paper. **Quality** The paper is high quality. I particularly appreciated the effort that the authors put into the empirical evaluation, performing hyperparameter sweeps to demonstrate stability, as well as empirically verifying that the ReinMax gradient estimator provides improved estimates of the true gradient. Overall, the result is a paper that provides compelling evidence of 1) improved understanding of straight-through and 2) an improved gradient estimator. **Clarity** Overall, the paper is quite clear. The authors do an excellent job of presenting mathematical notation to help the reader see the similarities across gradient derivations and estimators. Theoretical results are presented in a clear, logical order. The empirical results are also generally presented well, with clear labeling of tables and plots. **Significance** While many papers have mentioned that the straight-through estimator is an approximation of the gradient, it appears that none of these papers have formally shown it. 
If, indeed, this paper is the first to do so, then it is a significant contribution to our theoretical understanding. The proposed improved estimator, ReinMax, also appears to offer some performance improvement over previously proposed (first-order) estimators. Considering that ReinMax has similar computational overhead as straight-through, then this could serve as a drop-in replacement for the straight-through estimator throughout various applications. However, it is unclear to me whether this serves as a drop-in replacement for all existing instances of the straight-through estimator, or merely those that operate on Multinomial distributions. Weaknesses: I see two relatively minor weaknesses in the paper as-is: larger-scale empirical evaluation and slight improvements to the presentation. These are discussed below. The current empirical evaluation involves three settings: quadratic programming, structured output prediction, and latent variable generative modeling with discrete latent variables. While the authors demonstrate ReinMax in all three settings, much of the empirical evaluation revolves around the final setting (categorical VAE on MNIST). These results are a useful indicator of the benefits of ReinMax, however, the empirical setting itself is rather toy-ish compared with modern settings. Given that ReinMax is a drop-in replacement for the straight-through estimator, I would imagine that it should be trivial to replace the ST estimator in an existing scaled-up setting, e.g., vector-quantized VAEs or Hafner et al.’s discrete world models. The authors could alternatively / additionally explore categorical VAEs on more complex data. This would allow the authors to more definitively claim that their proposed gradient estimator leads to tangible empirical improvements. Additionally, several minor aspects of the presentation could be improved. * The plots in Figure 1 are presented with fairly minimal context in the surrounding text. 
The caption states “Details are elaborated in Section 6.” Then it may make sense to place this figure closer to Section 6. * As far as I can tell, the name “ReinMax” is never actually explained. * The labels for the baseline methods, e.g., in Figure 1, 4, etc. are difficult to read; they are various dashed lines. Colors and/or larger lines would make this clearer. * The shaded regions in Figure 5 (right) are rather jagged — perhaps there’s an issue with the evaluation interval or the plotting setup. * In various tables, e.g., Tables 1, 2, …, the results for ReinMax are bolded, despite falling within the error bounds of the baseline methods. I find this to be somewhat misleading. * The citation in the second sentence of the introduction is incorrect. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you please elaborate on the applicability of the ReinMax estimator? Does this estimator (or the insights developed in the theoretical sections of the paper) alleviate the “laborious and time-consuming” work of developing “different ST variants for different applications in a trial-and-error manner”? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: As mentioned above, I would appreciate a clearer discussion of which existing forms of straight-through estimator the ReinMax estimator can replace. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We value your comments and will address the concerns regarding experimental design, applicability, and limitations of ReinMax in this rebuttal, with further elaborations to be included in the final paper. **Reply to Weakness 1 and Question 1:** In our submission, we adhered to the experiment design of the existing study, focusing on small-scale problems for controllability and resource efficiency, as our experiments were mainly conducted on P100/P40 GPUs. To further demonstrate the generalizability of ReinMax, we conducted additional experiments (detailed in the general rebuttal), including (1) comparisons with the REINFORCE variant that employs the state-of-the-art variance reduction technology, specifically RODEO (SHI 2022), and (2) applications to a real-world scenario, i.e., differentiable neural architecture search on CIFAR10, CIFAR100, and ImageNet-16-120. In the architecture search application, ReinMax consistently improved performance over the baseline in a plug-and-play manner by: - Replacing STGS with ReinMax as the gradient estimator, - Conducting a minor change to the temperature hyper-parameters (changing the minimal value of the temperature from 0.1 to 1.1), as guided by our findings (refer to Sections 5 and 6.2). **Reply to limitation 1:** While our main text focuses on multinomial distributions, by reparameterizing other categorical distributions into a multinomial distribution, ReinMax can be generally applied to all categorical random variables. It is worth mentioning that ReinMax is not applicable to continuous random variables. While extending ReinMax to these cases is possible, it may not be necessary, given that many commonly used distributions can be reparameterized. 
For example, the normal distribution $z \sim \mathcal{N}(\mu, \sigma)$ can be re-written as $z = \mu + \sigma \cdot \mathcal{N} (0, 1)$, making it trivial to compute $\partial z/\partial \mu$ and $\partial z/\partial \sigma$. We hope our responses have adequately addressed your concerns and further highlighted the innovations and potential impact of our study. If you have any further questions or need additional information, please do not hesitate to ask. --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: I have read the other reviews and the authors' rebuttals and have decided to maintain my score. A majority of the reviewers feel that this paper should be accepted; the main contention is over the degree of significance. I hope that the authors include the new experiments, as well as the reviewers' suggestions, in the revised paper. --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: Thank you for acknowledging our contribution. To highlight the significant potential of our proposed method, we provide additional experimental results (as elaborated in our general rebuttal). We will incorporate these discussions in our revised version.
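The reparameterization mentioned in the reply to limitation 1 can be verified with a small Monte Carlo sketch (our own illustration): writing $z = \mu + \sigma \cdot \epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$ makes pathwise gradients of, e.g., $E[z^2]$ trivial, since $\partial z/\partial \mu = 1$ and $\partial z/\partial \sigma = \epsilon$.

```python
import random

random.seed(0)
mu, sigma, n = 0.7, 0.3, 200_000
g_mu = g_sigma = 0.0
for _ in range(n):
    eps = random.gauss(0.0, 1.0)
    z = mu + sigma * eps          # reparameterized sample
    g_mu += 2.0 * z               # d z^2/d mu    = 2 z * (dz/dmu = 1)
    g_sigma += 2.0 * z * eps      # d z^2/d sigma = 2 z * (dz/dsigma = eps)
g_mu /= n
g_sigma /= n
print(g_mu, g_sigma)  # close to the analytic values 2*mu = 1.4 and 2*sigma = 0.6
```

Since $E[z^2] = \mu^2 + \sigma^2$, the analytic gradients are $2\mu$ and $2\sigma$, and the Monte Carlo averages recover them without any score-function term.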
Generative Evolutionary Strategy For Black-Box Optimization
Reject
Summary: The paper introduces a Generative Evolution Optimization (GEO) algorithm for black-box optimization. The GEO algorithm is claimed to combine the strengths of Evolution Strategy (ES) and a Generative Surrogate Network (GSN) to address the limitations of Bayesian optimization and other existing methods. Some benchmark functions are tested to verify the performance of GEO. Strengths: Originality: The paper introduces GEO, which combines the strengths of L-GSO and Evolutionary Generative Adversarial Networks (EGAN). I have not seen such a method before. Quality: The authors provide explanations of the GEO method, including its foundational concepts, operation steps, and algorithmic structure. Some benchmark functions have been tested. Significance: The paper addresses a significant challenge in the field of black-box optimization, e.g., the optimization of non-convex, high-dimensional, multi-objective, and stochastic target functions. Weaknesses: Some claims and concepts are not adequate, such as the O(N) complexity claim: without the goal of finding the global optimum, various methods that achieve O(N) complexity can be designed easily. Some related works are not cited adequately, like Xavier and He initialization. The experiments seem to be limited to specific test functions; performance on so few benchmark functions is not convincing. The paper discusses the potential application of GEO in other areas of machine learning, such as reinforcement learning. However, it does not provide any empirical evidence or case studies to support these claims. Including real-world applications or case studies could strengthen the paper's significance and practical relevance. The paper does not clearly outline the limitations of the GEO method, which could be beneficial for future research and application of the method.
The paper mentions the tendency of GEO to collapse towards one side while optimizing certain functions, but it does not delve into why this happens or how it could be mitigated. A more in-depth analysis of this issue could improve the paper's quality. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The paper mentions the tendency of GEO to collapse towards one side while optimizing certain functions. Could you provide more insight into why this happens? The paper mentions that GEO is more effective in high dimensions than in low dimensions. Could you provide more insight into why this is the case? The paper discusses the potential application of GEO in other areas of machine learning, such as reinforcement learning. Could you provide any empirical evidence or case studies to support these claims? What are the experimental settings? Why were those benchmark functions chosen? What are the measurements of the performance? What are the parameter settings for the algorithms? How much extra computational cost is introduced in the optimization procedure? These are all unclear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Societal impact of the work is not discussed in this paper. Furthermore, limitations of the proposed technique are not discussed clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and the many questions. I will answer each of your questions in turn. Q1. The paper mentions the tendency of GEO to collapse towards one side while optimizing certain functions. Could you provide more insight into why this happens? A1. ZDT2, where collapsing occurs most strongly, has a concave Pareto front, unlike ZDT1 (convex) and ZDT3 (partially convex, partially concave). We hypothesize that when the Pareto front is concave, the probability of edge states being sampled in the ES process increases, leading to gradual collapse toward one side. We acknowledge that there can be simple solutions, such as setting reference axes and referencing them during the ES process to rebalance the weights. However, this would require additional hyperparameter tuning and would be considered a separate task from the original ES algorithm. If we were to propose a GEO that includes reference axes, it would be considered a separate algorithm from the original GEO (e.g., in classical ES, NSGA and R-NSGA are considered separate algorithms). This explanation was included in an earlier version but was removed to meet the paper's length limit. We will add it back to the Supplement. Q2. The paper mentions that GEO is more effective in high dimensions rather than low dimensions. Could you provide more insight into why this is the case? A2. This is an important question. The reduced performance of GEO in lower dimensions is due to GEO being based on neural networks. Neural networks show very high generalization performance when learning from a large amount of data, but when there is less data, performance decreases sharply due to overfitting. This tendency is particularly strong with larger neural networks. Optimization in lower dimensions can be achieved with fewer data points (e.g., optimization in 2 dimensions might be done sufficiently with 100 data points), so algorithms based on neural networks may show weakness in these situations. Also, we used a specific structure, a self-attention Transformer, in GEO for this experiment (we explained the reason in the Supplement). Transformers are known to be more suitable for large datasets but tend to perform poorly with small data, which further worsened GEO's performance in lower dimensions. Structurally, the decline in neural networks' performance with less data is an intrinsic problem that cannot be avoided. Hence, I emphasized in the discussion section that GEO is suitable for higher dimensions when many target function evaluations are possible. The characteristics of neural networks are generally well-known, so I thought that an explanation might not be necessary. However, it seems that a more comprehensive explanation would indeed be beneficial, and adding a discussion on this topic to the Supplement appears to be a good approach. Q3. The paper discusses the potential application of GEO in other areas of machine learning, such as reinforcement learning. Could you provide any empirical evidence or case studies to support these claims? A3. This was mentioned with the paper "Evolution strategies as a scalable alternative to reinforcement learning," published by OpenAI in 2017, in mind (we cited this paper in the discussion section). In that paper, the authors demonstrate that reinforcement-learning-type neural network problems can be trained with ES instead of traditional RL training techniques. Despite being presented by OpenAI, this research did not gain significant attention, primarily due to its performance compared to conventional RL techniques like DQN and A3C. Hence, we mentioned our interest in applying GEO in a similar manner. Since GEO is a kind of ES specialized for high dimensions, it might be worthwhile to attack this kind of problem with GEO.
(Note that ES can be applied effectively to neural networks, since a neural network itself behaves much like a special kind of convex function.) Q4. What are the experimental settings? Why were those benchmark functions chosen? What are the measurements of performance? What are the parameter settings for the algorithms? How much extra computational cost is introduced in the optimization procedure? These are all unclear. A4. All experimental settings are explained in the Supplement; please see the details there. When we wrote the initial version of this paper, we included all the experimental settings in the 9-page main text, which confused many readers. We therefore allocated most of the main text to explaining GEO's working principle, and moved the experimental settings and important additional experiments to the Supplement. There are too many details in the Supplement to repeat everything in this rebuttal, so we kindly ask that you refer to it. The benchmark functions we used are those traditionally used in classical optimization research. A few studies (mainly neural-network-based ones) ignore traditional test functions entirely in their performance measurement, making comparison with previously developed algorithms difficult. It is important to measure performance on well-established and validated test functions, such as the ZDT functions. Performance is measured by how close we get to the global optimum relative to the number of target-function evaluations. This is predicated on one of the primary assumptions in black-box optimization, that target-function evaluation is the most expensive resource; nearly all studies in this field follow this assumption. On extra computational cost: see the Supplement for the cost of non-dominated sorting. We kindly ask for your understanding that we put all experimental setting information in the Supplement.
Due to the 9-page limit, including all experimental details would have crowded out the explanation of GEO's main algorithm, leading to reader confusion. Once again, thank you for your earnest review --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Thank you for the responses. I have read the rebuttal. Though I get the idea of the answers, I still feel the statements are somewhat intuitive and thus vague. This issue also exists in the manuscript, as I mentioned in the "weakness". I believe a more rigorous description would help better clarify the contribution and the technical merits. --- Reply to Comment 1.1.1: Title: Thank you for your insights. Comment: Thank you for your insights. I understand your concerns about the intuitive nature and perceived vagueness of the manuscript. I will make an effort to provide a more rigorous description of the concepts and technical details in the subsequent revision. If space is insufficient in the main text, I will provide as much information as possible in the supplementary materials.
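As background for the collapse discussion in A1: the convex/concave front shapes follow directly from the standard ZDT definitions. Here is a minimal numpy sketch of the ZDT1 and ZDT2 objectives (the textbook formulas, not the authors' code):

```python
import numpy as np

def zdt1(x):
    # ZDT1: convex Pareto front; x in [0, 1]^n
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

def zdt2(x):
    # ZDT2: concave Pareto front; x in [0, 1]^n
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2

# On the Pareto front x[1:] = 0, so g = 1:
#   ZDT1: f2 = 1 - sqrt(f1)  (convex trade-off curve)
#   ZDT2: f2 = 1 - f1**2     (concave trade-off curve)
```

The concavity of ZDT2's front is the geometric condition the answer associates with edge states being oversampled.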
Summary: This paper investigates a new integrated optimization method, called Generative Evolutionary Optimization (GEO), targeting black-box optimization in high-dimensional spaces. Unlike the popular black-box optimization method, Bayesian optimization, GEO exhibits linear time complexity. Intrinsically, GEO is a black-box optimization method that combines an evolutionary strategy (ES) with a generative surrogate neural network (GSN) model, and the two basic components function synergistically: the evolutionary strategy stabilizes the surrogate-network training for the GSN, while the GSN improves the mutation efficiency (sample efficiency) of the ES. Since fitness results are combined and ranked using non-dominated sorting in GEO, it can be applied to multi-objective scenarios. Besides, an age-based evolution strategy is incorporated into the non-dominated sorting step when the target function is stochastic. Finally, the experimental findings reveal that GEO can accomplish the stated objectives: optimizing non-convex, high-dimensional, multi-objective, and stochastic target functions while maintaining O(N) complexity. Strengths: 1. This paper is well-written and easy to follow, and the following parts are highlights: technique explanation, limitation analysis, high-level summary. 2. The technical design (GSN, ES, training stability) is reasonable, and the experimental evaluation is clear. 3. The key design in the cooperative framework is novel, integrating the strengths of both GSN and ES while mitigating their weaknesses. Weaknesses: 1. The specific parameter settings are not clear. 2. The reviewer suggests that in the discussion chapter, related multi-objective high-dimensional solutions from the Bayesian optimization community could be analyzed in terms of time complexity or efficiency, if possible. 3. The test functions in the experimental evaluation are limited, and this hampers the evaluation confidence.
As mentioned by the authors, more test functions from different domains (perhaps a fair benchmark) should be included to evaluate the performance of GEO. In addition to Ackley, Rosenbrock, and Styblinski-Tang, there are many objective-function families, including CONSTR, SRN, and so on. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How many initial points are sampled in the Latin hypercube stage? 2. What are the settings for the parameters in GEO, e.g., the number of generators? These greatly affect reproducibility. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
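The non-dominated sorting that the summary above describes (ranking the combined fitness results into fronts) can be sketched minimally. This is the naive front-extraction baseline, not GEO's actual implementation:

```python
import numpy as np

def non_dominated_fronts(F):
    """Naive non-dominated sorting for minimization.

    F: (n_points, n_objectives) array-like. Returns a list of
    fronts, each a list of row indices; front 0 is the Pareto set.
    This is the simple O(M * N^2)-per-front approach, shown only
    to illustrate the ranking step.
    """
    F = np.asarray(F, dtype=float)
    remaining = list(range(len(F)))
    fronts = []
    while remaining:
        front = []
        for i in remaining:
            # i survives if no other remaining point j dominates it,
            # i.e. j is <= in every objective and < in at least one
            dominated = any(
                np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                for j in remaining if j != i
            )
            if not dominated:
                front.append(i)
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For example, the points (0, 1), (1, 0), (0.5, 0.5) are mutually non-dominated and form front 0, while (1, 1) and (2, 2) fall into later fronts.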
Rebuttal 1: Rebuttal: Response to the Review: Thank you for your thorough review. First, here are the answers to the questions raised: Q1. How many initial points are sampled in the Latin hypercube stage? A1. The number is equal to the size of the generator pool used in the evolutionary strategy (ES). For example, if the pool size is 1000, the LHC initialization will also begin with 1000 points. We matched the initial point count for the two types of initialization, random generator-network initialization and Latin Hypercube Sampling (LHC). The maximum number is determined by the pool size when initializing the generator networks, hence the LHC initialization point count was matched to the pool size. Q2. What are the settings for the parameters in GEO, e.g., the number of generators? These greatly affect reproducibility. A2. The number of generators corresponds to the pool size, which varies but was 1000 for the 2-objective problem in our experiments. Effects of the pool size can be found in the Supplement. Regarding other major parameters: details of the neural network structure are also included in the Supplement. Although any kind of neural network can be used in GEO, we primarily adopted a Transformer-based structure for these experiments. Since it is built from self-attention blocks, it is structurally similar to GPT (Generative Pretrained Transformer). Without the trunk-branch trick, the generator consists of six self-attention transformer blocks (designated as ASB in the Supplement figures). Though GPT and other self-attention models commonly use GELU activations, we employed HSwish in these experiments. This was not for a particular reason but due to my misconception in 2021 that HSwish was superior to GELU; since the effect of the activation is minimal, it was retained as is. In addition, I'd like to highlight that we have already found that GEO works with CNN-based networks.
We refrained from including CNN-based experiments in the paper to avoid unnecessarily confusing readers. I want to emphasize that the network type does not matter much as long as the neural network is sufficiently large. There are several other important parameters: - Boundary condition - Pool size - Trunk-branch trick Among these, the way the boundary condition is set greatly influences performance. These factors are described in the Supplement, and I kindly request that you refer to it for more information. ————————- My response to the weaknesses mentioned: W1. The specific parameter settings are not clear. A1. I believe the Supplement provides a sufficient explanation of the specific parameters. There are additional experiments concerning pool size (= number of generators), boundary conditions, etc., and since the network structure is fundamentally similar to GPT, it can be easily reproduced. The trunk-branch trick is also described in the Supplement. W2. The reviewer suggests that in the discussion chapter, related multi-objective high-dimensional solutions from the Bayesian optimization community could be analyzed in terms of time complexity or efficiency, if possible. A2. We often received questions comparing GEO with Bayesian optimization when presenting GEO, and we are aware of such demands. Bayesian optimization is particularly effective in lower dimensions, whereas GEO is inefficient in low dimensions and becomes efficient only in high-dimensional settings. Comparing two algorithms with such different characteristics is challenging: if one performs better in an intermediate regime (e.g., dimension = 20), readers may conclude that it is superior overall, which is not the case. Bayesian optimization and GEO (or conventional ES) are designed for different purposes, and as a result grey areas inevitably arise; comparisons within these grey areas can be quite ambiguous.
Therefore, to prevent misunderstandings, we avoided direct comparisons. While I partly agree with your point, I think it would be better to explore that idea in a separate study rather than include it in this paper. W3. The test functions in the experimental evaluation are limited, and this hampers the evaluation confidence. As mentioned by the authors, more test functions from different domains (perhaps a fair benchmark) should be included to evaluate the performance of GEO. In addition to Ackley, Rosenbrock, and Styblinski-Tang, there are many objective-function families, including CONSTR, SRN, and so on. A3. I agree with this point. As I mentioned in the "global rebuttal," many more experiments were conducted in this research than were included in the paper. However, unnecessary data was removed, considering issues such as figure readability. I am planning to include the extra experiments in the Supplement. Even if new experiments are conducted, they are unlikely to significantly impact the main point of this study. Once again, thank you for your review. --- Rebuttal Comment 1.1: Comment: Thanks for your response; we will carefully consider it in the following phase. --- Reply to Comment 1.1.1: Title: Thank you for your review Comment: Thank you for your review
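The Latin hypercube initialization described in A1 (one point per stratum, with the point count matched to the ES pool size) can be sketched in a few lines of numpy. This is a generic LHS routine under that description, not the authors' code, and the pool size and dimension below are only illustrative:

```python
import numpy as np

def latin_hypercube(n_points, dim, rng=None):
    """Latin hypercube sample of n_points in [0, 1]^dim.

    Each dimension is split into n_points equal-width bins and
    exactly one point is placed, at a random offset, in each bin;
    the bin order is shuffled independently per dimension.
    """
    rng = np.random.default_rng(rng)
    offsets = rng.random((n_points, dim))
    bins = np.array([rng.permutation(n_points) for _ in range(dim)]).T
    return (bins + offsets) / n_points

# Pool size 1000 (as in A1) on an illustrative high-dimensional problem:
init = latin_hypercube(1000, 8192, rng=0)
```

The stratification guarantee is easy to check: in every dimension, flooring the samples onto the n_points bins yields each bin exactly once.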
Summary: In this paper, a black-box optimization approach is proposed that combines an evolutionary strategy (ES) with a generative surrogate neural network (GSN) model. This integrated model is designed to function in a complementary manner, where ES addresses the instability inherent in the surrogate-network learning associated with GSN models, and GSN improves the mutation efficiency of ES. Overall, the authors clearly express the point of innovation and the proposed algorithm. Strengths: The organization of this paper and the technical details of the proposed method are clear and easy to follow. Weaknesses: Theoretical derivations and proofs are lacking, so the validity of the method is difficult to support. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In the introduction, there is no summary of the contribution, and I suggest that the contribution of this work be further emphasized. 2. This paper lacks experimental results and analysis for some arguments. For example, please explain why GEO is less efficient in lower dimensions. 3. The writing of the paper could be improved for better description and clarification. 4. The selected algorithms for comparison are not new and are not state-of-the-art; please add recently proposed multi-objective evolutionary approaches for comparison. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The relevant limitations are described, but not in depth and not specifically enough. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. I will respond to each question one by one. Q1. In the introduction, there is no summary of the contribution, and I suggest that the contribution of this work be further emphasized. A1. This seems to be a writing problem similar to your 3rd question. In short, the core idea is that we inherited ideas from LGSO and EGAN to make a GSN-based optimization algorithm that functions much better in higher dimensions. We tended to phrase our claims carefully to present the performance results as smoothly as possible and to avoid overstatement, which may have made the summary seem insufficient. Adding a brief summary to the introduction, as you suggest, would be a good idea. Q2. This paper lacks experimental results and analysis for some arguments. For example, please explain why GEO is less efficient in lower dimensions. A2. This is an important question. GEO's reduced performance in lower dimensions comes from its reliance on neural networks. Neural networks generalize very well when trained on large amounts of data, but with little data their performance drops sharply due to overfitting, and this tendency is stronger for larger networks. Optimization in lower dimensions can be achieved with few data points (e.g., optimization in 2 dimensions might be sufficiently done with 100 data points), so neural-network-based algorithms may show weakness in these situations. Also, for this experiment we used a specific structure in GEO, a self-attention Transformer (we explain the reason in the Supplement). Transformers are known to be well suited to large data but tend to perform poorly with small data, which further hurt GEO's performance in lower dimensions. Structurally, the decline in neural networks' performance with less data is an intrinsic problem that cannot be avoided.
Hence, I emphasized in the discussion section that GEO is suitable for higher dimensions, when many target-function evaluations are possible. The characteristics of neural networks are generally well known, so I thought an explanation might not be necessary. However, based on your suggestion, it seems that a more comprehensive explanation would indeed be beneficial, and adding a discussion of this topic to the supplement appears to be a good approach. Thank you for your valuable insight. Q3. The writing of the paper could be improved for better description and clarification. A3. This is similar to Q1. A short summary of our research goals in the introduction would have made things clearer. Although the current version has been proofread and improved for reader comprehension, you have pointed out that there is still room to improve readability; I will review this further. Q4. The selected algorithms for comparison are not new and are not state-of-the-art; please add recently proposed multi-objective evolutionary approaches for comparison. A4.1. In fact, we conducted far more experiments than what is included in the main text of the paper. We ran the comparison experiments to the best of our ability during the initial stages of the research, using available Python packages. As is well known, classical ES fails to optimize in higher dimensions, a fact that we confirmed (all the classical ES variants we tried failed to optimize the target at 8192 dimensions, so the differences among them were practically nonexistent). Therefore, we selected the most representative and well-known ES algorithms, intending to show concise and meaningful graphs. This approach may have inadvertently come across as a lack of data. A4.2.
Additional answers about SOTA (state-of-the-art): As I mentioned in the Discussion section of the main text and in the "global rebuttal," the SOTA (or "best") black-box optimization algorithm is not clear-cut. Even algorithms that claim superior performance may actually perform worse depending on hyperparameter settings and target functions. This occurs frequently, and particularly disappointing results often arise in real-world problems. Such considerations create a dilemma in choosing between the latest algorithms and those that are well known and widely used. Another reason: even in LGSO, the most crucial paper for comparison, the testing was not conducted densely, so we thought that a similar amount of data would be sufficient. Additionally, the main objective of this paper was to overcome the limitations of GSN algorithms, and we believe we have demonstrated that goal. Despite this, more data is always beneficial, so we plan to upload additional materials to the Supplement. Thank you for your insightful comment. ------------- Once again, thank you for your review. --- Rebuttal Comment 1.1: Comment: Dear authors: I extend my gratitude for your thorough responses and the inclusion of additional experiments. Overall, I have no more in-depth questions about this paper. Given the improvements in the revised version concerning the method description and empirical research, I am inclined to change my evaluation from "Borderline reject" to "Borderline accept". --- Reply to Comment 1.1.1: Title: Thank you for your review Comment: Thank you for your review
Summary: The paper introduces a new method called Generative Evolutionary Optimization (GEO) that aims to address the challenges of black-box optimization in high-dimensional problems. The authors highlight that existing algorithms, such as Evolution Strategies (ES) and Bayesian optimization, have limitations when it comes to optimizing high-dimensional, non-convex problems while maintaining linear time complexity. They propose GEO as a combination of ES and Generative Surrogate Neural networks (GSNs) to achieve better performance in terms of stability, mutation efficiency, and optimization in high dimensions. The paper outlines the goals of GEO, discusses related works (L-GSO and EGAN), presents the methodology, and provides experimental results showing GEO's superiority over traditional ES and GSN in higher dimensions. Strengths: - The paper addresses an important problem in black-box optimization: optimizing high-dimensional, non-convex problems while maintaining linear time complexity - The introduction provides a clear overview of the challenges faced by existing algorithms and the potential benefits of using GSN-based approaches like GEO - The goals of GEO are well-defined, and the paper sets the stage for discussing the methodology and experimental results - Combining EA with GSN is novel and interesting Weaknesses: - Some simple ES algorithms, such as OpenAI-ES [1], can optimize about 100k parameters in their paper; it is used to optimize the weight of the policy network. Although the idea of this paper seems novel and interesting, I am not sure that the 10k params can be called high-dimensional. - The main results are shown in Figure 3, but it is unclear which function is used for 3-a), and the figure is not explained in the manuscript. [1] Salimans, Tim, et al. "Evolution strategies as a scalable alternative to reinforcement learning." arXiv preprint arXiv:1703.03864 (2017). 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - I think the authors should provide more results compared with the previous methods similar to Figure 3-a), with various functions and algorithms. If there is no space for plotting all results, I think the authors should summarize them in a table. In the evosax [2], an open-source ES library, there is an example code for testing many ES algorithms in the ES benchmark function like Rosenbrock. [2] https://github.com/RobertTLange/evosax/tree/main Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - In the single-objective function, I think more algorithms should be compared to the proposed algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. I will address the weaknesses and question you pointed out one by one. ---------- W1. Some simple ES algorithms, such as OpenAI-ES [1], can optimize about 100k parameters in their paper; it is used to optimize the weights of the policy network. Although the idea of this paper seems novel and interesting, I am not sure that 10k parameters can be called high-dimensional. [1] Salimans, Tim, et al. "Evolution strategies as a scalable alternative to reinforcement learning." arXiv preprint arXiv:1703.03864 (2017). A1. We are aware of OpenAI-ES [1] and have cited it in the discussion section of our main text. OpenAI-ES targets the optimization of neural networks. However, neural networks differ significantly in nature from black-box optimization test functions; their characteristics are closer to those of convex functions. Though neural networks indeed have many parameters, almost all points where the gradient is 0 are saddle points. This is contrary to the design of non-convex test functions in black-box optimization, which typically have many local optima. Typical training methods like gradient descent with backpropagation are more concerned with vanishing/exploding gradients than with local-optimum traps. If neural networks behaved like the Ackley function, with numerous local optima, expecting them to be optimized by gradient descent would be unrealistic. In other words, neural networks are generally closer in character to convex functions, so the context of "Evolution Strategies as a Scalable Alternative to Reinforcement Learning" is entirely different from typical black-box optimization. In most non-convex black-box optimization research, around 100 dimensions is considered high, and our experiments' extension to nearly 10,000 dimensions justifies calling them high-dimensional.
The distinction between non-convex and convex (or semi-convex) is critical; I have previously seen researchers misleadingly claim state-of-the-art performance without making this distinction. W2. The main results are shown in Figure 3, but it is unclear which function is used for 3-a), and the figure is not explained in the manuscript. A2. The function used is the Styblinski-Tang test function (with the ground state shifted to 0). This seems to have been overlooked during the paper's revision; thank you for pointing it out. Q. I think the authors should provide more results compared with the previous methods, similar to Figure 3-a), with various functions and algorithms. If there is no space for plotting all results, I think the authors should summarize them in a table. In evosax [2], an open-source ES library, there is example code for testing many ES algorithms on ES benchmark functions like Rosenbrock. A. Additional experiments that were not presented in the main text are already included in the Supplement. However, I acknowledge that this may still be insufficient, and I will answer accordingly. We conducted comparative experiments using a package very similar to evosax, called pymoo, as well as a non-open package. In fact, we conducted far more experiments than what is included in the main text of the paper. As is well known, classical ES fails to optimize in higher dimensions, a fact that we confirmed (all the classical ES variants we tried failed to optimize the target at 8192 dimensions, so the differences among them were practically nonexistent). Therefore, we selected the most representative and well-known ES algorithms, intending to show concise and meaningful graphs. This approach may have inadvertently come across as a lack of data. Another reason: even in LGSO, the most crucial paper for comparison, the testing was not conducted densely.
Therefore, we thought that a similar amount of data would be sufficient. Additionally, the main objective of this paper was to overcome the limitations of GSN algorithms, and we believe we have demonstrated that goal. Still, the more experimental data we can show, the better. Choosing where to put the extra data, in the main text or the Supplement, needs some thought: adding it to the main text would mean cutting some of the discussion or other data, so I am considering uploading the extra data to the Supplement. --------- Note 1. Although the Rosenbrock function is commonly used as a test function, it has a unique form: it is designed to test an algorithm's ability to search a sharp valley region, rather than its capacity to escape many local optima. Therefore, I believe that functions such as Ackley or Styblinski-Tang serve as more appropriate tests than the Rosenbrock function. ------------ Note 2. Following your suggestion, I recently conducted a straightforward test using the evosax package (6th Aug., with the Rastrigin function). As anticipated, all of the algorithms ["SimpleES", "SimpleGA", "PSO", "DE", "Sep_CMA_ES", "Full_iAMaLGaM", "Indep_iAMaLGaM", "MA_ES", "LM_MA_ES", "RmES", "GLD", "SimAnneal", "GESMR_GA", "SAMR_GA"] exhibited a sharp decrease in performance as the dimensions increased. Particularly in a 1000-dimensional space, newly developed algorithms like GESMR_GA showed poorer results than classical and well-validated algorithms such as Simple ES and PSO. (While changing the hyperparameter settings might yield different outcomes, identifying the optimal parameters through repeated experiments cannot be regarded as black-box optimization.) This phenomenon is very common and is something I've previously encountered in my experiments.
Due to these issues with generalization, we found that algorithms that are both newly developed and less well-known may actually be at a disadvantage in the validation process. ----------- Once again, thank you for your diligent review.
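For context on the OpenAI-ES discussion in A1: the Salimans et al. (2017) update estimates a gradient from perturbed fitness evaluations. The sketch below runs that estimator on a convex quadratic as a stand-in for the "semi-convex" neural-network loss surfaces the answer describes; all hyperparameters and the loss function are illustrative, not from either paper's experiments:

```python
import numpy as np

def openai_es_step(theta, loss, lr=0.05, sigma=0.1, pop=50, rng=None):
    """One update of the antithetic ES gradient estimator
    (Salimans et al., 2017):

        grad ~ (1 / (pop * sigma)) * sum_i f(theta + sigma*eps_i) * eps_i

    with standardized fitness for scale invariance. Minimizes `loss`.
    """
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal((pop // 2, theta.size))
    eps = np.concatenate([eps, -eps])              # antithetic pairs
    fitness = np.array([loss(theta + sigma * e) for e in eps])
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
    grad = eps.T @ fitness / (len(eps) * sigma)
    return theta - lr * grad                       # descend on the loss

# A convex quadratic stands in for a neural-network-like loss surface:
quadratic = lambda x: float(np.sum(x ** 2))
theta = np.full(100, 3.0)
for step in range(200):
    theta = openai_es_step(theta, quadratic, rng=step)
```

On a landscape like this, with no local-optimum traps, ES behaves as a noisy gradient method; on an Ackley-like landscape the same estimator offers no such guarantee, which is the distinction A1 draws.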
Rebuttal 1: Rebuttal: Dear Reviewers, I extend my heartfelt thanks for your diligent reviews. I have responded to each of your questions individually. However, given the character limit in the rebuttal section, I was constrained to provide only brief answers to each question. Should you have any additional questions, I am ready and willing to offer more detailed responses. ---------- About Experimental Details & the Supplement document: Many questions were raised regarding the details of the experiments. Because the NeurIPS paper regulations limit the content to 9 pages, I had to place the experimental details in the Supplement. In the Supplement, you will find information on the GEO network structure, additional experiments concerning key parameters, efforts to handle Transformers efficiently, more experiments on conventional test functions, discussions of non-dominated sorting, and so on. I kindly refer you to it for further details. In an early version of this paper, I attempted to include all the details within the main text. However, due to the 9-page limit, the explanation of GEO's main concept was reduced, leading to feedback that the paper was hard to understand. Therefore, in this revised version, I have focused on including only the most critical parts in the main text, while boldly omitting or relocating less essential sections to the Supplement. I kindly request your understanding and leniency regarding this matter. --------------- About Additional Experiments: From the initial stages of our research, we conducted a far greater number of experiments than what is included in this paper. However, for the sake of readability, we retained only the experiments we considered most important and excluded the rest. It seems that this attempt has unfortunately been perceived as a lack of data. The specific situation is as follows: we carried out as many experiments as possible on ES using Python packages.
It is a well-known fact that classical ES does not perform well in high-dimensional optimization, and our experimental results were consistent with that. The most crucial experiment in this paper is the performance comparison at 8192 dimensions, where all the classical ES variants we tried failed to optimize. Since classical ES cannot optimize in such high dimensions, we felt that the comparison would be meaningless and that too many data points would impair the readability of the graph. Therefore, we kept only the most renowned and widely used algorithms. (In addition to conventional test functions, we extensively ran simulations in practical environments such as electronic-circuit simulators. However, these were not included in the paper, to prevent potential reader confusion and due to the confidentiality terms associated with the provided simulators.) Meanwhile, our focus was more on the GSN perspective; we therefore believed that demonstrating that GEO surpasses LGSO was a sufficient explanation. Also, I would like to note that the superior performance of LGSO compared to traditional ES has already been demonstrated in LGSO's own research. While we felt we had good reason to exclude the unnecessary data, it raised many questions in this review cycle. Therefore, we plan to reincorporate the additional experiments, and due to space limitations in the main text, we believe adding them to the Supplement is more appropriate. ---------------- About SOTA: When referring to the "state-of-the-art" (SOTA) black-box optimization algorithm, we must be extra careful, especially compared to other machine learning fields. While the term "SOTA" often implies "the best", at least in this field it cannot be understood so simply, because optimization algorithms vary widely in their goals and in the conditions under which they work well.
Hyperparameter settings, test functions, and the algorithm itself are strongly correlated, leading to situations where an algorithm that seems to be the best may perform poorly in specific scenarios, and vice versa. And such a situation can occur quite frequently. From this perspective, our experiments were not intended to demonstrate that GEO outperforms other optimization methods in every scenario. Rather, we aimed to highlight that, unlike traditional methods which suffer from the "Curse of Dimensionality", GEO does not lose as much effectiveness as the number of dimensions increases. Also, this does not imply that GEO is always the ideal choice or a perfect solution. Every optimizer has its strengths and weaknesses depending on the specific situation, and there may be unpredictable circumstances where GEO does not perform as well. Therefore, while our findings, including those from experiments we haven't disclosed, consistently show GEO outperforming other algorithms, we're careful not to quickly say that GEO is the best solution. We hope our intention has been conveyed without any misunderstanding.

 ------------ Note. Following a suggestion from one of the reviewers, I recently conducted a straightforward test using the evosax package (6th Aug., with the Rastrigin function). As anticipated, all of the algorithms ["SimpleES", "SimpleGA", "PSO", "DE", "Sep_CMA_ES", "Full_iAMaLGaM", "Indep_iAMaLGaM", "MA_ES", "LM_MA_ES", "RmES", "GLD", "SimAnneal", "GESMR_GA", "SAMR_GA"] exhibited a sharp decrease in performance as the dimensionality increased. In particular, in a 1000-dimensional space, newly developed algorithms like GESMR_GA showed poorer results than classical and well-validated algorithms such as SimpleES and PSO. This phenomenon is very common and is something I have previously encountered in my own experiments. Because of these generalization issues, we found that algorithms that are both newly developed and less well known may actually be at a disadvantage in the validation process. ------------- Once again, thank you to the reviewers. I look forward to your further feedback.
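The dimensionality effect described in this note is easy to reproduce even without evosax. The sketch below is our own minimal (1+1)-ES toy implementation in numpy (not one of the evosax strategies used in the actual test), run on Rastrigin with a fixed evaluation budget as the dimension grows:

```python
import numpy as np

def rastrigin(x):
    """Rastrigin test function: global minimum 0 at x = 0."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def one_plus_one_es(dim, budget=2000, sigma=0.5, seed=0):
    """Minimal (1+1)-ES with a rough 1/5th-success-rule step-size adaptation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, dim)
    fx = rastrigin(x)
    for _ in range(budget):
        y = x + sigma * rng.standard_normal(dim)
        fy = rastrigin(y)
        if fy < fx:          # accept the improving offspring
            x, fx = y, fy
            sigma *= 1.22    # expand the step size on success
        else:
            sigma *= 0.95    # shrink it on failure
    return fx

# Same evaluation budget, increasing dimension: best fitness degrades sharply.
for d in (2, 100, 1000):
    print(d, one_plus_one_es(d))
```

Under the same 2000-evaluation budget, the best fitness found grows by orders of magnitude between 2 and 1000 dimensions, which is the qualitative pattern reported above.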
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Towards the Difficulty for a Deep Neural Network to Learn Concepts of Different Complexities
Accept (poster)
Summary: Theoretical work using prior work defining "interactive concepts" to quantify difficulty of concepts and show why DNNs prefer learning simpler concepts by approximating the concept learning process for DNNs by linear regression. Strengths: 1) Useful theoretical contribution that clearly uses interactive concept idea to formally define "easy" concepts and show why DNNs prefer to learn these shortcuts 2) Well-written and easy to follow 3) Empirical proof showing that higher order interactive concepts are more vulnerable to noise in the data Weaknesses: 1) More discussion can be provided on why this particular experiment designed by the authors is a good way to verify the claims made. 2) More clear explanation of what is the contribution of prior work and that of this work. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your great efforts. We are glad that all reviewers have given us positive comments. We will try our best to answer all your questions. Please let us know if you still have further concerns, or if you are not satisfied with the current responses, so that we can further update the response ASAP. --- Q1: More discussion can be provided on why this particular experiment designed by the authors is a good way to verify the claims made." A: Thanks. We have followed your suggestions to discuss more about the design of experiments w.r.t. the verification of our claims. Specifically, we have three main claims, *i.e.,* high-order concepts are usually less stably extracted under data noises, are learned more slowly, and are more sensitive to adversarial attacks. All these three claims have been verified in experiments, which proved the difficulty of learning high-order concepts. **For Claim 1**: *high-order interactive concepts are less stably extracted under data variations than low-order interactive concepts.* The experiment in Line 232 was designed to verify this claim in a direct manner. We directly quantified the instability metric $\kappa(S)$ of concepts of different orders. We found that high-order concepts were less stable under data variations than low-order concepts, which verified Claim 1. **For Claim 2**: *Fast learning of low-order concepts.* The experiment in Line 253 was designed to verify this claim in a direct manner. We extracted interactive concepts from the finally-learned DNN $v_{\text{final}}(x)$, and extracted interactive concepts from the DNN $v_{t}(x)$ trained after $t$ epochs. A high Jaccard similarity between $s$-order concepts extracted from $v_{t}(x)$ and $s$-order concept extracted from $v_{\text{final}}(x)$ would indicate the fast learning of $s$-order concepts, because many concepts had been already learned after $t$ epochs. Fig. 
5 in the main paper shows that low-order concepts usually had higher Jaccard similarity during the learning process, which verified Claim 2. **For Claim 3:** *High-order interactive concepts are more sensitive to adversarial attacks.* The experiment in Line 285 was designed to verify this claim in a direct manner. We directly quantified the sensitivity metric $\alpha(S)$ of concepts of different orders under adversarial attacks. We found that high-order concepts were more sensitive to adversarial attacks than low-order concepts, which verified Claim 3. --- Q2: "More clear explanation of what is the contribution of prior work and that of this work." A: We have followed your suggestions to clarify the distinctive contribution of this work over previous studies. Previous papers [2, 16, 20] mainly study the phenomenon that *DNNs easily learn simple concepts* in an experimental manner. Thus, the easy learning of simple concepts is still a common intuition without a clear theoretical formulation or analytic explanation, because how to define a concept encoded by a DNN is still an open problem. In comparison, thanks to the recent progress in [26], we follow its mathematical definition of interactive concepts. This enables us to provide an explicit theoretical connection between the complexity of concepts and the difficulty of learning concepts. Specifically, we derive an approximate instability for interactive concepts of each specific order, which reveals the high instability of high-order concepts. Thus, our research provides an approximate yet analytic explanation of the difficulty of learning high-order concepts. --- Rebuttal Comment 1.1: Comment: Thank you for your response, I think these explanations should be included in the revision to further strengthen the paper. I stand by my recommendation to accept the paper.
Summary: This paper explores the problem of the learning difficulty of interactive concepts. The paper theoretically shows that a DNN is more likely to encode simple interactive concepts (with fewer variables in the interaction). Low-order interactive concepts are more stable to data noises and they exhibit consistent effects on the inference scores of different samples. Experiments on various DNN architectures using image and tabular datasets are done to validate the results. The paper also compares and relates their findings with existing theoretical and empirical works on trying to analyze and explain DNNs. Strengths: [S1] The paper presents a novel contribution by deriving an approximate analytical solution to the variance of interactive concepts' effects w.r.t. data noise and showing that it increases along the order of concepts in an exponential manner. [S2] The proof provides novel insights explaining DNNs. The findings are useful in explaining existing empirical works such as adversarial robustness results for different ordered concepts, why adversarial training is faster for certain concepts, etc. [S3] Experiments are conducted on four types of architectures and two datasets. The results are consistent with the theoretical findings. [S4] Paper is well-written and covers various existing works. Weaknesses: [W1] Authors claim that easy samples mainly contain low-order interactive concepts. Is there existing work showing that this claim is true or can authors show it? How is "easy" defined? Is it based on how "easy" (fast?) is it for a specific network to learn or is it a more universal concept? [W2] How was the variance value used in the experiments chosen? What is the impact if it is changed? What is the impact if a different type of noise is used? [W3] Could authors experiment on newer architectures such as transformers? What about textual data input? [W4] Can you provide a limitation/broader impacts section? E.g., Are there any assumptions? 
Does the definition of "simplicity" match real-world applicability? What can/cannot this new understanding do towards guiding the design of future networks, etc.? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not provide a limitations section. Perhaps the authors can explain the assumptions behind the proof and its impact on real-world applications. The paper does not provide any ethical statement, nor do I think one is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper. We will try our best to answer all your questions. Please let us know if you still have further concerns, or if you are not satisfied with the current responses, so that we can further update the response ASAP. --- Q1: "Authors claim that easy samples mainly contain low-order interactive concepts. Is there existing work showing that this claim is true or can authors show it? How is "easy" defined? ..." A: A good question. The claim in Line 313 that *easy samples mainly contain low-order interactive concepts* is actually supported by the heuristic findings in [4]. Cheng et al. [4] discovered that OOD samples, which were considered as difficult samples, usually contained much more high-order interactions than normal samples (simple samples). Besides, Mangalam and Prabhu [20] defined easy samples as training samples that could be correctly classified by shallow machine learning models, such as SVM. To this end, we conducted **new experiments**, in which for each $l$-th layer of an MLP, we learned a specific linear classifier $y=softmax(M * f_l)$ to use the $l$-th layer's feature $f_l$ for classification. Because high layers mainly encoded more complex features than low layers, in this experiment, we tested whether the classifier based on more complex features (in higher layers) also encoded more high-order concepts. Then, we quantified the interactions between input variables encoded by different classifiers in different layers. Fig. 1 in *the response pdf file* shows that higher layers usually encoded more complex interactions. This also partially supported the above claim. --- Q2: "How was the variance value used in the experiments chosen? What is the impact if it is changed? What is the impact if a different type of noise is used?" A: Thanks. We set the variance of noise $\sigma^2=0.02^2$ in all experiments in the main paper. 
We have followed your suggestions to conduct **two new experiments** to answer your questions. These two experiments were conducted on the AlexNet trained on the CIFAR-10 dataset. **Experiment 1**. We tested the instability of interactive concepts $\kappa(S)$ (defined in Line 234) by setting input noises with different variances $\sigma^2$. We added Gaussian perturbations $\epsilon \sim \mathcal{N}(0, {\sigma}^2I)$ to each training sample. Table 1 in *the response pdf file* verifies that high-order concepts were less stable than low-order concepts. This claim was not affected by the change of $\sigma^2$. **Experiment 2**. As you requested, in this experiment, we computed the instability $\kappa(S)$ (defined in Line 234) by applying two different types of noises, *i.e.,* Gaussian perturbations $\epsilon \sim \mathcal{N}(0, {0.02}^2I)$ and uniform perturbations $\epsilon \sim \mathcal{U}(-0.02, +0.02)$ on each training sample. Table 2 in *the response pdf file* verifies that the type of noises did not affect the claim that high-order concepts were less stable than low-order concepts. --- Q3: "Could authors experiment on newer architectures such as transformers? What about textual data input?" A: Thanks. We have followed your suggestions to **conduct new experiments** on the BERT (a classical transformer-based model) on textual data. Specifically, we used the pre-trained BERT model [c1] and fine-tuned it for the sentiment classification task on the SST-2 dataset. We added Gaussian perturbations $\epsilon \sim \mathcal{N}(0, {0.02}^2I)$ to the token embedding of each training sample. We computed the instability of the interactive concept $\kappa(S)$. Table 3 in *the response pdf file* verifies that the interaction effect of the high-order interactive concept was usually less stable than that of the low-order interactive concept on textual data. [c1] J. Devlin et al. “Bert: Pre-training of deep bidirectional transformers for language understanding,” in NAACL-HLT, 2018. 
--- Q4: "Can you provide a limitation/broader impacts section?..." A: Thanks a lot. We will follow your suggestions to add a new section to discuss the limitation of this paper. The limitation is that there are very few ways to define and examine what is a "concept." In this paper, we only use the following three properties to support the faithfulness of using sparse salient interactions as concepts encoded by the DNN. $\bullet$ [26] proved that a well-trained DNN would encode just a small number of salient interactions for inference under some common conditions. See Line 83. $\bullet$ [24] proved that such a small number of salient interactions extracted from a sample $x$ could well mimic the DNN's outputs on numerous masked samples {$x_T | T\subseteq N$}. See Theorem 1. $\bullet$ [15] discovered that salient interactions have considerable transferability and strong discrimination power. See Line 103. However, the above properties cannot guarantee a clear correspondence between an interactive concept and a concept in human cognition. Up to now, we cannot mathematically formulate what is a concept in cognition science. Thus, there is still a long way to go to unify the learning difficulty from the DNN's perspective with the cognitive difficulty for human beings.
Summary: In this paper, the authors proved several theoretical results that formalized the idea that simple interactive concepts achieve a smaller variance in their interaction effects in the face of Gaussian perturbations, and are thus more stable and easier for deep neural networks (DNNs) to learn. The authors also provided experimental verifications to verify their theoretical results, and pointed out the connections between their theoretical work and previous findings on DNN learning. Strengths: - The paper provides a rigorous theoretical framework for understanding concept learning of DNNs. - The paper formalizes the idea of conceptual complexity and establishes a formal link between conceptual complexity and learning difficulty of DNNs. - The paper sheds light on previous heuristic findings on DNN learning, and advances our understanding of DNNs. Weaknesses: - While the theoretical derivations of the results in the paper are impressive, I am not sure how the insights from a theoretical understanding of DNN learning could help us train better DNNs. In other words, I am not certain about the practical implications and contributions of this work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How can we leverage the results and theoretical insights from this work to design better algorithms for training DNNs, or help DNNs learn complex concepts? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Since I did not find a section on the limitations of their work, the authors are encouraged to include a discussion of the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper. We will try our best to answer all your questions. Please let us know if you still have further concerns, or if you are not satisfied with the current responses, so that we can further update the response ASAP. --- Q1: Ask how to use theoretical insights of this paper. "While the theoretical derivations of the results in the paper are impressive, I am not sure how the insights from a theoretical understanding of DNN learning could help us train better DNNs ... "How can we leverage the results and theoretical insights from this work to design better algorithms for training DNNs, or help DNNs learn complex concepts?"" A: A good question. Analyzing the representation flaw of DNNs has become an emerging direction in recent years, *e.g.,* shortcut learning [cite1,2] and simplicity bias of DNNs [cite3,4]. For decades, researchers have realized that it is easy for an AI model to encode simple concepts, and it is more difficult to encode complex concepts. However, there are two core challenges for future development in this direction. (1) How to define the concepts encoded by a DNN is still an open problem. There is no formal and universally accepted definition of concepts so far. (2) It is still a challenge to provide analytic connections between the complexity of concepts and the difficulty of learning concepts. Therefore, in this paper, we theoretically explain the trend of DNNs learning simple concepts. Specifically, we prove that low-order interactive concepts in the data are much more stable than high-order interactive concepts, which makes low-order interactive concepts more likely to be encoded. The above theoretical analysis provides a specific form of the complexity of concepts that boosts the learning difficulty. 
Thus, in the future, we may design new neural networks whose architectures strengthen the capacity of encoding complex (high-order) concepts and meanwhile boost the stability of these concepts (to improve the quality of concepts). However, there is still a large gap between breakthroughs in theory and achievements in practice. [cite1] Geirhos R, Jacobsen J H, Michaelis C, et al. Shortcut learning in deep neural networks[J]. Nature Machine Intelligence, 2020, 2(11): 665-673. [cite2] Scimeca L, Oh S J, Chun S, et al. Which shortcut cues will dnns choose? a study from the parameter-space perspective[J]. arXiv preprint arXiv:2110.03095, 2021. [cite3] Shah H, Tamuly K, Raghunathan A, et al. The pitfalls of simplicity bias in neural networks[J]. Advances in Neural Information Processing Systems, 2020, 33: 9573-9585. [cite4] Huh M, Mobahi H, Zhang R, et al. The low-rank simplicity bias in deep networks[J]. arXiv:2103.10427, 2021. --- Q2: "Since I did not find a section on the limitations of their work, the authors are encouraged to include a discussion of the limitations." A: Thanks a lot. We will follow your suggestions to add a new section to discuss the limitation of this paper. The limitation is that there are very few ways to define and examine what is a "concept." In this paper, we only use the following three properties to support the faithfulness of using sparse salient interactions as concepts encoded by the DNN. $\bullet$ [26] proved that a well-trained DNN would encode just a small number of salient interactions for inference under some common conditions. See Line 83. $\bullet$ [24] proved that such a small number of salient interactions extracted from a sample $x$ could well mimic the DNN's outputs on numerous masked samples {$x_T | T\subseteq N $}. See Theorem 1. $\bullet$ [15] discovered that salient interactions have considerable transferability and strong discrimination power. See Line 103. 
However, the above properties cannot guarantee a clear correspondence between an interactive concept and a concept in human cognition. Up to now, we cannot mathematically formulate what is a concept in cognition science. Thus, there is still a long way to go to unify the learning difficulty from the DNN's perspective with the cognitive difficulty for human beings. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: After reading the response, I have revised my rating to "6: Weak Accept."
Summary: The paper provides theoretical results explaining why simpler concepts are easier to learn for neural networks. The paper refers to prior work on interactive concept models, which define a concept as a subset of input features, and defines the complexity of each concept as the number of features contained within it. The main theoretical results of the paper argue that the variance of a concept and its interaction under noisy input increases exponentially with the size or complexity of the concept. Hence, more complex concepts are more likely to be influenced by small variations in the data. The paper then presents empirical evidence supporting these claims, and shows empirically that lower order concepts are more stable and learnt quicker than higher order ones. Strengths: 1. The paper proposes an interesting analysis around interactive concepts. 2. The empirical evaluation backs the claims of the paper. Weaknesses: 1. The theoretical analysis does not match the claims of the paper's abstract. The theory only explains higher variance for higher order concepts, but the connection between higher variance and difficulty of learning is not made explicit theoretically. 2. Several lines of work around shortcut learning and simplicity bias of networks are missing from related works. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Can the authors make the connection between variance and learning difficulty more concrete theoretically? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper. We will try our best to answer all your questions. Please let us know if you still have further concerns, or if you are not satisfied with the current responses, so that we can further update the response ASAP. --- Q1: Ask for the theoretical connection between variance and learning difficulty. "... the connection between higher variance and difficulty of learning is not made explicit theoretically." A: Thank you for your insightful comments. In fact, this is a great challenge. Up to now, there is no certified strict correspondence between the aforementioned interactive concepts and neurons in the DNN. Therefore, in order to derive an analytic connection between variances and learning difficulty, let us discuss our recent findings on the difficulty of learning different interactive concepts. Specifically, we simplify the learning of a DNN into a linear regression problem. In this **very simple** setting, we assume each $i$-th concept encoded by the DNN to be a specific feature, and use $f_i$ to represent the triggering (presence) state of the $i$-th concept in a training sample. Because according to Eq. (9) in Line 200, the network output $v(\cdot)$ is the sum of a small number of salient interactive concepts, we roughly simplify and rewrite the inference output of the DNN as the following linear function $$y=w^\top f=w_1f_1+\cdots +w_df_d ,$$ where $f=[f_1,\cdots, f_d]^\top$. Then, $w_i$ can be viewed as the strength of the DNN encoding the $i$-th interactive concept. In this way, if $w_i \approx 0$, the DNN does not learn the $i$-th interactive concept. We suppose that training samples are sampled from $f \sim P(f) = N(\mu, \Sigma)$ and $y^*$ denotes the ground truth label. Specifically, $\mu=[\mu_1,\mu_2,...,\mu_d]^{\top}$ and $\Sigma={\rm diag}(\sigma_1^{2},\sigma_2^{2},...,\sigma_d^{2})$. 
The toy regression problem can be formulated as follows: $$L=E_{P(f)}[\dfrac{1}{2}(w^\top f-y^*)^2].$$ In this way, we can derive that the optimal weights of the above regression task satisfy $|w_i| \propto 1/\sigma_i^2$, where $w_i$ denotes the strength of the DNN encoding the $i$-th interactive concept. Please see Fig. 2 in *the response pdf file* for the proof overview (due to page limits). This is our recent finding, and we will add the full proof to the appendix if the paper is accepted. $|w_i| \propto 1/\sigma_i^2$ indicates that the learning effect of the $i$-th interactive concept is inversely proportional to the variance of the $i$-th interactive concept. Therefore, interactive concepts with high variances (high-order interactive concepts) are more difficult to learn. In fact, it is difficult to analyze the exact dynamics of learning concepts in much more complex real-world settings. We only analyze the connection between variances and learning difficulty in the above very simplified toy setting. Nevertheless, the above analysis still provides conceptual and analytic insights into the relation between variances and difficulty of learning. --- Q2: "Several lines of work around shortcut learning and simplicity bias of networks are missing from related works." A: We will cite and discuss papers [cite1-6] for shortcut learning and simplicity bias in the related work section in the revised paper. From the perspective of the simplicity bias, our study considers the definition of interactive concepts in [26], and analyzes the bias towards learning simple concepts. More crucially, we try to induce an approximate analytic explanation from the common intuition of the bias, based on the theory of game-theoretic interactions. This work clarifies an exact form of the complexity of concepts that a DNN finds difficult to learn. [cite1] Shah H, Tamuly K, Raghunathan A, et al. The pitfalls of simplicity bias in neural networks[J]. 
Advances in Neural Information Processing Systems, 2020, 33: 9573-9585. [cite2] Huh M, Mobahi H, Zhang R, et al. The low-rank simplicity bias in deep networks[J]. arXiv:2103.10427, 2021. [cite3] Geirhos R, Jacobsen J H, Michaelis C, et al. Shortcut learning in deep neural networks[J]. Nature Machine Intelligence, 2020, 2(11): 665-673. [cite4] Pezeshki M, Kaba O, Bengio Y, et al. Gradient starvation: A learning proclivity in neural networks[J]. Advances in Neural Information Processing Systems, 2021, 34: 1256-1272. [cite5] Scimeca L, Oh S J, Chun S, et al. Which shortcut cues will dnns choose? a study from the parameter-space perspective[J]. arXiv preprint arXiv:2110.03095, 2021. [cite6] Addepalli S, Nasery A, Radhakrishnan V B, et al. Feature Reconstruction From Outputs Can Mitigate Simplicity Bias in Neural Networks[C]//The Eleventh International Conference on Learning Representations. 2022. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response. The added proof on the toy setting as well as related works would strengthen the paper in my opinion. I am raising my rating as well.
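The toy regression argument in this rebuttal (optimal weights inversely proportional to per-concept variance) is easy to check numerically. The sketch below uses illustrative values of $\sigma_i^2$, $\mu$, and $y^*$ that are our own assumptions, not values from the paper; with equal means, the minimizer of $E[(w^\top f - y^*)^2/2]$ solves $(\Sigma + \mu\mu^\top)w = y^*\mu$, so $w_i\sigma_i^2$ is constant:

```python
import numpy as np

# Toy setting: f ~ N(mu, Sigma) with Sigma diagonal, loss E[(w^T f - y*)^2 / 2].
# The first-order condition is E[f f^T] w = y* mu, i.e. (Sigma + mu mu^T) w = y* mu.
sigma2 = np.array([0.5, 1.0, 2.0, 4.0])  # per-concept variances (assumed, illustrative)
mu = np.full(4, 1.0)                     # equal mean triggering strengths
y_star = 3.0

w = np.linalg.solve(np.diag(sigma2) + np.outer(mu, mu), y_star * mu)

# With equal means, |w_i| is inversely proportional to sigma_i^2,
# so w_i * sigma_i^2 is the same for every concept:
print(w * sigma2)
```

Concepts with larger variance (the proxy for high-order concepts here) receive smaller weights, matching the $|w_i| \propto 1/\sigma_i^2$ claim.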
Rebuttal 1: Rebuttal: Thanks for all reviewers' great efforts and comments. This paper has received ratings of one *strong accept*, one *weak accept*, and two *borderline accepts*. We are glad to answer all your questions and conduct **new experiments as requested.** **Please let us know if you still have further concerns, so that we can update the response as soon as possible.** Pdf: /pdf/e698403f5d0f05038a439d89dd56b871b35c59b0.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Class-Conditional Conformal Prediction with Many Classes
Accept (poster)
Summary: Standard conformal prediction provides a marginal coverage guarantee, which is insufficient for many practical applications. Class-conditional conformal prediction is suitable for many applications, especially in $Y \rightarrow X$ settings, e.g., image data. Achieving class-conditional coverage relies on learning a separate threshold for each label, and hence splitting the calibration set into label-dependent groups. This becomes prohibitive in classification problems with many classes. The current paper proposes a method based on clustering similar classes (which share similar conformal scores) and performing cluster-conditional conformal threshold calibration. Strengths: The paper is well-written, and the provided figures help the reader to follow the content easily. Relevant works and the limitations of the respective methods are highlighted. The proposed method is validated on a large number of large-scale public datasets, illustrating its general practical relevance. Weaknesses: To avoid tautology, most of my concerns will be outlined in the questions box: those are related to the issues arising from empty prediction sets and clustering based on the nonconformity scores. There are also a couple of minor typos: 1. Line 26: $\mathcal{C}(X_{test})\subseteq \mathcal{Y}$. 2. Proposition 1: missing full stop in the equation between lines 156 and 157. Finally, a couple of stylistic suggestions: 1. In Figure 1, it may be better to make sub-captions consistent (i.e., point out that the left sub-plot refers to standard CP and the right one to Classwise CP). 2. In Figure 2, it may be better to use different markers (in addition to different colors) for depicting the results of using different methods. 3. I believe that [1] should also be cited and discussed. [1] "Least Ambiguous Set-Valued Classifiers with Bounded Error Levels". 
Sadinle et al., 2016 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I hope that the authors can provide answers to the following questions/concerns: 1. One of the issues with class-conditional CP which has been pointed out in [1] is that there could be cases when the resulting prediction sets are empty (the authors also proposed several ways to handle this issue). Does this issue also apply to the proposed method? 2. I am concerned regarding clustering based on nonconformity scores. Is it possible to (empirically) demonstrate that (even in the most straightforward cases) such classes form "meaningful"/"interpretable" clusters if nonconformity scores are used as features? Consider CIFAR-100 dataset as an example: class "apples" is "closer" to the class "mushrooms" than it is to the class "rocket". However, suppose those happened to be assigned to the same cluster. In that case, the coverage guarantee reads as "among images with apples, mushrooms, and rockets, the true label is contained in the prediction set with probability at least $1-\alpha$" and may not be that interpretable. 3. For many datasets, there is a known taxonomy/hierarchy of classes, e.g., for CIFAR-100, even though there are 100 classes, these are grouped into 20 superclasses. Such taxonomies can also be used to convert the low-data class-conditional calibration step to a high-data group-conditional calibration step, while resulting in interpretable clusters/groups. Even if such taxonomy is not provided, one can still try to come up with one that guarantees that there are "enough" calibration points in each group. What are the advantages of the proposed method over this strategy? [1] "Least Ambiguous Set-Valued Classifiers with Bounded Error Levels". Sadinle et al., 2016 Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The current work is a part of methodological research and does not have a potential negative societal impact. Overall, I hope that the authors can better highlight the downsides of the proposed method (e.g., grouping based on existing dataset taxonomy). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We appreciate your summary of the motivation for achieving class-conditional coverage and why it can be difficult in multi-class classification problems. Thanks for suggesting we cite Sadinle et al., 2016, which presents a variant of classwise conformal. We have added it to our discussion of related work. In response to your questions: > Q1: Empty sets In our experiments, we did not observe null sets to be a problem. The problem of empty sets is a real one that appears with some conformal prediction methods, but this problem is orthogonal to the problem we are focused on. If null sets were to appear when the clustered conformal procedure is applied to other settings, we could apply some method for addressing empty sets in other conformal procedures. For example, a simple modification that can be applied to any conformal procedure to avoid empty sets is to return the union of the conformal set and $\\{y_{\min}\\}$ where $y_{\min}$ is the class with the smallest conformal score (i.e., the “most likely” class). Since this modified set is a superset of the original conformal set, the new set will also have a $1-\alpha$ coverage guarantee. (Also, there are further improvements that can make this procedure non-conservative as well.) In a different direction, Guan and Tibshirani ‘22 [1] use empty sets intentionally and give them semantic meaning. In that work, the empty set is used to indicate that a data point does not appear to be consistent with any class and should be treated as an outlier. While we do not pursue this approach in our present work, with some conformity scores, the same idea could be used in conjunction with the clustering we propose. [1] "Prediction and outlier detection in classification problems." Guan and Tibshirani, 2022. 
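For concreteness, the simple empty-set fix described above (taking the union with the most likely class) can be sketched in a few lines. The function names are illustrative, and `scores` is assumed to hold one nonconformity score per candidate label:

```python
def conformal_set(scores, qhat):
    # Base conformal set: all labels whose nonconformity score is at or below the threshold.
    return {y for y, s in enumerate(scores) if s <= qhat}

def nonempty_conformal_set(scores, qhat):
    # Union with the "most likely" class, i.e. the label with the smallest score.
    # The result is a superset of the base set, so the 1 - alpha coverage
    # guarantee of the base procedure is preserved.
    y_min = min(range(len(scores)), key=lambda y: scores[y])
    return conformal_set(scores, qhat) | {y_min}
```

Even when the threshold is so small that the base set would be empty, the modified set still contains the most likely class.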
> Q2: Meaningful clusters Our investigations into cluster memberships showed that indeed, classes that are semantically similar are generally not grouped together (i.e., did not have similar conformal score distributions). If semantically similar classes did have similar conformal score distributions, our problem would be simpler to solve! The fact that this relationship does _not_ hold is what makes our data-driven clustering approach necessary and what prompted our investigation in the first place. Nonetheless, exploring what determines which classes are grouped together is something we would like to better understand. We thank the reviewer for this point and we will explicitly discuss this in the updated manuscript. > Q3: Clustering based on taxonomy This is another very interesting direction. The advantage of the alternative method you described is that the clusters are more interpretable, so the cluster-conditional coverage guarantee is perhaps more intuitive. The disadvantage is that this alternative method will not in general yield class-conditional coverage – the errors will be traded off, with some hard classes having low coverage and some easy classes having high coverage. Conversely, our method produces clusters that are generally not semantically meaningful, so the cluster-conditional coverage guarantee is less interpretable; however, the advantage is that our method is designed to yield class-conditional coverage (due to the approximate exchangeability of the conformal scores for classes grouped in the same cluster). Echoing our reply to your previous question, the fact that semantically related classes do not have similar distributions is what makes our proposal necessary, and motivated us to pursue this direction in the first place. To summarize: if you want good coverage _for each taxonomic group_, the method you described will be better, but if you want good coverage _for each class_, our proposed method will be better. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their responses to the questions. I have checked the questions/concerns of other reviewers and the corresponding responses, and the experiments with new evaluation metrics. I had a question about the mentioned definition of UnderCovGap: is $|\mathcal{Y}|$ in the denominator correct? Shouldn't it be the sum of indicators (e.g., $\sum_{y\in\mathcal{Y}}1\\{\hat{c}_{y}\leq 1-\alpha\\}$)? (Experiments also suggest so) --- Reply to Comment 1.1.1: Comment: You are correct about the denominator; thank you for pointing that out! The metric UnderCovGap captures the average coverage gap among classes that are undercovered and the correct description for how it is computed is: Let $\mathcal{Y}_{\text{under}} = \\{y: \hat{c}_y \leq 1-\alpha \\}$ be the set of classes with coverage less than $1-\alpha$. Then, $$\text{UnderCovGap} = 100 \times \frac{1}{|\mathcal{Y_{\text{under}}}|}\sum_{y \in \mathcal{Y_{\text{under}}}} |\hat{c}_y -(1-\alpha)|$$
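The corrected metric can be sketched in a few lines; the function name and the convention of returning 0 when no class is undercovered are our own choices:

```python
def under_cov_gap(class_coverages, alpha):
    # Average coverage gap (in percent) among classes whose empirical
    # coverage falls at or below the target level 1 - alpha.
    target = 1 - alpha
    under = [c for c in class_coverages if c <= target]
    if not under:
        return 0.0
    return 100 * sum(target - c for c in under) / len(under)
```

Note that the denominator is the number of undercovered classes, not the total number of classes.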
Summary: This article studies the clustered conformal prediction, mitigating the issues in the standard conformal and class-wise conformal, by grouping some similar classes together. Strengths: * The authors proposed clustered conformal prediction to strike the balance between marginal coverage and class-wise coverage for the setting of many classes with limited data per class. * When calibration data is limited, the empirical results show that the proposed method achieves a relatively small coverage gap according to the designed metric (CovGap), effectively balancing marginal coverage and class-wise coverage. Weaknesses: * The authors need to specify the low-data scenario (at least on the calibration data from the perspective of the experiments) in the title; otherwise, there are other works related to many classes. Moreover, from the main manuscript, it looks more like cluster-conditional conformal instead of class-conditional conformal in the title. * The setting and the numerical study somehow contradict each other. Given that the authors used deep learning to train the score function, it does not make sense that there is a limited calibration dataset in practice. If so, why not allocate some from the training data? * There is no sensitivity analysis on the clustering procedure. What if different schemes are used for choosing $M$? How does $M$ (e.g., $M < K$ or $M > K$) affect the results? * Using CovGap as a metric might obscure the issue of coverage for each class across methods, e.g., the standard and the proposed method. The results are difficult to assess without explicitly showing the coverage for each class. For example, class-wise conformal has higher coverage than $1-\alpha$ with a larger gap, but the proposed method could return coverage below $1-\alpha$ with a smaller gap. 
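For reference, CovGap as discussed here is presumably the average absolute per-class deviation from the target coverage level; a minimal sketch under that assumption (function name ours):

```python
def cov_gap(class_coverages, alpha):
    # Average absolute deviation of per-class empirical coverage from the
    # target level 1 - alpha, reported in percent. Note that this penalizes
    # overcoverage and undercoverage symmetrically.
    target = 1 - alpha
    return 100 * sum(abs(c - target) for c in class_coverages) / len(class_coverages)
```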
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * The strikingly large prediction set of class-wise conformal is due to the threshold set as in Line 115, but why don’t we set it as the largest score in the calibration data instead of the infinity? There might be a loss of coverage, but it is unclear how large it is in these empirical results, and whether or not this kind of loss is tolerable, especially when the other two competing methods may also lose the coverage for the individual class. * Line 149: It seems the idea of finding the threshold for the null cluster is the same as that for the marginal coverage. Now that you criticized the standard conformal’s overall coverage ability, why not merge some clusters until you get the desired sample size instead of merging all clusters in the proposed method? * Line 176: The authors also notice the issue of class imbalance may affect the clustering, then can we up-sample the minor classes to mitigate the issue mentioned in Line 115 and then follow the class-conditional conformal instead of the proposed clustered conformal? Or what if you do the clustering on the original dataset to obtain the desired sample size, and then use class-conditional conformal? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the above two parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and questions. In response to weaknesses: > W1: “Low-data scenario”; cluster-conditional conformal vs. class-conditional conformal We are not aware of other works that focus on creating prediction sets that target class-conditional coverage in the many-class regime. If you are aware of work in this setting, please share! As mentioned in our Related Work section, existing work that evaluates class-conditional coverage focuses on classification problems with at most 10 classes, which is still an order of magnitude off from the setting we work in (100-1000 classes). Our goal is to produce prediction sets with good class-conditional coverage. We perform clustering as a means towards this goal. The choice of titling the manuscript “cluster-conditional conformal” or “class-conditional conformal” is a matter of emphasizing the mechanism behind the procedure or the result of the procedure. We chose the title of “class-conditional conformal” to highlight this desirable property of the resulting prediction sets. > W2: Reallocating training data to increase calibration data This is a good point. There are many reasons that the amount of calibration data per class may be limited in practice. Nowadays, it is common that models are first pre-trained on large amounts of data that are not from the same distribution that you will deploy the model on and then fine-tuned on a much smaller dataset from the target distribution. Moving data from the pre-training data to the conformal calibration dataset will not lead to valid conformal prediction sets due to the distribution shift. Even if you do have a reasonably large labeled dataset from your target distribution, it is generally undesirable to exclude large amounts of data from training: suppose that we want 100 calibration examples per class but we have 1,000 classes; we would have to set aside 100,000 examples. 
Removing such large amounts of data from the training dataset will often reduce model accuracy. We thank the reviewer for raising this point, and we will explicitly raise this in the manuscript. > W3: Sensitivity analysis You can find the sensitivity analysis on the clustering procedure in Appendix B.2. We find that our method is robust to the choice of $M$. > W4: CovGap and potential undercoverage This is another important point, and we have carried out an extensive experiment to investigate this. Please see the common response for details. In response to questions: > Q1: Using largest calibration score rather than infinity As you point out, your suggested modification will not yield a coverage guarantee. To test the effect in practice, we ran an additional experiment following your suggestion. The tl;dr is that (1) as expected, the modified classwise conformal procedure does have smaller set size and results in undercoverage and (2) clustered conformal still does better than modified classwise in terms of both set size (lower AvgSize) and class-conditional coverage (lower CovGap). In more detail: (CIFAR-100, with 1000 total calibration points sampled using fixed random seed). FracUnderCov is the fraction of classes with less than 80% coverage. * `Classwise`: CovGap = 7.8, AvgSize = 46.1, FracUnderCov = 0.08 * `Modified Classwise`: CovGap = 7.9, AvgSize = 18.0, FracUnderCov = 0.18 * `Clustered`: CovGap = 4.5, AvgSize = 8.2, FracUnderCov = 0.08 > Q2: Null cluster; merging clusters How we treat classes in the null cluster makes very little practical difference. With that said, one way to get cluster-conditional coverage even for the null cluster is to treat it like any other cluster and estimate a conformal quantile for that cluster. We choose to not do this and opt instead to use the threshold that provides marginal coverage since it is lower variance and works well in practice. 
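The thresholds discussed in Q1 and Q2 both come down to the standard finite-sample conformal quantile rule; a minimal sketch covering the standard and modified classwise variants (function name ours; `scores` is assumed to be one class's calibration nonconformity scores):

```python
import math

def classwise_qhat(scores, alpha, clip_to_max=False):
    # Standard finite-sample conformal quantile: the k-th smallest score,
    # with k = ceil((n + 1) * (1 - alpha)).
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        # Too little data for a valid finite-sample quantile: the standard
        # classwise rule returns infinity (guaranteed coverage, huge sets);
        # the modified variant clips to the largest observed score
        # (smaller sets, but no coverage guarantee).
        return max(scores) if clip_to_max else float("inf")
    return sorted(scores)[k - 1]
```

This makes the trade-off in the experiment above concrete: the clipped variant trades the coverage guarantee for finite thresholds on small classes.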
> Q3: Alternative methods for handling rare classes We would not gain anything by up-sampling rare classes and running class-conditional conformal on the synthetically up-sampled dataset. First, from a theoretical perspective, this would not come with any coverage guarantee. The issue is that upsampling the data results in a dataset that is not exchangeable with the test data – there will be many duplicates in the calibration data, but we know that with probability 1 we will not see any duplicates in the test data. Thus, upsampling would fight against the guarantees underlying conformal prediction. Second, from a practical perspective, the resulting estimated conformal thresholds would be extremely volatile. Regarding your suggestion to “do the clustering on the original dataset to obtain the desired sample size, and then use class-conditional conformal?” — this is a very reasonable idea and is in fact what we do! What is special about our clustering is that the clusters we produce make it so that we will get good class-conditional coverage and not just cluster-conditional coverage. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ explanation. I admit current works like [7, 8, newRef] about class-conditional conformal only conduct experiments on at most 10 classes. However, they directly control the coverage for each original class without bothering with the extra clustering step. In contrast, as in your discussion section, the necessity of the clustering step is due to the regime of insufficient calibration data. From the trend of the current experiments, the gain would be marginal if $n_{avg}$ is larger, say greater than 200, or even higher (which is not included in the experiments). 
Theoretically, it directly controls the cluster-wise coverage as in Proposition 1, and may control the class-wise coverage under the requirement/assumption on the clustering as in Propositions 2-3, where the total variation $\epsilon$ seems to be critical to the goodness of the clustering step. I feel the authors may need to add some proper discussion of Proposition 3 (although the appendix includes the proof); otherwise, this part is somehow less solid in the main article. By the way, $n_{avg}$ shows up many times but there is no explicit introduction when it first appears, which may confuse the readers. [newRef] Sadinle, Mauricio, Jing Lei, and Larry Wasserman. "Least ambiguous set-valued classifiers with bounded error levels." Journal of the American Statistical Association 114.525 (2019): 223-234. --- Reply to Comment 1.1.1: Comment: We agree with everything you said about when clustering is/isn’t beneficial. If you have a calibration set that is at least moderately large and has only ~10 classes, you should run classwise conformal, as clustering will not provide a benefit. We hoped to have communicated this in our paper, but we can make this clearer in the camera-ready draft. We would also like to note that following reviewer MBi5’s suggestion, we have added discussion of Sadinle et al., 2019 to our related work section, and are happy to add more references too. Regarding the issue of low data: our motivation for this work was to achieve good class-conditional coverage on ImageNet (a “many classes” setting) using its validation set of 50,000 images. 50,000 images is not “low data” in aggregate, but divided amongst 1,000 classes, this yields only 50 images per class. Many real applications in computer vision are similar in this regard (reasonably large calibration sets but also "many classes" — hundreds, thousands, or more); in fact, many applications are even worse off than ImageNet in terms of the number of examples available per class. 
This is where clustering will be useful, since it allows us to dynamically pool information between classes. Following our work, some of our computer vision colleagues are already investigating incorporating clustering into their conformal systems. As for the comment about the propositions, thank you, we can explain the role of Propositions 2-3 more carefully in the camera-ready version, including the role of $\epsilon$. We would like to note that, since the initial version, we have been able to further strengthen Proposition 3. Now the same conclusion holds as written, but we only require the KS (Kolmogorov-Smirnov) distance between all pairs of class score distributions within a cluster to be bounded by $\epsilon$, and not the TV (total variation) distance. This is a weaker requirement since the KS distance is never larger than the TV distance. Thank you also for the suggestion to explain $n_{\text{avg}}$ more. We have added the following sentence to Line 200 preceding our first use of $n_{\text{avg}}$: “We construct calibration sets of varying size by changing the average number of points in each class, denoted $n_{\text{avg}}$.” We hope this helps clarify our reasoning for the positioning of this paper, the choice of experiments, and so on.
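The KS distance referred to above can be estimated empirically from two classes' score samples; a short illustrative sketch (function name ours):

```python
import numpy as np

def empirical_ks(a, b):
    # Two-sample Kolmogorov-Smirnov distance: the largest gap between the
    # two empirical CDFs, evaluated at the pooled sample points.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

Since the KS distance never exceeds the TV distance, bounding the KS distance within each cluster is the weaker (and thus easier to satisfy) requirement.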
Summary: The paper studies how to achieve class-conditional coverage (in the setting of conformal prediction) for multiclass classification problems, in particular for tasks with large label spaces. Previous techniques either provide no class-wise guarantees or tend to produce conservative prediction sets due to lack of data. The key insight here is that non-conformity scores from different classes may be grouped together if their class-conditional score distributions are similar, since the $(1- \alpha)$-quantile value to achieve desired coverage would be the same across these classes. Essentially, we can extrapolate about a class with low amounts of data by simply using data from other classes with a similar score distribution. This work is particularly applicable in settings where there is a large number of classes (and thus a higher likelihood of small amounts of data for some of these classes) and / or high class imbalance. Strengths: • Firstly, I found this paper particularly well-written and structured; it was written to make the discussion quite intuitive and easy to follow. • The method is a natural solution in settings where there is an underlying structure / similarity in how the base model performs on certain groups of classes. • Strikes a balance between getting meaningful prediction sets (not overly conservative) and getting low variance in class-wise coverage rates. • Empirically, the proposed method performs quite well, performing on par with or better than the baseline methods for the studied metrics. Weaknesses: • The performance of CLUSTERED seems to be essentially data- and model-driven. If there are different classes with similar score distributions, it improves performance. Is there a possibility of artificially inducing clusters if such a similarity doesn’t exist? • The CovGap metric seems to penalize both overcoverage and undercoverage without distinguishing between the two. 
It would be nice to see how much each of the methods overcovers and undercovers (classwise) separately. For example, in Table 2 I would expect that the high CovGap values for CLASSWISE are mostly due to overcoverage, while STANDARD is probably undercovering and overcovering in equal measure, but this metric hides that potential distinction. • Minor typo: the use of $h$ in line 146, in the equation after line 146, and in Proposition 1 should be $\hat{h}$. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: • Though equation (2) implies equation (1), in general we may not achieve the guarantee of (2), as you have mentioned in Section 2.2, so the CLUSTERED algorithm may not necessarily achieve marginal coverage. Do you see this method as more on a continuum with STANDARD and CLASSWISE, with each of them having situations where they work better, especially since the CLUSTERED guarantees are not as consistent as the others? • Clarification about the usage of Algorithm CLASSWISE: when generating a prediction set for a new input, to choose a conformal predictor to use, we need to know the true label, which we do not. So which of the K predictors is chosen? Do we run all of them and choose the most conservative quantile value? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: • The theoretical guarantee on class-conditional coverage depends on the value of $\epsilon$ defined in Proposition 3, which can vary based on the actual score distributions across classes, the value of $m$ used, and the clustering algorithm used, so there may be concern about achieving close to $(1 - \alpha)$-coverage performance in general, although in all experiments, the method seems to work quite well. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your nice summary of the paper! In response to weaknesses: > W1: Inducing artificial clusters Our method does, in a sense, induce artificial clusters, but we do not view this as a weakness. In the original datasets, there is no true clustering. We simply partition the classes in the way that the data suggests in order to yield good class-conditional coverage. 
 > W2: CovGap We thank the reader for this point. We have conducted a substantial additional experiment to investigate this. Please see the file attached to the common response for plots that separately show undercoverage and overcoverage. In response to questions: > Q1: Marginal coverage Clustered conformal may not always yield perfect class-conditional coverage, but it always yields cluster-conditional coverage (see Proposition 1), which is a stronger guarantee than marginal coverage. Whether cluster-conditional coverage implies class-conditional coverage is dependent on the quality of the clustering (see Proposition 3). To better clarify this point, we will update the exposition surrounding Proposition 1 and 3 in the text. We thank the reviewer for this point. > Q2: Classwise conformal For a more technical explanation of how the conformal sets are formed by the CLASSWISE procedure, please see Lines 111-112. In words: We first compute a separate conformal quantile for each class $y$. Then, given an input $X$, we compute the conformal prediction set as follows: for each possible label $y$, we compute the conformal score $s(X,y)$ and then include $y$ in the prediction set if $s(X,y)$ is less than the conformal quantile for class $y$. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I have gone through the added experimental results and the other reviewers comments, and would like to maintain my original score.
Summary: The paper describes a method for conformal prediction specifically addressed to the case where there is little available data in some of the classes. To address this problem, the proposed method performs a clustering of classes with similar score distributions, with the goal of increasing the data used to select the set of classes for a given sample. Experimental results on standard image datasets with a large number of classes under different settings of data availability are reported, comparing the proposed method with other standard approaches for conformal prediction. Strengths: 1. The paper addresses the problem of conformal prediction with a large number of classes and with few data per class, and proposes a new method specifically designed for this setting. The method is well motivated and formalized. 2. Extensive experiments are performed on different datasets and different settings of data availability, comparing the proposed method with the standard and classwise approaches. Results show that the proposed method achieves a better average coverage. 3. The paper includes useful practical recommendations about when to use each of the conformal prediction approaches. Weaknesses: 1. One of the motivating points of the paper is that, in some cases, average coverage can be good enough, but specific coverage for some underrepresented classes can be very low. Results show that average coverage improves with the proposed method, but there is not any discussion or results about the specific coverage for particular classes, especially those with few data. Therefore, we cannot validate the original motivation of the paper. Furthermore, as far as I understand, classes with few data will most probably be assigned to the null cluster, and then those classes will follow the standard approach, and there will be no difference between the proposed method and the standard one for these classes. 2. 
Although the average coverage improves, the AvgSize measure is, in general, worse than with the standard method. 3. Some technical details could be better explained or motivated. - In Section 2.1, what do you exactly mean by "scores for all classes in a given cluster are exchangeable"? That the score distributions are equal? - The k-means clustering algorithm is very dependent on the selected number of clusters. Although there is an explanation of how the number of clusters is defined, wouldn't it be better to use an adaptive clustering algorithm where the number of clusters can be automatically determined? - The representation used to perform the clustering is based on a small discrete number of quantile scores. Wouldn't it be better to use a larger and denser set of quantile scores that could give a better approximation of the distribution of score values? 4. Related work is very oriented toward describing the types of data that conformal prediction has been applied to. I miss a more technical discussion describing the different techniques and methods used in the existing works. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above, especially questions regarding performance on specific classes with few data. ----------------------------- I have read the rebuttal and it has clarified most of the questions I had about the paper, especially those related to classes with few samples and technical issues. After reading all comments and discussion, I increase my rating to weak accept. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitations of the method, and no potential negative impacts are foreseen. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and, in particular, for highlighting our practical recommendations, which we hope will provide useful guidance for practitioners. In response to weaknesses: > W1: Classes with few examples See our common response, which describes additional experiments we ran to look at potential undercoverage. It is true that classes with very few examples (say, 5) will end up in the null cluster. In our practical recommendation in the Discussion, we highlight that in regimes where we expect many classes to have very few examples, one should run standard conformal. However, if most classes have few, but not very few, examples (say, 20), clustered conformal will provide a boost in performance. > W2: AvgSize of clustered conformal is larger than standard conformal 
 This is true — there is no free lunch! To get class-conditional coverage, clustered has to output sets that are slightly larger than standard conformal. Standard conformal is not guaranteed to yield class-conditional coverage and can have bad class-conditional coverage in some settings. In many cases, the trade-off of slightly larger sets in return for class-conditional coverage is worth it. Compared to the classwise conformal procedure, we get similar class-conditional coverage with much smaller sets. > W3: Adaptive clustering algorithm We agree that an adaptive clustering algorithm seems nice in theory. However, implementing this in our setting is trickier than it might initially appear. There are several methods designed for choosing $k$ in k-means (e.g., elbow, gap statistic), and we experimented with many of them. However, none of them worked well in our particular clustering setting. The problem of clustering distributions is relatively unexplored and would be interesting to dive into, but it was not the focus of our work. Combining this with the fact that our method is robust to the choice of $k$ in the k-means procedure (see our sensitivity analysis in Appendix B.3) made it reasonable to just use the intuitive procedure for choosing $k$ that we described in the paper. > W4: Representation used for clustering Yes, there is flexibility in choosing the representation upon which to perform clustering. We chose one that makes intuitive sense and stuck with it, since we found it to work well in practice. Increasing the number of quantiles used to create the representation is not necessarily better: for example, if there are only 10 examples, the 85% quantile and the 90% quantile will correspond to the same example. > W5: Related work Thanks for this suggestion; we have expanded our related work. 
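The quantile-based representation discussed in W4 can be sketched as follows (the function name and the particular quantile levels are illustrative); the resulting per-class feature vectors would then be fed to a k-means clustering step:

```python
import numpy as np

def score_quantile_embedding(scores_by_class, qs=(0.5, 0.6, 0.7, 0.8, 0.9)):
    # Represent each class by a handful of empirical quantiles of its
    # conformal scores. Classes with similar score distributions end up
    # close together in this space and can then be grouped by k-means.
    return np.stack([np.quantile(s, qs) for s in scores_by_class])
```

As noted above, using many more quantiles does not necessarily help: with only 10 calibration examples per class, neighboring quantile levels collapse onto the same example.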
We originally focused on datasets in previous work since the only existing techniques/methods are STANDARD and CLASSWISE, which we described earlier on in the Introduction. CLASSWISE is a special case of Mondrian CP (Line 85), which we now present in more detail. For some related problems (such as creating prediction sets with group-conditional coverage), we have also added some additional discussion of techniques used in previous works. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your response. It has clarified most of the questions I had about the paper, specially those related with classes with few samples and technical issues. After reading all comments and discussion, I will increase my rating to weak accept.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for engaging with our paper and for providing helpful comments and suggestions. We have incorporated the feedback into the paper, and have run a new set of experiments (described below). The reviewers agreed the paper was well-written and that we thoroughly validated our method via extensive experiments on several datasets. A couple points we would like to address to all reviewers: * Several reviewers expressed concerns that the CovGap metric could obscure undercoverage of some classes. To investigate this, we ran experiments that looked specifically at the fraction of classes with less than 80% coverage (FracUnderCov). These results are included in Appendix C.2 and we find that clustered conformal also does well in this metric and trends in FracUnderCov generally mirror the trends in CovGap. Additionally, in the attached pdf we have included new plots that show the amount of undercoverage (UnderCovGap) and the amount of overcoverage (OverCovGap). More explicitly, UnderCovGap is CovGap restricted to classes with less than the desired coverage level: $$\text{UnderCovGap} = 100 \times \frac{1}{|\mathcal{Y}|}\sum_{y \in \mathcal{Y}} |\hat{c}_y -(1-\alpha)| \cdot \mathbf{1}\\{\hat{c}_y \leq (1-\alpha)\\}$$ where $\hat{c}_y$ is the coverage of class $y$. OverCovGap is computed analogously but with $\mathbf{1}\\{\hat{c}_y \geq (1-\alpha)\\}$. * The goal of our paper is not to champion the use of the clustered conformal method in all settings (and we hope our writing reflects this, particularly in the discussion section). We perform extensive experiments to understand when baselines do well, and we identify a regime in which we can be smarter about how we use our data — in this regime, we recommend the use of clustered conformal. We provide individual replies to specific comments from each reviewer. Pdf: /pdf/da44a2db587d4a2c39a2d27c4fb5e645c0805cff.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Towards Faithful Sign Language Translation
Reject
Summary: The paper proposes a novel framework for sign language translation (SLT), which can integrate multiple SLT subtasks. The work is motivated by a series of experimental analyses, e.g., the convergence speed of different subtasks and the relationship between SLT and sign language recognition (SLR) performance. Besides, two constraints are proposed to improve the faithfulness of the model and ease the model training. The method achieves SOTA performance on two widely adopted SLT benchmarks using only keypoint inputs. Strengths: 1. The paper identifies two important problems: the lack of faithfulness in current SLT models and the inconsistent trend between SLR and SLT, which can inspire future works in this field. 2. The method is well motivated by a series of in-depth analyses in Figure 3. 3. The code-switching operation is interesting and novel, which can inspire future works on cross-modality modeling for SLT. 4. SOTA performance is achieved with a lightweight model using only keypoint inputs. Weaknesses: My major concerns come from method details and experiments. Method: 1. Should the MLP and classifier in Figure 2 be shared or not? In VAC [9], two different classifiers are appended to the visual and contextual modules, while SMKD [40] uses a shared classifier, and the paper follows the design of SMKD. Intuitively, different classifiers should be used to project two features from different spaces into a common space. More discussion is needed for the discrepancy. 2. Gloss embeddings and mixup also appeared in a recent paper in the field of SLR [R1]. In [R1], the gloss embeddings are extracted by FastText, and the mixup is also performed between visual and gloss embeddings. Some discussion or comparison should be added. 3. What is the motivation for performing code-switching between the visual and gloss embeddings? Is it possible to use it between the contextual and gloss embeddings? Experiments: 4.
In Table 5, the sentence-wise code-switching does not consistently outperform the token-wise counterpart. The authors may explain why the sentence-wise one performs better when not using annealing and consistency. 5. As stated in line 299, logits are a closer representation of glosses. Also, [12] uses logits as the input for the translation module for CSL-Daily. Thus, it is not rigorous to conclude that adopting logits will degrade the performance, since the ablation study is conducted only on Phoenix14T. 6. The paper focuses on improving the faithfulness of SLT, but no objective metrics are mentioned to measure faithfulness. [R1] Natural Language-Assisted Sign Language Recognition, CVPR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How to obtain $m(\beta)$? Is it right that: when $\beta>0.5$, $m(\beta)=1$? 2. What is the difference between the consistency constraints in the paper and those in VAC? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations and societal impact adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and constructive comments. Our responses are summarized below. **Q6. About the choice of evaluation metrics.** Thanks for your constructive comments on the evaluation metrics. As we mentioned **in the “global” response**, we suggest evaluating the over-translation and under-translation with BLEU-1 and ROUGE-1, respectively. We also evaluate the proposed method on the subsets of different universal part-of-speech tags. Experiment results show that almost all tags benefit substantially from the proposed MonoSLT, which indicates the proposed method is effective in increasing faithfulness. **Q1. Whether the MLP and classifier in Figure 2 should be shared or not?** Thanks for your insightful comments. This submission follows the design of SMKD and shares the MLP and classifiers. We agree that it is common to project two features from different spaces into a common space. However, the two feature spaces here are not independent. The contextual features are extracted from the visual features with a two-layer BiLSTM, so sharing the classifiers and the following modules can be regarded as a kind of implicit constraint for the BiLSTM module. Besides, as we mentioned in the “global” response, this submission keeps the visual encoder (SMKD-based) the same and does not dig into the faithfulness of the visual encoder. The visual encoder already projects the two features from different spaces into a common space, and whether the following modules are shared or not has little influence on the performance, as shown in Table 6. **Q2. Comparison with recent work.** Thanks for pointing out this relevant work, which has a similar motivation to our alignment constraint. Both works try to guide the learning of visual features with relevant semantic meanings.
The main difference is that NLA-SLR conducts a class-wise mixup to improve isolated sign language recognition, while this submission conducts a sentence-wise mixup to make the translation module implicitly aware of word alignment. We will add this comparison in the related work. **Q3. What is the motivation to fulfill code switch between the visual and gloss embeddings?** As mentioned in the introduction of Sect. 3.3, recent works have shown that a single transformer-based model has the capacity to model multi-lingual languages (e.g., XNLI[37], mBart[38]) and cross-modality information (e.g., VideoBERT[39]). If we regard the translation model as an auto-encoder, the token-wise code-switching can be regarded as an ensemble of two complementary masked language modeling processes, and they need contextual information from each other to achieve accurate translation. Therefore, the translation module needs to align the contextual information from both visual and semantic features to achieve the translation goal during training, which aligns the visual and semantic spaces implicitly. This submission adopts the code-switching module to ease the training of visual features and align the feature spaces; it can also be used as an augmentation strategy for the "self-alignment" of visual or contextual features, which is verified in the experiments on CSL-Daily **in the "global" response**. **Q4. The choice of code-switching module.** This submission provides two kinds of code-switching approaches, and both of them achieve comparable results as shown in Table 5. Rather than choosing one as a universal solution, we suggest selecting the approach based on the practical situation.
For example, sentence-wise code-switching is a more suitable choice when the alignment is hard to obtain because it can preserve the complete information of both sequences, while token-wise code-switching may be a more suitable choice when the model is co-trained with a masked language modeling task. **Q5. Conclusion that adopting logits will degrade the performance is not rigorous.** Thank you for bringing this up. This submission conducts this experiment to reveal that the logits are imprecise on Phoenix14T. We apologize for the misunderstanding and will revise this statement as "adopting logits as input leads to a slight performance degradation on Phoenix14T, ...". **Q7. How to obtain $m(\beta)$?** $m(\beta)$ is a vector of length $T$, and each element is $0$ or $1$, representing whether the corresponding semantic embedding is used or not. Specifically, we first estimate the alignment path $\hat{\pi}=\arg\max_\pi p(\pi|X, G)$ with the maximal probability [41], and then obtain the corresponding range $[s_i, e_i)$ for each gloss $i$, where $s_i$ and $e_i$ denote the start and end frame indexes. Then we randomly sample gloss $i$ with probability $\beta$ and replace the corresponding visual embeddings $\mathcal{P}(v_t), t \in [s_i, e_i)$ with $\mathcal{E}(G_i)$ (i.e., replace all frames that are recognized as gloss $i$ with its semantic embedding). The matrix form is presented in Equ. 3. We will clarify the relevant formulation and provide several examples in the supplementary. **Q8. What is the difference between the consistency constraints in the paper and those in VAC?** The main difference is that VAC needs pair-wise alignments, while we select two subtasks based on their characteristics and incorporate them into a single task to simplify the training process. Therefore, from the perspective of constraint design, the proposed method can be regarded as a simplified version of VAC.
We choose a VAC-style constraint design to validate the effectiveness of the proposed framework, and other alignment methods are also potentially feasible (e.g., contrastive-based methods).
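To make the $m(\beta)$ construction from Q7 concrete, here is a small illustrative sketch in our own notation (not the authors' code): each gloss with an estimated frame span $[s_i, e_i)$ is selected with probability $\beta$, and the visual embeddings of the selected frames are swapped for that gloss's semantic embedding. The function name, the `spans` representation, and the injectable `rng` are our assumptions:

```python
import random

# Hypothetical sketch of token-wise code-switching: spans[i] holds the
# aligned frame range [s_i, e_i) for gloss i; each gloss is switched to
# its semantic embedding with probability beta.
def code_switch(visual_emb, gloss_emb, spans, beta, rng=random):
    T = len(visual_emb)
    m = [0] * T                    # the 0/1 vector m(beta) from Q7
    mixed = list(visual_emb)
    for gloss_id, (s, e) in enumerate(spans):
        if rng.random() < beta:    # sample gloss i with probability beta
            for t in range(s, e):
                m[t] = 1
                mixed[t] = gloss_emb[gloss_id]
    return mixed, m
```

With $\beta=1$ every aligned frame carries a gloss embedding, and with $\beta=0$ the input stays purely visual, matching the interpolation the rebuttal describes.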
Summary: This paper mainly discusses the challenges of improving faithfulness in sign language translation and proposes solutions. The researchers leverage rich monolingual data and adopt back-translation to generate synthetic parallel data, explore the potential of denoising auto-encoder, and propose the MonoSLT framework to improve the accuracy of sign language translation. They also emphasize the importance of alignment and consistency constraints to align visual and linguistic embeddings and improve faithfulness. This paper has important reference value for improving faithfulness in sign language translation. Strengths: 1. Proposing a new unified framework, MonoSLT, which integrates subtasks of sign language translation into a single framework, allowing these subtasks to share acquired knowledge. 2. Proposing two constraints: alignment constraint and consistency constraint, which help improve the faithfulness of translation. 3. Experimental results show that the MonoSLT framework is competitive in improving the faithfulness of sign language translation and can increase the utilization of visual signals, especially when sign language vocabulary is imprecise. Weaknesses: The paper does not explicitly address the handling of non-manual signals and sign language morphological changes, which are crucial factors influencing the faithfulness of sign language translation. The experimental settings in the paper do not provide detailed explanations for many hyperparameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is the overfitting problem in the C2T model defined as a lack of faithfulness? Using regularization to address overfitting seems to make sense, but a more common approach should be a suitable variant of Dropout. This requires further explanation. The paper sets many hyperparameters, but lacks corresponding ablation studies, as well as the analysis of the mutual influences between different parameters. 
Can simply setting all hyperparameters to 1.0 or 0.1 yield better results than state-of-the-art (SOTA) methods? This question remains unanswered. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The proposed method lacks proper metrics to quantitatively evaluate the faithfulness of Sign Language Translation models and continues to use BLEU and ROUGE for evaluation. The paper's analysis and experimental results regarding faithfulness are not clearly defined. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful comments. Our responses to them are summarized below. **Q1. About the non-manual signals.** We agree with you that non-manual signals are essential for SLU. One of the major reasons that we select keypoints as input is that synchronous signals can be easily modeled by the interactions among different groups of keypoints. As mentioned in the supplementary, to make the skeleton-based SLR method achieve comparable performance to image-based SLR methods, we divide the keypoints into five groups (body, left hand, right hand, mouth, and facial contour), apply modified ST-GCN blocks to model each signal independently, and model the interaction among groups through 1D CNN layers. To better reveal the influence of non-manual signals on SLT, we conduct an ablation study by masking out the remaining group-wise features. ||Dev Set|||Test Set||| |---|:---:|:---:|:---:|:---:|:---:|:---:| ||BLEU|ROUGE|WER|BLEU|ROUGE|WER| |Body|18.84|45.07|42.0|20.69|45.11|40.0| |Body+Hands|26.38|51.72|28.0|26.12|50.44|27.7| |Body+Hands+Mouth|28.27|53.93|21.6|29.06|53.70|22.0| |Body+Hands+Mouth+Face|29.96|55.41|21.2|31.15|57.05|21.4| As shown in the table above, the mouth signal is essential for SLU, and including the facial signal can bring further improvement. However, we believe the use of visual signals is more relevant to the SLR task. Therefore, we prefer to add this ablation to the supplementary to maintain the completeness of the main body. **Q2. About the sign language morphological changes.** Thanks for your suggestion about the morphological changes, which lead to a large intra-class variance. As mentioned in the supplementary, we normalize the weights and the input features of the shared classifiers, as previous metric learning works do, to reduce intra-class variance. We agree that modeling morphological changes of sign languages is important for faithfulness, as a sign gloss may appear different in different contexts.
However, we believe that exploring the relationship among SLT subtasks has a higher priority than modeling morphological changes. As shown in the experiments of **the “global” response**, although we do not explicitly model morphological changes, the alignment among SLT subtasks improves the faithfulness on ADJ, ADP, and PRON, which indicates that the proposed method can implicitly improve the ability to handle morphological changes. **Q3. About the hyperparameter settings and why setting all hyperparameters to 1.0 or 0.1 yields better results.** Thanks for your suggestion about the hyperparameter settings. Most of the hyperparameters are set based on previous works; for example, the temperature is set to 1 as in the original knowledge distillation work. As for loss weights, we pay more attention to the loss designs and keep the loss weights the same as in the preliminary experiment in Table 4. To provide more details, we provide more ablation results about the choice of the hyperparameters on Phoenix14T. It is worth noting that the following results are evaluated on the G2T task because we saved the best model on the dev set based on the performance of G2T by mistake, but the influence of hyperparameter choice on performance should be similar. Ablation on the temperature of KL divergence |Temperature|Dev BLEU|Dev ROUGE|Test BLEU|Test ROUGE| |:---:|:---:|:---:|:---:|:---:| |1|29.73|55.09|30.03|55.17| |2|**30.26**|55.43|29.65|54.36| |4|29.82|**55.57**|29.07|55.03| |8|29.85|55.49|29.35|54.42| Based on this ablation, the influence of temperature is small, and setting it to 2 achieves slightly better performance.
Ablation on the weight of the consistency loss |$\lambda_c$|Dev BLEU|Dev ROUGE|Test BLEU|Test ROUGE| |:-:|:-:|:-:|:-:|:-:| |0.0|28.77|54.05|29.87|54.61| |0.01|29.36|54.30|27.96|52.65| |0.02|29.80|55.35|29.95|54.88| |0.05|29.62|55.25|28.71|53.52| |0.1|**30.26**|55.43|29.65|54.36| |0.2|30.08|**55.46**|29.82|54.57| |0.5|28.22|54.00|29.06|53.88| |1.0|12.57|36.45|12.95|36.35| In the main paper, we set $\lambda_c=0.1$ by default, because a large loss weight will dominate the training process and hurt the translation performance. It can be observed that $[0.02, 0.2]$ is a reasonable range for $\lambda_c$. We will add more ablation results about hyperparameter choices in the supplementary. **Q4. Why is the overfitting problem in the C2T model defined as a lack of faithfulness?** As shown in Fig. 3(b), the different convergence speeds make it easier for the C2T model to rely on the target-side context and the implicit language model rather than visual features, which carries a high risk of hallucination. As suggested by reviewer uJKb, we agree that the claim that overfitting is caused by a lack of faithfulness is not well-supported; many possible factors lead to overfitting, and the lack of faithfulness is only one of them. The conclusion here should be an assumption that motivates us to explore the use of visual signals in SLT. We will carefully revise the relevant analysis to make it more thorough. **Q5. Regularization or Dropout?** Thanks for your comment; it provides another viewpoint on the proposed method. The code-switching module (Equ. 3) can also be regarded as joint training of two source-aligned translation tasks with complementary token-wise dropout. Designing a regularization loss is a more familiar route for us, and adopting dropout or other regularization tools is also a feasible choice. **Q6. About the evaluation metrics.** Thanks for your constructive comments on the evaluation metrics.
As we mentioned **in the “global” response**, we suggest evaluating the over-translation and under-translation with BLEU-1 and ROUGE-1, respectively. We also evaluate the proposed method on the subsets of different universal part-of-speech tags. Experiment results show that almost all tags benefit a lot from the proposed MonoSLT, which indicates the proposed method is effective in increasing faithfulness. --- Rebuttal Comment 1.1: Comment: The authors answered all of my questions very carefully and added more extensive experimental results. I hope they will continue to consider this deeply in future studies. The rating will be changed to 5.
Summary: The paper discusses the issue of faithfulness in sign language translation (SLT), which refers to whether the SLT model captures the correct visual signals. It is found that imprecise glosses and limited corpora can hinder faithfulness in SLT. To address this, the paper proposes MonoSLT, which integrates SLT subtasks into a single framework that can share knowledge among subtasks. Two kinds of constraints are proposed to improve faithfulness: the alignment constraint and the consistency constraint. Experimental results show that MonoSLT is competitive against previous SLT methods and can increase the utilization of visual signals, especially when glosses are imprecise. Strengths: The method proposed in this paper outperforms multiple baseline methods, which is a promising contribution to sign language translation. Weaknesses: 1. There is no comparison with [12,15] on the bug-free dataset, which is a concern. Although I understand that reproducing [15] would require additional effort, since your code is based on MMTLB, it would be reasonable to verify the effectiveness of MMTLB on the bug-free data. 2. The analysis of faithfulness and hallucination in the paper is not in-depth enough. There is no metric (either manual or automatic) to quantify faithfulness and hallucinations, and the improvement in BLEU is not sufficient to indicate that the faithfulness issue has been effectively addressed. The few cases presented in the paper are not enough to support the conclusions. 3. The analysis in section 3.2 is not thorough enough, and the conclusions are somewhat forced. For example, the statement that overfitting is caused by a lack of faithfulness is not well-supported, and the conclusion that the absence of an obvious negative correlation between SLT and SLR in Figure 3(c) is due to hallucination lacks data support and quantitative analysis. The few examples presented in section 4 are not sufficient to demonstrate the issue of hallucination.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. About the bug-free evaluation on the CSL-Daily dataset.** Thanks for your suggestion; it pushed us to investigate the performance gap between MMTLB[12] and the used baseline. We present the re-evaluated results **in the "global" response**. Moreover, we also find that the overconfident predictions of the BiLSTM layer hinder the use of visual information. Therefore, we only train V2T and G2T jointly on CSL-Daily. With this modification, the gap between MMTLB and the used baseline is narrowed, and MonoSLT shows competitive performance with skeleton inputs only. We will clarify relevant statements and update the results in Table 2. **Q2. About the evaluation metrics.** Thanks for your constructive comments on the evaluation metrics. As we mentioned **in the “global” response**, we suggest evaluating the over-translation and under-translation with BLEU-1 and ROUGE-1, respectively. We also evaluate the proposed method on the subsets of different universal part-of-speech tags. Experiment results show that almost all tags benefit substantially from the proposed MonoSLT, especially ADP, ADV, and PRON, which indicates the proposed method is effective in increasing faithfulness. **Q3. The analysis in section 3.2 is not thorough enough, and the conclusions are somewhat forced.** Thanks for your constructive comments on the analysis in section 3.2. We agree with you that the claim that overfitting is caused by a lack of faithfulness is not well-supported; many possible factors lead to overfitting, and the lack of faithfulness is only one of them. The conclusion here should be an assumption that motivates us to explore the use of visual signals in SLT. As for the hallucination, we agree that the lack of an obvious negative correlation between SLT and SLR in Figure 3(c) can only show the potential risk of hallucination, which is also a motivation rather than a conclusion.
Based on the evaluation results (0.570 BLEU-1 overall) **in the "global" response**, about 43% of the words predicted by MonoSLT do not exist in the references; although some of them may be synonyms of the corresponding reference words, the MonoSLT model still faces a high risk of hallucination. We will carefully revise the relevant analysis to make it more thorough.
Summary: This work is dedicated to enabling the SLT model to capture correct visual signals (faithfulness in SLT). In order to improve faithfulness in SLT, the author integrates SLT subtasks into a single framework named MonoSLT, and based on this, proposes alignment constraints and consistency constraints. The former is used for aligning the visual and linguistic embeddings. The latter is used for integrating the advantages of subtasks. To demonstrate the effectiveness of the proposed method, the authors conduct experiments on two public datasets. Strengths: [1 - complete layout and detailed description]. The article has a relatively complete overall layout and a detailed work description. [2 – method novelty]. The author introduces the code-switching phenomenon in the Alignment Constraint, mimicking the language alternation in conversations between multilinguals, and mixes visual embeddings and gloss embeddings as an input to the Translation Module. [3 – the rationality of alignment]. Visual and linguistic embeddings are implicitly aligned through shared translation modules and synthetic code-switching corpora, making better use of the characteristics of different subtasks. [4 – method performance]. On the Phoenix14T dataset, the author's method only uses skeleton sequences as input, yet improves performance (+2.2 BLEU-4) compared to the best method using RGB video as input. Weaknesses: [1 – Writing quality]. In section 3.2, some analysis is confusing, and the conclusion seems to be the author's subjective thoughts. And in the title of Table 2, ‘the inconsistent punctuation bug’ is confusing. [2 - method performance on CSL-Daily]. On the CSL-Daily dataset, MonoSLT performs poorly, lagging behind several SOTA models. [3 - Model evaluation issues]. The paper also mentions that although it alleviates the problem of faithfulness in SLT, there are no suitable metrics to measure it.
The author still uses BLEU and ROUGE for evaluation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. At the end of line 145, S2T model should be C2T model. This may be a spelling error. 2. In the title of Table 2, the phrase "the inconsistent punctuation bug" appeared, which confused me. I don't know the meaning of this phrase. 3. On the CSL-Daily dataset, the performance of MonoSLT is somewhat lacking. Can you further improve the performance of MonoSLT on CSL-Daily? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: 1. Find or create proper metrics to quantitatively evaluate the faithfulness of SLT models. 2. You said that the CSL-Daily dataset provides more precise gloss annotations, which leads to other models with lower SLR performance being able to achieve better SLT results. This also leads to your model not performing as well as some models on the CSL-Daily dataset. As the sign language dataset becomes larger and more accurate, your model may not be as good as other models. I think this is worth considering. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1. About the inconsistent punctuation bug.** Thank you for bringing this up. During experiments, we find the tokenization process of the original implementation always predicts punctuation marks in English, which greatly affects the evaluation results, especially for longer n-grams. For example, ',' / '?' / '!' / ':' are punctuation marks in English, and '，' / '？' / '！' / '：' are the corresponding punctuation marks in Chinese. We also re-evaluate the released models of MMTLB[12] **in the "global" response**. We will clarify relevant statements and update the results in Table 2. **Q2. About the poor performance on CSL-Daily.** Thanks for your suggestion; it pushed us to investigate the performance gap between MMTLB[12] and the used baseline. As we mentioned in the main paper, this submission only uses the translation module of MMTLB, and we assume the used feature extractor (GCN+Conv1D+BiLSTM) is a universal structure for all forms of translation. To better leverage the precise gloss annotations provided by CSL-Daily, previous works (MMTLB and 2S-SLT[15]) take the gloss probabilities as the input to the VL-mapper and initialize the MLP layer with pretrained gloss embeddings (presented in the supplementary of MMTLB). We follow this setting, but the BiLSTM layer generates over-confident predictions due to its large temporal receptive field, which hinders the use of visual information and makes the model prone to overfitting. Therefore, we only train V2T and G2T jointly on CSL-Daily and update the relevant results as stated **in the "global" response**. With this modification, the gap between MMTLB and the used baseline is narrowed, and MonoSLT shows competitive performance with skeleton inputs only. **Q3. About the evaluation metrics for faithfulness.** Thanks for your constructive comments on the evaluation metrics.
As we mentioned **in the “global” response**, we suggest evaluating the over-translation and under-translation with BLEU-1 and ROUGE-1, respectively. We also evaluate the proposed method on the subsets of different universal part-of-speech tags. Experiment results show that almost all tags benefit substantially from the proposed MonoSLT, especially ADP, ADV, and PRON, which indicates the proposed method is effective in increasing faithfulness. **Q4. The generalization ability of the proposed method when datasets become larger and more accurate.** Thanks for your constructive feedback on generalization to larger-scale datasets. Based on the manner of data collection, recent sign language datasets can be summarized into two categories: some datasets (Phoenix14T[6], [R1], and [R2]) are collected from broadcasts or the Internet and provide coarse-grained or automatically generated annotations, while the others (CSL-Daily[11], [R3]) are collected by first designing a sign language corpus and then recording sign videos from invited signers. The former way can easily collect large-scale datasets with imprecise annotations and is closer to real-world scenarios, while datasets collected in the latter way are under several constraints (e.g., the camera pose and background are kept the same in CSL-Daily). From this perspective, the proposed method tries to leverage the alignment nature of sign video data rather than precise annotations, which indicates that it is more suitable for large-scale datasets. Moreover, it is worth noting that the computing cost also increases along with the size of the dataset, and the proposed skeleton-based method provides an effective way to quickly verify the effectiveness of SLU methods on large-scale datasets. [R1] Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, and Andrew Zisserman. BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues. In ECCV, 2020.
[R2] David Uthus, Garrett Tanzer, and Manfred Georg. YouTube-ASL: A large-scale, open-domain American Sign Language-English parallel corpus. arXiv preprint arXiv:2306.15162, 2023. [R3] Amanda Duarte, Shruti Palaskar, Lucas Ventura, Deepti Ghadiyaram, Kenneth DeHaan, Florian Metze, Jordi Torres, and Xavier Giro-i Nieto. How2Sign: A large-scale multimodal dataset for continuous American Sign Language. In CVPR, 2021. **Q5. About the writing quality.** Thanks for your constructive comments about the analysis in section 3.2. We agree with you that some conclusions are subjective and not well-supported; many possible factors lead to overfitting, and the lack of faithfulness is only one of them. Some conclusions here should be assumptions that motivate us to explore the use of visual signals in SLT. We have fixed typos and will carefully revise the relevant analysis to make it more thorough.
Rebuttal 1: Rebuttal: **Thanks for the constructive suggestions and insightful comments from all reviewers; they push us to think about faithfulness in SLU more deeply. Our responses to common questions are summarized below.** **Q1. Potential quantitative metrics for faithfulness** Inspired by previous works in NMT [R1], we summarize unfaithfulness in SLU into three problems: an unfaithful visual encoder, over-translation, and under-translation. This submission keeps the visual encoder the same and does not dig into the faithfulness of the visual encoder. We provide potential quantitative metrics for the latter two problems below. **Over-translation: some words are unnecessarily translated for fluency.** The over-translation problem happens when the SLU model takes the wrong visual information or relies too much on the target-side context and the implicit language model. However, it is hard to evaluate the utilization of visual information, and the lack of ground-truth alignment also makes it difficult to evaluate the inference process of the translation module. Therefore, we attempt to evaluate the over-translation problem from the predictions. Rather than proposing a new metric, we suggest using existing metrics (specifically, BLEU-1) to evaluate the over-translation problem. As mentioned in the original paper [R2]: > A translation using the same words (1-gram) as in the references tends to satisfy adequacy. The longer n-gram matches account for fluency. Different from longer n-gram metrics, BLEU-1 evaluates the precision of word-wise prediction without regard to fluency. To quantitatively evaluate the faithfulness of the proposed method, we further calculate the BLEU-1 with different universal part-of-speech tags: $$ \text{BLEU-1(tag)}=\frac{\sum_{C\in{HYP}}\sum_{word\in C, \tau(word)==tag} Count_{clip}(word) }{\sum_{C\in{HYP}}\sum_{word\in C, \tau(word)==tag} Count(word) }, $$ where $\tau(\cdot)$ is a model that obtains the part-of-speech tag of a word.
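As an illustration of the tag-restricted clipped unigram scores, here is our own minimal sketch for a single hypothesis/reference pair (not the authors' evaluation code): the precision side corresponds to BLEU-1(tag) above, and the recall side to the analogous ROUGE-1(tag) used for under-translation. The dictionary `pos` stands in for the POS tagger $\tau(\cdot)$, and all names are hypothetical:

```python
from collections import Counter

# Sketch of tag-restricted clipped unigram scores: precision over
# hypothesis words with a given POS tag (BLEU-1 style) and recall over
# reference words with that tag (ROUGE-1 style).
def tag_unigram_scores(hyp, ref, pos, tag):
    hyp_c, ref_c = Counter(hyp), Counter(ref)
    clip = lambda w: min(hyp_c[w], ref_c[w])   # Count_clip(word)
    h_tag = [w for w in hyp if pos.get(w) == tag]
    r_tag = [w for w in ref if pos.get(w) == tag]
    prec = sum(clip(w) for w in set(h_tag)) / max(len(h_tag), 1)
    rec = sum(clip(w) for w in set(r_tag)) / max(len(r_tag), 1)
    return prec, rec
```

The clipping step caps each word's credit at its reference count, so repeating a correct word does not inflate the precision.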
**Under-translation: some visual information is ignored or mistakenly understood.** The under-translation problem happens when the SLU model ignores critical visual information or takes in wrong visual information for translation. Similar to the over-translation problem, we suggest using ROUGE-1 to evaluate the under-translation problem. The ROUGE-1 scores with different universal part-of-speech tags are calculated by: $$ \text{ROUGE-1(tag)}=\frac{\sum_{C\in{REF}}\sum_{word\in C, \tau(word)==tag} Count_{clip}(word) }{\sum_{C\in{REF}}\sum_{word\in C, \tau(word)==tag} Count(word) }. $$ The evaluation results with BLEU-1 and ROUGE-1 on the Dev set of Phoenix14T are presented in the following table. The complete table and results on the Test set can be found in the attached PDF.

||BLEU-1|||ROUGE-1|||
|-|:-:|:-:|:-:|:-:|:-:|:-:|
||Baseline|MonoSLT|#Word|Baseline|MonoSLT|#Word|
|ADP|0.561|**0.612**|997/1000|0.545|**0.593**|1037|
|ADV|0.455|**0.494**|1333/1274|0.419|**0.431**|1479|
|DET|0.563|**0.568**|554/572|0.514|**0.528**|615|
|NOUN|0.617|**0.643**|1480/1469|0.591|**0.610**|1535|
|PRON|0.484|**0.493**|428/434|0.516|**0.547**|397|
|PROPN|**0.444**|0.239|160/381|0.445|**0.555**|164|
|VERB|0.326|**0.341**|469/472|0.310|**0.335**|484|
|Overall|0.563|**0.570**|6989/7205|0.536|**0.559**|7339|

As shown in the tables, almost all tags benefit substantially from the proposed MonoSLT, especially ADP, ADV, and PRON, which indicates the proposed method is effective at increasing faithfulness. This result is roughly consistent with the previous finding in NMT [R3] that the gain in faithfulness for function words is much larger than that for content words. The linguistic structure of sign language is complicated yet fascinating; we hope the proposed method and metrics can inspire further works. R1. Modeling coverage for neural machine translation. ACL, 2016. R2. Bleu: a method for automatic evaluation of machine translation. ACL, 2002. R3.
Measuring and improving faithfulness of attention in neural machine translation. EACL, 2021. **Q2. Punctuation bug and poor performance on CSL-Daily** To provide a more thorough evaluation on the bug-free dataset, we re-evaluate the released models of MMTLB[12] and 2S-SLT[15] by adding an extra line in the evaluation script provided by [12]: ``` txt_hyp = txt_hyp.replace(',', ',').replace('?', '?').replace('!', '!').replace(':', ':') ``` where ',' / '?' / '!' / ':' are punctuation marks in English, and ',' / '?' / '!' / ':' are the corresponding full-width Chinese marks. The tokenization process of the original implementation always predicts punctuation marks in English, which greatly affects the evaluation results, especially for longer n-grams. The re-evaluated results are presented in the following table.

||Dev Set|||Test Set||||||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||R|B4|WER|R|B1|B2|B3|B4|WER|
|MMTLB[12]|55.76|27.43|30.6|56.06|56.61|43.66|34.31|27.51|30.3|
|2S-SLT[15]|58.24|29.18|25.4|**58.62**|**58.64**|**45.77**|**36.39**|**29.55**|**25.3**|
|Baseline|54.86|27.00|30.1|55.37|56.03|42.82|33.62|27.02|*29.1*|
|MonoSLT|55.79|28.09|29.9|*56.25*|*57.28*|*44.15*|*34.89*|*28.19*|29.2|

As suggested by reviewers, we attempt to figure out why MonoSLT performs poorly on CSL-Daily. We find that adopting visual logits as input can achieve better performance than contextual logits (27.0\% vs. 25.9\%). One possible reason is that we initialize the MLP layer with the pretrained gloss embedding as MMTLB[12] does, and the overconfident predictions of the BiLSTM layer hinder the use of visual information. Therefore, we only train V2T and G2T jointly on CSL-Daily, and the final supervision is formulated as: $$ L = L_{CTC}^{V}+\lambda_C L_{SLT}^V + \lambda_{CS} L_{SLT}^{CS} + \lambda_c(D_{KL}(p_V||p_{CS})+D_{KL}(p_{CS}||p_{V})). $$ The corresponding results are updated in the table above.
With this modification, MonoSLT achieves better performance than MMTLB with comparable WER, which indicates the effectiveness of MonoSLT. Pdf: /pdf/81dacac379e66f1f9a5345ea35776c9cf2be76d4.pdf
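The punctuation normalization added to the evaluation script can equivalently be written with `str.translate`, which applies all replacements in one pass. A minimal sketch mapping ASCII punctuation to its full-width Chinese counterparts (the function name is ours):

```python
# Replace English (ASCII) punctuation with full-width Chinese marks before
# scoring, mirroring the extra evaluation line quoted in the rebuttal.
PUNCT_MAP = str.maketrans({',': ',', '?': '?', '!': '!', ':': ':'})

def normalize_punct(txt_hyp: str) -> str:
    return txt_hyp.translate(PUNCT_MAP)
```

Without this normalization, every hypothesis punctuation mark mismatches the Chinese references, which penalizes longer n-grams the most.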
NeurIPS_2023_submissions_huggingface
2023
Summary: This work mainly studies the faithfulness issue in SLT (i.e., whether the SLT model captures correct visual signals). The study identifies imprecise glosses and limited corpora as the main factors contributing to limited faithfulness. To mitigate this issue, this work proposes a framework called MonoSLT, which leverages the shared monotonically aligned nature among SLT subtasks. This framework incorporates alignment and consistency constraints. Experiments demonstrate the effectiveness of the proposed method. Strengths: This paper is well-written and well-organized. This work performs an in-depth analysis of previous works and of the associations among SLT-relevant tasks. The overall performance is promising and shows a notable performance gain over the baseline. Weaknesses: The main focus of this work is on the concept of faithfulness in sign language translation (SLT). However, a notable limitation of the study is the absence of quantitative metrics to evaluate faithfulness. While the authors acknowledge this limitation in the paper's discussion of limitations, it remains a drawback. It would be beneficial for the authors to provide further clarification on this issue, perhaps by suggesting potential quantitative metrics that could be used to assess faithfulness in future research. It is suggested that the proposed framework be compared with VAC, as they share similar components such as consistency loss and visual module constraints. Regarding the discrepancy in length between the embeddings produced by the visual GCN module and the gloss module, it is essential to understand how the code-switching module handles this challenge. The authors should provide clarification or explanation on how the code-switching module addresses this issue. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the comments in Weakness. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It is better to design a suitable metric to evaluate faithfulness in SLT. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: About the further clarification on the quantitative metrics.** Thanks for the insightful comment about the metric; it pushes us to think more deeply about the evaluation metrics of faithfulness. As we mentioned **in the "global" response**, we suggest evaluating over-translation and under-translation with BLEU-1 and ROUGE-1, respectively. Moreover, we believe a universal tokenization approach and the alignment between visual features (which may need extra annotations) and the prediction can further improve the interpretability of the SLT model. **Q2. About the comparison with VAC and the technical contributions.** Thanks for your suggestion. In fact, the design of the preliminary experiments in Table 4 is inspired by VAC. Technically speaking, there is little difference in the constraint design between the proposed method and VAC: both adopt a visual module constraint and a consistency loss. The main difference is that VAC needs pair-wise alignments, whereas we select two subtasks based on their characteristics and incorporate them as a single task to simplify the training process. Therefore, from the perspective of constraint design, the proposed method can be regarded as a simplified version of VAC, and we empirically infer that the performance should be comparable. However, the proposed method is significantly different from VAC in the context of sign language understanding. We summarize the main differences and the technical contributions below. 1. **The faithful aspects of SLU models.** As stated in the "global" response, VAC can be regarded as a typical approach to improving the faithfulness of the visual encoder, and both of its constraints are designed to improve the visual encoding ability of SLU models. However, this submission keeps the visual encoder the same and attempts to improve the faithfulness of the translation module, trying to improve the translation process with the help of multiple subtasks.
Besides, the proposed alignment constraint can also be applied to the "self-alignment" of visual features, which is verified in the experiments on CSL-Daily **in the "global" response**. Compared to the constraint designs, we believe the proposed framework is more important. We choose a VAC-style constraint design to validate the effectiveness of the proposed framework, and other alignment methods are also potentially feasible (e.g., contrastive-based methods). 2. **Explore the relationship between SLR and SLT.** Recent SLR and SLT studies are nearly independent; as we mentioned in the introduction, SLR approaches often validate their generalization ability on SLT tasks, and SLT approaches often adopt pretrained SLR models and focus on translation module designs. There exists a research gap between SLR and SLT works: whether the visual features are leveraged sufficiently. For example, as shown in Table 1 of the main paper, TwoStream-SLT [15] with an ensemble of multi-modality models (both skeleton sequence and video with three different random seeds) achieves much better SLR performance than with the skeleton sequence only (-9.42% WER, from 27.14% to 17.72%), but it achieves comparable SLT performance (+0.97 BLEU-4, from 27.98% to 28.95%) under these two settings. It seems that the translation performance tends to saturate, and this submission shows there is still a long way to go before SLT methods are practically applicable. 3. **Explore the robustness of the SLT models.** The main difference between SLR and SLT lies in their different language habits, and this submission attempts to reveal the importance of faithfulness beyond performance: if the translation results change the meaning of the gloss sequence, a high BLEU-4 score is meaningless, and the SLR model is a more practical choice than SLT. We hope this submission can inspire more works considering the robustness of SLT models. **Q3.
How the code-switching module handles the length discrepancy.** Thanks for your suggestion. The visual sequence $V=(v_1, \cdots, v_T)$ and the gloss sequence $G=(g_1,\cdots,g_N)$ have different lengths, which prevents direct alignment between the visual and semantic embeddings. We first estimate the alignment path $\hat{\pi}=\arg\max_\pi p(\pi|X, G)$ with the maximal probability [41], and then obtain the corresponding range $[s_i, e_i)$ for each gloss $i$, where $s_i$ and $e_i$ denote the start and end frame indexes. Then we randomly sample gloss $i$ with probability $\beta$ and replace the corresponding visual embeddings $\mathcal{P}(v_j), j \in [s_i, e_i)$ with $\mathcal{E}(G_i)$ (i.e., we replace all frames that are recognized as gloss $i$ with its semantic embedding). For glosses tokenized with byte pair encoding, we average the embeddings of the subunits as the gloss embedding $\mathcal{E}(G_i)$. The matrix form is presented in Eq. 3. We will clarify the relevant statements and provide several examples in the supplementary.
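The replacement step described above can be sketched as follows. This is a hedged illustration with our own names and array shapes, not the authors' code (which works in the matrix form of their Eq. 3):

```python
import numpy as np

def code_switch(frame_emb, segments, gloss_emb, beta, rng=None):
    """Randomly swap aligned visual embeddings for gloss embeddings.
    frame_emb: (T, d) projected visual embeddings P(v);
    segments: per-gloss frame ranges [(s_i, e_i), ...] from the estimated
    alignment path; gloss_emb: (N, d) semantic embeddings E(G_i), with BPE
    subunit embeddings assumed already averaged per gloss."""
    rng = rng or np.random.default_rng(0)
    out = frame_emb.copy()
    for i, (s, e) in enumerate(segments):
        if rng.random() < beta:  # sample gloss i with probability beta
            out[s:e] = gloss_emb[i]  # replace all frames recognized as gloss i
    return out
```

With `beta=0` the visual sequence is untouched; with `beta=1` every aligned span is replaced by its gloss embedding.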
null
null
null
null
null
null
Reining Generalization in Offline Reinforcement Learning via Representation Distinction
Accept (poster)
Summary: The authors consider overgeneralization in offline RL, which is new, and propose a modification that helps to solve the problem and can be applied to any algorithm. Strengths: The proposed method boosted the performance of different algorithms by a notable margin in most cases, and it can be integrated into any algorithm while being simple. Good ablation study. Weaknesses: The problem with the experiments is that the algorithms are tested using MuJoCo datasets and 2 pen datasets, which appears not to be sufficient. For example, in the CORL (https://arxiv.org/abs/2210.07105) benchmarks which you mention and in the original work (https://arxiv.org/abs/2110.01548), it is seen that EDAC performs great on MuJoCo but fails on AntMaze (the recent update of CORL tested even more datasets and EDAC seems to fail on Adroit too). There are already some algorithms which are competitive with or outperform EDAC on MuJoCo while working better on other datasets, see https://arxiv.org/abs/2206.02829 as an example. Some algorithms can perform not that well on MuJoCo but outperform others on different datasets, see concurrent work (https://arxiv.org/abs/2305.09836) or IQL in CORL. I suggest running more experiments using AntMaze or all of the Adroit datasets to verify that your approach helps. We can't conclude that it helps without checking more datasets. Without it I'm not sure that the paper should be accepted. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does your modification affect training runtime? Is the change significant or not? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not clear without additional benchmarking or runtime information.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### run more experiments using AntMaze or all of the Adroit datasets As AntMaze datasets are designed to investigate stitching ability and can well cover the state-action space, our approach may not help significantly there. Therefore, we perform experiments on the Adroit datasets. We report average results across five seeds after training for 50 epochs on the Adroit-expert datasets. On these datasets, RD significantly improves TD3-N-Unc's performance, as shown in the following table. We also tried our best to tune algorithms on the remaining Adroit-cloned and Adroit-human datasets; however, it seems that none of the widely-used algorithms and backbone algorithms can obtain final models with satisfying average performance, and our approach also failed to improve the evaluation performance of the final models. This phenomenon indicates that RD cannot boost existing methods from the perspective of reining generalization on these datasets, and a more in-depth analysis of potential factors (e.g., limited data quantity) is required, which is left for future work.

| Dataset | TD3-N-Unc | TD3-N-Unc + RD |
| ------------------ | --------------- | --------------- |
| pen-expert-v1 | 61.8 $\pm$ 26.1 | **84.7 $\pm$ 5.1** |
| door-expert-v1 | 11.1 $\pm$ 3.2 | **75.8 $\pm$ 29.4** |
| relocate-expert-v1 | 1.9 $\pm$ 2.9 | **12.8 $\pm$ 9.0** |
| hammer-expert-v1 | 0.7 $\pm$ 0.9 | **20.9 $\pm$ 41.5** |

Note that, apart from the auxiliary loss, all the other parameters are kept the same across different methods to ensure fairness. ### How does your modification affect training runtime We measured the mean training time of one epoch (1000 gradient steps) of TD3-N-Unc and TD3-N-Unc + RD on all MuJoCo datasets; the training time of TD3-N-Unc + RD is 1.3x that of TD3-N-Unc.
Although the training time of one epoch increases, we provide evidence in Table 6 (Appendix C) of our paper supporting the efficacy of RD in enhancing convergence speed and its potential to reduce the number of Q ensembles. As shown in Table 6, the performance at 1M and 3M steps of both backbone methods is improved by incorporating RD on a large portion of the datasets, demonstrating the effectiveness of RD in increasing convergence speed and final performance. Additionally, using RD with fewer Q ensembles can achieve similar or even better results than the backbone methods using more Q ensembles, indicating its potential to reduce computing resource consumption. --- Rebuttal Comment 1.1: Comment: Thank you very much for answering my questions. Expert Adroit datasets are not that interesting, but at least your approach improves results on them. I will increase the rating but decrease my confidence in it. Wishing you all the best with your publication. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable advice.
Summary: The authors investigated the effect of generalization on the overestimation of critic values and devised an algorithm that can mitigate overgeneralization. The proposed method, Representation Distinction (RD), is orthogonal to most offline RL algorithms and can thus be applied to most of them. Experimental results show that RD significantly enhances the performance of baseline algorithms. Strengths: ### Originality The paper introduces the Backup-Generalization framework to analyze the overestimation of critic values. The authors devised a novel method of learning a suboptimal policy to sample OOD actions and use them in the later stage of training. They also proposed a simple and effective heuristic that allows a natural transition from PDD to POD as the training progresses. ### Quality Most of the claims are technically sound. The authors have analyzed the performance of their method on popular offline RL benchmarks and conducted multiple interesting ablation studies. ### Clarity The paper is overall well-written and easy to understand. ### Significance Most works on offline RL focus on the backup step. This work instead points out the role of generalization in the overestimation of critic values, which seems like a promising research direction. Also, the proposed method can be applied to most existing offline RL algorithms to improve their performance. Weaknesses: 1. Lines 315~316 are difficult to understand. A pseudocode of CQL+RD will be helpful. 2. A theoretical analysis of how RD differs from DR3 (Kumar et al. 2022) and why using OOD actions is better than just optimizing the DR3 auxiliary loss, $\mathbb{E}_{(s, a, s', a')\sim\mathcal{D}}\left[\Phi(s', a')^\intercal\Phi(s, a)\right]$, is missing from the paper. Adding this will improve the originality of the paper, as this work looks like a slight modification of DR3 at first glance. Also, the authors do not compare their algorithm with DR3 variants of CQL, TD3-BC, and SAC.
Although I expect their performance to be on par with the PDD variants, I believe they should be included in the experiments section. 3. TD3-N-Unc and SAC-N-Unc use an ensemble of N critics. It is unfair to compare their performance with other baselines that only use one critic. ### References Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. DR3: Value-based deep reinforcement learning requires explicit regularization. In *International Conference on Learning Representations*, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Table 3 reports the average normalized score of TD3-N-Unc on the Walker2D-Expert domain to be $4.1\pm 7.1$ and Table 1 reports the average normalized score of TD3-N-Unc on the Walker2D-Medium domain to be $69.9\pm 35.2$. Why does TD3-N-Unc perform a lot better on the medium dataset? &nbsp; ### Minor suggestions Eq. (4) It would be easier for the readers to follow if the authors explicitly mention that $\nabla_{\mathbf{w}}Q_\phi(s, a)=\Phi(s, a)$. Line 315: gruadually $\to$ gradually There are duplicate references: [33] and [41] are both Kumar et al. (2022). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### pseudocode of CQL+RD will be helpful. Here we provide a detailed illustration of the modified CQL based on the insight of RD. The core idea of CQL conservativeness is to increase the Q value of the data in the dataset $\mathcal{D}$ and diminish the Q value of the data generated by a mixed policy $\mu$ comprising the learning policy $\hat{\pi}\_\theta$ and a random policy, as follows: $$ \begin{aligned} \min\_Q \alpha\left(\mathbb{E}\_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]-\mathbb{E}\_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \hat{\pi}\_\theta(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right) \end{aligned} $$ This formulation incorporates the representation distinction of state-action pairs sourced from different distributions. Therefore, in practice, we apply the insights of RD to gradually transfer the original Q-value restriction in the above equation to the Q-value differentiation between the learning policy and the designed OOD policy in the equation below: $$ \begin{aligned} \min\_Q \alpha\left(\mathbb{E}\_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \hat{\pi}\_\theta(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]-\mathbb{E}\_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \pi_\text{ood}(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]\right) \end{aligned} $$ ### this work looks like a slight modification of DR3 at first glance RD and DR3 differ in both the theoretical framework from which they are derived and the regularization effect on out-of-sample data. DR3 is derived from the theory characterizing implicit regularization in TD-learning, which generalizes the implicit regularization effect studied in supervised learning by Blanc et al. (2020) and Damian et al. (2021) to the TD-learning setting in RL. In contrast, RD is derived from the Backup-Generalization framework proposed by us.
From the angle of regularization effect, DR3 is proposed to directly counter the implicit regularization of TD-learning, while RD is designed to suppress the generalization between in-sample data and out-of-sample data. With some heuristics, the DR3 regularizer arrives at a form similar to the RD regularizer, i.e., minimizing the NTK between consecutive state-action pairs in the backup, $(s,a)$ and $(s^{\prime},a^{\prime})$ (where $a^{\prime} \sim \pi(\cdot|s^{\prime})$). In this sense, the DR3 regularizer can be seen as a special case of the RD regularizer in terms of the definition of out-of-sample actions. The regularization effect of DR3 is similar to the Policy-Dataset Generalization Inhibition shown in Figure 2 of our paper. Such regularization can induce over-inhibition of generalization when the distribution of the current policy overlaps with the offline dataset, as illustrated. We appreciate the reviewer's inspiring comment, and we will add more analysis of the differences between RD and DR3 to our paper as suggested. For the empirical comparison between DR3 and RD, we provide additional experimental results as follows. ### comparisons with DR3 Following the suggestion, we compare against TD3-N-Unc with DR3 and with layer norm. The table below shows the results on six datasets. Note that, apart from the auxiliary loss, all the other parameters are kept the same across different methods to ensure fairness. Overall, RD is more helpful than DR3 and Layer Norm.
| Dataset | TD3-N-Unc + RD | TD3-N-Unc + DR3 | TD3-N-Unc + Layer Norm |
| --------------------- | -------------- | --------------- | ---------------------- |
| halfcheetah-medium-v2 | **66.8$\pm$1.2** | 64.4 $\pm$ 1.7 | 63.2 $\pm$ 0.8 |
| hopper-medium-v2 | **103.0$\pm$0.8** | **103.4 $\pm$ 0.7** | 83.0 $\pm$ 29.0 |
| walker2d-medium-v2 | **97.6$\pm$3.4** | 92.1 $\pm$ 2.0 | 68.5 $\pm$ 20.7 |
| halfcheetah-expert-v2 | 103.1$\pm$0.6 | 100.0 $\pm$ 3.7 | **104.4 $\pm$ 1.5** |
| hopper-expert-v2 | **108.8$\pm$0.3** | **108.0 $\pm$ 0.5** | 88.4 $\pm$ 42.8 |
| walker2d-expert-v2 | **111.2$\pm$0.7** | 109.9 $\pm$ 0.4 | **111.7 $\pm$ 0.4** |

### unfair to compare performance with other baselines that only use one critic. Ensemble is a way to implement pessimism, which is complementary to other types of pessimism such as policy constraints and value penalization. Besides, as noted in Appendix F of An et al. (2021), increasing the number of Q-networks in CQL is of little help or even harmful. Thus, aligning the number of Q-networks in each baseline algorithm may not be helpful. In addition, even compared with the ensemble-based baseline EDAC, SAC-N-Unc+RD and TD3-N-Unc+RD can achieve better performance. ### Why does TD3-N-Unc perform a lot better on the medium dataset Table 3 is provided to elucidate the significance of RD's components on different datasets. To achieve this, the hyperparameter settings of the offline RL algorithm itself for each variant in Table 3 are exactly the same. This means that the results of the original TD3-N-Unc in Table 3 are not the optimal results it can achieve, so it is meaningless to compare the performance of the original TD3-N-Unc across different datasets. When we tune the hyperparameters of the original TD3-N-Unc, we can see in Table 6 of the appendix that the performance of TD3-N-Unc on the expert dataset is better than that on the medium dataset. #### Reference * Guy Blanc, Neha Gupta, Gregory Valiant, and Paul Valiant.
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck-like process. In Conference on Learning Theory, pp. 483–513. PMLR, 2020. * Alex Damian, Tengyu Ma, and Jason Lee. Label noise SGD provably prefers flat global minimizers. arXiv preprint arXiv:2106.06530, 2021. * An, Gaon, et al. "Uncertainty-based offline reinforcement learning with diversified q-ensemble." *Advances in Neural Information Processing Systems* 34 (2021): 7436-7447. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The DR3 results should definitely be included in the final version of the paper. I'll raise my score to 6, as most of my concerns have been resolved. By the way, does it mean that the CQL-RD does not have an NTK loss term? --- Reply to Comment 1.1.1: Comment: Thanks for your reply. Yes, CQL-RD does not have an explicit NTK loss term, as the core idea of CQL is to increase the Q value of the data in the dataset and diminish the Q value of the data generated by a mixed policy comprising the learning policy and a random policy, which incorporates the representation distinction of state-action pairs sourced from different distributions. Therefore, we apply the insights of RD to gradually transfer the original Q-value restriction to the Q-value differentiation between the mixed policy and the designed OOD policy.
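The feature dot-product quantity discussed throughout this thread — the term the DR3 auxiliary loss minimizes, and the same quantity RD penalizes between in-sample pairs and pairs from a designed OOD policy — can be sketched as follows. Names and shapes are our assumptions, not the authors' code:

```python
import numpy as np

def feature_dot_penalty(phi_a, phi_b):
    """Batch mean of Phi(s, a)^T Phi(s', a') for two (B, d) arrays of
    penultimate-layer critic features. DR3 pairs consecutive state-action
    pairs in the backup; RD instead pairs in-sample features with features
    of OOD actions, which avoids over-inhibition when the current policy
    overlaps with the offline dataset."""
    return float(np.mean(np.sum(phi_a * phi_b, axis=-1)))
```

Adding `epsilon * feature_dot_penalty(...)` to any critic loss is the plug-in form suggested by the rebuttal.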
Summary: This paper explains the necessity of reining generalization in offline RL and proposes a new method, Representation Distinction (RD), based on this principle. RD shows good performance on D4RL benchmarks. Strengths: - This paper presented a view called the Backup-Generalization Cycle, which explains that the generalization issue should be considered in the training phase. - The experiment shown in Figure 3 is interesting, which demonstrates that overgeneralization is an important issue in offline RL and that the method is able to mitigate it. - By applying RD to existing algorithms, their performance is improved significantly. Weaknesses: - There is a gap between the analysis and the practical implementation. Therefore, it is difficult to identify whether the performance improvement comes from mitigating the overgeneralization issue. - The algorithm introduces some hyperparameters, which may make it a little difficult to tune. - The difference between PDD and RD is not significant. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The experiment in Figure 3 is only conducted on the halfcheetah environment, which is not very convincing as halfcheetah has some special features. Can the authors provide more results on other environments? - In section 6.2, the authors only demonstrated the representations of PDD and RD. Meanwhile, I'm curious about the representation of the original algorithm. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation of this work is that it is limited to continuous control tasks.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### gap between analysis and practical implementation The conceptual illustration of reining generalization via kernel control in Section 4.3 includes two stages: Policy-Dataset generalization inhibition and Policy-OOD generalization inhibition. In the practical algorithm design, we introduce losses $\mathcal{L}_1$ and $\mathcal{L}_2$ for the two stages, respectively, and design a simple heuristic to smoothly transition from stage 1 to stage 2 by dynamically adjusting the weights of $\mathcal{L}_1$ and $\mathcal{L}_2$. Please let us know if you still find this correspondence unclear. ### hyperparameters which may make it a little difficult to tune There are four hyperparameters used in RD: $\alpha$ and $\beta$ to control the OOD data generator so that it creates data with lower Q-values compared to the data produced by the current policy; $M$ to control the balance between $\mathcal{L}\_{1}$ and $\mathcal{L}\_{2}$; and $\epsilon$ to balance $\mathcal{L}\_\text{RD}$ and $\mathcal{L}\_\text{critic}$ of any offline RL algorithm. Note that for all the datasets and algorithms used in the experiments, we set $\alpha=0.6$, $\beta=0.7$, $\epsilon=0.1$. For $M$, we set it to 2e6 when applying RD to CQL. For other algorithms, including TD3BC, SAC-N-Unc, and TD3-N-Unc, $M$ is set to 1/10 of the total training steps. This relatively general setting demonstrates that the hyperparameters can be applied across different algorithms and datasets. The hyperparameter settings of RD are also provided in the appendix. ### difference between PDD and RD is not significant According to Table 3, the differences between PDD and RD on half of the datasets are relatively significant. On the walker2d-expert dataset, the difference is quite large.
Although the differences between PDD and RD on some datasets are not clear, the differences between the algorithm with PDD and the pure algorithm are quite significant, which demonstrates the effectiveness of PDD, an important component of RD, as described in the main text of Section 6.2. ### more results on other environments in Figure 3 We provide the average results of SAC-2 (vanilla SAC) and SAC-2+RD across three seeds after training for 1M gradient steps on more datasets in the following table. It can be observed that RD is helpful in improving SAC-2 without explicit pessimism.

| Dataset | SAC-2 | SAC-2 + RD |
| -------- | ------- | ------- |
| halfcheetah-random-v2 | 29.7 | 30.0 |
| halfcheetah-medium-v2 | 38.2 | 68.5 |
| halfcheetah-medium-replay-v2 | 0.8 | 49.2 |
| halfcheetah-full-replay-v2 | 86.8 | 82.3 |
| hopper-random-v2 | 9.9 | 15.2 |
| hopper-medium-v2 | 0.8 | 2.1 |
| hopper-medium-replay-v2 | 7.4 | 64.3 |
| hopper-full-replay-v2 | 41.1 | 110.0 |
| walker2d-random-v2 | 0.9 | 0.3 |
| walker2d-medium-v2 | -0.3 | -0.2 |
| walker2d-medium-replay-v2 | -0.4 | 52.6 |
| walker2d-full-replay-v2 | 27.9 | 97.3 |

### the representation of the original algorithm We expand Table 4 by adding the corresponding statistics of the original algorithm on halfcheetah-expert-v2 in the following table. The performance of the original algorithm TD3-N-Unc with the same seed is 80.1, which is slightly lower than that of TD3-N-Unc with PDD (82.5). It can be observed that the representations learned by both PDD and RD are better than those of the original algorithm.

| Dataset | Rep of ori | Rep via PDD | Rep via RD |
| -------- | ------- | ------- | ------- |
| expert-expert | 0.83 | 0.87 | 0.94 |
| expert-medium | 0.15 | 0.12 | 0.06 |
| expert-random | 0.01 | 0.01 | 0.00 |
| medium-expert | 0.38 | 0.38 | 0.38 |
| medium-medium | 0.40 | 0.43 | 0.51 |
| medium-random | 0.22 | 0.19 | 0.11 |

--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal.
The authors' reply has addressed most of my concerns. Although it originates from a different motivation, the methodological similarity between this work and previous studies lowers the significance of this work. Therefore, I will maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable advice. We will add a discussion of our method in relation to existing methods in our paper.
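As a side note on the loss balancing described in the first rebuttal above ($M$ governing the shift between $\mathcal{L}_1$ and $\mathcal{L}_2$, and $\epsilon$ balancing $\mathcal{L}_\text{RD}$ against $\mathcal{L}_\text{critic}$), a minimal Python sketch of one possible weighting scheme; the linear ramp is an illustrative assumption, not the paper's exact transition rule:

```python
def rd_objective(l_critic, l1, l2, step, M, eps=0.1):
    """Combine the critic loss with the two RD stage losses.

    l1: stage-1 loss (Policy-Dataset generalization inhibition)
    l2: stage-2 loss (Policy-OOD generalization inhibition)
    M:  step budget controlling the transition between the two stages
    eps: weight balancing L_RD against L_critic (0.1 in the rebuttal)

    The linear ramp below is an illustrative assumption; the paper's
    actual adjustment rule may differ.
    """
    w = min(step / M, 1.0)          # 0 at the start, 1 once step >= M
    l_rd = (1.0 - w) * l1 + w * l2  # smoothly shift from L1 to L2
    return l_critic + eps * l_rd
```

Any monotone schedule with $w(0)=0$ and $w(M)=1$ would realize the same two-stage idea.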
Summary: The paper proposes a flexible plug-in method called Representation Distinction (RD) to address the overgeneralization issue in offline reinforcement learning (RL) algorithms. The authors formalize the process of generalization and investigate the potential to rein in generalization from the representation perspective to enhance offline RL, performing both Policy-Dataset and Policy-OOD generalization inhibition with awareness of the two possible phases of the offline learning course. The proposed method explicitly differentiates between the representations of in-sample and out-of-distribution state-action pairs generated by the learning policy, which significantly improves the performance of the backbone and widely used offline RL algorithms across various continuous control tasks on D4RL datasets. Strengths: + **Clarity**: the paper is well-structured and clear to follow. + **Flexibility and Good Performance**: the authors propose a novel method called Representation Distinction (RD) to improve the performance of offline reinforcement learning algorithms, which explicitly differentiates between the representations of in-sample and out-of-distribution (OOD) state-action pairs generated by the learning policy, demonstrating the efficacy of the proposed approach by flexibly applying RD to specially designed backbone algorithms and widely used offline RL algorithms. + **Insight**: the paper provides an insightful, nuanced, and systematic formalization of the process of generalization in offline RL and then investigates the prevalent overgeneralization issue in offline RL. Weaknesses: + **Missing Citations or Clear Indications**: some recent papers also utilize NTK to analyze the generalization ability of offline RL based on considerations similar to RD's [1, 2, 3]. It is advisable to provide more comparisons and at least analyses of the way different NTK-relevant approaches address generalization issues in offline RL.
+ **Evaluation**: questions about experimental setups and results are listed in the next section. [1] Kumar et al. DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. ICLR 2022. [2] Ghasemipour et al. Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters. NeurIPS 2022. [3] Li et al. When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning. ICLR 2023. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Methodology** + It is recommended to include more comparisons and conduct thorough analyses of how RD and other existing NTK-relevant approaches address generalization issues in offline RL. This would enhance the comprehensiveness and depth of the discussion. + As far as I am concerned, RD via kernel control implicitly performs value regularization at the representation level. It would be beneficial to provide a clearer and more insightful explanation of the potential advantages of applying RD via kernel control instead of value regularization and policy constraints, particularly in the context of offline RL algorithms. **Evaluation** + There appears to be an inconsistency between the results of SAC/TD3-N-Unc+RD in Table 1 and Table 2. It is important to address this discrepancy and provide clarification or additional information to ensure the accuracy and reliability of the reported findings. + Considering that well-established techniques that inhibit potential overestimation induced by extrapolation, such as Lipschitz regularization with spectral norm [4] and layer norm [5], have already been applied in reinforcement learning and are flexible and widely applicable, it would be worth discussing and experimenting with the possibility of leveraging these existing and simpler methods to effectively constrain and smooth the critic approximation, thereby preventing overgeneralization.
+ The comparison between curves obtained from a single seed in Figure 4 lacks experimental validity and may raise concerns about the reliability of the results. It is essential to address this limitation and consider conducting experiments with multiple seeds to improve the robustness and credibility of the presented findings. [4] Gogianu et al. Spectral normalisation for deep reinforcement learning: An optimisation perspective. ICML 2021. [5] Ba et al. Layer normalization. Advances in NeurIPS 2016 Deep Learning Symposium, 2016. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### analyses on how RD and other existing NTK-relevant approaches address generalization issues in offline RL See the global rebuttal. ### RD via kernel control is doing value regularization implicitly on representation level The core idea of RD is to encourage the learned Q-function to yield representations that are as orthogonal as possible between data from the dataset and from $\pi$ in the first stage, and between $\pi$ and $\pi_{ood}$ in the second stage. Taking stage 1 as an example, in comparison to the value regularization on (part of) the Q-value **backup** process of $\pi$ in CQL, the value of $\pi$ under RD is regularized by cutting off the **generalization** from the learning on the dataset. In the extreme case when $\left|\nabla_\phi Q_\phi(s, a)^{\top} \nabla_\phi Q_\phi(s, \pi(s))\right|=0$, the values of the unreliable actions of $\pi$ are kept the same as the initialized ones. Although it achieves a similar result, namely that the values of untrusted actions are low, the mechanism behind RD is completely different from the regularization of the Q-value, as shown by the Backup-Generalization Cycle in Figure 1. ### comparisons with DR3 and layer norm Following the suggestion, we ran experiments comparing TD3-N-Unc with DR3 and with layer norm. The table below shows the results on six datasets. Note that apart from the auxiliary loss, all other parameters are kept the same across the different methods to ensure fairness. Overall, RD is more helpful than DR3 and layer norm.
| Dataset | TD3-N-Unc + RD | TD3-N-Unc + DR3 | TD3-N-Unc + Layer Norm |
| --------------------- | -------------- | --------------- | ---------------------- |
| halfcheetah-medium-v2 | **66.8$\pm$1.2** | 64.4 $\pm$ 1.7 | 63.2 $\pm$ 0.8 |
| hopper-medium-v2 | **103.0$\pm$0.8** | **103.4 $\pm$ 0.7** | 83.0 $\pm$ 29.0 |
| walker2d-medium-v2 | **97.6$\pm$3.4** | 92.1 $\pm$ 2.0 | 68.5 $\pm$ 20.7 |
| halfcheetah-expert-v2 | 103.1$\pm$0.6 | 100.0 $\pm$ 3.7 | **104.4 $\pm$ 1.5** |
| hopper-expert-v2 | **108.8$\pm$0.3** | **108.0 $\pm$ 0.5** | 88.4 $\pm$ 42.8 |
| walker2d-expert-v2 | **111.2$\pm$0.7** | 109.9 $\pm$ 0.4 | **111.7 $\pm$ 0.4** |

### single seed in Figure 4 We used the result of one seed in Figure 4 to clearly depict the performance degradation that occasionally occurs during the later stages of training. The subsequent t-SNE visualization is also based on the model obtained in Figure 4; averaging results over several seeds could blur this effect. In the appendix, we provide the curves of five seeds, which demonstrate the reliability of the results. ### inconsistency between the results of SAC/TD3-N-Unc+RD in Table 1 and Table 2 Thanks for pointing it out. We have carefully revised the results and updated them in the draft. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. The authors' reply has addressed most of my concerns, and the additional comparisons with other methods definitely strengthen the paper's empirical claims, so I will vote for a higher rate of acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable advice. We will add the additional comparisons in our paper.
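The kernel-control condition quoted in the rebuttal above, $\left|\nabla_\phi Q_\phi(s,a)^{\top}\nabla_\phi Q_\phi(s,\pi(s))\right|=0$, is an inner product of flattened parameter gradients. A toy numpy sketch (the gradient vectors here are hypothetical stand-ins, not outputs of a real critic):

```python
import numpy as np

def rd_penalty(grad_in_sample, grad_policy):
    """|∇Q(s,a) · ∇Q(s,π(s))|: the NTK term that RD drives toward zero,
    so that updates on in-sample pairs do not generalize to the policy's
    own actions. Inputs are flattened parameter-gradient vectors."""
    return abs(float(np.dot(grad_in_sample, grad_policy)))
```

When the penalty is zero, the two gradients are orthogonal, so a TD update on $(s,a)$ leaves $Q(s,\pi(s))$ unchanged to first order.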
Rebuttal 1: Rebuttal: ### analyses on how RD and other existing NTK-relevant approaches address generalization issues in offline RL RD vs. DR3 [1] - RD and DR3 differ in both the theoretical framework from which they are derived and their regularization effect on out-of-sample data. DR3 is derived from the theory characterizing implicit regularization in TD-learning, which generalizes the implicit regularization effect studied for supervised learning in [2] and [3] to the TD-learning setting in RL. In contrast, RD is derived from the Backup-Generalization framework proposed by us. - From the angle of regularization effect, DR3 is proposed to directly counter the implicit regularization of TD-learning, while RD suppresses the generalization between in-sample and out-of-sample data. With some heuristics, the DR3 regularizer arrives at a form similar to the RD regularizer, i.e., minimizing the NTK between the consecutive state-action pairs in the backup, $(s,a)$ and $(s^{\prime},a^{\prime})$ (where $a^{\prime} \sim \pi(\cdot|s^{\prime})$). Apparently, the DR3 regularizer can be seen as a special case of the RD regularizer in terms of the definition of out-of-sample actions. The regularization effect of DR3 is similar to the Policy-Dataset Generalization Inhibition shown in Figure 2 of our paper. Such regularization can over-inhibit generalization when the distribution of the current policy overlaps with the offline dataset, as illustrated. RD vs. MSG [4] - MSG uses NTK to characterize the difference in learning dynamics between independent-target and shared-target ensembles for ensemble-based pessimistic offline RL methods, showing that the commonly adopted shared-target ensemble method can lead to a paradoxically optimistic estimate. The NTK analysis in the MSG paper is not directly related to generalization issues in offline RL. Moreover, MSG does not involve representations.
- One thing to notice is that MSG uses the general infinite-width NTK regime proposed by [5], while we follow the NTK notion in the context of RL proposed in [6]. Although the latter is a derivative of the former, they often differ in the settings and aims of analysis, at least for RD and MSG here. RD vs. DOGE [7] - DOGE uses NTK to characterize the difference in Q-function generalization between interpolated and extrapolated data, showing that the Q-function generalizes better to interpolated state-action pairs. This motivates a new plug-in policy constraint that regularizes the learning policy to select actions in the approximated interpolated action space. Such a policy constraint can relieve the over-conservatism issue in prior policy-constraint-based offline RL methods. - In our paper, we use NTK to characterize the general generalization case within the Backup-Generalization framework. The major difference is that DOGE addresses the generalization issue by leveraging a less conservative policy constraint, while RD regularizes generalization at the level of representation. To some degree, DOGE explicitly controls the policy space where generalization issues are addressed; in contrast, RD can be viewed as realizing an implicit control of the policy space through the penultimate-layer representation. Interestingly, we think there is a chance to make use of the definition and approximation of interpolated and extrapolated data in the DOGE paper to design new representation regularization schemes. We will consider this as a future direction. We appreciate the reviewer's inspiring comment and will add more analysis on the differences between RD and the related works above in our paper as suggested. #### Reference [1] Kumar et al. DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization. ICLR 2022. [2] Blanc et al.
Implicit Regularization for Deep Neural Networks Driven by an Ornstein-Uhlenbeck Like Process. In Conference on Learning Theory, pp. 483-513. PMLR, 2020. [3] Damian et al. Label Noise SGD Provably Prefers Flat Global Minimizers. arXiv preprint arXiv:2106.06530, 2021. [4] Ghasemipour et al. Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters. NeurIPS 2022. [5] Jacot, A., Gabriel, F., and Hongler, C. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. arXiv preprint arXiv:1806.07572, 2018. [6] Joshua Achiam, Ethan Knight, and Pieter Abbeel. Towards Characterizing Divergence in Deep Q-learning. arXiv preprint arXiv:1903.08894, 2019. [7] Li et al. When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning. ICLR 2023.
NeurIPS_2023_submissions_huggingface
2023
Hierarchical Gaussian Mixture based Task Generative Model for Robust Meta-Learning
Accept (poster)
Summary: The authors propose a novel probabilistic meta-learning algorithm called HTGM, which models the full hierarchical sampling procedure in few-shot classification in order to perform prediction and novel task detection. To learn the parameters they employ the EM procedure together with variational inference. Experimental results show that it performs best among competitors in few-shot classification prediction and novel task detection. Strengths: - Originality & significance: The authors propose a probabilistic model of the full few-shot classification process which allows for both adaptation (prediction) and novelty detection. Additionally, it can be added on top of other metric-learning few-shot learning algorithms out of the box. - Quality: The probabilistic model seems reasonable, and the authors discuss problems and solutions of the design choices made. The experiments for both prediction and novelty detection are convincing (but see the weakness section). - Clarity: The paper is well-written in general. Weaknesses: - Partition function: The authors show that the partition function is upper bounded and show asymptotic tightness of this bound in the minimal distance $D_{hl}$, but I'm still not fully convinced by this and would like to know more about how it impacts modelling and "what we lose" in disregarding the partition function. - Probabilistic modelling and "tricks": The pipeline has several tricks which may help performance but weaken the interpretation of the model as inferring a correct probabilistic distribution. - Non-standard multi-domain experiments: The datasets of Plain-Multi and Art-Multi are non-standard. Ideally, additional experiments on, for example, Meta-Dataset would be beneficial. - Typos: 1. Line 172: I think $\mu_k^c$ should be $\mu_1^c$ and $\mu_N^c$? 2. Line 314: comapred -> compared 3. Line 341: they even don't fit -> they don't even fit 4.
Table 2: I think the second entry in the "Setting" column should be "5-way, 5-shot" Technical Quality: 3 good Clarity: 3 good Questions for Authors: # Partition Function Could you explain what the potential limitations of disregarding the partition function are? Intuitively, it feels like the partition function would have a regularizing effect, and disregarding it may lead to some overfitting. I would be interested to hear what the authors think the effect of being able to optimize this part of the objective would be. # Tricks - Tying $\theta$ and $\phi$: Is there a reason for letting the recognition network $q$ be equal to $f$? Is the tying of the parameters done out of convenience because it empirically was found to work well, or is there some other reason? - Negative sampling and partition function: Show empirical results that the quantity $D\_{hl}$ goes to infinity over the course of training. - Model adaptation: Is it possible to put the adaptation step on a probabilistic footing? Right now it seems pretty ad hoc; can we, for example, trust $p(y'\_i = j' | x\_i')$ more as a probabilistic model compared to using any standard (non-probabilistic) metric-learning few-shot algorithm which produces an embedding $f_{\theta}(x)$? One way to show this would be to show that $p$ here is better calibrated than using the backbone trained by ProtoNet or some other algorithm. # Non-standard experimental dataset Meta-Dataset is the standard modern community benchmark for few-shot multi-domain classification. I'd be interested to see how the algorithm fares on this benchmark, since the probabilistic model explicitly models the multi-domain aspect of few-shot learning, for which Meta-Dataset seems like a good fit. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No, certain limitations should be expanded upon (see Questions). Societal impact is discussed correctly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the constructive feedback. Here are our responses. Q1. Limitations of disregarding the partition function Response: The main issue with replacing the partition function by its upper bound is that it increases the noise in the inferred class means. Note that the entropy of the distribution $\pi(\mu_k^c|v_{\tau},w)$ ($\mu_k^c$ is a class mean) in Eq. (2) is: $$H(\pi)=-\int\pi(\mu_k^c|v_{\tau},w)\log\pi(\mu_k^c|v_{\tau},w)d\mu_k^c =\log Z+\bar{E}_w$$ where $\bar{E}_w$ is the average energy. Therefore, replacing $Z$ with its upper bound increases the entropy of $\pi(\mu_k^c|v_{\tau},w)$, leading to more noise in $\mu_k^c$ and hence decreased meta-learning accuracy. To alleviate this issue, we proposed to include a negative sampling term, as in Eq. (4), to increase the class-mean distances and reduce the influence of the noise. Q2.1 Tying parameters. Response: The purpose of the inference network $\phi$ in $q_{\phi}$ is to infer $v_{\tau}$ from the support set $D_{\tau}^{s}$ in the embedding space, where the input $D_{\tau}^{s}$ is the same as for the encoding network $f$. Thus we built the inference network $\phi$ upon $f$ (with two aggregation functions) to minimize parameter use and avoid an overcomplicated model. As in Fig. 1, the two aggregation functions first aggregate the sample embeddings in the support set $D_{\tau}^{s}$ into class embeddings, and then aggregate the class embeddings into the task embedding $v_{\tau}$. This is similar to a VAE, where the inference network is an encoder; in our case, the inference network coincides with the encoder (a.k.a. the base model) of metric-based meta-learning. On the other hand, we think it is unnecessary to define two encoders, one for metric-based meta-learning, i.e., classifying the sample embeddings, and another for the inference network, i.e., inferring task embeddings.
This is because the task embedding is inferred from the class embeddings; thus classifying sample embeddings is inevitable in the inference network, making the former encoder a duplicate. Hence, it is intuitive to build $\phi$ in $q_{\phi}$ upon $f$. However, it is possible to add more parameters on top of $f$ to form $\phi$, such as using parametric aggregation functions rather than non-parametric ones, as the framework is flexible. In this work, we used the simplest model structure, as it was found sufficiently effective to demonstrate the usefulness of the proposed model. Q2.2 Empirical results on the quantity $D_{hl}$. Response: We tested the average $D_{hl}$ of our model at different training steps. With $D_{hl}$, we can estimate the error ratio of our approximation to the partition function, because in our model the error ratio $\frac{N\sqrt{2^{d-1}\pi^d}-Z}{N\sqrt{2^{d-1}\pi^d}}$ is bounded by $er(D_{hl})=\frac{\gamma(\frac{d}{2},D_{hl}^2/4)}{(\frac{d}{2}-1)!}$. We calculated this bound for each average $D_{hl}$ and included the results in Fig. 2 of the common rebuttal. As we can see, as training proceeds, the error ratio of our approximation monotonically decreases. At step 7000, where the error ratio is 0.25, the model attains the best validation accuracy, which means that 0.25 is sufficient for the model to handle the tasks. Q2.3 Model adaptation with a probabilistic explanation. Response: Yes, this model adaptation has a probabilistic explanation consisting of two steps. First, we learn class means on the support set: when a meta-testing episode arrives, for each class $y_j$, we first encode its support samples to acquire its support sample embeddings $e^{(j)}_1,e^{(j)}_2,\dots,e^{(j)}_k$. Then we infer the task embedding $v_{\tau}$ via $q_{\Phi}(v_{\tau}|D^s_{\tau})$ (line 208). After that, we freeze the other parameters and learn the class embedding $\bar{\mu}_j^c$ for each class $y_j$ via maximum likelihood estimation.
According to lines 184-187, the likelihood of observing the support set of each class $y_j$ is $$L(\bar{\mu}_j^c)=\log p(y_j|v_{\tau})\prod_{i=1}^k p(e_i^{(j)}|y_j)=\log \pi(\bar{\mu}_j^c|v_{\tau},w)\prod_{i=1}^k\mathcal{N}(e^{(j)}_i|\bar{\mu}_j^c,\Sigma)$$ i.e., $p(y_j|v_{\tau})$ is the likelihood of step (a) in line 184, and $\prod_{i=1}^k p(e^{(j)}_i|y_j)$ is the likelihood of step (b) in lines 185-186 (the inner loop). Our theoretical analysis in the common rebuttal shows that the adaptation step $\bar{\mu}_j^c=\alpha\mu_j^c + (1-\alpha)W_{l}v_{\tau}$ maximizes the above likelihood function. However, our empirical evaluation shows that sometimes the empirically optimal value of $\alpha$ differs from the theoretically optimal value $\frac{k}{k+2\sigma^{2d}}$ (see the common rebuttal). We suggest that this might be due to the noise in the partition function approximation (see the answer to Q1). Therefore, we treat $\alpha$ as a hyperparameter to finetune on the validation set. Second, we classify the query sample $x'_i$: we first compute $p(y'_i=j'|x'_i)$, i.e., the posterior distribution of the query sample's label conditioned on the sample, and then select the label with the highest probability. Given the learnt class means, we compute the conditional distribution of the label: \begin{equation} p(y'_i=j'|x'_i) = \frac{p(y'_i=j',x'_i)}{\sum_{j=1}^N p(y'_i=j,x'_i)} =\frac{p(x'_i|y'_i=j')p(y'_i=j')}{\sum_{j=1}^N p(x'_i|y'_i=j)p(y'_i=j)} \end{equation} During the meta-testing stage, we treat each class equally, i.e., we assume $p(y'_i=1)=\dots=p(y'_i=N)$. So we have: \begin{equation} p(y'_i=j'|x'_i) = \frac{\exp(-||f_{\theta}(x'_i)-\bar{\mu}_{j'}^c||^2_2)}{\sum_{j=1}^N\exp(-||f_{\theta}(x'_i)-\bar{\mu}_{j}^c||^2_2)} \end{equation} Q3. Non-standard experiment dataset Response: We used the current datasets in our experiments because they are regarded as benchmarks and were used by other mixture-distribution-based meta-learning works (e.g., [47]).
Among them, the Plain-Multi benchmark consists of four datasets that also exist in Meta-Dataset. The main difference is that this benchmark does not include Mini-ImageNet; thus, we provided an experiment on the Mini-ImageNet dataset in Appendix D.6. --- Rebuttal Comment 1.1: Title: Thank you Comment: I appreciate the detailed response. I believe Q1 and Q2 to be successfully answered, but I am unsure whether the method will work on harder datasets such as Meta-Dataset. I have decided to keep my score as is, as I still think the work has merit and should be accepted. --- Reply to Comment 1.1.1: Comment: Thank you so much! We will include more discussion in our draft if the paper is accepted. Best regards, Authors of this paper.
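The two-step adaptation described in the response to Q2.3 above (interpolating each class prototype with the projected task embedding, then a softmax over negative squared distances to the adapted means) can be sketched as follows; all names, shapes, and toy values are hypothetical:

```python
import numpy as np

def adapt_class_means(support_emb, task_emb, W_l, alpha):
    """Adapted class mean: alpha * prototype + (1 - alpha) * W_l @ v_tau.

    support_emb: dict class -> (k, d) array of support embeddings
    task_emb:    (d_v,) inferred task embedding v_tau
    W_l:         (d, d_v) projection matrix
    alpha:       interpolation weight (tuned on a validation set)
    """
    task_mean = W_l @ task_emb
    return {c: alpha * e.mean(axis=0) + (1.0 - alpha) * task_mean
            for c, e in support_emb.items()}

def classify(query_emb, class_means):
    """Softmax over negative squared distances to the adapted class means,
    matching a Gaussian likelihood with equal class priors."""
    classes = sorted(class_means)
    d2 = np.array([np.sum((query_emb - class_means[c]) ** 2) for c in classes])
    logits = -d2
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return classes[int(np.argmax(p))], dict(zip(classes, p))
```

Here `alpha` plays the role of the hyperparameter $\alpha$ the authors finetune on the validation set.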
Summary: This paper studies the task distribution in meta-learning and proposes modeling tasks with multimodal distributions. To enable efficient modeling and inference, the authors develop a hierarchical Gaussian mixture task generative model and optimize the meta-learning model via expectation maximization. The performance of the method is verified on few-shot image classification tasks. Strengths: 1. This paper is written clearly, and the modeling of the task distribution has great significance in the literature. 2. It shows improved performance on the adopted benchmarks over existing metric-based methods. Weaknesses: (1) Probabilistic relations and graphical models. It is necessary to explain why the generative process in Eq. (1) is valid in metric-based meta-learning. In practice, meta-learning seldom considers the sample embeddings, and the task embedding is of interest. Hence, the conditional distribution $\ln\int p(D_{\tau}^{q}| v_{\tau})p(v_{\tau}\vert D_{\tau}^{s})dv_{\tau}$ is most commonly used as the objective to maximize. However, in Eq. (1), the conditional dependencies on the support dataset are neglected. My suggestion is that this needs to be reflected in Eq. (1) directly. (2) Model optimization. In Lines 201-202, it says, "the query set is not included because it is unavailable during model testing". This does not hold in Bayesian meta-learning. As in conditional variational autoencoders or neural processes, $q_{\phi}(v_{\tau}|D_{\tau}^{s})$ is called the approximate prior, and the approximate posterior $q_{\phi}(v_{\tau}|D_{\tau}^{s}, D_{\tau}^{q})$ is used in meta-training for more effective inference (though the query set is unavailable in meta-testing). Hence, I doubt the way of inference in this work, since the conditional prior is most important for meta-learning probabilistic models. In particular, in the way $e_i$ is inferred throughout the manuscript, e.g., $p(e_i|y_i)$ in Eqs. (3)-(7), the one-hot label information can be quite limited.
However, the input $x_i$ is not included in the inference. (3) Mixture of task distributions. That the task embeddings lie on a mixture-of-Gaussians manifold is a reasonable assumption. In the task2vec work, the embedding of the task is visualized and analyzed a bit. Since this work focuses on the task distribution, it is necessary to include this part in the experimental analysis, for example, checking the grouping of latent structures in the task space. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: (1) Would you please provide a detailed probabilistic graphical model to explain the relationships between the support set, the query set, the sample-wise embeddings, etc.? (2) Since the experiments were performed on typical benchmarks, would you explain the novel tasks a bit and how the discovery of novel tasks is defined? (3) Are the task distributions the same for meta-training and meta-testing, or do out-of-distribution cases exist? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for helping us refine the work. Before answering the questions, please allow us to clarify our problem setting. In this work, we aim to develop a robust meta-learning model for a mixture task distribution such that when a meta-testing task (e.g., few-shot classification) arrives, the model can: 1. identify whether the task comes from the same mixture distribution as the meta-training set (i.e., a known task) or from a different distribution (i.e., a novel task); 2. if the meta-testing task is a known task, adapt the model to this task with the task-level information for boosted performance. The motivation is that in areas requiring robust decisions, such as healthcare, where an accuracy drop on novel tasks is inevitable, a robust and ethically safe choice is to raise an alarm to the users (e.g., doctors) for diagnosis and decision-making. Response to Weakness (1): Please let us justify our design from two perspectives. 1. The relation to the commonly used $\ln\int p(D^q_{\tau}|v_{\tau})p(v_{\tau}|D^s_{\tau})dv_{\tau}$. As in the aforementioned problem setting, our work handles both novel task detection and meta-learning. The commonly used $\ln\int p(D^q_{\tau}|v_{\tau})p(v_{\tau}|D^s_{\tau})dv_{\tau}=\ln p(D^q_{\tau}|D^s_{\tau})$ in existing works only considers how to infer the query set information (e.g., labels) given the support set (i.e., meta-learning). We therefore propose to handle novel task detection by modeling the likelihood of a task, $p(D^q_{\tau},D^s_{\tau})$ (Eq. (1)), so that we can use a low likelihood score to detect novel tasks in meta-testing. Also, note that $p(D^q_{\tau},D^s_{\tau})=p(D^q_{\tau}|D^s_{\tau})p(D^s_{\tau})$; thus, our training objective also contains the information of the commonly used likelihood function. 2. The relation between $p(D_{\tau}|v_{\tau})$ and the sample embeddings. In our work, we model $p(D^q_{\tau}|v_{\tau})$ as (Eq.
1): $$\log p(D_{\tau}=\\{(x_i,y_i)\\}|v_{\tau})=\sum_{i=1}^n \log p(x_i|y_i)p(y_i|v_{\tau})$$ This is because we assume the samples in a task are i.i.d., so we can factorize the distribution into each sample's probability density. We model each sample's probability density through its embedding and class mean: $$p(x_i|y_i) = \mathcal{N}(e_i|\mu_{y_i}^c,\sigma^2 I)\propto\exp\Big(-\frac{||f_{\theta}(x_i)-\mu_{y_i}^c||^2_2}{2\sigma^2}\Big)$$ This operation is common in representation learning. For example, the authors of [1] applied a similar definition; the difference from ours is that they used the inner product rather than the Euclidean distance. In Eq. 1 of our draft, we directly wrote $p(x_i|y_i)$ as $p(e_i|y_i)$ to emphasize that this probability can be efficiently computed in the embedding space. Response to Weakness (2): The difference between our model and the examples in the review (conditional VAE and neural processes) is as follows. In our case, during meta-testing, the inferred task embedding $v_{\tau}$ is fed to the learnt GMM to compute the prior likelihood $p(v_{\tau})$, and this likelihood value is used to detect novel tasks. For effective detection, we need the inferred $v_{\tau}$ of in-distribution tasks in meta-testing to follow the same distribution as the inferred $v_{\tau}$ of meta-training. In the meta-testing stage, since the query set is not available for novel task detection, we can only infer $v_{\tau}$ via $q(v_{\tau}|D^s_{\tau})$. If we used $q(v_{\tau}|D^s_{\tau}, D^q_{\tau})$ in meta-training for more powerful inference, there could be a distribution discrepancy between the $v_{\tau}$ of in-distribution tasks in the meta-training set and in the meta-testing set. Specifically, the variance of $q(v_{\tau}|D^s_{\tau})$ may be larger than that of $q(v_{\tau}|D^s_{\tau}, D^q_{\tau})$, as the former is estimated on fewer samples. Response to Weakness (3): We included the visualization of the task embedding clusters in the common rebuttal PDF file.
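The novel-task detection rule in the response to Weakness (2) above, scoring the inferred task embedding $v_{\tau}$ under the learned GMM prior and flagging low-likelihood tasks as novel, can be sketched as below; the mixture parameters, the isotropic-covariance simplification, and the threshold are all hypothetical:

```python
import numpy as np

def gmm_log_likelihood(v, weights, means, sigma):
    """Log p(v) under an isotropic Gaussian mixture prior.

    v: (d,) inferred task embedding; weights: (K,) mixture weights;
    means: (K, d) component means; sigma: shared scalar std (a simplifying
    assumption for this sketch)."""
    d = v.shape[0]
    sq = np.sum((means - v) ** 2, axis=1)                       # (K,)
    log_comp = -0.5 * sq / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log-sum-exp over mixture components for numerical stability
    a = log_comp + np.log(weights)
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def is_novel(v, weights, means, sigma, threshold):
    """Flag the task as novel when its prior likelihood is low."""
    return gmm_log_likelihood(v, weights, means, sigma) < threshold
```

In practice the threshold would be chosen on held-out in-distribution tasks.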
Q1. Detailed probabilistic graphical model? We have drawn the probabilistic graphs for both generation and inference in the common rebuttal PDF file (Fig. 3). Q2. Explain the novel tasks and how to define the discovery of novel tasks? In typical benchmarks of meta-learning, we evaluate models by sampling a meta-testing dataset, where each data instance is a task containing a support set and a query set, both sampled from the same distribution as the meta-training dataset. The accuracy of meta-learning algorithms usually refers to the average performance of the model over all meta-testing tasks. In our work, a novel task, as explained in lines 22-25 and lines 312-316, is a task whose support samples and query samples are not drawn from the same distribution as the meta-training dataset. In our novel task detection experiment, as explained in lines 312-316, we applied two typical benchmarks, Art-Multi and Plain-Multi, whose data samples (images) are drawn from different distributions. We trained meta-learning models only on the Plain-Multi dataset. Then, during the meta-testing stage, we mixed the meta-testing tasks drawn from Art-Multi and Plain-Multi and evaluated the models' ability to detect the meta-testing tasks drawn from Art-Multi (i.e., novel tasks). Q3. Do out-of-distribution cases exist? If we understand correctly, this question is about the settings in our two experiments. In the few-shot classification experiment, since some baselines cannot handle out-of-distribution cases, for a fair comparison we only evaluated the accuracy of our model and the baseline models on in-distribution tasks. In this case, out-of-distribution tasks do not exist in the meta-testing set. In the novel task detection experiment, we compared our model with the baselines that can detect out-of-distribution tasks.
In this case, we mixed the in-distribution tasks and out-of-distribution tasks and evaluated the ability of the models to detect the out-of-distribution tasks. [1] Jian Tang, et al. LINE: Large-scale Information Network Embedding. WWW 2015 --- Rebuttal 2: Comment: Dear Reviewer KXFG, Thank you so much for providing constructive feedback on our work. We have tried our best to address the questions in your review. We would sincerely appreciate hearing from you on whether there are any remaining questions or concerns, so that we can take this opportunity to further clarify them and improve our work. Thanks a lot! Best regards, Authors of this paper --- Rebuttal Comment 2.1: Comment: Thanks for the response. Most of my concerns are addressed. Some can be future exploration, e.g., investigating an inference method with a conditional prior like that of neural processes. Also, it is encouraged to release the code later for better comprehension. I've increased my score. --- Reply to Comment 2.1.1: Comment: Dear Reviewer KXFG, We sincerely appreciate your review and comments! If this paper is accepted, we will discuss the future explorations accordingly and open-source our code. Best regards, Authors of this paper.
Summary: This paper proposes a Hierarchical Gaussian Mixture method as a means to parameterize the task generation process in meta-learning. The authors suggest that their proposed model can effectively fit a mixture of task distributions and evaluate the scoring of testing tasks. The effectiveness of the proposed method is demonstrated through experiments conducted on multiple datasets. Strengths: * This paper is well-written and easy to follow * It is interesting that the proposed method can both meta-learn a mixture of task distributions and detect novelty in testing tasks. Weaknesses: * In this paper, the authors present an application of the Hierarchical Gaussian Mixture model to the meta-learning setting. While the paper provides ideas on utilizing this model for meta-learning, the method itself is not novel and has been extensively studied within the machine learning community. * The paper's theoretical analysis appears limited and would benefit from a more in-depth analysis. Specifically, it is crucial to provide a comprehensive analysis of the generalization bound associated with the proposed methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have clearly addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Here is our feedback: Response to Weakness 1: In this paper, the authors present an application of the Hierarchical Gaussian Mixture model to the meta-learning setting. While the paper provides ideas on utilizing this model for meta-learning, the method itself is not novel and has been extensively studied within the machine learning community. Response: Please let us clarify the difference between our method and the existing works on the Hierarchical Gaussian Mixture (HGM) model (which was also discussed in the last paragraph of Section 2 and in Appendix B.2). To the best of our knowledge, the Hierarchical Gaussian Mixture (HGM) model has appeared in some traditional works [8,30,3] for hierarchical clustering, by applying the Gaussian Mixture model agglomeratively or divisively on the input samples. They are unsupervised methods that infer clusters of samples, but do not pre-train embedding models (or parameter initializations) that could be fine-tuned for adaptation to new tasks in meta-learning. Therefore, these methods are markedly different from meta-learning methods, and we think it is a non-trivial problem to adapt the concept of HGM to solve the meta-learning problem. To this end, we need to (1) identify the motivation; and (2) solve the new technical challenges. For (1), we found that the hierarchical structure of mixture distributions naturally appears when we want to model the generative process of tasks from a mixture of distributions, where each task contains another mixture distribution of classes (as suggested by Eq. 1). In other words, the motivating point of our method is more about meta-learning than HGM. However, we think drawing such a connection between meta-learning and HGM is a novel contribution. For (2), our method differs from traditional HGM in (a) its generative process of tasks (Sec.
3.1), which is a theoretical extension of the widely used empirical process of generating tasks in meta-learning; (b) its Gibbs-style task-conditional distribution (Eq. 2) for fitting uniformly sampled classes; (c) the metric-based end-to-end meta-learning framework (Fig. 1) (note that traditional HGM is not for learning embeddings); (d) the non-trivial derivation of the optimization algorithm in Sec. 3.2 and Alg. 1; and (e) the novel model adaptation process in Sec. 3.3. Solving the technical challenges in the new generative model is another novel contribution of the proposed method. As such, we think our work is a new meta-learning method with a mixture task generative model. Response to Weakness 2: The paper's theoretical analysis appears limited and would benefit from a more in-depth analysis. Specifically, it is crucial to provide a comprehensive analysis of the generalization bound associated with the proposed methods. Response: In this paper, we have mainly provided theoretical contributions from the following perspectives: 1. We extended the widely used empirical process of generating a task to a theoretical process specified by a hierarchy of Gaussian mixture (GM) distributions. HTGM generates a task embedding from a task-level GM, and uses it to define the task-conditioned mixture probabilities for a class-level GM, from which the data samples are drawn to instantiate the generated task. 2. To address the challenge of computing the partition function of the Gibbs distribution in the aforementioned class-level GM (in Eq. (2)), we propose to replace it with a theoretically justified upper bound (in Theorem 3.1). 3. In the global rebuttal (and the uploaded PDF file), we additionally provide a theoretical result indicating that our simple model adaptation strategy (in Section 3.3) is actually a Maximum Likelihood Estimate of the class means. We think the above three contributions sufficiently justify the novel designs in our proposed model.
Meanwhile, it is noteworthy that in this work, we actually handle the generalization gap by detecting novel (i.e., out-of-distribution) tasks via their low likelihood and raising alerts. Therefore, the generalization bound is only meaningful for the in-distribution tasks. As for the general in-distribution case of non-convex models trained with SGD, [1] gives a good generalization bound, ensuring the effectiveness of the general framework applied in our model. Therefore, in this work, we mainly focus on theoretically justifying the aforementioned unique designs. [1] Kuzborskij, Ilja, and Christoph Lampert. "Data-dependent stability of stochastic gradient descent." International Conference on Machine Learning. PMLR, 2018. --- Rebuttal 2: Comment: Dear Reviewer XMuD, We are very thankful for your review of our paper. We have provided our responses to the questions and concerns in your review. We would sincerely appreciate it if you could let us know whether there are any remaining questions or concerns, so that we can take this opportunity to refine our work. Thanks a lot! Best regards, Authors of this paper --- Rebuttal Comment 2.1: Title: Thanks for your response Comment: Dear authors, Thanks for your response. I read the rebuttal and it addressed some of my concerns; I increased my score to 5. --- Reply to Comment 2.1.1: Title: Thank you for your feedback! Comment: Dear Reviewer XMuD, We are really thankful for your review and feedback! Best regards, Authors of this paper.
Summary: In realistic scenarios, training tasks and test tasks may come from different distributions. However, most existing meta-learning methods do not take this into account. Even those that do, do not jointly handle the detection of novel tasks. The authors propose a metric-based meta-learning model that handles both a mixture of task distributions and the detection of novel tasks. The effectiveness of the proposed method compared to previous methods is verified on several benchmark datasets. Strengths: 1. It is novel to approximate the distribution over tasks. In particular, modeling it as a mixture of distributions aligns with realistic scenarios. 2. The paper is well-written; it clearly explains its goal with mathematical support, and the notation is well-described and consistent. The figures are also helpful for understanding the proposed idea. 3. Although Eq.(3) still cannot be computed due to the normalizing constant in $p_\omega(y_i|v_\tau)$, the authors provided a reasonable workaround in Theorem 3.1, which is also an interesting contribution. Weaknesses: 1. Although $l_{HTGM}$ is well-motivated and clear, $l_{neg}$ is somewhat heuristic. 2. If my understanding is correct, a large part of the computational overhead compared to baseline methods, e.g., ProtoNet, comes from the E-step and the GMM. In particular, running the GMM at each epoch can be quite expensive. But an analysis of the computational cost is not provided. Can the authors provide the computational overhead (e.g., in wall-clock time or flops) of the proposed method (and the time complexity if possible)? 3. To demonstrate the effectiveness of the method on cross-domain tasks, the experiments on the two datasets the authors used are not sufficient, especially if we want to validate whether it works in practice or not. It would have been much more plausible if experiments had been conducted on a larger-scale dataset such as Meta-Dataset. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
I am a bit confused by Eq.(4)-(5). Here, the objective of the additional negative sampling loss is to make the different class means disperse. To this end, Eq.(4) is used. But, if we minimize $l_{neg}$, wouldn’t it force $e_{j}$ and $\mu$ to be close to each other? Please correct me if I am wrong. 2. Could the authors elaborate on how to interpret Figure 2? In my understanding, the more distant two tasks are, the better a model is at distinguishing the two task groups. If so, measuring the distance between the two normalized likelihood histograms would be helpful (treating them as probability distributions if necessary). Intuitively, the distance may be roughly estimated as the difference between the red and blue regions. I do not see a significant difference between the models (except HSML). 3. Is there any particular reason why EM is used over VI? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weaknesses and questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the constructive feedback. We sincerely appreciate your valuable suggestions and questions. The following are our responses. Q1. I am a bit confused by Eq.(4)-(5). Here, the objective of the additional negative sampling loss is to make the different class means disperse. To this end, Eq.(4) is used. But, if we minimize $l_{neg}$, wouldn’t it force $e_{j}$ and $\mu$ to be close to each other? Please correct me if I am wrong. Response: Thank you for the question. In our draft, we are maximizing our objective function (the lower bound of the likelihood in Eq. (3)). Thus, when we add the negative sampling term to the objective function, we are jointly maximizing it together with Eq. (3) (not minimizing it), so that the optimization forces the class means to disperse. Q2. Could the authors elaborate on how to interpret Figure 2? In my understanding, the more distant two tasks are, the better a model is at distinguishing the two task groups. If so, measuring the distance between the two normalized likelihood histograms would be helpful (treating them as probability distributions if necessary). Intuitively, the distance may be roughly estimated as the difference between the red and blue regions. I do not see a significant difference between the models (except HSML). Response: Your understanding is right. To better interpret the figures, we calculated the ratio of non-overlap area to total area in the four figures. The higher this value is, the more distant the two distributions are. Here are the results: HSML: 0.1379; MetaOpt: 0.4952; ProtoNet-Aug: 0.4806; HTGM: 0.5578. As we can see, the non-overlap-area ratio of HTGM is the largest, which means that the two distributions are the most distant. We will include these results in a revised draft. Q3. Is there any particular reason why EM is used over VI? Response: If we are understanding correctly, this question is asking why we use EM to train the variational inference loss instead of using common SGD (e.g.
Variational Auto-Encoder uses SGD). The reason is that the GMM in our model cannot be trivially trained with SGD. This is because the GMM has constraints on its parameters (e.g., its covariance matrix must be positive semi-definite). Therefore, we apply the EM algorithm for VI (variational inference), so that in the E-step, we can use a non-SGD algorithm to optimize the GMM, and in the M-step, we can optimize the other neural network parameters with SGD. Detailed algorithms are included in Appendix A.4. If VI were optimized directly with SGD, the GMM might end up with invalid parameters. Q4. (From Weakness 2) Can the authors provide the computational overhead (e.g., in wall-clock time or flops) of the proposed method (and the time complexity if possible)? Response: Yes. Let us report the cost of ProtoNet-Aug, NCA, FEATS, and our model, because they use the same encoder architecture. We evaluated and trained all of the models on an RTX 6000 GPU with 24 GB of memory. We will include the time comparison in the Appendix. Training time: According to the training logs, training ProtoNet-Aug took 10 hours, training NCA took 6.5 hours, training FEATS took 10.5 hours, and training the proposed model HTGM took 13 hours. Please note that our algorithm and FEATS require pre-training the encoder with ProtoNet-Aug; the 10.5 and 13 hours include this 10-hour pre-training phase. The major cost of our model is not from the energy function, because we have reduced its partition function to a constant using Eq. (6) in Theorem 3.1, whose training cost is negligible. The higher cost is because (1) our model needs to refit a GMM in every EM step and (2) jointly learning the generative model and the classification model takes more learning steps to converge. Given the pre-trained ProtoNet-Aug encoder, FEATS took about 3000 steps to converge, while the proposed model took about 10000 steps to converge.
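To make the alternating scheme from the response to Q3 concrete, here is a hedged sketch of a closed-form E-step/M-step loop for a spherical GMM on embeddings; note the closed-form updates keep the variances valid by construction, which is exactly what plain SGD does not guarantee. All names and details are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_gmm_em(X, k, n_iter=30):
    """Tiny EM for a spherical GMM on embeddings X of shape (n, d).

    Returns component means (k, d), per-component variances (k,), and
    mixture weights (k,). The closed-form M-step always yields
    non-negative variances and normalized weights.
    """
    n, d = X.shape
    mu = X[np.linspace(0, n - 1, k).astype(int)]  # simple deterministic init
    var = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = p(component j | x_i).
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, k)
        logp = -0.5 * (sq / var + d * np.log(2 * np.pi * var)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)                   # stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for means, variances, weights.
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r * sq).sum(axis=0) / (nk * d) + 1e-8
        pi = nk / n
    return mu, var, pi
```

In the full training loop described above, such a GMM refit would alternate with SGD updates of the encoder parameters; the GMM step itself never touches SGD.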
So we agree with the reviewer that the EM training algorithm incurs more computational overhead. However, the advantage that comes with this training cost is better classification and novel task detection performance. Test time: Because we approximated the partition function of the energy function with a constant upper bound, we add almost zero computational cost to model inference. The test times of NCA, FEATS, ProtoNet, and our model are all around 85-95 seconds for 1000 tasks. Q5. (From Weakness 3) It could have been much more plausible if experiments were conducted on a larger-scale dataset such as Meta-Dataset. Response: Thank you for this suggestion. We used the current datasets in our experiments because they are regarded as benchmarks and are used by other mixture-distribution-based meta-learning works (e.g., [47]). Among them, the Plain-Multi benchmark consists of four datasets that also exist in Meta-Dataset. The main difference is that this benchmark does not include Mini-ImageNet. Thus, we added the following discussion and an experiment on the Mini-ImageNet dataset to the Appendix. "In the case when the task distribution is not a mixture, our model would degenerate to, and perform similarly to, the general metric-based meta-learning methods, e.g., ProtoNet, which only consider a uni-component distribution. To confirm this, we added an experiment that compares our model with ProtoNet-Aug on Mini-ImageNet, which does not have the same explicit mixture distributions as the Plain-Multi and Art-Multi datasets in Section 4. The results are summarized in the following table. From the table, we observe that our method performs generally better than ProtoNet-Aug, which validates the aforementioned conjecture.
Meanwhile, together with the results in Table 1 and Table 2, the proposed method could be considered a generalization of the metric-based methods to a mixture of task distributions."

|Model | 5-way-1-shot | 5-way-5-shot |
|------------|--------------|--------------|
|ProtoNet-Aug|59.40±0.93 |**74.68±0.45**|
|HTGM |**61.80±0.95**|74.55±0.45 |

--- Rebuttal Comment 1.1: Title: Response to the Authors Comment: Thank you for the detailed explanations. It is much clearer; in particular, the comparison in time would be helpful for practitioners, and the response to Q5 is insightful for better understanding the relationship between the metric-based model and the proposed model. One last thing I want to ask the authors is whether they have a plan to release the code upon acceptance. I believe it would be helpful for readers to better understand the paper if the code is available. --- Reply to Comment 1.1.1: Title: Response by authors Comment: Thank you so much! Sure, we will open-source the code if the paper is accepted. Also, if this paper is accepted, we will include the discussions from the rebuttal in the draft.
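For reference, the non-overlap-area ratio that the authors use above to compare the likelihood histograms in Figure 2 can be computed from two samples of likelihood values as follows; this is our own minimal sketch with illustrative names, not the authors' code:

```python
import numpy as np

def non_overlap_ratio(a, b, bins=50):
    """1 minus the overlap area of the two normalized histograms of a and b.

    Both samples are binned over a shared range; higher values mean the two
    likelihood distributions are more distant (better task separation).
    """
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    ha, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    overlap = np.minimum(ha, hb).sum() * width  # area shared by both densities
    return 1.0 - overlap

rng = np.random.default_rng(0)
in_dist, novel = rng.normal(0.0, 1.0, 5000), rng.normal(6.0, 1.0, 5000)
assert non_overlap_ratio(in_dist, in_dist) < 1e-9  # identical -> full overlap
assert non_overlap_ratio(in_dist, novel) > 0.9     # far apart -> nearly disjoint
```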
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely appreciate your valuable comments that help us refine our work. If you have more questions or concerns about our response or the current draft, please let us know. We are happy to discuss them with you. We have uploaded a PDF file that contains figures and additional theoretical analysis for reference. Best regards, Authors Pdf: /pdf/5c5c3c996b433c127e7cc89f8f2127754d68b9cb.pdf
NeurIPS_2023_submissions_huggingface
2023
InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding
Accept (poster)
Summary: The authors formulate soft-prompt tuning as maximising mutual information between prompts and model parameters/encoded representations. The prompt is trained to maximise mutual information with the language model head parameters and with the representation of a task encoding from the model. The authors primarily test on GLUE, relation extraction, and NER tasks, and find that their approach closes the gap with full fine-tuning the most. Analysis suggests that the proposed method is more stable and converges faster during training, perhaps due to the smoother loss landscape of the info loss. Strengths: Interesting approach, with clear motivation and description. The experiments are thorough, over a reasonable number of tasks, and show clear improvements over baselines. The analysis also provides useful and interesting insights into how the approach works, and improving both the performance and convergence speed of soft prompt tuning is a useful and important result. Weaknesses: No glaring weaknesses, but I think the following could be improved: - Comparisons with popular parameter-efficient finetuning (PEFT) methods: soft prompt tuning is often considered one of several PEFT methods, including LoRA. While you do compare with adapters, it would be interesting to compare against LoRA or MaM adapters [1], which are generally considered more effective than adapters. - The work claims their approach makes prompt tuning less sensitive to initialization. It would be good to see results over multiple seeds (or even multiple initialisation types), with smaller variation than a baseline approach, to back up this claim. Overall, I think this is solid work and lean to accept it. [1] He et al., 2022. *Towards a Unified View of Parameter-Efficient Transfer Learning*. https://openreview.net/pdf?id=0RDcd5Axok Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How does InfoPrompt compare to LoRA (and other PEFT methods)?
- Could you apply a similar information loss technique to LoRA or other PEFT methods, and how effective would that be? - How did you choose the values for beta and gamma? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss limitations at the end of the paper to a reasonable degree. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Weakness 1 & Q1 & Q2: ***Comparisons with popular parameter-efficient finetuning (PEFT) methods*** Response: Thanks for your valuable suggestion. Below, we introduce Info-LoRA, which outperforms several other PEFT methods (e.g., Adapter [4], BitFit [5]). Info-LoRA is implemented by combining LoRA finetuning with our soft-prompt tuning method. In addition to the soft-prompt tokens, we introduce additional parameters via LoRA, and these parameters are also tunable in our downstream tasks.

| Full-data | CoLA | RTE | MRPC | SST2 | Average |
|---|---|---|---|---|---|
| Info-LoRA | 0.683 | 0.8736 | 0.9069 | 0.9667 | 0.8575 |
| LoRA | 0.5880 | 0.6715 | 0.8235 | 0.9541 | 0.7592 |

| 64 shots | CoLA | RTE | MRPC | SST2 | Average |
|---|---|---|---|---|---|
| Info-LoRA | 0.0988 | 0.5776 | 0.7328 | 0.5333 | 0.4856 |
| LoRA | 0.0991 | 0.5596 | 0.6985 | 0.5677 | 0.4812 |

| 256 shots | CoLA | RTE | MRPC | SST2 | Average |
|---|---|---|---|---|---|
| Info-LoRA | 0.3783 | 0.6462 | 0.7941 | 0.7362 | 0.6387 |
| LoRA | 0.2854 | 0.5740 | 0.7206 | 0.8222 | 0.6005 |

Weakness 2: ***Sensitivity to initialization.*** Response: Thanks for your comment on this. We have some related discussion in Section 6.1 (line 255). We also have additional results validating that our approach is less sensitive in the early learning stage compared to the baseline approaches. For the results in Figure 3, we further report the standard errors in the early learning stage. Specifically, we used 10 different random initializations for our method and the baseline WARP, and then report the standard errors of the performance in the early learning stage (e.g., after the first epoch). The results show that our method's standard errors are lower compared to WARP.
| | SST2 | NER-ACE |
|---|---|---|
| WARP | 0.7352 $\pm$ 0.0334 | 0.4606 $\pm$ 0.0385 |
| InfoPrompt | 0.7639 $\pm$ 0.0270 | 0.8099 $\pm$ 0.0253 |

Q3: ***How did you choose the values for beta and gamma?*** Response: Similar to previous works [1, 2], we chose the current hyper-parameters empirically, without using the validation data. Specifically, based on our observation of the scales of the three losses, we chose the hyper-parameters so that the scales of the three weighted loss terms (the head loss, the representation loss, and the task loss) are similar in the final loss function. This strategy is simply based on the inherent assumption that each loss should contribute equally to the problem [3]. [1] Jia, Zhiwei, and Hao Su. "Information-theoretic local minima characterization and regularization." International Conference on Machine Learning. PMLR, 2020. [2] Shi, Yufeng, et al. "Information-Theoretic Hashing for Zero-Shot Cross-Modal Retrieval." arXiv preprint arXiv:2209.12491 (2022). [3] Groenendijk, Rick, et al. "Multi-loss weighting with coefficient of variations." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021. [4] Houlsby, Neil, et al. "Parameter-efficient transfer learning for NLP." International Conference on Machine Learning. PMLR, 2019. [5] Zaken, Elad Ben, Shauli Ravfogel, and Yoav Goldberg. "BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models." arXiv preprint arXiv:2106.10199 (2021). --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Hi, thanks for the response and follow-up experiments! I've carefully read your rebuttal and the other reviews and am keeping my score. I think better methods for soft prompt tuning are interesting and useful for the field, and while this paper is somewhat incremental, the experiments and proposed approach appear solid and are useful for future researchers studying prompt tuning.
Ideally, I agree with other reviewers that a wider range of experiments would be useful, especially generation tasks using a generation model (T5 or LLaMA), but I think the paper is okay with the current set of experiments plus the ones the authors have performed for the rebuttal. I hope the authors release their code in the future to aid reproducibility. --- Reply to Comment 1.1.1: Comment: Thank you very much for the acknowledgement. We deeply appreciate your time and effort in the review. We will certainly release the code to the community, and add the experiments from the rebuttal to the final version of the paper.
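The equal-contribution heuristic described in the authors' response to Q3 above (choosing beta and gamma so that the three weighted loss terms have similar scales) can be sketched as follows; the function and the numbers are illustrative assumptions, not the paper's actual values:

```python
def equal_scale_weights(task_loss, head_loss, repr_loss):
    """Pick beta and gamma inversely proportional to each auxiliary loss's
    observed scale, so every weighted term matches the task loss's scale."""
    beta = task_loss / head_loss    # weight for the head loss
    gamma = task_loss / repr_loss   # weight for the representation loss
    return beta, gamma

# Hypothetical observed loss scales: task=2.0, head=0.5, representation=4.0.
beta, gamma = equal_scale_weights(2.0, 0.5, 4.0)
assert (beta, gamma) == (4.0, 0.5)
# Each weighted term now contributes equally (2.0) to the total loss.
assert beta * 0.5 == 2.0 and gamma * 4.0 == 2.0
```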
Summary: This work focuses on the initialization inefficiency of soft prompt tuning. Crucially, soft prompt tuning is notoriously challenging to obtain a good initialization for, and is highly sensitive to some hyper-parameters, leading to optimization difficulty, especially in low-resource scenarios. In light of this, this work starts from an information-theoretic perspective of maximizing mutual information to get a better initialization of soft prompts. The experiments on classification, relation extraction, and NER tasks show the effectiveness of their design against several baselines. Strengths: 1. The information-theoretic guarantee, with some clearly organized mathematical equations, is correct and interesting, as are their designs, inspired by some contrastive learning techniques. 2. The experiments do show the effectiveness of the proposed algorithm, mainly on advanced NLU tasks, e.g., NER and RE. Weaknesses: 1. Of course, this work has some contributions to soft prompt tuning initialization from a different perspective, even though such a perspective is highly similar to existing work on discrete prompt engineering [1]. However, I think it has limited impact, as the task settings are somewhat less interesting given a lot of works on better discrete prompt optimization with significantly better performance under the few-shot setting, and many works that simply initialize the soft prompts by pre-training on similar datasets show good performance as well. Moreover, the current main focus in the community would not be how to tune such negligible soft prompts for a limited range of NLP tasks; instead, people would use better instruction-tuned LMs for real-world applications, etc. 2. You need to re-polish your writing, as I frequently find typos, e.g., "performances" on page 1 (intro), "your method names -- Infopropmt" in Section 9 (limitations), and so on. 3.
Your few-shot numerical improvements, which might be the main contributions in your introduction, are still negligible to some extent, further restricting the impact at this conference. [1] An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels, ACL 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: No. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see my reply in the weaknesses. Their listed limitations are too general, being the typical limitations of prompt tuning... Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Weakness 1: ***your task settings are somewhat less interesting given a lot of works on better discrete prompt optimization with significantly better performance under the few-shot setting*** Response: We would like to politely contend that discrete and soft prompt tuning (PT) are two parallel setups for pre-trained language models [2]. As mentioned in Section 2 of [1] (mentioned in the review), the best prompts are tuned in the continuous embedding space (soft PT). This is because the search space of discrete PT (i.e., a discrete set of token embeddings) is actually a subset of the search space of soft PT (i.e., the whole continuous embedding space). Thus, soft continuous prompts should be more expressive with a larger search space, and the performance of the optimal prompt from soft PT is expected to upper bound that from discrete PT [3] (Section 7.2). Additionally, it is computationally challenging with discrete PT to optimize over a discrete search space [2,3]. In comparison, our proposed soft PT objectives are optimized continuously with a provable convergence guarantee (Theorem 1). Weakness 1: ***many work simply initializing the soft prompts by pre-training on similar datasets*** Response: Pre-training on similar datasets may indeed lead to better performance. However, such a transfer learning setup is orthogonal to our problem formulation, i.e., we consider the scenario where no auxiliary datasets are available. Our scenario is actually more practical, since it can be difficult and expensive to identify datasets similar to the target task [7]. Weakness 2: ***You need to re-polish your writing, as I do frequently find some typos*** Response: Thanks for pointing out our typos. We will fix them accordingly.
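The set-inclusion argument above (discrete prompts are restricted to rows of the frozen embedding table, while soft prompts range over the whole continuous embedding space) can be illustrated with a toy sketch; all sizes and names here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, prompt_len, seq_len = 100, 16, 4, 8
embed_table = rng.normal(size=(vocab, d))  # frozen token embedding table

# Discrete PT: each prompt vector must be one of the vocab rows.
discrete_prompt = embed_table[rng.choice(vocab, prompt_len)]

# Soft PT: prompt vectors are free parameters anywhere in R^d, so the
# discrete search space is a strict subset of the soft one.
soft_prompt = rng.normal(size=(prompt_len, d))

# Either prompt is prepended to the input token embeddings.
tokens = rng.choice(vocab, seq_len)
inputs = np.vstack([soft_prompt, embed_table[tokens]])
assert inputs.shape == (prompt_len + seq_len, d)
```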
Weakness 3: ***Moreover, current main focus in the community would not be how to tune such negligible soft prompts......instead, people would use better instruction-tuned LMs for......*** Response: Please allow us to respectfully disagree with this statement. Firstly, we believe that tuning soft prompts is hardly negligible. It is in fact an active field of efficient training with pre-trained language models and is being widely studied in many recent papers, e.g., [4-7]. Secondly, existing instruction-tuned LMs (e.g., GPT-3.5/4) are generally not task-specific, and thus have yet to excel when evaluated on many NLP tasks [8,9]. Therefore, it is still necessary to further tune such models with task-specific information. For this purpose, soft prompt tuning is an efficient approach and is still being actively studied [2-7]. Besides, although instruction-tuned LMs have been actively studied for real-world applications, we have noticed some very recent works showing that instruction-tuned LMs and soft-prompt tuning are complementary. By properly developing soft-prompt tuning methods, it is promising to further improve instruction-tuned LMs [10, 11]. [1] An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels, ACL 2022 [2] Liu, Pengfei, et al. "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing." ACM Computing Surveys 55.9 (2023): 1-35. [3] Li, X. L., & Liang, P. (2021, August). Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 4582-4597). [4] Razdaibiedina, Anastasia, et al. "Progressive Prompts: Continual Learning for Language Models." The Eleventh International Conference on Learning Representations. 2022. [5] Wang, Zhen, et al.
"Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning." The Eleventh International Conference on Learning Representations. 2022. [6] Razdaibiedina, Anastasia, et al. "Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization." Findings of ACL 2023. [7] Pang, Bo, et al. "SharPT: Shared Latent Space Prompt Tuning." Findings of the Association for Computational Linguistics: EACL 2023. 2023. [8] GPT-4 Technical Report, OpenAI. [9] Koubaa, Anis. "GPT-4 vs. GPT-3.5: A concise showdown." (2023). [10] Sun, Simeng, et al. "How does in-context learning help prompt tuning?." arXiv preprint arXiv:2302.11521 (2023). [11] Shi, Zhengxiang, and Aldo Lipani. "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner." arXiv preprint arXiv:2305.01711 (2023). --- Rebuttal Comment 1.1: Title: Replies from Reviewer 7rsH Comment: Thanks for your reply! I acknowledge your contributions to soft prompt tuning on three tasks (including classification and NER/RE) under full-shot and few-shot scenarios compared to traditional soft prompt tuning, but there do exist some potential limitations that deserve further work to make the paper more rigorous and free of possible over-claims. -- All these replies are meant to ask for more rigorous studies to position your contribution. Your main contribution: "A different soft prompt tuning framework based on mutual information, a different perspective which brings some new insights and advantages, such as **initialization/efficiency (here, higher performance under low-resource scenarios) and effectiveness**". Let me know if I have any misunderstandings. 
So in order to support your current claims on your advantages, e.g., efficiency at better capturing task data knowledge, you should conduct richer experiments to show your advantages (yes, you do evaluate against traditional prompt tuning baselines, but for advantages in initialization, it is necessary to form a fair comparison against other initialization baselines --- see my words later; and perhaps for the NLU title, it is still somewhat over-claimed), **in addition to insights related to your central topic** -- "information-theoretic soft prompt tuning". <1. Task settings> - For the two options I listed here, discrete prompt optimization and continuous prompt tuning, I think these two together limit your impact in terms of performance and applicability at the beginning. It is just to say that it would be good for you to include their performance as well, to position your **applicability** on your currently tested tasks if you want to highlight this (or, if your selling point is your **low-resource initialization** ("efficient" in your abstract), it could be better to incorporate some baselines for this problem, like [7], which sets the baseline with SPoT-retrieval; at this top-tier conference, more baselines on this initialization point would definitely be a plus). -- But of course, this point is marginal compared to your central topic/contribution: "a different starting point based on mutual information". - And for NLU, if you want to claim your approach (adding more value to your proposal) contributes to a wide range of NLU tasks, you need to show more numbers on other task formats, e.g., QA tasks, and beyond, since standard NLU is too broad with many possible prompt adaptations. --- So, to avoid any over-claims. - To claim your approach is better in few-shot scenarios, it would be good to provide some more initialization baselines like your [7]. 
Currently, you basically assume that in your task formulation you do not have any additional similar datasets/resources to use in your few-shot sections (low-resource in your [7]). In [7], to show their framework is better in initialization, they already incorporate baselines like SPoT-retrieval, whereas you lack rigorous empirical evaluations against even heuristics-based methods. Instead, you just test against other common prompt tuning baselines to say your efficiency is significant. Yes, compared to your baselines, it is true. But what about its potential for real progress on "prompt initialization"? <2. Soft Prompts ==> Marginal Numbered improvements> - Sorry for the ambiguous language, which makes you **misunderstand my words**. I am not saying that soft prompts are negligible (please pay attention to my context carefully), but rather I am curious whether your soft prompt tuning work, with *marginal numbers (or limited studies with possible baselines) compared to other prompting paradigms, not just your current traditional prompt tuning baselines*, could be significant to the field, instead of incremental progress. For soft prompts, there are indeed many potentially hot topics, such as prompt distillation/compression for text counterparts, and likewise the integration between soft prompts and instruction-tuned LMs. Indeed, I am saying that in your task scenarios you have other possible alternatives deserving empirical comparison. And furthermore, other suitable soft tuning methods outperform yours (BBT, LPT, etc.). - Of course, I acknowledge your contribution of a different prompt tuning framework for soft prompt tuning. <3. one of your contributions> - You have claimed that you also provide formal derivations of your information-theoretic framework in your pending appendix as one of your contributions. So I cannot provide a very decisive rating for this, which would not be very objective or fair. <Minor> - Some typos. 
In terms of the above reasons, I still think this paper needs further improvements, which is the main reason that I keep my current borderline rating. It would be good for the ACs to help with the final judgement after reading my comments. Thanks, and I hope my elaborations could address your concerns. --- Reply to Comment 1.1.1: Comment: Thank you very much for the insightful comments to improve the positioning of our contribution. We appreciate your acknowledgement of our contribution to soft prompt tuning. <1. Task settings> ***discrete prompt optimization and continuous prompt tuning*** We include the experiments with discrete prompt optimization below. Due to the limited time of the rebuttal discussion period, we show four experiments with 64 shots. For MI-discrete, we follow [16] to design a set of candidate discrete prompts, and follow the mutual information metric in [16] to select the best discrete prompt. For MI-discrete+InfoPrompt, in addition to finding the best discrete prompt as in [16], we also insert the soft prompt and apply our soft prompt tuning method. We observe that the soft prompt tuning approaches (WARP and InfoPrompt) are better than MI-discrete, since soft prompt tuning optimizes over a larger search space than the discrete approaches. We also experiment with MI-discrete+InfoPrompt, which achieves better performance than MI-discrete. This shows that soft prompt tuning and discrete prompt tuning are not mutually exclusive. Instead, our proposed InfoPrompt can be combined with discrete prompt tuning to further improve MI-discrete's performance.

| | RTE | SST2 | MRPC | CoLA |
|---|---|---|---|---|
| WARP | 0.5596 | 0.5872 | 0.7083 | 0.0749 |
| MI-discrete | 0.5271 | 0.5344 | 0.6838 | 0.0547 |
| MI-discrete+InfoPrompt | 0.5848 | 0.6812 | 0.7206 | 0.1152 |
| InfoPrompt | 0.6137 | 0.6697 | 0.7059 | 0.1567 |

***NLU tasks*** Thank you for the suggestion to avoid any over-claims. 
Similar to previous works [12-16] claiming to contribute improved methods in natural language understanding, we followed their evaluation protocol and evaluated our approach on GLUE, ACE, and SemEval, instead of evaluating our approach on QA tasks. [12] Liu, Xiaodong, et al. "Multi-task deep neural networks for natural language understanding." arXiv preprint arXiv:1901.11504 (2019). [13] Clark, Kevin, et al. "Bam! born-again multi-task networks for natural language understanding." arXiv preprint arXiv:1907.04829 (2019). [14] Zhang, Zhuosheng, et al. "Semantics-aware BERT for language understanding." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020. [15] Zhang, Taolin, et al. "DKPLM: decomposable knowledge-enhanced pre-trained language model for natural language understanding." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 10. 2022. [16] Sorensen, Taylor, et al. "An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels." Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022. ***initialization baselines*** Thank you for the insightful comments. The baseline approaches in [7] are orthogonal to our problem formulation due to their transfer learning setup, i.e., they require additional resources (datasets). In our experiments, we consider the scenario where no auxiliary datasets are available. Thus, for fair comparison, we only include initialization baselines without additional resource requirements. To validate the effectiveness of our initialization approach, we compare our initialization to some common initialization approaches mentioned in [17, 18]. Specifically, in addition to the baseline WARP using class-label initialization in our paper, we further report baselines with Random Uniform and Sampled Vocabulary initialization. 
By Random Uniform, we randomly sample the prompt initialization from the continuous latent space. By Sampled Vocabulary, we randomly sample the prompt initialization from the language model's vocabulary. Due to the limited time of the rebuttal discussion period, we present the results on 4 datasets, and we will certainly include more results in the updated version.

| | RTE | SST2 | MRPC | CoLA |
|---|---|---|---|---|
| WARP (Random Uniform) | 0.5211 | 0.5391 | 0.6265 | 0.0312 |
| WARP (Sampled Vocabulary) | 0.5475 | 0.6140 | 0.6725 | 0.0602 |
| InfoPrompt | 0.6137 | 0.6697 | 0.7059 | 0.1567 |

[17] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." arXiv preprint arXiv:2104.08691 (2021). [18] Gu, Yuxian, et al. "Ppt: Pre-trained prompt tuning for few-shot learning." arXiv preprint arXiv:2109.04332 (2021). (***Please check our additional responses to the other questions in the next comment.***)
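For readers unfamiliar with the two baseline schemes named above, they can be sketched as follows. This is a minimal illustration with hypothetical dimensions, not the authors' implementation; `vocab` stands in for the language model's token-embedding table:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_random_uniform(n_prompts, dim, scale=0.5):
    # Random Uniform: sample each prompt embedding i.i.d. from a
    # uniform range in the continuous latent space.
    return rng.uniform(-scale, scale, size=(n_prompts, dim))

def init_sampled_vocabulary(n_prompts, vocab):
    # Sampled Vocabulary: initialize each prompt with the embedding
    # of a randomly drawn vocabulary token.
    idx = rng.choice(vocab.shape[0], size=n_prompts, replace=False)
    return vocab[idx]

vocab = rng.normal(size=(50_000, 1024))  # stand-in embedding table
p_uni = init_random_uniform(10, 1024)
p_voc = init_sampled_vocabulary(10, vocab)
```

Both schemes produce a `(n_prompts, dim)` matrix of trainable embeddings; they differ only in whether the starting point lies on the token-embedding manifold.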
Summary: This paper introduces InfoPrompt: an information-theoretic framework for soft prompt tuning. It does so by introducing two additional loss terms: the head loss (maximizing prompt similarity with the LM head) and the representation loss (maximizing prompt similarity with the encoder's last hidden state). The authors show that InfoPrompt is better than WARP, IDPG, and Adapter baselines with Roberta Large as the base model, on sequence classification, RE, and NER datasets. The paper includes analysis of loss landscapes and theoretical guarantees that their method can be optimized with gradient descent. Strengths: Strengths: * The paper is clearly written, and is easy to follow. * InfoPrompt is simple to understand and implement. * InfoPrompt does better than the baselines it compares against: WARP, IDPG, Adapters and vanilla prompt tuning on Roberta Large. * The paper illustrates interesting loss dynamics of their method in section 6. Weaknesses: Weaknesses: * *Baselines*: I think there could be more baselines than WARP and IDPG used here, like non-prompt-based parameter-efficient tuning methods besides Adapters (like LoRA or HyperFormer), or a more relevant one like HyperPrompt. I think this paper would be a valuable contribution even if it didn't completely beat the performance of those methods, but it would be nice to see how it compares to more diverse baselines (not necessarily the exact ones I described here, though they are certainly good choices). * *Datasets*: The authors evaluated on most datasets in GLUE; it would be nice to have the full suite of GLUE evaluation. Having only a subset of GLUE feels like the datasets are cherry-picked. * *Tasks*: Some results on sequence generation tasks would have made this paper a lot stronger in my opinion, and it would not be significantly more expensive to train a T5 variant with this method on a summarization or translation task. 
* *Minor typos* (did not affect review score): * Table 4 description * line 331 * line 307 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: * Why is there no IDPG baseline in section 5.2? * Prompt tuning (Lester et al) required a traditional causal LM training stage of 100k steps on T5 before prompt tuning for it to work well. The same issue might be hindering vanilla prompt tuning on the encoder-only models used here (Roberta variants). Given that the authors could train Roberta large, it seems that they have the compute required to train the released LM-adapted T5 variants in Lester et al [[1](https://huggingface.co/google/t5-base-lm-adapt), [2](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md##lm-adapted-t511lm100k)] for a better comparison to vanilla prompt tuning. Do the authors think that the prompt tuning baseline here (both InfoPrompt loss weights set to 0) might not be fairly compared to because of the misalignment of the base pretrained models used here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations: * Evaluation is specific to Roberta large, and only sequence classification and RE/NER datasets. This does not diminish the intellectual value of the method, but is nonetheless a limitation that makes this method hard to confidently brand as better than vanilla prompt tuning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Weakness 1: ***Baselines*** Response: Thanks for the suggestion. We further include a comparison between our approach and a new baseline, LoRA, as below.

| Full-data | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| LoRA | 0.5880 | 0.6715 | 0.8235 | 0.9541 | 0.6636 | 0.8228 | 0.7214 | 0.7492 |
| InfoPrompt | 0.6018 | 0.6968 | 0.8137 | 0.9599 | 0.7616 | 0.8962 | 0.7917 | 0.7888 |

| 64 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| LoRA | 0.0991 | 0.5596 | 0.6985 | 0.5677 | 0.1232 | 0.1345 | 0.1711 | 0.3362 |
| InfoPrompt | 0.1567 | 0.6137 | 0.7059 | 0.6697 | 0.2119 | 0.3331 | 0.2113 | 0.4146 |

| 256 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| LoRA | 0.2854 | 0.5740 | 0.7206 | 0.8222 | 0.2291 | 0.1955 | 0.3817 | 0.4583 |
| InfoPrompt | 0.1750 | 0.6580 | 0.7377 | 0.7305 | 0.2993 | 0.4739 | 0.4034 | 0.4968 |

In our paper, we target improving the performance of a $\textbf{single task}$ via prompt tuning (similar to WARP [1], IDPG [3] and LoRA [4]). Therefore, we only include single-task prompt tuning approaches as baselines. HyperFormer is originally designed for $\textbf{multi-task}$ learning, which is a setting orthogonal to our paper. In our experiments, we follow the evaluation protocol in WARP [1], IDPG [3] and LoRA [4], and it is not feasible to directly compare with HyperFormer under our evaluation protocol and experimental setting. Weakness 2: ***Dataset*** Response: As explained in line 175, these 4 datasets chosen from the GLUE benchmark are relatively small in size, which simulates a more challenging low-resource scenario that is compatible with prompt tuning. 
Weakness 3: ***Tasks*** Response: In the paper, we follow previous works on prompt tuning, e.g., WARP [1], IDPG [3], etc., that mostly experiment with classification-based tasks. It would also be interesting to further consider sequence generation tasks. Thank you for the suggestion; we will keep this in mind in our future work. Weakness 4: ***Typos*** Response: Thanks for pointing out our typos. We will fix them accordingly. Q1: ***Why is there no IDPG baseline in section 5.2?*** A1: Similar to the evaluation in WARP [1], we only carried the most competitive baselines from the full-dataset experiments into the further evaluations under the few-shot learning setting. Thanks for the suggestion. We further include a comparison between our approach and IDPG in the few-shot learning setting.

| 64 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| IDPG | 0.0902 | 0.5018 | 0.6593 | 0.5424 | 0.2596 | 0.3334 | 0.1984 | 0.3693 |
| InfoPrompt | 0.1567 | 0.6137 | 0.7059 | 0.6697 | 0.2119 | 0.3331 | 0.2113 | 0.4146 |

| 256 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| IDPG | 0.1513 | 0.5523 | 0.7010 | 0.8188 | 0.2503 | 0.4048 | 0.3577 | 0.4623 |
| InfoPrompt | 0.1750 | 0.6580 | 0.7377 | 0.7305 | 0.2993 | 0.4739 | 0.4034 | 0.4968 |

Q2: ***Do the authors think that the prompt tuning baseline here (both InfoPrompt loss weights set to 0) might not be fairly compared to because of the misalignment of the base pretrained models used here?*** A2: Since we target classification-based tasks in the paper, we follow WARP [1] and iPET [2], which adopt an encoder-only LM (Roberta-Large). For fair comparison (following [1, 2]), all our baselines are implemented with the same base pretrained model (i.e., Roberta-Large). [1] Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. Warp: Word-level adversarial reprogramming. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4921–4933, 2021. [2] Schick, Timo, and Hinrich Schütze. "It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners." Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021. [3] Wu, Zhuofeng, et al. "IDPG: An instance-dependent prompt generation method." arXiv preprint arXiv:2204.04497 (2022). [4] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the rebuttal! I think weaknesses 1 and 4 have been addressed well, as well as question 1. However, I still think that this method could benefit from more diverse tasks for this paper to be a confident accept. Showing that this method works well on generative tasks like language modeling or translation (or even span-selection tasks like extractive question answering) would make it a strong paper in my opinion. Also, while I appreciate the focus on smaller GLUE datasets, there is still value in showing numbers on higher-resource tasks. Parameter-efficient tuning and dataset size are orthogonal axes of study in my mind, so it would still be useful to see numbers on the full suite of GLUE tasks (or even a newer benchmark if possible, since GLUE is at a point of saturation). I think this paper is promising work and I encourage the authors to make it stronger. I will be keeping my review score the same. --- Reply to Comment 1.1.1: Comment: Thank you very much for the suggestion. We sincerely appreciate the time and effort you invested in the review. 
Though we follow the scope of previous prompt tuning works (e.g., [1][3][4]), it would certainly be interesting to further explore more tasks. We will keep this in mind in our future work.
Summary: The paper proposes a new formulation that models soft prompt tuning as maximizing the mutual information between prompts and other model parameters. In order to improve the initialization of the prompts and to learn sufficient task-relevant information from prompt tokens, the paper develops two novel mutual-information-based loss functions. The authors provide an analysis of the convergence of the prompt tuning, and the proposed methodology is evaluated on several benchmark datasets. Strengths: + Simple but powerful idea: the authors propose a novel method for soft prompting. + Convincing and superior experimental results over the baselines. + The paper is well-structured and easy to follow. Weaknesses: - Lack of ablation study analysis: there are hyper-parameters that balance the weights of the two loss terms. However, I did not find any design-choice discussion and/or ablation study on this. - In addition, more implementation details should be provided in the paper, which is essential for reproducing the methods and experiments in this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some detailed questions: 1. Figure 2(a) caption: confusion of definitions: task loss, info loss, representation loss? In line 257, the loss is referred to as the info loss, but as the representation loss in the caption. 2. Line 73: experiments on 6 tasks and 3 datasets? 3. Could you explain the choice of β = 0.1 and γ = 0.05, the comparison of these two loss terms, the number of negative samples, and the importance of the number of prompt tokens? 4. What are the advantages of the proposed method compared with the baseline IDPG? Why is IDPG used as a baseline method for the full-dataset experiments but not for the few-shot experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Weakness 1 & Q3: ***hyper-parameters ablation studies and the importance of the number of prompt tokens*** Response: Similar to previous works [2, 3], we choose the current hyper-parameters empirically, without using the validation data. Specifically, based on the observed scales of the three losses, we set the weights so that the scales of the three loss terms after weighting (head loss, representation loss, and task loss) are similar in the final loss function. This strategy is based on the assumption that each loss should contribute equally to the problem [4]. We improve on previous soft prompt methods (e.g., WARP [1]) with our proposed novel information-theoretic objectives. Our choice of the number of prompt tokens follows [1] and [5]. The submitted paper already includes results with different numbers of prompt tokens, detailed in Tables 1 and 2. Similar to WARP [1], we observe that more prompt tokens generally lead to better performance. Weakness 2: ***more implementation details*** Response: The implementation details of our proposed loss functions are given in Sections 3 and 4.2. We also include more details about the configurations (e.g., number of prompts, optimization details, and network design) in Section 4.2. Thanks for the suggestion. We will better organize this information for reproducibility in the updated version. Q1: ***confusion of definition: task loss info loss, representation loss?*** A1: Thanks for pointing this out. In line 257, "info loss" should be "representation loss". Q2: ***experiments on 6 tasks and 3 datasets?*** A2: Sorry for the confusion. We experiment with 3 NLP tasks and 6 datasets. 
Q4: ***compared with baseline IDPG?*** A4: Compared to IDPG, our approach has the advantage of an information-theoretic method that better encodes task-relevant information. Similar to the evaluation in WARP [1], we only carried the most competitive baselines from the full-dataset experiments into the further evaluations under the few-shot learning setting. Thanks for the suggestion. We further include a comparison between our approach and IDPG in the few-shot learning setting.

| 64 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| IDPG | 0.0902 | 0.5018 | 0.6593 | 0.5424 | 0.2596 | 0.3334 | 0.1984 | 0.3693 |
| InfoPrompt | 0.1567 | 0.6137 | 0.7059 | 0.6697 | 0.2119 | 0.3331 | 0.2113 | 0.4146 |

| 256 shots | CoLA | RTE | MRPC | SST2 | RE | NER | SemEval | Average |
|---|---|---|---|---|---|---|---|---|
| IDPG | 0.1513 | 0.5523 | 0.7010 | 0.8188 | 0.2503 | 0.4048 | 0.3577 | 0.4623 |
| InfoPrompt | 0.1750 | 0.6580 | 0.7377 | 0.7305 | 0.2993 | 0.4739 | 0.4034 | 0.4968 |

We can observe that InfoPrompt generally yields better performance than IDPG, owing to its ability to better encode task-relevant information. [1] Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. Warp: Word-level adversarial reprogramming. ACL, 2021. [2] Jia, Zhiwei, and Hao Su. "Information-theoretic local minima characterization and regularization." International Conference on Machine Learning. PMLR, 2020. [3] Shi, Yufeng, et al. "Information-Theoretic Hashing for Zero-Shot Cross-Modal Retrieval." arXiv preprint arXiv:2209.12491 (2022). [4] Groenendijk, Rick, et al. "Multi-loss weighting with coefficient of variations." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021. [5] Zhou, Yuhang, Suraj Maharjan, and Beiye Liu. "Scalable Prompt Generation for Semi-supervised Learning with Language Models." EACL 2023.
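The equal-contribution weighting strategy described in the response to Weakness 1 & Q3 (β = 0.1, γ = 0.05 chosen so that the three weighted terms sit at similar scales) amounts to the following sketch. The scalar losses here are placeholder values for illustration, not the paper's actual mutual-information estimators:

```python
def total_loss(task_loss, head_loss, rep_loss, beta=0.1, gamma=0.05):
    # Final objective: task loss plus the two information-theoretic
    # regularizers, weighted so all three contribute at similar scales.
    return task_loss + beta * head_loss + gamma * rep_loss

# If the raw losses are observed at scales ~0.5, ~5, and ~10,
# the weighted terms all end up around 0.5.
loss = total_loss(0.5, 5.0, 10.0)
```

The point of the heuristic is that no single term dominates the gradient, so the weights can be read off from the raw loss magnitudes without touching validation data.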
NeurIPS_2023_submissions_huggingface
2023
The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification
Accept (poster)
Summary: The paper introduces a new approach called Few-Shot Weakly Supervised Learning for Pathology Whole Slide Image classification (FSWC), which aims to classify bags and instances within a WSI with only a limited number of labeled bags. The proposed solution utilizes a large language model, GPT-4, and prompt learning. The approach leverages CLIP to extract instance features for each patch and uses a prompt-guided pooling strategy to aggregate these instance features into a bag feature. The language prior knowledge is obtained using GPT-4 in a question-and-answer mode at both the instance and bag levels. The method is evaluated on three real WSI datasets encompassing breast cancer, lung cancer, and cervical cancer, and demonstrates notable performance in bag and instance classification. Strengths: The idea is interesting; this is one of the early works applying large language and cross-modal models to WSI (Whole Slide Imaging) analysis problems. Thanks to the knowledge captured by the prompts, the requirement for a large number of training samples can be alleviated. Weaknesses: While the idea is interesting, there are significant problems with the presentation and language of the paper. Many technical details are not clearly explained. The experiments are not extensive enough, and more SOTA methods need to be compared. Please see the questions section for details. After reading the other reviewers' comments and the rebuttal, I think the paper requires significant work to improve it, and it is currently not ready for inclusion in NeurIPS. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Unlike general few-shot scenarios, what does "shot" represent in medical images? And how is the dataset divided? 2. In one part of the instance-level prompt, various instance phenotypes are generated. 
For different datasets, are the generated functional descriptions the same or not? Without professional knowledge, how can these functional descriptions be determined? 3. What is learnable in prompt learning? 4. The description of the methodology is not clear enough. For example, how does the guided pooling work? 5. More comparative experiments based on MIL should be added. 6. The reviewer would suggest that the authors proofread the manuscript; the quality of the writing and the presentation should be significantly improved. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**. Meaning of Shot and Division of the dataset. **Response**. In Few-shot WSI classification, "shot" refers to the number of labeled slides. For publicly available datasets, we follow official guidelines to split the data into training (including validation) and testing sets. In-house datasets are randomly divided into training (including validation) and testing sets. In N-shot experiments, the training set uses only N pairs of positive and negative slides for training, and the model is evaluated on the complete testing set. In **Response to Q1 of the General Response**, we have included the results of five trials using randomly selected N slides for training and five trials using fixed N slides for training. **Q2**. Are generated instance prompts varying from different datasets? how to determine? **Response**. Instance prototype descriptive prompts vary based on the specific task. For tumor detection and lung cancer subtyping, prompts consider different tissue phenotypes. In the task of determining lymph node metastasis from primary lesions, the focus is on phenotypic information related to tumor cells. Representative prompts for each task are in Supplementary materials Figure 1, 2, and 3. We rigorously reviewed pathology knowledge descriptions from GPT-4 with three senior pathologists and found them accurate and detailed. Literature [1] supports GPT-4's reliability in producing medical domain knowledge due to its vast medical expertise in training data. [1] Nori et al. Capabilities of gpt-4 on medical challenge problems. arXiv:2303.13375 (2023). **Q3**. What is learnable in prompt learning? **Response**. In our approach, the last 10 tokens in the bag-level and instance-level prompt groups (referred to as learnable prompts) are trainable and treated as learnable parameters. These tokens are encoded into 10 features of 512 dimensions. 
On the other hand, other descriptive prompts are fixed and directly encoded by the language encoder in CLIP. During the training process, the tokens in the learnable prompts are dynamically learned. For detailed formulas and training methods, please refer to the main text, Section 3.2. **Q4**. How does the guided pooling work? **Response**. Prompt guided pooling (PGP) is designed for few-shot WSI classification (FSWC). The key challenge in MIL is aggregating instance features critical for slide classification. In FSWC, limited supervised information hinders the effectiveness of max/mean pooling methods in aggregating crucial instances. Using a separate attention module may lead to overfitting. PGP utilizes GPT-4 to generate visual descriptions of instance phenotypes, guiding feature aggregation. It calculates similarity weights between instance features and text prototypes, then uses the weighted average of all instance features as the bag feature. **Q5**. More comparative experiments based on MIL should be added. **Response**. Limited research exists on prompt learning+MIL for Few-Shot Weakly Supervised WSI Classification (FSWC), and methods available for direct comparison are quite limited. We constructed four baselines based on the SOTA few-shot learning methods CoOp and Linear Probe: Linear-Probe+Mean/Max/Attention pooling, and CoOp+Attention pooling. More comprehensive experiments are supplemented in the rebuttal and please see **Response to Q1 and Q2 of General Response** and **Response to Q2 of Reviewer 5SeZ**. In future work, we aim to compare more MIL methods under the FSWC setting to advance research in this area. **Q6**. Improve the quality and presentation. **Response**. We will further improve the paper in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which help clarify some of the concerns. 
But I still think there is a lot of work to be done, so I will keep my rating --- Reply to Comment 1.1.1: Comment: Thank you very much for your comments, and we are happy that our response clarifies some of your concerns. As a major part of the work left to be done after our first round of response, we finished experiments on *the TCGA* and *Cervical Cancer datasets* using **non-fixed** and **fixed** bags. We report the Mean and STD of five runs in our new comment “**Mean AUC and STD on the TCGA and Cervical Cancer datasets**” to our **General Response** at the beginning.
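To make the aggregation step of the prompt-guided pooling described in the Q4 response above concrete, here is a minimal, hypothetical sketch (not the authors' actual implementation; all shapes, the max-over-prototypes scoring, and the temperature value are illustrative assumptions): similarity weights between instance features and text prototypes are turned into a softmax over instances, and the bag feature is the weighted average of the instance features.

```python
import numpy as np

def prompt_guided_pooling(instance_feats, text_prototypes, temperature=0.07):
    """Aggregate instance features (N x D) into one bag feature (D,),
    weighted by cosine similarity to text prototypes (P x D).
    A sketch of the idea only; shapes and temperature are assumptions."""
    # L2-normalize so dot products are cosine similarities, as in CLIP
    inst = instance_feats / np.linalg.norm(instance_feats, axis=1, keepdims=True)
    prot = text_prototypes / np.linalg.norm(text_prototypes, axis=1, keepdims=True)
    # similarity of each instance to each prototype: (N, P)
    sim = inst @ prot.T
    # per-instance score: best-matching prototype, softmaxed over instances
    scores = sim.max(axis=1) / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # weighted average of all instance features -> bag feature
    return weights @ instance_feats
```

In the actual framework, the text prototypes would come from CLIP-encoded GPT-4 descriptions and would be refined end-to-end through the learnable prompt tokens; this sketch only illustrates the similarity-weighted pooling itself.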
Summary: This work proposes TOP - a framework using Vision-Language models for few-shot Multiple Instance Learning on pathology datasets. TOP uses features generated by passing natural language prompts through an LLM to guide the pooling process for the bag-level aggregation in MIL. Results show improvements over other few-shot MIL approaches on 3 pathology datasets. Strengths: - Few-shot learning for MIL in histopathology is important since the effective number of data points is greatly reduced when predicting at WSI-level instead of patch-level. This paper thus addresses this highly relevant problem by providing bag-level predictions with only a few training bags. - The prompt-guided pooling idea, where similarity of natural language embeddings of text descriptions of different kinds of pathology to image embeddings is used to ground and calculate the instance-level weights, makes a lot of intuitive sense. Weaknesses: - My main concern pertains to section 3 in the supplementary material, where it is mentioned that "..we randomly trained the network five times with different labeled bags and reported the highest performance of each method..". This implies that the performance comparison is biased towards TOP. - There can be additional ablations around using pathology-specific LLMs [1] instead of generic ones to check if this improves the large variance in performance for TOP across lower numbers of shots - The motivation behind adding the correlation loss between the instance prototypes can be clarified further. Also, details around the sensitivity of the model performance to the relative weight of this loss should be shared. [1] - Santos et al, PathologyBERT - Pre-trained Vs. A New Transformer Language Model for Pathology Domain Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - For comparison on instance-level predictions, [1] uses the Additive MIL framework to get exact instance-level classwise predictions. Can this method be added to the comparison? 
- Any specific reason for restricting the learnable params to 10 tokens for instance and bag prompts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: - Can the authors give a ballpark estimate of the cost associated with using LLMs for the description generation as a function of the number of prototypes? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**. performance comparison is biased. **Response**. Firstly, "training five times with non-fixed labeled bags" in the original text means random selection of different labeled bags in each run based on shot count to compose the training set. The test set is pre-divided and remains unseen during training. All comparative methods use the same training and test sets in each run, ensuring fair comparisons. In the few-shot learning setup, limited labeled bags for training can cause performance disparities due to varying bag representativeness (like selecting slides with larger or smaller tumor areas). This leads to high standard deviation (STD) across runs with different labeled bags for training. This work is one of the first few-shot WSI learning studies, and there is not a relevant benchmark. Therefore, we conducted five randomized trainings with non-fixed labeled bags, studying the algorithm's holistic performance across diverse slide representativeness. As suggested by multiple reviewers, we supplemented five runs on Camelyon16 dataset using non-fixed bags, reporting Mean AUC (left panel) and STD (right panel) in Table 1 (bag AUC) and Table 2 (instance AUC). As the shot count increases, standard deviations of all methods tend to stabilize. Our proposed method still achieves the best performance. Mean AUC and STD on the other datasets will be provided in a camera-ready version. Furthermore, we supplemented five rounds of training using fixed labeled bags, employing the same bag in each run for training. Mean AUC (left panel) and STD (right panel) are presented in Table 3 (bag AUC) and Table 4 (instance AUC). The consistent labeled bags notably decrease variability and our method still achieves the best performance. **Q2**. Ablations using pathology specific LLMs [1] instead of GPT-4’s ones. **Response**. 
We thoroughly examined the model in [1], but it appears restricted to masked language modeling and lacks question-answering abilities like GPT-4. Nevertheless, the concept of using pathology-specific LLMs to generate domain-tailored knowledge shows promise. We intend to investigate this approach further in our future research. **Q3**. The motivation, ablation and sensitivity tests of auxiliary loss. **Response**. The auxiliary loss aims to separate instance prototypes learned by each instance prompt, ensuring distinct phenotypes representing WSIs. Crucial instance prototypes for slide classification stand out during aggregation. New ablation and sensitivity tests on the Camelyon16 dataset (Table 5) demonstrate that our method is not highly sensitive to the loss weight, but its addition significantly improves performance compared to not using it. **Q4**. Add Additive MIL [1] to the comparison. **Response**. Our prompt learning framework can be combined with any bag-level and instance-level MIL methods, including the Additive MIL [1] mentioned. We will include this experiment in the camera-ready version of our work. **Q5**. Reason for restricting the learnable params to 10 tokens. **Response**. Token selection is based on empirical choices, and other quantities can be used in practice. For a comparison of parameter size experiments, please see **Response to Q5 of reviewer 5SeZ**. **Q6**. Ballpark estimate of the cost using LLMs for generation. **Response**. We employed GPT-4's question-answering mode to obtain slide-level and instance-level knowledge descriptions. The generation of prompts for a single task takes around 2 minutes, and experienced pathologists spend approximately 10-20 minutes organizing and reviewing these prompts. These descriptions do not need to be regenerated during inference. --- Rebuttal Comment 1.1: Comment: - Thank you for addressing the comments and sharing the additional results pertaining to training with fixed labeled bags. 
It's encouraging to see that TOP consistently outperforms other baselines in this evaluation scheme. - The authors have shared ablations around the impact of the auxiliary loss weight in Table 5. - Thank you for agreeing to include experiments with instance-level MIL (Additive MIL) [1] in the final version. After going through the new data from the rebuttal response, I have improved my original rating. [1] - S A Javed et al, "Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology" --- Reply to Comment 1.1.1: Title: Gratitude for Your Review Comment: Thank you for reviewing our paper. We highly appreciate your valuable comments, which unquestionably elevate our research quality. We eagerly anticipate incorporating the "Additive MIL" comparison results into the final version.
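The Q3 and Q5 responses in this thread describe CoOp-style prompts in which only the last 10 tokens are trainable while the descriptive tokens stay frozen. A hypothetical sketch of how such a prompt embedding could be assembled (the function name, the frozen-part length, and the random initializations are illustrative assumptions; in the real model the context tokens would be optimized by backpropagation through a frozen CLIP text encoder):

```python
import numpy as np

EMBED_DIM = 512   # CLIP text embedding width, per the Q3 response
N_LEARNABLE = 10  # the last 10 tokens are trainable, per the Q3 response

def build_prompt_embeddings(fixed_token_embeds, learnable_ctx):
    """Concatenate frozen descriptive-token embeddings with the
    trainable context tokens (CoOp-style; names are hypothetical)."""
    assert learnable_ctx.shape == (N_LEARNABLE, EMBED_DIM)
    return np.concatenate([fixed_token_embeds, learnable_ctx], axis=0)

# The fixed part would come from embedding a GPT-4 description with the
# frozen CLIP tokenizer/embedding table; random values stand in here.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(22, EMBED_DIM))                 # frozen
ctx = 0.02 * rng.normal(size=(N_LEARNABLE, EMBED_DIM))   # learnable
prompt = build_prompt_embeddings(fixed, ctx)
```

The design point this illustrates is that only `ctx` contributes trainable parameters, which is why halving the token count (as Reviewer 5SeZ asked) directly halves the parameter budget of each prompt group.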
Summary: The authors introduce a two-level prompt learning framework for label-efficient classification of WSIs using pretrained visual language encoders. The main novelty of the proposed method seems to be that at the instance level, domain-knowledge + GPT-4 guided prompt prototype groups are used to guide pooling of instance embeddings into the slide-level embedding, while the use of learnable prefix tokens + pretrained text embeddings for few-shot classification closely follows the established V+L few-shot literature. Experimentation was performed on several WSI datasets (CAMELYON16, TCGA, and an in-house cervical cancer dataset). Strengths: - This work explores several interesting ideas regarding instance- and bag-level prompts. Previous and concurrent works (e.g. - MI-Zero, PLIP [1,2]) have only explored the application of "instance-level" prompts (text prompts correlated with pathology regions-of-interest) for computational pathology. The application of using bag-level prompts in combination with GPT-4 provides new mechanisms for training and evaluating models for slide-level tasks in pathology. 1. Lu et al. 2023, Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images. CVPR 2023. 2. Huang et al. 2023, Leveraging medical Twitter to build a visual–language foundation model for pathology AI. bioRxiv 2023. Weaknesses: While the study presents interesting ideas which the reviewer believes to be valuable to the machine learning and computational pathology community, the reviewer has several concerns regarding the evaluation framework and ablation studies. 1. In the methodological details of this work, because the results in the few-shot settings can be highly variable, for each shot 5 randomly sampled sets of training bags are used, and the highest performance for each model is reported. 
The reviewer finds this practice somewhat questionable given that stability/consistency of the algorithm should be an important factor in deciding which algorithm works well in the few-shot setting. By only reporting the highest-performing run on the test set, we can have the scenario where Algorithm 1 obtains consistently reasonable, above-chance performance (e.g. [0.7, 0.7, 0.7, 0.7, 0.7]) while Algorithm 2 obtains performance below chance but a single lucky run (e.g. [0.4, 0.4, 0.4, 0.4, 0.75]), and the conclusion will be that Algorithm 2 outperforms Algorithm 1 by a wide margin of 5% - which is not a reasonable conclusion. In the few-shot setting reported in the paper, if the SD of the 5 runs can be as high as 10% - how can the reviewer be certain the proposed model in fact does outperform the other methods consistently when only the highest-performing run is reported? I would encourage this work to instead use the median for comparing the different models instead of the max, which is more robust to outliers compared to the mean. Additionally, the exact numbers of all 5 runs should be reported as a box plot in the supplement, and other metrics besides AUC (e.g. balanced accuracy) should be reported as well. 2. This work does not make comprehensive comparisons to other methods/in the ablation experiments. An important concern is that the proposed method uses both learnable prefix tokens at the instance level and at the slide level compared to CoOP, therefore effectively doubling the number of learnable parameters. (a) What happens when the proposed method uses the same number of learnable parameters as CoOP? (b) What happens if we cut the number of prefix tokens in half at both the slide level and the instance level relative to CoOP such that the total parameter count is the same? 3. Some design choices in the proposed method do not seem to be properly ablated, for instance: 
During training, an auxiliary loss (equation 9) is used to encourage de-correlation of different prompt prototypes, but the effectiveness of this choice does not seem to be ablated. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: Summarizing the concerns above: 1. What is the performance when using the median versus the max? 2. Effect of reducing the # of prefix tokens so that the total # of parameters is the same? 3. Ablation study concerning the auxiliary loss? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: - Limitations discussed in this work include the effectiveness of the prompt depending on the quality of visual representations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**. Using the median versus the max. **Response**. Firstly, "training five times with non-fixed labeled bags" in the original text means random selection of different labeled bags in each run based on shot count to compose the training set. The test set is pre-divided and remains unseen during training. All comparative methods use the same training and test sets in each run, ensuring fair comparisons. In the few-shot learning setup, limited labeled bags for training can cause performance disparities due to varying bag representativeness (like selecting slides with larger or smaller tumor areas). This leads to high standard deviation (STD) across runs with different labeled bags for training. This work is one of the first few-shot WSI learning studies, and there is not a relevant benchmark. Therefore, we conducted five randomized trainings with non-fixed labeled bags, studying the algorithm's holistic performance across diverse slide representativeness. As suggested by multiple reviewers, we supplemented five runs on Camelyon16 dataset using non-fixed bags, reporting Mean AUC (left panel) and STD (right panel) in Table 1 (bag AUC) and Table 2 (instance AUC). As the shot count increases, standard deviations of all methods tend to stabilize. Our proposed method still achieves the best performance. Mean AUC and STD on the other datasets will be provided in a camera-ready version. Furthermore, we supplemented five rounds of training using fixed labeled bags, employing the same bag in each run for training. Mean AUC (left panel) and STD (right panel) are presented in Table 3 (bag AUC) and Table 4 (instance AUC). The consistent labeled bags notably decrease variability and our method still achieves the best performance. **Q2**. Comparison with CoOp using equal learnable parameters **Response**. We conducted the experiment on the Camelyon 16 dataset, and the results are presented in Table 6 (mean bag AUC) and Table 7 (mean instance AUC). 
Experiments marked with the same "*" or "#" have the same learnable parameters. *a)* We maintained the baseline's parameter size (1. CoOp+Attention pooling) and reduced the tokens in (4. Our approach) and (3. Ablation method) to match the baseline's size. Despite this reduction, our method still outperformed others marked with "*" in Table 6 and Table 7. *b)* Comparing (5. CoOp+Prompt guided pooling) and (6. Our approach) with equal parameter size (#), our method again achieved superior performance in Table 6 and Table 7. *c)* Although reducing the parameter size led to some performance degradation, our approach consistently achieved the best performance. **Q3**. Ablation study concerning the auxiliary loss. **Response**. The auxiliary loss aims to separate instance prototypes learned by each instance prompt, ensuring distinct phenotypes representing WSIs. Crucial instance prototypes for slide classification stand out during aggregation. New ablation and sensitivity tests on the Camelyon16 dataset (Table 5) demonstrate that our method is not highly sensitive to the loss weight, but its addition significantly improves performance compared to not using it.
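The Q3 response above explains that the auxiliary loss separates the instance prototypes so that each prompt captures a distinct phenotype. One common way to implement such a de-correlation penalty — shown here purely as an illustrative sketch, not the paper's actual Equation 9 — is to penalize the off-diagonal entries of the prototypes' pairwise cosine-similarity matrix:

```python
import numpy as np

def decorrelation_loss(prototypes):
    """Mean squared off-diagonal cosine similarity between prototypes
    (P x D); pushing this toward zero encourages mutually distinct
    prototypes. A sketch only; the paper's exact formulation may differ."""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p @ p.T                       # (P, P) cosine-similarity matrix
    off_diag = sim - np.eye(len(p))     # zero out self-similarity
    return np.square(off_diag).sum() / (len(p) * (len(p) - 1))
```

Under this toy definition, identical prototypes give a loss of 1 and mutually orthogonal prototypes give 0, so the total objective (classification loss plus a weighted copy of this term) trades off accuracy against prototype diversity, which is what the sensitivity test in Table 5 varies.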
Summary: This paper introduces an innovative problem setting known as Few-shot Weakly Supervised WSI Classification (FSWC), wherein the availability of labeled WSIs is severely limited. In order to address this challenge, the authors propose a novel Two-level Prompt Learning MIL framework, named TOP, which leverages the power of the VL model and GPT-4 to enhance the learning process of the model. Strengths: 1. This paper demonstrates exceptional writing quality, distinguished by its clear and well-organized structure. The authors adeptly utilize intuitive figures and well-crafted paragraphs to effectively summarize their methods and contributions. Furthermore, the inclusion of extensive experiments and detailed ablation studies serves as a robust validation for their proposed method. 2. The proposed instance prompt guided pooling is a captivating and fitting approach for the vision language model task. It exhibits an intriguing method that holds potential for improving the performance of the model in this context. Weaknesses: 1. The problem addressed in this paper is not entirely novel within the field of WSIs, as previous studies like MI-Zero[1] have also recognized the challenge of limited labeled WSIs. 2. The utilization of pathology language prior knowledge derived from GPT-4 is indeed an interesting aspect of this paper. However, it is essential to address the concerns and questions raised regarding the validity and importance of this knowledge in the task at hand. To assess the correctness and relevance of the knowledge obtained from GPT-4, the paper should ideally outline an evaluation method or standards based on the output of GPT-4. This would help establish the reliability and usefulness of the information derived from the model. Additionally, the authors should clarify their rationale for choosing GPT-4's knowledge over manually designed and professionally validated visual descriptions. 
It would be beneficial to discuss how they compared the performance and efficacy of the two approaches and why GPT-4-based knowledge was deemed more appropriate. Regarding the generation of questions for GPT-4, it is crucial to provide insights into the process of selecting and designing these questions. The paper should clarify whether there were any specific criteria or standards used to ensure the validity and reasonableness of the questions. Addressing potential biases introduced by the question selection process is also important. Considering that GPT-4 may undergo updates over time, it is essential to acknowledge the potential impact on the answers generated by the model. The authors should discuss the implications of model updates on the overall performance and reproducibility of their proposed framework. In summary, addressing these questions and providing more detailed explanations would enhance the clarity and credibility of the paper, particularly concerning the validity and reliability of the pathology language prior knowledge derived from GPT-4. 3. The technical novelty of the paper is acknowledged to be limited. Although the paper introduces the instance prompt guided pooling method, which is derived from prototype learning and presents some novelty, it may not be considered sufficient to significantly differentiate it from existing approaches. Furthermore, it appears that the use of a Vision Language (VL) model to extract features and the utilization of learnable prompts from CoOp are commonly employed techniques within the field[1]. These aspects may not contribute significantly to the novelty of the proposed framework. Considering these limitations, it would be valuable for the authors to address the relative novelty of their contributions more explicitly. It could be helpful to discuss how their approach builds upon or improves existing methods, and to identify the specific aspects that differentiate their work from prior research in the field. 4. 
I have a concern regarding the comparison and reporting of results in the paper. Randomly training the network multiple times with different labeled bags and reporting the highest performance of each method may introduce some ambiguity and make the results appear misleading or tricky. To ensure a more robust and fair comparison, it would be preferable to adopt a standardized evaluation methodology. This could involve training and testing the network multiple times using consistent sets of labeled bags for each method, and then reporting the average performance across these trials. This approach would provide a more reliable representation of the methods' benefits and allow for a more accurate comparison between them. By employing a standardized evaluation methodology, the authors would enhance the credibility and trustworthiness of their reported results, addressing concerns about potential biases or inconsistencies introduced by the current approach. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It would be greatly appreciated if the author could consider making the code, trained models, and specific train-validation-test data splits publicly available for the associated public datasets used in the different methods. This step is crucial for ensuring the reproducibility of the results. Access to these resources would significantly facilitate the verification and replication of the findings. For other questions, please refer to the weakness part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1 & Q3**. Novelty compared with MI-Zero. **Response**. Our approach differs significantly from MI-Zero [1] in the following ways: *a)* MI-Zero focuses on training a large model like CLIP for instance-level or slide-level zero-shot transfer and classification, while we leverage CLIP as a backbone for instance and slide-level few-shot learning from a prompt-learning perspective. Our concepts and experimental approaches are distinct. *b)* As highlighted by reviewer 5SeZ in "Strengths", our work explores both instance and bag-level prompts, while MI-Zero only explores "instance-level" prompts and uses simple top-k pooling solely for bag-level classification. Our work presents notable innovations: *a) Conceptually*, we address the novel Few-Shot Weakly Supervised WSI Classification (FSWC) problem and pioneer bag and instance-level prompt learning with large vision-language model (CLIP) and large language model (GPT-4) in WSI classification. *b) Technically*, we propose a Two-level Prompt Learning MIL framework. At the instance level, we use pathology language prior knowledge from GPT-4 to guide feature aggregation into bag features. At the bag-level, we introduce bag-level pathology categories and visual pathology descriptions as prior knowledge for few-shot learning supervised by bag labels. These ideas are fundamentally different from previous studies. **Q2**. Validity, rationale, motivation and potential model updates' impact for using GPT-4’s knowledge. **Response**. *a) Validity and rationale*: We rigorously reviewed pathology knowledge descriptions from GPT-4 with three senior pathologists and found them accurate and detailed. Literature [2] supports GPT-4's reliability in producing medical domain knowledge due to its vast medical expertise in training data. *b) Motivation and importance*: Leveraging GPT-4's knowledge as templates enhances efficiency versus manual design. 
This approach aligns with the few-shot learning goal, easing pathologist annotation. Manual templates might not cover all aspects; specialized doctors' templates could be needed for varied cancer types/tasks. Additionally, different doctors' descriptions vary, lacking a standardized manual description. By leveraging GPT-4's versatility, our aim is to attain knowledge descriptions for multiple cancer types and tasks while avoiding manual domain biases. *c) Impact of model updates*: GPT-4's language descriptions contributed to training pathology models in our research. We will publicly share all used descriptions, codes, and models. This disclosure ensures reproducibility in reported tasks without the need of invoking GPT-4 for inference or new training. GPT-4’s upgrades won't influence current outcomes. We'll explore if GPT upgrades generate new descriptions and their effect on results. **Q4**. Consistent labeled bag sets for each method. **Response**. Firstly, "training five times with non-fixed labeled bags" in the original text means random selection of different labeled bags in each run based on shot count to compose the training set. The test set is pre-divided and remains unseen during training. All comparative methods use the same training and test sets in each run, ensuring fair comparisons. In the few-shot learning setup, limited labeled bags for training can cause performance disparities due to varying bag representativeness (like selecting slides with larger or smaller tumor areas). This leads to high standard deviation (STD) across runs with different labeled bags for training. This work is one of the first few-shot WSI learning studies, and there is not a relevant benchmark. Therefore, we conducted five randomized trainings with non-fixed labeled bags, studying the algorithm's holistic performance across diverse slide representativeness. 
As suggested by multiple reviewers, we supplemented five runs on Camelyon16 dataset using non-fixed bags, reporting Mean AUC (left panel) and STD (right panel) in Table 1 (bag AUC) and Table 2 (instance AUC). As the shot count increases, standard deviations of all methods tend to stabilize. Our proposed method still achieves the best performance. Mean AUC and STD on the other datasets will be provided in a camera-ready version. Furthermore, we supplemented five rounds of training using fixed labeled bags, employing the same bag in each run for training. Mean AUC (left panel) and STD (right panel) are presented in Table 3 (bag AUC) and Table 4 (instance AUC). The consistent labeled bags notably decrease variability and our method still achieves the best performance. [1] Lu et al. Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images, CVPR. (2023). [2] Nori et al. Capabilities of gpt-4 on medical challenge problems. arXiv:2303.13375 (2023). **Q5**. Code, data and model availability. **Response**. We will publicly share all used descriptions, codes, and models. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for supplementing five rounds of training using fixed labelled bags, employing the same bag in each run for training. Mean AUC (left panel) and STD (right panel) are presented in Table 3 (bag AUC) and Table 4 (instance AUC). The answer to my "Validity, rationale, motivation and potential model updates' impact for using GPT-4’s knowledge." also makes sense, so I would like to raise my recommendation of the paper to "acceptance beyond marginal". --- Reply to Comment 1.1.1: Title: Gratitude for Your Review Comment: Thank you for reviewing our paper. Your comments are greatly appreciated and will undoubtedly enhance the quality of our research. We look forward to revising our paper and submitting an improved version.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for your valuable comments. We first reply to questions raised by multiple reviewers and then other questions from every reviewer. We have uploaded all seven tables into a PDF file, and we sincerely appreciate your downloading and reviewing them. **Q1**. Average AUC for 5 runs instead of max AUC. (Reviewer VPd6, 5SeZ and 7zsM) **Response**. Firstly, "training five times with non-fixed labeled bags" in the original text means random selection of different labeled bags in each run based on shot count to compose the training set. The test set is pre-divided and remains unseen during training. All comparative methods use the same training and test sets in each run, ensuring fair comparisons. In the few-shot learning setup, limited labeled bags for training can cause performance disparities due to varying bag representativeness (like selecting slides with larger or smaller tumor areas). This leads to high standard deviation (STD) across runs with different labeled bags for training. This work is one of the first few-shot WSI learning studies, and there is not a relevant benchmark. Therefore, we conducted five randomized trainings with non-fixed labeled bags, studying the algorithm's holistic performance across diverse slide representativeness. As suggested by multiple reviewers, we supplemented five runs on Camelyon16 dataset using non-fixed bags, reporting Mean AUC (left panel) and STD (right panel) in Table 1 (bag AUC) and Table 2 (instance AUC). As the shot count increases, standard deviations of all methods tend to stabilize. Our proposed method still achieves the best performance. Mean AUC and STD on the other datasets will be provided in a camera-ready version. Furthermore, we supplemented five rounds of training using fixed labeled bags, employing the same bag in each run for training. Mean AUC (left panel) and STD (right panel) are presented in Table 3 (bag AUC) and Table 4 (instance AUC). 
The consistent labeled bags notably decrease variability and our method still achieves the best performance. **Q2**. The motivation, ablation and sensitivity tests of auxiliary loss. (Reviewer 5SeZ and 7zsM) **Response**. The auxiliary loss aims to separate instance prototypes learned by each instance prompt, ensuring distinct phenotypes representing WSIs. Crucial instance prototypes for slide classification stand out during aggregation. New ablation and sensitivity tests on the Camelyon16 dataset (Table 5) demonstrate that our method is not highly sensitive to the loss weight, but its addition significantly improves performance compared to not using it. **Q3**. Validity, rationale, motivation and potential model updates' impact for using GPT-4’s knowledge. (Reviewer VPd6 and r2fX) **Response**. *a) Validity and rationale*: We rigorously reviewed pathology knowledge descriptions from GPT-4 with three senior pathologists and found them accurate and detailed. Literature [1] supports GPT-4's reliability in producing medical domain knowledge due to its vast medical expertise in training data. *b) Motivation and importance*: Leveraging GPT-4's knowledge as templates enhances efficiency versus manual design. This approach aligns with the few-shot learning goal, easing pathologist annotation. Manual templates might not cover all aspects; specialized doctors' templates could be needed for varied cancer types/tasks. Additionally, different doctors' descriptions vary, lacking a standardized manual description. By leveraging GPT-4's versatility, our aim is to attain knowledge descriptions for multiple cancer types and tasks while avoiding manual domain biases. *c) Impact of model updates*: GPT-4's language descriptions contributed to training pathology models in our research. We'll publicly share all used descriptions, codes, and models. This disclosure ensures reproducibility in reported tasks without the need of invoking GPT-4 for inference or new training. 
GPT-4’s upgrades won't influence current outcomes. We'll explore if GPT upgrades generate new descriptions and their effect on results. [1] Nori et al. Capabilities of gpt-4 on medical challenge problems. arXiv:2303.13375 (2023). Pdf: /pdf/ba1fffaf6c89f349d13eaca83323d2e9d80c5760.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Scaling Open-Vocabulary Object Detection
Accept (spotlight)
Summary: The paper extends previous OWL-ViT detectors with self-training on 10B web image-text pairs. The authors design different self-training strategies, such as token dropping, instance selection, and mosaic augmentation. It is an achievement to beat all previous methods on open-vocabulary benchmarks. Strengths: - The authors provide necessary implementation details about scaling up the open-vocabulary detector on massive web data with a self-training method. - The performance on the open-vocabulary LVIS benchmark is truly impressive. - The authors promise to open-source their code. Weaknesses: - GLIP2 performs better than OWL-ST on the ODinW dataset; can the authors explain the possible reasons? - The authors may consider adding ablation studies with and without the mosaic augmentation. The additional gain from the mosaic augmentation is not clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What's the meaning of the $AP^{mini}$ metric in Table 1? Does it mean the LVIS test set? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I like this paper, which includes comprehensive experiments and excellent performance, so I think it deserves to be accepted. To me, however, the paper is not particularly interesting, since it reads more like a technical report with lots of engineering effort than a research paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below, we address your questions and comments. ## Response to Weaknesses **_GLIP2 performs better than OWL-ST on the ODinW dataset, can the authors explain the possible reasons?_** GLIP2 uses much more human-annotated data than OWL-ST. Specifically, OWL-ST only uses Objects365 and Visual Genome (indirectly, through the annotator model). GLIP2 additionally trains on COCO, OpenImages, ImageNetBoxes, and GoldG (which includes 800'000 images with human grounding annotations from RefCOCO, RefCOCOg, and RefCOCO+). In addition, some of the tasks in ODinW are derived from OpenImages, so GLIP/GLIP2 is partially trained on ODinW evaluation data. We believe that these additional human annotations explain why GLIP reports higher numbers on ODinW, and will explore adding these to the OWL-ST training or fine-tuning data. **_The authors may consider adding the ablation studies about w or w/o the mosaic augmentation._** We performed an ablation study on the mosaic augmentation and provide it in the rebuttal PDF. We will add it to the paper appendix. The results show that increasing the number of mosaic tiles improves overall LVIS AP (top left plot) at least up to $12 \times 12$ (the maximum we tried). Improvements are primarily due to the "rare" classes and "small" objects. Note that the benefit of mosaics may be due to seeing smaller objects on average, or due to seeing more WebLI images per training step (a $12 \times 12$ mosaic contains 144 WebLI images). ## Response to Questions **_What's the meaning of the $AP^{mini}$ metric in Table 1? Does it mean the LVIS test set?_** $AP^{mini}$ refers to the LVIS "minival" set, which is a subset of the validation set and was introduced by MDETR (https://arxiv.org/pdf/2104.12763.pdf) and subsequently used by some object detection papers like GLIP/GLIPv2. We will clarify this in the table caption. We report $AP^{mini}$ for comparability with the papers that only report that metric. 
Note that $AP^{mini}$ values are significantly different (usually higher) from the standard LVIS AP. ## Response to Limitations **_...this paper is more like a technical report with lots of engineering efforts rather than a research paper._** Given the focus on scaling in the title and text of the paper, we understand this impression. However, we want to emphasize that **we do not simply scale up existing methods. On the contrary, we provide new research results which show that prior approaches to self-training for object detection were not optimal. These new insights improve performance independently of massive scale.** Specifically, through systematic study, we find that weak filtering and simple label space construction outperform prior approaches like strong filtering (e.g. using only a single pseudo-box or query per image) or complex grammatical query parsing. Figures 2, 3, and 5 all provide new research results that are applicable to other tasks, already at moderate data and compute scales (also see response to Reviewer 5UCC). We therefore believe that our work is not only a technical scaling effort but provides a valuable update to the research on training localization models.
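The mosaic ablation described above (grids up to $12 \times 12$, i.e. 144 WebLI images per training example) can be illustrated with a minimal sketch. This assumes equally sized tiles and omits the resizing, padding, and annotation-remapping details of the actual training pipeline:

```python
import numpy as np

def make_mosaic(images: np.ndarray, k: int) -> np.ndarray:
    """Tile k*k same-sized images into one k x k mosaic.

    images: (k*k, H, W, C) array. Each tile keeps its resolution, so
    objects appear k times smaller relative to the full mosaic, and a
    single training example then contains k*k web images.
    """
    n, h, w, c = images.shape
    assert n == k * k, "need exactly k*k tiles"
    rows = [np.concatenate(list(images[r * k:(r + 1) * k]), axis=1)
            for r in range(k)]
    return np.concatenate(rows, axis=0)    # (k*H, k*W, C)

tiles = np.random.rand(4, 8, 8, 3)         # a 2x2 mosaic of 8x8 tiles
mosaic = make_mosaic(tiles, 2)
assert mosaic.shape == (16, 16, 3)
```

This makes concrete why the benefit of mosaics is hard to attribute: each grid step both shrinks objects (helping "small" and "rare" classes) and multiplies the number of images seen per step.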
Summary: The paper proposes a self-training recipe for open-vocabulary object detection that leverages weak supervision in the form of image-text pairs from the Web. The authors identify three key ingredients for optimizing the use of weak supervision for detection: choice of label space, filtering of pseudo-annotations, and training efficiency. They propose to use all possible N-grams of the image-associated text as detection prompts for that image and apply only weak confidence filtering to the resulting pseudo-labels. The self-training recipe is applied to the OWL-ViT detection architecture and is called OWL-ST. The authors introduce OWLv2, an optimized architecture with improved training efficiency. The proposed method surpasses prior state-of-the-art methods already at moderate amounts of self-training and achieves further large improvements when scaled to billions of examples. The authors also evaluate their models on a suite of "in the wild" datasets and study the trade-off between fine-tuned and open-vocabulary performance. Strengths: This paper is well-written, easy to understand, and very rigorous. This paper thoroughly discusses the methods and challenges of extending OWL-VIT to the billion-level image scale. This paper achieves surprising performance. Weaknesses: As the authors mentioned, the biggest issue with this paper is the significant amount of computation required, which makes it difficult for general research institutions to reproduce. Therefore, my question is whether the authors will make the research results available as a foundational model, similar to SAM, for public use. Additionally, can it create a wave of downstream tasks, like SAM did, by providing basic capabilities? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Since the authors have mentioned SAM, I am curious about how it compares to SAM. 
Although one is a segmentation model and the other is a detection model, SAM can also be combined with Grounded, such as in https://github.com/IDEA-Research/Grounded-Segment-Anything, to achieve detection. I'm wondering how the authors view the comparison between their model and SAM in terms of certain task capabilities. This is not a necessary question to answer, I just want to have a friendly discussion with the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The main issue is still the enormous amount of computation required. However, if it is open-sourced as a foundational model for public use, it would still make a significant contribution to the community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below, we address your questions and comments. ## Response to Weaknesses We are happy to confirm that we will make checkpoints and code publicly available. We hope that it will be useful as a foundation for downstream tasks. ## Response to Questions We see our work as complementary to SAM and think that it has large potential when combined with SAM, similar to the Grounded Segment Anything that you mentioned. SAM performs highly accurate conditioned segmentation, but it requires some spatial conditioning information (e.g. a box or a point) to know what to segment. Conditioning on text is only discussed briefly in the SAM paper and is not quantitatively evaluated. In contrast, we focus on the semantic aspect of object recognition, i.e. matching text descriptions to objects in an image. A combination of the two models, where OWLv2 is used to obtain boxes matching text queries, and SAM is then used to predict masks for the detected boxes, would provide a strong open-vocabulary segmentation solution. ## Response to Limitations Our fully trained models indeed required large amounts of compute. We will make these checkpoints publicly available to save other researchers the need for repeating the training. However, we would also like to point out that our method provides substantial improvements already at amounts of training that are comparable to previous models. In Figure 1 (right), the black cross indicates the amount of training used for the original OWL-ViT model. With similar training compute, and only about 250M additional web images, our L/14 model (orange line) improves over OWL-ViT by about 6 percentage points. We therefore make a useful contribution even at moderate amounts of compute. --- Rebuttal Comment 1.1: Comment: After reading the other reviews and the authors' rebuttal, all of my questions have been well addressed. Therefore, I am inclined to maintain my rating.
Summary: This paper presents the OWLv2 model and the OWL-ST self-training recipe for open-vocabulary object detection. The detection data is greatly enriched with the aid of self-training. Concretely, the authors use the WebLI dataset as the source of weak supervision for self-training. The dataset consists of approximately 10B images and associated alt-text strings (noisy captions). Then OWL-ViT CLIP-L/14 is utilized to annotate all images with pseudo boxes. For self-training at scale, the authors present several techniques, including token dropping, instance selection, and mosaics, to make the training efficient. Experiments on LVIS show the effectiveness of the proposed approach. Strengths: 1. This paper attempts to scale open-vocabulary object detection with self-training on a large-scale weakly labeled dataset. This is a good engineering work built upon existing detectors and datasets. This work verifies that large model + large data is a good option to enable zero-shot open-vocabulary detection. 2. The experimental results look promising. The result on the LVIS dataset is good. Weaknesses: 1. The authors should carefully polish their paper and fix typos such as: in L12, L/14 -> VIT/14; in L109, consist -> consists. 2. The authors are encouraged to discuss how to handle noise in self-training. What is the effect without handling this noise? 3. What is the inference process? The authors are encouraged to describe this in their next version. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses box. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below, we address your questions and comments. ## Response to Weaknesses 1. **_Typos:_** Thank you for identifying these typos, we will fix them and do an additional round of proof-reading. 2. **_Discussion of noise in self-training:_** In the paper, we already discuss one way to handle noise in Section 4.4 (_Filtering of Pseudo-Annotations_). In this section, we filter pseudo-annotations by annotator confidence, which removes noisy annotations at the expense of biasing the remaining annotations to objects that the annotator is good at. We find that strong filtering (e.g. confidence threshold 0.7) tends to be worse than weak filtering (e.g. threshold 0.3), which suggests that bias introduced by strong filtering is worse than the noise in the data. Based on the trends in Figure 3 (i.e. confidence threshold 0.1 is worse than 0.3), we believe that no filtering (i.e. not handling noise at all) also performs poorly, suggesting that some filtering is necessary but that the remaining noise is acceptable. We will expand this discussion in the paper. Please let us know if you have other analyses in mind. 3. **_What's the inference process?_** The inference is the same as for the original OWL-ViT (see L187f: _"Note that at inference, the model is identical to the original OWL-ViT."_). We will expand this sentence and provide some details rather than only referring to the original OWL-ViT paper. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: My concerns have been addressed. I tend to keep my original rating.
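The confidence filtering described in point 2 of the rebuttal above reduces, in essence, to thresholding pseudo-boxes on annotator confidence. A minimal sketch of that step follows; the dict-based annotation format is an illustrative assumption, not the actual data structure used:

```python
def filter_pseudo_annotations(annotations, threshold=0.3):
    """Keep pseudo-boxes whose annotator confidence meets the threshold.

    annotations: list of dicts with at least a "score" key.
    A weak threshold (e.g. 0.3) keeps more noisy boxes but avoids the
    bias of strong filtering (e.g. 0.7) toward objects the annotator
    already detects well.
    """
    return [a for a in annotations if a["score"] >= threshold]

boxes = [{"box": (0, 0, 10, 10), "score": 0.9},
         {"box": (5, 5, 20, 20), "score": 0.4},
         {"box": (1, 1, 3, 3), "score": 0.1}]
assert len(filter_pseudo_annotations(boxes, 0.3)) == 2
assert len(filter_pseudo_annotations(boxes, 0.7)) == 1
```

The rebuttal's finding is that the middle setting wins: threshold 0.3 outperforms both 0.7 (too biased) and 0.1 (too noisy).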
Summary: This work adopts a self-training strategy to generate web scale object annotations for open-vocabulary object detection. It uses an existing open-vocabulary object detector (OWL-ViT) to annotate 10B image-text pairs, and uses the annotated dataset for self-training. A various architecture is also proposed to improve the training efficiency. Experimental results show a large performance gain on open-vocabulary detection. Strengths: 1. A self-training strategy to annotate the bounding boxes for the large-scale image-text pairs. This provides an open-vocabulary detection dataset for various location-specific tasks, including detection, segmentation, visual grounding, etc. Despite self-training being widely explored in many previous works (discussed in Sec 2.3), this work provided a detailed ablation in the filtering strategy in Sec. 4.4. 2. The performance is promising in rare classes, indicating the ability of zero-shot transfer with the self-training strategy. Weaknesses: See questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper is a coherent work of OWL-ViT. It's not straight forward for me to drop the DETR decoder in OWL, but rather predict a bounding box with each patch token. Such a dense prediction results in a significant number of background proposals (Instance selection in Sec. 3.2), which leads to inefficient detection. 2. While N-grams tend to maintain a higher number of image-text alignments in comparison to rule-based methods (such as highest-scoring or threshold-based approaches), they also tend to generate more redundant alignments, particularly for obvious objects where the confidence scores are higher (e.g., both 'boy', 'a boy', 'a boy in red' might correspond to the same region). Does this bias the model towards learning obvious objects while potentially overlooking the hard examples? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The self-training strategy in the paper requires a large amount of computing resources and web-crawled data, which is not feasible for many colleges and research institutes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below, we address your questions and comments. ## Response to Questions 1. **_Is dropping the decoder inefficient?_** Our decoder-free architecture is actually more efficient than an equivalent encoder-decoder architecture like DETR. The number of encoder output tokens is the same in both cases. However, in a decoder model, every decoder layer cross-attends into the large number of encoder output tokens. Our encoder-only architecture only applies a lightweight objectness-prediction head to each encoder output token. Background tokens (= low objectness score) are then filtered out and do not need to be decoded into boxes (Instance Selection in Section 3.2). Therefore, the amount of processing that our model has to do per encoder output token is less than that of encoder-decoder models like DETR. 2. **_Do redundant ngrams bias the model towards learning obvious objects?_** No, because obvious objects do not contribute more pseudo-annotations than hard objects. Consider your example of an image containing a boy: OWL-ViT would predict one box for the boy in the image, and then assign high probabilities for the ngrams "boy", "a boy", "a boy in red" etc. There would be a large number of high-confidence, redundant ngrams for that box. However, for self-training, we only choose the single highest ngram for that box. This way, a box with many redundant matching ngrams will not have a higher weight in training than a box that has only a single matching ngram. Additionally, we use non-maximum suppression to remove near-duplicate pseudo-annotations. Further, we do not use "soft labels" during self-training, i.e. we treat low-confidence and high-confidence pseudo-boxes the same (as long as they meet the inclusion threshold). We will clarify this in the paper. 
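The n-gram handling described in the answer above — enumerate all n-grams of the alt-text as queries, then keep only the single highest-scoring n-gram per pseudo-box so redundant queries do not up-weight obvious objects — can be sketched as follows. The scoring model (the OWL-ViT annotator) is mocked with a plain dictionary, and the non-maximum suppression step is omitted:

```python
def ngrams(text: str, max_n: int = 3):
    """All word n-grams (n <= max_n) of the alt-text, used as queries."""
    words = text.lower().split()
    return [" ".join(words[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(words) - n + 1)]

def best_query_per_box(box_query_scores):
    """For each pseudo-box keep only its single highest-scoring n-gram,
    so redundant queries ('boy', 'a boy', ...) matching the same box
    contribute one pseudo-annotation, not several.

    box_query_scores: {box_id: {query: score}}
    """
    return {box: max(scores, key=scores.get)
            for box, scores in box_query_scores.items()}

queries = ngrams("a boy in red")
assert "a boy" in queries and "boy in red" in queries

scores = {"box0": {"boy": 0.8, "a boy": 0.9, "a boy in red": 0.7}}
assert best_query_per_box(scores) == {"box0": "a boy"}
```

Together with hard (non-soft) labels above the inclusion threshold, this is why a box with many redundant matching n-grams carries no more training weight than a box matched by a single n-gram.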
## Response to Limitations **_The self-training strategy in the paper requires a large amount of computing resources and web-crawled data, which is not feasible for many colleges and research institutes._** We would like to point out that our method provides substantial improvements already at amounts of training that are comparable to previous models and useful for resource-constrained researchers. In Figure 1 (right), the black cross indicates the amount of training used for the original OWL-ViT model. With similar training compute, and only about 250M additional web images, our L/14 model (orange line) improves over OWL-ViT by about 6 percentage points. Datasets with this amount of web data are publicly available (e.g. at https://laion.ai/). We therefore believe that this research is feasible with moderate resources and public data. Additionally, we will release self-trained model checkpoints. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Most of my concerns are addressed. Despite this work being an extension of object detection self-training on a large dataset, it provides some insights into scaling up the detection dataset. Besides, the experimental results impressed me, thus my decision tends to accept.
Rebuttal 1: Rebuttal: We thank the reviewers for their useful feedback and comments. The reviews were unanimously positive. In our response, we address the remaining questions and concerns. We provide an overview below and detailed responses to the individual reviewers. In particular: 1. We confirm that we will release trained model checkpoints upon publication. 2. We now provide additional experiments to show the benefit of mosaics (see PDF; requested by Reviewer F7mV). 3. We clarify that we do not simply scale up existing methods, but provide new research results that challenge prior approaches to self-training and are useful independently of scale (see response to Reviewer F7mV). We hope our work will be of value to the NeurIPS community both through the model release and through our research contributions. Pdf: /pdf/3dff2f36f76f79bce16a3b8ab29adb5866186a38.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training
Accept (poster)
Summary: This paper proposed a new method to enable effective contrastive learning (CL) on the lightweight encoder without a mega-size teacher model, which can thus reduce the training cost. More specifically, it expanded the target model to a larger model, which shares its weights with the target model. Then the CL problem is formulated as a slimmed training task with asymmetrical encoding. Furthermore, this work incorporated cross-distillation to further minimize the decorrelation between the embeddings of the same view but from different encoders. Experimental results show that the proposed method can outperform the existing baselines with much smaller computational costs on the lightweight models. Strengths: 1. The paper is well-motivated. Figure 1 clearly shows that the efficient CL of the lightweight model is challenging and what improvement the proposed method can make to this problem. 2. The proposed method is reasonable and Figure 2 clearly shows the key idea. 3. The proposed method can be effective in the CL of not only a lightweight model but also a mega-size model. 4. It is good to see that the authors include experimental results showing that the trained encoder generalizes well to different types of downstream tasks. Weaknesses: 1. Although the authors claim Ref [25] cannot work on the mega-size encoder, it should be compared with the proposed method on the lightweight encoder. 2. It would be better if the authors could show the performance of supervised learning in the tables of the experiment section to serve as the upper bound of the proposed method. 3. It would be better if the authors could add a column for the training cost in Tables 3, 4, and 5. 4. Some technical details are not clear, as shown in the question part. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you provide some explanation about why the proposed method can achieve better performance than the supervised learning in Table 6? 2.
In Figure 1, there are two red spots annotated with "SCAL + XD". What are the differences? 3. Can the authors provide an example to illustrate the meaning of the style of "K $\times$ -1 $\times$ " for the slimming ratio? For instance, if the pruning ratio of the task model is 0.5 compared with the expanded model, what is the value of K and $s$? 4. In Equation (7), what is the meaning of $n, i, j$? 5. For $h_{\phi}$ from the two different encoders, are they of the same dimension? Do $Z_{A}$ and $Z_{B}$ have the same dimension? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Although the author claims that their method does not need an extra mega-size teacher, their method still needs an expanded version of the task model for effective CL. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 8Hzi Thank you for your thorough review on our paper and we appreciate your positive feedback. ### Weakness **Weakness 1:** Although the authors claim Ref [25] cannot work on the mega-size encoder, it should be compared with the proposed method on the lightweight encoder. **Response to Weakness 1:** Thank you for your feedback. [25] proposed an input-scaling and cropping strategy dedicated to lightweight encoders. Noticeably, [25] introduces **four** different types of multi-view sampling parameters as the hyperparameters of the training process. Such a fine-grained scaling strategy is orthogonal to our proposed method, which focuses on a generic training solution for lightweight contrastive learning with normal initialization and commonly-used data augmentation strategy as prior contrastive learning methods. Therefore, our method has better generality compared to [25], for both lightweight models (SACL+XD) and large-sized encoders (XD only). The model-specific data augmentation and local-view scaling strategy will remain as our future work. --- **Weakness 2:** It would be better if the authors could show the performance of supervised learning in the tables of the experiment section to serve as the upper bound of the proposed method. **Response to Weakness 2:** Thank you for pointing this out. Due to the page limit of rebuttal, we will include the supervised learning results as the upper bound of the performance of all the tables in the next version of the paper. --- **Weakness 3:** It would be better if the authors can add a column for the training cost in Table 3, 4 and 5. **Response to Weakness 3:** Thank you for highlighting this. We updated Table 5 with the total training FLOPs as **Table 7** of the attached PDF. Due to the page limit of rebuttal, we will include the detailed training FLOPs in Table 3 and 4 in the next version of the paper. 
------ ### Questions: **Q1:** Can you provide some explanation about why the proposed method can achieve better performance than the supervised learning in Table 6? **A1:** Thank you for your question. As presented in prior works on contrastive learning, aligning the latent information leads to enhanced robustness in learning various visual representations. Compared to supervised learning with deterministic labels, the encoder (e.g., ResNet-50) trained by contrastive learning can achieve better performance on the downstream small vision tasks with minimum tuning, as shown in [16, 31]. The proposed cross-distillation algorithm not only maximizes the correlation between latent features but also avoids feature aliasing across different dimensionality, which further enhances the representation learning of the model. --- **Q2:** In Figure 1, there are two red spots annotated with "SCAL + XD". What are the differences? **A2:** Thank you for pointing this out and we apologize for the confusion. The large dots and small dots in Figure 1 represent the EfficientNet-B0 encoder (4.0 million parameters) and the MobileNet-V3 encoder (3.0 million parameters), respectively, as shown in the bottom-right legend of the figure. We will further clarify that in the next version of the paper. --- **Q3:** Can the authors provide an example to illustrate the meaning of the style of "$K\times$-$1\times$" for the slimming ratio? For instance, if the pruning ratio of the task model is 0.5 compared with the expanded model, what is the value of $K$ and $s$? **A3:** Thank you for your question. By "$K\times$-$1\times$" asymmetry, we mean that we slim both input and output channels by $1/K$. If $K=2$, the input/output channels are each slimmed by half; since the parameter count scales with both the input and output channel widths (i.e., as $1/K^2$), this leads to $s = 1 - 1/K^2 = 75\%$ structured sparsity. --- **Q4:** In Equation (7), what is the meaning of $n$, $i$, $j$? **A4:** Thank you for your question. $n$ represents the batch index, and $i$ and $j$ represent the dimensionality indices across the latent output.
Correspondingly, $C_{i,j}$ represents the $(i,j)$ element of the correlation matrix. We will further clarify the symbols in the next version of the paper. --- **Q5:** For $h_{\phi}$ from the two different encoders, are they of the same dimension? Do $Z^A$ and $Z^B$ have the same dimension? **A5:** To make the dimensionality matched and non-empty at the output of the last layer, we skip the slimming in the output channel (the input channel is still slimmed) of the final layer. Therefore, $Z^A$ and $Z^B$ have the same dimensionality. --- Rebuttal Comment 1.1: Title: Confirmation of Reading the Rebuttal Comment: Thanks for the detailed feedback from the authors. The answers they provided addressed most of my questions and helped me gain a deeper understanding of the contribution and techniques of this work. Therefore, I prefer to keep my previous score to show my support for this work.
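The symbols explained in A4 (batch index $n$, latent-dimension indices $i, j$, correlation matrix $C_{i,j}$) match a Barlow Twins-style cross-correlation objective. As a hedged sketch — the paper's exact Equation (7) may differ in normalization and weighting, and the function names here are illustrative — the matrix and a decorrelation loss over it could look like:

```python
import numpy as np

def cross_correlation(z_a: np.ndarray, z_b: np.ndarray) -> np.ndarray:
    """Cross-correlation matrix C between two batches of embeddings.

    z_a, z_b: (N, D) latent outputs Z^A, Z^B of the two encoders.
    n indexes the batch; i, j index latent dimensions, so C[i, j]
    correlates dimension i of Z^A with dimension j of Z^B.
    """
    # Standardize each dimension over the batch (mean 0, std 1).
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-12)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-12)
    return z_a.T @ z_b / z_a.shape[0]      # (D, D)

def decorrelation_loss(c: np.ndarray, lam: float = 5e-3) -> float:
    """Pull the diagonal toward 1 (maximize correlation of matching
    dimensions) and push off-diagonal terms toward 0 (avoid feature
    aliasing across different dimensions). Illustrative sketch."""
    on = ((1.0 - np.diag(c)) ** 2).sum()
    off = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return float(on + lam * off)
```

Note that this formulation presumes $Z^A$ and $Z^B$ share a dimensionality $D$, which is exactly why A5 explains that the output channels of the final layer are exempt from slimming.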
Summary: This paper proposes a self-supervised contrastive learning method to improve lightweight network performance and reduce the training cost. It mainly consists of two components: slimmed asymmetrical contrastive learning (SACL) and cross-distillation (XD), which are able to train the efficient network from scratch without the usage of a pre-trained strong teacher. Experiments demonstrate the effectiveness of the proposed method in terms of accuracy and training FLOPs reduction. Strengths: 1. SACL+XD is effective in improving lightweight network performance with efficient training. 2. The paper is well-written and easy to follow. Weaknesses: 1. SACL+XD seems to apply the slimmable neural network [ref-1] to Disco [15]. Therefore, the novelty seems to be limited. 2. [Experimental issues.] (1) This paper mainly uses several efficient backbones (e.g., MobileNets) as the base model backbone; it lacks comparisons on denser backbones, such as ResNet-50. (2) SACL+XD trains the models from scratch by removing a unified amount of channels based on the lowest magnitude score. I think the initial weights should have an effect on the final performance, as the channel selection in a slimmed network $f_{\theta}^s$ may be different under different weight initialization settings. However, the experiments lack such a comparison on the initialization, as well as the mean accuracy with std. (3) In this paper, computation reduction is evaluated by training FLOPs. The authors should add the practical training time for a more comprehensive comparison. [Ref-1] Yu J, Yang L, Xu N, et al. Slimmable neural networks. arXiv preprint arXiv:1812.08928, 2018. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. The authors well address the impact and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer J8Hb Thank you for your thorough review on our paper and we appreciate your positive feedback. **Weakness 1:** The SACL+XD seems to apply the slimmable neural network[ref-1] to Disco[15]. Therefore, the novelty seems to be limited. **Response to Weakness 1:** Thank you for your feedback, but we respectfully disagree with your comments. Firstly, the slimmable network [ref-1] is designed for training **multiple** sparse subset networks with **supervised learning**. However, the proposed method introduced two novel techniques of (1) *slimmed asymmetry* and (2) *cross distillation* for **self-supervised** contrastive learning. Our paper aims to resolve the low accuracy issue of lightweight models (e.g., MobileNet) trained by contrastive learning schemes. The objective is to obtain a **single** powerful, lightweight model via contrastive learning without introducing the large-sized teacher, which can also achieve superior performance in the downstream vision tasks, as summarized in Table 2 and Table 6 of the original manuscript. Secondly, the proposed method has key essential differences and novelties compared to Disco [15]. In particular, Disco [15] introduced an auxiliary "mean student" encoder in addition to the student-teacher pair. In other words, Disco [15] complicates the lightweight model contrastive learning with complex distillation design, and the large-sized ResNet teacher model **is still preserved** in the contrastive learning process. On the contrary, our proposed method **completely eliminates** the "ResNet teacher" distillation and teacher pre-training for lightweight contrastive learning, while achieving superior performance in both ImageNet and downstream tasks. In conclusion, the proposed method is orthogonal to slimmable neural networks [ref-1] and Disco [15]. Furthermore, our method completely eliminates the "student-teacher" distillation that is heavily exploited in Disco [15].
--- **Weakness 2-1:** (1) This paper mainly uses several efficient backbones (e.g., MobileNets) as the base model backbone. It lacks comparisons on denser backbones, such as ResNet-50. **Response to Weakness 2-1:** While it is correct that the main objective of our paper is to train a high-performance, lightweight encoder via contrastive learning (a long-standing problem for prior contrastive learning methods), our original manuscript also validated the proposed cross-distillation (XD) method on the large-sized ResNet-50 backbone model. We therefore respectfully disagree with the comment that the paper lacks comparisons on denser backbones like ResNet-50. As shown in Table 2 of the original manuscript, our method achieves improved performance on ResNet-50 compared to recent contrastive learning methods, which demonstrates the versatility of our proposed method. In conclusion, the proposed method achieves good performance with **both** lightweight and large-sized encoder models. **Weakness 2-2:** (2) SACL+XD trains the models from scratch by removing a unified amount of channels based on the lowest magnitude score. I think the initial weights should have an effect on the final performance, as the channel selection in a slimmed network $f_{\theta}^s$ may differ under different weight initialization settings. However, the experiments lack such a comparison on initialization, as well as the mean accuracy with std. **Response to Weakness 2-2:** Thank you for your question. We want to highlight that the major focus of our method is providing a generic and versatile contrastive learning solution for lightweight models. In particular, our motivation is directly training the lightweight encoder via contrastive learning without introducing expensive distillation [9, 14, 15, 32], additional tuning of initialization [24], or fine-grained view scaling and sampling [25]. 
With the simple training setup of the proposed method, the visual representation learned via our approach leads to superior performance on downstream tasks compared to a lightweight model trained by supervised learning. Unlike previous work [24], we posit that initializing the weights with minimal distillation would have minimal impact on our asymmetrical slimming. This is because the lightweight encoder is sliced from the widened model, which negates the benefit of the weight initialization. --- **Weakness 2-3:** (3) In this paper, computation reduction is evaluated by training FLOPs. The authors had better add the practical training time for a more comprehensive comparison. **Response to Weakness 2-3:** Thank you for pointing this out. To address your question, we summarize the comparison of training time per epoch in **Table 4** of the attached PDF. These numbers are measured by running a single training task on a single Nvidia A100 GPU. We will include this table, together with all the other models, in the next version of the paper. --- Rebuttal Comment 1.1: Title: After the rebuttal Comment: Thanks for the authors' response. My concerns, such as the novelty issue and the additional experiments, are well addressed in the rebuttal. I will increase my score to weak accept.
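The magnitude-based channel slimming referenced in Weakness 2-2 above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (per-output-channel L1 scores and top-k selection), not the authors' implementation:

```python
import numpy as np

def slim_channels(weight, keep_ratio):
    """Drop the lowest-magnitude output channels of a conv weight tensor.

    weight: array of shape (out_channels, in_channels, kh, kw)
    keep_ratio: fraction of output channels to retain, in (0, 1]
    Returns the slimmed weight and the sorted indices of the kept channels.
    """
    n_out = weight.shape[0]
    n_keep = max(1, int(round(n_out * keep_ratio)))
    scores = np.abs(weight).reshape(n_out, -1).sum(axis=1)  # L1 score per channel
    kept = np.sort(np.argsort(scores)[-n_keep:])            # lowest-score channels removed
    return weight[kept], kept

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))           # toy conv layer: 8 output channels
w_slim, kept = slim_channels(w, keep_ratio=0.5)  # slim to half the channels
assert w_slim.shape == (4, 3, 3, 3)
```

In an actual training loop, such a slimmed copy would serve as the second (asymmetrical) encoding branch; the scoring criterion and slicing granularity here are placeholders for illustration.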
Summary: This paper introduces a self-supervised contrastive learning algorithm designed specifically for training lightweight models, eliminating the need for a large teacher model. The algorithm comprises two main components: slimmed asymmetrical contrastive learning (SACL) and cross-distillation (XD). The authors evaluate their approach using different lightweight models and datasets, demonstrating its superiority in terms of performance and efficiency over existing methods. Strengths: 1, This paper is well-motivated. This paper tackles a significant and practical challenge of training lightweight models using contrastive learning, which has received limited attention in previous research efforts. 2, The paper presents a novel and straightforward concept of slimming the host encoder, creating asymmetrical encoding paths for contrastive learning. This innovative approach effectively reduces training costs and enhances the performance of lightweight models. 3, The writing is clear and easy to follow. 4, The paper conducts thorough experiments on diverse models, datasets, and downstream tasks, and conducts comprehensive comparisons with state-of-the-art methods. The results strongly validate the efficacy and general applicability of the proposed method. Weaknesses: 1, To enhance the clarity and understanding of the proposed methods, it would be beneficial to provide additional intuition. Specifically, more insights can be provided on how cross-distillation aids in overcoming the distortion resulting from asymmetrical encoding. Additionally, exploring the trade-offs associated with different levels of asymmetry and sparsity would further enrich the paper. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1, While the paper focuses on the limitations of contrastive learning (CL) for lightweight models, it is worth considering whether alternative self-supervised learning methods, such as generative pre-training or reconstruction-based pre-training, are effective for these models. Therefore, it would be valuable to include a comparison between the proposed methods and these alternative approaches to provide a comprehensive understanding of the performance of different self-supervised learning techniques on lightweight models. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Please refer to the weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer UkVS ### Weaknesses Thank you for your thorough review of our paper and we appreciate your positive feedback. **Weakness 1:** To enhance the clarity and understanding of the proposed methods, it would be beneficial to provide additional intuition. Specifically, more insights can be provided on how cross-distillation aids in overcoming the distortion resulting from asymmetrical encoding. Additionally, exploring the trade-offs associated with different levels of asymmetry and sparsity would further enrich the paper. **Response to Weakness 1**: Thank you for your comments. The asymmetric encoding of the proposed SACL algorithm introduces distorted latent information in one branch of contrastive encoding. Meanwhile, asymmetrical encoding + contrastive learning can also be treated as a distillation task. However, naïvely minimizing the distance between the dense and slimmed encoders is not a perfect solution for learning, as pointed out in Section 1.2 of the supplementary material. Therefore, the design of the contrastive loss should (1) align the features across different dimensions of the latent space and (2) avoid feature mismatch and collapse between the slimmed and original features. As a result, the cross-distillation loss is added as a moving-average term on top of the original SACL loss function. Regarding the trade-offs at different sparsity levels, we explored different asymmetry-accuracy trade-offs in Table 5 of the original manuscript. To further clarify the trade-off between accuracy and training cost, we extended Table 5 of the original manuscript with training FLOPs and summarized the comparison in **Table 7** of the attached PDF. 
------ ### Questions **Q1:** While the paper focuses on the limitations of contrastive learning (CL) for lightweight models, it is worth considering whether alternative self-supervised learning methods, such as generative pre-training or reconstruction-based pre-training, are effective for these models. Therefore, it would be valuable to include a comparison between the proposed methods and these alternative approaches to provide a comprehensive understanding of the performance of different self-supervised learning techniques on lightweight models. **A1:** Thank you for pointing this out. Currently, our paper mainly focuses on contrastive learning with lightweight models. We will investigate lightweight generative pre-training in our future work.
Summary: The authors propose a combination of two methods: Slimmed Asymmetrical Contrastive Learning (SACL) and Cross-Distillation (XD) with a correlation-maximization loss. SACL does magnitude pruning of filters at each epoch, and the pruned model is used as the encoder of one of the two views used in the contrastive loss. They find that XD with an efficient network already provides competitive performance, and SACL brings performance to state-of-the-art. Evaluations are done with linear evaluation on ImageNet, CIFAR, and VOC2007. Strengths: Originality: Though cross-distillation and training sparse networks are both known techniques, their combination with correlation-based contrastive learning is novel. Quality: The quality of the experimental design is sufficient to be convincing, and the idea makes sense and seems easy to implement, increasing potential impact. Clarity: The clarity of the writing is reasonably high, with the exception of the description of XD (see weaknesses). Significance: Self-supervised pre-training has proven to be quite useful, and efforts to remove key limitations in encoder size can achieve important practical impact. Weaknesses: * Experiments: One key advantage of SSL methods is in transfer learning and learning general features. However, the transfer learning study in this work is relatively small, with only CIFAR and VOC presented. In my mind, this is the biggest weakness of the work. * It's not explicitly stated how the XD experiments are set up. Are they cross-distillation across two instances of a network of the same architecture (but independent weights), like Figure 2c but without slimming? * The slimming idea is general, but the implementation seems CNN-specific. * Line 85: Minor, but saying ResNet-50 is mega-sized is a bit of an overclaim IMO. * Lack of substantial discussion of different axes of efficiency (FLOPs, wall time, activation count). 
Each of these metrics is useful in its own way, but training FLOPs is the primary focus in this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How does the method compare to a supervised high-water mark, beyond the experiments in Table 6? * How is transfer performance, beyond CIFAR and VOC? * How could one implement an equivalent slimming procedure for ViTs? Having a plausible path for implementation in ViTs could further strengthen the work. * In Table 2, row SSL-Small, why is the training FLOPs much larger than XD-only? It is the same architecture, same epoch count, no pretraining. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Rebuttal to Reviewer CDjk Thank you for your thorough review of our paper and we appreciate your positive feedback. ### Weaknesses **Response to Weakness 1**: Thank you for pointing this out. To address your comments, we extended the experiments in Table 6 of the original manuscript with additional datasets and results, as shown in **Table 1** of the attached PDF file. As shown in Table 1, the lightweight model trained by the proposed method is powerful across a wide spectrum of downstream tasks. Compared to the model pre-trained on ImageNet-1K with **supervised learning**, the proposed method consistently achieves superior performance on all tasks, which indicates the high effectiveness of the proposed method in visual representation learning. We will include this updated table in the next revised version of the paper. ------ **Response to Weakness 2:** Thank you for your comments. We want to clarify the settings of XD as follows: - The cross distillation (XD) and SACL-XD use the same setting, where the weights are **shared** between the two encoders. Since the augmentation and sparsity are collectively applied to the encoder, the latent outputs need to preserve similarity by having identical non-slimmed weights. - As described in lines 230 to 241 of the original manuscript, minimizing the cross-distillation loss $\mathcal{L}_{\texttt{CD}}$ avoids aliasing features across different dimensions of the augmented input. Therefore, consistent encoding is required, which necessitates the shared weights between the two encoders. ------ **Response to Weakness 3:** We will change the "mega-sized" encoder to "large-sized" encoder in the next revised version of the paper. ------ **Response to Weakness 4:** Thank you for your feedback on this. 
To address your question, we extended Table 5 of the original manuscript with results for the ViT-Tiny [12] model trained by the proposed SACL-XD method under slimming ratios of 1.25$\times$ - 1$\times$. The updated table (**Table 2**) is attached in the PDF file. Specifically, we slim down the embedding dimensionality of the ViT model, which further reduces the model width in both the input and output features of the fully-connected layers (and normalization layers). As shown in Table 2, the proposed SACL+XD consistently achieves improved accuracy compared to the prior baselines. We will include the details of the transformer-based SACL-XD in the next version of the paper. ------ **Response to Weakness 5:** Thank you for pointing this out. To address your comment, we summarized the forward-pass activation count of the training process in Table 3 of the attached document. Specifically, we compute the total activation count in **both** contrastive encoders (or teacher and student) in each forward pass during the training process. For the previous distillation processes, the intermediate activations result from both the student and the large-sized teacher models. As shown in **Table 3**, the proposed method trains the lightweight encoder without introducing the large ResNet teacher model, collectively achieving better accuracy and a lower activation count, which further leads to reduced memory consumption during training. In **Table 4** of the attached PDF file, we also summarize the comparison of training time per epoch. These numbers are measured by running a single training task on a single Nvidia A100 GPU. ------ ### Questions **Q1:** How does the method compare to a supervised high-water mark, beyond the experiments in Table 6? **A1:** Thank you for your question. 
Regarding the general comparison with supervised learning on small-sized vision tasks (e.g., CIFAR-10/100), we validated the proposed algorithm from two perspectives: 1) comparison with the large-sized dense model and 2) comparison with the sparse model pruned from the large-sized dense model. We use the widely reported CIFAR performance as an example and summarize the comparison in **Table 5** of the attached PDF file. [Ref-1]: U. Evci, et al., "Rigging the lottery: Making all tickets winners", ICML, 2020. [Ref-2]: S. Liu, et al., "Sparse Training via Boosting Pruning Plasticity with Neuroregeneration", NeurIPS, 2021. As shown in the table, the lightweight MobileNet-V3 model trained by the proposed method achieves better performance compared to the SoTA sparse training algorithms with similar parameter sizes ($\sim$3 million). In other words, the proposed method can successfully train a strong, lightweight vision learner with superior downstream performance **without** compression or pruning. ------ **Answer to Q2:** Please refer to our response to Weakness 1. ------ **Answer to Q3:** Please refer to our response to Weakness 4. ------ **Q4:** In Table 2, row SSL-Small, why is the training FLOPs much larger than XD-only? It is the same architecture, same epoch count, no pretraining. **A4**: Thank you for pointing this out. The FLOPs value for SSL-Small [24] in Table 2 of the original manuscript was incorrect due to a mistake, for which we apologize. We corrected the top part of Table 2 of the original manuscript as Table 6 in the attached PDF file. As reported in Table 7 and Table 10 of [24], the model is trained for 800 epochs with contrastive learning and initialized with 2 epochs of SEED pre-training. Specifically, we adopt Thop as the FLOPs counter. For EfficientNet-B0, the total training FLOPs **per epoch** is 4.2E+15, and the total FLOPs of 800 epochs is 3.4E+18. 
Together with the 2 epochs of pre-training with SEED [14] (5.29E+16 per epoch), the total training FLOPs of SSL-Small = 3.43E+18. Applying the same computation to MobileNet-V3 leads to total FLOPs = 1.94E+18. Due to the larger training effort and the additional pre-training, the training cost of [24] is still higher than ours. --- Rebuttal Comment 1.1: Title: Acknowledgment Comment: Thank you to the authors for their rebuttal. I have kept my score but am increasing my confidence to reflect my now stronger understanding of the paper.
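The FLOPs accounting in A4 above can be reproduced with a few lines of arithmetic. A sketch using the per-epoch figures reported in the rebuttal (the variable names are ours):

```python
# Per-epoch training FLOPs reported for SSL-Small [24] with EfficientNet-B0:
cl_flops_per_epoch = 4.2e15     # contrastive learning, per epoch
seed_flops_per_epoch = 5.29e16  # SEED pre-training, per epoch

# 800 epochs of contrastive learning + 2 epochs of SEED pre-training
total = 800 * cl_flops_per_epoch + 2 * seed_flops_per_epoch
print(f"SSL-Small total training FLOPs: {total:.3e}")  # on the order of 3.4e18
```

The dominant term is the 800 contrastive-learning epochs; the 2-epoch SEED initialization contributes only about 3% of the total.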
Rebuttal 1: Rebuttal: Dear Reviewers and AC, We thank all the reviewers for their helpful feedback, comments, and questions. As mentioned in the individual rebuttal thread, we upload the 1-page PDF attachment that includes all the updated tables for your reference. Please let us know if you have any further questions or comments. Thank you. Pdf: /pdf/54392e9bb0926ae0bd4817da48559c9c8e5bd532.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More
Accept (poster)
Summary: This paper studies robustness certification for equivariant tasks. Specifically, adversarial robustness with group equivariance is defined in terms of input and output distances. Then equivariance-preserving randomized smoothing is introduced, along with various smoothing schemes. Experiments show that robustness guarantees can be obtained via equivariance-preserving randomized smoothing. Strengths: - The motivation for studying robustness with equivariance is clear and the problem is well formulated. - The theory part looks sound and detailed, and the paper is well organized. Weaknesses: - There are not enough baselines to show that equivariance can improve provable robustness as claimed on line 8 of the abstract. It is expected to compare with some previous works to back up the claim that prior work underestimates robustness, as line 399 states. - If I understand correctly, the permutation-invariant task is a special case of an equivariant task, as the experiment on point cloud classification shows. Therefore, transformation-specific robustness works [1, 2, 3, 4] can be discussed, e.g., image/point cloud classification against 2D/3D rotation or translation. It will be of great interest to discuss the connection between these resolvable/non-resolvable transformations and group equivariance. - Each experiment needs more justification for choosing different smoothing schemes and measure types. - The notations in Section 3 regarding measure-theoretic randomized smoothing are a bit confusing and not easy to follow. It would be better to explain each domain notation explicitly and up front. --- [1] Li, Linyi, et al. "TSS: Transformation-specific smoothing for robustness certification." *Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security*. 2021. [2] Chu, Wenda, Linyi Li, and Bo Li. "TPC: Transformation-specific smoothing for point cloud models." *International Conference on Machine Learning*. PMLR, 2022. 
[3] Hu, Hanjiang, et al. "Robustness Certification of Visual Perception Models via Camera Motion Smoothing." *Conference on Robot Learning*. PMLR, 2022. [4] Hao, Zhongkai, et al. "GSmooth: Certified robustness against semantic transformations via generalized randomized smoothing." *International Conference on Machine Learning*. PMLR, 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Summary of rating: I am most concerned with the first bullet point in Weaknesses, where more experiments are expected to support the claims. I would be happy to raise my rating if it is well addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! ## Baselines and the deficiencies of prior work We discuss this aspect in detail in the global rebuttal comment above. In short: We did not mean to say that existing certification procedures had some weakness that needed to be resolved in order to obtain stronger guarantees. What we meant to express is that **prior work does not adequately interpret the robustness guarantees computed by their proposed certification procedures**. Prior work uses methods like convex relaxation to prove that an equivariant model's prediction is (almost) constant within a certified region $B = \\{x' \mid d_\mathrm{in}(x,x') \leq \epsilon\\}$. One of our main arguments in Section 5 is that, under our sound notion of robustness, this implies that the model is robust (but not constant) within region $\hat{B} = \\{x' \mid \hat{d}_\mathrm{in}(x,x') \leq \epsilon\\} \supseteq B$. Here, $\hat{d}\_\mathrm{in}$ is the action-induced distance from Section 4. Depending on $d\_\mathrm{in}$ and the equivariance group, the region $\hat{B}$ may be significantly larger than the region $B$ reported in prior work (see visual example in global comment above). Importantly, this does not depend on a specific certification procedure or baseline. **Baselines:** Our paper is not primarily about which certification method we use, but about what their computed guarantees actually mean. However: * For GNNs, we evaluate every existing certificate for insertion/removal of edges/attributes. This corresponds to the $(c^+=1 = c^-)$ lines in Fig. 5, Fig. 6 and Appendix A.3. We further develop novel methods for non-uniform cost. * There is no prior work for models with continuous equivariances like DimeNet++. This is why we had to develop equivariance-preserving randomized smoothing in Section 5.1. * Similarly, there is no method outside randomized smoothing that can prove robustness for DGCNN under $\ell_2$ norm threats. 
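The enlargement from $B$ to $\hat{B}$ can be illustrated numerically. A minimal sketch, assuming the action-induced distance takes the orbit-distance form $\hat{d}_\mathrm{in}(x, x') = \min_{g \in G} d_\mathrm{in}(x, g \circ x')$ and discretizing the 2D rotation group on a grid (both are our illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def action_induced_dist(x, x_prime, n_angles=360):
    """Approximate min over rotations g of ||x - g(x')||_2 by grid search.

    x, x_prime: point clouds of shape (n_points, 2).
    """
    best = np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        best = min(best, np.linalg.norm(x - x_prime @ rot2d(theta).T))
    return best

x = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
x_rot = x @ rot2d(np.pi / 2).T  # a pure rotation of x
# The rotated cloud is far in plain l2 distance but at (near-)zero
# action-induced distance, so the region hat{B} can be much larger than B.
assert np.linalg.norm(x - x_rot) > 1.0
assert action_induced_dist(x, x_rot) < 1e-6
```

For continuous groups a grid search is only an upper bound on the true minimum; it suffices here to show that a certified radius measured in $d_\mathrm{in}$ translates into a much larger region under $\hat{d}_\mathrm{in}$.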
During the rebuttal period, we became aware of "3DVerifier" [1], which is specifically designed for the PointNet architecture and $\ell_2$ perturbations. They claim a higher average certified $\ell_2$ radius of $0.959$ (see Table 2 in [1]). This is actually great for us. Combined with our novel theoretical insights (Proposition 2), we can use this method to provide even stronger guarantees against correspondence distance attacks. Again, please note that our focus is not on advocating for a specific certification method, but on providing semantically meaningful guarantees. We propose equivariance-preserving smoothing as a general-purpose method for cases where no specialized ones are available. We will update Sections 5.0 and 5.1 of the camera-ready to further clarify this for future readers and use 3DVerifier to showcase stronger guarantees for PointNet. ## Transformation-specific robustness for (non-)resolvable transformations and group equivariance. We discuss this in detail in the global comment. In short: 1.) If the transformations have a group structure, and the task is invariant w.r.t. this group, then transformation-specific robustness is equivalent to ($G$, $d_\mathrm{in}$, $d_\mathrm{out}, \epsilon, \delta)$ robustness with adversarial budget $\epsilon=0$ -- or a relaxation thereof. 2.) Otherwise, it can be thought of as guaranteeing classic adversarial robustness w.r.t. a distance function $d_\mathrm{in}(x,x')$ that determines whether $x'$ is a transformed version of $x$. **Concerning (non)-resolvable transformations** Resolvability is similar to the closedness of groups. A family of functions is resolvable if composing two functions yields another function from the family. A group is closed if combining two group elements yields a new element from the group. The group operator is basically the same as the functions' composition operator. 
However, groups impose additional constraints on what transformations are to be considered (invertibility, existence of an identity element). Overall, our paper does indeed deal with resolvable transformations that have a group structure (e.g. rotation). Unlike work on transformation-specific robustness, we consider adversaries that are not limited to such transformations, but can perform arbitrary perturbations within some distance $\epsilon$. In addition, we consider the entire group, whereas transformation-specific robustness usually considers some small range of parameters (e.g. rotation angles). We further account for how the model's predictions should change under specific perturbations (equivariance), whereas transformation-specific robustness assumes them to remain constant (invariance). We will include this and all papers you mentioned in Section 2 of the camera-ready. ## Justification for smoothing schemes and measures Having to choose smoothing schemes and measures is inherent to randomized smoothing. In prior work, the scheme is dictated by the co-domain and the output distance. The measure is dictated by the domain and input distance (see Fig. 1 of [2]). In equivariance-preserving smoothing we additionally need to consider their equivariances, which we specify in Section 5.1. We created two tables that show which scheme and measure are to be used when (see global rebuttal pdf above). These should facilitate the practical application of equivariance-preserving randomized smoothing and will be included in the camera-ready. ## Notation in Section 3 We agree that our explanation of randomized smoothing was a little too terse. We will use the higher page limit in the camera-ready to provide an explanation of the involved symbols before explaining randomized smoothing. --- We hope that we addressed all your comments to your satisfaction. Please let us know if you have any further questions during the discussion period. --- [1] Mu et al. 
"3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models". Machine Learning 2022 (Springer) [2] Yang et al. "Randomized smoothing of all shapes and sizes". ICML 2020 --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Although the experiment is not comprehensive with enough baselines, I think the paper is significant in problem formulation and theoretical framework. Therefore, I raise my rating to borderline accept.
Summary: The paper investigates adversarial robustness for group equivariant tasks. To this end, the authors propose a novel notion of adversarial robustness: an attacker aims to find a perturbation $x'$ "close" (up to some distance constraint in the input space) to a natural input $x$, and searches for the worst case both within the input constraint and over the group of symmetries acting on the input. For a model equivariant to the underlying group, this reduces to classical adversarial robustness. Further, the paper shows that randomized smoothing, a popular framework for making models provably adversarially robust (and many of its variants), can fulfill this notion of robustness when the underlying model, distribution, measure, and smoothing scheme are equivariant. Several instantiations of this approach are showcased on a variety of tasks, including graphs, point clouds, and molecules. Strengths: - Well written and easy to read - The proposed definition of adversarial robustness is a) novel, b) a natural extension of the classical definition, and c) well motivated - The section on the different variants of randomized smoothing and their relation to the problem is extensive - I appreciate the combination of semantics-preserving change (here the group action) and perceptual noise in the set of admissible perturbations and think this is a promising direction for the field. Weaknesses: - No investigation of the robustness notion directly, i.e., no attacks against undefended models were performed to showcase the issue - Some further disambiguation from related work might be helpful to the reader (see below) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - It seems that the described notion of adversarial robustness requires knowing the inherent equivariances in the data. However, for sufficiently complex data many of the underlying symmetries are not known. Could the notion of robustness be applied in such cases? 
- In the conclusion the authors state equivariant robustness guarantees for non-equivariant models as a future direction. It seems that this is the approach already taken in [53,50,77 (Appendix B.5)]. Thus, while these target a fundamentally different setting, could you comment on the difference and relation to these works, particularly when viewed through the novel notion of robustness? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are adequately discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Please note that we cannot update the manuscript during the rebuttal period. We will however make several changes for the camera-ready version, some of which are already integrated into the one-page pdf attached to the global comment above. ## Adversarial attacks As you mention, for "a model equivariant to the underlying group, this [the novel notion of robustness] reduces to classical adversarial robustness", which is one of our key theoretical results. Please note that this result applies to both provable robustness and adversarial attacks. Attacks on equivariant models for graphs and point clouds under the classic notion of robustness have already been extensively studied [1, 2]. In fact, **even when the model's equivariances do not match those of the task, any classic adversarial attack is a valid adversarial attack under our notion of robustness**. Recall the optimization problem from Eq. 1: $\max_{x' \in X} \max_{g \in G} d_\mathrm{out}(f(x), g^{-1} \circ f(g \circ x'))$ s.t. $d_\mathrm{in}(x, x') \leq \epsilon$. A classic adversarial attack is equivalent to constraining the group element $g$ to be the identity element $e$. Thus, any classic adversarial attack is a valid adversarial attack chosen from a more constrained search space. Nevertheless, conducting some adversarial attacks of our own is a great opportunity to showcase that our novel notion of robustness is not just a theoretical construct, but a practical property that can be empirically studied. You can find our results for adversarial attacks against PointNet (using a single-step gradient attack) and GCN (using the method from Section 4.4 of [3]) in the pdf attached to the global comment above. 
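The objective from Eq. 1 can be probed on a toy example. In the following sketch the model `f`, the rotation grid, and the perturbation are illustrative assumptions, not the authors' experimental setup; it demonstrates that the classic attack value ($g = e$) lower-bounds the equivariance-aware worst case:

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Toy, deliberately non-equivariant model: point cloud (n, 2) -> vector in R^2.
W = np.array([[0.5, -1.0], [2.0, 0.3]])
def f(x):
    return np.tanh(x).sum(axis=0) @ W.T

def objective(x, x_prime, theta):
    """d_out(f(x), g^{-1} o f(g o x')) for g = rotation by theta, d_out = l2."""
    R = rot2d(theta)
    return np.linalg.norm(f(x) - R.T @ f(x_prime @ R.T))

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 2))
x_prime = x + 0.01 * rng.standard_normal((5, 2))  # perturbation with small d_in

classic = objective(x, x_prime, 0.0)  # g restricted to the identity element
worst = max(objective(x, x_prime, t)
            for t in np.linspace(0.0, 2 * np.pi, 360, endpoint=False))
assert worst >= classic  # the classic attack searches a subset of Eq. 1's space
```

For a perfectly rotation-equivariant `f`, the inner maximum over `theta` would coincide with the classic value; the gap between `worst` and `classic` here stems from the toy model's broken equivariance.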
Based on your comment we will * explicitly discuss the relation between classic adversarial attacks and our notion of robustness * and include the aforementioned experiments (including a full experimental setup) in the camera-ready version of our submission. ## Unknown equivariances We would first like to point out that assuming certain equivariances to be known is not a specific limitation of our work. In fact, the entire field of geometric machine learning is centered around building models implementing specific known equivariances. How unknown equivariances fit into our framework is however a very interesting question. Firstly: The notion of **robustness does not require knowledge of the underlying equivariances** (as long as they have a group structure). It is just a property of the model and task that is either fulfilled or not fulfilled. If one had a guarantee that the model's and the task's equivariances match, one could even derive robustness guarantees using classic methods. However: Without specifying the "$G$" part of $(G, d_\mathrm{in}, d_\mathrm{out}, \epsilon, \delta)$-equivariance-robustness, it would not really be possible to interpret what these robustness guarantees specifically mean. Making robustness guarantees for equivariant tasks meaningful and interpretable is, in a sense, the exact goal of our paper. Secondly: One could even generalize our notion of robustness to **unknown equivariances without a group structure**. We could simply define $\max_{x' \in X} \max_{\theta \in \Theta} d_\mathrm{out}(f(x), \psi_{Y,\theta}^{-1} (f( \psi_{X,\theta}(x') )))$, where $\psi_{X,\cdot}$ and $\psi_{Y,\cdot}$ are (unknown) parametric functions in the input and output space. The same argument as above would apply. But again: Without knowledge of the equivariances, the robustness properties would not really be interpretable. 
Based on your comment, we will mention that * our definition does not assume the equivariances to be known, * but knowing the equivariances is necessary to provide interpretable robustness guarantees in Section 6 of the camera-ready version. ## Transformation-specific robustness viewed through our notion of robustness These methods can indeed be seen as a first step towards provable robustness for non-equivariant models applied to equivariant tasks (for the special case of group invariance). Thank you for pointing out this connection. We discuss this relation in detail in the global rebuttal comment above. In short: 1.) If the transformations have a group structure, and the task is known to be invariant w.r.t. this group, then transformation-specific robustness is equivalent to $(G, d_\mathrm{in}, d_\mathrm{out}, \epsilon, \delta)$-robustness with adversarial budget $\epsilon=0$ -- or a relaxation thereof. 2.) Otherwise, it can be thought of as guaranteeing classic adversarial robustness, i.e. $G = \\{e\\}$, w.r.t. a distance function $d_\mathrm{in}(x,x')$ that determines whether $x'$ is a transformed version of $x$. There are two key **differences to our notion of robustness**. Firstly, we do not constrain the adversary to only applying a specific type of (group-structured) transformation. Instead, they can perform arbitrary perturbations within some distance $\epsilon$ of the original input. Secondly, we do account for how the model's predictions should change under transformations of its inputs (equivariance), whereas transformation-specific robustness always requires the predictions to remain constant (invariance). Based on your comment, we will include this discussion in the camera-ready version of our paper. --- We hope that we could answer all your questions to your satisfaction. Please let us know if you have any further questions during the discussion period. --- [1] Naderi et al. 
"Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey". arXiv 2023 [2] Jin et al. "Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study". SIGKDD Explorations 2020 [3] Zügner et al. "Certifiable robustness of graph convolutional networks under structure perturbations". KDD 2019 --- Rebuttal Comment 1.1: Title: Reply Comment: I thank the authors for their reply. I'm delighted by the suggested discussions to be included. Currently I do not have any further questions.
Summary: In this paper, the authors propose equivariance-preserving randomized smoothing to provide robustness for GNN models. The proof demonstrates the provable robustness of the proposed smoothing method. Strengths: 1. The theoretical guarantee of this paper is strong, and the proposed randomized smoothing methods can be applied to multiple tasks, including point cloud classification and force field prediction. Weaknesses: 1. The experiment of Figure 4 could include more geometric GNNs, such as SphereNet (https://arxiv.org/abs/2102.05013) and GemNet (https://arxiv.org/abs/2106.08903), to demonstrate generality. 2. The math part of this paper is a little hard to follow. Some figures or concrete descriptions of how to apply the proposed smoothing techniques could help people understand better. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What are the model settings of DimeNet++ in Figure 4? What is the performance compared to reported results? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Are there any experiments showing the performance of the model when no smoothing techniques are used? I think this could provide the reason why such a smoothing technique is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful review, based on which we will add additional information to the camera ready version, as detailed in our discussion below. ## Experiments with different architectures Following your comment, we performed experiments with additional geometric models for molecule data. Thus far, we added SchNet (one of the best known baselines in the field) and PaiNN (another geometric GNN), as implemented in PyTorch geometric and SchNetPack, respectively. You can find these results in the rebuttal pdf attached to the global comment above. We observe that PaiNN and SchNet offer very similar (but not identical) provable robustness w.r.t. registration distance attacks. For the camera ready version we will also include experiments with SphereNet and GemNet. ## DimeNet++ model settings As mentioned in Appendix B.2.2 we use the default parameters specified in the original DimeNet and DimeNet++ papers. The parameters are also included in the file `seml/configs/force_fields/train_dimenet_pp_md17.yaml` provided with the supplementary material. We use 4 layers with 128 hidden channels. The triplet embedding size is 64. The basis embedding size is 8. The output embedding size is 256. The number of basis functions is 8 (bilinear), 7 (spherical), 6 (radial). The cutoff radius for graph construction is $5 \mathring{A}$. The number of residual layers before the skip connection is $1$. The number of residual layers after the skip connection is $2$. Based on your comment, we will explicitly include these parameters (and the parameters for any additional models we investigate) in Appendix 2 of our submission. ## SchNet and PaiNN model settings For SchNet, we use 128 hidden channels and 128 filters. We set the number of interaction blocks to 6 and use 50 Gaussians. The cutoff distance is $10 \mathring{A}$, with a maximum number of $32$ neighbors. We use addition as our global readout function. 
For PaiNN, we use 128 embedding dimensions in 3 interaction blocks (without shared interaction or filter weights). We use $20$ Gaussian radial basis functions and a cutoff distance of $5 \mathring{A}$. Again, these parameters (and details on the training procedure) will be added to Appendix 2 of our submission. ## Model performance with and without smoothing As suggested by you, we also performed experiments without randomized smoothing. Recall that we had eight sub-datasets "aspirin", "uracil", "ethanol", "benzene", "naphthalene", "toluene", "salicylic acid", "malonaldehyde". The average force MAE of DimeNet++ was: * 0.344, 0.182, 0.176, 0.158, 0.095, 0.111, 0.227, 0.304 (without smoothing) * 0.369, 0.201, 0.181, 0.192, 0.104, 0.122, 0.238, 0.299 (with smoothing) The average force MAE of SchNet was: * 0.870, 0.322, 0.263, 0.244, 0.250, 0.289, 0.449, 0.445 (without smoothing) * 0.931, 0.371, 0.285, 0.260, 0.285, 0.321, 0.496, 0.449 (with smoothing) The average force MAE of PaiNN was: * 0.450, 0.247, 0.295, 0.168, 0.167, 0.188, 0.312, 0.423 (without smoothing) * 0.516, 0.285, 0.372, 0.197, 0.192, 0.229, 0.367, 0.493 (with smoothing) We observe that the accuracy of the model decreases as we apply smoothing noise. This is expected and inherent to all randomized smoothing methods. We essentially trade off accuracy for provable robustness. We found $\sigma=1 \mathrm{fm}$ to be a sweet spot where the errors are still very similar to the baseline but we can effectively prove that the adversarial change of the model's predictions is small compared to the test error. Combined with the evaluation of our randomized smoothing certificates above, we can conclude that DimeNet++ and PaiNN offer a better accuracy-robustness tradeoff than SchNet. ## Comparison to the reported results in DimeNet++/SchNet/PaiNN papers DimeNet++ is actually not evaluated on MD17 in the original paper. 
However, the reported MAEs for DimeNet (which DimeNet++ is a slightly tweaked, more efficient version of), are * 0.499, 0.301, 0.230, 0.187, 0.215, 0.216, 0.374, 0.383 We observe that the DimeNet++ MAEs are slightly lower, which is consistent with the claims in the DimeNet++ paper. The numbers reported for SchNet and PaiNN in the original PaiNN paper are * 1.35, 0.56, 0.39, -, 0.58, 0.57, 0.85, 0.66 * 0.371, 0.140, 0.230, -, 0.083, 0.102, 0.209, 0.319 We observe that our numbers for SchNet are slightly better and our numbers for PaiNN are slightly worse than in the PaiNN paper (which may be expected). However, all numbers are in the expected range of $[0.05, 1.0]$. ## More detailed description on how to apply randomized smoothing We agree that, due to the 9 page limit for the initial submission, the description is currently somewhat terse and directed at an audience already familiar with randomized smoothing. In essence, all randomized smoothing methods are the same. We just need to choose a smoothing distribution based on the input domain and distance, sample from it and feed the results into our model. Then we need to choose a smoothing scheme based on the output domain and distance to compute some simple statistic of the random predictions (e.g. mean or median). This may have been somewhat obscured by our measure-theoretic formalization of the procedure. Based on your comment, we will make the following changes to the ten-page camera ready version: * Provide a less dense explanation of randomized smoothing in Background Section 3 * Provide pseudocode for the overall randomized smoothing procedure * Provide a table specifying when to use which smoothing distribution * Provide a table specifying when to use which smoothing scheme * Provide pseudocode for the certification procedure of the different smoothing schemes (currently, we only reference the original papers for this) We already included the tables in the rebuttal pdf above. 
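The generic recipe described above can be sketched in a few lines. The Gaussian noise, mean aggregation, and toy base model below are illustrative assumptions on our part, not the exact configuration from the paper: sample from a smoothing distribution matched to the input domain, run the base model on each sample, and aggregate the random predictions with a simple statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_model(x):
    # placeholder regression model standing in for e.g. a force-field predictor
    return np.sin(x)

def smoothed_model(x, sigma=0.1, n_samples=1000):
    # sample noisy inputs from the smoothing distribution (Gaussian here)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    preds = base_model(x + noise)       # evaluate on all noisy samples
    return preds.mean(axis=0)           # the median is the other common choice

x = np.array([0.0, 0.5, 1.0])
y = smoothed_model(x)
# For a smooth base model and small sigma, the smoothed prediction stays
# close to the clean prediction.
assert np.allclose(y, np.sin(x), atol=0.05)
```

The certification step (bounding how much the aggregate statistic can change under perturbations) then depends on the chosen distribution and smoothing scheme, as summarized in the tables in the rebuttal pdf.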
--- We hope that we answered all questions to your satisfaction. Please let us know if there is anything else you would like to discuss during the second part of the rebuttal period.
Summary: Authors note that the definition of adversarial robustness needs to be adjusted in cases when the problem is equivariant - for example, graph perturbations that are large in terms of the number of removed/added edges might actually be small if we take into account graph isomorphism (eg if a large graph perturbation ends up not changing the graph topology much). Effectively, authors propose (Proposition 1) to change the notion of the distance between inputs x and x’ in the definition of adversarial robustness to equal the minimal distance between x and all elements of the equivalence class of x’ (eg all permutations of x’ if the input domain is graphs, or all translations and rotations of x’ if the problem is translation and rotation equivariant = use set registration distance between x and x’ to define adversarial robustness of the translation+rotation equivariant task). Then authors argue that symmetries in the output domain should also be accounted for (Def 1), which is often hard, but can be skipped entirely (Proposition 2) if the model itself is equivariant by construction. Then authors argue that the notion of smoothing distribution can (and should) take into account the updated notion of distance between points and that this definition allows them to theoretically justify several existing equivariant smoothing schemes and define more generalized notions of robustness (e.g. Eq 2). Authors note that in many cases computing such distances is computationally hard (e.g. NP-hard when working with permutation equivariance groups). Authors measure their updated equivariance-aware adversarial robustness of PointNet and DGCNN on the permutation-invariant problem of ModelNet40 classification, of DimeNet++ on rigid-transformation-equivariant regression on MD17, and of a graph convolutional DNN on permutation-equivariant node classification with several notions of graph edit distance (weight of deletion/insertion). 
Strengths: The paper is relatively easy to follow and sound. The motivation makes sense. Authors provide a formal generalization of the intuitive notions of how to measure distances between inputs in problems that have natural symmetries, and having a shared language might be useful as a reference for future research. Weaknesses: My main concern with this paper is that it proposes a shared general formalization for the intuitive procedure that people have already been using extensively across many tasks with symmetries (eg rigid-aligned error in pose estimation), but this formalization does not appear to bring much value beyond defining “shared language” and identifying “generalized knobs” that one can tweak to get slightly varied definitions of adversarial robustness. While reading the paper, I was hoping that authors would provide some kind of surprising practically useful result that makes use of their generalized definitions (eg a general algorithm that lets us compute such distances efficiently), but that did not happen - all presented results appear to be mathematizations of observations we already take for granted (eg if the model is equivariant by construction, there’s no need to invert its outputs because they are already “canonical”). One good example of a paper that worked with symmetries, introduced some simple new general definitions, and proved something surprising and practically useful from that point is “Training Generative Adversarial Networks with Limited Data” by Karras et al (see Appendix C). I might be wrong, so I would appreciate input from other reviewers. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Authors acknowledge that their approach is applicable only to equivariant models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. In the following, we would like to highlight the ways in which our results are both surprising and practically useful. We would further like to illustrate this by contrasting them with those of Karras et al. Then, we would like to call attention to the fact that the results of Section 4 are not primarily "a shared language", but contributions in themselves. Finally, we would like to emphasize two key technical and useful contributions. ## Surprising practically useful results We would strongly argue that "It is possible to prove the robustness of neural networks to perturbations bounded by semantically meaningful, extensively studied and computationally hard (even NP-hard) distance functions" is in itself a very surprising result. To go into more detail: Section 4 leaves us with a novel notion of robustness that requires optimization over a set bounded by potentially NP-hard functions. As such, developing a general algorithm that lets us compute such distances efficiently is not really realistic and would have implications beyond machine learning. There are two paths one could take: * The first one would be the classic certification approach of relaxing the problem. This is essentially the approach taken by recent work on robustness w.r.t. graph optimal transport. For instance, [27] essentially replaces the discrete permutation matrices corresponding to symmetric group $S_N$ with a continuous matrix defining a soft correspondence between nodes in two graphs. The resulting guarantees would be pessimistic bounds on the actual robustness. * The second one is the path we take: We identify that, nowadays, equivariant tasks are solved with equivariant models. This allows us to completely eliminate the hard, group-theoretic aspects without any pessimistic relaxation. 
Ignoring the surprise factor: It cannot be denied that this is a practically useful result that resolves an inherent limitation of recent work, which invested significant effort into a suboptimal solution to the practically relevant problem of in-/equivariance aware robustness certification. ## Concerning Karras et al. Thank you for pointing out the interesting connection to the work of Karras et al., which we will also include in the related work section of the paper. This offers a great opportunity to highlight the practical usefulness of our novel "equivariance-preserving randomized smoothing" framework. The results from Appendix C of Karras et al. show for different transformations $T$ and all distributions: $\mu = \mu' \iff T \mu = T \mu'$. A discriminator can thus learn (properties of) the original data distribution. This allows the generator to learn the original distribution while benefitting from a less overfitting discriminator enabled by data augmentation. The results from Appendix D of our work show for different transformations $T_g$ (corresponding to the push-forward of group actions) and families of smoothing distributions $(\mu_x)_{x \in X}$: $\mu_{g \bullet x} = \mu_{g \bullet x'} \implies T_g \mu_x = T_g \mu_{x'}$. An in-/equivariant base model thus recovers the original smoothing distributions. This allows the smoothed model to appropriately model the equivariant ground truth while benefitting from provable robustness enabled by randomized smoothing and Proposition 2. Because robustness is a practically useful property, both works make a strikingly similar "amount" of practically useful contribution. ## "Shared language" Our goal in Section 4 is not (primarily) to define shared language so that members of the trustworthy and geometric ML communities can collaborate. We first identify deficiencies of directly transplanting common notions of robustness from the image domain onto equivariant domains (Section 1). 
We then develop a sound notion of adversarial robustness from the perspective of semantics-aware robustness, i.e. defining robustness with knowledge about the ground-truth (Section 4). We do not deny that similar equivariance errors appear in other fields (see "special cases" in Section 4.3). But the contribution is justifying, in a principled manner, why this notion of robustness should be worked on by trustworthy machine learning researchers. ## Observations we already take for granted We understand that certain intermediate steps may be less exciting for some readers with a background in geometric machine learning (who are not our primary audience, see above). However, we believe that our findings are crucial for the trustworthy ML community, which thus far fails to appropriately address key aspects of extending their methods to geometric machine learning tasks and models. ## Additional technical contributions Finally, we want to highlight two key technical and practical contributions. Even though Proposition 2 eliminates some of the hard group-theoretic aspects, we still need to prove that our equivariant model is robust under the classic notion of robustness. Prior to this work, there simply existed no specialized method to do so for many practically relevant models (e.g. those with continuous equivariances). We identify that randomized smoothing is a promising candidate, but may invalidate Proposition 2. We thus develop the novel framework of equivariance-preserving smoothing to enable model-agnostic certification under our notion of robustness. We further derive multiple non-probabilistic certification procedures for GNNs under perturbations with non-uniform cost. This was not possible before and offers new insights into the robustness properties of this important class of equivariant models (see Section 5.2 and Appendix E). We will update the beginning of Section 5.1 to better emphasize these contributions. 
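To make the equivariance-preserving smoothing idea concrete, here is a minimal sketch (the toy base model and isotropic Gaussian noise are our illustrative assumptions): because the smoothing distribution is itself rotation-invariant, smoothing a rotation-equivariant base model yields a smoothed model that remains equivariant, up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(42)

def base_model(x):
    # SO(2)-equivariant toy model: f(Rx) = R f(x); works on (..., 2) batches
    return x * np.linalg.norm(x, axis=-1, keepdims=True)

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def smoothed(x, sigma=0.1, n=100000):
    # isotropic Gaussian smoothing distribution: invariant under rotations
    noise = rng.normal(0.0, sigma, size=(n, 2))
    return base_model(x + noise).mean(axis=0)

x = np.array([1.0, 0.5])
R = rot(0.7)
lhs = smoothed(R @ x)       # smooth the rotated input
rhs = R @ smoothed(x)       # rotate the smoothed clean prediction
assert np.allclose(lhs, rhs, atol=0.01)  # equivariant up to sampling noise
```

In expectation the two sides are exactly equal, since rotating the input merely rotates the (rotation-invariant) noise distribution along with it.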
--- We hope that we could convince you that our submission is a meaningful contribution to the field of trustworthy machine learning. Please let us know if you have any additional questions during the discussion period.
Rebuttal 1: Rebuttal: While we have already personally responded to each of the reviewers' insightful comments and questions, there are two points that appeared multiple times. We would like to use this comment to discuss them in detail. We also attached a pdf with additional figures and tables. We further list prospective changes for the camera ready version at the end of this comment. The comments asked for further clarification concerning: * How transformation-specific robustness fits into our proposed notion of robustness. * In what way prior work underestimates the strength of the adversary. --- ## Transformation-specific robustness Consider a parametric function $\psi_\theta : X \rightarrow X$ with parameter $\theta \in \Theta$. A model is robust to transformation-specific attacks if $\max_{\theta \in \Theta} d_\mathrm{out}(f(x), f(\psi_\theta(x))) \leq \delta$ for some small $\delta$. We can distinguish two cases: **Transformations with a group structure:** The first case is that the set of functions $\\{\psi_\theta \mid \theta \in \Theta\\}$ forms a group $G$, with function composition $\circ$ being the group operator and group action $\bullet$ being the application of the function (e.g. rotation). In this case, the above problem can be reformulated as $\max_{g \in G} d_\mathrm{out}(f(x), f(g \bullet x))$. This optimization problem is equivalent to $\max_{x' \in X} \max_{g \in G} d_\mathrm{out}(f(x), f(g \bullet x'))$ under the constraint $x' = x$. We can reformulate this constraint using any formal distance function $d_\mathrm{in}(x,x')$ as $d_\mathrm{in}(x,x') \leq 0$. We can thus express transformation-specific robustness as $(\max_{x' \in X} \max_{g \in G} d_\mathrm{out}(f(x), f(g \bullet x'))$ s.t. $d_\mathrm{in}(x,x') \leq \epsilon) \leq \delta$ with adversarial budget $\epsilon = 0$. We observe that this is a special case of our notion of robustness, where the adversary has a budget of 0 (i.e. 
can only apply group actions) and the task is group invariant. However, in practice, methods for proving transformation-specific robustness are only applied to some subset of $G$ (e.g. a small range of rotation angles). Their guarantees can thus be thought of as a relaxation of this special case. **Transformations without a group structure:** If the parametric function $\psi_\theta$ does not have a group structure, we can define any distance function $d_\mathrm{in}(x,x')$ such that $\\{x' \mid d_\mathrm{in}(x,x') \leq \epsilon\\} = \\{\psi_\theta(x) \mid \theta \in \Theta\\}$. That is, perturbed inputs are closer than $\epsilon$ iff they are the result of applying the function to $x$ with some choice of parameter. With this choice of distance function, transformation-specific robustness is equivalent to $(\max_{x' \in X} d_\mathrm{out}(f(x), f(x'))$ s.t. $d_\mathrm{in}(x,x') \leq \epsilon) \leq \delta$. This is an instance of classic adversarial robustness, which is a special case of our notion of robustness (no equivariance). ## How existing works underestimate the strength of the adversary One of our key results (Proposition 2) is that classic robustness certificates for equivariant models are in fact valid for our novel notion of robustness. What we meant to say is that, by definition, being robust under our notion of robustness means being robust within a potentially much larger region. **Importantly, there are no technical deficiencies with the existing robustness certification methods that exist for specific equivariant model architectures. The problem is with how their provided robustness guarantees have been interpreted thus far.** Prior work guarantees that a model’s prediction is (almost) constant within certified region $B = \\{x' \mid d_\mathrm{in}(x,x') \leq \epsilon \\}$. 
As discussed in Section 4, this means that the model is actually robust (but not necessarily constant) within certified region $\hat{B} = \\{x' \mid \hat{d}_\mathrm{in}(x,x') \leq \epsilon\\}$, where $\hat{d}_\mathrm{in}$ is the action-induced distance (e.g. the graph edit distance). All we meant to say is that region $\hat{B}$ may be significantly larger than the original region $B$, depending on the original input distance $d_\mathrm{in}$, budget $\epsilon$ and equivariance group $G$. **Visual example (see Fig. 3 in attached pdf):** Assume clean input $x = (\sqrt{2} \ \sqrt{2})^T \in \mathbb{R}^2$. Further assume that $d_\mathrm{in}$ is the $l_2$ distance and $\epsilon=1$, i.e. we have a certified circle of radius $1$ around $x$. If we assume the model and task to be equivariant w.r.t. rotation group $\mathit{SO}(2)$, then the model is equivariant-robust within an annulus ("donut") around the origin. Its area is $\pi \cdot (3^2 - 1^2)$, which is significantly larger than the original $\pi \cdot 1^2$. If we additionally assume translation equivariance, then the certified region even has an infinite area, unlike the original small circle. Furthermore, it contains an $l_2$ ball centered at $x$ with a radius of $3$ (compared to the original $1$). 
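The annulus example can be checked numerically. The sketch below is our own code, with a brute-force angle grid standing in for the closed-form registration distance; it confirms that a point far from $x$ in $l_2$ distance can still lie inside the $\epsilon = 1$ certified region once $\mathit{SO}(2)$ is accounted for.

```python
import numpy as np

def registration_dist(x, x_prime, n_angles=10000):
    # brute-force min over rotations R of ||x - R x'||_2
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    c, s = np.cos(thetas), np.sin(thetas)
    rotated = np.stack([c * x_prime[0] - s * x_prime[1],
                        s * x_prime[0] + c * x_prime[1]], axis=-1)
    return np.min(np.linalg.norm(x - rotated, axis=-1))

x = np.array([np.sqrt(2.0), np.sqrt(2.0)])   # ||x|| = 2
x_prime = np.array([-2.9, 0.0])              # l2-far from x, but ||x'|| = 2.9

d = registration_dist(x, x_prime)
# For SO(2), the registration distance reduces to | ||x|| - ||x'|| | = 0.9,
# so x' lies inside the epsilon = 1 annulus 1 <= ||x'|| <= 3.
assert abs(d - 0.9) < 1e-3
assert np.linalg.norm(x - x_prime) > 1.0     # a classic l2 ball would miss it
```

The closed form $| \lVert x \rVert - \lVert x' \rVert |$ follows because the rotations of $x'$ sweep out the full circle of radius $\lVert x' \rVert$ around the origin.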
--- ## Changelog Based on the reviewer's valuable feedback, we have decided to make the following changes to the camera ready version of our paper (note that this year's review process does not allow changes during the rebuttal): * Expand discussion of transformation-specific smoothing in Section 2 and Appendix (see above) * Discuss image interpolation errors in Section 6 * Further clarify the "existing methods underestimate the strength of the adversary" comment (see above) * Add discussion of attacks/defenses (Reviewer 2/5) * Discuss GAN training with data augmentation in Section 2 (Reviewer 3) * Add experiments on SchNet / PaiNN / GemNet / SphereNet (see Reviewer 4 and rebuttal pdf) * Provide more detailed instructions on applying randomized smoothing, including tables (see Review 6 and rebuttal pdf) * Add experiments on adversarial attacks (Reviewer 5 and rebuttal pdf) * Mention that we assume equivariance to be known (Reviewer 5) Pdf: /pdf/0c2d0a12bf7f6d36dd4c9a8a68ab95e5576d42b0.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents several contributions to the field of adversarial robustness for group equivariant tasks. The authors propose a sound concept of adversarial robustness for these tasks and demonstrate that using equivariant models can facilitate achieving provable robustness. They also prove that various randomized smoothing approaches preserve equivariance and extend existing robustness certificates for graph and node classification from $l_0$ perturbations to graph edit distance perturbations with user-specified costs. The paper underscores the importance of reevaluating adversarial robustness in equivariant tasks for future research in robust and geometric machine learning. Extensive experiments were conducted in various settings to validate the claims of the proposed method. Strengths: - The paper presents a novel definition of adversarial robustness for group equivariant tasks, which is a significant contribution to the field. The $\epsilon-\delta$ definition is interesting. - The paper provides theoretical proofs for the soundness of their proposed notion. - The proposed method is applicable to a wide range of tasks, including graphs, point clouds, molecules, and more. - The paper includes experimental results that validate the theoretical claims, providing empirical evidence for the effectiveness of the proposed method. Weaknesses: I think this paper is innovative and sound in general, and just want to list a few things to address below: - While the proposed definition of robustness in this paper specifically tailors toward group equivariance tasks, I am wondering how it can address the limitations and problems with the previously used definition. In figure 1 and figure 2, the authors give two examples related to graph isomorphism and rotation – how would the flaws in the old notion of robustness cause issues in applications related to those scenarios? 
For instance, one potential setting of graph node classification attacks is making imperceptible insertions/deletions to the recommendation graphs (example attacks in [1] and [2]). How will this new notion of robustness affect the consideration of attacks when grounded to those specific tasks/applications? [1] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856, 2018. [2] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412, 2019. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Although the paper mainly focuses on certified robustness, will this new notion of adversarial robustness for group equivariant tasks have any implications for the design of new adversarial attacks & defenses? If so, can you discuss some potential directions for proposing new attacks & defenses with respect to this definition? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We are glad to hear that you find our paper sound and innovative. We are not quite sure if we interpreted your first question correctly, so please let us know if you would like any additional clarification during the second part of the rebuttal period. ## Flaws of classic robustness exemplified in Figures 1 and 2. The goal of provably robust machine learning is to provide a robustness certificate, i.e. a guarantee that a model's prediction cannot be adversarially attacked (similar to how a high test accuracy can be thought of as certifying good generalization to unseen data -- but provably). The shown flaws would cause this certificate to either oversell (Figure 1) or undersell (Figure 2) the actual robustness of the model. In Fig. 1, a method operating under the classic notion of robustness might prove that a prediction does not change with up to $\epsilon=7$ edge perturbations and thus claim it to be very robust. However, this claim could even be made about a model whose prediction is changed by something as simple as representing exactly the same graph via an equivalent adjacency matrix. It thus has little practical value. In Fig. 2, a method operating under the classic notion of robustness would deem a model non-robust if it rotates its prediction as the input image rotates. However, this equivariance (learned or enforced) is exactly what is needed to solve the task with high accuracy. Thus, existing certificates would force model developers to either prioritize accuracy or provable robustness -- even though they are not actually at odds with each other. Our proposed notion of robustness resolves these flaws by accounting for equivariance, so that meaningful robustness certificates can be provided without causing wrong incentives for model developers. ## How the novel notion affects the considerations of attacks (particularly on graphs) We can distinguish two cases. 
When **the model equivariances and the task equivariances match**, then the model is equivariant-robust if and only if it is classically robust. We can thus reuse any existing method to disprove robustness, i.e. perform an adversarial attack. In particular, this applies to graph neural networks, where one can continue using Nettack or meta learning. Furthermore, existing attacks are even valid adversarial attacks when **the model and task equivariances do not match**. Recall the optimization problem from Eq. 1: $\max_{x' \in X} \max_{g \in G} d_\mathrm{out}(f(x), g^{-1} \circ f(g \circ x'))$ s.t. $d_\mathrm{in}(x, x') \leq \epsilon$. A classic adversarial attack is equivalent to constraining group element $g$ to be the identity element $e$. Thus, any classic adversarial attack is a valid (but not necessarily optimal) adversarial attack chosen from some more constrained search space. Based on the feedback of Reviewer 5, we have also conducted some adversarial attacks of our own (see global rebuttal pdf). ## Implications / Potential directions for attacks and defenses A direct implication of the discussion above is that one can continue using existing methods -- at least as a baseline. There are however interesting directions for developing improved methods that are specifically tailored towards equivariant settings. **Concerning attacks:** One interesting direction is developing adversarial attacks for scenarios where the model and task equivariances do not match (e.g. vision transformers). As discussed above, existing attacks are in such cases only a pessimistic bound on the model's actual vulnerability. The challenge here is finding good ways of optimizing both the adversarial noise and the group action -- a first attempt could be made via alternating optimization. Another interesting direction is using our knowledge about equivariances to reduce the search space for adversarial noise. 
Finding an attack essentially requires optimization over $\hat{B} = \\{g \bullet x \mid g \in G, x \in B\\}$, where $B$ is a classic perturbation set $B = \\{x' \mid d_\mathrm{in}(x,x') \leq \epsilon\\}$, such as an $\ell_p$ ball. There may be a smaller set $S \subset B$ which gets mapped to the same $\hat{B}$ by group $G$. Consider for example Fig. 3d in the pdf attached to the global comment above. Instead of by a ball, the same set could also be generated by a line that is diagonally translated. Thus, we could find $x'$ via optimization over a one-dimensional space. **Concerning defenses:** A natural direction is using newly developed attacks for adversarial training. Stronger and more efficient attacks allow the trained model to be more robust without overly impacting the computational cost of the training procedure. Another interesting direction is actively using a model's equivariances as a defense mechanism. A first example of this is actually discussed in [1]. There, the inputs of a permutation-invariant model are randomly shuffled to disrupt gradient-based attacks without changing the model's prediction. --- Based on your comment and those of other reviewers, we will add a section on attacks and defenses to the camera-ready version of our paper (after Section 5). We also conducted some initial experiments (see Reviewer 5). We hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions during the second part of the rebuttal period. --- [1] Zhang et al. "The Art of Defense: Letting Networks Fool the Attacker". IEEE Transactions on Information Forensics and Security. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Dear Authors, Thank you for the comprehensive rebuttal and the clarifications provided. At present, I have no further questions and am willing to raise my rating to weak accept. Best regards, Reviewer y2Vs
Summary: This paper provides a new insight into adversarial robustness for equivariant tasks (like graph classification), where we need to separate perturbations that occur according to the allowable transformations (which actually form a group) -- for which we would like the same output regardless of the perturbation's norm -- from harmful perturbations, whose norm we do need to care about. Strengths: The novel work states the very important question: what is the objective of adversarial robustness for inputs which are equivariant according to some group of transformations? Although the task seems very vague, the authors answered it in a very elegant manner by: - introducing the new distance $\hat{d}_{in}$, which should be invariant w.r.t. the group transformations - introducing two properties of this new distance (upper bounded by the usual distance $d_{in}$ + being the max of such distances) - and finally proving Proposition 1, which shows that $\hat{d}_{in}$ is actually the min among usual distances w.r.t. group transformations of one of its inputs. Based on this, the authors came to a definition of adversarial robustness for group equivariant tasks, which reduces to the usual definition of adversarial robustness when we don't have the group properties, and is compatible with other group-invariant distances like the Hausdorff and Chamfer ones. Moreover, they proved the following (Proposition 2): if a model is equivariant w.r.t. the group transformations, then the new definition is equivalent to being robust in the usual sense. As a result, we just need to prove the usual robustness (for which we have a number of techniques like Certified Smoothing) for an equivariant model. 
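Proposition 1's characterization of the action-induced distance, $\hat{d}_{in}(x, x') = \min_{g \in G} d_{in}(g \bullet x, x')$, is easy to illustrate numerically for a finite group. The sketch below is my own toy example (cyclic shifts of a 1-D signal, not from the paper); it also exhibits the upper-bound property $\hat{d}_{in} \leq d_{in}$:

```python
import numpy as np

def action_induced_distance(x, x_prime, group_actions, d_in):
    # Proposition 1: hat(d)_in(x, x') = min over g in G of d_in(g(x), x')
    return min(d_in(g(x), x_prime) for g in group_actions)

# toy group: all cyclic shifts of a length-4 signal
x = np.array([1.0, 2.0, 3.0, 4.0])
group = [lambda v, k=k: np.roll(v, k) for k in range(len(x))]
d_in = lambda a, b: float(np.linalg.norm(a - b))

# a shifted copy of x plus a small perturbation of 0.1 per coordinate
x_prime = np.roll(x, 2) + 0.1
d_hat = action_induced_distance(x, x_prime, group, d_in)
# d_hat recovers only the "harmful" part of the perturbation
# (norm 0.1 * sqrt(4) = 0.2), while the classic distance d_in(x, x')
# is much larger because it also charges for the group action.
```

This mirrors the flaw shown in the paper's Fig. 1: under the classic distance the shifted copy looks far from $x$, while the action-induced distance correctly prices only the additive noise.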
Weaknesses: In the discussion in "Related Work / Transformation-specific robustness" (line 88), and even more so in "Product measures" (lines 287-288), it makes sense to mention the work [1], where an interesting use case was studied: - the multiplicative group of "gamma correction" image transformations - the Rayleigh distribution serving as a smoothing distribution It would be interesting to know how it fits into the proposed framework. Additionally, how the proposed framework deals with interpolation error (e.g., [1], [2]), which is very non-trivial and falls outside the group properties, is not mentioned in the paper. [1] Muravev, Nikita, and Aleksandr Petiushko. "Certified robustness via randomized smoothing over multiplicative parameters of input transformations." Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 3366-3372, 2022. [2] Marc Fischer, Maximilian Baader, and Martin Vechev. Certified defense to image transformations via randomized smoothing. Advances in Neural Information Processing Systems, 33: 8404–8417, 2020. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The main question is: why do the authors provide multiple experimental results if, according to their theoretical results, we can just reuse any existing methods for usual adversarial robustness? We just don't need to implement anything. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: According to the text, there is a statement: "prior work drastically underestimates the strength of the adversary and actually proves robustness for significantly larger sets of perturbed inputs" (e.g., lines 398-400). Is there any: - empirical proof for it? 
- theoretical reasons for it? Probably the answer was somehow blurred inside the text; I would love to hear a clear explanation for it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review of our submission and for pointing out the interesting connection to smoothing over multiplicative transformation parameters. ## How randomized smoothing over multiplicative parameters fits into the proposed framework We provide an in-depth discussion of transformation-specific robustness through the lens of our novel notion of robustness in the global rebuttal comment above. **In short:** 1.) If transformations have a group structure, and the task is known to be invariant w.r.t. this group, then transformation-specific robustness is equivalent to $(G, d_\mathrm{in}, d_\mathrm{out}, \epsilon, \delta)$-robustness with adversarial budget $\epsilon=0$ -- or a relaxation thereof. 2.) Otherwise, it can be thought of as guaranteeing classic adversarial robustness, i.e. $G = \\{e\\}$, w.r.t. a distance function $d_\mathrm{in}(x,x')$ that determines if $x'$ is a transformed version of $x$. **Concerning randomized smoothing of transformation parameters**: An interesting connection here is that transformation-specific smoothing can actually be an instance of equivariance-preserving smoothing. The smoothed model may simply inherit equivariances from the transformation: If $\forall g \in G, \forall \theta, \forall x : \psi_\theta(g \bullet x) = g \bullet \psi_\theta(x)$, then the smoothed model $\xi(\psi_\beta(x))$ with random parameters $\beta$ will be equivariant w.r.t. group $G$ -- assuming an appropriate smoothing scheme $\xi$ (e.g. center smoothing). Concerning **multiplicative transformations:** The Gamma transformation is a multiplicative group. In fact, any multiplicatively composable transformation inherits its group structure from the multiplicative group $(\mathbb{R}_+,\cdot)$ if we define the group operator as $\psi_\alpha \bullet \psi_\beta = \psi_{\alpha \cdot \beta}$. Thus, if our task is invariant w.r.t. group $(\psi_\theta)$, we are in case 1.) above. If our task is not specifically invariant w.r.t. 
$(\psi_\theta)$, we are in case 2.). As discussed above, we can use the method from [1] as a form of equivariance-preserving smoothing. Multiplicative transformations actually have many practically interesting equivariances. For example, scaling of point clouds is multiplicatively composable and also rotation and permutation equivariant. So we can use the results in [1] together with our work to prove the robustness of models for rotation-invariant point cloud classification to adversarial scaling. Concerning **Rayleigh smoothing:** Thus far, we have not been able to discern whether specifically using Rayleigh parameter noise offers us any additional equivariances that are not achievable with other parameter distributions. But we will try to look further into it. Based on your feedback, we will expand our discussion of transformation-specific smoothing and specifically [1] in the camera-ready version. ## How interpolation errors fit into the framework Interpolation errors are primarily an issue with digital images, because they are the result of rasterizing the continuous image signal and quantizing the color values. Using the terminology from [1], this means that certain transformations are non-composable. Using the terminology from our paper, this means that the transformations are not a group. Since we are focusing on group equivariance, such transformations are not covered by our framework. Please note that this is an inherent problem of the image domain and not specifically a limitation of our work. This is also why equivariant models for images are typically derived for continuous image signals (see, e.g., [2, 3]). Domains where equivariant models are actually used in practice (graphs, point clouds, molecules ...) do not suffer from this problem. For example, PointNet is perfectly permutation invariant, DimeNet++ is perfectly rotation equivariant, and GCNs are perfectly isomorphism equivariant. 
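The multiplicative group structure discussed above can be verified directly for gamma correction, assuming its standard power-law form $\psi_\theta(x) = x^\theta$ on pixel values in $[0,1]$: composing two corrections equals one correction with the multiplied parameter, $\psi_\alpha \bullet \psi_\beta = \psi_{\alpha \cdot \beta}$, with $\theta = 1$ as the identity and $1/\theta$ as the inverse. A minimal numerical check (my own sketch, not from the paper):

```python
import numpy as np

def gamma_correct(x, theta):
    # psi_theta(x) = x ** theta, the usual power-law gamma correction
    return x ** theta

x = np.linspace(0.01, 1.0, 50)  # toy "image" of pixel intensities
a, b = 0.7, 1.8

# group law: psi_a composed with psi_b equals psi_{a*b},
# i.e. the transformations inherit the group structure of (R_+, *)
lhs = gamma_correct(gamma_correct(x, b), a)
rhs = gamma_correct(x, a * b)

# inverse element: psi_{1/a} undoes psi_a
restored = gamma_correct(gamma_correct(x, a), 1.0 / a)
```

Since $(x^b)^a = x^{ab}$ for positive pixel values, `lhs` and `rhs` agree up to floating-point error, which is exactly the composability that case 1.) above relies on.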
Based on your comment, we will include this discussion in Section 6 of the camera-ready version. ## Why we provide experimental results You are right. Our rigorous derivations show that existing certification methods for equivariant models can be reused for the proposed notion of robustness. The reason we conducted new experiments is that they offer novel insights: 1.) Firstly, there is no prior work on proving the robustness of models with continuous equivariances. Our equivariance-preserving randomized smoothing approach can thus be seen as a baseline for future work. 2.) Secondly, we derive new graph guarantees with non-uniform cost for insertion and deletion (see Section 5.2). Their experimental evaluation provides more detailed insights into the robustness of graph neural networks, which were impossible to obtain with existing certification methods. ## How prior work underestimates the strength of the adversary We discuss this point in the global rebuttal comment. In short: What we meant to say is that prior work uses their method to claim robustness within a set $B = \\{x' \mid d_\mathrm{in}(x,x')\leq \epsilon\\}$. Our work shows that, under a sound notion of robustness, the same models are actually robust (but not constant) in $\hat{B} = \\{x' \mid \hat{d}\_\mathrm{in}(x,x')\leq \epsilon\\} \supseteq B$ with action-induced distance $\hat{d}_\mathrm{in}$. This certified region may be significantly larger, depending on the distance function and group (see Fig. 3 in the pdf attached to the global comment). Based on your comment, we will make this more explicit in the camera-ready version of our paper. --- Please let us know if you have any additional questions. --- [1] Nikita Muravev and Aleksandr Petiushko. "Certified robustness via randomized smoothing over multiplicative parameters of input transformations." IJCAI 2022. [2] Maurice Weiler and Gabriele Cesa. "General E(2)-Equivariant Steerable CNNs". NeurIPS 2019. 
[3] Gabriele Cesa, Leon Lang, and Maurice Weiler "A Program to Build E(N)-Equivariant Steerable CNNs". ICLR 2022. --- Rebuttal Comment 1.1: Title: Thanks! Comment: I thank the authors for their reply. I'm grateful for the suggested discussions to be included. Currently I do not have any further questions. My main concern still is that the group structure is nice in theory but in practice it is usually broken by multiple real considerations (like interpolation), so would be nice somehow to make "almost" group structure research in the future :)
Efficient Testable Learning of Halfspaces with Adversarial Label Noise
Accept (poster)
Summary: This paper studies the problem of learning halfspaces under Gaussian distribution and adversarial label noise in the testable learning framework of Rubinfeld and Vasilyan (STOC'23). In this setup, the learning algorithm, given a few training examples drawn i.i.d. from some unknown distribution $\mathcal{D}$, may either *accept* and returns a hypothesis or *reject*. The goal is to satisfy the following two natural constraints: - Completeness: If the marginal of $\mathcal{D}$ is indeed Gaussian, the algorithm should accept w.h.p. - Soundness: The probability that the learning algorithm accepts and returns an inaccurate hypothesis is small. The main result is a poly-time algorithm that works for the Gaussian distribution against adversarial label noise. In particular, if $\mathsf{opt}$ is the smallest possible testing error achieved by the best halfspace on $\mathcal{D}$, the algorithm guarantees $O(\mathsf{opt}) + \epsilon$ error in time $\mathrm{poly}(d/\epsilon)$. Prior work achieves $\mathsf{opt} + \epsilon$ error but requires $d^{\mathrm{poly}(1/\epsilon)}$ time. A first building block is a weak learner (given in Proposition 2.1) that outputs a weight vector $v$ that is $O(\sqrt{\mathsf{opt} + \eta})$-close in time $d^{\tilde O(1/\eta^2)}$. (This is not sufficient due to the extra square root as well as the exponential dependence on $1/\eta$.) This baseline learner is then boosted by localizing to the decision boundary of $\mathrm{sgn}(v\cdot x)$. In particular, each iteration either makes progress in improving the accuracy of $v$, or obtains a proof of non-Gaussianity in terms of moments. Strengths: The paper studies a well-motivated and fundamental learning theory problem in the recently proposed model of testable learning. The results are strong, and the solution involves several novel and non-trivial ideas. Despite this complexity, the main paper is nicely written and achieves a great balance between intuition and technical details. 
I vote for accepting this paper. Weaknesses: The algorithm only handles the homogeneous halfspaces without a bias term, and cannot achieve the $\mathsf{opt} + \epsilon$ error guarantee as in the prior work of [RV22, GKK22]. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Following up on the weakness part, some discussion on whether/how the current technique could handle non-homogeneous halfspaces and/or the stronger agnostic learning guarantee would be helpful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This work is purely theoretical and its limitations are in the assumptions (e.g., the concept class is the class of homogeneous halfspaces, and the "reference distribution" is a spherical Gaussian). This has been properly addressed in the paper (clearly stated in the abstract and the main paper). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive assessment of our work. First, regarding the limitation on the error guarantee (does not achieve OPT but only O(OPT)), we remark that there is strong evidence that achieving error $OPT + \epsilon$ in fully polynomial time under agnostic learning, even without the testable requirement, is hard. In particular, as shown in [DKPZ21], no statistical query algorithm can learn halfspaces under Gaussian marginals up to error $OPT + \epsilon$ in time less than $d^{O(1/\epsilon^2)}$. Moreover, the same result is shown to hold for general algorithms under standard cryptographic assumptions in [DKR23]. Given the aforementioned hardness results, the best one can hope to do with a polynomial-time algorithm is to learn up to error $O(OPT)$. Our result is optimal in the sense that it efficiently achieves essentially the same error guarantee as in the absence of the testable requirement. Secondly, the reviewer raises the question of whether our approach can be made to work with non-homogeneous halfspaces. We remark that, to the best of our knowledge, almost all prior literature on learning LTFs under adversarial noise is for the homogeneous case, with only two exceptions [DKS18, DKTZ22]. We believe our approach can be extended to general LTFs but would give a slightly worse error. We leave this as an interesting question for future work. [DKS18] Diakonikolas, I., Kane, D. M., & Stewart, A. (2018, June). Learning geometric concepts with nasty noise. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (pp. 1061-1073). [DKPZ21] Diakonikolas, I., Kane, D. M., Pittas, T., & Zarifis, N. (2021, July). The optimality of polynomial regression for agnostic learning under gaussian marginals in the SQ model. In Conference on Learning Theory (pp. 1552-1584). PMLR. [DKTZ22] Diakonikolas, I., Kontonis, V., Tzamos, C., & Zarifis, N. (2022, June). 
Learning general halfspaces with adversarial label noise via online gradient descent. In International Conference on Machine Learning (pp. 5118-5141). PMLR. [DKR23] Diakonikolas, I., Kane, D., & Ren, L. (2023, July). Near-optimal cryptographic hardness of agnostically learning halfspaces and relu regression under gaussian marginals. In International Conference on Machine Learning (pp. 7922-7938). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions! My overall evaluation of the paper remains positive.
Summary: Learning halfspaces is a very well studied problem in machine learning. In the agnostic (adversarial label noise) and distribution-free setting, this problem is known to be computationally intractable. As a result, there have been several works on agnostic learning in distribution-specific settings (where the marginal distribution belongs to a particular family of distributions, say Gaussian or log-concave). In this scenario, the learner has an error of the form $OPT + \epsilon$, where $OPT$ denotes the optimal 0-1 error. This however has complexity $d^{1/\epsilon^2}$, and the exponential dependency on $1/\epsilon$ is tight. This motivates designing learning algorithms that have better sample complexity with respect to $1/\epsilon$, where the error becomes $f(OPT) + \epsilon$ for some function $f$. In this work, the authors studied this problem in the newly introduced testable learning framework of Rubinfeld and Vasilyan (STOC 23). Here the goal is that if the tester accepts, then with high probability the output of the learner is close to some function of $OPT$, and if the data satisfies the distributional assumptions, the algorithm accepts. In this paper, the authors study the class of homogeneous halfspaces over $R^d$ with Gaussian marginals. This work designs a tester-learner with sample complexity $N=poly(d, 1/\epsilon)$ and runtime $poly(dN)$. Additionally, their algorithm also works in the active learning framework where the algorithm only queries the labels of a subset of the samples. The authors first design a weak agnostic learner with respect to the Gaussian distribution which checks if the low-degree moments of the marginal distribution approximately match those of the Gaussian distribution (Section 2). The idea is that if the moments are close, then the vector corresponding to the degree-1 Chow parameters will not be far from $v^*$ (the true vector of the halfspace); see Lemma 2.3. 
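The degree-1 Chow parameter $E[yx]$ directly reveals the halfspace direction under the Gaussian marginal: for $y = \mathrm{sgn}(v^* \cdot x)$ with unit $v^*$ and $x \sim N(0, I_d)$, one has $E[yx] = \sqrt{2/\pi}\, v^*$. The following is a quick numerical sanity check of this fact on clean labels (an illustration only, not the paper's tester-learner):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000
w_star = np.zeros(d)
w_star[0] = 1.0                                 # true homogeneous halfspace

X = rng.standard_normal((n, d))                 # Gaussian marginal
y = np.sign(X @ w_star)                         # noiseless labels

chow = (y[:, None] * X).mean(axis=0)            # degree-1 Chow vector E[y x]
v = chow / np.linalg.norm(chow)                 # empirical direction estimate
# v is nearly aligned with w_star, and ||chow|| is near sqrt(2/pi) ~ 0.798
```

With adversarial label noise the estimate degrades gracefully, which is why the paper pairs it with robust mean estimation and moment tests rather than using it directly.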
They also design an algorithm that either reports that the marginal distribution is not Gaussian or outputs a unit vector $w$ (Algorithm 2, corresponding to Proposition 2.1). Once they have the weak learner, the authors try to find a vector that has small 0-1 error against $v^*$ by calling Algorithm 2 (Lemma 2.1) in an iterative manner. After logarithmically many iterations, they show that the probability mass of the disagreement region is bounded (Lemma 3.1). However, this process gives a collection of vectors such that one of them is close to the optimal vector $v^*$. The authors finally run a tournament among the candidate halfspaces corresponding to these vectors and output the winning hypothesis (Lemma 3.6). Strengths: The paper gives the first algorithm for testable learning of halfspaces that runs in $poly(d, 1/\epsilon)$ time. The algorithm is very nice. With the complexity pulled down drastically, a proper implementation and experimental results for this algorithm would be possible, and it would be nice to see the relevance of the concept of testable learning in various applications. Weaknesses: The usefulness of the testable learning model in real-life applications is yet to be understood. The paper can only handle adversarial noise. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Can this approach be used to design tester-learners for function classes other than halfspaces? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It is a purely theoretical work in the paradigm of testable learning - a relatively new concept whose importance is not yet fully confirmed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding practicality: The testable learning framework is a new learning paradigm whose definition (in a paper that appeared in STOC'23) was motivated by practical considerations (namely, the fact that one cannot verify the output of known distribution-specific agnostic learners). Since its introduction, the ML theory community has been intensely studying the algorithmic possibilities and limitations in this learning model. Ours is the first work that gives fully polynomial-time algorithms with near-optimal error guarantees for learning Gaussian halfspaces in this model. Since the testable learning model was defined only recently, it has not yet been picked up by more practical ML researchers. Related to this, we note that prior to our work no efficient implementation of testable learners was possible (because prior algorithms had complexity scaling exponentially in $1/\epsilon$). As pointed out by the reviewer, our work drastically pulls down the complexity and hence makes experimental investigation of the model in practical settings a viable option. Finally, we would like to note that learning theory results such as ours are within the scope of NeurIPS (as stated in the call for papers). Consequently, we request that our work is judged on its merits based on the appropriate criteria. Regarding generalizing our results to other hypothesis classes: even without the testable requirement, halfspaces are the only class for which it is known how to achieve $O(OPT)+\epsilon$ error in polynomial time in the distribution-specific agnostic model. In fact, most known efficient learners (with distribution-independent error guarantees) tolerant to label noise focus on the class of halfspaces and their generalizations [KKMS08, KLS09, ABL17, DKS18, DKKTZ21, DKPZ21, DKTZ22]. [ABL17] Awasthi, P., Balcan, M. F., & Long, P. M. (2017). The power of localization for efficiently learning linear separators with noise. Journal of the ACM (JACM), 63(6), 1-27. 
[DKS18] Diakonikolas, I., Kane, D. M., & Stewart, A. (2018, June). Learning geometric concepts with nasty noise. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (pp. 1061-1073). [DKKTZ21] Diakonikolas, I., Kane, D. M., Kontonis, V., Tzamos, C., & Zarifis, N. (2021, July). Agnostic proper learning of halfspaces under gaussian marginals. In Conference on Learning Theory (pp. 1522-1551). PMLR. [DKPZ21] Diakonikolas, I., Kane, D. M., Pittas, T., & Zarifis, N. (2021, July). The optimality of polynomial regression for agnostic learning under gaussian marginals in the SQ model. In Conference on Learning Theory (pp. 1552-1584). PMLR. [KKMS08] Kalai, A. T., Klivans, A. R., Mansour, Y., & Servedio, R. A. (2008). Agnostically learning halfspaces. SIAM Journal on Computing, 37(6), 1777-1805. [KLS09] Klivans, A. R., Long, P. M., & Servedio, R. A. (2009). Learning Halfspaces with Malicious Noise. Journal of Machine Learning Research, 10(12). [DKTZ22] Diakonikolas, I., Kontonis, V., Tzamos, C., & Zarifis, N. (2022, June). Learning general halfspaces with adversarial label noise via online gradient descent. In International Conference on Machine Learning (pp. 5118-5141). PMLR.
Summary: This paper considers computationally efficient learning of halfspaces with adversarial noise in a recently proposed "testable" task: with high probability the algorithm either reports that the example distribution is not standard Gaussian, or outputs a halfspace with error of at most O(opt)+epsilon. There are existing works in a similar setting, but they target an error of opt+epsilon, so their time and sample complexity is unavoidably $d^{O(1/\epsilon^2)}$. This paper relaxes the error target to O(opt)+epsilon as in previous non-testable works, and shows that the testable task can be achieved with time and sample complexity of around poly(d, 1/epsilon) and label complexity of around d log(1/epsilon). Technically, the results are obtained by (non-trivially) combining known ideas, like moment matching to assist testing, the Chow parameter (roughly speaking, E[yx]), robust mean estimation, and soft localization for learning in a label-efficient way. -- I've read the author's response and I remain positive about this paper. Strengths: - This paper considers a recently proposed task of efficient testable learning of halfspaces with Gaussian distribution, and shows that, similar to the results in the classical non-testable setting, it can be done in poly(d/epsilon) time when considering constant-factor approximation. In my opinion, this result is not very surprising, but it is still interesting to know. - Most of the individual ideas used for the proof and algorithm are known to the community. On the other hand, the careful analysis to combine them in this new testable learning setting is still novel and nontrivial. - I briefly checked the proofs and they looked sound to me. - The paper is written very clearly: it clearly explains the problem and how it relates to existing work, and it organizes its technical part well so that one can follow both the high-level steps and the details. 
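The "moment matching to assist testing" idea can be caricatured with a check of just the first two empirical moments against those of $N(0, I)$. This is a toy stand-in for the paper's low-degree moment tests (my own sketch, with an arbitrary tolerance, not the actual tester):

```python
import numpy as np

def moment_test(X, tol):
    # Accept only if low-degree empirical moments match N(0, I):
    # mean close to 0 and covariance close to the identity.
    # (The actual tester also checks higher-degree moments.)
    d = X.shape[1]
    mean_ok = np.abs(X.mean(axis=0)).max() < tol
    cov_ok = np.abs(np.cov(X, rowvar=False) - np.eye(d)).max() < tol
    return mean_ok and cov_ok

rng = np.random.default_rng(1)
gaussian = rng.standard_normal((50_000, 3))   # matches the reference moments
skewed = rng.exponential(size=(50_000, 3))    # clearly non-Gaussian marginal
# moment_test(gaussian, 0.05) accepts; moment_test(skewed, 0.05) rejects,
# since the exponential marginal has mean 1 rather than 0.
```

The non-trivial part of the paper is that passing such moment tests must also suffice for the learner's guarantee; merely detecting non-Gaussianity is the easy direction.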
Weaknesses: There are some weaknesses (and I mentioned some of them above already), but I don't think they're really major: - The results are not very surprising to me, and the individual ideas used are mostly known. - It only considers standard Gaussian distributions and adversarial noise. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I don't really have any questions, but one minor comment is that since this is a relatively new setting, it might help readers understand the difficulty of the problem to explain that one cannot simply first test if the distribution is Gaussian and then proceed with standard learning algorithms. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The reviewer mentions the result is itself not surprising. While we respect this opinion, we want to point out that there are known hypothesis classes where there are separations between testable learning and standard agnostic learning. In particular, as shown in the initial work of [RV23], there are agnostic learning algorithms for monotone Boolean functions under the uniform distribution in time $2^{\tilde O(\sqrt n)}$, but their lower bound construction suggests that achieving the same under the testable learning setting requires a runtime of $2^{\Omega(n)}$. The reviewer also pointed out that "Most of the individual ideas used for the proof and algorithm are known to the community". While we indeed use many standard tools appearing in prior works, the wedge bound algorithm (Algorithm 2) and its analysis is, to the best of our knowledge, a novel technical contribution. Quite differently from the testers that exist in prior literature, this testing routine no longer blindly compares global low-degree moments but rather uses "local" information restricted to areas defined by the output of the learner. In our opinion, this is one of the most important pieces in achieving efficient testable learning of halfspaces. Finally, we thank the reviewer for bringing up the discussion of the difficulty of testable learning. The insufficiency of the naive combination of testing distributional closeness and a standard learning algorithm has already been discussed in the initial paper of [RV23]. We will include this in our paper and explain in detail why testable learning is a non-trivial task. [RV23] Rubinfeld, R., & Vasilyan, A. (2023, June). Testing distributional assumptions of learning algorithms. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing (pp. 1643-1656).
Summary: The paper studies the problem of efficient testable learning of halfspaces under Gaussian marginals and adversarial noise. The setting of testable learning addresses the issue of restrictive assumptions on the marginal distributions (such as Gaussianity) under which learning algorithms are designed. Traditionally, a learning algorithm has no guarantee when the marginal distribution is not the prescribed one. In the tester-learner setting, a learning algorithm consists of a tester and a learner. The tester "certifies" that the marginal distribution satisfies certain conditions (ideally this test would be a test of whether the marginal distribution is "close" to the original distribution). Conditioned on the tester passing, the learning algorithm has the desired correctness guarantee. The main idea is that the tester does not need to test for distributional closeness but rather just closeness sufficient to "fool" the learning algorithm. A recent line of work has considered the learning of halfspaces in this setting. Previous work shows that halfspaces can be agnostically learned under Gaussian marginals in time $d^{1/\epsilon^2}$, which is optimal if one insists on algorithms that return hypotheses with error $\mathrm{OPT} + \epsilon$. The present paper relaxes this notion to require that the returned hypothesis only has error $O(\mathrm{OPT}) + \epsilon$. The paper presents a polynomial (in $d$ and $1/\epsilon$) time tester-learner algorithm under Gaussian marginals. Strengths: The paper presents a natural and interesting adaptation of the localization technique for learning halfspaces in the testable learning setting. The main idea is that learning algorithms designed to work under Gaussian marginals require significantly less information (for example, relying only on moments and anti-concentration). The paper analyses a robust version of Chow parameter recovery under the condition that the marginals match Gaussian marginals (this can be tested efficiently). 
Using this as a weak learner, the paper uses a localization technique to convert a weak learner (from Chow parameter recovery) to a strong learner. Again, the paper uses the fact that closeness of parameter and loss for halfspaces can be efficiently tested. In summary, the paper presents a nice analysis of the learning algorithm. Weaknesses: As presented, the paper is too specific to Gaussian marginals. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be nice to address which parts of the paper can be generalized to more general distributions? For example, what notions of testable concentration and anti-concentration would suffice for these algorithms to work. For example, would having sufficient moment decay suffice for the Chow parameter recovery? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the concern that our tester-learner may only work under Gaussian marginals, we refer the reviewer to our General Response. The reviewer brought up the question of which part of the algorithm can be generalized to work under a broader family of distributions. Just as the reviewer guessed, the Chow parameter estimation part can indeed be relaxed to work with distributions whose second moments are bounded. The main challenge in generalizing our approach to other distributions is the localization step: while the conditional distribution of a Gaussian restricted to a thin slice is still a Gaussian, this need not be the case for other families of distributions. --- Rebuttal Comment 1.1: Comment: We thank the authors for the response and maintain my positive score.
Rebuttal 1: Rebuttal: ## General Response We thank the reviewers for their time and effort in providing feedback. We are encouraged by the positive comments from reviewers for the following: (i) the **improved running time for testable learning halfspaces** from $d^{ \text{poly}(1/\epsilon) }$ to fully polynomial in all parameters, (ii) the **optimality of the error guarantees** (Reviewers fkEL, 8SDJ, XEN1), and (iii) **the writing quality and the clarity of the presentation of the ideas** (Reviewers 8SDJ, XEN1). Additionally, as mentioned by reviewer 1kSW, we emphasize that another major strength of our algorithm is that it **works under the active learning setting requiring only $d \cdot \mathrm{poly}\log(1/\epsilon)$ labeled examples.** Below, we first address some common concerns and then provide responses to each individual reviewer in order. **Learning under Gaussian Marginals** Two reviewers pointed out that our result is specific to Gaussian marginals. While generalizing our results to broader families of distributions is an interesting future direction, we remark that Gaussianity here is a natural assumption that has been commonly made in many prior works studying agnostic learning of halfspaces (see [DKKTZ21], [DKPTZ21], [DKZ20], etc.). Analogous to situations in the standard agnostic learning setting, understanding polynomial time testable learners in the Gaussian case can be seen as an important first step in achieving a universal theory of efficient testable learning under a variety of different distributions. We emphasize that our work is the first polynomial-time algorithm for testable learning of halfspaces with near-optimal error guarantees matching those achievable without the testable requirement. Therefore, we view this as a natural and important first step in understanding efficient testable learning under more general marginal distributions. 
**Adversarial Noise Model** Some reviewers also questioned the choice of our noise model. We remark that the adversarial noise model is theoretically well motivated, extensively studied, and also one of the strongest noise models in the literature. Hence, the error guarantees and runtime hold for all other weaker families of noise models as well. Whether the error guarantees can be improved under these weaker noise conditions is a direction we did not pursue but is a potentially interesting future direction.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Theoretically Guaranteed Bidirectional Data Rectification for Robust Sequential Recommendation
Accept (poster)
Summary: Overall comment: This paper proposes two detect-and-rectify methods to solve the problem of unreliable instances in Sequential Recommender Systems (SRSs), including unreliable items in the input and unreliable targets. The two methods can be applied to existing SRS models, effectively improving the model performance. Strengths of the paper include: 1. The paper provides sufficient theoretical proof for the methods, which is very impressive. 2. The discussion and clarification of related work, as well as the differences and connections with existing work, are very clear. 3. The experimental results are sufficient, and the original model performance is improved on multiple baseline models via the proposed method. The ablation study also compares the performance of the DRUT, DDUI, their combination, and whether to use self-ensemble under different conditions. Weaknesses of the article include: The paper does not discuss the limitations of the methods. In particular, I am interested in whether this method can be applied in large-scale real-world recommender systems? Are there any practical issues? Strengths: Strengths of the paper include: 1. The paper provides sufficient theoretical proof for the methods, which is very impressive. 2. The discussion and clarification of related work, as well as the differences and connections with existing work, are very clear. 3. The experimental results are sufficient, and the original model performance is improved on multiple baseline models via the proposed method. The ablation study also compares the performance of the DRUT, DDUI, their combination, and whether to use self-ensemble under different conditions. Weaknesses: Weaknesses of the article include: The paper does not discuss the limitations of the methods. In particular, I am interested in whether this method can be applied in large-scale real-world recommender systems? Are there any practical issues? 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I am interested in whether this method can be applied in large-scale real-world recommender systems? Are there any practical issues? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: As I written in weakness: The paper does not discuss the limitations of the methods. In particular, I am interested in whether this method can be applied in large-scale real-world recommender systems? Are there any practical issues? Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work and your valuable reviews. > Question and Limitation: the paper does not discuss the limitations of the methods. In particular, I am interested in whether this method can be applied in large-scale real-world recommender systems. Are there any practical issues? Thank you for highlighting the need to discuss the limitations and practical considerations of our proposed BirDRec framework. The main limitations are twofold, which will be highlighted in our camera-ready. - **Limited improvements on storage cost for smaller item sets:** As mentioned in lines 293-295, we acknowledge that the storage cost of BirDRec is marginally higher than STEAM's on ML-1M. This is primarily because the rectification sampling strategy with Self-ensemble has a less apparent advantage on datasets with smaller item sets (see lines 246-249). To be specific, the storage cost reduction for calculating weighted average prediction scores is from $O(|V| \cdot H)$ to $O(K)$ for each instance. Thus if $|V|$ is small, the benefit of this reduction will be less obvious. - **Marginally higher computational cost than the backbone:** Although BirDRec is significantly faster than the latest robust SRS (STEAM), it is worth noting that BirDRec is on average 1.6 times slower than its backbone model SASRec (as depicted in Fig 3) in each training epoch. This increased training time could be a practical concern in systems with **extremely large-scale** datasets and real-time recommendation demands. --- Rebuttal Comment 1.1: Title: Reply to author rebuttal Comment: I have no further questions. The limitations of the work are discussed.
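As a rough illustration of the storage argument in the rebuttal above, the following sketch (our own illustrative code, not the authors' implementation; the function names, the one-time sampling, and the exponential-decay weighting are all assumptions) keeps only a running weighted average of scores over $K$ sampled candidate items per instance, instead of retaining $H$ epochs of scores over the full item set $V$:

```python
import random

def init_state(item_set, k):
    # Sample K candidate items once per instance; store one running score
    # per candidate. This is O(K) floats, rather than O(|V| * H) for keeping
    # every epoch's scores over the whole item set.
    candidates = random.sample(sorted(item_set), k)
    return {item: 0.0 for item in candidates}

def update_state(state, epoch_scores, decay=0.9):
    # Fold the current epoch's prediction scores into the running weighted
    # average, so no per-epoch history (the H factor) is ever stored.
    for item in state:
        state[item] = decay * state[item] + (1 - decay) * epoch_scores[item]
    return state
```

With a small $|V|$, the gap between $|V| \cdot H$ and $K$ shrinks, which matches the rebuttal's point that the reduction is less beneficial on datasets with small item sets.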
Summary: This paper aims to address the issue where the performance of sequential recommender systems (SRSs) could be harmed by mismatched input-target pairs introduced by distracted users. The paper claims that existing studies only focus on either unreliable inputs or targets but not both simultaneously. Also, no theoretical guarantee of effectiveness is provided for the existing methods. To address these issues, the paper proposes to replace/remove the target/input item using a forward/backward SRS's weighted average scores accumulated over training epochs. Error bounds for both forward and backward rectifications are provided. To further reduce the time and space complexity, a sampling strategy as well as a self-ensemble mechanism are applied. Experiments show the proposed approach is able to improve the performance of representative SRSs on four datasets. Compared to rectified/denoised SRSs, the performance of the proposed approach is also reported to be better. Ablation tests as well as hyperparameter analysis are also provided. Strengths: * The ideas of a. rectifying input and target items using weighted average prediction scores of forward and backward SRSs, b. using sampling and a self-ensemble mechanism to improve efficiency are interesting and new in my opinion. * The main steps leading to the error bounds are reasonable, though I did not manage to check the proofs in the supplementary in detail. * The empirical studies that support the effectiveness of the proposed approach appear convincing. Weaknesses: As I am not familiar with the literature of the robustness of SRS nor the rectification / denoising in SRS, I am only able to give a few suggestions: * Analyzing the existing work as well as the proposed approach on a synthetic dataset with ground truth could further strengthen the claims about: a. the drawbacks of the existing studies, b. the effectiveness of the proposed approach. 
Conducting simulations on the synthetic dataset could validate the error bound, as well as evaluate the approximation quality of the proposed method. * Qualitatively evaluating the rectified items for a few instances in different datasets would further help the audience understand the behavior of the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the suggestions in the "weakness" section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitation is mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of our work and your valuable feedback. > Weakness 1 Thank you for the interesting and insightful suggestions. Constructing a synthetic dataset with ground truth for data reliability is indeed of importance for robust SRSs. However, the major challenge is how to obtain ground truth annotations. There are **NO definitive ground truth rules** that can be used to accurately identify the reliability of each instance due to the inherent ambiguity of user interactions. That is, the reason behind each interaction may not be apparent even to the users themselves, as noted in existing research [1]. Thus the annotation can not be done directly via either machine or crowdsourcing or domain experts. To this end, one possible solution is to design our own rules that are as close to the ground truth rules as possible for determining data reliability. However, we have to overcome key concerns including but not limited to the following ones: - **C1: How to guarantee the validity of the designed rules?** For example, we may design a rule: for users who are huge fans of Sci-Fi movies, if any instance of these users involves films of other genres, then the instance is unreliable. However, we may not be able to guarantee the generalizability of this rule to all these users without exceptions in the real world. So, it is challenging to quantitatively measure to what extent each rule is valid, leading to concerns about the validity of the synthetic data. - **C2: How to guarantee the completeness of the designed rules?** Considering the diverse user behavior patterns, it is impractical to design rules to exactly cover all the possible behavior patterns of all users in a specific domain. In this case, the synthetic data may either oversimplify or complicate the recommendation task, rendering the performance of SRSs on such datasets less meaningful. 
In conclusion, we acknowledge the value of the suggestions as well as the challenges involved. We will strive to make progress in this direction as part of our future research. > Weakness 2 Thank you for the invaluable comments, which allow an intuitive and detailed analysis of the proposed method. We will randomly sample some unreliable instances on different datasets and visualize the rectification process and results. These will be added in the camera-ready. Reference [1] Cosley et al. Is seeing believing? How recommender system interfaces affect users’ opinions. In SIGCHI 2003. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations, I have no further questions.
Summary: This paper proposes a bidirectional data rectification framework, called BirDRec, to address the challenges of training sequential recommender systems (SRSs) with unreliable input and targets. The authors provide two theoretically guaranteed error-bounded strategies to rectify the data, which can be flexibly implemented with most existing SRSs. They also introduce a rectification sampling strategy and self-ensemble mechanism to reduce the computational and space complexity of BirDRec. The proposed framework is evaluated on several benchmark datasets, and the results demonstrate its generality, effectiveness, and efficiency compared to state-of-the-art robust SRSs. Strengths: 1. The paper introduces BirDRec, a novel bidirectional data rectification framework designed to enhance the robustness of sequential recommender systems (SRSs) by addressing unreliable data. BirDRec can seamlessly integrate with existing SRSs, offering a flexible solution for improving their performance. 2. The authors present two error-bounded strategies within BirDRec that provide theoretical guarantees for detecting and rectifying unreliable input and targets in SRSs. These strategies contribute to the reliability and accuracy of the recommendations generated by the system. Weaknesses: 1. The evaluation of the proposed framework is limited to a few benchmark datasets, and it remains uncertain how well it would generalize to different types of data or domains. It would be beneficial for the authors to include further experiments or discuss potential challenges in applying their framework to diverse datasets. 2. While the authors attribute the unreliable data to external distractions, it is important to acknowledge that there might be other sources of noise or errors that their approach does not account for. Including a discussion on potential alternative causes of unreliable data would strengthen the paper's analysis. 3. 
The paper lacks a thorough analysis of the computational and space complexity of the proposed framework. Understanding the resource requirements of the framework is crucial for assessing its scalability to larger datasets or real-time applications. It would be valuable for the authors to provide insights into the framework's efficiency and discuss any potential limitations in terms of computational resources. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you provide additional details on the theoretical guarantees of the error-bounded strategies proposed for data rectification? 2. Can you explain the process behind selecting the benchmark datasets used in your experiments? What criteria did you consider when choosing these datasets? Additionally, could you elaborate on the evaluation metrics or performance measures used to assess the effectiveness of the proposed framework on these benchmarks? 3. Have you taken into account other sources of noise or errors commonly encountered in SRSs, such as cold-start, diversity, or fairness issues? If so, how does your approach address or mitigate these challenges? 4. In real-world applications, how do you envision the proposed framework being utilized? Are there specific domains or use cases where you believe BirDRec would be particularly beneficial? Additionally, what are some potential challenges or limitations that might hinder the adoption or scalability of your framework in practical settings? It would be valuable to understand the feasibility and potential constraints of implementing BirDRec in real-world scenarios. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors missed an opportunity to compare their approach with other recent methods that address the problem of unreliable data in SRSs. By including such a comparison, the authors could provide a more comprehensive evaluation of their proposed framework and highlight its advantages or limitations in relation to existing solutions. This addition would enhance the paper's contribution to the field. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your reviews. > Weakness 1 We exactly follow the scientific research standards and common practice in SRSs, that is, selecting (around 3~6) representative benchmark datasets that are widely adopted by SOTAs [1, 2, 4] for evaluation. We have tried our best to cover various domains ranging from movie (ML-1M), video (QK-Video), to E-commerce (Amazon-Beauty) and location-based social networks (Yelp). Due to space limitations, no method can realistically be evaluated on all datasets. Thus the generalizability of SOTAs cannot be fully guaranteed on datasets that have not been tested. > Weakness 2 & Question 3 Our solution is indifferent to the cause of unreliable instances. Regardless of the reasons causing an unreliable instance, it will involve either unreliable input or unreliable target, or both. All of these can be handled by BirDRec. We acknowledge that analyzing the causes of unreliable data might help devise solutions. However, investigating why a user interacts with irrelevant items is a challenging psychological problem [3], which is beyond the scope of this paper. We may leave it as our future work. > Weakness 3 The time and space complexity of our proposed framework are respectively analyzed in Sections 4.1 and 4.2. The main limitations of BirDRec are twofold, which will be highlighted in our camera-ready. - **Limited improvements on storage cost for smaller item sets:** As mentioned in lines 293-295, we acknowledge that the storage cost of BirDRec is marginally higher than STEAM's on ML-1M. This is primarily because the rectification sampling strategy with Self-ensemble has a less apparent advantage on datasets with smaller item sets (see lines 246-249). To be specific, the storage cost reduction for calculating weighted average prediction scores is from $O(|V| \cdot H)$ to $O(K)$ for each instance. Thus if $|V|$ is small, the benefit of this reduction will be less obvious. 
- **Marginally higher computational cost than the backbone:** Although BirDRec is significantly faster than the latest robust SRS (STEAM), it is worth noting that BirDRec is on average 1.6 times slower than its backbone model SASRec (as depicted in Fig 3) in each training epoch. This increased training time could be a practical concern in systems with **extremely large-scale** datasets and real-time recommendation demands. > Question 1 The details and proofs of all the lemmas and theorems can be found in Appendix. > Question 2 **For datasets**, we exactly follow the scientific research standards and common practice in SRSs, that is, selecting (around 3~6) representative benchmark datasets that are widely adopted by SOTAs [1, 2, 4, 5] for evaluation. Typically, they are selected by considering the following criteria: domains, sizes, sparsity levels, and average sequence lengths as shown in Tab 1. **For metrics**, these evaluation metrics are selected by following SOTAs in SRSs [1, 2, 4]. Higher metric values indicate better ranking performance. We can add the following detailed explanations in Appendix. Suppose there are $M$ users, and $R_{u}$ is the full recommendation list (RL) for user $u$.  Let $R_{u}[j]$ be the $j$-th item in $R_{u}$, and $R_{u}[1:N]$ be the Top-$N$ RL. $i_{u}^{t}$ is $u$'s interacted item in the test set. $I(x)$ is an indicator function whose value is $1$ when $x > 0$, and $0$ otherwise. The formal definitions of HR, NDCG, and MRR are as follows. 
**HR (Hit Ratio)** gives the percentage of users that can receive at least one correct recommendation from the Top-N RL: $$HR@N=\frac{1}{M}\sum_{u=1}^{M}I(|R_{u}[1:N]\cap\{i_{u}^{t}\}|).$$ **NDCG (Normalized Discounted Cumulative Gain)** evaluates the ranking performance by measuring the positions of correctly recommended items: $$NDCG@N=\frac{1}{M}\sum_{u=1}^{M}\frac{1}{Z}DCG@N=\frac{1}{M}\sum_{u=1}^{M}\frac{1}{Z}\sum_{j=1}^{N}\frac{2^{I(|\{R_{u}[j]\}\cap \{i_{u}^{t}\}|)}-1}{\log_2(j+1)},$$ where $Z$, as a normalization constant, is the maximum possible value of $DCG@N$. **MRR (Mean Reciprocal Rank)** also evaluates the ranking performance via the rank position of the correct recommended item $i_{u}^{t}$ (denoted by $rank_{i_{u}^{t}}$) in the RL: $$MRR=\frac{1}{M}\sum_{u=1}^{M}\frac{1}{rank_{i_{u}^{t}}}.$$ > Question 4 BirDRec would be particularly beneficial for domains where users' behaviors can be occasionally influenced by external distractions. For example, in the movie domain, users may receive recommendations from friends once in a while. Meanwhile, it also benefits from datasets with large item sets, leading to significant improvements in storage cost compared with the latest robust SRS STEAM [2]. > Limitation 1 We did compare our model with 3 recent SOTA robust SRSs, i.e., BERD [1] (IJCAI-21), FMLP-Rec [4] (TheWebConf-22), and STEAM [2] (TheWebConf-23). They aim to tackle unreliable targets, unreliable input, and both, respectively. The results are in Tab 3. References [1] Sun et al. Enhancing sequential recommendation by eliminating unreliable data. In IJCAI 2021. [2] Lin et al. A self-correcting sequential recommender. In TheWebConf 2023. [3] Cosley et al. Is seeing believing? How recommender system interfaces affect users’ opinions. In SIGCHI 2003. [4] Zhou et al. Filter-enhanced mlp is all you need for sequential recommendation. In TheWebConf 2022. [5] Yuan et al. Tenrec: A Large-scale Multipurpose Benchmark Dataset for Recommender Systems. In NeurIPS 2022. 
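For concreteness, the three metric definitions above can be sketched in code (our own illustrative helpers, not the authors' evaluation scripts; we assume a single held-out test item per user, so the NDCG normalizer $Z$ equals $1$):

```python
import math

def hr_at_n(ranked_lists, test_items, n):
    # Fraction of users whose held-out item appears in their Top-N list.
    hits = sum(1 for u, items in ranked_lists.items()
               if test_items[u] in items[:n])
    return hits / len(ranked_lists)

def ndcg_at_n(ranked_lists, test_items, n):
    # With one relevant item per user, the ideal DCG (Z) is 1/log2(2) = 1,
    # and (2^I - 1) reduces to 1 at the hit position.
    total = 0.0
    for u, items in ranked_lists.items():
        for j, item in enumerate(items[:n], start=1):
            if item == test_items[u]:
                total += 1.0 / math.log2(j + 1)
                break
    return total / len(ranked_lists)

def mrr(ranked_lists, test_items):
    # Assumes the full recommendation list contains the test item.
    total = 0.0
    for u, items in ranked_lists.items():
        rank = items.index(test_items[u]) + 1
        total += 1.0 / rank
    return total / len(ranked_lists)
```

For example, with users `{1: ['a', 'b', 'c'], 2: ['b', 'c', 'a']}` and test items `{1: 'a', 2: 'a'}`, HR@2 is 0.5, NDCG@3 is 0.75, and MRR is 2/3.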
--- Rebuttal Comment 1.1: Comment: I recommend that the author incorporate this section into the main body of the revised paper. While I am inclined to give a higher score based on this addition, I believe it's crucial to consider the feedback from other reviewers as well. I have no further questions. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response. We sincerely appreciate your willingness to award a higher score. Your feedback is invaluable, and we will thoughtfully address your comments and integrate the mentioned section into the main body of our camera-ready submission.
Summary: The authors proposed the Bidirectional Data Rectification (BirDRec) framework that can be incorporated into existing Sequential Recommender Systems (SRSs) to tackle unreliable data. In addition, the authors also adopt a rectification sampling strategy. The authors also provided theoretical guarantees for BirDRec and conducted various experiments on the robustness of the framework. Strengths: Strengths: - The idea is straightforward and easy to follow - The authors provided theoretical guarantees for BirDRec and conducted various experiments on the robustness of the framework. - Theoretical aspects are well-written - The authors also provided source code for verification. Weaknesses: Weaknesses: - Some of the experimental results are not convincing such as the performance of baselines in Table 3. The authors need to provide more explanation and clarification such as why BERD is good for Movielens1M but STEAM is better for Yelp. - It’s not clear which parts of the architecture in Figure 2 made the performance increase dramatically - Comparing BirDRec and SASRec for hyper-parameters analysis in Appendix C.2 and C.3 seems a bit ‘unfair’. In my opinion, we should compare with the most recent baselines, so at least we can compare with Figure 5 in the main paper. - How much time did it take to do grid search across models in Table 1 from Appendix Page 13? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to my comments above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your invaluable comments. ## Weaknesses > W1 The possible reasons are twofold: the item-wise correction process of STEAM, and the longer interaction sequences on ML-1M (see Tab 1). To correct each instance, STEAM selects one type of operation from ‘keep’, ‘delete’, and ‘insert’ for **each item** in the instance. Such correction is more likely to make mistakes on long sequences as one single wrong operation will render the entire sequence unreliable for training. By contrast, BERD ONLY cares whether the **target item** is matched with the input items given an instance, so its performance is insensitive to the length of the sequence. Besides, the mistakes made by STEAM can be more serious than BERD's, as STEAM can modify reliable instances into unreliable ones by deleting relevant items or inserting irrelevant items. In contrast, BERD simply abandons some instances that might be unreliable, but would not generate new unreliable instances. Such negative impacts will be amplified on datasets with long sequences where STEAM is more inclined to make mistakes. > W2 Based on our ablation study (see lines 296-305 and Fig 4), the performance gain of separately adding DRUT, DDUI, and Self-ensemble is 45.7%, 43.3%, and 3.1%, respectively. Thus the ranking of their importance should be DRUT, DDUI, and Self-ensemble. We will complement the ablation study by adding this in the camera-ready. > W3 All the figures in C.2 and C.3 show the performance comparison with SASRec itself and our BirDRec with SASRec as backbone. These figures mainly aim to investigate the effects of different hyper-parameters on BirDRec, so as to help determine the best setting for the hyper-parameters. Hence, the comparison is fair. For a more comprehensive visualization, we have added the three recent baselines (BERD: IJCAI-21, FMLP-Rec: TheWebConf-22, STEAM: TheWebConf-23) into C.2 and C.3. 
Note that their performance is already available in Tab 3, and their per-epoch accuracy is available in the training logs. The results show that BirDRec still performs the best. > W4 It took around three months to complete the grid search in Tab 1 in Appendix. This helps find the optimal hyper-parameter settings for all methods, thus delivering a fair and rigorous comparison. To improve efficiency, we accelerated this process with two strategies following SOTAs [1, 2]. **First**, we adopt the early stop mechanism [1] to terminate the model training if the performance does not increase in 20 consecutive epochs. **Second**, we split the hyper-parameters of each method into three independent groups, including learning rate-related ones (e.g. learning rate, batch size), model capacity-related ones (e.g. embedding size, number of attention heads), and the others. Then we conduct the grid search on each group separately [2], with the hyper-parameters not in the current group set as suggested in the original papers. Ultimately, the optimal settings of the groups together form the final optimal setting of a specific method. References [1] Goodfellow et al. Deep Learning. 2016. [2] Yoshua Bengio. Practical Recommendations for Gradient-Based Training of Deep Architectures. Neural Networks: Tricks of the Trade. 2012. --- Rebuttal Comment 1.1: Title: Thanks for rebuttal Comment: Thanks for the rebuttal. I would suggest the authors add W2 and W3 to the main body. I decided to keep my score after taking a closer look at other reviews and responses.
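The two acceleration strategies described in W4 (early stopping after 20 non-improving epochs, and grid search run independently per hyper-parameter group) can be sketched as follows. This is an illustrative sketch only; the function names, the `PATIENCE`/`MAX_EPOCHS` constants, and the toy `evaluate` interface are our assumptions, not the authors' code.

```python
from itertools import product

PATIENCE = 20      # stop after 20 consecutive non-improving epochs
MAX_EPOCHS = 200

def train_with_early_stop(evaluate_epoch):
    """Train until the validation metric stalls for PATIENCE epochs; return best score."""
    best, stale = float("-inf"), 0
    for epoch in range(MAX_EPOCHS):
        score = evaluate_epoch(epoch)
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= PATIENCE:
                break
    return best

def grouped_grid_search(groups, defaults, evaluate):
    """Search each hyper-parameter group separately, holding the others at
    their paper-suggested defaults, then merge the per-group winners."""
    optimal = dict(defaults)
    for group in groups:  # e.g. learning-rate-related, capacity-related, others
        keys = sorted(group)
        best_score, best_combo = float("-inf"), None
        for values in product(*(group[k] for k in keys)):
            cfg = {**defaults, **dict(zip(keys, values))}
            score = train_with_early_stop(lambda ep, c=cfg: evaluate(c, ep))
            if score > best_score:
                best_score, best_combo = score, dict(zip(keys, values))
        optimal.update(best_combo)
    return optimal
```

Searching three groups of, say, 3 hyper-parameters each costs roughly 3·g³ runs instead of g⁹ for the full joint grid, which is why the grouped search makes an otherwise months-long sweep feasible.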
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper focuses on the inconsistency between users’ true preferences and their interaction histories. Specifically, the authors propose theoretically guaranteed data rectification strategies based on SRS predictions to tackle both unreliable inputs and targets for more robust SRSs. Besides, the authors also devise a rectification sampling strategy and adopt a self-ensemble mechanism to ensure better scalability. Experiments on four real-world datasets show the superiority of the proposed method. Strengths: 1) The research problem of rectification of interaction data is very interesting. 2) Theoretical analysis for the proposed modules. 3) The proposed model can be compatible with most existing models. 4) Extensive experiments with respect to effectiveness and efficiency are conducted. Weaknesses: 1) Some details are not clear; for example, what’s the formulation of the Self-ensemble Mechanism? 2) The time comparison is important for real applications. However, Figure 3 compares SASRec, STEAM, and BirDRec simultaneously, which makes the increase in time from SASRec to BirDRec unclear. More analysis is necessary. 3) The experimental datasets are all small datasets with less than 100K users and items. Besides, the improvements are too large, which is not very common. 4) No online results are given. Too many hyper-parameters, which limits the applicability of the model. 5) Many parameters are pre-defined, without careful tuning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Theorem 4 indicates that the relative rank of an item over a list of randomly sampled items can approximate this item’s relative rank over the full item set. However, if K is much smaller than |V|, may the randomly sampled items still introduce substantial noise? 2) The space complexity reduction of the Self-ensemble Mechanism is from K*H to K, but not from |V|*H to K? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1) The proposed model seems to be too complicated to be deployed in real industrial environments. 2) Besides sequential recommendation, can this model be employed in other recommendation tasks, for example, rating prediction or CTR prediction? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your invaluable reviews. ## Weaknesses > W1 The formulation of the Self-ensemble mechanism is indeed provided in the paper (see lines 238-245). > W2 Thanks for bringing up the clarity issue with Fig 3(a). The table below shows the per-epoch time cost (seconds) of SASRec and BirDRec:

| | ML-1M | Beauty | Yelp | QK-Video |
|---|---|---|---|---|
| SASRec | 14.1 | 5.0 | 5.7 | 44 |
| BirDRec | 28.5 | 7.9 | 7.5 | 74 |

We will clarify this by adding values for each bar in Fig 3(a) in the camera-ready. > W3 The dataset selection exactly follows the common practice in SRSs. They are representative benchmark datasets and have been widely adopted by SOTAs [1, 2, 3] in SRSs. Although the numbers of users and items are less than 100K, the total numbers of interactions are large, e.g., QK-Video has more than 2M interactions. **BirDRec vs. non-robust SRSs.** It is common for robust SRSs to achieve large improvements over non-robust SRSs due to the negative effects of unreliable data in misleading SRSs. The table below shows the average improvements of BirDRec and the best robust SRS over six non-robust SRSs. Both methods achieve significant improvements.

| | FPMC | Caser | GRU4Rec | SASRec | BERT4Rec | MAGNN |
|---|---|---|---|---|---|---|
| Best Robust SRS | 31.13% | 32.72% | 42.52% | 17.73% | 21.81% | 10.45% |
| BirDRec | 38.91% | 40.65% | 64.81% | 49.27% | 51.06% | 25.94% |

**BirDRec vs. Robust SRSs (see Tab 3).** The relative improvement achieved by the best SOTA robust SRS against other robust SRSs is 23.45% on HR. BirDRec outperforms the best SOTA robust SRS by a further 25.68%. The reasons are threefold: BirDRec not only detects but also rectifies unreliable data; BirDRec handles both unreliable inputs and targets; and BirDRec is error-bounded. > W4 Online evaluation for SRSs is challenging due to privacy and ethical concerns w.r.t. accessing real-world platforms and user interaction data. We follow SRS SOTAs [1, 2, 3] to report offline results on public datasets. 
Beyond shared hyper-parameters (e.g., embedding size, batch size), BirDRec only has **4 additional** hyper-parameters (see Fig 5). SOTA robust SRSs [1, 2] normally have **5 additional** hyper-parameters. > W5 We have carefully tuned the hyper-parameters of each method with the grid search (see Fig 5 in the main paper and Tab 1 in Appendix). To improve efficiency, we accelerated this process with two strategies following SOTAs [4, 5]. **First**, we adopt the early stop mechanism [4] to terminate the model training if the performance does not increase in 20 consecutive epochs. **Second**, we split the hyper-parameters of each method into 3 independent groups: learning rate-related ones, model capacity-related ones, and the others. Then we conduct the grid search on each group separately. The optimal settings of the groups form the final optimal setting [5]. ## Questions > Q1 Theoretically, based on Theorem 4, a larger $K$ yields a more accurate estimation of the true rank. If $K$ is much smaller than $|V|$, the upper bound of Theorem 4 might be high, thus introducing noise. Empirically, we found that $K\ge 10$ is sufficient to reach promising accuracy on all datasets. The ratio of $K$ to $|V|$ is from 1/300 to 1/4000 across different datasets. The possible reasons are twofold. - Each rectification pool is initialized properly with an instance's succeeding or preceding items, which are potentially better substitutions. - Each rectification pool is progressively improved during training as it is continuously updated by adding newly sampled high-scored items. > Q2 The reduction is from $|V|*H$ to $K$ (see lines 234-251). **Without Self-ensemble**, each instance has to maintain prediction scores with all $|V|$ items in all $H$ previous epochs to trace back historical scores and calculate the weighted average scores, as we cannot forecast which item will be added to the rectification pool. 
**With Self-ensemble**, the weighted average scores of $K$ items in a rectification pool can be directly approximated by a self-ensembled model. ## Limitations > L1 Based on Fig 3, BirDRec is much more efficient than the latest SOTA Robust SRS STEAM, and only 1.6 times slower than its backbone. Besides, BirDRec is particularly beneficial for datasets with large item sets (see lines 246-249) thus being applicable for real environments. > L2 BirDRec can be adapted to other recommendation tasks, as long as timestamps of user-item interactions are available. Thus, the problem can be first formulated as a sequential recommendation task. Then, the expected output can be generated by properly adding extra neural layers. For rating prediction, the output of sequential recommendation (i.e., prediction scores for all candidate items) could be first mapped into $[0, 1]$ by a softmax layer, and then re-scaled based on the rating range. For CTR prediction, each CTR can be computed by a softmax layer over the output of sequential recommendation. References [1] Sun et al. Enhancing sequential recommendation by eliminating unreliable data. In IJCAI 2021. [2] Lin et al. A self-correcting sequential recommender. In TheWebConf 2023. [3] Qiu et al. Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation. In WSDM 2022. [4] Goodfellow et al. Deep Learning. 2016. [5] Yoshua Bengio. Practical Recommendations for Gradient-Based Training of Deep Architectures. Neural Networks: Tricks of the Trade. 2012. --- Rebuttal Comment 1.1: Title: Reply to Authors' Rebuttal Comment: Thanks for your rebuttal. Your responses have addressed most of my concerns.
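The sampling intuition invoked in Q1 above (Theorem 4) can be checked with a quick simulation: an item's relative rank among K uniformly sampled items tracks its relative rank over the full item set V even when K/|V| is small. This is our toy illustration with synthetic scores, not the paper's model or code.

```python
import random

def relative_rank(target_score, scores):
    """Fraction of the given items that the target item outranks."""
    return sum(s < target_score for s in scores) / len(scores)

random.seed(0)
V = 10_000                                        # full item-set size |V|
all_scores = [random.random() for _ in range(V)]  # synthetic prediction scores
target = all_scores[0]

true_rank = relative_rank(target, all_scores)               # rank over full V
sample = random.sample(all_scores, k=100)                   # K = 100, K/|V| = 1/100
approx_rank = relative_rank(target, sample)                 # rank over K samples
```

Because each sampled comparison is an unbiased Bernoulli draw, the sampled rank concentrates around the true rank at rate O(1/√K), which is consistent with the authors' observation that even small K (relative to |V|) suffices.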
Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games
Accept (poster)
Summary: This work proposes an analysis of the independent natural policy gradient algorithm for Markov potential games. Under technical assumptions on a problem-dependent sub-optimality gap quantity and supposing access to exact (averaged) advantage functions, this paper provides a novel $O(1/\epsilon)$ iteration complexity to guarantee that the average Nash gap along iterations is smaller than the accuracy $\epsilon$, improving over the previously known $O(1/\epsilon^2)$ iteration complexity in the same independent learning setting. After discussing the potential game setting as a warm-up and generalizing to the MPG setting, the paper provides simulations in a synthetic potential game and a congestion game. Strengths: - The convergence analysis improves over the $O(1/\epsilon^2)$ iteration complexity in prior work under some technical assumptions. - To the best of my knowledge, the analysis provided in this paper is new and the proofs seem to be correct (I went through the appendix; please see some additional comments below). However, please see some comments below regarding related works and quantities/notations to be made precise for clarification. In my opinion, the main technical novelty is Lemma 3.2 (and Lemma B.3 relying on the technical Lemma C.1), which is key to obtain the result in Theorem 3.3. This result is interesting and connects the sum of the ‘gaps’ (over agents) for a fixed temperature parameter $\eta$ and for $\eta$ going to infinity. - The paper is well-organized. Weaknesses: **1. About the suboptimality gap $\delta_K$ and the $O(1/K)$ convergence rate**: Theorem 3.3 provides an upper bound on the average NE-gap which depends on the K-dependent quantity $\delta_K$ without further assumption; this provides a $O(1/(K\delta_K))$ convergence rate as mentioned in the paper. 
I think it should be clearly stated that the $O(1/K)$ convergence rate which is claimed in the abstract and in the contributions under ‘mild assumptions’ (as it is stated) is actually “asymptotic”. Indeed, under assumption 3.2, the bound in Corollary 3.6.1 features an unknown constant (number of iterations) $K'$ that is not explicit in the problem parameters (such a constant is only guaranteed to exist under assumption 3.2). This is due to the fact that the NE gap is only controlled (independently of $\delta_k$) for a large enough number of iterations as it appears in l. 500 in the appendix (p. 17) under the chosen assumption. The results that the paper compares to in Table 1 have stronger guarantees in the sense that they are not asymptotic. This is not clear in the presentation and the comparison to prior work. **2. About Assumption 3.2**: this technical assumption guarantees that $\delta_K$ is uniformly lower bounded away from zero, but it seems hard to interpret or give a meaning to this assumption, although the ‘sub-optimality gap’ $\delta_k$ is a standard quantity in the bandit literature for instance. I am also not sure if this assumption is ‘mild’ as it is formulated. See the ‘Questions’ section for clarification questions regarding this assumption. **3. Discussion of related works**: Some relevant related works are missing in the discussion. **(a)** While Song et al. 2021 [27] is cited, it is not mentioned in the discussion that a $O(1/\epsilon)$ iteration complexity has been achieved in that work for MPGs with a Nash-coordinate ascent algorithm (see Section 5 Algorithm 7; note that sample complexity is given and the $O(1/\epsilon)$ iteration complexity appears in the proof when discarding the $O(1/\epsilon^2)$ sample complexity needed for policy evaluation). However, this algorithm is ‘turn-based’ and requires coordination, and hence is not independent, unlike the present work. **(b)** Fox et al. 
2022 derived an asymptotic convergence result for the independent natural policy gradient algorithm considered in this work that seemed to be later used by Zhang et al. 2022 [33], but this reference does not appear in related works. This same result seems also to be used in Proposition 3.1 of the paper. **(c)** The results shown in this paper seem to have some similarities with the asymptotic convergence analysis provided by Khodadadian et al. 2022 in the single-agent setting. For instance, the analysis in that paper introduces the optimal advantage function gap (see $\Delta$ as defined in Definition 2), a quantity similar to the sub-optimality gap $\delta_k$ in the present paper (up to the multi-agent setting). However, I would like to point out that this is just for the purpose of comparison, and the present work has to overcome many difficulties related to the game-theoretic setting and the multi-agent nature of the problem that make this work very different from Khodadadian et al. 2022. The abstract states that the result improves over “ $O(1/\epsilon)$, that is achievable for the single-agent case”. Actually, Khodadadian et al. 2022 provide an asymptotic geometric convergence rate. Other recent related works even prove a global linear rate with increasing step sizes for the natural policy gradient algorithm (see e.g. Xiao 2022, Section 4.2). Fox et al. Independent natural policy gradient always converges in Markov potential games, AISTATS 2022. Khodadadian et al. On linear and super-linear convergence of Natural Policy Gradient algorithm, Systems and Control Letters, 2022. Xiao. On the convergence rates of policy gradient methods. JMLR 2022. **(d)** minor remark: you might want to give a reference to [Monderer and Shapley 1996, Potential Games, Games and Economic Behaviour 14, 124-143], which introduced this class of games, in section 3.1 when you mention [12] (l. 128), which is much more recent. **4. 
Regarding the definition of Markov potential games** (Definition 2.1), for a fair comparison in Table 1, it might be worth mentioning that this definition differs from the one considered in [8,16], although it matches the definition used in [33, 34]. Indeed, the potential function in Definition 2.1 is supposed to have a discounted cumulative structure, whereas such a structure is not available in the more general definition considered in [8,16]. As a matter of fact, the analysis becomes more challenging in [8] for instance, since showing the potential function improvement is then more involved in that case; another decomposition different from the decomposition in Lemma B.2 l. 440 is then used to guarantee policy improvement (see Lemma in [8]). Also, the dependence with respect to some parameters such as $(1-\gamma)$ and the state-action space sizes is usually improved with this additional structure. **5. Originality of the analysis:** While Lemma 3.2 and its use is indeed new, the potential function improvement lemmas (Lemma 3.1, Lemma 3.5) and their proofs in appendix follow prior techniques used in [4, 33]. I suggest that the authors mention this somewhere in the main part or in the appendix and emphasize the novelty of the analysis (Lemma 3.2). For instance, Lemma A.1 was proved in [4], the proof of Lemma A.2 is almost identical to the proof in [4], while the proof of Lemma B.2 is very similar to the proofs in [33]. **6. Clarity**: Overall, writing can be substantially improved in my opinion. A few minor details below: — l. 108: what do you mean by ‘multiple stationary points for the problem’? Stationary points of the potential function? — l. 134-135: not very clear, see the ‘Questions’ section below. — l. 136-137: ‘for any two sets of policies’. I guess you mean any two joint policies (in the product of the individual simplices) when you say sets. — l. 158: ‘They are related by the following lemma’, the constants you just defined in l. 
157 or the quantities defined a few lines above in l. 153? — notation $f(\infty)$ is used in Lemma 3.2 and does not seem to be defined before, although it is quite obvious this corresponds to $\lim_{\alpha \to \infty} f(\alpha)$ as used in l. 153. — Lemma A.4 in the appendix: the statement and the proof are not very precise. What is $\mathcal{R}_i$ (it seems only defined at the end of the proof of this lemma, or we can guess it from the title of the lemma)? I guess this is the set of reward functions which is known to have a particular structure for MPGs as you write it. $\mathcal{R}_i$ is a linear space in which ambient space? Please make the proof precise even if we can guess the idea. Also, where is this result used? — Lemma C.1: if rewards are vectors, please specify somewhere in the notation that inequalities hold for all entries of the vectors. **Typos (main part and Appendix)**: 
l. 91: $V_i(s)$; do you mean $V_i^{\pi}(s)$? l. 476-477 (proof of Lemma B.2 in appendix): $\bar{A}_{f_i}^{\pi^k}$, what is $f_i$? I guess you mean $h_i$. l. 479 to l. 480: $\pi_i^{k+1}(\cdot|s)$, $s$ is missing in the first term of the last equation and in the KL divergences. l. 503 (proof of corollary 3.6.1): is there a missing $\phi_{\max}$ in front of $K'$ in the first inequality of the page (given l. 501)? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The assumption of ‘isolated stationary policies’ seems to be needed to guarantee that $c>0$ as stated in Theorem 3.6 (as it is also stated and used in [33]). However, this assumption is not stated in the theorem. How does the theorem guarantee that $c> 0$? 2. Please clarify the definition of ‘isolated stationary policies’. What do we mean by ‘stationary’ policies in this context? Although such a terminology is used in [33], it is not clear to the reader what this means, especially since ‘stationary’ also has another meaning for policies (time-independent). The reference Fox et al. 2022 above may help with this. 3. About assumption 3.2, isn’t it possible to use the assumption that the ‘stationary policies are isolated’ instead to obtain corollary 3.6.1? According to the comments l 248-250 in the paper (a short proof in the appendix may be useful then to state this implication properly), this assumption would guarantee that Assumption 3.2 holds. Moreover, the assumption that the ‘stationary policies are isolated’ seems to be already needed in Theorem 3.6 to guarantee that $c >0$. Would this mean that only the assumption that ‘stationary policies are isolated’ would then be sufficient for Theorem 3.6 to hold, without needing Assumption 3.2? 4. Could you please give the precise mathematical definition of $\pi_i^{*k}$? It seems to be only defined once with words in l. 134-135. Why should it be unique (‘the optimal solution …’)? 
Do you mean it is an optimal policy in the sense that it maximizes the averaged advantage function with the other agents' policies fixed to $\pi_{-i}^k$ (or the averaged reward function in the potential games setting)? This is also important to clarify the limit statement in l. 153. 5. l. 153: why do we have that $f(\eta) \geq 0$? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper mentions the MPG setting as a limitation in the conclusion. **Extension to the stochastic setting:** While the analysis in the deterministic setting is an important step towards understanding the more practical stochastic setting where exact advantage functions are not available and can only be estimated via sampled trajectories, this analysis does not seem to be easily extendable to the aforementioned stochastic setting. This seems to be related to the fact that showing that the constant c is positive in the stochastic setting seems to be much harder, if not hopeless, even in the single-agent setting (see e.g. Mei et al. 2021). However, this is a limitation that also applies to prior work in MPGs analyzing independent natural policy gradient, such as Zhang et al. 2022 [33]. A comment on this or an additional remark in the paper would be welcome. For instance, [8] analyzes the sample complexity in the stochastic setting, but their algorithm does not cover the case where the regularization in the policy mirror-descent-like update rule is non-Euclidean; KL regularization (which leads to natural policy gradient) is not covered. 


Mei et al. ‘Understanding the effect of stochasticity in policy optimization’ (NeurIPS 2021) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed assessment and constructive feedback on the paper, with both main paper and appendix. We are encouraged by the fact that the reviewer finds our paper "well-organized" with "new and interesting analysis". Our submission has been revised to include previously missing relevant works, fix typos and rephrase potential obscurities raised by the reviewer. We also address the reviewer's concerns and questions below and sincerely hope that the reviewer would consider increasing the score. > About $\delta_K$ and rate... convergence is actually 'asymptotic'. A more detailed discussion on $\delta^k, \delta_K, \delta^*$ is provided **in Authors' Response to All**. We will clarify the "asymptotic" property of results in abstract, contributions, and Table 1 in our final version. > Asm. 3.2 guarantees... hard to interpret although... standard quantity in the bandit literature.... The reviewer is correct that it is hard to guarantee a lower bound on $\delta^k, \forall k$. Therefore, we proposed the theoretical relaxation with Cor. 3.6.1 based on $\delta^*$. **Please see the details in Authors' Response to All.** In the bandit literature, the suboptimality gap indicates the structure of the environment. In the multi-agent setting, other agents can also be seen as part of environment w.r.t. a specific agent. Based on this intuition, we introduce this concept based on the marginalized reward and use Lem. 3.2 to capture the impact of $\delta^k$. > Related works We sincerely thank the reviewer for pointing out these previous works and we will include them in the final version. > Definition of MPGs Our formulation of MPG was originally adapted from that of [33, 34, R1, R2]. We consider this formulation to be well-known and somewhat standard in the literature. We will mention this difference in formulation with additional statements alongside Tab. 1 and Sec. 2. 
> Originality of the analysis We thank the reviewer for acknowledgement of the novelty of our analysis in Lem. 3.2, B.3, and C.1. The proofs of Lem. A.1 and A.2 are similar to [4], but we arrive at different conclusions due to new lemmas (Lem. 3.2 and B.3) as bridges. The analysis of Lem. B.2 is related to [33] but with some key differences. Firstly, we provide a new Lem. B.1 to better explain the final equality in l. 476, which doesn't appear in [33]. Secondly, we use Young's inequality in l. 479 to save a $\sqrt{n}$ factor compared with [33]. Nevertheless, we will make sure to mention these works while providing revised proofs of these lemmas. > Clarity - We use the definition of 'stationary point' from mathematics and optimization to denote a point with zero gradients. Incidentally, in our paper's context, the stationary point denotes a set of policies with zero policy gradients. - Yes, we used the 'set of policies' to denote a joint policy of product simplices. We will make sure to clarify at our first use of this specific term. - We meant the quantities defined in l. 153 and we will clarify it in the final version. - Yes, the definition of $f(\infty)$ will be included in our updated draft. - Yes, $\mathcal{R}\_i$ is the set of reward functions with a particular structure. The ambient space is $\mathbb{R}^{\prod\_{j=1}^n |\mathcal{A}\_j|}$. We wanted to provide a discussion over the structure of PGs in Lem. A.4 without further complicating other lemmas or theorems. - Here our intention was to show for a vector $\mathbf{r} = [r_1, ..., r_n]$, where $r_1 > r_2 ... > r_n$. We will make sure to clarify this in our statements. > Typos We thank the reviewer for the detailed evaluation of the paper. We will fix these typos in our final version. > 'isolated stationary policies’... as stated in Thm. 3.6 (and [33])... not stated in the theorem. How does the theorem guarantee that $c>0$? Yes, this assumption is required. We will add it to the final version. 
> What do we mean by ‘stationary’ policies in this context? Although such a terminology is used in [33], it is not clear... has also another meaning for policies (time-independent). A stationary policy is what our paper describes as a 'stationary point', a set of policies that has zero policy gradients. In this sense, 'isolated stationary policy' means no other stationary points exist in an open neighborhood of any stationary point. We will add this clarification in the revision. > ... isn’t it possible to use the assumption to obtain corollary 3.6.1? ... would guarantee that Assumption 3.2 hold. The mild Asm. 3.2 assumes the limit $\delta^*$ is larger than 0, which is required by Cor. 3.6.1. It is not possible to only use "stationary policies are isolated" to obtain Cor. 3.6.1. A counterexample would be a 2-by-2 matrix game with $r_{11} = 1, r_{12} = 1, r_{21} = 2, r_{22} = 1$, where one NE would be $\pi_1 = (0, 1), \pi_2 = (1, 0)$. In this example, we have an isolated stationary policy with $\delta^* = 0$. > Precise definition of $\pi_i^{*k}$? The exact definition is that $\pi_i^{*k} \in \arg\max_{\pi_i} V_i^{\pi_i, \pi_{-i}^k}(\rho)$. > l. 153: why $f(\eta)\geq 0$? NPG updates $\pi_i^{k+1}$ as follows: $\pi_i^{k+1}(a_i|s) = \arg\max_{\pi_i} \eta \langle \bar{r}_i^k(\cdot), \pi_i(\cdot) \rangle - KL(\pi_i || \pi_i^k)$. Therefore, $f(\eta) = \sum_i \langle \bar{r}_i^k(\cdot), \pi_i^{k+1}(\cdot) - \pi_i^{k}(\cdot) \rangle \geq \sum_i KL(\pi_i^{k+1} || \pi_i^k) / \eta \geq 0$. 
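The KL-regularized argmax in the NPG update above has the closed form $\pi_i^{k+1}(a) \propto \pi_i^k(a)\exp(\eta \bar{r}_i^k(a))$ (a multiplicative-weights step). A quick numeric check, ours and purely illustrative rather than part of the paper, confirms that each agent's summand of $f(\eta)$ is non-negative across random instances:

```python
import math
import random

def npg_step(pi, r, eta):
    """One NPG step on a probability simplex: the closed form of the
    KL-regularized argmax, pi'(a) proportional to pi(a) * exp(eta * r(a))."""
    w = [p * math.exp(eta * ri) for p, ri in zip(pi, r)]
    z = sum(w)
    return [x / z for x in w]

def f_term(pi, r, eta):
    """One agent's summand of f(eta): <r, pi^{k+1} - pi^k>."""
    nxt = npg_step(pi, r, eta)
    return sum(ri * (q - p) for ri, q, p in zip(r, nxt, pi))

# Random interior policies, bounded rewards, and a range of temperatures;
# by the optimality of pi^{k+1}, each term should be >= KL(pi^{k+1}||pi^k)/eta >= 0.
random.seed(1)
checks = []
for _ in range(1000):
    n = random.randint(2, 6)
    raw = [random.random() + 1e-3 for _ in range(n)]
    pi = [x / sum(raw) for x in raw]
    r = [random.uniform(-1.0, 1.0) for _ in range(n)]
    eta = random.uniform(0.01, 10.0)
    checks.append(f_term(pi, r, eta) >= -1e-12)
```

The small `-1e-12` tolerance only absorbs floating-point rounding; the inequality itself follows from comparing the regularized objective at $\pi^{k+1}$ and at $\pi^k$, exactly as in the derivation above.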
*For brevity, please see response to reviewers Qt2L and tti4 for referenced works.* --- Rebuttal Comment 1.1: Title: post rebuttal Comment: I confirm that I have read the authors’ rebuttal and I thank them for their response. I still think that assumption 3.2 is not a verifiable assumption under its current form. While the dependence on the suboptimality gap is natural as it appears in prior work (as I also mentioned in my original review and the authors further reconfirmed it), the suboptimality gap is usually **proved** to be a positive quantity (see e.g. Lemma 3 in Khodadadian et al. 2022). Nevertheless, I acknowledge that this paper addresses a more challenging multi-agent setting. I raise my score to 6 following the authors’ rebuttal. Remaining limitations I see include Assumption 3.2, the asymptotic nature of the rate, and the difficulty of extending the results to the stochastic setting given the dependence on the constant $c$, whose positivity is difficult to guarantee in this setting (beyond assuming oracle access). I also encourage the authors to improve the writing, presentation, and discussion of related works along the lines of the rebuttal. Khodadadian et al. 2022, On linear and super-linear convergence of natural policy gradient algorithm, Systems and Control Letters. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for carefully reading our rebuttal and raising the score. As observed by the reviewer, due to the challenging multi-agent setting, $\delta^k > 0$ is not necessarily true for all iterations, unlike Lemma 3 in [R1]. For a few iterations $k$, it is possible to have $\delta^k = 0$. Therefore, we only define $\delta^* = \lim_{k \to \infty} \delta^k$ and use $\delta^*$ in the upper bound (cf. Table 1 and Corollary 3.6.1). We agree that the assumption about $\delta^*$ cannot be proved directly. Instead, it is an additional assumption on the structure of (Markov) potential games. 
We agree that the introduction of $c$ makes the analysis of stochastic setting difficult. This problem also exists in the previous works [33] and we will leave it as our future work. Additionally, we acknowledge the limitations and potential improvements to writing, presentation, and discussion of related works. All of them will be addressed in the final version. Moreover, our results can be extended to a more general form of potential function as in [8]. By using Lemmas 2 and 21 in [8], we can replace Lemma 3.5 in our paper with the following lemma. **Lemma B.4** Given policy $\pi^k$ and marginalized advantage function $\bar{A}_i^{\pi^k}(s, a_i)$, for $\pi^{k+1}$ generated using independent NPG updates, we have the following inequality, $$\Phi^{\pi^{k+1}}(\rho) - \Phi^{\pi^k}(\rho) \geq \left(\frac{1}{1-\gamma} - \frac{2M^3 \max_i |\mathcal{A}\_i| n \eta}{(1-\gamma)^3}\right) \sum_s d\_{\rho}^{\pi\_i^{k+1}, \pi\_{-i}^k}(s) \sum\_{i=1}^n \langle \pi\_i^{k+1}(\cdot|s), \bar{A}\_i^{\pi^k}(s, \cdot)\rangle. $$ Using our new Lemma B.3 as a bridge, we can get a similar convergence guarantee without the explicit definition of $\phi$. Some additional terms about $M$ (distribution mismatch coefficient) and $|\mathcal{A}\_i|$ (size of action space) will be introduced, but the $O(1/\epsilon)$ order will be kept. [R1] Khodadadian et al. On linear and super-linear convergence of natural policy gradient algorithm, Systems and Control Letters, 2022. [33] Runyu Zhang, Jincheng Mei, Bo Dai, Dale Schuurmans, and Na Li. On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35:1923–1935, 2022. [8] Dongsheng Ding, Chen-Yu Wei, Kaiqing Zhang, and Mihailo Jovanovic. Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. In International Conference on Machine Learning, pages 5166–5220. PMLR, 2022.
Summary: The paper provides a new analysis for (Markov) potential games with a $O(1/\epsilon)$ convergence rate. The new results are problem-dependent and may be a tighter guarantee for certain classes of potential games. Asymptotic guarantees are also provided to elaborate on the problem-dependent nature of the results. Strengths: The results it presents showcase an improvement over previous results on (Markov) potential games. The new rates are now unaffected by the number of actions and are at the order of $O(1/\epsilon)$. The problem-dependent nature of the results may provide a tighter guarantee on a certain class of potential games and the new analysis methods may lead to future works. The results are extensively discussed and empirical results are provided to corroborate the theoretical results. Weaknesses: The new results do not seem to be directly comparable to the previous ones due to the use of a suboptimality gap. I encourage the authors to explicitly state this early in the paper to avoid confusion (e.g. in Table 1). I also encourage the authors to complement the work with more empirical results. For example, it would be interesting to see how the algorithm performs against previous ones when the suboptimality gap is very small. It may also be helpful to state that when the suboptimality gap is very small, the results can degenerate into previous results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 1, the results for PG + softmax [33] also include a c in the denominator. Is the c defined the same as in this paper? 2. It seems that in [33], the c in the denominator can be alleviated by using log barrier regularization. Will log barrier regularization have the same effect on the results presented in this paper? 3. In Definition 2.1, the potential function is assumed to take a specific form with $\phi$. However, this form does not seem to be needed in previous analyses. 
How important is this specific form to the analysis and can it be lifted? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, it is discussed in the conclusion part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and suggestions concerning our paper. Please see our response below with respect to the specific comments and we sincerely hope that the reviewer would consider increasing the score. **Q1.** "The new results do not seem to be directly comparable to the previous ones due to the use of a suboptimality gap. I encourage the authors to explicitly state this early in the paper to avoid confusion." **Response:** We use Table 1 to provide a clear summary of our result and existing works, which performs the same function as Table 1 in [33]. $\delta^*$ was included in the iteration complexity in our Table 1. We will further clarify the use of the suboptimality gap in our statements in the abstract and contributions. **Q2.** "I also encourage the authors to complement the work with more empirical results." **Response:** We have added a purposefully constructed example to show the impact of $\delta^*$ on the algorithm in practice. We consider an example of a 2-by-2 matrix game with the reward matrix $r = \begin{bmatrix} 1 & 2 \\ 3+\delta^* & 3 \end{bmatrix}$. For the experiments, we have selected various values of $\delta^*$ ranging from $10^{-3}$ to 10. We run the same algorithm with the same initial policy for all experiments, and plot both the NE-gap and the L1 accuracy (L1 distance between the current-iteration policies and Nash policies) of the algorithm. It can be seen from the experiments that $\delta^*$ indeed plays an important role in the convergence of the algorithm. **Please refer to the newly attached PDF in "Authors' Response to All" for details.** **Q3.** "It may also be helpful to state when the suboptimality gap is very small, the results can degenerate into previous results." **Response:** In fact, the iteration complexity of independent NPG algorithms is the smaller of the results in this paper and those in [33]. 
The specific minimum value depends on $\delta^*$, which depends on the structure of (Markov) potential games. **Q4.** "Is the c defined as the same as this paper?" **Response:** Yes, the definition of $c$ is the same. **Q5.** "Will log barrier regularization have the same effect on the results presented in this paper?" **Response:** Since the log-barrier regularization repels the trajectory from regions with small policy values, we can derive a lower bound for policy value as Lemma 24 in [33]. While it is true that the log-barrier regularization can remove the dependence on $1/c$, the introduction of log-barrier parameter $\lambda$ makes the convergence rate slower by $O(1/\sqrt{K})$ since the upper bound of convergence rate has the form $\frac{c_1}{\lambda K} + c_2 \lambda$. **Q6.** "In Definition 2.1, the potential function is assumed to take a specific form with $\phi$. However, this form does not seem to be needed in previous analyses. How important is this specific form to the analysis and can it be lifted?" **Response:** Our formulation of MPG was originally adapted from that of [33, 34, R1, R2]. The definition of $\phi$ provides an additional structure to the problem, which changes the way of analysis and may lead to better convergence rates. However, it should be emphasized that even under this formulation, the best-known convergence rate is $O(1/\sqrt{K})$ [33, 34]. [33] Runyu Zhang, Jincheng Mei, Bo Dai, Dale Schuurmans, and Na Li. On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35:1923–1935, 2022. [34] Runyu Zhang, Zhaolin Ren, and Na Li. Gradient play in multi-agent markov stochastic games: Stationary points and convergence. arXiv preprint arXiv:2106.00198, 2021. [R1] Macua, Sergio Valcarcel, Javier Zazo, and Santiago Zazo. "Learning Parametric Closed-Loop Policies for Markov Potential Games." International Conference on Learning Representations. 2018. 
[R2] Zazo, Santiago, et al. "Dynamic potential games with constraints: Fundamentals and applications in communications." IEEE Transactions on Signal Processing 64.14 (2016): 3806-3821. --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. I wonder if the results can be extended to the case with the more general form of potential function? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal carefully. Yes, the results can be extended to a more general form of potential function as in [8]. By using Lemmas 2 and 21 in [8], we can replace Lemma 3.5 in our paper with the following lemma. **Lemma B.4** Given policy $\pi^k$ and marginalized advantage function $\bar{A}_i^{\pi^k}(s, a_i)$, for $\pi^{k+1}$ generated using independent NPG updates, we have the following inequality, $$\Phi^{\pi^{k+1}}(\rho) - \Phi^{\pi^k}(\rho) \geq \left(\frac{1}{1-\gamma} - \frac{2M^3 \max_i |\mathcal{A}\_i| n \eta}{(1-\gamma)^3}\right) \sum_s d\_{\rho}^{\pi\_i^{k+1}, \pi\_{-i}^k}(s) \sum\_{i=1}^n \langle \pi\_i^{k+1}(\cdot|s), \bar{A}\_i^{\pi^k}(s, \cdot)\rangle. $$ Using our new Lemma B.3 as a bridge, we can get a similar convergence guarantee without the explicit definition of $\phi$. Some additional terms about $M$ (distribution mismatch coefficient) and $|\mathcal{A}\_i|$ (size of action space) will be introduced, but the $O(1/\epsilon)$ order will be kept. We thank the reviewer for the constructive suggestions. We will provide additional discussion and remarks on the structure of MPG in our revision. The detailed proof of this new Lemma will also be added to Appendix.
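The 2-by-2 matrix-game experiment described in the rebuttal above can be sketched in code: under softmax parameterization, independent NPG reduces to a multiplicative-weights update on each agent's mixed strategy. This is our own illustrative sketch (an identical-interest game is assumed; the step size `eta`, iteration count, and variable names are our assumptions, not the authors' exact setup):

```python
import numpy as np

def independent_npg(r, eta=0.1, iters=500):
    """Independent NPG (multiplicative weights) in a 2x2 identical-interest game."""
    pi1 = np.full(2, 0.5)  # row player's mixed strategy
    pi2 = np.full(2, 0.5)  # column player's mixed strategy
    for _ in range(iters):
        rbar1 = r @ pi2      # marginalized reward seen by the row player
        rbar2 = r.T @ pi1    # marginalized reward seen by the column player
        pi1 = pi1 * np.exp(eta * rbar1); pi1 /= pi1.sum()
        pi2 = pi2 * np.exp(eta * rbar2); pi2 /= pi2.sum()
    return pi1, pi2

delta_star = 1.0
r = np.array([[1.0, 2.0], [3.0 + delta_star, 3.0]])
pi1, pi2 = independent_npg(r)
# With delta* > 0, play concentrates on the entry with reward 3 + delta*.
```

Rerunning the sketch with smaller values of `delta_star` illustrates the slowdown the rebuttal reports: the column player's marginalized reward gap shrinks with $\delta^*$, so convergence to the Nash policies takes longer.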
Summary: The article considers what the authors describe as “natural policy gradient” learning in normal form and Markov potential games. This is the discrete-time algorithm in which individual players' mixed strategies are multiplied by a softmax response to the opponent mixed strategies then normalised. Convergence rates are derived in both normal form and the Markovian potential games. Strengths: The article is nicely written, and the claimed results are supported by the theory. The convergence rate results are nice, given the general difficulty to provide convergence rates in anything multiagent, and especially in Markovian games - of course restricting to situations in which there is a potential function for the full Markovian game makes things much less impossible, but it is still a hard problem. The paper is nicely self-contained, and a real pleasure to read. Weaknesses: I have some doubts over the claimed results which I would like to see clarified. The authors claim that Thm. 3 implies a 1/eps convergence rate. I don’t buy it, I’m afraid. c and delta_K are not controlled. They could be arbitrarily small, at least without further work. I suspect c is okay, although a sudden switch in best response could easily result in a very small pi_i^k(br_i(pi_{-i}^k)) coming into c late in the process. And since delta is the optimality gap when playing a mixed strategy, I find it very difficult to see how to constrain it effectively. (I think line 158 tells us that c is the smallest ever pi_i^k(br(pi_{-i}^k)), and delta_K is the smallest optimality gap that occurs up to time K – if I have misinterpreted this then my objections may dissipate!) The synchronous form of "learning", and the tight construction of Markovian potential games with average reward, mean the step up to Markovian settings is much less than in a less restrictive setting. 
A more philosophical point is that the article assumes players can receive oracle information about the long-term payoff of any action. While it makes for a nice compact paper, I think for NeurIPS the authors need to at least posit some suggestions for where learners might be able to access the advantage functions that are required to implement the method. Furthermore, the dynamic is well known as the multiplicative weights algorithm, and I would expect the authors to compare their results with those presented under the multiplicative weights description. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. In (5), the function f has dependence only on alpha. However there is also important dependence on k, and on pi_{-i}^k. This may seem like pedantry, but I think it’s important (see below). 2. In lines 154/155 the definition of a^k_{i_q} is clumsy, due to possible multiplicities in the argmax for a^k_{i_p}. I think that a^k_{i_q} is only needed to define the optimality gap? If this is the case, why not just define delta^k directly as max_a r^i(a) – max_{a\notin \argmax} r(a), or something like that? The current formulation taking a_i_q to be the max that is not a_i_p is incorrect, is noted to be incorrect in the text, but is used anyway. 3. In (6) we begin to get bitten by the lack of care in defining what the functions depend on. In particular, since the r_i^k functions depend on mixed strategies pi_{-i}^k, so does the delta term. And so delta_k may well be extremely small. Have I missed something here? 4. On line 159, we see that c_K and delta_K are the smallest such quantities that are observed in the first K iterations of the algorithm, and indeed that c is the minimum c observed in the limit. The notation means that this path-dependency is somewhat obscured. The authors try to argue, later, that these quantities are not small. But I do not think the argument has been made strongly enough. 
Furthermore, it seems strange to take a limit of c_K values, but not take a limit of delta_K values in the theory that follows? This is clearly very relevant to my point made in "weaknesses". 5. First inequality in the proof of Theorem 3.3, I think should be an equality? 6. Line 259, I think we cannot claim that |delta^k-delta*| is small, since we are only assuming a lim inf? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: No discussion of limitations is presented. I don't feel such a discussion would add much to the paper, but the review form asks the question! Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and support of our paper as well as the valuable suggestions. We are encouraged by the fact that the reviewer finds our paper "nicely written", with "nice convergence rate results", and that it is "self-contained". Please see our response below with respect to the specific comments. **Q1.** "$c$ and $\delta_K$ are not controlled. They could be arbitrarily small, at least without further work." **Response:** We understand the concern on $c$ and $\delta_K$ raised by the reviewer. **We refer to the above "Authors' Response to All" for a more detailed explanation.** **Q2.** "... the step up to Markovian settings is much less than in a less restrictive setting." **Response:** We agree that the Markov potential game (MPG) is a restrictive setting compared to the general multi-agent RL (MARL) setting. Considering the difficulty of multi-agent settings, we believe that the convergence analysis of independent NPG in MPG will be an important step in solving the general MARL problems. **Q3.** "A more philosophical point is that the article assumes players can receive oracle information. ... where learners might be able to access the advantage functions that are required to implement the method." **Response:** We follow [33] and only consider the oracle setting in our analysis. In general, an oracle can be estimated by Monte Carlo or temporal difference methods for the stochastic setting and can be analyzed similarly to [1]. However, as mentioned by Reviewer XDXN, it is much harder to handle $c$ in the stochastic setting. We will add the above comment in our final version and leave the related analysis as future work. **Q4.** "... and I would expect the authors to compare their results with those presented under the multiplicative weights description." **Response:** As pointed out by [1] (after Lemma 15), NPG with softmax parameterization is "identical to the classical multiplicative weights updates". 
**Q5.** "function f ... also important dependence on k, and on $\pi_{-i}^k$." **Response:** At iteration $k$, both $\pi_i^k$ and $\bar{r}_i^k$ (oracle) are known and fixed. So the only decision variable for function $f$ is $\alpha$. For clarity, we will use $f^k(\alpha)$ in the final version. **Q6.** "the definition of $a^k_{i_q}$ is clumsy ... ." **Response:** The notations for action sets have been updated in the supplementary materials, please see **Authors’ Response to All** for the updated notations. For the main paper, it is enough to only define $\delta^k$ directly. The definitions of $a^k_{i_q}$ and $a^k_{i_p}$ were in fact used only for the proof of Lemma 3.2 in the appendix. We included them in our main paper only for consistency of analysis. **Q7.** "... we begin to get bitten by the lack of care in defining what the functions depend on" **Response:** Please refer to the response to Q5. **Q8.** "... Furthermore, it seems strange to take a limit of $c_K$ values, but not take a limit of $\delta_K$ values in the theory that follows" **Response:** $c^k > 0$ and $c > 0$ are guaranteed as shown in [33], but $\delta^k > 0$ is not necessarily true for all iterations. For a few iterations $k$, it is possible to have $\delta^k = 0$. Therefore, we only define $\delta^* = \lim_{k \to \infty} \delta^k$ and use $\delta^*$ in the upper bound (cf. Table 1 and Corollary 3.6.1). **Q9.** "First inequality in the proof of Theorem 3.3, I think should be an equality?" **Response:** It should be an inequality due to the fact that NE-gap takes the maximum over all agents, whereas the function $f$ is defined with respect to the summation. **Q10.** "Line 259, I think we cannot claim that $|\delta^k-\delta^*|$ is small, since we are only assuming a lim inf?" **Response:** The reviewer is correct. In fact, asymptotic convergence for this algorithm is guaranteed in previous works [33], and a limit exists. We will replace $\liminf$ by $\lim$ in the final version. 
[1] Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431–4506, 2021. [33] Runyu Zhang, Jincheng Mei, Bo Dai, Dale Schuurmans, and Na Li. On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35:1923–1935, 2022.
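The distinction raised in Q9 above — the NE-gap takes a maximum over agents, whereas the function $f$ is defined via a summation — can be made concrete in the matrix-game case. A hedged sketch in our own notation (two-player identical-interest game; the function name and structure are ours, not the paper's):

```python
import numpy as np

def ne_gap(r, pi1, pi2):
    """NE-gap: the largest unilateral best-response improvement over current play."""
    rbar1 = r @ pi2                     # marginalized reward for the row player
    rbar2 = r.T @ pi1                   # marginalized reward for the column player
    gap1 = rbar1.max() - pi1 @ rbar1    # row player's possible improvement
    gap2 = rbar2.max() - pi2 @ rbar2    # column player's possible improvement
    return max(gap1, gap2)              # maximum over agents, not a sum
```

Because $f$ sums the per-agent terms while the NE-gap takes their maximum, bounding $f$ gives an upper bound on the NE-gap only up to an inequality, which is the point of the Q9 response.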
Summary: This paper studies the convergence of the Natural Policy Gradient (NPG) method in Markov Potential Games. Under stronger assumptions, the convergence result improves upon previous ones. Strengths: Under the considered setting, NPG is shown to achieve a $1/K$ convergence rate for multi-agent Markov Potential Games, matching the convergence rate in the single-agent setting. Discussion on the parameters that appear in the convergence rate is presented in detail. Weaknesses: The convergence result only makes sense under the assumption that $c>0, \delta>0$. However, such an assumption could be too strong and unnecessary (as Table 1 indicates). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Line 177: "A small value for c generally describes a policy that is stuck at some regions far from an NE, yet the policy gradient is small. It has been shown in [17] that these ill-conditioned problems could take exponential time to solve even in single-agent settings." In my opinion, this justification is not suitable. In single-agent settings, NPG has a nice convergence that does not depend on $1/c$, while [17] shows that PG can take exponentially many iterations to converge. Thus, taking [17] as an excuse, it is okay to introduce certain assumptions (e.g. concentrability) to ensure better convergence, but directly assuming $c>0$ is still too strong (and possibly unnecessary). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback regarding the paper. Please see our responses below with respect to the specific comments. We believe that we have addressed all the concerns raised by the reviewer, and we sincerely hope that the reviewer would consider increasing the score. **Q1.** "$c > 0, \delta > 0$ could be too strong and unnecessary." **Response:** We understand the concern on $c$ and $\delta$ raised by the reviewer. **We refer to the above "Authors' Response to All" for a more detailed explanation.** **Q2.** "... this justification is not suitable. In single-agent settings, NPG has a nice convergence that does not depend on $1/c$, while [17] shows that PG can take exponentially many iterations to converge ... directly assuming $c>0$ is still too strong (and possibly unnecessary)." **Response:** In single-agent RL, the policy gradient depends on a product of the advantage function, the occupancy measure, and the action probability (Eqn. 10 in [1]). Therefore, it is possible for the PG algorithm to make a small update although the advantage function is significant. Single-agent NPG solves this issue by using the Moore-Penrose inverse of the Fisher information matrix to cancel out the occupancy measure as well as the action probability. However, in MARL, the Fisher information matrix does not fully cancel out everything, since its calculation only uses the local policy. Based on this observation, we make a comparison between single-agent PG and multi-agent NPG. Note that we are not the first work to introduce $1/c$ in the analysis of multi-agent independent NPG algorithms. The same assumption is made in [33] with similar accompanying statements saying that "Based on our analysis and numerical results, even for natural gradient play—which is known to enjoy dimension-free convergence in single agent learning we find in the multiagent setting that it can still become stuck in these undesirable regions. 
Such evidence suggests that preconditioning according to the Fisher information matrix is not sufficient to ensure fast convergence in multi-agent learning. " We will rephrase our statements accordingly in the final version. [1] Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431–4506, 2021. [33] Runyu Zhang, Jincheng Mei, Bo Dai, Dale Schuurmans, and Na Li. On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35:1923–1935, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. I now accept the justification of introducing $c, \delta$ in analyzing the complexity rate, and I will raise my rate accordingly. By the way, perhaps it would be better to highlight that the $O(1/K)$ convergence is in some sense asymptotic under such conditions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for carefully reading our rebuttal and raising the score. We will clarify the "asymptotic" property of results in the abstract, contributions, and Table 1 in our final version.
Rebuttal 1: Rebuttal: ## Authors' Response to All (Discussion on $c$ and $\delta^*$): We wholeheartedly thank all the reviewers for their time and their constructive feedback on our paper. The reviewers' comments provided us with great insights on how to increase clarity and reduce potential confusion about our paper. All the reviewers have raised similar concerns regarding our assumptions on $c > 0$ and $\delta^* > 0$ in the paper. Please note that reference numbers and line numbers are based on the supplementary material (MPG Main\&Appendix.pdf). First, for simplicity, we recall the definitions of $c^k, c\_K, c$ and $\delta^k, \delta\_K, \delta^*$ in the potential game setting (Lines 154-157 in “MPG Main\&Appendix.pdf”). For agent $i$ at iteration $k$, define $a\_{i\_p}^k \in \arg\max\_{a\_j \in \mathcal{A}\_i} \bar{r}\_i^k(a\_{j}) =: \mathcal{A}\_{i\_p}^k$ and $a\_{i\_q}^k \in \arg\max\_{a\_j \in \mathcal{A}\_i\backslash \mathcal{A}\_{i\_p}^k} \bar{r}\_i^k(a\_{j})$, where $\mathcal{A}\_{i\_p}^k$ denotes the set of the best possible actions for agent $i$ in the current state and $\bar{r}\_i(a\_i) = \mathbb{E}\_{a\_{-i} \sim \pi\_{-i}}[r\_i(a\_i, a\_{-i})]$. We define $$c^k := \min\_{i\in [n]} \sum\_{a\_j \in \mathcal{A}\_{i\_p}^k} \pi\_i^k(a\_j) \in (0, 1),\quad \delta^k := \min\_{i \in [n]} [\bar{r}\_i^k(a\_{i\_p}^k) - \bar{r}\_i^k(a\_{i\_q}^k)] \in (0, 1). $$ Additionally, we denote $c_K := \min_{ k\in[K]} c^k; c := \inf_{K \to \infty} c_K; \delta_K := \min_{k\in[K]}\delta^k, \delta^* := \lim_{k \to \infty} \delta^k$. - The effect of $c$ was discussed in our submission at l. 175-183. Our exact definition of c is indeed the same as the $c$ defined in [33], with similar motivation and justifications. Combining Lemma 2 in [33] and Proposition 3.1 in this paper, $c^k$ asymptotically converges to 1. Since $c^k > 0$ for any softmax policy parameterization ($\pi_{i}^k(a_i|s) > 0$) and $c^k$ asymptotically converges to 1, we have $c := \inf_k c^k > 0$. 
This can also be observed in Fig. 1a in the original paper. Therefore, we believe our assumption on $c$ is mild and has been used within the same context in prior work. - The introduction of the suboptimality gap $\delta^k$ enables us to draw a crucial connection between the difference in the potential function and the NE-gap using Lemma 3.2. As stated in Section 3.3, the best rate without this assumption is $O({1/\sqrt{K}})$. We would also like to justify our introduction of the suboptimality gap with reference to two separate lines of work. In single-agent RL, Khodadadian et al. [R1] proved asymptotic geometric convergence with the introduction of the "optimal advantage function gap $\Delta^k$". $\Delta^k$ is very similar to our definition of $\delta^k$. Additionally, the notion of suboptimality gap, though different in formulation, is commonly used in the multi-armed bandit literature [R2]. In both works, the introduction of a suboptimality gap greatly benefits the analysis. It also serves a similar purpose in our work. We agree with the reviewers that requiring $\delta^k > 0$ to hold for all iterations $k$ is indeed restrictive. However, we found that occasional $\delta^k \approx 0$ does not affect the global convergence rate. Motivated by this observation, we have proposed a theoretical relaxation with Cor. 3.6.1. Instead of requiring the suboptimality gap to be always non-zero, we only require the limit (existence of a limit is due to the asymptotic convergence guarantee in [33]) to be non-zero (Assumption 3.2), which is far from restrictive. Recalling our definition in Eq. (6), the suboptimality gap is defined as the performance gap between the best and second-best classes of actions. Having a zero suboptimality gap implies that all actions of one agent belong to one class with the exact same expected reward values, which implies a zero NE-gap for the specific agent. 
Moreover, we provide more experiments regarding special scenarios in MPGs in the attached pdf file as requested by Reviewer tti4. We will revise and rephrase the related statements in order to reduce the potential confusion to readers in the final version. Additionally, we will incorporate valuable suggestions concerning other related literature, and correct the typos and notation errors pointed out by various reviewers. [33] Runyu Zhang, Jincheng Mei, Bo Dai, Dale Schuurmans, and Na Li. On the global convergence rates of decentralized softmax gradient play in markov potential games. Advances in Neural Information Processing Systems, 35:1923–1935, 2022. [R1] Khodadadian, Sajad, et al. "On linear and super-linear convergence of Natural Policy Gradient algorithm." Systems \& Control Letters 164 (2022): 105214. [R2] Lattimore, T. and Szepesvári, C., 2020. Bandit algorithms. Cambridge University Press. Pdf: /pdf/6052843446fc33fd0d1984347f1246d27a0538b4.pdf
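The quantities $c^k$ and $\delta^k$ recalled in the response above can be computed directly for a small matrix game. A minimal sketch under our own notation (two-player identical-interest case is assumed; the tolerance used by `np.isclose` to group the best-action class $\mathcal{A}_{i_p}^k$ is an implementation assumption):

```python
import numpy as np

def c_and_delta(r, pi1, pi2):
    """Per-iteration c^k (mass on the best-action class) and delta^k (suboptimality gap)."""
    rbars = [r @ pi2, r.T @ pi1]   # marginalized rewards \bar{r}_i(a_i) for each agent
    cs, deltas = [], []
    for pi, rbar in zip([pi1, pi2], rbars):
        best_class = np.isclose(rbar, rbar.max())   # A_{i_p}: best possible actions
        cs.append(pi[best_class].sum())             # probability mass on best actions
        rest = rbar[~best_class]
        deltas.append(rbar.max() - rest.max() if rest.size else 0.0)
    return min(cs), min(deltas)    # both take a minimum over agents
```

Tracking these two numbers along the iterates of the algorithm is exactly how the curves discussed around Fig. 1a could be reproduced on a toy game: $c^k$ should approach 1 as the policies concentrate, while $\delta^k$ should approach its limit $\delta^*$.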
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Learning Motion Refinement for Unsupervised Face Animation
Accept (poster)
Summary: This paper analyzes the limitations of existing face animation methods in capturing the finer facial motions, and hence designs a non-prior-based motion refinement approach to achieve finer face motion transfer in local areas. A correlation volume between the source and driving images is constructed as non-prior motion evidence, and a refinement module is introduced to generate the fine facial motions iteratively. Extensive experiments show the effectiveness of the proposed method. Strengths: 1. The idea of involving a non-prior based motion refinement module is effective in capturing the fine motions in local areas. 2. The manuscript is well written and clearly states the main idea and contribution in face animation. 3. This paper performs extensive experiments, and the results of the same-identity video reconstruction task outperform many previous methods. Weaknesses: 1. This paper concentrates on the motion issue, but the video results of this paper are not impressive enough. No obvious improvements can be seen given the video results on the self reconstruction. 2. This paper does not show the quantitative results in terms of some usual metrics, i.e. CSIM and FID, on the cross-identity reenactment task. I wonder about the performance of this work in preserving identity. 3. The introduction should be improved, as it only concludes limited contributions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors give more explanations about the meanings of flow in the figures? I cannot see the changes after the update. There are obvious improvements on the occlusion map; does the improvement in motions mainly come from the enhanced occlusion map? 2. Since Face vid2vid [1] shows a promising performance on the cross-identity reenactment task, it is suggested to give a comparison with Face vid2vid. It is good to provide the video comparisons. [1] Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu. 
One-shot free-view neural talking-head synthesis for video conferencing. In CVPR, 2021. 1. It is better to provide computation cost evaluation at inference, i.e. FLOPs, memory, and FPS, compared to other SOTA methods. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper is effective in refining the motions to obtain better results, but is still limited by the quality of the learned keypoints, as the limitations stated in the paper, which I think is more urgent to solve for unsupervised face animation. And this paper did not give comprehensive evaluations on the performance in identity preservation, which is important for face animation in practical applications. I'm not sure whether it is qualified for the conference. If the authors could give a reasonable response for the problems stated in **Weaknesses** and **Questions**, I would like to consider improving my rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer zuqN ### Q1: No obvious improvements can be seen given the video results on the self reconstruction. On one hand, we will rearrange the video results to form a nine-square presentation for better comparison. On the other hand, we suggest zooming in on Figure 4 in the main paper for detailed comparison, since our method produces finer motions in local areas such as lips and eyes. And we have also uploaded more video results on the CelebV-HQ dataset. ### Q2: This paper does not show the quantitative results in terms of some usual metrics, i.e. CSIM and FID, on the cross-identity reenactment task. Thanks for the good suggestion. It is indeed our carelessness. And we have compared this metric in the **common response**. While our method performs slightly better on the CSIM metric, it produces higher-quality motion transfer, as suggested by the better ARD and AUH metrics. ### Q3: The introduction should be improved, as it only concludes limited contributions. Thanks for the suggestion. We will revise it in the later version. But are there any more specific suggestions? The phrase "it only concludes limited contributions" is a little confusing to us. ### Q4: Could the authors give more explanations about the meanings of flow in the figures? Does the improvement in motions mainly come from the enhanced occlusion map? * Sorry for the confusion. The flows in the figure are actually estimated from a source and driving image pair, and they come from two different iterations. For better understanding, we provide the refined motion flow across all iterations in the video (the last few seconds). * The improvement in motions mainly comes from the enhanced motion, but not the occlusion map. As indicated by the component ablation studies in Table A2 of the supplementary material, removing the flow update leads to a considerable performance decrease. 
However, the updated occlusion map is still helpful for understanding the flow update process, since it reflects the non-warping area, which is the inverse of the motion flow that controls the warping area. ### Q5: Comparison with Face Vid2Vid Thanks for the suggestion. We quantitatively compare our method with Face vid2vid on the Voxceleb dataset. As seen in the table, our method outperforms it by a large margin. We also provided the link to the video comparison, which can be accessed by the Area Chair; please check it for details. It can be seen that, similar to other methods, Face vid2vid also fails to capture finer face motions, and our method generally produces better animation.

| | L1 | PSNR | LPIPS | AKD | AED |
|:---------------------:|:------:|:-----:|:-----:|:-----:|:-----:|
| FaceVid2Vid | 0.0444 | 23.40 | 0.175 | 1.405 | 0.138 |
| ours | **0.0353** | **25.51** | **0.152** | **1.176** | **0.107** |

### Q6: It is better to provide a computation cost evaluation at inference, i.e. FLOPs, memory, and FPS, compared to other SOTA methods. Thanks for the good suggestion. We have analyzed this in the **common response**. --- Rebuttal Comment 1.1: Comment: Thank the authors for providing such a comprehensive response to all my questions. For limited contributions, this paper only focuses on refining the motion representation through a non-prior-based motion refinement approach, which also brings increased inference cost. It is more like an incremental method based on current models. The experiment results show the improvements in terms of motion representation over the priors, so I raise the rating to 5. But I am still a bit conflicted about whether it is qualified for the conference. For Face vid2vid, it is known that Face vid2vid performs well at identity preservation but fails in motion modeling. From the provided videos, we can also observe this point. But I am confused why the generated quality is so bad.
--- Reply to Comment 1.1.1: Title: Post discussion Comment: Thank you for improving the rating! We address your remaining concerns in the following. ### Discussion 1: For limited contributions, this paper only focuses on refining the motion representation through a non-prior-based motion refinement approach, which also brings increased inference costs. It is more like an incremental method based on current models. * For the contribution, we would like to emphasize that existing face animation methods generally ignore the importance of non-prior based motion modeling. Since the existing face animation framework generally consists of the two stages of **motion estimation** and **image generation**, improving the motion representation is meaningful and necessary for this task. We proposed an enhanced motion representation, the non-prior based motion refinement, which is proven to be effective through extensive experiments. Moreover, due to the uniqueness of the non-prior based motion modeling, our method is generalizable and can improve a number of existing prior-based animation methods such as FOMM [1], TPSM [2], MoTrans [4], etc. This generalization ability also reflects the importance of modeling non-prior motion for face animation, indicating that our method is not incremental but a novel approach that is complementary to existing methods. * We acknowledge that the non-prior based motion refinement brings some increased inference cost. But we still achieve a good trade-off between efficiency and effectiveness: as shown in the common response, our method is only 3.26 FPS slower than FNeVR [5] and 1.39 FPS slower than TPSM, while faster than DaGAN [3]. We should also note that currently in face animation, given a reasonable inference speed, better performance may be preferred over faster speed.
For example, the suggested method Face Vid2Vid is actually the slowest one among existing methods, due to the heavy use of 3D convolutions for 3D landmark estimation and motion estimation; it only achieves 5.9 inference FPS and 593.5 GFLOPs under the same testing on a single NVIDIA 3090. [1] First Order Motion Model for Image Animation, NeurIPS 2019. [2] Thin-plate Spline Motion Model for Image Animation, CVPR 2022. [3] Depth-aware Generative Adversarial Network for Talking Head Video Generation, CVPR 2022. [4] Motion Transformer for Unsupervised Image Animation, ECCV 2022. [5] FNeVR: Neural Volume Rendering for Face Animation, NeurIPS 2022. ### Discussion 2: Better identity preservation of Face Vid2Vid. Why is the generated quality so bad? * Identity preservation is indeed a limitation of our method, and it also exists in existing unsupervised animation methods. It should also be noted that Face Vid2Vid used 3D head-pose supervision during training, which may help their method better disentangle the global head motion and the local expression motion, thus achieving better identity preservation. We will include this as a limitation in our later version, and in future work, introducing 3D supervision may help us alleviate this problem. * On one hand, as you suggested, Face Vid2Vid does not perform well in motion transfer, though it preserves identity better. And without loss of generality, we have shown representative videos which may slightly magnify its inability in finer motion modeling. On the other hand, since the authors of Face Vid2Vid did not release their source code, we reproduced it based on the [unofficial code](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis) following DaGAN and FNeVR; though we have checked that the main components correspond to the original paper, the detailed implementation may still cause some performance gap.
But we believe this gap may not be significant, as the reproducer communicated with the authors of Face Vid2Vid during the implementation.
Summary: This paper proposes a method for face animation via learning to refine a coarse motion field. The refinement is performed in a recurrent manner using the previous iteration's motion, occlusion map and structure correlation volume. Both the qualitative and quantitative results show improvement over prior art. Strengths: 1) Qualitative results show that the proposed method clearly models finer facial movements much better than prior art, especially around the eyes. FNeVR is very close but the proposed method has fewer uncanny artifacts. 2) Quantitatively as well, the proposed method outperforms prior art. 3) The paper is well written. Weaknesses: 1) Across all the results, I notice a strong identity shift during animation. While this is common (and seemingly worse) in prior art as well, it is important to quantify. One way to measure this is to report the FaceIDLoss or L2-loss as the head is rotated 15, 30, 60 degrees from the original head-pose. I believe this is an important evaluation to include for the sake of completeness. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: 1) An important limitation of this method is that it is sensitive to the scale of the face in the driving video. For example, around the 1:47 mark of the supplementary video, the face structure of the animated face deviates significantly from the source and is actually closer to that of the target. This is unavoidable due to the use of 2D key points in the method, in fact it is expected. The authors must include this as a limitation of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer SbHP ### Q1: Identity shift during animation, and FaceIDLoss Thanks for the good suggestion. We have compared the common face identity preservation metric CSIM in the **common response**; our method performs slightly better on this metric while performing much better on the motion-related metrics. ### Q2: Limitation on the face scale Thanks for the suggestion. Indeed, this is a limitation of our method, and we will include this point in the later version. --- Rebuttal Comment 1.1: Title: Rebuttal Update Comment: I would like to thank the authors for the rebuttal. After reading it and the other reviews, I am inclined to keep my rating. I would strongly encourage the authors to include the table from Q3 of the general response in the main paper. --- Reply to Comment 1.1.1: Comment: Thank you for the reply and for keeping the rating! Following your suggestion, we will include the table of cross-identity evaluation in the main paper.
Summary: This paper contributes to the field of unsupervised face animation by introducing a novel motion refinement method to overcome the limitations of existing prior-based motion models, especially in estimating detailed facial movements. The paper's approach introduces a new method which uses a structure correlation volume built from keypoint features. This provides motion information that does not rely on prior data. This information is used to iteratively refine the coarse motion flow estimated by a previous motion model. The authors have conducted numerous experiments on challenging benchmarks to test the effectiveness of their approach. The results indicate that this new method enhances the capability of prior-based motion representation by learning how to refine motion. This suggests that their approach can effectively increase the accuracy of unsupervised face animation tasks. Strengths: In this research, the authors present a new unsupervised face animation approach that concurrently learns both coarse (global) and finer (local) facial motions. Their method integrates a local affine motion model to learn the global, coarse facial motion and a novel motion refinement module to compensate for the local affine motion model's limited ability to model more detailed facial motions in local areas. The motion refinement process is based on the dense correlation between the source and driving images. To achieve this, a structure correlation volume is first constructed using the keypoint features of the source and driving images. The authors then train a model to iteratively generate minor facial motions from low to high resolution. The learned motion refinements are combined with the coarse motion to generate the new image. After performing extensive experiments on widely used benchmarks, the method was found to deliver the best results among existing state-of-the-art methods.
The authors have also committed to making the source code for their method publicly available in the future. Weaknesses: 1. The goal of this article is to learn fine facial motion, but using the VoxCeleb dataset may not be sufficient for this purpose. In my view, finer facial motion refers to the ability to reproduce wrinkles, eyeballs, and micro-expressions at high resolution, which may require higher-resolution datasets. 2. As noted by the authors, the reliability of keypoint estimation heavily influences the quality of finer facial motion. It is not clear whether the improvement of existing models comes from more accurate keypoint estimation or from the proposed module. 3. This paper is very similar to RAFT from ECCV 2020 in terms of insight and specifically for the module. A clearer comparison between the two works and their essential differences would be helpful. 4. It is important to note that fine facial motion may not equal better image quality. Therefore, a more thorough evaluation of the estimated motion would be beneficial. Additionally, conducting this task on higher-resolution facial images would provide more convincing results. 5. The authors set the iteration number to 8, which is an important hyperparameter. It would be useful to know why this setting was chosen and whether it significantly affects the model's runtime. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper is very similar to RAFT from ECCV 2020 in terms of insight and specifically for the module, which raises novelty concerns. 2. Related work [1] is not discussed or compared in this paper. [1] Tao, Jiale, Biao Wang, Tiezheng Ge, Yuning Jiang, Wen Li, and Lixin Duan. "Motion Transformer for Unsupervised Image Animation." In European Conference on Computer Vision, pp. 702-719. Cham: Springer Nature Switzerland, 2022. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Both limitations and social impact are discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer pDRj ### Q1: The VoxCeleb dataset may not be sufficient for evaluating finer facial motion, and a higher-resolution dataset may be more appropriate. Thanks for the suggestion, but we respectfully disagree that the VoxCeleb dataset is insufficient for evaluating finer facial motion, as our experiments in the main paper (Figure 4, Figure 5 and Table 1) have demonstrated this, and results from other methods generally fail to capture finer face motions. We here also perform experiments on the 512-resolution CelebV-HQ dataset. As seen in the table, our method can still improve existing SOTA methods on all metrics in the high-resolution setting. Since it is time-consuming to train on the 512-resolution dataset, we are not able to reproduce other baselines at this time, and we will include them in a few days. Thanks for the suggestion again.

| | L1 | PSNR | LPIPS | AKD | AED |
|:---------------------:|:---:|:----:|:-----:|:---:|:---:|
| TPSM | 0.0435 | 24.91 | 0.193 | 3.035 | 0.158 |
| SCORR+TPSM | **0.0418** | **25.24** | **0.185** | **2.911** | **0.149** |

### Q2: Whether the improvement of existing models comes from more accurate keypoint estimation or from the proposed module Thanks for the good question. If we are not misunderstanding, the question is whether the improvements come from the proposed module promoting more accurate keypoint estimation, or from the proposed module itself learning finer motions. We explain this in the following. Since our structure correlation volume is built on keypoints, the gradient on the structure correlation volume will back-propagate to the keypoint detector, and thus may affect the learning process of the keypoints. We validate this with a simple but well-designed experiment on the Voxceleb dataset: we stop the gradient flow from our structure encoders to the keypoint detector, by simply detaching the keypoints from the gradient graph before sending them to the structure encoders.
The operation is denoted by Ours-$sg$ in the table. As seen, this operation did not affect the final performance at all; the stop-gradient variant even performs slightly better. So we believe that the improvements come from our proposed module learning finer motions.

| | L1 | PSNR | LPIPS | AKD | AED |
|:---------------------:|:------:|:-----:|:-----:|:-----:|:-----:|
| Ours | **0.0353** | 25.51 | 0.152 | 1.176 | 0.107 |
| Ours-$sg$ | **0.0353** | **25.54** | **0.150** | **1.175** | **0.105** |

### Q3: A clearer comparison and the essential differences between RAFT and our method. We initially discussed this in the related work of the main paper, and we here make a more detailed and clearer discussion in the **common response**. We sincerely hope you could reconsider the novelty of our method in learning non-prior based motion in face animation. ### Q4: Fine facial motion may not equal better image quality. A more thorough evaluation of the estimated motion would be beneficial. Additionally, conducting this task on higher-resolution facial images would provide more convincing results. Thanks for the thoughtful question. In general, unsupervised face animation methods (both existing methods and ours) consist of two stages: **motion estimation** and **image generation**. One can investigate better image generation modules to improve the image quality, such as FNeVR, which adopted the SPADE [1] module as its image generator and considerably improved the generation performance, as seen in their ablation studies. But we think finding better motion estimation is more meaningful and urgent in this area. So we focused our motivation on learning motion refinements, and our experiments validated this motivation.
For evaluating the motion quality, we think the metric **AKD** in the self reconstruction task, and the **ARD** and **AUH** metrics in the cross-identity experiments that we present in Table A1 of the supplementary material, are sufficient for the purpose. We also conduct experiments on the high-resolution dataset in Q1. We sincerely hope that you could reconsider this point. [1] Semantic Image Synthesis with Spatially-Adaptive Normalization, CVPR 2019. ### Q5: Why set the current iteration number? Thanks for the question. Due to the character length limitation, we addressed this similar concern in **Q1 of Reviewer mUhK**; please refer to that. ### Q6: Comparison and discussion with existing method [1]. Thanks for suggesting the paper. This method is also a prior-based animation method, which employs a vision transformer to better learn affine motion relationships. So our non-prior based motion refinement module would still be effective for improving it. We thus make a direct comparison with it on the Voxceleb dataset, by applying our motion refinement module on top of the motion transformer. As seen, the performance even surpasses our original result in the main paper, which reflects the robustness of our non-prior based motion refinement module to different prior-based motion models. We will discuss this in the later version.

| | L1 | PSNR | LPIPS | AKD | AED |
|:---------------------:|:------:|:-----:|:-----:|:-----:|:-----:|
| MoTrans | 0.0370 | 25.08 | 0.160 | 1.191 | 0.120 |
| SCORR+MoTrans | **0.0352** | **25.64** | **0.150** | **1.164** | **0.108** |

[1] Motion Transformer for Unsupervised Image Animation, ECCV 2022. --- Rebuttal Comment 1.1: Comment: Thanks for providing the detailed rebuttal. I have read the other five comments and the rebuttal. The visualization results in the provided video are still not of high quality from my view. I am OK to accept this paper but at the borderline level.
--- Reply to Comment 1.1.1: Title: On Updating the Rating Comment: Thank you for the reconsideration of our work. We have uploaded more video results on the CelebV-HQ dataset, and it can be clearly observed that our method produces much better results than others. Thank you for the reply. And could you please update your rating?
Summary: The paper presents an unsupervised method for learning to create continuous video animations of faces given a single input image and a driving video of the same or a different subject. The method relies on the optical flow between the driving and source images to warp the features of the source image, which are then passed to a generator to create the final output frames. The main contribution of the proposed method lies in the two-step formulation of the flow estimation, in which starting from a coarse prediction using a known prior-based method (Affine Transforms, Thin Plate Splines) it iteratively produces updates to reach a more fine-grained flow (and occlusion) estimation with more local details. The flow updates are learnt in an unsupervised way without using prior models, but by building a correlation volume between the source and driving images. The paper includes experiments, generated images and supplementary videos that validate the authors' claims for more detailed animations in comparison to similar methods. Strengths: - The paper is well written overall and it is easy to follow along with the presented concepts and results. - The paper includes experimental results that validate the authors' claims, as well as ablations that offer a better understanding of model design choices. - The method produces higher-quality generated images/videos than the compared methods for unsupervised learning of face animation. - The idea of using motivation from RAFT to build a similar correlation structure for estimating motion flow for face animation is interesting. Weaknesses: - Some design choices are not empirically or experimentally justified in the paper. For example, why are specifically 6 iterations used to update the initial coarse estimations? Why does it make sense to sample from the same correlation structure for all iterations, while each iteration operates at a different image scale?
- Even though the method performs well compared to previous methods, there is still some evident identity shift in cross-subject experiments, meaning that the shape of the face slightly changes from the original to the one of the driving frame. This is evident in Figure 5 (for example 2nd row) as well as in the supplemental video. This effect makes sense because the estimated flow between different subjects possibly includes more deformation information, rather than only deformation because of expressions or rigid motion. Disentangling identity from other motions is a common weakness of cross-identity animation methods, which exists in this method, too. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In 3.4 it is mentioned that a perceptual and an equivariance loss are employed for training. Is the model trained with both same and different subjects for source and driving frames? - How are C^{i} combined to get the final values from the correlation pyramid? - What is the effect of the correlation pyramid to the memory requirements of the method? How does the memory requirements compare with the compared methods? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have included a section discussing the method's limitations and possible negative societal impact. A mentioned limitation (the correlation matrix relying on learned keypoints) might also be the reason for changing the facial shape in cross-subject animations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer mUhK ### Q1: Some design choices are not empirically or experimentally justified in the paper. For example, why are specifically 6 iterations used to update the initial coarse estimations? Thanks for the careful review and sorry for the confusion. We set the iteration number according to the image resolutions. Specifically, we start the iteration at a lower feature resolution $H/32\times W/32$ that is meaningful for motion-flow warping, and end the iteration at the highest resolution $H\times W$; thus the total iterations are set to 6 for a 256-resolution image. By changing the highest resolution to $H/2\times W/2, H/4\times W/4, H/8\times W/8$, we obtain iteration settings of $5,4,3$ accordingly. We then analyze the performance and runtime of different iteration settings. As seen in the table, the FPS goes down as the iteration number increases, while the performance is generally better with more iterations, especially for the motion-related metric AKD and the identity-related metric AED. Overall, the 5-iteration setting can be a good trade-off.

| | L1 | PSNR | LPIPS | AKD | AED | FLOPs (G) | FPS |
|:--:|:------:|:-------:|:-----:|:-----:|:-------:|:------:|:-------:|
| 3 | 0.0357 | 25.37 | 0.153 | 1.194 | 0.112 | **89.86** | **21.30** |
| 4 | 0.0354 | **25.54** | 0.152 | 1.188 | 0.113 | 91.20 | 20.45 |
| 5 | **0.0353** | **25.54** | **0.151** | 1.177 | 0.108 | 96.34 | 19.56 |
| 6 | **0.0353** | 25.51 | 0.152 | **1.176** | **0.107** | 116.5 | 18.57 |

### Q2: The identity shift in the cross-subject experiment. Thanks for pointing this out. We addressed this concern in the **common response**. Though our method performs slightly better on the identity-preservation metric, it is indeed a limitation of our method, and in the future, we would like to explore leveraging 3D face information to help alleviate this problem.
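The resolution schedule described in Q1 above can be sketched as follows; `num_iterations` is a hypothetical helper (not the authors' code) that assumes the resolution doubles at each iteration:

```python
import math

def num_iterations(lowest_div: int = 32, highest_div: int = 1) -> int:
    # Iterations run from feature resolution H/lowest_div x W/lowest_div
    # up to H/highest_div x W/highest_div, doubling the resolution each step.
    return int(math.log2(lowest_div // highest_div)) + 1

# H/32 ... H gives 6 iterations for a 256-resolution image, and capping
# the highest resolution at H/2, H/4, H/8 gives 5, 4, 3 respectively.
print([num_iterations(32, d) for d in (1, 2, 4, 8)])  # [6, 5, 4, 3]
```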
### Q3: Why does it make sense to sample from the same correlation structure for all iterations, while each iteration operates at a different image scale? Thanks for the detailed question. The correlation matrix is of size $h\times w\times h\times w$, where $h=H/4, w=W/4$, and $H,W$ define the image size. The indices of the first two dimensions correspond to the driving-frame coordinates, while those of the last two dimensions denote the source coordinates. By a simple downsampling or upsampling operation on the first two dimensions, we obtain the different-scale correlation matrices used in different iterations. The reason why it makes sense to sample from the same correlation structure for all iterations is that the warped source features at different resolutions should share similar spatial structures; thus we can downsample or upsample the correlation matrix to the different resolutions, and then use it to obtain a refined motion flow at each resolution for warping the different-scale source features. ### Q4: Is the model trained with both same and different subjects for source and driving frames? No, similar to FOMM, the model is only trained on the same subjects for source and driving frames. ### Q5: How are $C^{i}$ combined to get the final values from the correlation pyramid? What is the effect of the correlation pyramid on the memory requirements of the method? * For each $C^{i}$, as described in lines 177-179 of the main paper, we look up a $(2r+1)\times (2r+1)$ patch correlation feature. We then concatenate all correlation features in the pyramid, resulting in a $P\times (2r+1)^2$-channel correlation feature, where $P$ denotes the number of pyramid levels. * Actually, the correlation pyramid brings few additional memory requirements, as the concatenated correlation feature is sent to a convolution (with kernel size 1) to get a fixed 128-channel feature.
That means the only increase in memory requirements comes from the additional convolution parameters of size $(P-1)\times (2r+1)^2\times 128$. In practice, we set $P=2, r=3$, and the increase in parameters is $6272 \approx 0.006$M, leading to few additional memory requirements. ### Q6: How do the memory requirements compare with the compared methods? Thanks for the question. We addressed this concern in the **common response**. --- Rebuttal 2: Comment: Dear Reviewer mUhK: Thank you for the recognition of our work. We addressed your concerns in the rebuttal. Could you please share your post-rebuttal comments? We are happy to discuss and address any remaining concerns. Authors of the submission. --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response. My questions about design choices and memory requirements have been resolved after the rebuttal. Having read the other reviewers' comments and because identity shift is still evident in the provided videos, I would like to keep my score for the paper as is. --- Reply to Comment 2.1.1: Comment: Thank you for the reply and for keeping the score! As suggested, while we focus on the non-prior based motion refinement, identity shift is a common issue in existing unsupervised methods. And in the future, we would like to explore supervision from 3D face information to help alleviate this problem.
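The channel and parameter arithmetic from Q5 above can be checked with a short sketch; the helper names are hypothetical, and only the $P$, $r$, and 128-channel values come from the response:

```python
def lookup_channels(pyramid_levels: int, radius: int) -> int:
    # Each pyramid level contributes a (2r+1) x (2r+1) patch of
    # correlation values per spatial location; P levels are concatenated.
    return pyramid_levels * (2 * radius + 1) ** 2

def extra_conv_params(pyramid_levels: int, radius: int, out_channels: int = 128) -> int:
    # A 1x1 convolution maps the concatenated correlation feature to a
    # fixed number of channels; relative to a single-level pyramid, the
    # added parameters are (P - 1) * (2r+1)^2 * out_channels.
    return (pyramid_levels - 1) * (2 * radius + 1) ** 2 * out_channels

# With P = 2 and r = 3 as in the response:
print(lookup_channels(2, 3))     # 2 * 49 = 98-channel correlation feature
print(extra_conv_params(2, 3))   # 1 * 49 * 128 = 6272 extra parameters (~0.006M)
```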
Rebuttal 1: Rebuttal: ## Common Response We thank all reviewers for the valuable feedback and insightful comments. We appreciate the reviewers' positive comments regarding the novelty and the effectiveness of our method. We now clarify the reviewers' common questions as follows. ### Q1: Difference with optical flow and neural style transfer methods (Reviewers ELv3 and pDRj). * We should emphasize that a key difference between our method and optical flow methods lies in building the correlation volume, which we discussed in the related work from lines 90 to 104 in the main paper. Optical flow methods compute the **appearance feature correspondence** of two consecutive frames, while our key idea is to learn the **structural correspondence**. The methods in optical flow cannot be directly applied to face animation, as computing appearance feature similarity between driving and source images will naively leak the driving appearance, which is not expected in face animation. As a result, the optical flow methods will directly find appearance in the source face that is similar to the driving face. However, the task of face animation aims to find the motion between source and driving (structure correspondence), not to find appearance correspondence. Existing face animation methods generally model the structural correspondence by local parametric transformations between the source and driving frame, which ignores the importance of **building the non-prior based correspondence**. Thus our efforts in constructing the structure correlation volume, though inspired by the optical flow method RAFT, are meaningful and well-motivated for face animation. And based on this motivation, we made the first attempt to introduce non-prior evidence for motion refinement. Therefore, we believe our method is novel and can fill some gaps in the field of face animation. * Thank you for suggesting the neural style transfer methods.
We have carefully read the papers [1-2] in this area. We summarize two main differences between our method and theirs: the way correspondence is built, and the purpose/motivation of building it. On one hand, they still follow a paradigm similar to optical flow methods, computing the visual appearance similarity between the content image and the geometric style image, while our method uses structure features as input to build the correlation volume. On the other hand, and more essentially, the neural style transfer methods [1-2] utilize the correspondence matrix to estimate **a global parametric transformation**, which is totally different from our motivation of using the structure correlation to estimate the **non-parametric motion refinement** across all spatial locations. This non-parametric motion modeling is specifically designed for face animation, and it is generally ignored by existing methods. Thanks for suggesting the papers again. We will include these discussions together with the optical flow methods in the later version. [1] Geometric Style Transfer. arXiv 2020. [2] Learning to Warp for Style Transfer. CVPR 2021. ### Q2: Run time and memory evaluation (Reviewers mUhK and zuqN) We thank the reviewers for the suggestion. We here present the inference-stage run time evaluation. All results are obtained by running reconstruction experiments on a single NVIDIA 3090 GPU. Compared to year-2022 methods, the memory cost is similar among all methods; the FPS of our method is slightly worse than that of LIA, DAM, TPSM, and FNeVR, but better than DaGAN. Considering that our method produces the best animation quality, it achieves a better trade-off between performance and inference speed.
| | FOMM | MRAA | LIA | DAM | DaGAN | TPSM | FNeVR | Ours |
|:--:|:------:|:-------:|:-----:|:-----:|:-------:|:------:|:-------:|:------:|
| Memory (G)$\downarrow$ | 6.48 | 6.51 | **2.63** | 6.49 | 7.07 | 6.82 | 6.53 | 7.02 |
| FLOPs (G)$\downarrow$ | **56.06** | 61.07 | 88.32 | 56.39 | 89.78 | 142.4 | 130.2 | 116.5 |
| FPS$\uparrow$ | **41.64** | 33.04 | 20.98 | 30.29 | 17.72 | 19.96 | 21.83 | 18.57 |

### Q3: Cross-identity evaluation (Reviewers ELv3 and 2aeH) and identity preservation (Reviewers mUhK, SbHP, and zuqN)

* We thank the reviewers for the suggestion. Initially, we compared the ARD and AUH metrics for cross-identity animation in Table A1 of the supplementary material, which evaluate the pose and expression quality of generated images. We here further compare the CSIM (cosine similarity of the face embedding using ArcFace [1]) and FID metrics. As seen in the following table, our method also performs best in terms of CSIM and FID, indicating that it better preserves the source identity information. It should be noted that, in the self-reconstruction task, the AED metric is also designed for evaluating identity preservation, and our method generally performs best in terms of that metric.

| | FOMM | MRAA | LIA | DAM | DaGAN | TPSM | FNeVR | Ours |
|:--:|:------:|:-------:|:-----:|:-----:|:-------:|:------:|:-------:|:------:|
| ARD$\downarrow$ | 3.122 | 2.678 | 3.883 | 2.669 | 3.090 | 2.724 | 2.755 | **2.399** |
| AUH$\downarrow$ | 0.850 | 0.729 | 0.772 | 0.717 | 0.751 | 0.668 | 0.751 | **0.625** |
| CSIM$\uparrow$ | 0.702 | 0.678 | 0.706 | 0.698 | 0.682 | 0.689 | 0.714 | **0.722** |
| FID$\downarrow$ | 70.00 | 69.29 | 71.01 | 69.16 | 68.50 | 68.67 | 68.42 | **66.27** |

[1] ArcFace: Additive Angular Margin Loss for Deep Face Recognition. CVPR 2019.

* As suggested, we also conducted a user study. Specifically, 20 participants were asked to evaluate 50 randomly generated videos of the different methods, according to the transferred motion and the preserved identity.
In total there are 1000 votes, and we compute the ratio in favor of our method. It can be seen that our method is generally preferred over TPSM and FNeVR.

| Ours $vs$ TPSM | Ours $vs$ FNeVR |
|:------:|:-------:|
| 61.1% | 56.9% |
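As a side note on the CSIM metric used above, the computation itself is very compact. The sketch below is our own illustrative code, not the authors' implementation; it assumes the identity embeddings have already been extracted with a pretrained ArcFace model (the embedding extraction itself is not shown).

```python
import numpy as np

def csim(source_embedding, generated_embedding):
    """Cosine similarity between the identity embedding of the source
    face and that of the generated face; higher values indicate better
    identity preservation."""
    a = np.asarray(source_embedding, dtype=float)
    b = np.asarray(generated_embedding, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In practice the reported score would typically be this similarity averaged over all generated frames of a video.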
NeurIPS_2023_submissions_huggingface
2023
Summary: 1. This work proposes a new unsupervised face animation approach that simultaneously learns coarse and finer motions. 2. The results outperform state-of-the-art methods on two representative datasets. Strengths: 1. The target task is important. 2. The method is intuitive and reasonable. 3. The results are good. 4. The paper is well-written. Weaknesses: 1. Comparison to the prior model. The proposed non-prior model seems like a non-linear version of the prior model based on affine transformations. When the number of keypoints increases and the grid of the affine transformation becomes much finer, will it approach the proposed non-prior model? Another related question is why the proposed model works better than a local thin-plate-spline motion model (such as [38]). 2. Visualization. 1) In the video comparison, the results of the proposed model look close to FNeVR. The authors could also provide a user study of visual quality for comparison. 2) Also, the results on CelebV-HQ are really important, since it is a much more challenging dataset. Visualization results on this dataset are necessary, in the comparison, the ablation study, and the video. 3) Why were the results of FNeVR on CelebV-HQ not reported in Table 1? I am inclined to accept this paper if these concerns can be resolved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations are discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 2aeH

### Q1: Will increasing the number of keypoints of the local affine motion model approach the non-prior-based motion model? And why does the proposed model work better than a local thin-plate-spline motion model?

* Thanks for the good question. If one could learn dense human face keypoints that are meaningful and structured, the local affine motion model would approach the non-prior-based motion, just as many local linear models can approach a good non-linear model. However, it is very hard to learn dense structured keypoints in unsupervised face animation. We conducted an experiment configuring FOMM with 100 keypoints; the result in the table is denoted as "FOMM-kp100". It should be noted that the motion-related metric AKD even degrades compared to the 10-keypoint configuration. We found that many keypoints are non-meaningful and overlap with other keypoints or lie in the background. On the other hand, in a relatively sparse configuration such as 10-20 keypoints, our non-prior-based motion refinement approach can consistently improve the performance.
* The reasons why the proposed model works better than a local thin-plate-spline motion model are twofold. On one hand, though nonlinear, the local thin-plate-spline motion model is still a parametric (prior-based) motion model, so the non-prior-based motion has a stronger ability to model complex deformations. On the other hand, in TPSM [37], 5 geometrically consistent keypoints are needed to determine one local thin-plate-spline transformation, and 10 local transformations require 50 geometrically consistent keypoints, which heavily increases the learning difficulty. Although TPSM used a dropout strategy when learning the transformations, the learning difficulty may still prevent their method from reaching the full representation ability of local thin-plate-spline transformations.
| | L1 | PSNR | LPIPS | AKD | AED |
|:---------------------:|:------:|:-----:|:-----:|:-----:|:-----:|
| FOMM-kp100 | 0.0371 | 25.15 | 0.162 | 1.288 | 0.116 |
| FOMM-kp10 | 0.0386 | 24.62 | 0.164 | 1.254 | 0.124 |
| SCORR+FOMM-kp10 | **0.0367** | **25.15** | **0.156** | **1.224** | **0.114** |
| FOMM-kp15 | 0.0376 | 24.91 | 0.161 | 1.245 | 0.123 |
| SCORR+FOMM-kp15 | **0.0360** | **25.34** | **0.154** | **1.199** | **0.108** |
| FOMM-kp20 | 0.0376 | 24.97 | 0.160 | 1.222 | 0.118 |
| SCORR+FOMM-kp20 | **0.0356** | **25.49** | **0.152** | **1.194** | **0.107** |

### Q2: User study, visualization on the CelebV-HQ dataset, and why the results of FNeVR on CelebV-HQ were not reported in Table 1

* Thanks for the suggestion. We provide the user study in the **common response**.
* Thanks for the suggestion. We have provided an anonymous link to video comparison results on the CelebV-HQ dataset; it can be accessed from the Area Chair. As can be seen in the provided video, on this more challenging dataset our method generally performs the best in terms of learning finer motions, while other methods often produce blurry results, indicating that the motion learned by their methods is blurrier than ours.
* As stated in lines 255-257 of the main paper, the authors of FNeVR did not release their training code and did not conduct experiments on this dataset. We had trouble reproducing their method due to the unclear use of the 3DMM parameters, so we did not report their results on this dataset. --- Rebuttal 2: Comment: Dear Reviewer 2aeH: Thank you for the recognition of our work. We addressed your concerns in the rebuttal. Could you please share your post-rebuttal comments? We are happy to discuss and address any remaining concerns. Authors of the submission.
Summary: The paper presents a new approach to generating human face videos based on a source image and a driving video, simultaneously learning both coarse and finer facial motions. The proposed approach utilizes a structure correlation volume constructed from keypoint features to provide non-prior-based motion information, which is used to iteratively refine the coarse motion flow estimated by a prior motion model. The main contributions of the paper are: 1. A non-prior-based motion refinement approach to compensate for the inadequacy of existing prior-based motion models. 2. Utilizing the keypoint features to build a structure correlation volume that represents the structure correspondence between the source and driving images across all spatial locations. 3. Extensive experiments on challenging benchmarks that demonstrate the effectiveness of the proposed approach in enhancing the capability of prior-based motion representation through learning the motion refinement. Strengths: 1. The paper presents a novel approach to generating human face videos that simultaneously learns both coarse and finer facial motions. The proposed approach utilizes a structure correlation volume constructed from keypoint features to provide non-prior-based motion information, which is used to iteratively refine the coarse motion flow estimated by a prior motion model. The approach addresses a significant problem in the field of unsupervised face animation. 2. The paper is of good quality, and the proposed method is clear. The authors provide a detailed description of the approach, including the structure correlation volume and the motion refinement module. The experimental results demonstrate the effectiveness of the proposed approach in enhancing the capability of prior-based motion representation through learning the motion refinement. 3. The paper is well-written and easy to follow.
The authors provide a clear and concise description of the proposed approach, including the key components and the experimental setup. The paper is well-organized, and the authors provide a clear summary of the contributions and limitations of the proposed approach. 4. The proposed approach has significant implications for the field of unsupervised face animation, addressing an important problem in this field: the inadequacy of existing prior-based motion models in capturing detailed facial motions. The proposed approach is effective in enhancing the capability of prior-based motion representation through learning the motion refinement. The approach has potential applications in creating imaginative image animations for entertainment purposes, but it also has the potential to be used in creating deepfakes, which could have negative impacts. The authors acknowledge this limitation and provide recommendations for future work. Weaknesses: 1. The pictures in Figure 1 are too small; in particular, the optical flow map is not clear enough, and no obvious changes can be seen before and after refinement. Also, is the schematic diagram of the affine transformation in Figure 1 drawn based on a real example? Why does the affine transformation change so much after refinement? 2. Straightforward combination of existing techniques. The innovation of this paper is not sufficient; the main innovation lies in the motion refinement module. However, the correlation volume and iterative optical flow refinement used in it are very similar to the correlation matrix and iterative updates in some flow estimation [1,2,3] and correspondence estimation [4,5] methods, and this process of first constructing a 4D correlation volume and then iteratively updating and optimizing the optical flow has also been used in some neural style transfer (NST) methods [6,7], but the authors did not clarify the differences between the module used in this paper and the related modules in these NST methods. 3.
There is an error in line 180 of the article: the authors claim that the look-up operation on the correlation volume is shown on the left side of Figure 2, but there is no corresponding content in Figure 2. Should Figure 2 be changed to Figure 3? 4. In the comparison video provided in the supplemental material, the video of each method is too small, and it is difficult to see the tiny facial deformation details. It is recommended to arrange the videos of each method in the form of a nine-square grid and enlarge the size of each video window. 5. The experiments verifying the effectiveness of the proposed method are insufficient. The framework used by the method is that of unsupervised image animation [8,9,10,11], which should be able to generate animation videos for any object category. In addition to human faces, this framework can also be applied to human bodies and animals. Previous unsupervised image animation methods [8,9,10,11] have also been tested on datasets of different subjects, including TaiChiHD [9], TED-talks [10], and MGif [8]. Your method has only been tested on face-related datasets, but from your overall framework, I don't see any modules that restrict the method to working only on human faces. Therefore, I think that the non-prior-based motion refinement module should be applied to datasets of different objects to further test its effectiveness. 6. Lack of quantitative experiments on the cross-identity task, such as the user studies used in previous methods [8,9,10,11]. References [1] Dosovitskiy, Alexey, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. 2015. “FlowNet: Learning Optical Flow with Convolutional Networks.” In 2015 IEEE International Conference on Computer Vision (ICCV). doi:10.1109/iccv.2015.316. [2] Teed, Zachary, and Jia Deng. 2020.
“RAFT: Recurrent All-Pairs Field Transforms for Optical Flow.” In Computer Vision – ECCV 2020, Lecture Notes in Computer Science, 402–19. doi:10.1007/978-3-030-58536-5_24. [3] Xu, Haofei, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao. 2022. “GMFlow: Learning Optical Flow via Global Matching.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr52688.2022.00795. [4] Kim, Seungryong, Stephen Lin, SangRyul Jeon, Dongbo Min, and Kwanghoon Sohn. 2018. “Recurrent Transformer Networks for Semantic Correspondence.” arXiv, October 2018. [5] Zhang, Pan, Bo Zhang, Dong Chen, Lu Yuan, and Fang Wen. 2020. “Cross-Domain Correspondence Learning for Exemplar-Based Image Translation.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr42600.2020.00519. [6] Liu, Xiaochang, Xuanyi Li, Ming-Ming Cheng, and Peter Hall. 2020. “Geometric Style Transfer.” arXiv, July 2020. [7] Liu, Xiao-Chang, Yong-Liang Yang, and Peter Hall. 2021. “Learning to Warp for Style Transfer.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr46437.2021.00370. [8] Siarohin, Aliaksandr, Stephane Lathuiliere, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. 2019. “Animating Arbitrary Objects via Deep Motion Transfer.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr.2019.00248. [9] Siarohin, Aliaksandr, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. 2019. “First Order Motion Model for Image Animation.” Neural Information Processing Systems, 2019. [10] Siarohin, Aliaksandr, Oliver J. Woodford, Jian Ren, Menglei Chai, and Sergey Tulyakov. 2021.
“Motion Representations for Articulated Animation.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr46437.2021.01344. [11] Zhao, Jian, and Hui Zhang. 2022. “Thin-Plate Spline Motion Model for Image Animation.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It is suggested that the authors clarify the differences between the module they used in this paper and the related modules in some NST methods [6,7]. Also, experiments on more datasets of different object categories are needed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, they have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer ELv3

### Q1: Presentation of Fig. 1, cross-reference of Fig. 3 (line 180), and layout of the supplementary videos

* Sorry for the confusion. The affine transformation is for illustration purposes, not from a real example, but the two motion flows are derived from a pair of real source and driving images. The motion flow is actually obtained by compositing a set of local affine transformations in FOMM, while we use a single global affine transformation for the purpose of a simple introduction. For better understanding, we provide the visualization of the refined motion flows across all iterations in the video (the last few seconds). We will improve Fig. 1 in a later version.
* You are right, Figure 2 should be changed to Figure 3; we will correct this in a later version.
* Thanks for the suggestion! We will rearrange the videos of each method into a nine-square grid for better comparison. Benefiting from your suggestion, our newly updated video results are clearer than before.

### Q2: The innovation of this paper; differences from optical flow methods and neural style transfer methods

* Thank you for the valuable thoughts. We addressed these concerns in the **common response**, and we sincerely urge your reconsideration on this point.

### Q3: The experiments verifying the effectiveness of the proposed method are insufficient; experiments animating other object categories are needed

Similar to DaGAN and FNeVR, which explored face-specific information such as facial depth and 3DMM parameters to help or enhance learning the local affine motion representation of faces, we would like to emphasize that our method is better suited to face animation than to human body animation. The reason is that the locally rigid assumption is reasonable and robust for human bodies, so our non-prior-based motion refinement approach may bring no significant improvements there.
However, this assumption is easily violated for the human face, since non-rigid deformation is very common in local face areas, and this special attribute of face motion motivates us to investigate non-prior-based motion refinement. Existing methods such as FOMM, MRAA, DAM, and TPSM generally focused on finding better parametric motion models; while this brings considerable improvements for human body animation, it ignores the importance of non-prior motion for face animation. Therefore, our method is specially designed for face animation, and we believe the experiments on face animation are sufficient to validate its effectiveness. As a thought, our method may help model deformations such as the clothes of a human body, which may need finer motion modeling than the body itself. However, currently the more urgent problem in body animation is to more accurately find the body structures so that they can be better transferred. We nevertheless conducted additional experiments on the TED-talks dataset. As shown, our method improves the earlier baseline FOMM by a considerable margin; however, it does not improve the state-of-the-art baseline TPSM much, especially on the motion-related metric AKD.

| | L1 | AKD | AED |
|:--------:|:--------:|:--------:|:----:|
| FOMM | 0.0281 | 4.413 | 0.129 |
| SCORR+FOMM | 0.0269 | 3.535 | 0.119 |
| TPSM | 0.0271 | 3.501 | 0.126 |
| SCORR+TPSM | 0.0265 | 3.499 | 0.118 |

### Q4: Lack of quantitative experiments on the cross-identity task, such as the user studies used in previous methods

* Thanks for the suggestion. Initially, we conducted a cross-identity evaluation in the supplementary material, evaluating the pose and expression quality of generated videos. We here further present a comparison on the CSIM and FID metrics. We addressed this concern in the **common response**.
--- Rebuttal 2: Title: On Responses to the Rebuttal Comment: Dear Reviewer ELv3: We addressed your concerns on innovation, experiments, and presentation in the rebuttal. We sincerely urge your reconsideration of this paper and hope that you can share your post-rebuttal comments. Thank you very much. Authors of the submission.
Characterizing Graph Datasets for Node Classification: Homophily-Heterophily Dichotomy and Beyond
Accept (poster)
Summary: The main contribution of this paper is a new measure called label informativeness that characterizes how much information a neighbor’s label provides about a node’s label. The paper also discusses the expected properties of homophily. The experimental results are based on the relation between the performance of GNNs and the proposed metric. Strengths: 1. The quality of the theoretical demonstrations. Most of the demonstrations seem correct, even though I did not go exhaustively through the details, as they are in the supplementary material. 2. Good discussion about homophily. The paper covers several aspects of homophily, mentioning the desired properties of this measure. 3. The clarity of the proposition. The main contribution of the paper is clearly stated. 4. Readability. The paper is well-written, being very simple to read. Weaknesses: 1. The novelty of the work could be questionable. The measure is not novel; please refer to "On the Estimation of Relationships Involving Qualitative Variables", available at https://www.jstor.org/stable/2775440. That paper presents an uncertainty coefficient that is closely related to the measure proposed in the submission. Even though that paper does not focus on graphs, the proposed measure seems to be the application of this measure over a particular distribution, rather than the proposition of a new measure. Second, the same formulas and equations are already published in [16]. Moreover, in [16], it is mentioned that LI was previously introduced. 2. The conclusion mentions "LI characterizes how much information a neighbor’s label provides about a node’s label". Unfortunately, this is not explained in the paper, except by the phrase "$LI \in [0, 1]$. If the label y_n allows for unique reconstruction of y_\epsilon, then LI = 1. If y_\epsilon and y_n are independent, LI = 0". 3. The paper mentions that "Through a series of experiments, we show that LI correlates well with the performance of GNNs.". 
Based on https://www.jstor.org/stable/2775440, I believe that this is related because the measure is based on the classification performance, rather than GNN. 4. The limitations mention "this measure to be both informative and simple to compute and interpret.". However, no interpretations are given besides the values of 0 and 1. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Regarding the analysis of the value. Does LI have a linear scale of the values? Does a value of 0.5 mean 50% of reconstruction? 2. Is the LI general enough to obtain similar correlations with other classification models (I think so)? 3. Could you explain the main difference between LI (the metric, not the way that is used) and https://www.jstor.org/stable/2775440? 4. Could you explain the difference between LI with respect to the LI metric used in the published paper [16]? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors clearly state the limitations of their work. However, there are some limitations that were omitted. As a suggestion, if the main contribution is the use of the JSTOR paper in graphs. Please state it this way, rather than saying that this is a new measure. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and comments! In this response, we address your questions and concerns. **Contribution of our work** In our discussion below we argue why we think that LI is a novel graph characteristic and how it relates to known concepts in information theory. However, in this discussion, we do not want to narrow the scope of our paper to only one formula (3), thus let us briefly state the main contributions of our work. We think that our main contributions are the proposed theoretical framework and theoretical analysis of different measures within this framework. We believe that the important takeaway from our work is that one has to be careful when choosing graph characteristics to describe and compare graph datasets. In particular, it is important to verify the unbiasedness of the measures. We advise using adjusted homophily and LI since both these measures are unbiased and have other important desirable properties. **The novelty of LI and its relation to concepts from information theory** In our work, we use normalized mutual information (equation (3)) as a well-known and widely used concept and therefore do not cite any works here. Thanks for pointing out that we do not explicitly call this measure “normalized mutual information” (or “coefficients of constraint”, “uncertainty coefficient”, or “proficiency” which are other terms used in the literature) -- this happened because we tried to explain the intuition and interpretation of this measure while introducing it step-by-step. This will be fixed - we will explicitly write that the concept is well-known. However, the main novelty of LI is applying this known notion of mutual information to graph datasets. For this, we need to define graph-based random variables to which this concept can be applied. Note that such random variables are not predetermined and there are several options, some of which we discuss in the paper. 
We are not aware of prior work applying the concept of mutual information to graphs in a similar manner to characterize edge-label relations. We address other questions and concerns below. > Could you explain the main difference between LI (the metric, not the way that is used) and https://www.jstor.org/stable/2775440? As described above, we use the uncertainty coefficient as a known concept in information theory (and will explicitly state it in the paper to avoid any possible misunderstanding). However, to properly address the questions on the relation to Theil (1970), we would like to kindly ask if you can point us to a particular equation or definition in Theil (1970) to which we should compare our notion of LI. Also, we cannot agree with the distinction between the metric and the way that it is used, since in our case defining the random variables is a part of the metric definition. Drawing parallels with other known graph characteristics, one could say that the assortativity coefficient is the application of Pearson correlation to graphs, edge homophily is the application of the operation “average” to graphs, PageRank is the application of eigenvectors to graphs, and so on. In all these cases, the characteristics are defined by the application of a known concept to a graph domain. > The conclusion mentions "LI characterizes how much information a neighbor’s label provides about a node’s label". Unfortunately, this is not explained in the paper The cited sentence refers to the concept of information from the information theory, i.e., this is just a rephrasing of the definition of LI: by definition, LI describes the (relative) amount of information revealed by the label of a single neighbor. We provide more intuition in our answer to the next question. > Regarding the analysis of the value. Does LI have a linear scale of the values? Does a value of 0.5 mean 50% of reconstruction? The value of LI can be related to the cross-entropy loss. 
Indeed, assume that no information about the neighbors’ labels is available. Then, the best classifier in terms of the cross-entropy assigns probability $p(k)$ to each class $k$. For this classifier, the cross-entropy loss is the entropy of $p(\cdot)$. If we know the label of one neighbor, the cross-entropy loss of the best possible classifier is reduced to the conditional entropy. Thus, we can say that LI describes the relative reduction of the expected cross-entropy loss of the best possible classifier. So, the value of 0.5 means that this loss becomes two times smaller. We hope that this explanation addresses the concern about the interpretability of the measure. This simple intuition, however, does not account for the fact that in graphs each node has several neighbors which may have various distributions of labels. Thus, the reasoning above cannot be directly applied, so we additionally verify the usefulness of LI as a graph characteristic via experiments.

> Is the LI general enough to obtain similar correlations with other classification models (I think so)?

We hope that our reply to the previous question addresses this concern. But if not, could you please clarify which classification models we need to consider?

> Could you explain the difference between LI with respect to the LI metric used in the published paper [16]?

Since explaining the current paper's relationship to the LI used in [16] involves a potential double-blind issue, we have checked it with the AC, and the AC confirms that the LI in [16] does not negate the novelty of the LI in the current work. We hope that our response addresses your concerns. We are happy to answer any additional questions during the discussion period if needed. --- Rebuttal Comment 1.1: Title: Thanks for the answer Comment: I have read the author response. Thanks for agreeing that the measure is similar to previous work. This was one of my main issues. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response!
This concern will be addressed in the text by explicitly saying that we use normalized mutual information as an ingredient of our definition. We hope that our response to other questions clarifies the intuition behind LI and its scale. We also thank the reviewer for acknowledging the quality of the theoretical analysis, discussion about homophily, and readability of the paper in the review.
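To make the label informativeness discussed in the exchange above concrete, here is a small illustrative sketch (our own code, not taken from the paper under review). It computes LI as the mutual information between the labels at the two endpoints of a uniformly random edge, normalized by the label entropy; the exact definition and normalization in the paper may differ.

```python
import numpy as np
from collections import Counter

def label_informativeness(edges, labels):
    """Normalized mutual information between the labels at the two
    endpoints of a uniformly random edge (0 = endpoint labels are
    independent, 1 = a neighbor's label uniquely determines the
    node's label)."""
    # joint distribution of endpoint labels; count each undirected
    # edge in both directions so the two marginals coincide
    joint = Counter()
    for u, v in edges:
        joint[(labels[u], labels[v])] += 1
        joint[(labels[v], labels[u])] += 1
    total = sum(joint.values())
    p = {pair: c / total for pair, c in joint.items()}
    # degree-weighted marginal distribution of a single endpoint label
    marg = Counter()
    for (a, _), pr in p.items():
        marg[a] += pr
    entropy = -sum(pr * np.log(pr) for pr in marg.values())
    mi = sum(pr * np.log(pr / (marg[a] * marg[b]))
             for (a, b), pr in p.items())
    return mi / entropy
```

On a perfectly homophilous toy graph (every edge connects same-label nodes) this sketch returns 1, and it approaches 0 when endpoint labels carry no information about each other.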
Summary: This paper analyzes homophily measures and proposes a new characteristic highly correlated with performance. Specifically, the authors suggest desirable properties for the metric and show that current homophily measures do not satisfy some important properties, so suggest a new homophily measure to satisfy them. Furthermore, beyond homophily arguments, the authors suggest a new characteristic that is more highly correlated with performance from the perspective of the usefulness of neighbor labels to predict the label of the corresponding center node. They demonstrate that the correlation between the values of proposed measures and performance is significantly higher than current homophily measures on various architectures and datasets. Strengths: - The authors suggest the desirable properties for the metrics and show that current homophily measures do not satisfy important traits such as asymptotic constant baseline. - They suggest adjusted homophily to satisfy most of the important properties including the asymptotic constant baseline. - They propose a new characteristic describing the effectiveness of graph structure for performance. - The proposed characteristic exhibits a significantly higher correlation with performance compared to homophily measures. Weaknesses: Although the authors provide the correlations between performance and a given metric on Cora and Citeseer by modifying graphs synthetically, it seems not enough to show the general applicability. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Could you provide the results when calculating the correlation only based on the values of metrics and performance in Appendix Table 4, without manipulating the original graphs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: As the authors mentioned, LI only considers node label interaction in graphs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and positive feedback! In our experiments we indeed mostly focus on synthetic datasets, where connectivity patterns can be easily controlled, and on semi-synthetic data previously used in the literature for the analysis of homophily measures. We originally did not experiment with real datasets since various factors besides edge-label relations may affect the performance. Moreover, we did not want to shift the focus of our work from theoretical analysis to purely experimental results: LI is a very simple measure that cannot be expected to capture all the complex situations that arise in practice. However, as asked by several reviewers, we also measured the correlation between performance and different measures on real datasets from Table 4, please see our [general response](https://openreview.net/forum?id=m7PIJWOdlY&noteId=hxPviGCfUx). There we show that even on real datasets LI agrees with GNN performance better than homophily measures do. We hope these additional experiments address your concern and will be happy to answer any additional questions! --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your response. I read your response and PDF carefully. I'll keep my score as weak accept.
Summary: This paper focuses on developing techniques to characterize graph node classification datasets. First, the paper proposes a theoretical framework for developing measures of a graph’s characteristics. Then two measures called “adjusted homophily” and “label informativeness” are proposed to easily characterize the connectivity patterns of graph datasets. Experimental results show a stronger correlation of the proposed measures with GNNs’ performance. Strengths: 1. This paper analyzes the drawbacks of existing measures, like node homophily and edge homophily. 2. This paper proposes a generic framework with theoretical analysis for developing techniques to characterize graph node classification datasets, which will help in designing more suitable models for certain datasets. The proposed minimal agreement, maximal agreement, and monotonicity properties are very useful. 3. The proposed adjusted homophily removes the dependence on the number of classes and class sizes across datasets and is of broader applicability. 4. The proposed label informativeness has a strong correlation with GNNs’ performance, which is interesting and novel. Moreover, this can help the development of more powerful heterophily-suited GNNs. Weaknesses: 1. Only Cora and Citeseer, which are considered homophilic graphs, are used for showing the correlation between GNNs’ performance and LI; more heterophilic graphs should be tested. 2. The assortativity coefficient is mentioned in Section 2.4, but there is no comparison with the proposed measures. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. More details about how to get the adjusted value in lines 183 and 184. 2. Edge label informativeness seems to be smaller than node label informativeness in most cases; any explanation for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have adequately addressed the limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and for acknowledging the usefulness of our theoretical framework! We address your questions and concerns below. > Only Cora and Citeseer that be considered as homophilic graphs are used for showing the correlation between GNNs’ performance and LI, more heterophilic graphs should be tested. In our experiments we mostly focus on synthetic datasets where connectivity patterns can be easily controlled. Here we consider both SBM and LFR random graph models with varying parameter combinations allowing us to get non-trivial patterns. We also use semi-synthetic data from [17], where datasets are generated based on Cora and Citeseer. We originally did not experiment with real datasets since various factors besides edge-label relations may affect the performance. However, as asked by several reviewers, we also measured the correlation between performance and different measures on real datasets from Table 4, please see our [general response](https://openreview.net/forum?id=m7PIJWOdlY&noteId=hxPviGCfUx). There we show that even on real datasets LI agrees with GNN performance better than homophily measures do. We hope these additional experiments address your concern. > Assortativity coefficient is mentioned in section 2.4, but there is not comparison with proposed measures. As written in Section 2.4, the assortativity coefficient reduces to (2) when applied to discrete node attributes on undirected graphs. Thus, it produces the same values in our setting. 
We call this measure _adjusted homophily_ in our paper for the following reasons: - To avoid ambiguity with other usages of assortativity, e.g., assortativity coefficient often refers to degree-degree correlations in graphs without node labels - To emphasize that adjusted homophily is a standard adjustment for chance applied to edge homophily (i.e., adjusted homophily relates to edge homophily as, e.g., adjusted Rand index relates to Rand index) - To emphasize that this measure indicates the level of homophily since this coefficient is rarely used in graph ML literature for this purpose > More details about how to get the adjusted value in lines 183 and 184. According to the configuration model, the probability that a given edge goes to a node of degree $d$ equals $\frac{d}{2|E|}$, where the denominator equals the sum of all node degrees. Then, to get the probability of connection to a node from class $k$, we need to sum these values over all nodes from class $k$. Thus, we obtain $\frac{D_k}{2|E|}$ since $D_k$ is the sum of all degrees for nodes from class $k$. Multiplying this by $D_k$, we get twice the expected number of edges going from class $k$ to class $k$. Dividing this by $2|E|$ gives that $\frac{D_k^2}{4|E|^2}$ is the expected fraction of edges going from class $k$ to class $k$. Then, if we sum over all the classes, we get the expected fraction of homophilous edges. We subtract this value from $h_{edge}$ to get an unbiased measure. There are some negligible error terms in this reasoning, so we refer to Section B.4, where all error terms are carefully tracked. > Edge label informativeness seems to be smaller than node label informativeness in most cases, any explanation about this? Edge and node label informativeness differ in how they weight high/low-degree nodes. For node label informativeness, averaging is over the nodes, so all nodes are weighted equally. 
For edge label informativeness, averaging is over the edges, which implies that high-degree nodes make a larger contribution to the final measure. It is natural to expect that for high-degree nodes the amount of information from each individual neighbor is lower than for low-degree nodes since neighbors of high-degree nodes are expected to be more diverse and closer to the “average” distribution. Thus, edge LI is expected to be smaller. Thank you very much for this observation, we will add this discussion to the text! We hope that we addressed your concerns and will be happy to answer any additional questions! --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have read all other reviews and rebuttal, and submitted rebuttals have addressed most concerns. I have raised my score.
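The configuration-model derivation in this rebuttal maps directly to code. Below is a toy sketch we wrote from that derivation (the function name is ours; we also divide by the maximal possible value of the numerator so that a perfectly homophilous graph scores 1, consistent with how adjustments for chance such as the adjusted Rand index are normalized, and ignoring the negligible error terms tracked in the paper's Section B.4):

```python
from collections import Counter

def adjusted_homophily(edges, labels):
    """Edge homophily adjusted for chance under the configuration model.

    edges: undirected (u, v) pairs; labels: dict mapping node -> class.
    """
    m = len(edges)  # |E|
    h_edge = sum(labels[u] == labels[v] for u, v in edges) / m
    class_degree = Counter()  # D_k: sum of degrees of nodes in class k
    for u, v in edges:
        class_degree[labels[u]] += 1
        class_degree[labels[v]] += 1
    # Expected fraction of homophilous edges: sum_k (D_k / 2|E|)^2
    expected = sum((d / (2 * m)) ** 2 for d in class_degree.values())
    return (h_edge - expected) / (1 - expected)
```

On a perfectly homophilous two-class graph this returns 1, and on a two-class bipartite (fully heterophilous) graph it returns -1, illustrating the maximal- and minimal-agreement properties discussed in the review thread.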
Summary: The paper proposes new measures to characterize homophily and heterophily in graph datasets. First, some desirable properties are asserted for a good measure of homophily. It is discussed whether various prior measures of homophily satisfy these properties, then a new measure called *adjusted homophily* is proposed; this is a modification of the established measure of *edge homophily* using information about (degree-adjusted) class distributions of nodes. After this, to capture the fact that a node's class label can be, depending on the dataset, either informative or uninformative about its neighbor's class label, a measure called *label informativeness* (LI) is also proposed. When randomly sampling an edge $(u,v)$, this is the fraction of the information content of $u$'s class label that is provided by knowing $v$'s class label. After establishing these measures, the paper lists them for several real-world datasets. In experiments on synthetic SBM graphs and semi-synthetic modifications of the Cora graph, the authors show that LI is better correlated with GNN performance than homophily. Strengths: - The homophily/heterophily topic has seen a lot of interest in the past few years, so this work is likely to get much attention. - The paper is very well-written. It is logically organized and easy to read. - The paper makes a compelling case that both the adjusted homophily and LI measures are the most sensible such measures. Weaknesses: - While LI seems like a very sensible measure, it is rather straightforward, so I am left thinking: First, has it really not been proposed by some prior work, especially given that the general concept of normalized information measures is well-established (e.g., [this](https://en.wikipedia.org/wiki/Mutual_information#Normalized_variants))? I don't see this as a weakness on its own, since I am not aware of such prior work. 
However, I do feel that some more graph-specific theoretical work addressing the first limitation listed in the paper (i.e., variants of homophily/LI that consider multiple hops in the graph) would significantly increase the contribution. The paper seems to state that this is beyond the scope of the work, but I feel the amount of contribution is a bit borderline as it is. - The experiments positing that LI is well correlated with GNN performance are on synthetic or semi-synthetic datasets only. Ideally, some trend could be shown across a variety of real-world datasets, though it is understandable that such a trend may not exist given various factors besides LI. ### Typos - Figure 8: "edge homopily" Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My questions correspond to points from the prior section: - Assuming that LI is indeed a new measure, do you see some variant of it that could work for more than 1 hop across the graph? Or otherwise, do you think such measures are unlikely to be informative? - Do you expect there to be a general trend (even if it is a weak trend) that, across the usual benchmark graph datasets, higher LI corresponds to higher node classification performance? Or do you at least expect a higher correlation than with (adjusted) homophily? I think it would be interesting to see plots like the ones for the synthetic datasets, except for (some fraction of) the graphs in Table 4. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Limitations are adequately addressed in a designated section of the conclusion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and useful comments! We address your concerns below. > While LI seems like a very sensible measure, it is rather straightforward, so I am left thinking: First, has it really not been proposed by some prior work, especially given that the general concept of normalized information measures is well-established We indeed use the normalized mutual information as a well-known concept when defining the LI measure. Our suggestion here is to apply this concept to graphs by defining random variables to which it should be applied. We are not aware of prior work using normalized mutual information for graphs in a similar manner to characterize edge-label relations. > Assuming that LI is indeed a new measure, do you see some variant of it that could work for more than 1 hop across the graph? Or otherwise, do you think such measures are unlikely to be informative? This is a very reasonable suggestion. At earlier stages of our research, we indeed defined and measured “two-hop homophily” and “two-hop LI”. These are similar measures to those discussed in the paper but defined via pairs of nodes at distance two (or larger distances can be considered). However, we have not observed any interesting patterns for these measures: non-trivial connection patterns become harder to detect as we increase distances between nodes. Thus, we decided to make our work more focused and to advocate simple but efficient one-hop measures. > Do you expect there to be a general trend (even if it is a weak trend) that, across the usual benchmark graph datasets, higher LI corresponds to higher node classification performance? Or do you at least expect a higher correlation than with (adjusted) homophily? I think it would be interesting to see plots like the ones for the synthetic datasets, except for (some fraction of) the graphs in Table 4. 
You are absolutely right that various factors besides LI may affect the performance - and it is the reason why we initially experimented only on synthetic and semi-synthetic datasets. However, to address this question, we also conducted the experiment you suggest and measured the correlation on the datasets in Table 4. We report the results in our [general response](https://openreview.net/forum?id=m7PIJWOdlY&noteId=hxPviGCfUx). Here we show that despite other factors potentially affecting the results, LI better agrees with GNN performance than homophily measures. We hope that we addressed your concerns and will be happy to answer any additional questions! --- Rebuttal Comment 1.1: Comment: Thank you for the response. While I am still a bit doubtful on the overall amount of contribution / novelty of the introduced measure, I think the new experimental result is a good argument for LI, and I have raised my score. Edit: I see that Reviewer oQPA has the same doubts as me regarding novelty. While I won't lower my score, I do think that the level of contribution here may be more suitable for a workshop paper rather than the main conference; the contribution would be improved if the authors added content by finding and analyzing some interesting pattern with a more graph-theoretic measure, e.g., a multi-hop one as we discussed above.
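To make the discussion of LI in this thread concrete, here is a minimal sketch of the edge variant, computed as the normalized mutual information $I(y_u; y_v)/H(y_u)$ for a uniformly sampled (directed) edge. This is our own toy code, not the authors' implementation:

```python
import math
from collections import Counter

def edge_label_informativeness(edges, labels):
    """LI_edge = I(y_u; y_v) / H(y_u) over a uniformly sampled edge (u, v)."""
    pairs = [(labels[u], labels[v]) for u, v in edges]
    pairs += [(b, a) for a, b in pairs]  # count each undirected edge both ways
    n = len(pairs)

    def entropy(counts):
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    h_joint = entropy(Counter(pairs))
    h_marg = entropy(Counter(a for a, _ in pairs))  # degree-weighted marginal
    # Marginals of both endpoints coincide, so I = 2*H(marg) - H(joint).
    return (2 * h_marg - h_joint) / h_marg
```

Note that both a perfectly homophilous graph and a fully heterophilous two-class bipartite graph give LI = 1: in either case a neighbor's label fully determines the node's label, which is precisely what the measure is designed to capture, unlike homophily.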
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their comments and suggestions! In this general response, we address the comment raised by several reviewers about conducting additional experiments on real datasets. Analyzing which measures better agree with GNN performance on real data is non-trivial. Indeed, if we consider diverse node classification problems, there are many factors affecting the GNN performance: the size of the datasets and fraction of training examples, the edge density, the number of features and their informativeness, the number of classes and their balance, and so on. However, for the completeness of the study, we analyze how well different homophily measures and LI correlate with GNN performance on real datasets from Table 4. In this experiment, we measure the ROC AUC score since some of the datasets are highly unbalanced, and thus accuracy is not a suitable performance measure. For the datasets with more than two classes, we report the macro-averaged ROC AUC score. The relation between ROC AUC and LI (for GAT and GraphSAGE) is shown in Figures 1 and 2 in the attached pdf. Table 1 reports the Spearman correlation of ROC AUC with all the measures. We see that the largest correlation is achieved by LI (node and edge variants). Let us note that this experiment does not aim to compare homophily measures with each other since a good homophily measure is expected to properly capture the tendency of edges to connect similar nodes and is not expected to correlate with GNN performance. That is why we compare homophily measures using the proposed theoretical framework in our paper. We hope that these experiments address the concerns of the reviewers and we are open to further discussions. Pdf: /pdf/f2cd8257a94943d8d4a64bd41a373df44a4130cb.pdf
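As a side note, the Spearman correlation reported in Table 1 of this general response is simply the Pearson correlation of the rank-transformed values. A minimal tie-free sketch (our own code; in practice one would use `scipy.stats.spearmanr`, which also handles ties):

```python
def spearman(xs, ys):
    """Spearman rank correlation; assumes no ties for simplicity."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A monotonically increasing relation between a measure (e.g., LI) and ROC AUC gives a correlation of 1 regardless of the exact functional form, which is why rank correlation is a natural choice here.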
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Equivariant Adaptation of Large Pretrained Models
Accept (poster)
Summary: The authors tackle an important problem: equivariant NNs are very useful because they capture important symmetries of the data that lead to higher sample efficiency and more accurate and robust predictions. Yet, EqNNs are very hard to scale because of their specialized architectures. In particular, they cannot benefit from large-scale pre-training. The authors propose to use the canonical localization network (an EqNN) from [11] to equip pre-trained models with equivariance. The authors train the localization NN jointly with the prediction network. They test both on the original and augmented test set (with various transformations). The authors identify that directly applying the approach from [11] does not work because the localization network can transform the input into atypical views for the training dataset. Then they propose a KL regularization term that encourages a delta function at the identity transformation. Their approach works effectively both for discrete and continuous groups. Strengths: S1: The idea of equipping pre-trained models with equivariance is very timely and potentially useful for large-scale applications. S2: The regularization term is well-motivated and effective. S3: The experiments are thorough, including both discrete and continuous groups. Weaknesses: W1: It would be useful to study your approach in a more realistic setting where you do not have to augment the test dataset but rather take an existing test dataset. I'd recommend standard out-of-distribution evaluation on IN-{adversarial, sketch, renditions}, INv2 etc. Another domain for experimentation is in Transfer Learning from IN-1K pretraining. There are some standard datasets, such as Flowers-102, Food, etc. W2: Since you want to integrate equivariance with pre-trained models, I think you should also consider SSL pre-trained checkpoints, since they are the foundation of SOTA pre-training (Pre-training on IN-1K may not give you the best features for downstream tasks). 
Some pre-trained checkpoints are available on github, e.g. https://github.com/rdangovs/essl/tree/main/imagenet/simclr. W3: As you acknowledged, your model is constrained by the EqNN architectures for the canonical localization network. I wonder if you could consider discussing in your main text more flexible (albeit non-exact) approaches for encouraging symmetry in predictions, such as MSE (https://arxiv.org/abs/2303.02484) and E-SSL (Equivariant Contrastive Learning). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See the above Weaknesses. My comments should be treated as questions that could improve the quality of your work. I think you have an awesome paper, but I also think that addressing W1-3 could make your work even more impactful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work and constructive feedback to improve the quality of the paper. > More realistic settings without augmented test sets: I'd recommend standard out-of-distribution evaluation on IN-{adversarial, sketch, renditions}, INv2 etc. Another domain for experimentation is in Transfer Learning from IN-1K pertaining. There are some standard datasets, such as Flowers-102, Food, etc. We would like to highlight that out-of-distribution in the submission refers to robustness to transformations that form a group, such as rotations of the original images. OOD generalization on the suggested datasets such as IN-{adversarial, sketch, renditions} and INv2 involves potentially non-linear transformations for which we do not have the corresponding equivariant network required for canonicalization. Extending our framework to transformations beyond groups with approximate equivariance will be an interesting future direction of research. > Experiments with SSL pretrained checkpoints: [...] I think you should also consider SSL pre-trained checkpoints [...] We now provide the full fine-tuning results for the E-SSL checkpoint (with best acc1 on the GitHub link shared by the reviewer) on CIFAR10 in Table 3 in the rebuttal pdf. Interestingly, while the Accuracy was similar to vanilla classification checkpoints, the C8-Average Accuracy for E-SSL was poorer. Using our method, we are able to observe similar Accuracy as well as equivariance. > Non-equivariant architecture for canonicalization: I wonder if you could consider in your main text discussion more flexible (albeit non-exact) approaches for encouraging symmetry in predictions, such as MSE [...] and E-SSL In addition to the E-SSL results above, we now also provide results for training a non-equivariant canonicalization function $c$ in Table 2 in the rebuttal pdf (row titled ConvNet Canonicalizer). 
We use a simple ConvNet as $c$ and train it to predict the angle for a rotated image, i.e., the prior loss is replaced with an MSE loss between the predicted and actual angle of rotation. As a result, we observe “approximate” equivariance with a slight drop in the Accuracy over the original test sets. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications. Comment: I read the whole rebuttal and would like to maintain my score. A revised paper that incorporates the discussions from the rebuttal would be suitable for publication. Small comment: It would be nice to see OOD tasks in future work because they are more realistic and the extension beyond groups is a meaningful one.
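To illustrate the two ingredients discussed in this rebuttal, inverting the canonicalizer's predicted group element and the prior loss (for a discrete group, the KL divergence between a delta at the identity and the predicted distribution reduces to a cross-entropy on the identity element), here is a hypothetical sketch for the C4 rotation group. The function names, the use of raw NumPy arrays, and the convention that index 0 is the identity are our assumptions; the actual method uses a learned equivariant canonicalization network:

```python
import numpy as np

def canonicalize_c4(x, logits):
    """Map an image to canonical orientation by undoing the predicted rotation.

    x: (H, W) image; logits: canonicalizer scores over the 4 rotations,
    where index k means "x is the canonical image rotated k quarter-turns".
    """
    k = int(np.argmax(logits))
    return np.rot90(x, -k)  # apply the inverse group element

def prior_loss(logits):
    """KL(delta_identity || softmax(logits)) = -log p(identity)."""
    z = logits - logits.max()          # numerically stable log-softmax
    log_p = z - np.log(np.exp(z).sum())
    return -log_p[0]                   # index 0 is the identity rotation
```

If the canonicalizer correctly identifies the applied rotation, `canonicalize_c4(np.rot90(x, k), logits)` recovers `x` for every `k`, and `prior_loss` is minimized when all probability mass sits on the identity, which is exactly what the regularization encourages on the fine-tuning data.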
Summary: The paper proposed a fine-tuning based adaptation method for equivariance in pre-trained models. The core of the method is the canonicalization function, which is trained to normalize the input to a common orientation. The paper investigates challenges associated with training canonicalization functions and proposes a canonicalization prior regularization to alleviate those challenges. Experiments in image and point-cloud domains demonstrate that fine-tuning a backbone network with the proposed method improves equivariant properties of a backbone model. Strengths: 1. The paper is very clearly written and easy to follow. All parts of the paper/method are nicely motivated. 2. The paper tackles an important problem of efficient equivariance in neural networks. This problem is especially relevant with regard to large models. 3. The paper provides valuable insights into a practical application of canonicalization functions to achieve equivariance/invariance. Weaknesses: In no particular order. 1. The paper argues about the equivariant adaptation, but only invariance (and only to the rotation group) is demonstrated in the experiments. 2. Although there is a conceptual novelty in the paper (in the part of analyzing the challenge of applying canonicalization functions), the proposed adaptation approach relies heavily on the prior work of [11]. With this, the methodological novelty of the proposed approach is rather limited. 3. The proposed method requires fine-tuning the whole model together with the canonicalization function. This can be troublesome, especially for large models. With this, I am also wondering if the "equivariant adaptation" is the right name, because the essence of the method seems to be more in equivariant fine-tuning. It would be nice to have experiments, where the backbone model is frozen, and only the canonicalization function is trained. 4. The paper claims out-of-distribution robustness as a key benefit of the method (L224). 
However, it is not clear how the canonicalization function is itself robust to the distribution shift. If the canonicalization function is trained on cifar10 and tested on cifar100, for example, will it still be able to deliver reasonable distribution over a group orbit? In that sense, it is important to note that the proposed method does not provide guaranteed equivariance as equivariant networks do. 5. I am not sure that comparing methods based on raw Accuracy is a fair evaluation protocol. If the goal of equivariant adaptation is to robustify a model (to a symmetry group of interest), then it seems more appropriate to keep track of the relative change of accuracy from original to G-averaged test sets. Otherwise, it is hard to disentangle if the performance improvement is due to better equivariance or due to improved data augmentation (and a higher accuracy on the non-transformed dataset as a result). This seems to be an important part of the comparison, which is missing from the paper. For example, consider Table 2 results for VIT on Cifar10, the LC model loses 0.2% of its Accuracy, while Regularized LC loses 1% of its Accuracy. Which one would we call more robust? Others: 1. Possibly useful related work to resolve the issue discussed in L208 - 219. Moskalev et al. LieGG: Studying Learned Lie Group Generators. NeurIPS22 https://arxiv.org/abs/2210.04345 2. Missed related work. Tai et al. Equivariant Transformer Networks. ICML19. https://arxiv.org/abs/1901.11399 UPD: Authors addressed most of my concerns in the rebuttal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I suggest authors address weaknesses for the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitation section nicely highlights limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and suggestions. > Experiments on equivariant tasks: The paper argues about the equivariant adaptation, but only invariance (and only to the rotation group) is demonstrated in the experiments. We appreciate the reviewer's concern. To address this, we provide results of instance segmentation on the COCO 2017 val set with the Segment Anything Model [1] (SAM) and MaskRCNN [2], where the pre-trained models were not fine-tuned. Our training involves only training the canonicalization function $c$ to map images in the train set of COCO 2017 to identity. As presented in Table 1 in the rebuttal pdf, we observe that our equivariant adaptation obtains mask-mAP scores on the original test set identical to the zero-shot performance of the model while being equivariant for the instance segmentation task. The original models are far from being equivariant, as highlighted by our results. Finally, extending our framework to transformations beyond rotations with approximate equivariance will be an interesting future direction. > Experiments with frozen backbone: fine-tuning the whole model together with the canonicalization function. This can be troublesome, especially for large models. [...] nice to have experiments, where the backbone model is frozen, and only the canonicalization function is trained. We now provide results for instance segmentation in Table 1 in the rebuttal pdf, where the setup is identical to the reviewer’s suggestion, i.e., we freeze the Segment Anything Model and MaskRCNN and only train the canonicalization function $c$. > Out of distribution generalization of canonicalization function, and equivariance: [...] If the canonicalization function is trained on cifar10 and tested on cifar100, for example, will it still be able to deliver reasonable distribution over a group orbit? 
In that sense, it is important to note that the proposed method does not provide guaranteed equivariance as equivariant networks do [...] We would like to highlight that out-of-distribution in the submission refers to robustness to rotations of the original images. Since the canonicalization function $c$ is equivariant to rotation groups, contrary to the statement in the review, it is guaranteed that $c$ and the entire pipeline are equivariant and robust to this distribution shift. In the particular case of CIFAR 10 and CIFAR 100, since the underlying images are very similar, their canonicalization networks can be identical and can transfer. In general, out-of-distribution generalization to transformations beyond those handled through equivariance remains an interesting direction for future work. > Fair comparison metrics: I am not sure that comparing methods based on raw Accuracy is a fair evaluation protocol. [...] it seems more appropriate to keep track of the relative change of accuracy from original to G-averaged test sets. [...] Both the LC [3] and our Prior-Regularized LC are equivariant, which means, in principle, there should be no gap between accuracy and G-averaged accuracy. Therefore, the suggested metric will not be informative. In practice, we observe small discrepancies between Accuracy and G-Averaged Accuracy, which we attribute to rotation artifacts that destroy information in the image. We will add a sentence to the paper to clarify this. > Additional related work: We thank the reviewer for their constructive feedback and suggestion of these related works, which we will incorporate into our work. [1] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything, 2023. [2] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. 
In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. [3] Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, and Siamak Ravanbakhsh. Equivariance with Learned Canonicalization Functions Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15546-15566, 2023 [4] Moskalev et al. LieGG: Studying Learned Lie Group Generators. NeurIPS22. --- Rebuttal Comment 1.1: Comment: I thank authors for the response. The rebuttal addresses my concerns, given the rebuttal experiments, clarifications and related work are added to the main paper. I thus raise my score. --- Reply to Comment 1.1.1: Title: Response to Official Comment by Reviewer vNov Comment: We thank the reviewer for their comment. >The rebuttal addresses my concerns, given the rebuttal experiments, clarifications and related work are added to the main paper We're pleased that our rebuttal has effectively addressed the reviewer's concerns. We'll add all additional experiments, clarifications, and related work in the main text. > I thus raise my score Since we haven't observed any score increase, we'd kindly like to inquire with the reviewer if they have already made any changes. **We see the changes now**
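The equivariance guarantee discussed in the rebuttal above can be sanity-checked with a toy stand-in: an energy-based canonicalizer that selects the C4 rotation minimizing a fixed scalar energy is equivariant by construction, so any predictor applied after it becomes exactly invariant. This is a hypothetical sketch with our own names, not the authors' code:

```python
import numpy as np

def canonical_orientation(x, energy):
    """Equivariant C4 canonicalizer: pick the rotation with the lowest energy."""
    return min(range(4), key=lambda k: energy(np.rot90(x, k)))

def invariant_predict(x, predictor, energy):
    """Exactly invariant to C4 rotations of the input, by construction."""
    k = canonical_orientation(x, energy)
    return predictor(np.rot90(x, k))
```

Rotating the input only permutes the candidate orientations, leaving the energy-minimizing canonical image unchanged (assuming no energy ties), so `invariant_predict` returns the same value for all four rotations of `x` regardless of how `predictor` behaves under rotation.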
Summary: Paper proposes to enforce equivariance/invariance in pretrained models by using the recently proposed canonicalization functions that transform a given input to its canonical form. This allows one to enforce symmetry during the finetuning stage without the need for retraining the model. Paper uses a prior so that the canonicalization network is biased to output identity transformation for the images in the finetuning dataset (i.e., a canonical orientation is defined as the orientation in the finetuning dataset). Empirically, this is shown to be more robust than without the prior. Strengths: Imposing invariances (via canonicalization) during finetuning rather than pretraining can be very beneficial as different finetuning datasets can have different symmetries. Authors clearly motivate the challenges of using canonicalization naively for a pretrained model that do not exist when training from scratch (e.g., Figure 3) and propose a simple regularization to solve it. Experiments show better performance across multiple finetuning datasets and pretrained models. Weaknesses: W1. Details of the overall loss function is not presented or is assumed to be known to the reader. It is harder to evaluate the impact of the prior without these details. W2. In Section 3.1, the first drawback of Learned Canonicalization is its undesired augmentation effect during the initial phase of finetuning. - However, this should exist even for the proposed prior-regularized method because the network still begins with a random initialization. - To solve this, one probably has to encode the prior of outputting identity within the architecture (for example, by defining c(x):= I + r(x) where I is the identity and r(x) initialized with very small weights). W3. The assumption that the orientations in the finetuning and pretraining datasets are "similar" (line 153) is not well defined. It may be more reasonable to apply the prior using (a small subset of) the pretraining dataset. 
This will ensure that the finetuning images match the orientation that the model was pre-trained on. W4. Experiments only evaluate invariance and not equivariance in its general form. I think it is important to demonstrate the advantage of imposing end-to-end equivariance over a pretrained model (as this is different from imposing equivariance in its intermediate representations). W5. Paper can be strengthened with experiments on other transformations than rotations (but not strictly required). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1. Section 4.1: why is the rotation augmentation baseline applied over all angles from -180 to 180 (lines 235-236)? I believe the group is known to be C8 for the LC and prior-regularized LC methods, and should be used for the baseline as well. Q2. Section 4.1: Were rotations part of the data augmentations that were performed on the model during its pretraining on ImageNet? Q3. Section 4.2: VN/CN/PCN- in Table 3 are not defined; I am not sure what these baselines are. Which dataset were the PointNet and DGCNN models pretrained on? Do all the other methods in Table 3 use pretrained models? Q4. Line 247 says that proposed approach tries to map all images in the "original training dataset" to identity. Should it say "finetuning dataset" instead? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful comments on the paper. > Missing overall loss function equation: Details of the overall loss function is not presented or is assumed to be known to the reader [...] We thank the reviewer for pointing out the absence of an overall loss function. The total loss function consists of task loss (which is cross-entropy loss) and prior loss. We will add an expression for total loss as $\mathcal{L} = \mathcal{L_{task}} + \lambda * \mathcal{L_{prior}}$, where $\lambda$ is a hyperparameter. We refer the reviewer to eq. (3), which describes the prior loss, and further to line 171, which demonstrates that it reduces to cross-entropy loss over the number of discrete rotations. We set $\lambda$ as 100 in our experiments. > Effect of initial undesired augmentation in canonicalization model: [...] We agree with the reviewer that if the prediction network is not frozen when the network begins to train, the problem of the undesired augmentation effect exists in the case of our proposed prior-regularized canonicalization. However, this undesired effect quickly reduces with training with the regularization, as demonstrated in Figures 4 and 5 in the Supplementary material (Appendix C, Effect of prior regularization paragraph). Encoding the prior in the architecture, as suggested by the reviewer, breaks the equivariance of the canonicalization function, and therefore the entire setup is no longer guaranteed to be equivariant. > Similar orientations in pre-training and fine-tuning dataset: The assumption that the orientations in the finetuning and pretraining datasets are "similar" (line 153) is not well defined. It may be more reasonable to apply the prior using (a small subset of) the pretraining dataset. We agree with the suggestion and will add a sentence to point out this possibility. 
While the proposed suggestion is interesting, in the natural image datasets we considered, the canonical orientations are aligned in the finetuning and pretraining datasets. Hence, we assume that this canonical orientation is close to the identity element (or upright images). However, learning or inferring this prior automatically using the pretraining dataset could be an exciting future research direction of this work. > Experiments with equivariance tasks: Experiments only evaluate invariance and not equivariance in its general form. We appreciate the reviewer's concern that the current set of experiments does not evaluate equivariance and rather focuses primarily on invariance tasks. To address this, we provide the result for instance segmentation on COCO 2017 [1] (val set) with Segment Anything Model [2] (SAM) and MaskRCNN [3], where the pre-trained models were not fine-tuned. Our training includes only training the canonicalization function to map images in the train set of COCO 2017 to identity. As presented in Table 1 in the rebuttal pdf, we observe that our equivariant adaptation obtains mask-mAP scores on the original test set identical to the zero-shot performance of the model while being equivariant to the instance segmentation task. > Results for fine-tuning with $C8$ augmentations: [...] the group is known to be C8 for the LC and prior-regularized LC methods, and should be used for the baseline as well [...] We now provide the results for fine-tuning with C8 augmentations in Table 2 in the rebuttal pdf. Our proposed canonicalization method is better than the suggested baseline across all datasets. > Pre-training setup: Were rotations part of the data augmentations that were performed on the model during its pretraining on ImageNet We want to point out that we did not perform the pretraining of large models but rather utilized the widely available checkpoints, as mentioned in line 229. 
However, the training setup of ResNet-50 and ViT can be found in [4] and [5], which shows that large rotations were not part of data augmentations which explains the poor G-Average Accuracy. > Description of abbreviations: Expand VN/CN/PCN in Table 3 and do all the other methods in Table 3 use pre-trained models? We thank the reviewer for pointing this out. Abbreviations VN and CN are taken from [6] and refer to Vector Neuron and Learned Canonicalization (LC). PCN stands for Prior Regularized LC. In the final version, we will change CN and PCN to LC and Prior Regularized LC to make the abbreviations consistent across all domains. For classification, we took checkpoints trained on ModelNet40, and for instance segmentation, we took checkpoints trained on ShapeNet dataset. The other methods in the table are trained from scratch. Note that we weren't aware of any foundation model for the PointCloud domain but still wanted to test our idea and show prior regularized finetuning can improve learned canonicalization [6]. We will add these details on pretrained checkpoints in Section 4.2. > Typo in the submission: Line 247 says [...] Should it say "finetuning dataset" instead? Thanks for pointing out the typo. $$$$ $$$$ [1] Lin, T. et al. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014. [2] Kirillov, A. et al. Segment anything, 2023. [3] He, K. et al. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. [4] He, K. et al. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016 [5] Dosovitskiy, A. et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. [6] Kaba, S. et al. 
Equivariance with Learned Canonicalization Functions Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15546-15566, 2023
Summary: The work proposes a method for making large pre-trained models equivariant to specified group actions. The method uses the canonicalization approach, where the input is oriented to a specific orientation before being fed to the pre-trained model with the help of a trainable canonicalization network. The authors use a prior distribution over the group elements to regularize the output of the canonicalization network. The proposed technique shows robustness with respect to the group actions in different downstream tasks, such as classification and part segmentation. Strengths: 1. The paper pursues a novel approach to make pre-trained models scale equivariant. 2. The proposed technique is simple and effective, making it well-suited for practical adaptation. 3. The work is well-written and easy to follow. Weaknesses: 1. Experimental Setup: One major drawback of this work is the experimental setup. The authors used priors over the discrete group elements (i.e., 8 discrete rotations of C8 for the image domain) for the experiments. During the evaluation on the test set, the images were augmented by the action of group C8. This approach is not appropriate because we are providing the model with information about the test set augmentation during training. On the other hand, for the data augmentation setup, the baseline models were fine-tuned using random rotations between -180 to 180 degrees (i.e., all possible rotations). In this case, no information about the test set augmentation was given to the baseline. This difference in augmentation approaches makes the comparisons inappropriate. An appropriate comparison should either: 1. Use a uniform continuous prior over the group elements. 2. Perform data augmentation only with the discrete group actions while training the baseline. Option 1 is more practical because in the real world, we often do not know specific priors over the group actions. 2. 
Recent studies have shown that fine-tuning large pre-trained models on small datasets hampers downstream tasks [1] and makes the model vulnerable to out-of-distribution data. Experiments with frozen pre-trained backbones and trainable classification/segmentation modules would be more justifiable. 1.Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Should the Kronecker delta functions in line 172 be normalized by the number of group elements for discrete priors? 2. What is the specific structure of the **canonicalization** module used in this work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful feedback. Below we address the reviewer's concerns. > Concerns on experimental setup: [...] appropriate comparison should [...] perform data augmentation only with the discrete group actions while training the baseline. We thank the reviewer for pointing out this issue. We now provide the results for fine-tuning with C8 augmentations in Table 2 in the rebuttal pdf. Our proposed canonicalization method is better than the suggested baseline across all datasets and significantly so in the case of CIFAR100. Regarding uniform continuous prior, Appendix C details some of the optimization challenges when incorporating this with steerable networks. As mentioned in lines 314-316, a complete understanding of this and an extension of our method to continuous rotation in images is left as a future work. > Fine-tuning and frozen backbone architectures: [...] experiments with frozen pre-trained backbones and trainable classification/segmentation modules would be more justifiable We understand the justification of the reviewer’s suggestion to use frozen pre-trained backbones and trainable modules. To address this concern for more realistic settings, we provide the result for instance segmentation on COCO 2017 [1] (val set) with Segment Anything Model [2] (SAM) and MaskRCNN [3], where the pre-trained models were not fine-tuned (due to computational resource constraint). Our training includes only training the canonicalization function to map images in the train set of COCO 2017 to identity. As presented in Table 1 in the rebuttal pdf, we observe that our equivariant adaptation obtains mask-mAP scores on the original test set identical to the zero-shot performance of the model while being equivariant to the instance segmentation task. > Should the Kronecker delta functions in line 172 be normalized by the number of group elements for discrete priors? 
Since we are putting all probability mass of the prior on identity for natural images, we are using a Kronecker delta function. If we use the normalized Kronecker delta function as suggested, the prior will not be a probability distribution. > What is the specific structure of the canonicalization module used in this work? We thank the reviewer for pointing out this omission, and we will add the following architectural details to the appendix of the final version of our paper: We extensively use the $\texttt{escnn}$ library [4, 5] to design equivariant convolutional architectures. - For classification experiments, we use a $C8$-equivariant convolutional network; each layer consists of convolution with regular representation except the first layer, which maps the trivial representation of the $C8$ group to its regular representation. We use equivariant implementation of batch normalization, ReLU activation function, and dropout as proposed in [4, 5]. - For instance segmentation, we use a $C4$-equivariant WideResNet, which includes repetitive stacking of equivariant versions of *basic* residual blocks on several consecutive *bottleneck* residual blocks (for details on these residual blocks, we refer readers to Figure 1 in [6]). The rest of the architecture details and hyperparameters are identical to the design of the $C8$-equivariant convolutional network. $$$$ $$$$ [1] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014. [2] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything, 2023. [3] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. 
Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. [4] Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. Advances in Neural Information Processing Systems, 32, 2019. [5] Gabriele Cesa, Leon Lang, and Maurice Weiler. A program to build e(n)-equivariant steerable CNNs. In International Conference on Learning Representations, 2022. [6] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I thank the authors for the detailed response and additional experiments. It seems finetuning the baselines with C8 augmentation reduced the performance gap between the proposed method and the baseline. And should be added to the main text. I am increasing my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer qZrQ Comment: We thank the reviewer for increasing the score. >It seems finetuning the baselines with C8 augmentation reduced the performance gap between the proposed method and the baseline. And should be added to the main text We will add the C8 baseline to the main text as suggested. While employing C8 augmentation as a baseline has slightly narrowed the performance gap, our method continues to exhibit consistent performance improvement across all datasets, particularly pronounced in CIFAR 100. Given that the reviewer still rated it borderline, we would appreciate any concerns/questions they might have in mind that could enhance their evaluation and potentially lead to an increased score.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable and constructive feedback. We are pleased to see that they found our work simple (reviewers **45sc**, **qZrQ**), novel, and effective (reviewers **qZrQ**, **nPdb**) in addressing a relevant, important (**vNov**) and timely (**nPdb**) problem. In the general response, attached rebuttal PDF, and individual reviewer responses, we believe to have addressed most if not all, reviewer questions and concerns. In particular, we are including significant new results on the Segment Anything Model [1] (SAM) and MaskRCNN [2] for equivariant segmentation, as well as additional baselines and ablations for invariant classification. These new results address common concerns on 1) equivariant tasks, 2) adapting with completely frozen weights, 3) missing baseline with C8 augmentation, 4) using E-SSL embeddings 5) a new baseline using a non-equivariant canonicalization network. Below, we give more context on these new experiments and refer the reviewers to the new and updated tables in the attached rebuttal PDF: 1. We provide results of instance segmentation on COCO 2017 [3] (val set) with Segment Anything Model (SAM) and MaskRCNN, where the pre-trained models were not fine-tuned, in part, due to the size of these pre-trained models. Our training includes only training the canonicalization function to map images in the train set of COCO 2017 to identity. As presented in Table 1 in the rebuttal pdf, we observe that our equivariant adaptation obtains mask-mAP scores on the original test set identical to the zero-shot performance of the model while being equivariant to the instance segmentation task. As this task is an equivariant task where we don’t finetune the large foundational prediction model, this addresses reviewers' concerns about only focussing on invariant tasks and fine-tuning the prediction network. 
We show that we can achieve equivariance by just training the canonicalization function (canonicalizer) and attaching it to a foundation model. 2. Based on reviewers’ feedback, we provide some additional baselines for the classification tasks on the image datasets: - we have added a baseline with C8 augmentation during fine-tuning, and the corresponding results are available in Table 2 in the rebuttal pdf (row titled C8-Aug.). - CIFAR10 results using the suggested E-SSL [4] checkpoint for ResNet-50 also appear in Table 3 in the rebuttal pdf. - Finally, based on the suggestion of Reviewer **nPdb**, we explore a setup for pre-trained ResNet-50 where we replace an equivariant canonicalizer with a non-equivariant convolutional network (ConvNet Canonicalizer). We will add the architectural details in the appendix of the final version of our paper. We train the ConvNet Canonicalizer to predict the angle for a rotated image, i.e., the prior loss is replaced with an MSE loss between the predicted and actual angle of rotation. As a result, we observe “approximate” equivariance with a slight drop in the Accuracy over the original test sets. The accuracy values are provided in Table 2 (row titled ConvNet Canonicalizer). $$$$ We plan to include all these new results in the final version of the paper. We hope that the reviewers can reassess our work and its contributions in light of this. $$$$ $$$$ [1] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything, 2023. [2] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. [3] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. 
In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014. [4] Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljacic. Equivariant self-supervised learning: Encouraging equivariance in representations. In International Conference on Learning Representations, 2022. Pdf: /pdf/ae30bd28ead7c6b23bb57b82db9fce1a1ef33685.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
FITS: Modeling Time Series with 10k Parameters
Reject
Summary: The authors introduce FITS, a lightweight model for time series analysis. Unlike existing models that process raw time-domain data directly, FITS operates on the principle of manipulating time series through interpolation in the complex frequency domain. By discarding high-frequency components that have a negligible impact on the time series data, FITS achieves performance comparable to state-of-the-art models for time series forecasting and anomaly detection tasks. Additionally, FITS has a remarkably compact size, consisting of only approximately 10k parameters. This lightweight model is mainly constituted by a simple Complex-valued Linear Layer functioning on frequency domain. Strengths: - About the presentation: Straight-forward to follow, simple and direct sentences. - About the contribution: The proposed FITS framework is very light-weighted, since its learning parameters mainly coming from 1 dense layer. This contribution make its great application for actual real-world scenarios, applying on edge devices. The natural idea about *Low Pass Filter* is not new, but is used with good reasoning. - About the experiment: Authors illustrate the effectivenesses of FITS with two main time series tasks: Forecasting and detecting anomaly. While these experiments are not extensive, it effectively support the contributions the authors claim: Comparative performance and much more lightweight compared to existing SOTAs. Weaknesses: - About the presentation: Figure 1 has some too dim colors and hard to see; Figure 2 is also quite small, the font size should be increased. - About the contribution: - Details about input or output dimensions are not discussed (only discuss about the temporal axis). It is not clear whether the framework can be applied to multivariate series? How to choose cutoff frequency in case of multivariate series, when the harmonics are likely to be different over different variables? 
- While the different components constituting FITS are used with clear intentions, these techniques or algorithms (rFFT, RIN, low-pass filter, …) are not new and even well established. - The experiments for both time series and anomaly detection tasks suggest the framework has a great variance when evaluating on different datasets. In general, FITS cannot achieve SOTA performance in many scenarios, which may be unsuitable for performance-critical applications. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Are there any plans to enhance the visibility of the figure, perhaps by adjusting the colors, the font size, or using a different visualization technique? - Can the proposed framework be extended to handle multivariate time series? If so, how do you choose the cutoff frequency when the harmonics are likely to differ across different variables? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Authors recognize following weaknesses: - FITS struggles with binary-valued time series and time series with missing data. - Binary-valued time series are better suited for time-domain modeling due to their compact raw data format. - For time series with missing data, a two-step approach is suggested: apply simple time-domain imputation techniques before utilizing FITS for analysis. The authors should consider amend the weaknesses mentioned above if possible or make some modifications to the manuscripts to rebuttal the comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely apologize for the inconvenience caused by the color scheme of the figures. In the revised version, we will change the color scheme to enhance visibility and ensure a better viewing experience for readers. - W2.1, Q2: Can the proposed framework be extended to handle multivariate time series? If so, how do you choose the cutoff frequency when the harmonics are likely to differ across different variables? ***Certainly, FITS is adept at handling multivariate time series data.*** *Our experimentation, as highlighted in Table 1 of the original manuscript, was conducted on multivariate datasets, thereby showcasing FITS' competence in managing multi-channel data. Detailed dataset dimensions are provided in Table 1.* *We choose the harmonic cutoff by observing the amplitude spectrum and selecting the largest harmonic that appears across the dataset as a cutoff frequency. However, our experiment shows that the performance boost introduced by increasing the cutoff frequency is minor after the cutoff frequency is larger than the second/third order harmonic.* *Concerning datasets with different base frequencies,* *our observation suggests that the likelihood of inter-channel correlation is minimal. It's important to note that individual cutoff frequencies can be specified for each channel, providing the flexibility to tailor the choice according to specific data characteristics.* ***For detailed discussions, please refer to the common response 'About Multivariate' section.*** - W2.2: About the novelty: *We acknowledge that the specific blocks and algorithms are not our unique contributions. The LPF and RIN are well-established techniques and Complex-valued neural networks can even be dated back to the 19th century. 
But as you mention in the Strength, we select these techniques with good reasoning.* ***Our primary innovation lies in reformulating forecasting and reconstruction tasks as frequency domain interpolations.** We introduce an effective approach to leverage frequency domain information using complex-valued neural networks.* *Finally, FITS represents a significant contribution as an exceptionally lightweight TSA model suitable for deployment on edge devices with comparable or even superior performance to the SOTA models. This counterintuitively demonstrates that compact models can achieve comparable or even superior performance when compared to larger models.* - W2.3: About AD performance: *Indeed, FITS may not be the optimal choice for performance-critical applications. However, it is designed as a lightweight algorithm tailored for edge devices with limited resources (e.g., smart sensors), bringing forth distinct advantages. FITS demands minimal memory and computational resources, and notably, it demonstrates **sub-millisecond level inference time.** This temporal efficiency is inconsequential when compared to the time taken for inferencing a large model or even the communication. These characteristics render FITS exceptionally well-suited for the **swift detection of critical errors and rapid response**. By integrating FITS as a coarse-grained filter, followed by a specialized AD algorithm for finer-grained detection, the overall system achieves robustness and responsiveness, addressing both severe and nuanced anomalies effectively.* --- Rebuttal Comment 1.1: Title: Extra comment on authors' rebuttal Comment: First, I want to thank the authors for responding. However, here are some extra comments about the explanations: - W2.1: I understand that for multivariate cases, multiple base frequencies are selected but with the introduce of an extra assumption about uncorrelated channels (variables). 
This assumption (which is not held for all datasets) should be put in the revised manuscript, together with how FITS handle multivariate data. - W2.2: Your answer do not bring any new information. With these, I will keep the score as it is. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the responses to our rebuttal, and we would like to further clarify these two questions. - W2.1 I understand that for multivariate cases, multiple base frequencies are selected but with the introduce of an extra assumption about uncorrelated channels (variables). This assumption (which is not held for all datasets) should be put in the revised manuscript, together with how FITS handle multivariate data. Thank you for the suggestion and we shall clarify it in the revised version. At the same time, it seems that our earlier response is not crystal clear. As mentioned in the common response at the top rebuttal section, FITS handles multivariate data by weight sharing. In practice, channels often share a common base frequency when originating from the same physical system. For instance, signals from electrical appliances commonly have a base frequency of 50/60Hz, while traffic flow across a city follows a daily base frequency. Most of the datasets used in our experiments possess such attributes, and hence we simply apply weight sharing strategy for them. Such an approach balances performance and efficiency. For datasets that indeed contain channels with different base frequencies, we can cluster those channels according to the base frequency and train an individual FITS model for each cluster. We shall elaborate on the above in the revised version. - W2.2 Your answer do not bring any new information. We respect the reviewer's opinion and agree that the various components used in our FITS model are known in the literature. 
Our main contribution is the proposed overall framework, and it is well summarized by the reviewer: "The proposed FITS framework is very light-weighted, since its learning parameters mainly coming from 1 dense layer. This contribution make its great application for actual real-world scenarios, applying on edge devices.".
Summary: This paper builds a model for time series learning in the frequency domain. The key idea is to discard high-frequency components to reduce the model size. This leads to a simple model under 10k parameters, 50x smaller than DLinear. Experimental results suggest the model can achieve comparable performance to the state-of-the-art methods on long-term forecasting tasks and several anomaly detection tasks. Strengths: 1. Transforming time series into the frequency domain and then training a model is an interesting idea. 2. The model is extremely small, even 50x smaller than DLinear, which is already very small. 3. The experiments on long-term forecasting are strong. Weaknesses: 1. It is unclear whether the approach has practical value. DLinear may already be sufficiently small to fit on a normal edge device. Thus, it is not persuasive to have an even smaller model. 2. Training efficiency is not reported. 3. The experiments on anomaly detection are not strong. Firstly, all the baselines are neural networks. However, traditional methods like OCSVM and IForest can have strong performance [1]. Secondly, the existing anomaly detection datasets could be flawed. Thus, I am particularly interested to know how the proposed method performs on the synthetic data provided in [1]. The pattern-wise outliers in [1] are synthesized by modifying the sinusoidal waves. Thus, the proposed method seems well aligned with the design of this dataset. I am curious how it performs on this dataset. [1] Revisiting Time Series Outlier Detection: Definitions and Benchmarks Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the proposed method perform on the synthetic anomaly dataset [1]? 2. Can the proposed method outperform classical anomaly detection methods, such as OCSVM and IForest? 3. Why do we need such a small model, as DLinear may already be small enough for most of the edge devices? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful questions. - W1&Q3: Why do we need such a small model, as DLinear may already be small enough for most of the edge devices?: A: *We acknowledge that DLinear is already a compact model. However, for edge devices powered by **MCU (Microcontroller Units), e.g., smart sensors**, it might only have limited onboard storage in the **sub-MB** range. For instance, commonly used STM32 Series devices have just 192KB flash memory, making it infeasible to accommodate DLinear's parameter size. Controllers for smart sensors, such as the 8051 and ATmega32, have even more restricted memory, with capacities of 4KB and 32KB, respectively. In such scenarios, FITS' model size in the **KB range** presents a more viable option, delivering comparable performance.* |MCU|ROM/Flash Memory| |:--|:--| |805x| 4KB / 8KB| |ATmega328P (Arduino UNO)|32 KB| |STM32 Series|Up to 192KB| |ESP32 Series|4MB / 8MB| *For the devices that are deployed off the grid, energy efficiency is also a critical factor. FITS requires less calculation and hence can run with extended battery life.* We will address this importance in our revised version. *More importantly, FITS is capable of handling anomaly detection tasks, which is not the case for DLinear, as shown in the results of TimesNet [1]. As discussed in the **'About Anomaly Detection' section of the common response**, FITS can capture critical errors swiftly thanks to its **sub-millisecond level inference time**. Such time consumption is even negligible compared with the communication latency. Edge devices can shut down the defective machine immediately once the fatal anomaly is detected, preventing the system from further damage. Furthermore, FITS can be integrated with other AD algorithms to achieve better performance, as described in the common response.* - W3&Q1&Q2: About Baseline and Dataset of AD task: A: *We show the comparison with the mentioned OCSVM and IForest in the common response. 
We find that FITS **delivers superior performance on all five datasets**. Please refer to the **common response** for a detailed analysis.* *The mentioned dataset is indeed well aligned with FITS. We will report the performance on them in the revised version.* *However, we would like to emphasize that we conduct AD to demonstrate the use cases of FITS rather than to claim superior performance.* - W2: About training efficiency: A: *FITS has very **high training efficiency**. With a single NVIDIA Titan Xp, FITS can finish training within 5 minutes on ETT datasets and 30 minutes on the Traffic dataset (which has 862 channels). Interestingly, for super-lightweight models such as FITS and DLinear, we find that the **bottleneck is the loss computation on CPU instead of the model itself**. Considering the compact size of FITS, the memory footprint is also very minor.* *For detailed analysis, please refer to the **'About Efficiency' section** of the common response G2.* [1] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2d-variation modeling for general time series analysis. In International Conference on Learning Representations, 2023. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: Most of my concerns have been addressed, so I will increase my score. The authors are encouraged to do some analysis using the synthetic data (not only anomaly detection but it could also be used in forecasting tasks). This can help readers better understand how the algorithm works, and the pros/cons. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the recognition of our work. Following your suggestion, we conduct experiments on the aforementioned synthetic dataset, and the results are shown in the following table. We generate the synthetic dataset using the script provided in the benchmark with the default setting, i.e., 5% outliers on each channel with different outlier types. 
We generate 4000 time-steps as our dataset, in which we take 2500 for training and the remaining 1500 for testing. For our FITS model, we use four different reconstruction windows, labeled as FITS-win{xxx}. We compare with the results retrieved from Table 17 of the original paper [1].

|Model|Precision|Recall|F1-score|
|---|---|---|---|
|FITS-win24|1|1|1|
|FITS-win50|1|1|1|
|FITS-win100|1|0.9993|0.9996|
|FITS-win400|1|0.9991|0.9995|
|AR|0.59|0.77|0.64|
|GBRT|0.47|0.56|0.51|
|LSTM-RNN|0.22|0.26|0.24|
|IForest|0.48|0.57|0.52|
|OCSVM|0.62|0.74|0.67|
|AutoEncoder|0.20|0.24|0.22|
|GAN|0.15|0.15|0.15|

The table clearly demonstrates FITS' superior performance compared to other models for this synthetic dataset. We attribute this to the setting of this synthetic dataset, which is constructed from a sinusoidal wave with a single frequency, augmented by added anomaly patterns. These patterns can be challenging to identify in the time domain. However, FITS excels in capturing these features in the frequency domain, allowing it to easily detect anomalies that introduce unexpected frequency components. For instance, consider a 16-point segment of a sinusoidal wave with a period of 8. The frequency representation should appear as [0, 0, 1+j, 0, 0, 0, 0, 0, 0]. After downsampling with a DSR of 2, the resulting 8-point segment exhibits a frequency representation of [0, 0, (1+j)*0.5, 0, 0]. FITS learns to reconstruct the original segment by scaling the frequency component at frequency 2 by 2 and zero-padding the frequency representation to a length of 9. However, anomalies introduce unexpected frequency components, which FITS is not trained to handle, leading to compromised reconstruction. Furthermore, we appreciate the intriguing suggestion to conduct forecasting experiments on synthetic datasets. Such experiments make the behavior of frequency interpolation interpretable. 
Specifically, we conduct a frequency response test for FITS with a synthetic dataset that combines sinusoidal waves of four periods, i.e., 120, 60, 30, 24 (base, 2nd harmonic, 4th harmonic, and 5th harmonic). We train FITS with a cut_off frequency of 20, a look-back window of 240, and a forecasting horizon of 120. Our findings are as follows: - FITS can fit the dataset with **near-zero loss** (1e-9 on training/testing set). Instead of learning complex temporal patterns directly, FITS learns in the frequency domain, where these patterns reduce to four frequency components. All FITS needs to learn is to linearly project these four components to four new positions, e.g., repositioning frequency [2, 4, 8, 10] to frequency [3, 6, 12, 15]. - FITS capably accommodates various combinations of frequencies present in the training set, thanks to the inherent independence of each frequency component in the frequency domain. - However, when faced with a frequency unseen during training, FITS performs poorly since it hasn't been trained to project this particular frequency. We have provided the results and code within a Jupyter notebook titled "Interpretability.ipynb" in our anonymous code repository. This notebook contains additional visualizations and detailed results. We shall design more complex synthetic datasets to further demonstrate the performance and interpretability of our FITS model. Thanks again for your constructive suggestions!
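As a minimal numpy illustration of the frequency-interpolation behavior discussed above (an idealized hand-derived mapping for a pure period-8 sinusoid, not the trained FITS model itself): bin k of the input spectrum moves to bin 2k of the doubled-length spectrum, scaled by the length ratio, and the extended series continues the input seamlessly.

```python
import numpy as np

n_in, n_out = 16, 32                      # look-back 16, total output 32
t = np.arange(n_in)
x = np.sin(2 * np.pi * t / 8)             # period-8 sinusoid

X = np.fft.rfft(x)                        # 9 complex bins; energy at bin 2
assert np.argmax(np.abs(X)) == 2

# The "ideal" interpolation a linear layer could learn for this signal:
# move bin k of the 16-point spectrum to bin 2k of the 32-point spectrum,
# scaled by n_out / n_in, so each physical frequency is preserved.
X_long = np.zeros(n_out // 2 + 1, dtype=complex)
X_long[::2] = X * (n_out / n_in)
x_long = np.fft.irfft(X_long, n=n_out)

# The input segment is preserved, and the wave continues past it.
assert np.allclose(x_long[:n_in], x, atol=1e-8)
assert np.allclose(x_long[n_in:], x, atol=1e-8)  # period-8 wave repeats
```

A frequency present in the input but absent from the learned mapping would simply be dropped, which matches the poor behavior on unseen frequencies noted above.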
Summary: This paper proposed a novel method for time-series forecasting and anomaly detection. The solution relies on the frequency-domain features of the given time series and proposes to use a linear model to interpolate the data in the frequency domain. After that, the model utilizes the inverse FFT operation to transform the frequency-domain data into the time domain. The interpolated time series is longer than the original one. Forecasting is thus conducted on the augmented time series. Strengths: 1. The idea is novel. The interpolation of the time series in the frequency domain provides some insights for this community. 2. The model efficiency is quite amazing; only 10K parameters provide comparable or even better performance on the time-series prediction task. Weaknesses: 1. The experiments are not promising. What's the number of random runs and did you control the random seeds? I suggest reporting the mean and std values of the results. 2. As this is a new architecture of time-series model, more analysis should be conducted, such as more prediction tasks, more ablation studies for the key components of the method, etc. 3. The reason behind the effectiveness of the proposed method remains unclear to me. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See those in weakness part. Moreover, the augmented time series is not appropriate for the forecasting tasks. 
Forecasting is the extension of a given time series; however, simply interpolating to augment the original time series is not intuitively correct, because it modifies the original data rather than predicting upon the given information (in the time domain). And the experimental results show that the performance is not comparable to baseline methods such as PatchTST. I suggest that the authors further improve the method and figure out the reason for its effectiveness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - W1: The experiments are not promising. What's the number of random runs and did you control the random seeds? I suggest reporting the mean and std values of the results. A: *We report the mean and std values across multiple runs in Section B.3 of the Appendix. Tab.4 shows that FITS is very robust to the random seed selection thanks to the small number of parameters. The result shows that over four runs, FITS only shows a 0.001% performance fluctuation.* - W2: As this is a new architecture of time-series model, more analysis should be conducted, such as more prediction tasks, more ablation studies for the key components of the method, etc. A: *FITS is a very compact model. Removing any of its parts will make it non-functional. And FITS only has one hyperparameter for tuning, which is the cutoff frequency. We have done comprehensive ablation studies on the cutoff frequency. Please see Tab.5 of the manuscript and Section C of the appendix.* - W3: The reason behind the effectiveness of the proposed method remains unclear to me. A: *We train FITS to generate an extended time series segment by interpolating the frequency representation of the input time series segment, motivated by the fact that a longer time series provides a higher frequency resolution in its frequency representation. We use a complex-valued linear layer to learn such interpolation. **Please check the Method section of the original manuscript for more information.*** As declared in the Abstract, FITS conducts interpolation in the frequency domain instead of the time domain. Frequency domain interpolation keeps the information in the original data piece and adds frequency resolution for more detail. Please check the Method section of the manuscript for more details. 
We also need to emphasize that we **outperform baselines in almost half of the settings** and achieve mostly the second-best in the rest of them. Please check Tables 2 and 3 carefully. Furthermore, our emphasis is on efficiency, not solely on superior performance. Such results align with our major claim that FITS '**achieves comparable or even superior performance to SOTA methods**'. --- Rebuttal Comment 1.1: Title: The author has not addressed my concerns. Comment: > We report the mean and std values across multiple runs in Section B.3 of the Appendix. Comparisons between different methods require multiple runs, which will ensure the reproducibility of the experiments and the statistical significance of the comparison. I know that your method is robust to randomness, but I think the common approach is to conduct multiple runs for every compared method to reflect the statistical significance. > Frequency domain interpolation keeps the information in the original data piece and adds frequency resolution for more detail. Please check the Method section of the manuscript for more details. I know you are interpolating in the frequency domain. My major point is that interpolation in the frequency domain may also influence the input part of the sequence, which is a side effect beyond forecasting the future horizon. From this perspective, the intrinsic objective is not forecasting, but a disturbance of the original sequence. That's why I do not buy into the motivation and intuition of your method on this task. > our emphasis is on efficiency, not solely on superior performance. Such results align with our major claim that 'achieves comparable or even superior performance to SOTA methods'. From Table 2, the experiments are all conducted on the same dataset with different time spans, and the results are comparable to the baseline(s) in different settings, without obvious trend or characteristics. 
In Table 3, the proposed method falls behind PatchTST on two datasets by a large margin (on those datasets, a difference of about 0.01 is very significant). I don't think these results indicate "comparable performance". Yet the comparable one is DLinear. However, as far as I'm aware, DLinear reported their results only on one random seed, which is not promising (see issue #33 in their repository). Before claiming much better efficiency, performance should be considered first. I think the comparison should be more comprehensive and more convincing, with more random seeds on all the compared methods. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to respond to our rebuttal. We address these concerns as follows: 1. Comparisons between different methods require multiple runs, which will ensure the reproducibility of the experiments and the statistical significance of the comparison. I know that your method is robust to randomness, but I think the common approach is to conduct multiple runs for every compared method to reflect the statistical significance. A: We agree with the reviewer and we shall show the comparison results of multiple runs in the revised version. In the meantime, we would like to point out that *this is not the common practice in the time series forecasting literature*, and we report the results following previous works in this domain. 2. My major point is that interpolation in the frequency domain may also influence the input part of the sequence, which is a side effect beyond forecasting the future horizon. From this perspective, the intrinsic objective is not forecasting, but a disturbance of the original sequence. That's why I do not buy into the motivation and intuition of your method on this task. A: Unfortunately, we do not fully understand this comment. 
Nevertheless, we would like to address it from the following two aspects: - For the forecasting task, we care about the forecasting results only, and all techniques (including convolution-, MLP-, and Transformer-based solutions) would try to manipulate the inputs for feature extraction to achieve this objective. - More importantly, FITS **does preserve** the information in the input by conducting frequency interpolation. Consider, for example, a 16-point segment of a sinusoidal wave with a period of 8. The frequency representation should appear as [0, 0, 1+j, 0, 0, 0, 0, 0, 0]. Suppose we forecast the following 16 steps. After frequency interpolation, the frequency representation would be [0, 0, 0, 0, (1+j)*2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. This is still a sinusoidal wave with a period of 8, and the input part stays intact. For complex waveforms, we can always express them as a summation of sinusoidal waves, and the above explanation holds true. For detailed results and visualization, please check the ‘Interpretability.ipynb’ in our anonymous code repository. 3. From Table 2, the experiments are all conducted on the same dataset with different time spans, and the results are comparable to the baseline(s) in different settings, without obvious trend or characteristics. In Table 3, the proposed method falls behind PatchTST on two datasets by a large margin (on those datasets, a difference of about 0.01 is very significant). I don't think these results indicate "comparable performance". Yet the comparable one is DLinear. However, as far as I'm aware, DLinear reported their results only on one random seed, which is not promising (see issue #33 in their repository). A: Due to the space limitation, we break the results into two tables: Table 2 and Table 3. As can be seen in these two tables, FITS achieves the **best performance in 13 out of 28** settings, and achieves the **second best in 11 out of 28** settings. 
It fails to do so in **only 4 out of 28** settings. PatchTST, by comparison, achieves 15 best and 8 second-best results. Therefore, we claim that FITS achieves 'comparable' performance. Generally speaking, the channel independence design of PatchTST makes it excel at handling datasets with many variables such as the Electricity (321 channels) and Traffic (862 channels) datasets. In contrast, FITS is well suited to datasets with complex periodic patterns such as ETTm2 and weather. In our humble opinion, if there exists a model that excels for only one particular type of time series, it has its own value in practice. Therefore, we believe FITS is a promising solution for many practical scenarios. 4. Before claiming much better efficiency, performance should be considered first. I think the comparison should be more comprehensive and more convincing, with more random seeds on all the compared methods. A: We agree with the reviewer that performance is a critical factor; please refer to our answer to the previous question. At the same time, **we would like to emphasize that efficiency can be quite critical in practice, especially for edge devices.** FITS provides very good performance with orders of magnitude smaller compute and memory requirements than existing solutions, making deep learning-based solutions viable for many practical scenarios for the first time, e.g., smart sensors.
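For concreteness, the forward pass described in this thread (rFFT, low-pass filter, complex-valued linear layer, irFFT) can be sketched as follows. This is a numpy sketch, not the released implementation: the weight matrix `W` is hand-set to the ideal mapping for a toy sinusoid, whereas in FITS it is learned; all names are illustrative.

```python
import numpy as np

def fits_forward(x, W, cutoff, n_out):
    """Sketch of one FITS channel: rFFT -> LPF -> complex linear -> irFFT.
    x: 1-D look-back window; W: (cutoff, n_out//2 + 1) complex weights."""
    X = np.fft.rfft(x)[:cutoff]       # low-pass filter: keep low bins only
    Y = X @ W                         # frequency-domain interpolation
    return np.fft.irfft(Y, n=n_out)   # back-cast + forecast in one output

# Toy check: a period-8 sinusoid, look-back 16, total output 32.
n_in, n_out, cutoff = 16, 32, 5
W = np.zeros((cutoff, n_out // 2 + 1), dtype=complex)
for k in range(cutoff):               # ideal weights for this toy signal:
    W[k, 2 * k] = n_out / n_in        # bin k -> bin 2k, scaled by 2
x = np.sin(2 * np.pi * np.arange(n_in) / 8)
y = fits_forward(x, W, cutoff, n_out)
assert np.allclose(y[:n_in], x, atol=1e-8)  # the input part stays intact
```

The entire parameter budget is the `cutoff x (n_out//2 + 1)` complex matrix, which is where the KB-scale model size claimed above comes from.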
Summary: This paper proposes an impressively compact model, named FITS, for time series tasks, including forecasting and anomaly detection. FITS manipulates time series through interpolation in the frequency domain. The whole framework is quite simple and has remarkably few parameters. FITS achieves performance competitive with SOTA baselines on both forecasting and anomaly detection with about 50 times fewer parameters. With such impressive performance, the proposed model would have a certain impact on the community. Strengths: 1. The idea of implementing time series forecasting and anomaly detection through interpolation in the frequency domain is interesting and technically sound. 2. Detailed designs, including LPF and utilization of RevIN, are well motivated and described, making the whole framework reasonable and easy to follow. 3. The proposed model achieves impressive performance on both forecasting and anomaly detection with a remarkably compact size. Weaknesses: 1. Lack of time consumption analysis. Due to rFFT and irFFT in the model, time efficiency is the main concern. 2. The coverage of related works is barely satisfactory. Adding some preliminaries about manipulation in the frequency domain to the manuscript would be helpful to understand the model. 3. Typos. For example, duplicated citations in line 383 and line 385. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It would be appreciated to further list the time consumption of the proposed model under different settings and compare time efficiency with baselines. 2. For the forecasting task, what if the model is supervised in the frequency domain? 3. Did the authors try other interpolation methods, e.g., convolution? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive evaluation of our work and the insightful questions. - Q1, W1: It would be appreciated to further list the time consumption of the proposed model under different settings and compare time efficiency with baselines. Due to rFFT and irFFT in the model, time efficiency is the main concern. A: *We conduct experiments to measure the time consumption and find FITS can provide **sub-millisecond-level inference time**. **Please refer to the 'About Efficiency' section of the common response for detailed analysis.*** *We report the comparison with baselines below. We conduct the experiment on an NVIDIA Titan Xp following DLinear's setting (Electricity dataset, input length 720, output length 96). **The result labeled with CPU is run on a single-core CPU to simulate resource-limited edge devices**.*

|Model|Inference Time|
|:---:|:---:|
|FITS|0.6ms|
|FITS(CPU)|2.55ms|
|DLinear|0.4ms|
|DLinear(CPU)|3ms|
|Informer|49.3ms|
|Autoformer|164.1ms|
|Pyraformer|3.4ms|
|FEDformer|40.5ms|

- Q2: For the forecasting task, what if the model is supervised in the frequency domain? A: *This is an insightful question and it was our **initial idea**. However, the frequency domain is a complex domain, and there is no effective differentiable loss function in the complex domain. Furthermore, supervising the real and imaginary parts of complex numbers separately is not feasible, as it breaks the phase information of the complex values.* - Q3: Did the authors try other interpolation methods, e.g., convolution? A: *Thanks for the suggestion and we will try more in our future work. In our current opinion, convolution may not be a feasible interpolation method. According to the properties of the Fourier Transform, convolution in the frequency domain corresponds to multiplication in the time domain, which may not manipulate the input properly to obtain the ideal result.* - W2: The coverage of related works is barely satisfactory. 
Adding some preliminaries about manipulation in the frequency domain to the manuscript would be helpful to understand the model. A: We put the *‘Preliminary: FFT and Complex Frequency Domain’ part in the Method Section to introduce the basics of the complex frequency domain, the relevant calculations, and their physical meaning.* We will expand this part and add a separate preliminary section in the revised version. - W3: About typos. A: *We will carefully proofread our paper and fix them in the revised version.* --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns well and I will keep the score as it is. --- Reply to Comment 1.1.1: Title: Thanks for your assessment! Comment: Thanks for your assessment of our work! Please feel free to ask if you have further questions!
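The Fourier duality invoked in the answer to Q3 can be checked numerically. This is a generic numpy sketch of the DFT convolution property, independent of FITS: multiplying two sequences in the time domain corresponds, up to a 1/N factor, to circularly convolving their spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x, h = rng.standard_normal(N), rng.standard_normal(N)
X, H = np.fft.fft(x), np.fft.fft(h)

# DFT of a time-domain product = (1/N) * circular convolution of spectra
lhs = np.fft.fft(x * h)
rhs = np.array([sum(X[m] * H[(k - m) % N] for m in range(N))
                for k in range(N)]) / N
assert np.allclose(lhs, rhs)
```

Hence a convolutional interpolator acting on the spectrum would effectively window (multiply) the time-domain input rather than extend it, which supports the intuition in the rebuttal.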
Rebuttal 1: Rebuttal: We deeply appreciate the reviewers for their insightful feedback and recognition of our work's strengths: 1. The concept of conducting interpolation after projecting time series into the frequency domain is both **innovative and technically sound** [Reviewers PA7V, S88N, dtUP, pZTr]. 2. The comprehensive explanation of our detailed designs has been acknowledged for making the entire framework **coherent and easily comprehensible** [Reviewers S88N, o3Wj]. 3. FITS' **exceptional efficiency** has been highlighted, as it achieves remarkable performance in forecasting and anomaly detection tasks while maintaining a remarkably compact size [Reviewers PA7V, S88N, pZTr, dtUP, o3Wj]. 4. Our experimental results **effectively validate our assertions**, showcasing competitive performance and a significant reduction in model complexity compared to existing SOTA methods [Reviewers o3Wj, dtUP, PA7V]. 5. FITS shows **great practical value** because of its compact model size, making it **highly applicable to real-world scenarios** on edge devices. [Reviewers PA7V, o3Wj] Following are the responses to **common questions** raised by reviewers. We will add them in our final version. --- ### G1: Handling Multi-channel data. [Reviewer o3Wj, P7AV] 1. Can FITS handle multivariate series? [Reviewer **o3Wj**] ***Yes, FITS is capable of handling multivariate time series data.** Our experiments were conducted on **eight** multivariate datasets, and the dimensions of the datasets are shown in Table 1 of the original paper.* 2. How does the model handle multi-variate time-series data? [Reviewer **P7AV**] *FITS employs weight sharing across variates/dimensions/channels, a commonly adopted approach that balances performance and efficiency. 
Moreover, we also experimented with individual FITS models per channel, yet observed no substantial performance improvement.* *Additionally, we note that channels often share a common base frequency when originating from the same physical system. For instance, signals from circuits commonly have a base frequency of 50/60Hz, while traffic flow across a city follows a daily base frequency. This observation further supports the suitability of the weight-sharing strategy for FITS.* 3. Does the model capture cross-channel information? [Reviewer **P7AV**] *FITS does not have a specific cross-channel information-capturing mechanism, but it shows competitive performance. We believe that introducing such a mechanism could further enhance FITS' performance, but it would lead to heavier models compared to its current form.* 4. How to choose the cutoff frequency in the case of multivariate series, when the harmonics are likely to differ over variables? [Reviewer **o3Wj**] *The harmonic cutoff is determined by observing the amplitude spectrum and selecting the largest harmonic across the dataset. Our experiments show that performance gains from raising the cutoff frequency become minor beyond the second/third order harmonic.* *Regarding different base frequencies, our observations suggest that channels with distinct base frequencies generally have minor correlations. Nevertheless, designers can specify the cutoff frequency for each channel to adapt to the data's specific characteristics.* --- ### G2: About the efficiency: To show FITS's extraordinary efficiency, we compare it with DLinear. As a current SOTA baseline with a single-layer linear network, DLinear is indeed quite efficient, but it might still be too large to run on MCU-powered edge devices. 1. About the time consumption and the impact of rFFT and irFFT: [Reviewer **S88N**] *Our experiments show that FITS can provide **sub-millisecond-level inference time**. 
Following DLinear, we evaluate the inference time of FITS on the Electricity dataset with input length 720 and output length 96. On one NVIDIA Titan Xp, the inference time of FITS is **0.6ms** and DLinear is 0.4ms. Moreover, the inference time on a **single core of CPU** of FITS is **2.55ms**, which is better than DLinear's 3ms. The result is averaged over 5164 runs (over the entire test set).* *In our experiment, we find rFFT and irFFT together only introduce 0.7ms time overhead (on CPU). In practice, we can offload FFT to dedicated chips such as FPGA or DSP, which can conduct such algorithms faster. Under this scenario, FITS finally achieves 1.7ms inference time on a single-core CPU. Such time consumption is usually negligible compared with communication latency.* 2. About the training efficiency of FITS: [Reviewer **dtUP**] *FITS exhibits **remarkable training efficiency**, requiring only around 5 minutes for ETT datasets and 30 minutes for the traffic dataset (with 862 channels) on a single NVIDIA Titan Xp. Intriguingly, for both FITS and DLinear, the training bottleneck is CPU loss computation rather than the model itself. FITS' compact size also ensures a minimal memory footprint.* --- ### G3: About Anomaly Detection: [Reviewer **o3Wj, S88N, dtUP, P7AV**] *We compare with the baselines mentioned by reviewers as below (F1-score). While the results are quite encouraging, we want to emphasize that we conduct AD to demonstrate the use case of FITS **instead of claiming its superior performance.*** |Dataset|FITS|DGHL|OCSVM|IForest| |:-:|:-:|:-:|:-:|:-:| |SMD|**99.95**|N/A|56.19|53.64| |PSM|**93.96**|N/A|70.67|83.48| |SWaT|**98.9**|87.47|47.23|47.02| |SMAP|70.74|**96.38**|56.34|55.53| |MSL|78.12|**94.08**|70.82|66.45| *Designed as a lightweight algorithm for edge devices, FITS offers significant benefits with minimal memory and computational demands. 
Impressively, it achieves **sub-millisecond** inference times, negligible compared with large models or communication latency. This renders FITS ideal for **swiftly detecting critical errors**. When integrated as a coarse-grained filter alongside a specialized AD algorithm for finer-grained detection, the overall system ensures robustness and responsiveness to a range of anomalies.*
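The pipeline discussed in the responses above (rFFT, a harmonic cutoff acting as a low-pass filter, a single complex-valued linear layer, irFFT) can be sketched in a few lines of NumPy. This is a hypothetical illustration: the weight shape, the random weights, and the energy-rescaling step are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def fits_forward(x, W, cutoff, out_len):
    """Sketch of a FITS-style forward pass (assumed layout, for illustration)."""
    spec = np.fft.rfft(x)              # time domain -> complex spectrum
    spec = spec[:cutoff]               # low-pass filter at the harmonic cutoff
    up = spec @ W                      # complex-valued "frequency interpolation" layer
    out = np.fft.irfft(up, n=out_len)  # back to the time domain
    return out * (out_len / len(x))    # assumed energy rescaling for the longer window

rng = np.random.default_rng(0)
x = rng.standard_normal(720)                        # look-back window, as in the timing setup
cutoff, out_len = 60, 720 + 96                      # keep 60 frequency bins; forecast 96 steps
W = (rng.standard_normal((cutoff, out_len // 2 + 1))
     + 1j * rng.standard_normal((cutoff, out_len // 2 + 1)))
y = fits_forward(x, W, cutoff, out_len)             # full-length output (look-back + horizon)
assert y.shape == (816,)
```

Under these assumed shapes the model holds only 60 × 409 complex weights, which is in line with the compact sizes and fast inference discussed above; for multivariate input, the same `W` would be shared across channels per the weight-sharing strategy in G1.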
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present a transformer model for forecasting and anomaly detection. The model performs both forecasting and reconstruction in the frequency domain with a fraction of parameters compared to state-of-the-art transformers. The model demonstrates impressive performance on anomaly detection and forecasting tasks. Strengths: 1. This is an interesting work and novel to the best of my knowledge. 2. I strongly believe that the lightweight nature of the model is an asset. 3. The model demonstrates near state-of-the-art performance. 4. The paper is well-written. Weaknesses: Major: Most of these are not deal breakers, but the following would make the evaluation of the model stronger: 1. Baselines for forecasting: The authors compare with only transformer-based models. I would encourage the authors to compare with advanced non-transformer-based models such as N-HiTS [1] and N-BEATS [2]. 2. Handling of multi-variate time-series data (see Q1) 3. Baselines for anomaly detection: I would again encourage the authors to compare with state-of-the-art time-series anomaly detection, e.g., DGHL [3]. 4. Evaluation for anomaly detection: I would encourage the authors to use standard evaluation metrics for anomaly detection like adjusted best F1 [3, 4], Average Precision [4, 5], and Volume Under Surface (VUS) [6]. The datasets used also have known flaws [7]. Minor: 1. "To avoid information leakage, We choose" --> "To avoid information leakage, we choose" References: 1. Challu, Cristian, et al. "NHITS: Neural Hierarchical Interpolation for Time Series Forecasting." Proceedings of the AAAI Conference on Artificial Intelligence. 2. Oreshkin, Boris N., et al. "N-BEATS: Neural basis expansion analysis for interpretable time series forecasting." arXiv preprint arXiv:1905.10437 (2019). 3. Challu, Cristian I., et al. "Deep generative model with hierarchical latent factors for time series anomaly detection." International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
4. Goswami, Mononito, et al. "Unsupervised model selection for time-series anomaly detection." arXiv preprint arXiv:2210.01078 (2022). 5. Schmidl, Sebastian, Phillip Wenig, and Thorsten Papenbrock. "Anomaly detection in time series: a comprehensive evaluation." Proceedings of the VLDB Endowment 15.9 (2022): 1779-1797. 6. Paparrizos, John, et al. "Volume under the surface: a new accuracy evaluation measure for time-series anomaly detection." Proceedings of the VLDB Endowment 15.11 (2022): 2774-2787. 7. Wu, Renjie, and Eamonn Keogh. "Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress." IEEE Transactions on Knowledge and Data Engineering (2021). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the model handle multi-variate time-series data? Does the model capture cross-channel information? 2. What are MACs (Multiply-Accumulate Operations)? Neither the paper nor the supplementary material talks about it. 3. I would like to know exactly how the reconstruction works, I am not sure if I understand it with the given information. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss some limitations of your work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - W1: Baselines for forecasting: The authors compare with only transformer-based models. I would encourage the authors to compare with advanced non-transformer-based models such as N-HiTS and N-BEATS. A: *We show the comparison with the mentioned N-HiTS and N-BEATS on MSE in the following table. FITS outperforms these two models in most cases while maintaining a compact model size. We will consider adding the following results to our main result. The results for N-HiTS and N-BEATS are retrieved from the N-HiTS paper [1].*

| Dataset | Horizon | FITS | N-BEATS | N-HiTS |
| :---------: | :-----: | :----: | :-----: | :----: |
| Electricity | 96 | **0.138** | 0.145 | 0.147 |
| | 192 | **0.152** | 0.180 | 0.167 |
| | 336 | **0.166** | 0.200 | 0.186 |
| | 720 | **0.205** | 0.266 | 0.243 |
| Traffic | 96 | 0.401 | **0.398** | 0.402 |
| | 192 | **0.407** | 0.409 | 0.420 |
| | 336 | **0.420** | 0.449 | 0.448 |
| | 720 | **0.456** | 0.589 | 0.539 |
| Weather | 96 | **0.145** | 0.167 | 0.158 |
| | 192 | **0.188** | 0.229 | 0.211 |
| | 336 | **0.236** | 0.287 | 0.274 |
| | 720 | **0.308** | 0.368 | 0.351 |
| ETTm2 | 96 | **0.164** | 0.184 | 0.176 |
| | 192 | **0.217** | 0.273 | 0.245 |
| | 336 | **0.269** | 0.309 | 0.295 |
| | 720 | **0.347** | 0.411 | 0.401 |

*Note that we also compare FITS with the linear-based model DLinear [2] and the convolution-based model TimesNet. Furthermore, we also show the comparison with the aforementioned N-HiTS and N-BEATS on the M4 dataset. These results are shown in Table 3 of the appendix.* - W2&Q1: How does the model handle multi-variate time-series data? Does the model capture cross-channel information? A: *FITS employs weight sharing across channels, a commonly adopted approach that balances performance and efficiency. Moreover, we experimented with individual FITS models per channel, yet observed no substantial performance improvement.* *FITS does not have a specific cross-channel information-capturing mechanism.
But it shows competitive performance. We believe that introducing such a mechanism could enhance FITS' performance, although this may introduce additional parameters.* *Please refer to the **'About Multi-variate' section of common response** for detailed analysis.* - Q2: What are MACs (Multiply-Accumulate Operations)? Neither the paper nor the supplementary material talks about it. A: *Sorry for using the abbreviation without explanation. We will add the explanation in the revised version. MACs (Multiply-Accumulate Operations) is a commonly used metric that counts the total number of multiplication and addition operations in a neural network. We follow DLinear [2] in using MACs as our metric to measure computational efficiency.* - W3&W4: Baselines for anomaly detection and Evaluation. A: *We show the comparison with the mentioned DGHL in the common response and will add it to our main result table in the revised version. FITS outperforms DGHL on the SWaT dataset by a large margin. On the SMAP and MSL datasets, FITS shows suboptimal results. As analyzed in the original manuscript, FITS is not suitable for handling binary data in these two datasets.* *We design our anomaly detection experiment following the methodology of the Anomaly Transformer. For fair and convenient comparison, we adopt the same metrics and benchmarks. We will add the mentioned metrics and benchmarks to the revised version of the paper.* - Q3: I would like to know exactly how the reconstruction works, I am not sure if I understand it with the given information. A: Please refer to Fig. 1 in the **Supplementary Material of this paper** for an illustration of the **reconstruction pipeline**. In this process, the model input $x$ is derived from a segment of the time series $y$ using an **equidistant sampling** technique with a specified downsample rate $\eta$. Subsequently, FITS performs frequency interpolation, generating an upsampled output $\hat{x}_{up-sampled}$ with the same length as $y$.
The reconstruction loss is computed by comparing the original $y$ and the upsampled $\hat{x}_{up-sampled}$. Please note that, due to space constraints, the downsample/upsample rate $\eta$ depicted in the figure is 1.5, which is not a practical value. In our actual experiments, we employ an $\eta$ value of 4. [1] NHITS: Neural Hierarchical Interpolation for Time Series Forecasting. AAAI 2023. [2] Are transformers effective for time series forecasting? AAAI 2023. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: I would like to thank the authors for their response! I would like to bump up the score a bit to reflect my current assessment of the paper. --- Reply to Comment 1.1.1: Title: Thanks for your assessment of our work! Comment: Thanks for your assessment of our work! Please feel free to ask if you have further questions!
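The reconstruction task described in the answer to Q3 can be illustrated with a toy sketch, where ideal zero-padded spectrum upsampling stands in for FITS's learned frequency-interpolation layer; the function name and the amplitude rescaling are our assumptions for illustration.

```python
import numpy as np

def reconstruction_loss(y, eta=4):
    """Sketch of the self-supervised reconstruction task (our reading of Q3):
    equidistantly downsample by eta, upsample back in the frequency domain,
    and compare against the original segment."""
    x = y[::eta]                               # equidistant sampling of the segment
    spec = np.fft.rfft(x)
    # irfft with a larger n zero-pads the spectrum, i.e. ideal interpolation;
    # the factor eta compensates for irfft's 1/n normalization
    x_up = np.fft.irfft(spec, n=len(y)) * eta
    return np.mean((y - x_up) ** 2)

t = np.arange(720)
series = np.sin(2 * np.pi * t / 24)            # toy series: 30 full periods in the window
loss = reconstruction_loss(series)             # near zero for this band-limited input
```

In the actual model, the zero-padding step would be replaced by the learned complex linear layer, and the MSE above would drive training.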
null
null
null
null
null
null
Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks
Accept (poster)
Summary: This work found that more data does not necessarily improve the pre-training of GNNs and then proposes a method to adaptively select data for pre-training GNNs. Experimental results show the proposed method is effective. Strengths: 1. This work is well motivated by the observation that more data does not necessarily lead to a better pre-trained graph encoder. 2. The experiments seem convincing since APT generally outperforms other baselines. Weaknesses: There are several weaknesses, mainly about the method: 1. The optimization problem in equation 5 is confusing; what is the variable you are optimizing, a series of binary variables indicating whether to select a graph or not? Based on the description in the subsequent part, you are basically selecting graphs with the highest score; in that case it is unnecessary to formulate an optimization problem, just state your scoring function. 2. The description is confusing, e.g., what is M in line 296? The number of subgraphs being selected? 3. Efficiency is important in data selection, but the authors do not discuss it in the main body of the paper. I assume the method is slower than the baseline pre-training method, but how slow is it? 4. The motivation of the other four property criteria is unclear, especially when they are highly correlated with the network entropy. Can we just drop the other four criteria? 5. The authors claim in the conclusion that APT can enhance the model with a fewer number of input data, but I can't find evidence for that. Is APT only trained on fewer data than baselines? Overall, the description of the proposed method could be improved. In addition, the authors claimed they are not doing curriculum learning, but I don't think so. The time-adaptive selection strategy is just curriculum learning in a broader sense; I think easy-first is just one type of curriculum. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness above Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors do not state limitations in either the appendix or the main body of the paper, please add them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer gzMz, We really appreciate your valuable comments. We have followed closely the suggestions, and made clarifications and revisions accordingly. We hope the revised content could help to further strengthen our work. >W1: The optimization problem in equation 5 is confusing, what is the variable you are optimizing, a series of binary variable indicating whether to select a graph or not? Based on description in subsequent part, you are basically selecting graphs with the highest score, in that case it is unnecessary to formulate an optimization problem, just state your scoring function. Sorry for the confusion. We here aim to select graphs with the highest score $\mathcal{J}(G)$. As suggested, we have removed the optimization problem and modified the description to: "Thus we can select the graph with the highest score, where the score is defined as $\mathcal{J}(G) = \gamma_t \phi_\mathrm{uncertain} + (1 - \gamma_t)\,\mathrm{MEAN}(\hat{\phi}_\mathrm{entropy}, \hat{\phi}_\mathrm{density}, \hat{\phi}_\mathrm{avg\_deg}, \hat{\phi}_\mathrm{deg\_var}, -\hat{\phi}_{\alpha})$." >W2: The description is confusing, eg, what is M in line 296? The number of subgraphs being selected? Thanks for pointing it out. $M$ is a hyper-parameter used to compute the graph-level predictive uncertainty $\phi_{\text{uncertain}}(G) = (1/M) \sum_{i=1}^M \mathcal{L}_i$, where $M$ denotes the number of subgraph instances queried in the graph $G$ for uncertainty estimation (as mentioned in line 126). We have provided clear descriptions of the meaning of $M$ in line 296, and other ambiguous terms or concepts in our revised version. >W3: Efficiency is important in data selection, but the authors do not discuss it in the main body of the paper. I assume the method is slower than baseline pre-training method but how slow is it?
The training time comparison between our model and the most competitive pre-training baseline GCC has been provided in Table 10 in Appendix H. As empirically noted, the total training time of APT is 18592.01 seconds, while the competitive graph pre-training model GCC takes 40161.68 seconds. The efficiency of our model is mainly due to the use of a much smaller number of carefully selected training graphs and samples at each epoch. (Detailed analysis can be found in the subsection "Training time" in Appendix H.) The time complexity of our model has been analyzed in Appendix C. The overall time complexity of APT in each batch is $O(B|V|^3 + X + B^2D + B)$, where $|V|$ is the maximal number of nodes in subgraph instances, $B$ is the batch size, $D$ is the representation dimension, and $X$ is the time complexity of the backbone GNN. (Detailed analysis can be found in Appendix C.) To improve clarity, we have now moved some crucial results to the main body and included pointers to the remaining details in the appendix for further reference. >W4: The motivation of the other four property criteria is unclear, especially when they are highly correlated with the network entropy. Can we just drop the other four criteria? We appreciate the reviewer's observation. The five graph properties are provided to serve as comprehensive and informative data selection criteria. We agree that the four graph properties (i.e., density, average degree, degree variance and negative scale-free exponent) are somehow related to the network entropy, so we have conducted further empirical investigations to determine the necessity of using all five properties. The results, as reported in Table 8 of Appendix H, demonstrate the value of each individual property. Remarkably, when combining all five properties, we achieve the best performance in most cases. This emphasizes the significance of using all five properties collectively.
In addition, the computational overhead of computing all these five properties is relatively low ($O(|V|)$ where $|V|$ is the maximal number of nodes in subgraph instances), and the computation is only needed once before pre-training. Hence, using all five quantities does more good than harm in our model. >W5: The authors claim in the conclusion that APT can enhance the model with a fewer number of input data, but I can't find evidence for that. Is APT only trained on fewer data than baselines? Sorry for the unclear description. APT is only trained on fewer data than baselines because of the careful selection of suitable input data. In our experiments, only 7 datasets are selected out of the available 11 (as mentioned in "Analysis of the selected graphs" in Appendix H). For each selected dataset, at most 24.92% of the samples are chosen for training. We have added detailed descriptions in the main body in the revised manuscript. >Overall weakness: Overall, the description of the proposed method could be improved. In addition, the authors claimed they are not curriculum learning but I don't think so. Time-adaptive selection strategy is just curriculum learning in a broader sense, I think easy-first is just one type of curriculum. Thanks for your constructive suggestions to improve our manuscript. Regarding the curriculum learning, thanks for reminding us that "easy-first" and "difficult-first" approaches fall under the broader category of curriculum learning. We have adjusted our previous unclear statement and improved our description in the revised manuscript. --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: The authors have addressed my concerns, and I would raise my score from 5 to 6. I would be glad to see this paper get accepted, and if so, please revise the method part to make it clear. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your valuable comments and timely reply. 
These comments help us significantly improve the quality of our work. We will make the modifications as suggested, and revise the unclear descriptions in the methodology part in our paper.
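The scoring rule quoted earlier in this rebuttal, $\mathcal{J}(G)$, can be sketched as a top-k selection over candidate graphs. The property normalization, the random inputs, and the fixed $\gamma_t$ value below are placeholders, since the rebuttal only specifies that $\gamma_t$ is scheduled over training.

```python
import numpy as np

def selection_score(uncertainty, props, gamma_t):
    """Sketch of the APT-style score J(G): a gamma_t-weighted mix of predictive
    uncertainty and the mean of normalized graph properties (alpha negated)."""
    entropy, density, avg_deg, deg_var, alpha = props
    prop_score = np.mean([entropy, density, avg_deg, deg_var, -alpha])
    return gamma_t * uncertainty + (1 - gamma_t) * prop_score

rng = np.random.default_rng(0)
uncert = rng.random(10)                  # per-graph predictive uncertainty (illustrative)
props = rng.random((10, 5))              # pre-normalized property values (illustrative)
scores = np.array([selection_score(u, p, gamma_t=0.7) for u, p in zip(uncert, props)])
chosen = np.argsort(scores)[::-1][:3]    # select the 3 highest-scoring graphs
```

Scheduling `gamma_t` across epochs then shifts the balance between the model-feedback term and the property term as training proceeds.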
Summary: The paper introduces a novel graph pre-training pipeline from a data-centric view by proposing a graph data selector. The proposed approach involves sequentially feeding representative data into the model for pre-training. These representative data are determined by a data selector, which utilizes prediction uncertainty and graph statistics to suggest the most relevant examples. This novel methodology offers a fresh perspective on graph pre-training and demonstrates the importance of data selection in the process. Strengths: 1. The idea is novel and interesting. 2. The proposed training pipeline is tested on many tasks from different domains. Weaknesses: 1. The utilization of predictive uncertainty based on pre-training loss lacks convincing evidence. 2. The proposed training pipeline requires input graphs to be processed order-by-order, which may introduce unnecessary bias and is not scalable for large graph datasets. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: First, there are several questions about the predictive uncertainty: 1. The loss used for graph pre-training may not effectively align with the actual loss of the downstream task [1,2]. Consequently, the predictive uncertainty based on the current definition may lack meaningful interpretation. A more promising approach would be to ensure that the uncertainty of the pre-training loss accurately reflects the difference between the pre-training loss and the downstream task loss. 2. Prediction uncertainty has been extensively explored in previous research [3,4]. While many existing methods have been proposed with ground-truth labels, it would be beneficial to have discussions (and even experiments) that shed light on the connection and distinctions between different approaches to uncertainty modeling. For experiments: 3. The experimental setups lack clarity.
It is unclear whether data from all domains are used to pre-train a single model or if separate models are pre-trained for different domains. Others: 4. Please check the descriptions in lines 215~217. A smaller gamma seems to lead to a more important role for the predictive uncertainty term. 5. Choosing subgraphs seems to be a crucial component of the pre-training algorithm. However, the paper lacks a thorough explanation or detailed descriptions of this process, which would greatly improve the understanding of this important aspect. 6. Recently, there have been many related works focusing on data-centric approaches [2,5,6], especially for graph pre-training [2,5]. It would be great and helpful to readers if the authors could provide a brief discussion on the connections and differences between their work and these existing approaches. Ref. [1] Does gnn pretraining help molecular representation? NeurIPS 2022. [2] Data-Centric Learning from Unlabeled Graphs with Diffusion Model. Arxiv. [3] Dropout as a bayesian approximation: Representing model uncertainty in deep learning. ICML 2016. [4] Single-model uncertainties for deep learning. NeurIPS 2019. [5] Analyzing data-centric properties for graph contrastive learning. NeurIPS 2022. [6] Data-centric ai: Perspectives and challenges. Arxiv. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KR8J, Thanks for your constructive suggestions to improve our manuscript. We have followed closely the suggestions, and made clarifications and revisions accordingly. We hope the revised and newly provided content could help to further strengthen our work. >W1, Q1 and Q2: Predictive uncertainty definition and its connection to existing uncertainty First, we would like to clarify that our uncertainty is introduced to enhance the graph pre-training model. Typically, graph pre-training does not consider any information from the downstream task or data, e.g., GCC, GraphCL. Therefore, the uncertainty is solely computed on pre-training data and based on the pre-training task, making the pre-training loss a natural measure for predictive uncertainty. We have added the detailed description in the revised manuscript. Second, we have presented the discussion and theoretical connection between our proposed uncertainty and existing works on uncertainty. In most existing works, uncertainty is often defined on the label space, including [3, 4]. [4] quantifies uncertainty based on Orthonormal Certificates, which need to be trained with labels. [3] leverages Bayesian learning to approximate the posterior distribution and obtain the predictive performance distribution, where the variance serves as a representation of model uncertainty. In contrast, our uncertainty is defined in the representation space. Moreover, Theorem 1 has established a theoretical connection between our uncertainty and the most conventional uncertainty, defined as the cross-entropy loss of an instance on the downstream classifier. The theorem suggests that a smaller conventional uncertainty over all downstream classifiers cannot be achieved without a smaller predictive uncertainty in our work. More details can be found in lines 127-142 of our manuscript. We have highlighted this discussion and theoretical connection with a subheading in the revised version.
>W2: Bias and scalability of order-by-order training Sorry for the confusion. We explain our order-by-order training approach and discuss the scalability as follows, which has been included in the revised version. The ordering is indeed a fundamental feature of our model, making our approach perform better than unbiased random sampling. In different stages of training process, the graph selector (equipped with predictive uncertainty and graph property) chooses the samples most needed by the current model. The experimental results present evidence of the superiority of our selection compared with unbiased random sampling in other pre-training models. Additional results are in Table 17 in Appendix H. Regarding scalability, our model is trained on a reduced number of carefully selected training graphs and samples, and achieves a training time 2.2 times faster than the competitive model GCC. In our experiments in Section 4, we carefully selected only 7 datasets out of the available 11 and performed pre-training using at most 24.92% of the samples in each selected dataset. Moreover, for each newly added dataset, our model only needs a few more training iterations to convergence, rather than being trained from scratch. >Q3: Experiment setup Sorry for the unclear description. In the revised manuscript, we have highlighted the crucial setup of utilizing data from all domains to pre-train one model in Section 4.1. >Q4: Mistake of gamma description Thanks for pointing out this mistake. We have corrected the description in lines 215-217 to "the parameter $\gamma_t$ should be set larger so that the graph properties play a leading role. As the training phase proceeds, the graph selector gradually pays more attention to the feedback ${\phi}_{\text{uncertain}}$ from the model via a smaller value of $\gamma_t$". >Q5: Explanation of choosing subgraphs Thanks for pointing it out. We have included the subgraph selection process in the revised version as follows. 
When choosing subgraphs, we follow the practice in the existing work GCC. The process involves three sequential steps for sampling a subgraph of node $v$. (1) Random walk with restart: We start a random walk from node $v$, traversing iteratively to its neighboring nodes with probability proportional to the edge weight. At each step, there is a positive probability for the walk to return back to the starting node $v$. (2) Subgraph induction: The random walk with restart accumulates a subset of nodes surrounding $v$. We then obtain the subgraph induced by this subset of nodes. (3) Anonymization: The induced subgraph is anonymized by relabeling its nodes in an arbitrary order. >Q6: Discussion of data-centric related works Thanks for reminding us of these related works. We discuss them in detail as follows, and have included the discussion in the revised version. Data-centric AI, a recently introduced concept, emphasizes the enhancement of data quality and quantity, rather than model improvement [6]. Follow-up works in graph pre-training [2, 5] exploit the data-centric idea to design data augmentation. [2] augments unlabeled graph data under the guidance of the downstream task, while our work focuses on data selection in the pre-training phase without any information from downstream. [5] mainly focuses on the theoretical analysis of data-centric properties of data augmentation. Neither of them addresses the specific problem of data selection for pre-training, which is the objective of our proposed graph selector. The reference list of our rebuttal is the same as that of the reviewer's comment. --- Rebuttal Comment 1.1: Title: Thanks for the authors' response. Comment: Thank you for the authors' thorough response. Most of my concerns have been addressed. I look forward to seeing the revised paper, and I hope all the key points will be appropriately covered. --- Reply to Comment 1.1.1: Title: Thank you Comment: We greatly appreciate your valuable insights and prompt response.
We will incorporate the suggested revisions in our paper.
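The subgraph-sampling procedure described in the rebuttal above (random walk with restart, subgraph induction, anonymization) can be sketched as follows. This is a simplified reading of the GCC-style steps: transitions pick a uniform neighbor rather than being proportional to edge weights, and the restart probability and walk length are hypothetical.

```python
import random

def rwr_subgraph(adj, v, restart_p=0.5, steps=64, seed=0):
    """Sketch of sampling a subgraph instance around node v (assumed parameters)."""
    rng = random.Random(seed)
    visited, cur = {v}, v
    for _ in range(steps):
        if rng.random() < restart_p or not adj[cur]:
            cur = v                              # (1) restart at the seed node...
        else:
            cur = rng.choice(adj[cur])           # ...or hop to a uniform random neighbor
        visited.add(cur)
    relabel = {u: i for i, u in enumerate(sorted(visited))}   # (3) anonymize node ids
    edges = {(relabel[a], relabel[b])                          # (2) induce the subgraph
             for a in visited for b in adj[a] if b in visited}
    return set(relabel.values()), edges

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # tiny undirected toy graph
nodes, edges = rwr_subgraph(adj, 0)
```

After relabeling, two subgraph instances from different graphs become comparable purely by structure, which is what the contrastive pre-training objective operates on.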
Summary: This paper presents a data-centric framework, abbreviated as APT, for cross-domain graph pre-training. APT is composed of a graph selector and a graph pre-training model. The core idea is to select the most representative and informative data points based on the inherent properties of graphs, as well as predictive uncertainty. For the pre-training stage, when fed with the carefully selected data points, a proximal term is added to prevent catastrophic forgetting and retain the contributions of previous input data. Strengths: 1. This paper proposes that big data is not a necessity for pre-training GNNs. Instead of training on a massive amount of data, it is more reasonable to select a few suitable samples for pre-training. This approach can also reduce the amount of data and computational costs. Compared to pre-training on the entire dataset, selecting a carefully chosen subset of data for pre-training can indeed achieve better results. 2. This paper provides theoretical justification for the connection between uncertainties by establishing a provable connection between the proposed predictive uncertainty and the conventional definition of uncertainty. The predictive uncertainty is defined in the representation space, which enables the identification of more challenging samples that can benefit the model training more significantly. 3. The entire framework seems very reasonable, and the process is described clearly. From the experiments, it appears that good results have been achieved. Weaknesses: 1. This paper mentions both "data-active" and "data-centric" concepts. It may be helpful to clarify the relationship between these two to avoid confusion. Maybe only using "data-active" in the paper. 2. There is no individual ablation experiment about the graph selector to demonstrate the effectiveness of each of the five graph properties. 3. There is a spelling error in the title: 'Prespective' should be corrected to 'Perspective'.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1 Are all the graph properties used in this paper useful when dealing with graph-inherent features? Also, what is the computational complexity, since multiple properties need to be calculated? Q2 Why did the performance of the "dd" dataset decrease after fine-tuning compared to the frozen parameters of APT in Table 2? Q3 I am slightly concerned about whether all five graph properties are useful for pre-training. Q4 In existing works, are there any works that define model uncertainty in the representation space? And what is the difference between theirs and this work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer i2Gc, We really appreciate your valuable comments. We have followed closely the suggestions, and made clarifications and revisions accordingly. We hope the revised and newly provided content could help to further strengthen our work. >W1: This paper mentions both "data-active" and "data-centric" concepts. It may be helpful to clarify the relationship between these two to avoid confusion. Maybe only using "data-active" in the paper. Thanks for your valuable suggestion. We here discuss the relation and difference of "data-active" and "data-centric" concepts. The term "data-centric" emphasizes the enhancement of data quality and quantity, rather than model improvement [1]. On the other hand, the term "data-active" in our manuscript is used to emphasize the co-evolution of data and model, rather than mere data selection before model training. Our unified pre-training framework, which co-designs both the graph selector and the pre-training model, fits in the scope of "data-active". To avoid confusion, we have consistently utilized the term "data-active" in the revised version as suggested. >W2, Q1 and Q3: W2: There is no individual ablation experiment about the graph selector to demonstrate the effectiveness for each of the five graph properties. Q1: Are all the graph properties used in this paper useful when dealing with graph-inherent features? Also, what is the computational complexity since multiple properties need to be calculated. Q3: I am slightly concerned that if all the five graph properties are useful for pre-training. The ablation studies of utilizing one graph property have been presented in Table 8 in Appendix H. The results show utilizing one graph property can still beat the best baselines in most cases, which demonstrates the effectiveness of each of the five graph properties for graph pre-training. Moreover, putting five properties together often leads to the best performance. 
We have added a clear pointer to the important experimental findings in the main body of our paper, making it easier for readers to locate relevant content. Regarding the computational complexity of calculating graph properties, suppose the maximal number of nodes in subgraph instances is $|V|$. We calculate average degree, degree variance, density, network entropy and scale-free exponent of each graph, which cost $O(|V|)$, $O(|V|)$, $O(1)$, $O(|V|)$ and $O(|V|)$ respectively. The overall computational complexity is $O(|V|)$. More importantly, these graph properties only need to be computed once before pre-training. Therefore, the computational overhead is relatively low. >W3: There is a spelling error in the title, 'Prespective' should be corrected to 'Perspective'. Thank you for your meticulous attention in identifying the spelling error. We have now rectified the mistake, changing 'Prespective' to 'Perspective' in the title of the revised manuscript. >Q2: Why did the performance of the "dd" dataset decrease after fine-tuning compared to the frozen parameters of APT in table 2? Thanks for your careful reading of our paper. This phenomenon could happen since "graph pre-train and fine-tune" is an extremely complicated non-convex optimization problem. Actually the observation that fine-tuning deteriorates the performance has been made in previous work; see, e.g., [2]. We have included this discussion in the revised manuscript. >Q4: In existing works, is there any work that defines model uncertainty in the representation space? And what is the difference between theirs and this work? Thanks for your valuable insights to improve our work. After conducting a comprehensive literature review, we find that the majority of existing works define uncertainty in the label space, such as taking the uncertainty as the confidence level about the prediction [5-8]. Only a few works define uncertainty in the representation space [3,4]. 
In [3], uncertainty is measured based on the representations of an instance's nearest neighbors with the same label. However, this approach requires access to the label information of the neighbors, and thus cannot be adapted in pre-training with unlabeled data. [4] introduces a pretext task for training a model of uncertainty over the learned representations, but this method assumes a well-pre-trained model is already available. Such a post processing manner is not applicable to our scenario, because we need an uncertainty that can guide the selection of pre-training data during pre-training rather than after pre-training. We have included a detailed discussion of these related works in the revised manuscript. [1] Data-centric Artificial Intelligence: A Survey. Arxiv. [2] Fine-tuning can distort pre-trained features and underperform out-of-distribution. ICLR. [3] Density estimation in representation space to predict model uncertainty. EDSMLS. [4] A Simple Framework for Uncertainty in Contrastive Learning. Arxiv. [5] A Survey of Uncertainty in Deep Neural Networks. Arxiv. [6] Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness. NeurIPS. [7] Dropout as a bayesian approximation: Representing model uncertainty in deep learning. ICML. [8] Single-model uncertainties for deep learning. NeurIPS. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I notice that there have been some recent papers about data-centric GNNs. I suggest to keep the related work and discussion up-to-date if accepted. I will raise my score to 7. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your support of our work! We sincerely value your insightful suggestions, and we will incorporate the suggested revisions into our paper.
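As a minimal illustration of the $O(|V|)$ property computations described in the rebuttal above, here is a hedged sketch in pure Python. The function name is ours, and the exact definitions of network entropy (here, Shannon entropy of the normalized degree sequence) and the scale-free exponent (here, a maximum-likelihood power-law fit) are assumptions, since the rebuttal does not spell out the formulas:

```python
import math

def graph_properties(adj):
    """Compute the five graph properties from an adjacency list
    {node: set_of_neighbors}; each quantity is a pass over the
    degree list, hence O(|V|) per graph."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    n = len(degrees)
    avg_deg = sum(degrees) / n
    deg_var = sum((d - avg_deg) ** 2 for d in degrees) / n
    density = sum(degrees) / (n * (n - 1)) if n > 1 else 0.0  # 2m / n(n-1)
    # Assumed definition: Shannon entropy of the normalized degree sequence.
    total = sum(degrees)
    entropy = -sum((d / total) * math.log(d / total) for d in degrees if d > 0)
    # Assumed estimator: maximum-likelihood power-law (scale-free) exponent.
    d_min = min(d for d in degrees if d > 0)
    gamma = 1.0 + n / (sum(math.log(d / d_min) for d in degrees if d > 0) + 1e-12)
    return avg_deg, deg_var, density, entropy, gamma
```

Each property is a single linear scan, consistent with the stated $O(|V|)$ overall cost, and the values can be cached once before pre-training.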
Summary: This paper proposes a novel approach to pre-training graph neural networks (GNNs) called the data-active graph pre-training (APT) framework. This framework introduces a unique method that involves a graph selector and a pre-training model. These two components work together in a progressive and iterative way to choose the most representative and instructive data points for pre-training. Strengths: 1. The key insight of the paper is that using fewer, but carefully selected data can lead to better downstream performance compared to using a massive amount of input data. This approach challenges the common practice in machine learning of using large datasets for pre-training, a phenomenon the authors refer to as the "curse of big data" in graph pre-training. 2. The paper is well-motivated, well-written and easy to understand. 3. The experimental results demonstrate that the proposed method outperformed GCC, which is the previous SOTA and a direct ablation of the proposed APT framework. Weaknesses: 1. The proposed method is incremental. The proposed “predictive uncertainty” is adapted from curriculum learning, and “graph properties” are common statistics. 2. The proposed method is not able to handle node features. It only provides structural information, which does not align with most real-world scenarios. As a result, performance is not comparable with a supervised model trained on node features. 3. The node classification performance on homophily graphs (which are an important family of graphs) is much lower than ProNE. The improvement in graph classification tasks seems limited. 4. Selection based on graph properties might cause potential test data leakage because the graph properties are selected according to test performance (as shown in Figure 2). Technical Quality: 3 good Clarity: 3 good Questions for Authors: In line 67, section 2, the authors present an example of a triangle to show what kind of knowledge the pre-trained model can learn. 
However, the proposed subgraph instance discrimination task can break down that knowledge. For example, ego-graph sampling can break down the triangle. Can the authors provide some insights or justification for this issue? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 16eq, We really appreciate your insightful comments. We have followed closely the suggestions, and made clarifications and revisions accordingly. We hope the revised and newly provided content could help to further strengthen our work. Below is a point-by-point response. >W1: predictive uncertainty from curriculum learning We appreciate the reviewer's careful evaluation of our proposed method. We elucidate three significant differences between curriculum learning and our uncertainty: (1) The uncertainty in most curriculum learning is built on label space for supervised learning [1-4], while we define the uncertainty in representation space and further establish its theoretical connection with the label space. (2) In curriculum learning, samples are individually selected based on uncertainty without considering their joint distribution [1,2]. However, in graph pre-training, it is crucial to ensure that the chosen samples conform to a joint distribution that reflects data diversity and topological structures of real-world graphs. This is achieved through the incorporation of graph properties that characterize the statistics of joint distribution of samples. (3) Most traditional curriculum learning determines the order of learning samples before model training [1,2,5,6]. In our model, however, the uncertainty changes during different stages of model training. Thus, the next training sample is not determined until the end of the previous iteration. Therefore, our method has substantial differences from curriculum learning. > W2: Transferability of node features We appreciate the insightful point. Here we focus on the transferability of graph structures. It is also a common practice to transfer structure rather than node features in related research areas including graph pre-training [10], transfer learning [11] and graph domain adaptation [7-9]. 
This is because, in most cases, node features of different graphs do not overlap or have little overlap, making them not necessarily transferable across pre-training and downstream data. For instance, when nodes in different graphs represent different types of entities, it renders the node features completely irrelevant. Even when nodes are of the same entity type across graphs, the dimensions/meanings of node features can vary significantly, leading to misalignment. After all, the transferability of node features in graph pre-training is a largely unexplored area, and needs further research attention. >W3: Performance on homophily graphs and graph classification Thank you for bringing up this point, and we here delve into the specific cases in detail. Regarding the performance on homophily graphs, the design of ProNE is based on the homophily assumption that neighboring nodes share the same class. What's more, ProNE requires re-training on different graphs, making it non-transferable. On the contrary, we target at a general, transferable model free from any specific assumptions on graphs, and thus applicable to various settings, including both homophily and heterophily graphs. This could explain the better performance of ProNE on homophily graphs, and the superiority of our model in most other cases (particularly in situations where ProNE fails). We have included this discussion in the revised manuscript. Regarding graph classification performance. Admittedly, our model shows modest improvements in this aspect. Nevertheless, it is valid to emphasize the substantial efficiency gain: APT obtains comparable performance with training time 2.2 times shorter than the most competitive baseline GCC. This efficiency advantage holds practical significance and offers real-world application benefits. >W4: Potential data leakage Sorry for the misunderstanding. To clarify, each downstream dataset is split into a training set and a test set. 
When selecting graph properties, we only refer to the performance on the **training set** of downstream dataset, including the results in Figure 2(b). We remark that the test set of downstream dataset is completely excluded during model training and graph property selection, preventing any form of data leakage. We have revised the manuscript to avoid any ambiguity. >Q1: Sampling in subgraph instance discrimination task Thanks for the insightful comments. In the subgraph instance discrimination task, one subgraph is defined as the neighborhood of a node. Such a neighborhood typically has a complex structure, comprising repeated occurrences of various simple structural patterns (e.g., open triangles, closed triangles). Therefore, although the ego-graph sampling may break some simple structural patterns, most information in the complex neighborhood is still preserved. We have provided the discussions in the revised version. [1] Curriculum learning: A survey. IJCV. [2] A survey on curriculum learning. TPAMI. [3] When do curricula work? ICLR. [4] Dynamically composing domain-data selection with clean-data selection by "co-curricular learning" for neural machine translation. ACL. [5] On the power of curriculum learning in training deep networks. ICML. [6] Curriculum learning by transfer learning: Theory and experiments with deep networks. ICML. [7] Graph domain adaptation: A generative view. ArXiv. [8] Graph adaptive knowledge transfer for unsupervised domain adaptation. ECCV. [9] Graph Domain Adaptation via Theory-Grounded Spectral Regularization. ICLR. [10] GCC: Graph contrastive coding for graph neural network pre-training. SIGKDD. [11] Transfer learning of graph neural networks with ego-graph information maximization. NeurIPS. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you for the detailed reply and additional experiments. Most of my concerns have been addressed and I increased my score to 6. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your support for our work! We greatly appreciate your valuable suggestions, and we will incorporate the suggested revisions into our paper.
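The ego-graph sampling discussed in the Q1 response above can be sketched as a breadth-first search over an adjacency list. The toy construction below is ours (the paper's actual sampler may differ, e.g., random-walk based); it shows how a closed triangle that straddles the sampling boundary is broken at radius 1 but preserved at radius 2:

```python
from collections import deque

def ego_graph(adj, center, radius):
    """Induced subgraph on all nodes within `radius` hops of `center`
    (BFS over an adjacency list {node: set_of_neighbors})."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue  # do not expand past the sampling boundary
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    nodes = set(dist)
    return {u: adj[u] & nodes for u in nodes}

# A closed triangle {1, 2, 3} hanging off node 0:
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
one_hop = ego_graph(adj, 0, 1)  # keeps only {0, 1}: the triangle is broken
two_hop = ego_graph(adj, 0, 2)  # keeps all four nodes: the triangle survives
```

This matches the rebuttal's point: some simple patterns at the boundary are lost, but a larger neighborhood retains them.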
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning List-Level Domain-Invariant Representations for Ranking
Accept (spotlight)
Summary: The authors propose a new domain-invariant ranking method which enforces list-wise feature invariance as opposed to item-wise feature invariance. The authors prove a novel domain adaptation bound specifically for the ranking problem. Authors show strong empirical improvement over item-wise invariance methods. Strengths: (1) The theorem is new and not a trivial extension of the existing domain adaptation generalization bound from Ben-David (which is for classification). In particular, some new techniques are required to analyze the ranking problem. (2) The method is novel and results are good. Weaknesses: (1) My main concern is with the empirical improvement between ItemDA and ListDA. I remain unconvinced that this improvement is coming from the list-wise invariance and not from the improvement in discriminator architecture. The authors mention that ItemDA uses a three-layer MLP, while ListDA uses a stack of three transformer blocks. Is there any way to compare the two methods using the same discriminator architecture? (2) While the generalization bound and algorithm are both novel, the theory doesn't explain why ListDA is better than ItemDA. Personally, I do not see this as an issue. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I find the statement in line 141 confusing (even after looking at Appendix A.2). I would think that item-level invariance implies list-level invariance. For example, consider the case where $g$ was trained to output a domain-invariant feature representation of query-document pairs (at the item level). The ranker $h$ operates on these domain invariant features to form a list. How could the resulting list be different between the domains, if the ranker is constant and the output of $g$ is domain-invariant? Perhaps it would be helpful to elaborate on this point further in the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time, comments, and support! We hope that the following addresses the concerns: - **“the statement in line 141 [is] confusing”** Thank you for the question—this was also a conceptual hurdle for us when we worked on this paper (please kindly find the new figure attached to the global response that illustrates the difference between list-level and item-level alignment). In ranking problems (for information retrieval), the list is defined to be the q-d pairs with the same query q. Although in our experiment setup, $g$ computes a feature vector (in $\mathbb R^k$) for each q-d pair in isolation, it does not mean that the list structure is lost: the list feature representation is the set of features $\\{g(q,d_1),\cdots, g(q,d_\ell)\\}$ computed on q-d pairs with the same q. This means that the list-level feature distribution $\mu_Z$ is defined over the set of feature vectors computed on q-d pairs with the same q (the support of this distribution is the space of sets that contain $\ell$ length-$k$ vectors). This is in contrast to the item-level feature distribution $\mu_V$, which is defined over the feature vectors, ignoring the list structure (the support is $\mathbb R^k$, the space of length-$k$ vectors). Therefore, even if item-level distributions are aligned, after imposing the list structure on the items based on whether they have the same q (each list contains not one but $\ell$ items), the resulting list-level distributions are no longer necessarily aligned. Or, from a different perspective, there are strictly more ways of learning representations that are aligned at the item-level than there are at the list-level, and this is what we intended to demonstrate with the example in Appendix A.2. 
- **“The theory doesn't explain why ListDA is better than ItemDA”** While we did not prove a separation, we discussed (in the first point above) and showed that item-level alignment does not imply list-level alignment, which is required by the generalization bound for domain transfer measured in terms of ranking metrics. This suggests that ItemDA, which discards the list structure, is not sufficient for domain alignment on ranking problems, and ListDA is the more appropriate choice. --- Rebuttal 2: Title: response to rebuttal Comment: I thank the reviewers for the rebuttal and experiments. I am satisfied. My only concern regarding the discriminator architecture was addressed in the rebuttal experiments. They show that using the transformer architecture on the baseline provides no improvement. I thank the authors for answering my conceptual question regarding item-level alignment and list-level alignment. The figure in the attachment is especially helpful.
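The point argued in this thread, that item-level alignment does not imply list-level alignment, can be made concrete with a toy counterexample. The numbers below are ours, not the paper's Appendix A.2 example: two domains whose pooled item marginals coincide exactly, while their list-level distributions differ.

```python
from collections import Counter

# Toy features: lists of length 2 with binary item features.
source_lists = [(0, 1), (0, 1)]   # every source list mixes a 0 and a 1
target_lists = [(0, 0), (1, 1)]   # every target list is all-0 or all-1

def item_marginal(lists):
    """Item-level feature distribution: pool all items, ignore lists."""
    return Counter(x for lst in lists for x in lst)

def list_distribution(lists):
    """List-level feature distribution over (unordered) lists."""
    return Counter(tuple(sorted(lst)) for lst in lists)
```

Here `item_marginal` reports the same histogram for both domains (two 0s and two 1s), so an item-level discriminator sees aligned features, yet `list_distribution` separates the domains perfectly.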
Summary: This paper focuses on domain adaptation in the context of ranking problems. The authors propose a novel approach called list-level alignment to learn domain-invariant representations for ranking tasks. They demonstrate the benefits of this approach through theoretical analysis and empirical experiments. Existing domain adaptation methods for ranking, particularly invariant representation learning, have been sporadically used and lack theoretical guarantees. Given the limitations of previous approaches, the authors aim to address the conceptual discrepancy between item-level alignment and the intrinsic list structure of ranking problems. They propose list-level alignment as a solution to learn higher-level invariant representations on rank lists. This approach not only provides a theoretical foundation for domain adaptation in ranking but also achieves better empirical performance. Strengths: I think this problem setting is interesting and practical. I have the following views on this paper's strengths part. From the perspective of problem and method, this paper addresses the challenge of training ranking models in domains with limited annotated data. It introduces domain adaptation as a transfer learning technique and highlights the need for invariant representation learning in the ranking literature. The authors propose a new approach called "list-level alignment" that preserves the list structure of ranking problems and achieves higher-level invariant representations on the lists. The benefits of this approach include the establishment of the first domain adaptation generalization bound for ranking and better empirical performance on unsupervised domain adaptation tasks. Experimentally, the authors conducted experiments on unsupervised domain adaptation tasks to evaluate the performance of their proposed approach, list-level alignment. 
The proposed approach achieved better empirical performance on unsupervised domain adaptation tasks compared to existing methods. The approach also established the first domain adaptation generalization bound for ranking, providing theoretical guarantees for its implementation. Weaknesses: I have the following views on this paper's weakness part. - The paper acknowledges that existing domain adaptation methods for ranking lack theoretical guarantees, and it claims to have theoretical foundations for its proposed approach of list-level alignment. It's already a good contribution. However, the contribution of this paper in the theoretical aspect may be considered weak, since the contribution of this paper can be seen as an extension of existing domain adaptation theorems[1] applied to the ranking problem. Correct me if I misunderstand it. - One curious aspect here is that while the paper discusses domain adaptation (DA) in the ranking problem, the experiments are conducted on reranking. In the context of reranking, which involves reordering a list of candidate documents generated by an initial retrieval model in response to a search query. However, it does not directly address the fundamental ranking problem. This limitation of the paper should be further discussed to provide clarity. [1] Ben-David, Shai, et al. "Analysis of representations for domain adaptation." Advances in neural information processing systems 19 (2006). Technical Quality: 3 good Clarity: 3 good Questions for Authors: In my first point at my weakness part, if I misunderstand the theoretical contributions, the author could correct me. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, it's properly stated in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time, comments, and support! We hope that the following addresses the concerns: - **“the theoretical aspect may be considered weak… can be seen as an extension of existing domain adaptation theorems”** The result established in [1] is in terms of accuracy for binary classification problems, not ranking, and we point out that their proof technique cannot be extended to get a bound in terms of ranking metrics (e.g., NDCG, MRR) for ranking problems, due to technical difficulties that arise from the problem setup for ranking. For instance, the tightness of the bound in [1] depends on the uniqueness of the optimal Bayes classifier, but the optimal ranker/scorer on ranking problems are generally not unique, as discussed in Section 4. In addition, our bound can only be obtained after recognizing the need for list-level representation alignment, rather than item-level alignment as done in prior work on (ranking) domain adaptation. This necessary conceptual leap—which enabled both the theoretical result and the empirical improvements—is also our main theoretical contribution. - **“the experiments are conducted on reranking… does not directly address the fundamental ranking problem”** In Appendix D, we performed experiments on the Yahoo! Learning to Rank (LETOR) Challenge v2.0 dataset, a web search ranking task on numerical data, where the lists to be ranked are directly defined by the dataset (as opposed to obtained from a retrieval stage), hence the setting is a “fundamental ranking problem”. The results are consistent with those from the passage reranking experiments and also support our findings in the main sections. Nevertheless, our theory applies to any data/problem setting that involves lists of items ($x$) and scores ($y$) with the metrics being ranking metrics (e.g., NDCG, MRR), and this includes both passage reranking and Yahoo! LETOR. 
The only difference between them, in our opinion, is how the raw data is stored (the former as a corpus of queries and documents; the latter directly stores the lists) and how the lists are formed (the former uses a retriever to form the lists, i.e., the retriever, which we do not adapt, is treated as part of the problem setup; the latter as-is).
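Since the rebuttals in this thread repeatedly measure transfer in terms of ranking metrics such as NDCG@k and MRR, a minimal sketch of those metrics may help. This uses the linear-gain DCG variant; the paper's implementation may instead use the exponential $2^r-1$ gain, and the function names are ours:

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain of a ranked relevance list (linear gain)."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """Normalized DCG: DCG of the ranking divided by the ideal DCG."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevant_flags, k):
    """Reciprocal rank of the first relevant item within the top k."""
    for i, rel in enumerate(relevant_flags[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0
```

For example, a list already sorted by relevance scores 1.0 under NDCG, while placing the only relevant passage at rank 3 gives an MRR of 1/3.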
Summary: This paper proposes to learn domain-invariant representations for ranking. In contrast to prior works that typically consider item-level alignment, they introduce the concept of list-level alignment to learn higher-level invariant representations on the lists. The domain adaptation generalization bound is explicitly established and extensive experiments on benchmark datasets reveal the effectiveness of the proposed approach. Post rebuttal: my major concerns have been adequately addressed, and thus I've decided to change my score from 4 to 5. Strengths: - Domain adaptation for ranking problems is significant but receives scant attention. - The paper in general is well-motivated and presents major challenges that need to be overcome to address DA for ranking. A new paradigm termed list-level alignment is proposed. - They evaluate and ablate the method in multiple benchmark datasets. Weaknesses: - The proposed alignment relies on the traditional domain adversarial training, offering no significant advancements. What are the differences between the proposed alignment and prior adversarial alignment, and why the proposed list-level alignment is preferable in the context of ranking problems? The primary distinctions between list-level alignment and other paradigms are not well-articulated. - According to the reported results, the proposed ListDA shows marginal improvements over baseline methods in terms of MAP. - The connection between the theory and the proposed method is not sufficiently clear and precise, leaving readers uncertain about their relationship. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time, comments, and support! We hope that the following addresses the concerns: - **“The proposed alignment relies on the traditional domain adversarial training, offering no significant advancements. What are the differences between the proposed alignment and prior adversarial alignment”** The key distinction between adversarial training for our list-level invariance and prior work’s item-level invariance is the distribution we try to align. For item-level invariance (e.g., ItemDA), we try to align the distribution of item-level representations, while for list-level invariance, we treat the entire list as a whole, derive a list-level representation, and aim to align the distribution of the list-level representation (please also kindly find the figure in our global response). Another distinction lies in the choice of the discriminator (lines 130–140). ListDA requires list-level invariance, which means that the input of the discriminator is list-like. For modeling such data, we used a transformer model architecture for our ListDA discriminator; specifically, the transformer does not have positional encoding, so it is compatible with the permutation-invariance property of ranking data. In contrast, there is more freedom in the choice of discriminator for item-level domain alignment, and most prior work uses MLP. In short, our contribution is to provide an appropriate alignment objective for ranking problems—with rigorous justification (Theorem 4.6), and our focus is not on the optimization side (we note that besides adversarial training, invariant representations can be learned using other methods, including optimal transport [1, 2]). We adapted adversarial training for our use case because of its popularity in the literature. 
- **“why the proposed list-level alignment is preferable in the context of ranking problems”** Our generalization bounds in Corollaries 4.7–4.8 (derived from Theorem 4.6) state that, if the representations are aligned at the list-level, then target domain performance—in terms of ranking metrics (e.g., NDCG, MRR)—can be lower bounded by source domain performance. But we are not aware of any bound for ranking via item-level invariance; moreover, no such bound has been established for ranking in prior work, because the majority of them studied classification problems (which use accuracy as the metric). This means that item-level invariance does not have performance guarantees, but list-level invariance does so, and is hence more appropriate on ranking problems. This is verified in our experiments, where ListDA always outperforms ItemDA; in particular, because of the lack of guarantee, the latter can sometimes result in negative transfer. Another intuitive reason is that ItemDA does not take into account the list structure inherent in the data of ranking problems, and ListDA does; the distinction between list-level and item-level invariance is discussed on lines 141–149 (please also kindly find the new figure attached to the global response): list-level invariance is a stricter requirement than item-level invariance, suggesting that ItemDA is not sufficient for ranking problems. - **“The connection between the theory and the proposed method is not sufficiently clear and precise”** Corollaries 4.7–4.8 (derived from Theorem 4.6) hold when the representations are aligned at the list-level, and on the other hand, our ListDA method is proposed to learn list-level domain invariant representations. So the bounds provide guarantees for the method, and the method is aimed at maximizing the bounds. The two are connected by the concept of list-level invariance. We will elaborate upon the existing discussion on lines 203–209 and emphasize this connection. 
- **“ListDA shows marginal improvements over baseline methods in terms of MAP”** The ListDA method consistently achieves significantly better mean average precision (MAP) than ItemDA and zero-shot baselines on all three datasets. While the improvement over QGen PL may seem marginal, it is important to note that MAP is just one of several evaluation metrics used in our experiments, and ListDA still outperforms baseline methods on multiple evaluation metrics (including NDCG@k and MRR@10) across different datasets. We believe that the combination of these results supports the effectiveness of ListDA. Furthermore, the contribution of this paper extends beyond the proposed ListDA method. The list-level alignment framework provides a foundation for future research on domain adaptation in ranking tasks (e.g., more sophisticated discriminator structures). [1] Chen et al. A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization. ICML 2019. [2] Zhou et al. Iterative Alignment Flows. AISTATS 2022. --- Rebuttal Comment 1.1: Comment: I acknowledge the author's rebuttal and I am encouraging reviewers to comment on your reply. Best, AC
Summary: This paper proposes a novel approach for domain adaptation on list ranking problems. In significant contrast with previous works, they adopt aligning list-level representations directly instead of item-level representations, and argue for its relevance in improving on list-level metrics pertinent to the problem of ranking. They also devise a theoretical bound for the target error in terms of source error, a joint hypothesis error and the list-level domain divergence, under suitable assumptions. Results on the passage re-ranking task show their strength in practical applications. Strengths: - The paper is very well written, easy to follow and extremely well presented. - The idea of extending UDA approaches to list-level re-ranking problems is interesting and highly relevant. The ideas discussed in the approach might be relevant to problems beyond ranking (like object detection in computer vision which also uses unique evaluation metrics). - The empirical results on the passage re-ranking task are strong. - The simple idea of using a transformer discriminator to allow permutation invariance is impressive. Weaknesses: - A few choices, like the use of a transformer discriminator instead of an MLP, weren't evaluated or ablated empirically, although it makes sense intuitively. - The authors might also consider showing results on diverse ranking tasks beyond passage re-ranking, possibly in tasks like fine-grained computer vision as in [1], although I agree that this is much beyond the scope of this work. [1] Wang, Xinshao, et al. "Ranked list loss for deep metric learning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is there a typo in L98? Since both $\mathcal{L}_\text{rank}$ and $D(\mu_s,\mu_t)$ need to be minimized according to L100-102, but the negative sign in between does the opposite. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is self-sufficient, and presents good evaluation to convey the effectiveness of the proposed method. Although, I feel that the scope of this or a follow-up work could be broadened a bit, to also incorporate problems beyond passage ranking into the evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time, comments, and support! We hope that the following addresses the concerns: - **“results on diverse ranking tasks… like fine-grained computer vision”** Thank you for the suggestion, and it would be indeed interesting to see whether ListDA would be useful in (vision) tasks that reframe fine-grained classification as ranking. Regarding task diversity, in Appendix D, we also included experiments on the Yahoo! Learning to Rank (LETOR) Challenge v2.0 dataset, a web search ranking task on numerical data. The results are consistent with those from the passage reranking experiments and also support our findings in the main sections. - **“Is there a typo in L98?”** Thank you for pointing this out! It will be fixed in the revision. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for addressing my questions, and will keep my rating unchanged.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and the remarks! - **Figure on item-level alignment vs. list-level alignment.** We created a new figure (in the attached PDF) to better illustrate their difference as well as the point that the latter is a stronger requirement than the former. This figure will be included in our revision. - **(Reviewers 3wjd and nxwQ) “the use of transformer discriminator instead of MLP [for ListDA]” and “unconvinced that this improvement is coming from the list-wise invariance and not from the improvement in discriminator architecture”** First, we discuss the justifications behind this choice, then discuss the results of a new set of experiments where we used a transformer discriminator for ItemDA (ItemDA-transformer), as well as an MLP discriminator for ListDA (ListDA-MLP). We used transformers for ListDA not mainly for their representation power, but for their efficiency in modeling sequential and list/set-like data. It is indeed possible to use discriminators with an MLP architecture for ListDA, e.g., one design (ListDA-MLP) would be to concatenate all feature vectors (each $\in\mathbb R^k$) of a list (of length $\ell$) to form a long vector $\in\mathbb R^{\ell k}$, and feed it into the MLP. However, its optimization process will be inefficient because the MLP in this design does not recognize the permutation invariance property of rank lists (i.e., swapping two items in a list along with their scores does not alter the list), so it will be much slower to train (exponentially in $\ell$, for iterating through all orderings of the concatenation). Therefore, we used a transformer (without positional encoding) because it is invariant to permutations in the input sequence and is efficient to optimize, whereas the MLP design above is not appropriate for ListDA. Now, the transformer is indeed a more capable model than an MLP, but its additional modeling power is only useful on sequential data. 
Note that each input (unbatched) to the list-level discriminator (for ListDA) is a list of $\ell$ feature vectors in $\mathbb R^k$ as a sequence, whereas each input to the item-level discriminator (for ItemDA) is a single $\mathbb R^k$ vector. Therefore, if we were to use a transformer for item-level alignment (ItemDA), since each input is a sequence of length 1, the attention mechanism becomes pointless, and the transformer collapses to an MLP (with skip connections and layernorm); in this regard, the transformer is as expressive as the MLP for ItemDA (we also tried wider and deeper MLPs in our experiments, but they did not provide additional benefits). The transformer discriminator is only meaningful for ListDA, where the inputs are sequential. This is not to rule out the possibility of better discriminator architectures, but that is outside the scope of our work. - **(Continuing from above) Results on ItemDA-transformer and ListDA-MLP.** We ran additional experiments (ablations) to show that **using a transformer discriminator for ItemDA provides no additional benefits**, where we see that ItemDA-transformer has the same performance as ItemDA-MLP on all datasets:

| Robust04 | MAP | MRR@10 | NDCG@5 | NDCG@10 | NDCG@20 |
| --- | --- | --- | --- | --- | --- |
| ItemDA-MLP | 0.2822 | 0.8037 | 0.5822 | 0.5396 | 0.4922 |
| ItemDA-transformer | 0.2848 | 0.8018 | 0.5851 | 0.5406 | 0.4982 |
| ListDA | 0.2901 | 0.8234 | 0.5979 | 0.5573 | 0.5126 |

| TREC-COVID | MAP | MRR@10 | NDCG@5 | NDCG@10 | NDCG@20 |
| --- | --- | --- | --- | --- | --- |
| ItemDA-MLP | 0.3086 | 0.9080 | 0.8276 | 0.8142 | 0.7697 |
| ItemDA-transformer | 0.3088 | 0.8907 | 0.8287 | 0.8069 | 0.7691 |
| ListDA | 0.3184 | 0.9335 | 0.8693 | 0.8412 | 0.7985 |

| BioASQ | MAP | MRR@10 | NDCG@5 | NDCG@10 | NDCG@20 |
| --- | --- | --- | --- | --- | --- |
| ItemDA-MLP | 0.4781 | 0.6383 | 0.5315 | 0.5343 | 0.5604 |
| ItemDA-transformer | 0.4739 | 0.6420 | 0.5256 | 0.5312 | 0.5567 |
| ListDA | 0.5191 | 0.6666 | 0.5639 | 0.5714 | 0.5985 |

To show that **an MLP discriminator is inappropriate for ListDA**, we used the concatenation strategy described above and applied it on the Yahoo! LETOR dataset. We used two training strategies: in ListDA-MLP-1, both the ranking model and the discriminator are updated in every step (with gradient reversal), and in ListDA-MLP-10, the model is updated every 10 discriminator update steps. The results are as follows. Because of the difficulty in optimizing ListDA-MLP mentioned above (the complexity is exponential in list size, and the list size in the Yahoo! LETOR dataset can be as large as 139), it is hard to keep the discriminator at near-optimality (which is required in adversarial training), and this suboptimality leads to negative transfer. Note that ListDA-MLP-10 > ListDA-MLP-1 because the former's discriminator updates more frequently (and also takes ten times as long to train), suggesting that the negative transfer is due to inefficiencies in optimization.

| Yahoo! LETOR | MAP | MRR@10 | NDCG@5 | NDCG@10 | NDCG@20 |
| --- | --- | --- | --- | --- | --- |
| ItemDA | 0.5315 | 0.6717 | 0.7402 | 0.7708 | 0.8255 |
| ListDA-transformer | 0.5370 | 0.6771 | 0.7442 | 0.7735 | 0.8269 |
| ListDA-MLP-1 | 0.4265 | 0.5190 | 0.6385 | 0.6958 | 0.7740 |
| ListDA-MLP-10 | 0.5041 | 0.6480 | 0.7157 | 0.7508 | 0.8117 |

Pdf: /pdf/5b1133b31ab27dde8dc6bc9f0390e9398e55640e.pdf
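The permutation-invariance point above can be illustrated with a dependency-free sketch (not the authors' implementation; the identity query/key projections and the all-ones scoring head are placeholder assumptions): a single attention layer without positional encoding, followed by mean pooling, yields a list score that is unchanged under any reordering of the list.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scalars.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def self_attention(feats):
    # Toy single-head attention with identity Q/K/V projections.
    # Without positional encoding, permuting the input rows merely
    # permutes the output rows (permutation equivariance).
    out = []
    for q in feats:
        w = softmax([dot(q, k) for k in feats])
        out.append([sum(wi * k[d] for wi, k in zip(w, feats))
                    for d in range(len(q))])
    return out

def list_discriminator(feats):
    # Mean-pool the attended features, then score with a fixed linear
    # head (all-ones weights, a stand-in for a learned head).
    att = self_attention(feats)
    k = len(feats[0])
    pooled = [sum(row[d] for row in att) / len(att) for d in range(k)]
    return sum(pooled)

feats = [[0.1, 0.9], [0.7, 0.2], [0.4, 0.4]]
perm = [feats[2], feats[0], feats[1]]
# The score is invariant to the ordering of items in the list.
assert abs(list_discriminator(feats) - list_discriminator(perm)) < 1e-9
```

More generally, any permutation-equivariant encoder followed by a symmetric pooling (mean, sum, max) gives an invariant score, which is what makes the transformer discriminator a natural fit for rank lists.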
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Efficient Hyper-parameter Optimization with Cubic Regularization
Accept (poster)
Summary: This paper proposes a stochastic cubic regularization type algorithm for hyperparameter optimization that does not depend on hyper-gradients. Theoretical analysis shows that the proposed method can converge to approximate second-order stationary points with lower sample complexity than that of first-order optimization methods, which can only find first-order stationary points. Experiments demonstrate the effectiveness of the proposed method using both synthetic and real data. Strengths: 1. This paper is clearly written and provides a new algorithm to solve nonconvex bilevel hyperparameter optimization problems (with stochastic relaxation). 2. In this setting, solving the cubic subproblem takes little time, as the cubic problem dimension is usually very small. 3. Theoretical analysis shows that the proposed cubic algorithm achieves lower sample complexity than that of first-order methods. Weaknesses: 1. The algorithm design is a direct application of cubic regularization, and therefore is not novel. 2. It is not clear what the technical novelty in the convergence analysis is compared to the analysis of inexact cubic regularization. The author mentions that the constructed $g$ and $B$ are not unbiased estimators of the gradient and Hessian. How does this challenge and affect the technical proof? 3. There are existing works on finding second-order stationary points of bi-level optimization, e.g., ``Efficiently Escaping Saddle Points in Bilevel Optimization''. Please discuss and compare with it. 4. In the experiments, many curves are piece-wise flat (constant). Does that mean the hyperparameters do not change in those iterations? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see the previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: see the previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments on both the strengths and weaknesses of our paper. For the questions raised in the weakness part, here are our answers to each of them: > Q1: The algorithm design is a direct application of cubic regularization, and therefore is not novel. A1: Our algorithm differs from existing works in the following respects: 1) in implementation, we need to compute the gradient and Hessian from solving the lower-level problem, as in step 8 of Algorithm 1; 2) in theory, we consider optimization with an inexact gradient and Hessian, which is also different from directly using cubic regularization. > Q2: It is not clear what the technical novelty in the convergence analysis is compared to the analysis of inexact cubic regularization. A2: Thank you for raising this question. Our convergence analysis is different from existing analyses of inexact cubic regularization methods, as the inexactness here comes from solving the lower-level problem. As such, we need to bound the error of the estimated gradient and Hessian from the solution to the lower-level problem, while existing analyses directly assume error bounds on the estimated gradient or Hessian. > Q3: The author mentions that the constructed g and B are not unbiased estimators of the gradient and Hessian. How does this challenge and affect the technical proof? A3: As g (resp. B) is not an unbiased estimator of the gradient (resp. Hessian), the estimation error now contains two parts: stochastic noise and inexactness in solving lower-level problems. These parts need to be separately bounded in our proof. We will mention these differences in the final version of our paper. > Q4: There are existing works on finding second-order stationary points of bi-level optimization, e.g., ``Efficiently Escaping Saddle Points in Bilevel Optimization''. Please discuss and compare with it. A4: Thank you for pointing this out. 
We are aware of this paper; it is based on hyper-gradients for solving bi-level optimization problems. As such, it cannot be directly applied to our setting here, as the hyper-gradient is unavailable due to discrete hyper-parameters or non-differentiable evaluation metrics (e.g., MRR in the knowledge completion task, see Section 4.2.1). We will add some discussion of this paper in our final version. > Q5: In the experiments, many curves are piece-wise flat (constant). Does that mean the hyperparameters do not change in those iterations? A5: In our experiments, we report the best validation performances of the found hyper-parameters (similar figures can be found in Figure 1 of [1] or Figure 2 of [2]). When curves stay piece-wise flat, it means that the model with the newly sampled hyper-parameters does not obtain better performance than the current optimum. We will add more descriptions to avoid any misunderstandings. [1] Initializing bayesian hyperparameter optimization via meta-learning, Proceedings of the AAAI Conference on Artificial Intelligence. [2] SMAC3: A versatile Bayesian optimization package for hyperparameter optimization, The Journal of Machine Learning Research. --- Rebuttal Comment 1.1: Title: comment Comment: I would like to thank the authors for their detailed response, which has addressed my technical questions. After reading other reviewers' comments and the authors' response, I decided to keep my current score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We are pleased to know that our responses have addressed your previous questions. We are also willing to engage in more discussions if you have any other questions.
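For readers unfamiliar with the stochastic relaxation discussed in Q3/A3 above, a minimal sketch of a score-function (log-derivative) gradient estimator is given below. The Bernoulli parameterization and the stand-in objective `f` are illustrative assumptions, and the lower-level training loop whose inexact solution biases the paper's g and B is omitted, so this toy estimator is unbiased.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def score_function_grad(theta, f, k=200000, seed=0):
    # Estimate d/dtheta E_{z ~ p_theta}[f(z)] with the log-derivative trick:
    # g = (1/k) * sum_i f(z_i) * d/dtheta log p_theta(z_i),
    # here for z ~ Bernoulli(sigmoid(theta)), where
    # d/dtheta log p_theta(z) = z - sigmoid(theta).
    rng = random.Random(seed)
    p = sigmoid(theta)
    total = 0.0
    for _ in range(k):
        z = 1 if rng.random() < p else 0
        total += f(z) * (z - p)
    return total / k

theta = 0.3
f = lambda z: 1.0 if z == 1 else 3.0   # stand-in for the validation metric
p = sigmoid(theta)
exact = p * (1 - p) * (f(1) - f(0))    # analytic d/dtheta E[f(z)]
est = score_function_grad(theta, f)
```

Note that no gradient of `f` is needed, which is why this relaxation handles discrete hyper-parameters and non-differentiable metrics; the bias discussed in the rebuttal enters only once `f` is itself computed from an inexactly solved lower-level problem.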
Summary: The authors propose a cubic regularization scheme for the outer-loop optimization of bilevel optimization problems. Cubic regularization is very appealing since it can "avoid saddle points". Theoretically, the authors extend the convergence results of cubic regularization to inexact gradients to make it work for bilevel optimization. Empirically, the authors show that their method achieves better performance on bilevel optimization tasks. Interestingly, they investigate the eigenvalues of the Hessian. Strengths: The idea is very interesting and tackles the very important and hard problem of finding a proper outer procedure for bilevel optimization. Avoiding saddle points seems to be a very appealing property! Weaknesses: However, it feels like the writing of the paper could be polished (there is an unmerged commit at line 264 in the supplementary material). A lot of statements are not properly backed. I am not sure I properly understand the interest of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - "These methods [Bayesian optimization, Reinforcement learning] can easily get trapped in saddle points with poor performance" Could you provide references for this statement? I am not sure I understand why a Bayesian optimization algorithm would be stuck in a saddle point. From what I understand from the paper, hyper-gradient methods with vanilla first-order methods can be trapped in saddle points, and cubic regularization is here to help. - What is the point of Lemma 3.1? - The definition of what is called a saddle point could be recalled (i.e., negative eigenvalues of the Hessian?) - How does the proposed method compare to the use of l-BFGS once the hyper-gradient has been computed? (as done in [1] for instance) - Experimental results. In my experience, these kinds of results are usually very sensitive to the number of inner steps used to solve the optimization problem. 
In particular, if the estimation of the inner problem is crude, the value of the hyper-gradient can be poorly estimated. Moreover, to my knowledge, quasi-second-order methods, combined with approximated gradients, can yield poor results in practice. I did not see this crucial hyper-hyper-parameter reported. - Figure 1: could you detail how exactly this "relative eigenvalue" metric is computed? It seems crucial to make the point of the authors, but it is not detailed. I guess this is the ratio of the eigenvalues over the maximal eigenvalue. - Section 4. Could you write exactly the maths of each bilevel problem that is solved? At least in the appendix? - I would love to see if the proposed algorithm can improve out-of-distribution generalization in meta-learning settings, like [2]. [1] Deledalle, C.A., Vaiter, S., Fadili, J. and Peyré, G., 2014. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM Journal on Imaging Sciences. [2] Lee, K., Maji, S., Ravichandran, A. and Soatto, S., 2019. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10657-10665). Typos: - Figure 1: eigen value >> eigenvalue Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and for finding our idea interesting. Here are our responses to the weaknesses and questions. ### Weakness > W1: A lot of statements are not properly backed. A1: The proposed method is backed by (i) theoretical analysis, which demonstrates that it can converge faster than existing gradient-based methods and converge to second-order approximate stationary points; and (ii) empirical results, where experiments on both synthetic and real data sets demonstrate its superior performance compared to existing hyper-parameter optimization methods. We are also willing to provide more support for any statements in our paper. > W2: I am not sure I properly understand the interest of the proposed method. A2: The proposed method mainly targets hyper-parameter optimization problems where the hyper-parameters are discrete or the performance metrics are non-differentiable. Please also see our response to Q1 in the common responses for the major contributions of our work. ### Questions > Q3: References for the statement: "These methods [Bayesian optimization, Reinforcement learning] can easily get trapped in saddle points with poor performance" A3: Thank you for pointing this out. The problem of getting stuck in saddle points applies only to gradient-based reinforcement learning. We will update this part in the final version to avoid such ambiguity. > Q4: What is the point of Lemma 3.1? A4: Initially, the upper-level objective of hyper-parameter optimization problems is to find the best hyper-parameter $z$ (i.e., optimize $z$). In Eq (1), we transform the upper-level objective to optimize $\theta$ instead of $z$. The point of Lemma 3.1 is to justify that such a transformation is reasonable, as optimizing $\theta$ is equivalent to optimizing $z$. We will make this point clearer in the final version. > Q5: The definition of what is called a saddle point could be recalled (i.e., negative eigenvalues of the Hessian?) A5: Thank you for mentioning this. 
We will add this definition next to the definition of second-order stationary points in the final version. > Q6: Compare to l-BFGS A6: We have added experiments using L-BFGS and included them in the revised figure. Please check the PDF file in the general response. The performance of L-BFGS is similar to Newton's method in our experiments. We will also add the revised figures to the final version. > Q7: Experimental results. In my experience, these kinds of results are usually very sensitive to the number of inner steps used to solve the optimization problem. In particular, if the estimation of the inner problem is crude, the value of the hyper-gradient can be poorly estimated. Moreover, to my knowledge, quasi-second-order methods, combined with approximated gradients, can yield poor results in practice. I did not see this crucial hyper-hyper-parameter reported. A7: For the experiment on synthetic data, the number of inner steps is set to 40 by default, and we compare the impact of using different numbers of inner steps in Section 4.3. Results demonstrate that using only a few steps in the inner loop yields poorly estimated hyper-gradients and worse final performance. When the number of inner steps is too large, it leads to huge computation cost. We will add more descriptions of the number of inner steps in the real-world experiments in the final version. > Q8: Figure 1: could you detail how exactly this "relative eigenvalue" metric is computed? It seems crucial to make the point of the authors but it is not detailed. I guess this is the ratio of the eigenvalues over the maximal eigenvalue. A8: Thank you for pointing this out. Your understanding is correct. "Relative eigenvalue" refers to the ratio of the eigenvalues over the max eigenvalue. We will add more descriptions of it in the final version. > Q9: Section 4. Could you write exactly the maths of each bilevel problem that is solved? At least in the appendix? 
A9: Here we give the mathematical formulation for each problem in our experiments: (1) For the experiment on synthetic data, the mathematical formulation is given by Eq (1) in Section 3.1, where $z$ corresponds to the feature mask, $p_{\theta}$ uses the sigmoid function, and details can be found in Section 4.1. We use AUC as the validation evaluation metric. (2) For hyper-parameter optimization for knowledge graph completion, the mathematical formulation is also given by Eq (1), where $z$ corresponds to a set of categorical hyper-parameters (details in Table 1), and $p_{\theta}$ uses the softmax-like function defined in Section 4.2.1. We use MRR as the validation evaluation metric. (3) For schedule search on learning with noisy training data, the mathematical formulation can be written as $\theta^*=\arg\min_{\theta} J(\theta)$, s.t. $w^*(z)=\arg\min_{w} L_{tra}(w,R_z)$, where $J(\theta)=E_{z \sim p_{\theta}(z)}[M_{val}(w^*(R_z))]$. $R_z$ is the schedule function parameterized by Eq (7) in Section 4.2.2, and $z$ is the corresponding hyper-parameter. For $p_{\theta}$, we use a Beta distribution for each element of $\beta$ in Eq (7) and a Dirichlet distribution for $\alpha$ in Eq (7). The validation evaluation metric is accuracy. We will add these formulations for each problem to the Appendix in the final version. > Q10: I would love to see if the proposed algorithm can improve out-of-distribution generalization in meta-learning settings, like [2] A10: Thank you for raising this interesting application. As far as we know, most meta-learning problems deal with differentiable evaluation metrics and continuous hyper-parameters (e.g., model initialization). Therefore, they are not the main focus of our method. We will see if there are some problems in out-of-distribution generalization with non-differentiable metrics or discrete hyper-parameters, where we believe the proposed method should also lead to improvements. > Q11: Writing and typos. 
A11: Thank you for mentioning these errors. We will revise all of them in the final version of the paper. --- Rebuttal Comment 1.1: Title: Read the rebuttal Comment: I thank the authors for the clarifications, I still think this is a good paper and should be accepted, I will keep my score unchanged --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for your positive comments on our paper. We are also willing to engage in more discussions if you have any other questions.
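The "relative eigenvalue" metric clarified in A8 above (each eigenvalue divided by the largest one) can be made concrete with a small sketch; the closed-form 2x2 symmetric eigensolver and the example matrices are illustrative assumptions, not the paper's Hessians.

```python
import math

def sym2x2_eigs(a, b, c):
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    # via the closed form (trace/2) +/- sqrt(((a-c)/2)^2 + b^2).
    mean = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return [mean + r, mean - r]

def relative_eigenvalues(eigs):
    # "Relative eigenvalue": each eigenvalue divided by the largest one.
    top = max(eigs)
    return [e / top for e in eigs]

eigs = sym2x2_eigs(2.0, 1.0, 2.0)   # eigenvalues 3 and 1
rel = relative_eigenvalues(eigs)
# A negative relative eigenvalue (e.g. for [[1, 0], [0, -1]]) would
# indicate a negative-curvature direction, i.e. a saddle.
```

Plotting these ratios over iterations is one way to visualize whether the iterates pass near saddle regions, which is the point Figure 1 is described as making.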
Summary: This paper studies the problem of hyper-parameter optimization in the context of machine learning. Through stochastic relaxation, the problem can be formulated as bilevel optimization, and the authors propose to use cubic regularization to solve the optimization problem. It is shown that under some regularity conditions on the loss function and hyper-parameter distribution, the algorithm efficiently converges to an approximate second-order stationary point. Furthermore, experiments are conducted to verify the effectiveness of the proposed method. Strengths: 1. The paper is well-written and quite easy to read. The authors explain the motivations, method and results in a very clear way. 2. Extensive experiments are conducted to demonstrate the wide applicability of the proposed method. All experiment settings are explained in detail. Weaknesses: While the authors provide theoretical guarantees for the proposed method, it seems that these guarantees can be directly derived from known results on the convergence of cubic regularization methods. At the same time, because the problem considered is bilevel, some assumptions seem less natural. For example, the Lipschitz-Hessian assumption on $J(\theta)$ is quite difficult to verify in practice, since the expression of $J$ itself contains a minimization problem. I wonder if it is possible to obtain convergence results under more 'fundamental' assumptions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Assumption 3.5, the estimation of the gradient and Hessian may be biased. How does this affect the final convergence rate? 2. How does the proposed method compare with other methods that are not based on stochastic relaxation? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations and potential societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments on both the strengths and weaknesses of our paper. We would like to first answer the questions raised in the weakness part: > W1: While the authors provide theoretical guarantees for the proposed method, it seems that these guarantees can be directly derived from known results on the convergence of cubic regularization methods. A1: It is not trivial to obtain our theoretical analysis from known results on the convergence of cubic regularization methods. Since we are solving a bi-level optimization problem, the gradient and Hessian used in the cubic regularization method may be inexact. As such, we need to bound the error of the estimated gradient and Hessian from the solution to the lower-level problem, which is different from any known results to the best of our knowledge. We will make the technical novelty clearer in our next version. > W2: Some assumptions seem less natural. For example, the Lipschitz-Hessian assumption on $J(\theta)$ is quite difficult to verify in practice, since the expression of $J$ itself contains a minimization problem. I wonder if it is possible to obtain convergence results under more 'fundamental' assumptions. A2: Thank you for your suggestion. Recall that the definition of a Lipschitz Hessian is $\|\nabla^2 J(\theta) - \nabla^2 J(\phi)\| \le \rho \|\theta - \phi\|$ for any $\theta$ and $\phi$, and from Proposition 3.3, we have $\nabla^2 J(\theta) = \int H^*(\theta, z) p_{\theta}(z) dz$, where $H^*(\theta, z)$ depends on the objective value, $\nabla \log p_{\theta}(z)$ and $\nabla^2 \log p_{\theta}(z)$. Then the Lipschitz-Hessian assumption can be deduced by assuming 1) a bounded objective value; and 2) bounded differences of $\nabla \log p_{\theta}(z)$ and $\nabla^2 \log p_{\theta}(z)$ for different $\theta$, where similar assumptions are made in Assumption 3.5(ii). We will consider changing to more "fundamental" assumptions in the next version. 
### Answers to remaining questions > Q3: In Assumption 3.5, the estimation of the gradient and Hessian may be biased. How does this affect the final convergence rate? A3: In our proof, the estimation errors of the gradient and Hessian do affect the convergence rate, and inaccurate gradient or Hessian estimates slow down convergence. Nevertheless, the dependency is very complex, and we consider the worst-case performance in Theorem 3.6 to keep it simple. > Q4: How does the proposed method compare with other methods that are not based on stochastic relaxation? A4: In Section 4.2 (Experiments on Real-world Data), we compare the proposed method with other methods that are not based on stochastic relaxation, including random search, Bayesian optimization, Hyperband and reinforcement learning. In the two applications, the proposed method achieves the best performance. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the clear response to my questions, which has adequately addressed my concerns. I will keep my rating and still recommend accept. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for your questions and your affirmation of our paper. We are also willing to engage in more discussions if you have any other questions.
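To make the cubic-regularized step discussed in this thread concrete, here is a one-dimensional toy version of the subproblem (not the paper's solver; the grid-search minimizer and the example coefficients are assumptions for illustration). With negative curvature B < 0, the cubic model's minimizer moves away from the saddle at zero, which is the mechanism behind the second-order guarantees.

```python
def cubic_model(delta, g, B, rho):
    # 1-D cubic-regularized model:
    # m(delta) = g*delta + 0.5*B*delta^2 + (rho/6)*|delta|^3
    return g * delta + 0.5 * B * delta ** 2 + (rho / 6.0) * abs(delta) ** 3

def solve_cubic_subproblem(g, B, rho, lo=-10.0, hi=10.0, n=200001):
    # Dense grid search over [lo, hi]. Exact solvers exist, but in
    # hyper-parameter optimization the subproblem dimension is tiny,
    # so even a brute-force toy solver like this one is cheap.
    best = min(range(n),
               key=lambda i: cubic_model(lo + (hi - lo) * i / (n - 1),
                                         g, B, rho))
    return lo + (hi - lo) * best / (n - 1)

# Saddle-like model: negative curvature B = -1 pushes the step away from 0.
step = solve_cubic_subproblem(g=-1.0, B=-1.0, rho=2.0)
# Stationarity for delta > 0 reads -1 - delta + delta^2 = 0,
# so step should be close to (1 + 5 ** 0.5) / 2.
```

Unlike a gradient step, the cubic term keeps the step bounded even when the quadratic model is unbounded below, which is why the iterates can escape saddle points instead of stalling at them.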
Summary: This paper proposes a new optimization-based technique for hyper-parameter tuning using the Adaptive Cubic Regularized Newton method (ARC) based on stochastic relaxation. The authors highlight the limitations of existing hyper-parameter optimization algorithms. They show that their suggested method achieves better convergence guarantees: existing methods only prove convergence to first-order stationary points, while their work also achieves second-order optimality conditions. The authors also run some numerical experiments to show that the algorithm outperforms existing methods in terms of faster convergence. Strengths: * A weakness of existing methods is clearly identified and solved. * The originality lies in being the first to use a second-order method (ARC) in a bi-level optimization problem for hyper-parameter tuning. * The paper is well written and the result can serve as the base of future work on using second-order methods for hyper-parameter tuning. * Some minimal numerical results are shown. Weaknesses: * The major contribution of this work should be explicitly stated in a section called “Our contributions”. * The authors use 'the curse of dimensionality' as an example of a disadvantage of existing methods; however, when explaining their proposed approach, they argue that even though they have to compute the full Hessian, this will not be an expensive operation since hyper-parameters are often low-dimensional. This contradicts the disadvantage mentioned earlier. * The authors didn’t compare their results against the globally optimal (if it exists) set of hyper-parameters in their numerical experiments. * In Algorithm 1, the authors didn’t explain how to optimize the lower-level objective in (1). * In Algorithm 1, the authors didn’t explain whether ARC is solved using an iterative approach or a factorization-based approach when solving the subproblems. * In their analysis, the authors do not take into account the cost of solving the ARC subproblems. 
* For the synthetic data experiments, the authors didn’t use any reference, and they didn’t explain why they didn't use real data for the problem of filtering useful features. * The details of the experiment parts are not enough for people to reproduce the results. For example, the authors don't state what machine learning models they used. * The proof is a bit tricky and it has some typos and errors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * A typo under Table 1 should be fixed. * In all figures, the x-axis is confusing. What do you mean by the number of trained models? Also, does this mean that the AUC on the y-axis is computed by averaging across all the trained models? * What machine learning models did you use for all the experiments? More details are needed; for example, for the classification, did you use a Decision Tree, SVM, Logistic Regression, etc.? Or in the image setup, which neural network models did you use? * In Figure 4, you can add (M) next to the upper-loop iterations so that it is easier to understand the plot and what parameters you are considering in your ablation study. Also add the unit for the relative training time. * The details about the experiments are important, as reproducibility of the work is a major concern. Even if you don’t add a new dataset or create a new machine learning model, there is still a need to reproduce your results and validate that. * The details of the experiments should include: what ML models are being trained, the environment, how to set up the datasets and access them, which packages are being used, the ARC solver package, etc. * The code of your work should also be attached to the submission, and later in your final paper, it is recommended to add the GitHub repository link. * Also, in your code, did you fix the seed so that people can get similar results to what you got? 
Comments about the proof: * I checked the proof carefully and most of the math seems correct; however, following the proof is a bit tricky, and I would suggest making it clearer. Also, it would help if you could give more explanation about the intuition behind those lemmas. * Some of the lemmas would better be referenced as facts, since you are using the results of previous works: Lemma B.1, Lemma B.2, Lemma B.4. * In Lemma B.1, it would help if you could refer to the Hessian as $\rho$-Lipschitz. * The prerequisite of Theorem 3.6 should appear on its own as an equation because you refer to it in multiple places. * In the proof of Proposition 3.2, there is a typo: there should be $\nabla \log$ in the last two lines of the proof. * In Lemma B.3, please elaborate on the steps, as it is not clear how you get rid of the summation term that appears in the definition of the approximate gradient $g^m$. Since the expectation is with respect to the probability $p_\theta$ and your summation is in terms of $1/k$, but $p_\theta$ isn't uniform, it is unclear how to interchange those terms. * In Lemma B.3, how did you get rid of the $\min$ term that appears in the prerequisite of Theorem 3.6, since you only used the first term from $\min(\mathrm{term}_1, \mathrm{term}_2)$? * In Lemma B.5, similar comment to Lemma B.3: how do you get rid of the summation term that appears in the definition of the approximate Hessian? Also, you only used the second term that appears in $\min(\mathrm{term}_1, \mathrm{term}_2)$ from the prerequisite of Theorem 3.6. * In Lemma B.5, in the first line of the proof you say "By definition, we have", but you didn't refer to any definition. * In Lemma B.5, I guess you have a typo in the equality that involves $K$: this should be $b_2^2$ instead of $c_2^2$. * In Lemma B.5, I guess you have a typo in the inequality bounding the norm of the difference between the approximate Hessian and the actual Hessian: $\| B^m - \nabla J(\theta^m) \|$ should be $\| B^m - \nabla^2 J(\theta^m) \|$. 
Also in this lemma, you use the $\ell_2$ norm and the spectral norm interchangeably, but it is not clear whether the math is adjusted at all steps, or whether you have some typos. * Lemma B.6 states well-known facts about ARC, so you can refer to some citations, in which case there is no need to do the proof. * In Lemma B.7, when you refer to both Lemmas B.3 and B.5, you mention the inequalities from those lemmas without taking into consideration that each one satisfies its inequality with a different probability. * In Lemma B.9, similar comment to the above when referring to Lemmas B.3 and B.5. * In Lemma B.9, the inequality that appears after the line "And from Lemma B.5, we will have" seems wrong, since as I understand it you are using the Cauchy-Schwarz inequality here, and it is not clear why you have $\|\delta^m\|^2$ and where the $1/2$ factor comes from. I guess the correct term should be $c_2 \sqrt{\rho \epsilon} \|\delta^m\|$. Then the proof needs to be updated. * In Lemma B.9, it is also not clear how you got the result in case 2 that $\tilde{c}(\delta^m) \le -\frac{1}{96} \sqrt{\epsilon^3 / \rho}$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Exposed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments on both the strengths and weaknesses of our paper. Here are our responses to your comments on weaknesses and questions: ## Weaknesses > W1: The major contribution of this work should be explicitly stated in a section called "Our contributions". A1: Please see our response to Q1 in the general response. > W2: The authors use 'the curse of dimensionality' as an example of a disadvantage for existing methods; however, when explaining their proposed approach, they argue that even though they have to compute the full Hessian, this will not be an expensive operation since hyper-parameters are often low-dimensional. This contradicts the disadvantage mentioned earlier. A2: Thank you for pointing this out. As the hyper-parameters are often low-dimensional, it may not be appropriate to cite the "curse of dimensionality" as a disadvantage of existing methods. Nevertheless, even with low-dimensional hyper-parameters, existing methods still suffer from slow convergence, as is also demonstrated in the experiment section. We will revise this part to avoid such contradictions in the final version. > W3: Compare against the globally optimal set of hyper-parameters in experiments. A3: For experiments on real-world data, it is hard to obtain the globally optimal set of hyper-parameters in advance, so here we only consider the experiment on synthetic data, where the globally optimal set of hyper-parameters masks all the noise and retains all the useful features. We have added its performance to the revised Figure 1(a), which you can check in the PDF file in the general response. From the revised figure, we can see that the best validation AUC obtained by the proposed method is close to the global optimum, while all the other baseline methods have much larger gaps. > W4: In Algorithm 1, the authors didn't explain how to optimize the lower-level objective in (1). A4: Our proposed method is compatible with any optimizer for the lower-level objective, e.g. 
SGD, Adam, or AdamW. In our experiments, we use AdamW for the experiment on synthetic data and Adam for the experiments on real-world data. We will make this clearer in the final version. > W5: In Algorithm 1, the authors didn't explain whether ARC is being solved using an iterative approach or a factorization-based approach when solving the subproblems. A5: In Algorithm 1, ARC is solved using an iterative approach, similar to the method described in Section 6 of [1]. We will make this clearer in the final version. > W6: In their analysis, the authors are not taking into account the cost of solving the ARC subproblems. A6: Thank you for raising this question. In our experiments, the number of hyper-parameters is often much less than 100, while the models in the lower-level problems have millions of parameters. As such, it takes much longer to solve the lower-level problems than to solve the ARC subproblems. It is also empirically verified in Section 4.4 that the time cost of updating $\theta$ (which includes the time cost of solving the ARC subproblems) is negligible in most cases. Therefore, we focus on the time cost of solving the lower-level problems, rather than the ARC subproblems. > W7: For the synthetic data experiments, the authors didn't use any reference, and they didn't explain why they didn't use real data for the problem of filtering useful features. A7: We will add citations on the feature selection problem in the final version, e.g., [2], [3]. As we already have experiments on real data in Section 4.2, the use of synthetic data also enables us to compare against the global optimum, which may be hard to obtain with real-world data. > W8: Experiment details. A8: Please see our response to Q3 in the common responses. > W9: Typos and errors in the proof. A9: We will thoroughly revise our paper and appendix in the final version. [1] Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. 
Mathematical Programming, 2011. [2] An introduction to variable and feature selection. JMLR, 2003. [3] Feature selection using stochastic gates. ICML 2020. ## Questions > Q10: Typo under Table 1. A10: Thank you for pointing this out. We will revise it in the final version. > Q11: In all figures, the x-axis is confusing. What do you mean by the number of trained models? Also, does this mean that the AUC on the y-axis is computed by averaging across all the trained models? A11: Thank you for pointing this out. We have changed the x-axis to running time; you can check the revised figures in the PDF file in the general response. The y-axis plots the best validation performance obtained by that time. We will make this clearer in the next version. > Q12: Experiment details. A12: Please see our response to Q3 in the common responses. > Q13: In Figure 4, you can add (M) next to the upper-loop iterations so that it is easier to understand the plot and which parameters you are considering in your ablation study. Also, add the unit for the relative training time. A13: Thank you for your suggestions. We have uploaded revised figures on running time in the PDF file in the general response. We will also add notations to the figures on upper-loop iterations. > Q14: The code of your work should also be attached to the submission, and later in your final paper, it is recommended to add the GitHub repository link. A14: Thank you for your suggestion. The code still needs some cleanup to make it easier to use. We will make our code public on GitHub along with the final version. > Q15: Random seed in our code. A15: We have fixed the seeds in all our experiments. We will explicitly mention this in the next version of our paper. > Q16: Comments about the proof. A16: Thank you for your thorough comments. We have checked our proof and found that these typos/errors do not affect the correctness of the key theoretical results. We will make all these revisions in the final version. 
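As a concrete illustration of the stochastic-relaxation setup discussed in this thread (hyper-parameters sampled from a distribution $p_\theta$, with the upper-level gradient estimated as an i.i.d. average, as in the estimator $g^m$ from Lemma B.3), here is a minimal sketch of a score-function (log-derivative) gradient estimator with a Gaussian $p_\theta$. The objective `f`, the noise scale `sigma`, and the sample size `K` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # stand-in for the upper-level validation objective J at hyper-parameters z
    return np.sum((z - 1.0) ** 2)

def score_function_gradient(theta, sigma=0.1, K=20000):
    """Monte-Carlo estimate of d/dtheta E_{z ~ N(theta, sigma^2 I)}[f(z)]
    via the log-derivative trick: E[f(z) * grad_theta log p_theta(z)].
    The average over K i.i.d. samples mirrors the averaged estimator g^m."""
    zs = theta + sigma * rng.standard_normal((K, theta.size))
    scores = (zs - theta) / sigma**2  # grad_theta log N(z; theta, sigma^2 I)
    fvals = np.array([f(z) for z in zs])
    return (fvals[:, None] * scores).mean(axis=0)

theta = np.zeros(3)
g = score_function_gradient(theta)
# analytic check: E[f(z)] = sum((theta - 1)^2) + 3*sigma^2,
# so the true gradient at theta = 0 is 2*(theta - 1) = [-2, -2, -2]
```

Because each $z^k$ is sampled i.i.d. from $p_\theta$, the average is an unbiased estimate of the expectation, which is the interchange of summation and expectation discussed in A22 below.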
--- Rebuttal Comment 1.1: Title: Responses to comments on our proof Comment: We would like to thank reviewer S7HE again for the detailed comments on our proof. We have thoroughly checked the proof and did not find critical issues that affect its correctness. Here are our responses to these comments: > Q17: I checked the proof carefully and most of the math seems correct; however, following the proof is a bit tricky, and I would suggest making it clearer. Also, it would help if you could give more explanation about the intuition behind those lemmas. A17: Thank you for your suggestions. We will revise the proofs to make them clearer and add more explanation to each lemma used in the proof in the final version. > Q18: Some of the lemmas would better be referenced as facts, since you are using the results of previous works: Lemma B.1, Lemma B.2, Lemma B.4. \ Lemma B.6 states well-known facts about ARC, so you can refer to some citations, in which case there is no need to do the proof. A18: While these lemmas are indeed known facts from previous work, some readers may not be familiar with them. As such, we believe it is better to also include them in the appendix. We will add references to them in the final version. > Q19: In Lemma B.1, it would help if you could refer to the Hessian as $\rho$-Lipschitz. A19: Thank you for your suggestion. We will add it in the final version. > Q20: The prerequisite of Theorem 3.6 should appear on its own as an equation because you refer to it in multiple places. A20: The conditions of Theorem 3.6 directly impact the quality of the final solution ($\epsilon$). As such, we believe it is better to state them as a prerequisite of Theorem 3.6 rather than as a separate assumption. > Q21: In the proof of Proposition 3.2, there is a typo: there should be $\nabla \log$ in the last two lines of the proof. A21: Thank you for pointing this out. We will revise it in the final version. 
> Q22: In Lemma B.3, please elaborate on the steps, as it is not clear how you get rid of the summation term that appears in the definition of the approximate gradient $g^m$. Since the expectation is with respect to the probability $p_\theta$ and your summation is in terms of $1/k$, but $p_\theta$ isn't uniform, it is unclear how to interchange those terms... In Lemma B.5, similar comment to Lemma B.3: how do you get rid of the summation term that appears in the definition of the approximate Hessian? A22: Thank you for pointing this out. Since each $z^k$ is an i.i.d. sample from the probability distribution $p_\theta$, the expectation does not change, and we can get rid of the summation (actually, average) term. We will revise this part to make it clearer in the final version. > Q23: In Lemma B.3, how did you get rid of the $\min$ term that appears in the prerequisite of Theorem 3.6, since you only used the first term from $\min(\mathrm{term}_1, \mathrm{term}_2)$? ... Also, you only used the second term that appears in $\min(\mathrm{term}_1, \mathrm{term}_2)$ from the prerequisite of Theorem 3.6. A23: Thank you for pointing this out. Since $\min(\mathrm{term}_1, \mathrm{term}_2)$ is no larger than either $\mathrm{term}_1$ or $\mathrm{term}_2$, we can simply replace it by either term to obtain an upper bound. We will revise this part to avoid such ambiguity. > Q24: In Lemma B.5, in the first line of the proof you say "By definition, we have", but you didn't refer to any definition. A24: Here we are referring to the definition of the spectral norm. We will add this to the final version. > Q25: In Lemma B.5, I guess you have a typo in the equality that involves $K$: this should be $b_2^2$ instead of $c_2^2$. A25: Yes, it should be $b_2^2$. We will update it in the final version. > Q26: In Lemma B.5, I guess you have a typo in the inequality bounding the norm of the difference between the approximate Hessian and the actual Hessian: $\| B^m - \nabla J(\theta^m) \|$ should be $\| B^m - \nabla^2 J(\theta^m) \|$. 
Also in this lemma, you use the $\ell_2$ norm and the spectral norm interchangeably, but it is not clear whether the math is adjusted at all steps, or whether you have some typos. A26: Thank you for pointing this out. In Lemma B.5, we always use the spectral norm as the distance metric for matrices. We have checked the proof to avoid ambiguity between the two different norms ($\ell_2$ norm for vectors and spectral norm for matrices). We will also fix the typo in $\| B^m - \nabla^2 J(\theta^m) \|$. --- Reply to Comment 1.1.1: Title: Responses to comments on our proof (cont.) Comment: > Q27: In Lemma B.7, when you refer to both Lemmas B.3 and B.5, you mention the inequalities from those lemmas without taking into consideration that each one satisfies its inequality with a different probability; in Lemma B.9, similar comment to the above when referring to Lemmas B.3 and B.5. A27: Thank you for pointing this out. Using the same notation, consider two random events $A$ and $B$ that happen with probability at least $1-\delta_1$ and $1-\delta_2$, respectively. The probability that either of them fails to happen is at most $\delta_1+\delta_2$; hence the probability that both happen is at least $1-(\delta_1+\delta_2)$, which can be made sufficiently large with sufficiently small $\delta_1$ and $\delta_2$. We will revise this part in the final version to make it easier to understand. > Q28: In Lemma B.9, the inequality that appears after the line "And from Lemma B.5, we will have" seems wrong, since as I understand it you are using the Cauchy-Schwarz inequality here, and it is not clear why you have $\|\delta^m\|^2$ and where the $1/2$ factor comes from. I guess the correct term should be $c_2 \sqrt{\rho \epsilon} \|\delta^m\|$. Then the proof needs to be updated. A28: Thank you for pointing this out. Here we are actually bounding $\frac{1}{2} (\Delta^m)^{\top} (\nabla^2 J(\theta)-B^m) \Delta^m$; you would be correct if we were bounding $\| (\nabla^2 J(\theta)-B^m) \Delta^m \|$. 
We will revise this error; the rest of the proof does not need any change. > Q29: In Lemma B.9, it is also not clear how you got the result in case 2 that $\tilde{c}(\delta^m) \le -\frac{1}{96} \sqrt{\epsilon^3 / \rho}$. A29: The proof for case 2 directly uses the condition assumed in Lemma B.9. We will revise this part to refer directly to that condition.
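The union-bound step in A27 is standard; writing it out makes the probability bookkeeping explicit:

```latex
\Pr[A \cap B] \;=\; 1 - \Pr[A^c \cup B^c]
\;\ge\; 1 - \bigl(\Pr[A^c] + \Pr[B^c]\bigr)
\;\ge\; 1 - (\delta_1 + \delta_2),
```

so the inequalities of Lemmas B.3 and B.5 hold simultaneously with probability at least $1 - (\delta_1 + \delta_2)$.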
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. Here we collect some common questions and reply to them in this general response. A PDF file containing all revised figures is also uploaded. > Q1: Major contributions. A1: The major contributions of this work are: (1) We propose to utilize cubic regularization to accelerate convergence and avoid saddle points in hyper-parameter optimization problems. (2) We provide a theoretical analysis of the proposed method, showing that it converges to approximate second-order stationary points even when the lower-level objective is solved inexactly. (3) We verified the effectiveness of the proposed method in experiments on synthetic and real-world data. We will collect these contributions in a separate "Our contributions" section in the final version of the paper. > Q2: Typos and errors. A2: Thank you for pointing these out. We will make thorough revisions in the final version. > Q3: Experiment details. A3: Thank you for your suggestions. Most details on our experiments can be found in the appendix. Here we present some details for easy reference: (1) ML model. For the experiment on synthetic data, we use a single-layer linear model as the feature classifier. For hyper-parameter optimization for knowledge graph completion, we use traditional embedding-based models, e.g., TransE, RotatE, ComplEx (see Table 1). For schedule search on learning with noisy training data, we use a 5-layer CNN similar to LeNet as the image classification model. We will add more details to the revised appendix in the final version. (2) Environment. For all experiments, we use Python 3.7, PyTorch 1.13, and torchvision 0.14. Installing these packages should be enough to replicate all our experiments. (3) Datasets. For synthetic data, the generation process is described in Lines 232-236 in Section 4.1: "We construct a synthetic dataset on 5-way classification, where the inputs are 50-dimensional vectors. 
Of the 50 features, 25 have different distributions based on their ground-truth labels, generated from Gaussian distributions with different means for each class and the same variance. The remaining 25 features are filled with i.i.d. Gaussian white noise." For knowledge graph completion, the two datasets, FB15k-237 and WN18RR, are well known in the KG field. For learning with noisy labels, the CIFAR-10 dataset is also well known and can be easily accessed from the torchvision package. (4) Packages. Apart from standard deep-learning packages (e.g., PyTorch), we do not use extra packages (e.g., ARC solver packages) in our experiments. We will add more details to the revised appendix in the final version. > Q4: Revised figures. A4: We have uploaded the revised figures in the attached PDF file. Our modifications include: (1) adding the performance of the global optimum in Figure 1(a) for easy reference, (2) adding the result of BFGS in Figure 1, and (3) adjusting the axis descriptions in all figures. Pdf: /pdf/d3212aabacab1276f652f7083dbc6ffc7a887c3c.pdf
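To make the quoted generation process concrete, here is a minimal sketch; the per-class sample count, the mean separation `sep`, and the seed are assumed values not specified in the quoted description.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n_per_class=200, n_classes=5, n_informative=25, n_noise=25, sep=2.0):
    """5-way classification on 50-dim inputs: the first 25 features are drawn
    from class-dependent Gaussian means (shared unit variance); the last 25
    are i.i.d. Gaussian white noise, identical across classes."""
    class_means = sep * rng.standard_normal((n_classes, n_informative))
    X, y = [], []
    for c in range(n_classes):
        informative = class_means[c] + rng.standard_normal((n_per_class, n_informative))
        noise = rng.standard_normal((n_per_class, n_noise))
        X.append(np.hstack([informative, noise]))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

X, y = make_synthetic()
# the oracle feature mask (the "global optimum" referenced in A4/W3) keeps
# columns 0..24 and masks columns 25..49
```

Under this construction, the globally optimal hyper-parameter setting is exactly the mask that retains the informative columns and drops the noise columns, which is why the synthetic task admits a known global optimum.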
NeurIPS_2023_submissions_huggingface
2023
Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery
Accept (poster)
Summary: This paper introduces a novel perspective on categorization, framing it as an optimization problem with the goal of finding optimal solutions. The authors propose a self-supervised approach that can identify new categories during testing. They achieve this by assigning concise category codes to individual data instances, which effectively captures the inherent hierarchical structure found in real-world datasets. Strengths: 1. The paper conceptualizes a category as a solution to an optimization problem, which is novel. 2. The paper is well written and easy to follow. 3. The proposed method is novel. Weaknesses: 1. The method is complicated, with 5 hyperparameters in the overall loss function, which are hard to tune. Is it possible to tune them automatically? 2. Some ablation studies are missing. I am curious about the effect of every hyperparameter in the loss function: how sensitive is the performance to each of them? 3. Comparisons with some recent methods are missing. * [1] PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for Generalized Novel Category Discovery (CVPR 2023) * [2] Modeling Inter-Class and Intra-Class Constraints in Novel Class Discovery (CVPR 2023) * [3] Generalized Category Discovery with Decoupled Prototypical Network (AAAI 2023) * [4] Supervised Knowledge May Hurt Novel Class Discovery Performance (TMLR 2023) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. About the experimental setting: is it transductive or inductive? Is the test set included in or excluded from the training process? 2. How do you compute mutual information, e.g., with MINE [1]? [1] MINE: Mutual Information Neural Estimation Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. 
Complex Hyperparameter Tuning: The model's complexity, with five hyperparameters in the loss functions, makes tuning challenging. An automated tuning method might streamline this process. 2. Missing Ablation Studies: The absence of ablation studies limits understanding of each hyperparameter's impact on performance. This analysis would contribute to model optimization. 3. Incomplete Comparison: The paper lacks comparison with some recent methods. Incorporating them would provide a more comprehensive evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***We thank the reviewer for finding our method and theory novel. We will use the provided references to enrich our paper in both the related works and the experimental analysis. Automatic hyperparameter tuning is an intriguing direction for future work; we also provide updated results against SoTA methods for stronger empirical support.*** **Complex Hyperparameter Tuning.** Since this paper tries to show that the concept of learning categories as arbitrary numbers can be detrimental to supervision, we suggest defining an optimization problem for extracting this inherent information from the data itself. The method proposed in the paper is one approach to this optimization. For instance, the two condition losses can be replaced by using binary vectors from scratch. The main losses of the paper are the length-minimization and information-maximization losses; only their hyperparameters are specific to our theory, while the other hyperparameters can differ based on how this optimization problem is approximated. Adopting an automatic hyperparameter tuning method could help make models more autonomous and is an interesting direction for future work. **Additional Ablation Studies.** We investigate the effect of $\lambda_{code}$ in Table 2 of the supplementary material. The attached PDF also depicts the t-SNE visualization of CIFAR10 instances for the different features generated by our model. As can be seen in the figures, while our model's features form separate clusters, our label embedding makes these clusters distinctive from each other. The binary embedding enhances this separation while condensing each cluster by bringing its samples closer together, which is evident for the yellow (bird) cluster. This is because the binary embedding, due to its 0-or-1 nature, is affected by semantic similarity more than visual similarity. 
Finally, our code embedding indirectly shows that, to obtain the most efficient code, our model should span the code space as much as possible, which explains the porous nature of these clusters. We will add the effects of the different hyperparameters on these clusters to our paper. **Comparison with SoTA.** We made a few minor changes to the implementation: using tanh instead of sigmoid activation functions for a more symmetric 0/1 code distribution, using a bias of 1 for the masker so that the masker network starts by considering all code bits at the beginning of training, and using a label-smoothing hyperparameter to prevent the network from being too strict in its code predictions. Our results based on these changes are in the table below, with a comparison to recent SoTA methods. We compare against recent GCD methods in the following table; we leave comparisons with methods addressing a different problem, such as NCD [2, 4], or different data, such as intent classification [3], for future work. 
|**Dataset** |||CUB|||Aircraft|||Herb|||SCars|||Pet||
|----------------------|----------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
|**Method** |Venue|All | Known | Novel | All | Known | Novel|All | Known | Novel|All | Known | Novel|All | Known | Novel|
|k-means|ACM'07|34.3|38.9|32.1|12.9|12.9|12.8|13.0|12.2|13.4|12.8|10.6|13.8|77.1|70.1|80.7|
|RankStats+|TPAMI'21|33.3|51.6|24.2|26.9|36.4|22.2|27.9|55.8|12.8|28.3|61.8|12.1|-|-|-|
|UNO+|ICCV'21|35.1|49.0|28.1|40.3|56.4|32.2|28.3|53.7|14.7|35.5|70.5|18.6|-|-|-|
|ORCA|ICLR'22|36.3|43.8|32.6|31.6|32.0|31.4|24.6|26.5|23.7|31.9|42.2|26.9|-|-|-|
|GCD|CVPR'22|51.3|56.6|48.7|45.0|41.1|46.9|35.4|51.0|27.0|39.0|57.6|29.9|80.2|85.1|77.6|
|XCon [a]|BMVC'22|52.1|54.3|51.0|47.7|44.4|49.4|-|-|-|40.5|58.8|31.7|86.7|91.5|84.1|
|PromptCAL [b]|CVPR'23|62.9|64.4|62.1|52.2|52.2|52.3|-|-|-|50.2|70.1|40.6|-|-|-|
|DCCL [c]|CVPR'23|63.5|60.8|**64.9**|-|-|-|-|-|-|43.1|55.7|36.2|88.1|88.2|88.0|
|SimGCD [d]|ICCV'23|60.3|65.6|57.7|54.2|59.1|51.8|**44.0**|58.0|**36.4**|53.8|**71.9**|45.0|-|-|-|
|GPC [e]|ICCV'23|52.0|55.5|47.5|43.3|40.7|44.8|-|-|-|38.2|58.9|27.4|-|-|-|
|InfoSieve| |**70.9**|**83.5**|64.3|**60.6**|**69.1**|**56.4**|40.3|**59.0**|30.2|**63.6**|61.0|**64.9**|**90.7**|**95.2**|**88.4**|

[a] Fei, Yixin, et al. "XCon: Learning with experts for fine-grained category discovery." 33rd British Machine Vision Conference (BMVC), 2022.
[b] Zhang, Sheng, et al. "PromptCAL: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
[c] Pu, Nan, Zhun Zhong, and Nicu Sebe. "Dynamic Conceptional Contrastive Learning for Generalized Category Discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 
[d] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "A Simple Parametric Classification Baseline for Generalized Category Discovery." Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.
[e] Zhao, Bingchen, Xin Wen, and Kai Han. "Learning Semi-supervised Gaussian Mixture Models for Generalized Category Discovery." Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.
**Transductive or inductive.** We use the same experimental setup as GCD and DCCL, which is transductive. We have a disjoint validation set to determine the best hyperparameters and the best model. **Mutual information computation.** As mentioned in supplementary Sections 1.2.2 and 1.2.3, we can use the reconstruction loss or the contrastive loss to approximately maximize mutual information. We use the contrastive loss since we base our approach on the GCD framework. Using MINE to optimize this mutual information aligns well with our theory, but it is computationally expensive in practice. However, estimating the mutual information between codes and categories can be feasible. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I read the paper again and checked all the reviewers' comments and the authors' replies, and I keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the time and effort you invested in your comprehensive review and your valuable suggestions, especially regarding the additional ablation studies and comparisons with recent state-of-the-art works. We have diligently revised our paper to incorporate these new experimental results. Should there be any further points of enhancement that could potentially lead to a positive reevaluation of our paper, we would greatly appreciate your guidance. Your constructive feedback is instrumental in elevating the quality of our work, and we are truly thankful for it.
Summary: The paper conceptualizes a category through the lens of optimization, viewing it as the optimal solution to a well-defined problem. The authors propose a novel, efficient, self-supervised method capable of discovering previously unknown categories at test time. Strengths: 1. The paper aims to tackle the important problem of how to better represent categories in deep learning models. The mathematical solution and novel framework proposed by the authors offer intriguing insights into addressing this issue. 2. The paper is well structured, with a reasonable experimental design, and demonstrates the effectiveness of the proposed method on multiple datasets. Weaknesses: 1. The proposed method might require more empirical studies to validate its generalizability across different scenarios and tasks. How does this model perform on the common ImageNet-100 dataset? 2. I agree that there are quite a few loss functions presented in this paper. Although the authors provide detailed explanations for each loss, it might be somewhat challenging to unify so many losses within a single system, so adapting this model for multi-stage training could be a viable approach to address the complexity of handling multiple losses. 3. The improvement on the commonly used CIFAR dataset is relatively weak. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 3 and Table 4, the performance gap between the method and SoTA methods appears to be quite significant. Can you analyze the possible reasons for this? 2. I am very interested in your implicit binary tree and category encoding. Could you show it using a visualization? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***We thank the reviewer for finding the category definition an important problem, our framework novel, and our experimental design effective. We also agree that a theoretical foundation requires adequate empirical studies; hence, we address the points raised by the reviewer to enrich our paper with more in-depth analysis and better visualizations.*** **Limited improvement on CIFAR10 and experiments on ImageNet100.** This is due to the limited number of categories in CIFAR10. Our method is well suited to datasets with more categories and finer distinctions, so based on our theory, the modest improvement on CIFAR10/100 is predictable. For CIFAR10, the depth of the implicit tree is 4, and for CIFAR100 it is 7; hence the number of possible implicit binary trees with this limited depth is small, meaning that a good approximation of the implicit category tree can also be found by other models. As the depth of this tree increases, however, our model can still find the aforementioned tree. This is particularly suitable for real-world scenarios where the number of categories can be huge, or where the categories are long-tailed (as in Herbarium_19), fine-grained (as in CUB, Aircraft, and Stanford Cars), partially labeled, or noisily labeled. Nevertheless, we made a few minor changes to the implementation of our method and report the new results on both generic and fine-grained datasets. Namely, we use tanh instead of sigmoid activation functions for a more symmetric 0/1 code distribution, start with a bias of 1 for the masker so that the masker network begins by considering all code bits at the beginning of training, and use a label-smoothing hyperparameter to prevent the network from being too strict in its code predictions. Based on these changes, the following table shows that our method is well suited to fine-grained datasets. 
|**Dataset**| |CUB| | |Aircraft| | |Herb| | |SCars| | |Pet| | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|34.3|38.9|32.1|12.9|12.9|12.8|13.0|12.2|13.4|12.8|10.6|13.8|77.1|70.1|80.7|
|RankStats+|TPAMI'21|33.3|51.6|24.2|26.9|36.4|22.2|27.9|55.8|12.8|28.3|61.8|12.1|-|-|-|
|UNO+|ICCV'21|35.1|49.0|28.1|40.3|56.4|32.2|28.3|53.7|14.7|35.5|70.5|18.6|-|-|-|
|ORCA|ICLR'22|36.3|43.8|32.6|31.6|32.0|31.4|24.6|26.5|23.7|31.9|42.2|26.9|-|-|-|
|GCD|CVPR'22|51.3|56.6|48.7|45.0|41.1|46.9|35.4|51.0|27.0|39.0|57.6|29.9|80.2|85.1|77.6|
|XCon [a]|BMVC'22|52.1|54.3|51.0|47.7|44.4|49.4|-|-|-|40.5|58.8|31.7|86.7|91.5|84.1|
|PromptCAL [b]|CVPR'23|62.9|64.4|62.1|52.2|52.2|52.3|-|-|-|50.2|70.1|40.6|-|-|-|
|DCCL [c]|CVPR'23|63.5|60.8|**64.9**|-|-|-|-|-|-|43.1|55.7|36.2|88.1|88.2|88.0|
|SimGCD [d]|ICCV'23|60.3|65.6|57.7|54.2|59.1|51.8|**44.0**|58.0|**36.4**|53.8|**71.9**|45.0|-|-|-|
|GPC [e]|ICCV'23|52.0|55.5|47.5|43.3|40.7|44.8|-|-|-|38.2|58.9|27.4|-|-|-|
|InfoSieve| |**70.9**|**83.5**|64.3|**60.6**|**69.1**|**56.4**|40.3|**59.0**|30.2|**63.6**|61.0|**64.9**|**90.7**|**95.2**|**88.4**|

The new results on CIFAR10 and CIFAR100 (and ImageNet-100, for 50 instead of 200 epochs due to the time limit) are as follows and will be added to the main paper.
|**Dataset**| |CIFAR10| | |CIFAR100| | |ImageNet-100| | |
|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|83.6|85.7|82.5|52.0|52.2|50.8|72.7|75.5|71.3|
|RankStats+|TPAMI'21|46.8|19.2|60.5|58.2|77.6|19.3|37.1|61.6|24.8|
|UNO+|ICCV'21|68.6|**98.3**|53.8|69.5|80.6|47.2|70.3|**95.0**|57.9|
|ORCA|ICLR'22|96.9|95.1|97.8|74.2|82.1|67.2|79.2|93.2|72.1|
|GCD|CVPR'22|91.5|97.9|88.2|73.0|76.2|66.5|74.1|89.8|66.3|
|XCon [a]|BMVC'22|96.0|97.3|95.4|74.2|81.2|60.3|77.6|93.5|69.7|
|PromptCAL [b]|CVPR'23|**97.9**|96.6|**98.5**|**81.2**|**84.2**|75.3|**83.1**|92.7|**78.3**|
|DCCL [c]|CVPR'23|96.3|96.5|96.9|75.3|76.8|70.2|80.5|90.5|76.2|
|SimGCD [d]|ICCV'23|97.1|95.1|98.1|80.1|81.2|**77.8**|83.0|93.1|77.9|
|GPC [e]|ICCV'23|90.6|97.6|87.0|75.4|84.6|60.1|75.3|93.4|66.7|
|InfoSieve| |96.8|96.4|96.9|77.8|80.5|72.4|78.8|90.3|73.1|

**Large performance gap for fine-grained datasets.** We acknowledge the gap with the GCD paper. However, we'd like to emphasize that this gap has been closed by recent advances in state-of-the-art (SotA) methods, as the fine-grained table above demonstrates. Like these new SotA methods, our method uses a richer set of supervision signals and tunable parameters. This accelerates the formation of supervised clusters and significantly widens the gap over the GCD paper on known categories. **Demonstration of binary codes.** To demonstrate how the binary codes work, we provide a visualization in the attached PDF. Specifically, we include several sections of the binary tree our method obtains for the CUB dataset. We see that our method is able to extract some hierarchical structure in the data.
As we can see in the leaves of this tree, we have “yellow sparrows” that are the descendants of a more general “sparrow” node. Or, for instance, at the path ‘00...1’ we have a “black birds” node, which can encompass multiple species of birds that are black. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for providing a rebuttal response; it has largely addressed my concerns. The demonstration is excellent and I will keep my current score. --- Reply to Comment 1.1.1: Comment: We are pleased to hear that our responses have addressed your concerns. We sincerely thank the reviewer for the thorough examination and invaluable suggestions that have contributed to the enhancement of our paper. We are committed to revising the paper to incorporate the new experimental results. We would greatly appreciate any additional feedback to further refine and enrich our paper. Thank you once again for your insightful comments.
Summary: This paper questions a foundational problem that underpins generalized category discovery (GCD): what defines a category. To this end, the authors propose a self-learning-based method that solves GCD by modeling the implicit category hierarchy in the data. The proposed method allows control over category granularity and thus performs well on fine-grained recognition datasets. Extensive experiments and theoretical analysis demonstrate the effectiveness of the proposed method. Strengths: - The writing is satisfying. The paper is easy to follow. - The motivation is decent. - The proposed method is novel and proven effective. - The related works are adequately discussed. Weaknesses: - The compared methods are no longer state-of-the-art. Also, due to the explicit assumption of a hierarchical structure, the proposed method is not always the best on all datasets, especially the generic ones. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. --- Post rebuttal --- I thank the authors for the detailed rebuttal response, which addressed my main concerns; I thus keep my positive rating unchanged. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are carefully discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***We thank the reviewer for finding our proposed method novel and effective and our motivation decent. We also agree that a strong theoretical approach is more applicable when accompanied by strong empirical results. Hence, here we provide the empirical results that the reviewer has mentioned.*** **Recent state-of-the-art methods.** Here, we add a comparison to several recent works, XCon [a], PromptCAL [b], DCCL [c], SimGCD [d], and GPC [e], on fine-grained data. By making a few minor changes to the implementation of our method, we outperform all of these works on CUB-200, FGVC-Aircraft, Herbarium-19, Stanford-Cars, and Oxford-IIIT Pet. Namely, we use tanh instead of sigmoid activation functions for a more symmetric distribution of 0 and 1 code bits, initialize the masker with a bias of 1 so that the masker network starts by considering all code bits at the beginning of training, alternate the training of the coder and the masker to separate their effects, as suggested by reviewer Sjrc, and use a label-smoothing hyperparameter to prevent the network from being too strict in its code predictions. Based on these changes, our results are as follows and will be added to the main paper.
|**Dataset**| |CUB| | |Aircraft| | |Herb| | |SCars| | |Pet| | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|34.3|38.9|32.1|12.9|12.9|12.8|13.0|12.2|13.4|12.8|10.6|13.8|77.1|70.1|80.7|
|RankStats+|TPAMI'21|33.3|51.6|24.2|26.9|36.4|22.2|27.9|55.8|12.8|28.3|61.8|12.1|-|-|-|
|UNO+|ICCV'21|35.1|49.0|28.1|40.3|56.4|32.2|28.3|53.7|14.7|35.5|70.5|18.6|-|-|-|
|ORCA|ICLR'22|36.3|43.8|32.6|31.6|32.0|31.4|24.6|26.5|23.7|31.9|42.2|26.9|-|-|-|
|GCD|CVPR'22|51.3|56.6|48.7|45.0|41.1|46.9|35.4|51.0|27.0|39.0|57.6|29.9|80.2|85.1|77.6|
|XCon [a]|BMVC'22|52.1|54.3|51.0|47.7|44.4|49.4|-|-|-|40.5|58.8|31.7|86.7|91.5|84.1|
|PromptCAL [b]|CVPR'23|62.9|64.4|62.1|52.2|52.2|52.3|-|-|-|50.2|70.1|40.6|-|-|-|
|DCCL [c]|CVPR'23|63.5|60.8|**64.9**|-|-|-|-|-|-|43.1|55.7|36.2|88.1|88.2|88.0|
|SimGCD [d]|ICCV'23|60.3|65.6|57.7|54.2|59.1|51.8|**44.0**|58.0|**36.4**|53.8|**71.9**|45.0|-|-|-|
|GPC [e]|ICCV'23|52.0|55.5|47.5|43.3|40.7|44.8|-|-|-|38.2|58.9|27.4|-|-|-|
|InfoSieve| |**70.9**|**83.5**|64.3|**60.6**|**69.1**|**56.4**|40.3|**59.0**|30.2|**63.6**|61.0|**64.9**|**90.7**|**95.2**|**88.4**|

Based on our theory, the limited improvement on CIFAR10/100 and ImageNet-100 is predictable. For CIFAR 10, the depth of the implicit tree is 4, and for CIFAR 100 it is 7; hence the number of possible implicit binary trees with this limited depth is smaller, meaning that a good approximation of the implicit category tree can also be found by other models. However, as the depth of this tree increases, our model can still find the aforementioned tree.
This, in particular, is suitable for real-world scenarios where the number of categories can be huge, or where categories are long-tailed (like Herbarium_19), fine-grained (like CUB, Aircraft, and Stanford Cars), partially labeled, or noisily labeled. Nevertheless, we repeated the experiments with the aforementioned changes and report the new results on CIFAR10 and CIFAR100 (and ImageNet-100 for 50 out of 200 epochs due to the time limit). Based on these changes, our results are as follows and will be added to the main paper.

|**Dataset**| |CIFAR10| | |CIFAR100| | |ImageNet-100| | |
|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|83.6|85.7|82.5|52.0|52.2|50.8|72.7|75.5|71.3|
|RankStats+|TPAMI'21|46.8|19.2|60.5|58.2|77.6|19.3|37.1|61.6|24.8|
|UNO+|ICCV'21|68.6|**98.3**|53.8|69.5|80.6|47.2|70.3|**95.0**|57.9|
|ORCA|ICLR'22|96.9|95.1|97.8|74.2|82.1|67.2|79.2|93.2|72.1|
|GCD|CVPR'22|91.5|97.9|88.2|73.0|76.2|66.5|74.1|89.8|66.3|
|XCon [a]|BMVC'22|96.0|97.3|95.4|74.2|81.2|60.3|77.6|93.5|69.7|
|PromptCAL [b]|CVPR'23|**97.9**|96.6|**98.5**|**81.2**|**84.2**|75.3|**83.1**|92.7|**78.3**|
|DCCL [c]|CVPR'23|96.3|96.5|96.9|75.3|76.8|70.2|80.5|90.5|76.2|
|SimGCD [d]|ICCV'23|97.1|95.1|98.1|80.1|81.2|**77.8**|83.0|93.1|77.9|
|GPC [e]|ICCV'23|90.6|97.6|87.0|75.4|84.6|60.1|75.3|93.4|66.7|
|InfoSieve| |96.8|96.4|96.9|77.8|80.5|72.4|78.8|90.3|73.1|

Although our method is not always the best, it performs consistently well. While our model outperforms most recent SotA methods on fine-grained datasets, its performance on generic datasets has not been compromised. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed experimental results and analysis. My main concerns are mostly addressed.
I keep my rating for now before discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: We are pleased to learn that we have successfully addressed your concerns. We extend our gratitude for your insightful suggestions, particularly regarding comparisons with the recent state of the art. In our endeavor to continually elevate the quality of our research, we have revised the paper, incorporating new experimental results spanning 5 fine-grained datasets and 3 generic datasets. Should you identify any additional areas for refinement or have insights that could further enhance the quality of our work, your guidance would be immensely valuable. We are thankful for your positive assessment and we remain committed to benefiting from your constructive feedback to continue elevating our paper.
Summary: This paper tackles the problem of generalized category discovery (GCD) by reframing the concept of a category with an implicit category code tree, which addresses three problems of conventional supervised learning, namely category hierarchies, label inconsistency, and open-world recognition. The contributions of this paper are: (1) conceptualizing a category with a theoretically optimal solution; (2) proposing a practical GCD method based on the theory; (3) achieving state-of-the-art GCD performance in both generic and fine-grained scenarios. Strengths: (1) The inspiration for modeling category hierarchies by encoding a category with a binary tree is reasonable. (2) The theoretical derivation of the optimization loss is adequate and helps readers understand the subsequent model design. Weaknesses: (1) The experimental demonstration of how the binary codes work is missing. For example, the authors could simply compare the codes of “dog”, “cat”, and “car” to see how the number of digits differs across the codes. It is crucial to verify the category hierarchies and label consistency, which are the core problems the paper claims to solve. (2) The performance improvement on generic datasets is limited: CIFAR10 performance lags behind ORCA, and CIFAR100 performance gains only 0.2. (3) The paper's ablation study in Table 2 raises a concern. In the first row, the model uses only the visual features to compute the loss, i.e., the original GCD [1] approach. In the CIFAR10 section of Table 3, the results from the original GCD paper are used. If we compare the results from Tables 2 and 3 under Semi-supervised K-Means (SS-KMeans [1]) evaluation, both InfoSieve and GCD gain accuracy on the known classes, but on the novel classes only InfoSieve improves, while GCD gets the worst results.
This paper shows that the GCD + generic K-Means is better than GCD + SS-KMeans, which is inconsistent with what the GCD paper claimed. An investigation should be conducted to address this inconsistency. Minor: Missing notation definitions such as "M" and "K" in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses above. In addition, should the “Z^i” in line 215 be “z^i”? Because “Z^i” is a number according to line 150? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
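For context on the SS-KMeans evaluation discussed in weakness (3): semi-supervised k-means, as used in the GCD paper, constrains labelled samples to their known clusters (and seeds centroids from them) while clustering the unlabelled samples freely. A minimal illustrative sketch of this idea — a simplification under our own assumptions, not the GCD implementation; `seeded_kmeans` and its arguments are hypothetical names:

```python
import numpy as np

def seeded_kmeans(X, seed_labels, n_clusters, n_iter=50, rng=None):
    """Minimal semi-supervised (seeded) k-means sketch.

    seed_labels: length-len(X) array; -1 marks an unlabelled point,
    otherwise a class id in [0, n_clusters). Labelled points keep their
    assignment fixed and seed the initial centroids.
    """
    rng = np.random.default_rng(rng)
    labelled = seed_labels >= 0
    centroids = np.empty((n_clusters, X.shape[1]))
    for k in range(n_clusters):
        members = X[labelled & (seed_labels == k)]
        # Seed from labelled data when available, else a random point.
        centroids[k] = members.mean(axis=0) if len(members) else X[rng.integers(len(X))]
    assign = seed_labels.copy()
    for _ in range(n_iter):
        # Reassign only the unlabelled points to their nearest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        assign[~labelled] = dists[~labelled].argmin(axis=1)
        # Update centroids from all current members.
        for k in range(n_clusters):
            members = X[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return assign, centroids
```

With one labelled seed per blob of a toy two-blob dataset, the unlabelled points fall into the cluster of their nearby seed, which is the behaviour the reviewer's known/novel accuracy comparison relies on.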
Rebuttal 1: Rebuttal: ***We thank the reviewer for finding our inspiration and theoretical derivation reasonable and adequate. We use the points raised by the reviewer to better clarify our theory and implementation details. The binary tree visualization and the discussion of fine-grained vs. generic dataset performance will be valuable additions to our paper.*** **Demonstration of binary codes.** The attached PDF shows how the binary codes work for the CUB dataset. We see that our method can extract some hierarchical structure in the data. In the leaves of this tree, we have “yellow sparrows,” which are the descendants of a more general “sparrow” node. Or, for instance, at the path ‘00...1’, we have a “black birds” node, which can encompass multiple black bird species. **Limited improvement on CIFAR 10.** This is due to the limited number of categories in CIFAR 10. Our method is well suited to datasets with more categories and finer distinctions. Based on our theory, the limited improvement on CIFAR10/100 is predictable. For CIFAR 10, the depth of the implicit tree is 4; hence the number of possible implicit binary trees with this limited depth is smaller, meaning that a good approximation of the implicit category tree can also be found by other models. However, as the depth of this tree increases, our model can still find the aforementioned tree. This, in particular, is suitable for fine-grained data.
For instance, the following table shows our results on fine-grained datasets:

|**Dataset**| |CUB| | |Aircraft| | |Herb| | |SCars| | |Pet| | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|34.3|38.9|32.1|12.9|12.9|12.8|13.0|12.2|13.4|12.8|10.6|13.8|77.1|70.1|80.7|
|RankStats+|TPAMI'21|33.3|51.6|24.2|26.9|36.4|22.2|27.9|55.8|12.8|28.3|61.8|12.1|-|-|-|
|UNO+|ICCV'21|35.1|49.0|28.1|40.3|56.4|32.2|28.3|53.7|14.7|35.5|70.5|18.6|-|-|-|
|ORCA|ICLR'22|36.3|43.8|32.6|31.6|32.0|31.4|24.6|26.5|23.7|31.9|42.2|26.9|-|-|-|
|GCD|CVPR'22|51.3|56.6|48.7|45.0|41.1|46.9|35.4|51.0|27.0|39.0|57.6|29.9|80.2|85.1|77.6|
|XCon [a]|BMVC'22|52.1|54.3|51.0|47.7|44.4|49.4|-|-|-|40.5|58.8|31.7|86.7|91.5|84.1|
|PromptCAL [b]|CVPR'23|62.9|64.4|62.1|52.2|52.2|52.3|-|-|-|50.2|70.1|40.6|-|-|-|
|DCCL [c]|CVPR'23|63.5|60.8|**64.9**|-|-|-|-|-|-|43.1|55.7|36.2|88.1|88.2|88.0|
|SimGCD [d]|ICCV'23|60.3|65.6|57.7|54.2|59.1|51.8|**44.0**|58.0|**36.4**|53.8|**71.9**|45.0|-|-|-|
|GPC [e]|ICCV'23|52.0|55.5|47.5|43.3|40.7|44.8|-|-|-|38.2|58.9|27.4|-|-|-|
|InfoSieve| |**70.9**|**83.5**|64.3|**60.6**|**69.1**|**56.4**|40.3|**59.0**|30.2|**63.6**|61.0|**64.9**|**90.7**|**95.2**|**88.4**|

Nevertheless, we made a few minor changes to the implementation of our method and report the new results on CIFAR10 and CIFAR100 (and ImageNet-100 for 50 epochs due to the time limit). Namely, we use tanh instead of sigmoid activation functions for a more symmetric distribution of 0 and 1 code bits, initialize the masker with a bias of 1 so that the masker network starts by considering all code bits at the beginning of training, and use a label-smoothing hyperparameter to prevent the network from being too strict in its code predictions.
Based on these changes, our results are as follows and will be added to the main paper.

|**Dataset**| |CIFAR10| | |CIFAR100| | |ImageNet-100| | |
|---|---|---|---|---|---|---|---|---|---|---|
|**Method**|Venue|All|Known|Novel|All|Known|Novel|All|Known|Novel|
|k-means|ACM'07|83.6|85.7|82.5|52.0|52.2|50.8|72.7|75.5|71.3|
|RankStats+|TPAMI'21|46.8|19.2|60.5|58.2|77.6|19.3|37.1|61.6|24.8|
|UNO+|ICCV'21|68.6|**98.3**|53.8|69.5|80.6|47.2|70.3|**95.0**|57.9|
|ORCA|ICLR'22|96.9|95.1|97.8|74.2|82.1|67.2|79.2|93.2|72.1|
|GCD|CVPR'22|91.5|97.9|88.2|73.0|76.2|66.5|74.1|89.8|66.3|
|XCon [a]|BMVC'22|96.0|97.3|95.4|74.2|81.2|60.3|77.6|93.5|69.7|
|PromptCAL [b]|CVPR'23|**97.9**|96.6|**98.5**|**81.2**|**84.2**|75.3|**83.1**|92.7|**78.3**|
|DCCL [c]|CVPR'23|96.3|96.5|96.9|75.3|76.8|70.2|80.5|90.5|76.2|
|SimGCD [d]|ICCV'23|97.1|95.1|98.1|80.1|81.2|**77.8**|83.0|93.1|77.9|
|GPC [e]|ICCV'23|90.6|97.6|87.0|75.4|84.6|60.1|75.3|93.4|66.7|
|InfoSieve| |96.8|96.4|96.9|77.8|80.5|72.4|78.8|90.3|73.1|

**GCD results.** The difference between the first row of Table 2 and the reported GCD results is the number of blocks finetuned in the ViT: GCD freezes the weights of ViT-B-16 for 11 blocks and finetunes only the final block. Like DCCL, we freeze the weights of 10 blocks and finetune the final two blocks, because our model requires more tunable parameters to optimize our objective function. When freezing 11 blocks and finetuning only the last block of the ViT (as in GCD), our method obtains 95.4% on known categories and 91.0% on novel categories on the CIFAR10 dataset. This is still an improvement over the 88.2% obtained by GCD on novel categories. In the paper, we will add the number of finetuned blocks to our implementation details. **Missing notation definitions.** $M$ is the length of the binary code assigned to a sample $X^i$. $K$ is the length of the path to a category code $c$.
We will update Lemma 1 to explain these symbols explicitly. **$Z^i$ clarification.** $Z^i$ can be considered the integer encoding of the binary sequence $z^i$ of length $d$, so the $Z^i$ in that line is indeed a number. We will make this explicit in the text.
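To make the code-to-number mapping and the tree reading of binary codes concrete, here is a small illustrative sketch (our own illustration, not the authors' code; the most-significant-bit-first convention for `code_to_int` is an assumption):

```python
def code_to_int(z):
    """Map a bit sequence z (e.g. [0, 0, 1]) to an integer Z.

    Assumed convention: the first bit of z is the most significant.
    """
    Z = 0
    for bit in z:
        Z = (Z << 1) | bit
    return Z

def is_ancestor(code_a, code_b):
    """In a binary category tree, node A is an ancestor of node B
    exactly when A's code is a proper prefix of B's code."""
    return len(code_a) < len(code_b) and code_b[:len(code_a)] == code_a
```

Under this reading, a generic "sparrow" node with a short code is an ancestor of a longer-coded "yellow sparrow" leaf, matching the hierarchy described in the rebuttal.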
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful feedback, and we appreciate that the reviewers found our paper theoretically and methodologically novel. There were a few shared questions about our paper, both in terms of comparing against the recently published state of the art and visualizing our method's implicit tree extraction. To this end, we add a comparison to several recent works: XCon [a], PromptCAL [b], DCCL [c], SimGCD [d], and GPC [e]. Furthermore, the attached PDF demonstrates how our model extracts an implicit category tree for samples of the CUB-200 dataset. [a] Fei, Yixin, et al. "XCon: Learning with experts for fine-grained category discovery." 33rd British Machine Vision Conference. BMVA Press, 2022. [b] Zhang, Sheng, et al. "PromptCAL: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [c] Pu, Nan, Zhun Zhong, and Nicu Sebe. "Dynamic Conceptional Contrastive Learning for Generalized Category Discovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [d] Wen, Xin, Bingchen Zhao, and Xiaojuan Qi. "A Simple Parametric Classification Baseline for Generalized Category Discovery." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [e] Zhao, Bingchen, Xin Wen, and Kai Han. "Learning Semi-supervised Gaussian Mixture Models for Generalized Category Discovery." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. Pdf: /pdf/a1cf9d76c84ade38744d0c318e301f14f1692999.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
Accept (poster)
Summary: This work presents a GAN-based TTS model as an improvement on its predecessor, StyleTTS. Notably, it includes a diffusion-based style encoder, which is one major difference from its predecessor. In addition, the proposed model directly produces waveforms, in contrast to the spectrograms of its predecessor. Strengths: 1) The proposed method is sound; 2) The audio samples sound great, with clear improvements over multiple baselines; 3) Abundant evaluations are conducted with persuasive results, including ablation studies. Weaknesses: 1) The proposed model is quite complicated; 2) A few technical questions, see below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) "[StyleTTS 2] As the first model to achieve human-level performance on both single and multispeaker datasets" (line 47) -- Haven't PnG BERT and VITS achieved this as well, as acknowledged in Sec 2? If there is a difference in the scope of the claim, it's important to make it clear here. Also note that PnG BERT reported both MOS and CMOS (line 77). 2) Sec 3.2.1 claims "E2E training optimizes all TTS system components for inference without relying on any fixed components", but still "before jointly optimizing all components, we first pre-train the acoustic modules along with the pitch extractor and text aligner" -- is this still end-to-end training? 3) Sec 5.2: It's not clear how the evaluation is done. Are the styles analyzed the prosodic style, the acoustic style, or both? Are the emotion labels based on text or audio (which can diverge)? What's the emotion of the reference audio? What about other factors of "style" besides emotion? Figure 2(c) -- "loose clusters" -- these overlap too heavily to be considered (even loose) "clusters". 4) Table 4: "real-time factor (RTF) in second" -- RTF is agnostic to the time unit. 5) "SLM adversarial training" on "OOD texts" -- how does this work together with the regular training?
There would be two types of training data, one of text-speech pairs and one of text only -- how are they coordinated in an end-to-end manner? Can the model benefit from using more text-only data? 6) Evaluation on OOD text -- given that the proposed model is trained on OOD text (with SLM adversarial training), is it still a fair comparison to the baseline models? Importantly, were the eval texts seen during SLM adversarial training? 7) Figure 1(b): where does $s_a$ come from during inference? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Any plan on releasing the source code and the trained checkpoints, especially given the claim that "StyleTTS 2 sets a new benchmark for TTS synthesis"? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive remarks and appreciation of our work, and we appreciate your helpful feedback for clarification. Here are our responses to your insightful questions and concerns. **1. Claim of Human-Level Performance on Single and Multispeaker Datasets:** You are correct in pointing out that the achievements of PnG BERT and VITS are also human-level. However, VITS shows differences from ground truth in CMOS in the Appendix, indicating possible room for improvement. In addition, PnG BERT conducted the evaluation on a proprietary dataset rather than an open-access dataset. We appreciate your attention to detail, and we intend to revise the claim to specify that StyleTTS 2 is the first human-level model on publicly available single and multispeaker benchmark datasets. This revision will help in differentiating our contribution from previous works. **2. End-to-End Training:** The pre-training is only to speed up the training procedure because we found that having a well-trained acoustic model, particularly the style encoder in the first place can help the prosodic predictor converge faster. However, this is not absolutely necessary, and the model still converges when starting directly from the joint training without pre-training. Moreover, even with pre-training, all components are eventually trained together during joint training, so it is still considered end-to-end. We will make this point more clear in the revised version, by stating that this pre-training is not necessary for convergence. **3. Evaluation and Style Analysis:** The styles analyzed are concatenated acoustic and prosodic styles, and the emotion labels are based on texts generated by GPT-4. There is no reference for the single-speaker case shown in Figure 2(a), and the reference for the multispeaker case is neutral emotion (randomly picked from the testing set). 
This analysis aims to show that our model can sample different styles based on the emotions of the texts. There are definitely other aspects of styles that cannot be reflected in the text, though they can be reflected in the reference audio. Previous work (StyleTTS) has already shown that the framework can reproduce many aspects from the reference audio, so this work focuses more on the newly invented style diffusion. We have also provided samples on our demo page to show that our model can reflect several aspects of styles in the reference audio, including the acoustic environment and the speaker’s emotions. **4. Real-Time Factor (RTF):** We acknowledge the oversight in describing RTF, and we will correct this mistake in the camera-ready version. **5. SLM Adversarial Training on OOD Texts:** The training procedure in Figures 1(a) and 1(b) is indeed separate, as you have correctly noticed that their inputs are different, but the gradients are combined for each batch to update the parameters in the end. We will revise the caption under the figure to clarify this process and emphasize that adversarial training in Figure 1(b) is not together with joint training in Figure 1(a) because they have different inputs. **6. Evaluation on OOD Text:** The evaluation data was not seen during SLM adversarial training. In particular, the adversarial training uses texts from *train-clean-460* subset in the LibriTTS dataset, while in our evaluation we use LJSpeech, VCTK, and the *test-clean* subset of LibriTTS, so none of them are in the OOD texts used for training. We will make sure this is further emphasized in Section 4 Model Training. **7. Origin of $s_a$ in Figure 1b:** We apologize for the confusion in Figure 1b. The style diffusion model samples both $s_a$ and $s_p$, not just $s_p$. We will revise the figure and make the border around $s_a$ and $s_p$ darker in color to avoid any confusion. 
*** Lastly, we acknowledge that our method contains multiple components, but we hope that our paper has described each component sufficiently clearly. In addition, we do plan to make the source code publicly available to make sure all components in our method are completely transparent. We will add a link to the source code and checkpoints in the camera-ready version.
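Regarding point 4 above: RTF is the ratio of synthesis time to the duration of the generated audio, so (as the reviewer notes) it is dimensionless and agnostic to the time unit. A minimal sketch of the computation, with hypothetical values:

```python
def real_time_factor(synthesis_time: float, audio_duration: float) -> float:
    """RTF = time spent synthesizing / duration of the synthesized audio.

    Both quantities must be in the same unit, so the ratio is
    dimensionless; RTF < 1 means faster-than-real-time synthesis.
    """
    return synthesis_time / audio_duration

# e.g. 0.5 s of compute for 10 s of speech gives RTF = 0.05
```

Because the ratio cancels units, expressing both quantities in seconds or both in milliseconds yields the same RTF.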
Summary: In this paper, the authors propose StyleTTS 2 for human-level text-to-speech, an end-to-end system with two-stage training (pretraining some modules and then finetuning the whole network). The two main modules proposed in the paper are a style diffusion module to model prosodic style information and a WavLM-based adversarial module to improve the quality of synthesized waveforms. Moreover, the paper proposes an interesting duration prediction module that is differentiable and stable. Strengths: 1. The paper proposes several interesting modules, including a new differentiable duration modeling method, a new adversarial module with WavLM, and style diffusion to model prosodic style information, which can benefit future research in TTS. 2. The experiments verify the effectiveness and efficiency of the proposed method by comparing MOS and RTF with different baseline methods. 3. The ablation study shows the effectiveness of the proposed methods. Weaknesses: I do not see an obvious weakness. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. For the multi-speaker or zero-shot setting, what are the inference details? I think they will differ from Figure 1b, since reference audio is used at inference? 2. Will the code be open-sourced to help future research in TTS? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive assessment of our work and constructive feedback on our paper. Here are the responses to your questions. 1. Figure 1(b) represents the single speaker case only. In the multispeaker scenario, our model first encodes the speaker embedding (anchor style) and then samples a style using this anchor. This process is not well-articulated in the current version of the manuscript. We will revise the caption of Figure 1(b) to illustrate this process more clearly in the camera-ready version. 2. We will make the code publicly available to further the research in this field. We will include a link to the code in the camera-ready version of the paper.
Summary: The paper presents StyleTTS 2, a new TTS model with three main innovations: 1) diffusion of a latent variable to capture style (everything in the speech signal that’s not enumerated by a phone sequence t and speaker variable c), 2) the use of a pretrained speech language model (SLM) as a discriminator and 3) a novel formulation of a differentiable alignment function. The authors also enable a new joint training curriculum. These three innovations result in quality that surpasses existing available TTS approaches including VITS, VALL-E, Natural Speech and YouTTS. Strengths: The paper is well written, and, for the most part, easy to follow. Each of the novel components is clearly described, well motivated and contextualized with other comparable approaches. Also, the quality of the samples is quite good. Weaknesses: Given that zero-shot speaker adaptation is extremely ripe for misuse under the Societal Impact and Potential Harmful Consequences criteria (https://neurips.cc/public/EthicsGuidelines), I would think the risk and mitigation section of this presentation should be more than 2 sentences. Section 3.2.1: using fewer resources is highlighted as a contribution of StyleTTS 2 – does this include the “phoneme-level BERT pretrained on extensive corpora of Wikipedia articles”? Similar question concerning Section 3.2.3 and the use of WavLM pre-trained on 94k hours of data? It seems like this assessment is limited to the resources required for the model itself, but does not include components like the discriminator and text embedding. Some areas to tighten the presentation and contribution in the upfront material of the paper. Introduction: the first paragraph establishes the areas of improvement for TTS as diverse, expressive speech, robustness to OOD text, and requirements of massive datasets for performing zero-shot TTS. In the last paragraph of the introduction the contributions are oriented around quality and the novel use of SLMs. 
However, these are inconsistent with the areas for improvement enumerated by the authors. It might be clearer to be specific about which open questions from the first paragraph are going to be addressed in some form by the proposed technique. The limitations of StyleTTS are again not the same as those established in the introduction. Namely, they are 1) a two-stage training process, 2) limited expressiveness and 3) reliance on reference speech hindering real-time application. StyleTTS 2 addresses these three limitations, unifying the training process, and reducing reliance on reference speech while improving OOD performance and expressiveness. While this is compact and consistent as an improvement to StyleTTS, it does not address the broad limitations of TTS established by the authors. In Sections 3.2 and 3.2.1, one of the advantages of StyleTTS 2 is described as an E2E training process that jointly optimizes all components. This is presented in contrast to StyleTTS, which required a two-stage training process (Section 3.1). However, later in Section 3.2.1 (line 142 and Figure 1a) StyleTTS 2 uses a two-stage pre-training and joint training process. If single-stage training, or E2E training, is meant to be an advantage and contribution of StyleTTS 2, the distinction between its two-stage training and the two-stage training used by the original StyleTTS needs to be more clearly defined. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What does it mean for synthetic samples to “surpass” human recordings? Not in the context of how this happens as a score, but what does it say about the evaluation itself if it’s possible for an artificial sample to be more “natural” than a human-derived sample which is “natural” with metaphysical certitude. 
I appreciate the argument about unnatural segmentation of the audio in LJSpeech, but if this is the source of the “unnaturalness” in the test data, it’s maybe not worth making too much of a claim about its value as a quantitative evaluation. On line 151, the augmented style vector is defined as s = [s_p,s_d], but s_d is not previously defined. Should this be s_a? Section 3.2.3 - the generator loss being independent of ground truth is true of any GAN training, no? Is there something specific to the use of an SLM discriminator that enables this better than prior work training TTS with GAN objectives? Section 5.1 “setting a new standard for this dataset”. What is the referent of “this dataset”? Is it the evaluation set of 80 text samples from LibriTTS and 40 Librivox utterances spoken by the speaker of LJSpeech described in Section 4? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Very briefly in two sentences at the end of the conclusion. (My rating below is contingent on the outcome of an ethics review to assess if the limitations and discussion of ethics concerns are sufficient to comply with https://neurips.cc/public/EthicsGuidelines) Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and insightful comments on our work. We appreciate your observations and feedback that have helped us identify areas for clarification and improvement, and here are our responses. - **Ethical Concerns on Misuse** We are grateful for your attention to the potential ethical problems in our work. We acknowledge that having only two sentences is insufficient for addressing ethical issues of potential misuse. We will expand the discussion in the conclusion section by adding the following paragraph: > We acknowledge that zero-shot speaker adaptation has the potential for misuse and deception by mimicking the voices of individuals as a potential source of misinformation or disinformation. This could lead to harmful, deceptive interactions such as theft, fraud, harassment, or impersonations of public figures that may influence political processes or trust in institutions. In order to manage the potential for harm, we will require users of our model to adhere to a code of conduct that will be clearly displayed as conditions for using the publicly available code and models. In particular, we will require users to inform those listening to samples synthesized by StyleTTS 2 that they are listening to synthesized speech or to obtain informed consent regarding the use of samples synthesized by StyleTTS 2 in experiments. Users will also be required to use reference speakers who have given consent to have their voice adapted, either directly or by license. Finally, we will make the source code publicly available for further research in speaker fraud and impersonation detection. - **Resources for Zero-Shot Speaker Adaptation Task** We acknowledge your concern regarding the resources utilized in our zero-shot speaker adaptation task. Our intention was to emphasize the reduction in the need for speech data in TTS model training rather than the entire training pipeline. 
Moreover, since pre-trained models like PL-BERT and WavLM are already publicly available, we do not consider the training data used for these models a part of the required training resources. Compared to models like Vall-E that train on 60k hours of data, our approach can be seen as a resource-efficient method, as we are using 250 times less speech data than Vall-E while achieving similar or better performance. We will revise our paper to make this point more straightforward. - **Addressing Limitations in Previous Works** Thank you for pointing out this inconsistency in presenting the areas for improvement and our model’s contributions in the introduction. Our model indeed addresses the limitations mentioned in the introduction, but it would be easier to follow if we were more consistent in our wording so that it was clear that our model contributed solutions to these limitations. Using style diffusion, our model can synthesize diverse samples, outperforming models like VITS, as shown in Section 5.2 and Appendix A.2.1. The adversarial training with SLM also improves the performance on out-of-distribution (OOD) texts. Furthermore, our model performs comparably to Vall-E, despite using significantly less speech data, making our model a resource-efficient alternative to models requiring a large amount of speech data like Vall-E. We appreciate your suggestion and we will highlight these contributions more in the conclusion section for consistency. - **Meaning of Outperforming Human Recordings** Thank you for bringing up the meaning of outperforming human data in the evaluation of naturalness. There are important considerations to make about the way in which speech segments were evaluated. First, the evaluated segments were originally spoken in a larger context. It is possible that a certain speech segment spoken in isolation sounds less natural than the same speech spoken in context. 
However, StyleTTS 2 synthesizes segments with a more limited context, which may sound more natural in isolation than in a larger context. This would lead to higher ratings for StyleTTS 2 than for real speech, as the lack of larger context in evaluation would favor synthesized speech. Second, human speech naturally contains variability that is neither text- nor context-dependent. This variability can lead to short segments of real speech sounding less natural than expected on average. On the other hand, synthesized speech lacks this variability, producing “average” speech given the current text and context, which sounds more natural on average. We will include these discussions in the revised paper to guide future research in this area. - **Clarification on Two-Stage Training Process** We understand the confusion around our two-stage training process. Unlike the original StyleTTS, which fixed the acoustic modules during the second stage, StyleTTS 2 goes through joint training of both acoustic and predictor modules after pre-training. The initial pre-training is primarily aimed at accelerating the training process and is not a strict necessity. We found that having a well-trained acoustic module first, particularly the acoustic style encoder as the starting point for the parameters of the prosodic encoder, promotes faster convergence of the prosody predictor. We will clarify this in the camera-ready version. - **Typos and Other Clarifications** We appreciate your attention to detail, and we will correct the notation mistake on line 151. We agree that the independence of the generator from the ground truth is common in GAN training. However, in our work, this independence allows us to train on OOD texts without specific speech ground truth, improving performance on OOD texts. We will emphasize this point more in the revision. We also apologize for the ambiguity in Section 5.1. 
By "this dataset", we were referring to the in-distribution samples from the LJSpeech dataset, as our model has outperformed the previous state-of-the-art model NaturalSpeech on this dataset. We will make this more specific in the revised paper. --- Rebuttal Comment 1.1: Title: thank you Comment: Thank you very much for answering my questions! I enjoyed reading the work and hope my comments helped to strengthen its presentation. I'll leave the assessment of the ethics comments to the ethics reviewers.
Summary: The authors of the paper have proposed a high-quality TTS model StyleTTS 2. Unlike previous StyleTTS work, StyleTTS 2 has a DDPM-based style generator, speech language model discriminators, and a differentiable aligner. All of these design decisions help improve the overall performance of the model. As the authors report, StyleTTS 2 sets new standards for TTS synthesis. Strengths: I appreciate the hard work of the authors on this paper. It was a well-written and enjoyable read. It is a great challenge to further develop TTS models with high quality, as the current SOTA models are very close to human speech. StyleTTS 2 optimizes all the components in the joint training stage. Evaluation is thorough enough to support the arguments with in-depth ablation studies. I have carefully checked the audio samples; they are promising. Weaknesses: The training procedures are complex. They require several training stages, such as pre-training the acoustic modules, retraining the speech language models, and joint training. It is unclear whether the speech language models (SLMs) are trained on audio clips (rather than the whole-length audio). If so, how do you learn semantic information from short audio clips? It is also unclear whether the MPD and MRD still exist in the joint training stage. I found MPD and MRD missing in Figure 1 (b). The overall novelty is limited, as many previous works integrate the diffusion modules and speech language model modules. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Do MPD and MRD still exist in the joint training stage? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our paper and your constructive questions. We appreciate your positive feedback on our work and your insightful comments. We would like to address your concerns and clarify certain aspects of our paper. - **Concern about Complicated Training Procedure** We acknowledge that the training procedure consists of more than one step. We hope that we can alleviate this issue by making the training code publicly available, ensuring that users can train our model easily. In addition, we would like to clarify that the SLMs are not re-trained. Instead, we appended a discriminator head on top of it, as mentioned in Section 3.2.3. We will ensure to highlight this point more prominently in the revised manuscript. - **Concern about Training SLM Discriminators on Audio Clips** We followed the methodology of the original WavLM paper for downstream tasks. For example, the original WavLM paper used the SLM representations on 3-second clips and fine-tuned for 2 extra epochs with 6-second clips for the speaker verification task. They also used a batch size of 200 seconds of audio for base model finetuning and 80 seconds of audio for large model finetuning for the speech recognition task. In our training process, we randomly cropped the audio into 3-6 seconds (thus 96 to 192 seconds per batch) to mimic how the WavLM model was used for downstream tasks, as we have described in Section 4. However, we acknowledge that having longer clips can help with the performance, but it is also more costly in terms of GPU RAM usage. We have found that 3-6 seconds of clips are sufficiently long to produce naturalistic speech. Since the semantic information the discriminator captures is primarily for understanding the paralinguistic information as described in Appendix D.1, we stick to this setting in our implementation. We will make this point more apparent in the revised paper to avoid confusion. 
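As a purely illustrative sketch of the cropping scheme described above (not the authors' implementation; the 16 kHz sample rate, function name, and list-based waveform are our assumptions), randomly cropping audio to 3-6 seconds might look like:

```python
import random

SR = 16_000  # assumed sample rate in Hz

def random_crop(audio, sr=SR, min_sec=3, max_sec=6, rng=random):
    """Randomly crop a waveform to between min_sec and max_sec seconds.

    Waveforms shorter than min_sec seconds are returned unchanged.
    """
    min_len, max_len = min_sec * sr, max_sec * sr
    if len(audio) <= min_len:
        return audio
    crop_len = rng.randint(min_len, min(max_len, len(audio)))
    start = rng.randint(0, len(audio) - crop_len)
    return audio[start:start + crop_len]

# A dummy 10-second waveform: every crop falls in the 3-6 second range.
wav = [0.0] * (10 * SR)
clip = random_crop(wav)
```

With 3-6 second clips, a batch of 32 crops covers roughly 96 to 192 seconds of audio, matching the batch figures quoted in the rebuttal.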
- **Concern about the Absence of MPD and MRD in Figure 1(b)** Figure 1(b) is an illustration of adversarial training with SLM only. MPD and MRD are not present for adversarial training with SLM but are present during the joint training process, as shown in Figure 1(a). The training consists of both joint training and adversarial training with SLM, where the gradients from both processes are combined to update the parameters of trainable modules in each batch. We agree that the absence of these components in Figure 1(b) could be confusing. In the revised manuscript, we will amend the caption of Figure 1 to more clearly show that the actual training procedure is a combination of the processes illustrated in Figures 1(a) and 1(b). - **Concern about Novelty** We appreciate your feedback, and we apologize for not presenting the novelties of our work as clearly as possible. The main novelties that we want to present are the following: Unlike previous works that sample waveforms, melspectrograms, or hidden representations proportional to the duration of the speech using diffusion models, we are the first to diffuse a fixed-length vector for speech synthesis, which greatly improves the model efficiency while maintaining the benefits of diverse samples provided by diffusion models. Moreover, unlike previous works, we employ SLM as a discriminator, rather than a source of hidden representations from which speech is decoded or reconstructed. While diffusion models and SLMs have indeed been proposed in previous works for speech synthesis, we use them in new ways that greatly improve the performance of the model. We will emphasize these points more in Section 2 of the final version of the manuscript. --- Rebuttal Comment 1.1: Title: Thanks for the comments Comment: Thanks for your comments. I believe this paper merits acceptance as it is a high-quality paper, with a significant contribution that would be of interest to the community.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort in reviewing and improving our work. Here are some common concerns we would like to address: *** **Source Code and Checkpoints** We will make the trained checkpoints and source code for training and inference publicly available. We will add a link to the GitHub repository in the camera-ready version that contains the source code and checkpoints. To prevent potential misuse for deception, we will release the model under a code of conduct requiring that listeners of the samples be informed that the speech they are listening to is synthesized by StyleTTS 2, and that reference speakers have given consent, either directly or by license, to have their voice adapted by StyleTTS 2. *** **Figure 1 Clarification** We have noticed several points of confusion caused by a lack of clarity in our presentation of Figure 1, and we will make the following changes: - We will clarify that Figure 1 illustrates two separate training processes in Figure 1(a) and 1(b), with different inputs but combined gradients for each batch. - We will make it clear that Figure 1(b) only describes the inference pipeline for single-speaker models, and we will add additional descriptions for multispeaker cases involving reference speaker embeddings. - We will clarify that the joint training process described in Figure 1(a) is end-to-end with all components being trained together. We will emphasize that the pre-training described in Figure 1(a) is for faster convergence only, not an absolute necessity. *** **Ethics Review Rebuttal** We appreciate the ethics reviewer's careful evaluation of any ethical issues in our work. Here is our rebuttal to the ethics review and our plan to address these concerns. **1. IRB Details**: We thank the ethics reviewer for noting that we omitted details about the IRB approval of the protocol used to obtain speech evaluations from human participants. 
We will make revisions to include further details about the human study protocol. Previous: > “These evaluations were conducted by native English speakers from the U.S. on Amazon Mechanical Turk.” Updated: > “These evaluations were conducted by native English speakers from the U.S. on Amazon Mechanical Turk. All evaluators reported normal hearing and provided informed consent as monitored by the local institutional review board and in accordance with the ethical standards of the Declaration of Helsinki.” We will further disclose the IRB protocol number in the camera-ready version as proof. **2. Potential for Deception and Ways for Mitigation:** We appreciate the importance of the potential for deception this model creates. We will address this issue more holistically by noting further impacts of possible deception and our specific plans for mitigating possible negative consequences of the release of this model. Specifically, we will expand our discussion in Section 6 with the following paragraph: >We acknowledge that zero-shot speaker adaptation has the potential for misuse and deception by mimicking the voices of individuals as a potential source of misinformation or disinformation. This could lead to harmful, deceptive interactions such as theft, fraud, harassment, or impersonations of public figures that may influence political processes or trust in institutions. In order to manage the potential for harm, we will require users of our model to adhere to a code of conduct that will be clearly displayed as conditions for using the publicly available code and models. In particular, we will require users to inform those listening to samples synthesized by StyleTTS 2 that they are listening to synthesized speech or to obtain informed consent regarding the use of samples synthesized by StyleTTS 2 in experiments. Users will also be required to use reference speakers who have given consent to have their voice adapted, either directly or by license. 
Finally, we will make the source code publicly available for further research in speaker fraud and impersonation detection. **3. Meaning and Implications of Outperforming Human Data** We thank the ethics reviewer for bringing up the meaning of outperforming human data on an evaluation of naturalness. There are important considerations to make about the way in which speech segments were evaluated. First, the evaluated segments were originally spoken in a larger context. It is possible that a certain speech segment spoken in isolation sounds less natural than the same speech spoken in context. However, StyleTTS 2 synthesizes segments with a more limited context which may sound more natural in isolation than in a larger context. This would lead to higher ratings for StyleTTS 2 than for real speech, as the lack of larger context in evaluation would favor synthesized speech. Second, human speech naturally contains variability that is neither text- nor context-dependent. This variability can lead to short segments of real speech sounding less natural than expected on average. On the other hand, synthesized speech lacks this variability, instead producing “average” speech given the current text and context, which sounds more natural on average. Given that we can synthesize speech barely distinguishable from real speech, it raises the question of how real speech could be reasonably detected. Since we believe StyleTTS 2 outperforms human data mainly due to context and variability, we believe these characteristics could also be used for detecting synthesized speech. In particular, shorter segments may not be detected accurately, as they lack context and only represent a small amount of the possible variation in the speaker’s voice. On the other hand, longer segments may be detected more accurately, as real speech will be more influenced by long-term context and contain more “human” sources of variability, such as pauses and breaths. 
We will include these discussions in the revised paper to guide future research in this area.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces StyleTTS 2, a text-to-speech (TTS) model that leverages latent diffusion and adversarial training with SLMs. The most noteworthy thing is that the paper employs large pre-trained speech language models (SLMs), such as WavLM, as discriminators in the adversarial training process. This novel approach, combined with differentiable duration modeling for end-to-end training, improves the naturalness of the synthesized speech. Strengths: 1. It utilizes adversarial training with large speech language models (SLMs) in conjunction with differentiable duration modeling for end-to-end training. This approach improves the naturalness of synthesized speech. 2. The text encoder in StyleTTS 2 is separated into an Acoustic Text Encoder and a Prosodic Text Encoder. The Prosodic Text Encoder is then used as a conditional input for the latent diffusion process. This design allows the diffusion model to model the diverse prosody of different styles, and it can also be seen through the ablation study that latent diffusion is essential in this paper. Weaknesses: 1. The article fails to provide a clear explanation of how the prosodic style encoder ensures the extraction of style from the Mel spectrogram rather than other elements. The specific mechanism behind this process is not adequately detailed in the paper. While the article briefly mentions certain style diffusion methods, it does not sufficiently explain how the prosodic style is accurately derived from the Mel spectrogram. Further exploration is required to devise more effective encoder architectures and training mechanisms to ensure the precise extraction of prosodic style information. 2. The writing style of the article lacks coherence, making it challenging to grasp the main ideas and problem-solving strategies. The organization and presentation of the paper could have been improved. ----------- The author's response addressed some of my issues, so I raised my score. 
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your valuable time and effort in reviewing our paper. Your insights have helped us clarify certain points about our work. Here are our point-by-point responses. - **Concern about Zero-Shot TTS and Prosodic Styles** We acknowledge that this aspect may not have been clearly articulated, and we apologize for not clarifying that the inference pipeline in Figure 1(b) is for the single-speaker model only. For multispeaker models, including zero-shot speaker adaptation, the style diffusion model does take both acoustic and prosodic styles as input, as described in the last paragraph of Section 3.2.2. We will make it clear that Figure 1(b) represents the single-speaker case only and we will add an additional description for the multispeaker case under the figure in the camera-ready version. Moreover, we would like to emphasize that our primary contribution lies in presenting human-level TTS systems for seen speakers with style diffusion and SLM discriminators. Beyond this, we present the zero-shot feature as an additional capability, and we leave further improvements in zero-shot speaker adaptation for future works, as discussed in Section 6. - **Concern about Prosodic Style Encoder and Decoupling from Speaker Timbre** We regret that we have not made the function of our prosodic style encoder more clear. Its role is not to decouple from the speaker timbre but to capture additional stylistic information that the acoustic style encoder cannot. This relieves the "burden" of the acoustic style encoder so the information it extracts does not have to be used to both reconstruct the speech and predict the prosody (duration and F0) faithfully. As you have correctly pointed out, the former needs to encode more of the timbre information, while the latter needs to extract more of the prosodic aspects. Hence, the prosodic style encoder is designed to complement, not decouple, the speaker's timbre by capturing additional stylistic information. 
We agree that this point can be made clearer and we will make the motivation for adding the prosodic style encoder more straightforward in the revision. In addition, since the F0 is predicted based on the style extracted from the reference audio, our model is able to produce speech with pitch ranges similar to the reference speech. This was already shown in the previous work StyleTTS, and we have presented several samples in Section 4 on our demo page to demonstrate this effect. We acknowledge that this could have been elaborated more clearly, and we will do so in the revision by emphasizing that our model maintains the capability of the original StyleTTS to match the styles in the synthesized speech with those extracted from the reference audio, such as the pitch range. - **Concern about Clarity and Understanding of Prosodic Style Encoder** We would like to clarify that the prosodic style encoders are trained specifically to reconstruct duration, F0, and energy, which we refer to as "prosody". It is worth noting that the prosodic style encoder is not the main focus of our paper. It only addresses the problem we found in the previous work (StyleTTS) that using a single style encoder to reconstruct both the speech and the prosodic information (duration, F0, and energy) is not sufficient. Hence, we simply duplicated the original style encoder to encode the acoustic and prosodic information separately. We appreciate the reviewer’s suggestion for further exploration of encoder architectures and training mechanisms, which we plan to do in future work. We will discuss this in Section 6 as a possible direction for future research in the revised version. - **Concern about the Readability of the Paper** We appreciate your feedback regarding the readability of our paper. We have made significant efforts to improve the readability and accessibility of the writing. 
Based on your feedback and that of the other reviewers, we have improved the clarity of several sections. We believe the overall coherence is greatly improved. We welcome further comments or suggestions on how we can improve the presentation of our work.
DiffComplete: Diffusion-based Generative 3D Shape Completion
Accept (poster)
Summary: The paper tackles the shape completion task by introducing a diffusion-based technique. To do so, additional design choices have been made, such as hierarchical feature aggregation and an occupancy-aware fusion strategy, to better reproduce the shape and respect the details. Based on numerous quantitative and qualitative experiments, the proposed method outperforms others by a considerable margin. Strengths: The paper clearly outperforms other SOTA. The visual quality of the completion is good. Occupancy-aware fusion and hierarchical feature aggregation, the main task-specific designs, make sense, and the authors have successfully shown their importance in the design through ablation studies. I like the adoption of ControlNet in the design. I believe it makes sense to learn gradually and at different scales. The fact that the data is from an actual scan and is not synthesized by adding noise and removing parts makes the method very useful and interesting. Weaknesses: In terms of writing, the paper could use simpler, shorter, and more understandable sentences. For example, this sentence could be improved by making it shorter or breaking it into two sentences: To improve the completion accuracy, we enable effective control from single-input condition by hierarchical and spatially-consistent feature aggregation, and from multiple ones by an occupancy-aware fusion strategy. This happens many times throughout the paper. Missing reference: Multimodal shape completion with IMLE: https://openaccess.thecvf.com/content/CVPR2022W/DLGC/html/Arora_Multimodal_Shape_Completion_via_Implicit_Maximum_Likelihood_Estimation_CVPRW_2022_paper.html Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: When you pre-process the input and the incomplete shape through a convolutional network and add them, why do you call it aligning distributions? 
As the computation and time are two limitations of this method, would a part-based completion task be a better approach? In fact, my question is whether a part based approach in which a model is first broken into parts will save computation time and understand the geometry better? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The method is mostly trained category specific. Considering the time and computational cost of this approach, this makes it hard and expensive to be adopted and modified. easily. However, I weigh the quality of the results more than the time and computation so I am okay with this limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 9z1Q, Thank you for the valuable comments and encouraging feedback. We are happy to respond to each question below. **Q1: The paper could use simpler, more understandable, and short sentences.** A1: In the revision, we will carefully polish the manuscript to ensure better clarity and conciseness. For instance, the suggested example will be rephrased as "We enhance the completion accuracy with two key designs. For a single condition, we propose a hierarchical feature aggregation strategy to effectively control the outputs. For multiple conditions, we introduce an occupancy-aware fusion strategy to incorporate more shape details." **Q2: Add reference: Multimodal Shape Completion with Implicit Maximum Likelihood Estimation.** A2: Both IMLE and DiffComplete address multimodal shape completion, and we'll cite this paper in the revision. The relevant discussion will be inserted in line 102 of the paper and is shown as follows, with the new contents in bold. "While some generative models can generate diverse global shapes given a partial input, they potentially allow for a high generation freedom and overlook the completion accuracy. **IMLE, for instance, adopts an Implicit Maximum Likelihood Estimation technique to specially enhance structural variance among the generated shapes.** Distinctively, we formulate a diffusion model for shape completion. Our method mainly prioritizes fidelity relative to ground truths while preserving output diversity." **Q3: Why do you call the pre-processing process of incomplete and complete shapes "aligning distributions"?** A3: The complete and incomplete shapes are represented by different data fields, *i.e.*, TUDF and TSDF, respectively. Directly fusing them could result in confusing representations. Hence, we use a pre-processing layer to empirically project the two different fields into a more compatible feature space for interaction. We previously referred to this as "aligning distributions". 
To avoid any misunderstanding, we will remove the term "aligning distributions" and incorporate the above explanations in the revision. **Q4: Will a part-based approach save computation time and help understand the geometry better?** A4: Decomposing 3D shapes into parts can help the model capture high-resolution details with low memory costs. We agree that this strategy is beneficial to our shape completion task. For instance, we can utilize the techniques of work [1] and [2] to first encode global volumetric TSDF (TUDF) into smaller structural patches. Then, our DiffComplete is capable of performing accurate part-level completion. Such an adaptation is feasible due to the versatility of DiffComplete's core designs. Future work will explore more efficient 3D representations (*e.g.,* patch-based) to enhance the capability of our method. We'll add these discussions to the future work section in our revised paper. **** **References** [1] Li, et al. Diffusion-SDF: Text-to-Shape via Voxelized Diffusion. In CVPR 2023. [2] Hertz, et al. SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation. TOG 2022. --- Rebuttal Comment 1.1: Comment: I am still positive about this paper and it should be accepted. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer 9z1Q Comment: Dear Reviewer 9z1Q, We sincerely appreciate your positive feedback and consistent support. Your suggestions are instrumental in enhancing the quality of our paper.
Summary: The authors present a diffusion-based neural network for 3D shape completion. They introduce a hierarchical and spatially consistent feature aggregation scheme to fuse partial and complete shapes, and an occupancy-aware fusion strategy to fuse multiple partial shapes. They demonstrate a new SOTA performance in shape completion for both new instances and completely unknown object categories. Strengths: 1. The authors successfully apply the diffusion-based model to the shape completion task and achieve a new SOTA performance. 2. The stepwise addition of partial scans to predict a complete shape is interesting and makes completion tasks interactive and adaptable. 3. The proposed model supports multimodal completions that can predict multiple plausible results. Weaknesses: 1. The writing of this paper can be improved. 2. The technical designs, such as the two feature aggregation schemes, are not well motivated. As a main contribution, I expect more explanations of the design ideas. 3. The experimental design is not very clear. Does DiffComplete train on a single category? Since PatchComplete trains on all categories, DiffComplete should also train across categories to allow a fair comparison. 4. The results of multiple conditional inputs are unpredictable. A better example is adding and editing a semantic part of the object from a partial input. 5. Does the proposed model require much more time than its competitors? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I wonder if the resolution can go up to 64^3 or even 128^3? 2. The diversity of multimodal completion is worse than that of other methods. So is it because the diffusion model gives more deterministic results than others? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitation is not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer hw6P, Thanks for the constructive comments. Please find our response to the specific points below. **Q1: Explain more design ideas about the two feature aggregation schemes.** A1: Our key design motivation is to enhance the completion accuracy and generalization ability. Below, we illustrate how each design choice serves these goals. **(i) Two-branch architecture**: We utilize two branches to separately process the complete and incomplete scans, such that each branch has a specific focus. Given the same architecture of the two branches, the feature sizes of complete and incomplete shapes are always aligned. This alignment greatly simplifies the following feature aggregation process. **(ii) Hierarchical feature aggregation**: Leveraging the multilayer structure of CNNs, we aggregate two-branch features at various network levels. This brings two-fold benefits. First, the network can correlate the difference between two shapes at multiple abstraction levels. The learned completion regularities are multi-scale, thus being more accurate and robust. As the substructures of objects are more general than full shapes, *e.g.*, chairs and tables may share very similar legs, the learned local completion patterns could generalize to various object classes. Second, we found that altering the level of feature aggregation can control the trade-off between completion accuracy and diversity. Hence, we can leverage this feature to adjust the model's performance, as illustrated in Table 7 of the main paper. **(iii) Spatially-consistent feature aggregation**: We simply add up the two-branch features at the respective 3D locations. This operation precisely compares the corresponding parts of two scan pairs, helping to yield more accurate completions than the widely-adopted cross-attention mechanism, as shown in Table 8 of the main paper. We'll further clarify our design motivation in the revision. **Q2: Does DiffComplete train on a single category?
Since PatchComplete trains on all categories, DiffComplete should also train across categories to allow a fair comparison.** A2: For fair comparisons, on the PatchComplete benchmark, we train DiffComplete across all object categories like PatchComplete. This has been described in lines 252-254 of the main paper. On the 3D-EPN benchmark, both PatchComplete and our method train a specific model for each object category. We will make the experimental settings clearer in the revision.

**Q3: To show the results of multiple conditional inputs, a better example is adding and editing a semantic part of the object from a partial input.** A3: We have provided several visual examples in the attached **PDF** under *the response to all reviewers*. These examples will be integrated into our revised manuscript to better showcase the model's multiple-input capability.

**Q4: Does the proposed model require much more time than its competitors?** A4: As our method belongs to the class of diffusion models, like other diffusion-based approaches, it is inherently slower than methods of other classes (*e.g.*, GANs) due to the iterative sampling process. In scenarios where the completion quality weighs more than inference time, DiffComplete is definitely a better choice. Additionally, we'd like to mention that the inference time can be reduced by employing fast sampling techniques (*e.g.*, [1]) for diffusion models. Compared with two other diffusion models [2] and [3], our method maintains similar inference times but significantly improves the completion quality, as shown in **Table R8**.

**Table R8:** Comparisons of completion accuracy (Avg. $l_1$-err.) and average inference time on the 3D-EPN benchmark. It is tested on a single RTX 3090 GPU with a batch size of 1.

|**Method**|**Paradigm**|**Avg. $l_1$-err. $\downarrow$**|**Inference Time $\downarrow$**|
|-|-|:-:|:-:|
|RePaint-3D [2]|diffusion model|0.374|34.1 s|
|PVD [3]|diffusion model|0.114|**3.1 s**|
|DiffComplete (Ours)|diffusion model|**0.053**|3.2 s|

**Q5: Can the resolution go up to $64^3$ or even $128^3$?** A5: Yes. **(i)** It's feasible to train our DiffComplete with a resolution of $64^3$ on affordable NVIDIA 3090 GPUs. We provide visual examples of our outputs in the attached **PDF** under *the response to all reviewers*. We are happy to include these results in the supplement during revision. **(ii)** There are various available strategies to scale up resolutions further (*e.g.*, to $128^3$) even on smaller GPUs. For instance, we can apply the gradient accumulation technique to break the large batch into smaller chunks, ensuring each fits within the GPU's memory constraints. Other beneficial options would be improving computation efficiency, such as employing a lightweight backbone network (*e.g.*, TriVol [4]) or leveraging the autoencoding approach (*e.g.*, from [5]) for shape compression. These techniques could be incorporated into our generic pipeline to facilitate high-resolution shape completion.

**Q6: For worse diversity, is it because diffusion models give more deterministic results than other methods?** A6: It's not about the diffusion model, as RePaint-3D, with the best diversity, is also diffusion-based. This is because our design choice prioritizes completion accuracy over diversity. Yet, the accuracy-diversity trade-off can be easily adjusted in DiffComplete. By altering the feature aggregation level, our method can achieve the best diversity, as shown in Table 7 of the main paper.

**** **Reference** [1] Zheng, et al. Fast Sampling of Diffusion Models via Operator Learning. In NeurIPS 2022 Workshop. [2] Lugmayr, et al. RePaint: Inpainting using Denoising Diffusion Probabilistic Models. In CVPR 2022. [3] Zhou, et al. 3D Shape Generation and Completion Through Point-Voxel Diffusion.
In ICCV 2021. [4] Hu, et al. TriVol: Point Cloud Rendering via Triple Volumes. In CVPR 2023. [5] Li, et al. Diffusion-SDF: Text-to-Shape via Voxelized Diffusion. In CVPR 2023.
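As a rough, hypothetical sketch (not the paper's implementation), the hierarchical, spatially-consistent aggregation described in A1 above amounts to element-wise addition of the two branches' aligned feature volumes at each network level; the function name, feature shapes, and three-level pyramid below are all illustrative assumptions:

```python
import numpy as np

def aggregate_hierarchical(denoise_feats, control_feats):
    """Spatially-consistent aggregation: element-wise addition of the two
    branches' feature volumes at every network level. Because both branches
    share the same architecture, the feature grids align exactly."""
    assert len(denoise_feats) == len(control_feats)
    fused = []
    for f_d, f_c in zip(denoise_feats, control_feats):
        assert f_d.shape == f_c.shape  # aligned 3D feature grids
        fused.append(f_d + f_c)        # add at respective 3D locations
    return fused

# Hypothetical 3-level feature pyramid over a 32^3 volume (channels first).
levels = [(8, 32, 32, 32), (16, 16, 16, 16), (32, 8, 8, 8)]
rng = np.random.default_rng(0)
dn = [rng.standard_normal(s) for s in levels]  # denoising-branch features
ct = [rng.standard_normal(s) for s in levels]  # control-branch features
out = aggregate_hierarchical(dn, ct)
print([o.shape for o in out])
```

Unlike cross-attention, this keeps each voxel's conditioning strictly local, which matches the rebuttal's claim that corresponding parts of the scan pair are compared directly.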
Summary: The paper tackles the problem of probabilistic shape completion using diffusion models, learning from range scans. The main contribution of the paper comes from proposing two novel techniques: hierarchical feature aggregation for strong conditioning and an occupancy-aware fusion technique. The method is tested on the ShapeNet dataset and achieves better completions compared to the baselines. Also, the proposed method exhibits robust generalization for out-of-distribution inputs. Strengths: * I enjoyed the idea of hierarchical feature aggregation. To the best of my knowledge, conditioning a diffusion model is hard, and the ablation study in Tab. 7 clearly shows that the multi-level conditioning acts as intended. * Strong empirical results on completion on unseen categories. The results in Tab. 2, 3 show that the method achieves strong completion results on unseen categories, and it makes sense due to strong hierarchical conditioning. Weaknesses: * Similar works on 3D completion using diffusion models exist. As mentioned in the related sections, the difference comes from the representation used to represent the state in the diffusion model. For example, point-voxel uses point clouds, while this work uses the TSDF (TUDF) representation. While I do not think that the novelty of the work degrades even if there are other diffusion-based 3D completion models, I want to hear in detail how the method differs from other diffusion-based models. Please look at the question section. * My main concern is that some of the important recent works are not mentioned and compared. For deterministic completion methods, ConvOcc [1] completes the given partial point cloud using occupancy fields. For multimodal completion methods, point-voxel [2] (although mentioned in the related section) and GCA [3] complete point clouds in point cloud/sparse voxel representation. ShapeFormer [4] and cGCA [5] complete the point cloud in implicit neural representations.
Although I realize that comparing all of these baselines requires a lot of effort, these baselines should have been tested since the core contribution of the work stems from achieving SOTA results on an existing benchmark. [1] Peng et al. Convolutional Occupancy Networks. ECCV, 2020 [2] Zhou et al. 3D Shape Generation and Completion Through Point-Voxel Diffusion. ICCV, 2021 [3] Zhang et al. Learning to generate 3d shapes with generative cellular automata. ICLR, 2021 [4] Yan et al. ShapeFormer: Transformer-based Shape Completion via Sparse Representation. CVPR, 2022 [5] Zhang et al. Probabilistic implicit scene completion. ICLR, 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Regarding the related work with diffusion-based models, the authors mentioned that “Due to the absence of meaningful ground truths in these completion scenarios, they could also face completion accuracy challenges like the above generative approaches” in line 114. Does that mean that the diffusion methods require the ground truth while the proposed method does not? To the best of my knowledge, the DiffComplete model requires ground-truth TUDF for training. I think that the assumption that we can acquire the ground-truth TUDF means that we can acquire a ground-truth point cloud and implicit function (mostly UDF-based) as well. If so, then the other baseline methods mentioned in the weakness section should be compared. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors have addressed the limitations in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer DbPU, Thanks for the valuable feedback. We are happy to address each specific comment below. **Q1: Explain more details about the differences from other diffusion-based models.** A1: DiffComplete is specifically designed for high-quality shape completion. As a result, it differs from recent 3D diffusion models in various design choices, as summarized in **Table R7**: (i) diffusion space, (ii) training strategy, and (iii) condition injection mechanism. Below, we present the details about each of these aspects. **(i) Diffusion Space**: Different from many previous approaches, we perform the diffusion process on the original volumetric TUDF. Doing so helps to ensure a direct transition to final meshes via Marching Cubes. This choice is favored as it preserves the fidelity of shape information better than latent codes and circumvents the need for post-processing required by point clouds. While radiance fields are suited for rendering tasks, they fall short when recovering the shape geometries. **(ii) Training Strategy**: DiffRF [1] and Diffusion-SDF [2] both perform a masked diffusion process to incorporate conditions only at the inference stage. They do not include any conditional inputs at the training stage, thus lacking paired incomplete-to-complete GT supervision to accurately learn the completion rules. In contrast, Diffusion-SDF [3], PVD [4], and our method utilize these scan pairs in training to enhance the completion accuracy. However, we design different mechanisms to inject the conditions, as we discuss next. **(iii) Condition Injection**: Utilizing a control branch for hierarchical condition injection is a key design of DiffComplete, which enables both accurate completion and effective controls.
This strategy yields superior performance over both the cross-attention mechanism (see Table 8 of the main paper) and PVD's injection mechanism (see **Tables R1**, **R2**, and **R3** in *the response to all reviewers*, located at the top of this webpage). In the revision, we'll add these discussions into the supplement to clarify DiffComplete's unique attributes.

**Table R7:** Comparisons with different 3D diffusion models. Note that [2] and [3] are different works.

|**Method**|**Diffusion Space**|**Conditional Training**|**Condition Injection**|
|-|-|-|-|
|DiffRF [1]|radiance field|×|masked diffusion|
|Diffusion-SDF [2]|latent patch|×|masked diffusion|
|Diffusion-SDF [3]|latent vector|✓|cross-attention|
|PVD [4]|point cloud|✓|single branch|
|DiffComplete|volumetric TUDF|✓|two-branch hierarchical aggregation|

**Q2: Confusions in Line 114 - "Due to the absence of meaningful ground truths in these completion scenarios, they could also face completion accuracy challenges like the above generative approaches".** A2: Similar to Diffusion-SDF [3] and PVD [4], our method requires incomplete-to-complete ground-truth shape pairs during training, as indicated in **Table R7**. For line 114, we intended to illustrate that certain 3D diffusion models (DiffRF [1] and Diffusion-SDF [2]) do not use these pairs for conditional training, which limits their completion accuracy. To avoid any confusion, the related sentences (lines 112-117) will be revised as follows: "For conditional shape completion, both DiffRF and Diffusion-SDF adopt a masked diffusion strategy to fill in large missing regions cropped out by 3D boxes. However, their training processes do not leverage a paired incomplete-to-complete ground truth, which may prevent them from accurately learning the completion rules. Contrarily, our method explicitly uses the scan pairs for conditional training."
**Q3: Comparisons with the suggested baselines.** A3: As suggested, we have extensively compared DiffComplete with ConvONet, ShapeFormer, cGCA, and diffusion-based PVD. The results are illustrated in **Tables R1**, **R2**, and **R3** within *the response to all reviewers* with detailed discussions, located at the top of this webpage. Notably, DiffComplete exhibits obvious advantages over these baselines across various benchmarks and evaluation metrics. As GCA and cGCA share a similar generation paradigm, we only compared DiffComplete with cGCA, which has superior performance as demonstrated in its paper. **** **References** [1] Muller, et al. DiffRF: Rendering-Guided 3D Radiance Field Diffusion. In CVPR 2023. [2] Li, et al. Diffusion-SDF: Text-to-Shape via Voxelized Diffusion. In CVPR 2023. [3] Chou, et al. Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions. In ICCV 2023. [4] Zhou, et al. 3D Shape Generation and Completion Through Point-Voxel Diffusion. In ICCV 2021. --- Rebuttal Comment 1.1: Title: Revision Comment: I indeed thank the authors for the clarification and all the experiments conducted. I feel that the paper is much stronger for it. It would be nice if the authors could visualize the qualitative results for the newly added baselines, but I think that can be done in the camera-ready version. I am convinced that the method works well compared to other baselines and I will raise the score. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer DbPU Comment: Dear Reviewer DbPU, Thank you for your positive acknowledgment and updated score. We've conducted visual comparisons with the newly-suggested baselines, including ConvONet, ShapeFormer, and PVD. Our method consistently demonstrates superior qualitative results for both known and unseen object categories. Due to NeurIPS 2023's guidelines that prohibit external links in the rebuttal box, we cannot show the visualizations here. 
Instead, we've shared an anonymized Dropbox link in the "Official Comment" section at the top of the review page for the AC's reference. We are happy to incorporate these visual comparisons into Figures 3 and 4 of the camera-ready paper. Our code will also be available to facilitate reproducibility.
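The conditional training discussed in this thread (using incomplete-to-complete scan pairs) builds on the standard DDPM recipe. Below is a minimal, hypothetical numpy sketch of the forward corruption step and an epsilon-prediction loss; the beta schedule, grid size, and the zero-valued dummy predictor standing in for the conditional network are illustrative assumptions, not the paper's code:

```python
import numpy as np

def q_sample(x0, t, alphas_bar, rng):
    """Forward diffusion: corrupt a clean TUDF grid x0 to noise level t via
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    abar = alphas_bar[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

# Hypothetical linear beta schedule over T steps.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 1.0, size=(8, 8, 8))  # toy unsigned-distance grid
x_t, eps = q_sample(x0, t=50, alphas_bar=alphas_bar, rng=rng)

# In conditional training, a network eps_hat = f(x_t, t, partial_scan) would
# be regressed to eps with an L2 loss; here a dummy predictor stands in.
eps_hat = np.zeros_like(eps)
loss = np.mean((eps_hat - eps) ** 2)
print(x_t.shape, float(loss) > 0)
```

The key point from Table R7 is where the condition enters: here it would be an extra input to the noise predictor at training time, rather than a mask applied only at inference.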
Summary: The paper introduces a diffusion-based approach, DiffComplete, to generate complete shapes conditioned on partial 3D range scans. The condition is represented as volumetric TSDF (truncated signed distance function), while the complete shape is represented as volumetric TUDF (truncated signed distance function). Inspired by 2D ControlNet, the authors devise a hierarchical feature aggregation mechanism to inject conditional features in a spatially-consistent manner. Besides, they propose a fine-tuning strategy to adapt the model trained on a single condition to multiple conditions. The trade-off between multi-modality and high fidelity can be controlled through the network level for feature aggregation between conditions and denoised complete shapes. The results on 3D-EPN and PatchComplete show the superiority of DiffComplete. The authors also demonstrate its zero-shot ability. Strengths: - The paper is clearly written and easy to follow. - The ControlNet-style design to inject features from the condition is reasonable. - Fine-tuning the network trained with a single incomplete shape for multiple incomplete shapes is a good strategy. - The generalizability looks good according to Fig. 4. Weaknesses: It is unclear why some baselines (especially point-cloud-based methods) are missing. For example, "3D Shape Generation and Completion Through Point-Voxel Diffusion" and "SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Is there a typo in Figure 7? Is it an MMD curve? 2. For the ablation study on multiple conditions, can the authors first fuse the scans and compute TSDF, then extract features instead of averaging features extracted from individual scans? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations (e.g., failure cases) in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KuxT, Thanks for the encouraging feedback. We are grateful to see your recognition of our methodology, model performance, and manuscript writing. We address the specific comments below.

**Q1: Compare with the suggested baselines (especially point-cloud-based methods), e.g., PVD [1] and SnowflakeNet [2].** A1: In our rebuttal (see above), we've expanded comparisons with several suggested surface completion baselines, including an adapted version of PVD. Please refer to **Tables R1**, **R2**, and **R3** within *the response to all reviewers* for details, located at the top of this webpage. These additional comparisons further show the superior capability of our method and will be included in the final revision. Regarding methods on point cloud completion, we compare our DiffComplete with the leading approach SnowflakeNet (TPAMI 2022) as per your suggestion. To make the comparisons fair, we converted SnowflakeNet's outputs to meshes using the reconstruction technique from ConvONet [3]. As shown in **Table R5**, our method delivers much better completion accuracy. We'll include this table in the supplement of the revision.

**Table R5:** Comparisons of completion accuracy on the 3D-EPN benchmark, evaluated by average $l_1$-error ($\downarrow$) across eight object classes.

|**Method**|**Output**|**Avg. $l_1$-err. ($\downarrow$)**|
|-|-|:-:|
|SnowflakeNet [2]|point cloud|0.189|
|DiffComplete (Ours)|volumetric TUDF|**0.053**|

**Q2: Is there a typo in Figure 7? Is it an MMD curve?** A2: Figure 7 is a TMD curve to reveal the varying shape diversity. Yet, a typo in line 301 identifies it as an MMD curve. We have already corrected the issue and will do a careful wording pass.

**Q3: For multiple conditional inputs, can the authors first fuse the scans and compute TSDF, then extract features instead of averaging features extracted from individual scans?** A3: We compare our strategy with this option in Section B.2 of the supplement.
As indicated by **Table R6** (copy of Table 10 in the supplement), fusing the original scans might be vulnerable to registration errors, which can disrupt the final results. Instead, fusing scan features in hierarchical feature spaces shows greater resilience to simple noise at the TSDF level. For a clearer illustration of our design, we will move this ablation study to the main paper.

**Table R6:** Choice of fusion space for multiple partial shapes. Directly fusing them in the original TSDF space significantly impairs the completion quality.

|**Fusion Space**|**$l_1$-err. $\downarrow$**|**CD $\downarrow$**|**IoU $\uparrow$**|
|-|:-:|:-:|:-:|
|TSDF|0.12|4.78|61.0|
|feature|**0.05**|**3.97**|**68.3**|

**** **References** [1] Zhou, et al. 3D Shape Generation and Completion Through Point-Voxel Diffusion. In ICCV 2021. [2] Xiang, et al. Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer. TPAMI 2022. [3] Peng, et al. Convolutional Occupancy Networks. In ECCV 2020. --- Rebuttal Comment 1.1: Comment: Thank the authors for the extra comparison with missing baselines. The rebuttal has resolved my concern. I would like to keep my rating. --- Reply to Comment 1.1.1: Title: Author Response Comment: Dear Reviewer KuxT, We sincerely thank you for your feedback and support. Your suggestions greatly help us refine our work.
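A minimal sketch of what occupancy-aware feature fusion over multiple scans could look like, assuming (hypothetically) that per-scan feature grids are averaged with occupancy-derived weights so that observed regions dominate; the function name, shapes, and weighting scheme are illustrative, not the paper's code:

```python
import numpy as np

def occupancy_aware_fuse(feats, occupancies, eps=1e-8):
    """Fuse per-scan feature grids with weights derived from each scan's
    occupancy, so reliably observed regions dominate the fused feature.
    feats: list of (C, D, H, W) arrays; occupancies: list of (D, H, W)
    arrays with values in [0, 1]."""
    num = np.zeros_like(feats[0])
    den = np.zeros(occupancies[0].shape)
    for f, occ in zip(feats, occupancies):
        num += f * occ[None]  # broadcast the weight over channels
        den += occ
    return num / (den[None] + eps)

rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8, 8, 8)) for _ in range(3)]
occs = [rng.uniform(0, 1, (8, 8, 8)) for _ in range(3)]
fused = occupancy_aware_fuse(feats, occs)
print(fused.shape)
```

With identical occupancy everywhere, this reduces to the plain feature averaging discussed in Q3, while fusing in feature space (rather than raw TSDF space) is what Table R6 credits with robustness to registration noise.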
Rebuttal 1: Rebuttal: Dear all reviewers, We sincerely thank you for your constructive comments and are grateful to see the recognition received:

- **Methodology:** Reviewer 9z1Q: "I like the adoption of ControlNet in the design"; Reviewer KuxT: "The ControlNet-style design is reasonable", "Fine-tuning the network for multiple incomplete shapes is a good strategy"; Reviewer DbPU: "I enjoyed the idea of hierarchical feature aggregation"; Reviewer Ct9H: "The paper is technically sound".
- **Experimental Results:** Reviewer 9z1Q: "clearly outperforms other SOTAs", "shown their importance in the design through ablation studies"; Reviewer hw6P: "a new SOTA performance"; Reviewer KuxT: "The generalizability looks good"; Reviewer DbPU: "Strong empirical results on completion on unseen categories".
- **Practicality:** Reviewer 9z1Q: "Data is from the actual scan, making the method very useful and interesting"; Reviewer DbPU: "The stepwise addition of partial scans is interesting and makes completion tasks interactive and adaptable".

We first address the general concern of Reviewer Ct9H/KuxT/DbPU below. Then, we provide a detailed response to each reviewer's comments and address all issues. **Q: Performance comparisons with the suggested baselines. (To Reviewer Ct9H/KuxT/DbPU)** A: The suggested baselines include both the deterministic approach (ConvONet [1]) and multimodal approaches (ShapeFormer [2], PVD [3], and cGCA [4]). We conducted experiments on both the 3D-EPN and PatchComplete benchmarks, showing the results in **Table R1** and **Table R2**, respectively, inside our rebuttal. Our DiffComplete surpasses all of these baselines by a significant margin with respect to completion accuracy and generalization ability. We provide a detailed discussion below. **(i)** ConvONet [1] extends the 3D-EPN structure (which is a baseline in our paper) by incorporating an implicit occupancy decoder.
Both methods are deterministic by nature and hence cannot handle ambiguities of missing data. Thus, their completion accuracy and generalization ability are lower than our method's. **(ii)** Our method differs from ShapeFormer [2] and cGCA [4] by the proposed generative model. The employed diffusion model offers superior sampling quality compared to the autoregressive model (ShapeFormer) and the Generative Cellular Automata model (cGCA). Compared with [2] and [4], which perform shape compression, we also preserve the fidelity of the original structures to enhance the completion details. **(iii)** PVD [3] is originally a point cloud diffusion model. Here, we adapt it to perform TSDF (TUDF) diffusion as suggested by Reviewer Ct9H. A limitation of PVD is that it retains the noise of the partial input. Hence, when the partial input is mixed with the generated missing part, this noise severely affects the final completion quality. Instead, we design two branches to separately process the partial and complete shapes. By doing so, the model can effectively learn a diffusion process from noise to clean shapes. For multimodal approaches, we further compare their multimodal capacity. **Table R3** shows that our method achieves significant improvements on both completion accuracy (MMD) and fidelity (UHD), while preserving a moderate (or even comparable) completion diversity (TMD). This result aligns with our design choice that prioritizes completion accuracy over diversity. We'd like to highlight that the trade-off between accuracy and diversity can be easily adjusted in DiffComplete, as detailed in Section 4.5 of the main paper. All experiments are conducted on the same data for a fair comparison. We'll include the additional comparison results from **Tables R1**, **R2**, and **R3** into Tables 1, 2, and 4 of the main paper, respectively, since they directly correlate. We will also release our code to facilitate reproducibility.
**Table R1:** Comparisons of completion accuracy on the 3D-EPN benchmark, evaluated by average $l_1$-error ($\downarrow$) across eight object classes.

| **Method** | **Paradigm** | **Avg. $l_1$-err. ($\downarrow$)** |
|-|-|:-:|
| ConvONet [1] | deterministic | 0.220 |
| ShapeFormer [2] | multi-modal | 0.141 |
| PVD [3] | multi-modal | 0.114 |
| cGCA [4] | multi-modal | 0.185 |
| DiffComplete (Ours) | multi-modal | **0.053** |

**Table R2:** Comparisons of generalization ability on the PatchComplete benchmark, evaluated by average CD ($\downarrow$) and IoU ($\uparrow$) across eight *unseen* object classes.

| **Method** | **Paradigm** | **Avg. CD ($\downarrow$)** | **Avg. IoU ($\uparrow$)** |
|-|-|:-:|:-:|
| ConvONet [1] | deterministic | 5.26 | 60.1 |
| ShapeFormer [2] | multi-modal | 5.05 | 62.5 |
| PVD [3] | multi-modal | 4.94 | 62.8 |
| cGCA [4] | multi-modal | 5.09 | 61.7 |
| DiffComplete (Ours) | multi-modal | **4.10** | **67.5** |

**Table R3:** Comparisons of multimodal capacity on the Chair class of the 3D-EPN benchmark.

| **Method** | **MMD ($\downarrow$)** | **TMD ($\uparrow$)** | **UHD ($\downarrow$)** |
|-|:-:|:-:|:-:|
| ShapeFormer [2] | 0.007 | 0.024 | 0.055 |
| PVD [3] | 0.007 | **0.027** | 0.042 |
| cGCA [4] | 0.006 | 0.024 | 0.047 |
| DiffComplete (Ours) | **0.002** | 0.025 | **0.032** |

**References**

[1] Peng, et al. Convolutional Occupancy Networks. In ECCV 2020.
[2] Yan, et al. ShapeFormer: Transformer-based Shape Completion via Sparse Representation. In CVPR 2022.
[3] Zhou, et al. 3D Shape Generation and Completion Through Point-Voxel Diffusion. In ICCV 2021.
[4] Zhang, et al. Probabilistic Implicit Scene Completion. In ICLR 2022.

Pdf: /pdf/e35163a21e9f0cfb81684e6cbd4bd7e068c28105.pdf
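The two-branch design described in point (iii) of the rebuttal can be illustrated with a minimal sketch. The tiny linear "encoder blocks", flattened toy volume size, and weight shapes below are hypothetical stand-ins for the paper's 3D networks, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_block(x, w):
    # one "level" of a tiny encoder: linear map + ReLU (stand-in for a 3D conv block)
    return np.maximum(x @ w, 0.0)

def denoiser_with_control(x_noisy, x_partial, main_ws, ctrl_ws):
    # Two-branch idea: a control branch encodes the clean partial scan, and its
    # features are injected (added) into the main denoising branch level by
    # level, instead of mixing noisy and clean voxels in a single input.
    h_main, h_ctrl = x_noisy, x_partial
    for wm, wc in zip(main_ws, ctrl_ws):
        h_ctrl = encoder_block(h_ctrl, wc)
        h_main = encoder_block(h_main, wm) + h_ctrl  # feature injection
    return h_main

D = 16  # flattened toy "volume" size (illustrative)
main_ws = [rng.normal(scale=0.1, size=(D, D)) for _ in range(3)]
ctrl_ws = [rng.normal(scale=0.1, size=(D, D)) for _ in range(3)]
x_noisy = rng.normal(size=(1, D))    # noisy complete shape
x_partial = rng.normal(size=(1, D))  # observed partial scan (kept clean)
out = denoiser_with_control(x_noisy, x_partial, main_ws, ctrl_ws)
print(out.shape)  # (1, 16)
```

Keeping the partial scan in its own clean branch is what lets the model learn a diffusion process from noise to clean shapes without the conditioning being corrupted, which is the contrast drawn with the adapted PVD baseline above.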
NeurIPS_2023_submissions_huggingface
2023
Summary: The proposed method aims to tackle the TSDF shape completion problem using diffusion models. That is, given one or several incomplete TSDFs obtained from partial scans of a single object produced by range sensors, the method generates a TSDF of the complete shape while trying to preserve the geometric details of the incomplete one. The method achieves this by using the same technique as in ControlNet: adding a control branch to the vanilla UNet module in diffusion models. Also, in order to better incorporate the partial scans from different views, the authors propose an "occupancy-aware fusion" module which performs a weighted feature fusion of the multi-view scans considering their geometric reliability. Strengths: In general, the paper is easy to read and is technically sound. The following are specific points: - Single probabilistic framework for all tasks. As more and more recent papers suggest, generative models are not only doing great in generation problems, but are also competitive in deterministic problems where usually a unique optimal solution is desired given the input. This method shows this point for the 3D completion problem: even when the input shape has little to no ambiguity, the generated completion has better quality than previous deterministic approaches. - Incorporating multi-view scans. This special consideration is useful in real-life applications, where the scanners are constantly capturing new scans and the small errors in registration make it non-trivial to fuse features from different views. Weaknesses: I have concerns about the baselines & references, an unsatisfactory contribution, and the writing. Hence I believe this paper does not reach the NeurIPS bar and would like to rate "Reject". Here are detailed points: - Missing baselines and references. Comparisons with multiple very related works are omitted. ConvOccNet can perform shape completion very well at higher resolution (64 and higher, see Fig. 4 in their supplementary).
The model is very good at deterministic completion. For the multimodal setting, two very related works, ShapeFormer and 3DILG, are not mentioned or discussed, and the former is especially designed for multimodal completion. - Contribution is not satisfactory. The proposed method seems to be just deploying ControlNet on the PatchComplete problem and benchmark. The used resolution (32) is also not impressive compared to previous works like DiffusionSDF or 3DShape2VecSet. I would be skeptical that training diffusion models directly and keeping the partial TSDF unchanged during sampling (as what PVD does for completion) can achieve better results. - Many minor writing problems. L99: It is confusing to classify AutoSDF as a type of autoencoder. It is an autoregressive approach. L114: "Due to the absence of meaningful ground truths, ...". In the mentioned works, they all have meaningful ground-truth shapes. It is unclear what "meaningful" means here. L325: "Effects of feature aggregation manner". A better way to write this is: "Effects of the manner in which features are aggregated" or "Effects of our feature aggregation mechanism" Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Please show advantages over these baselines: (1) Convolutional Occupancy Network. Train ConvOccNet to map the partial TSDF grid to the complete grid. Then compare. (2) ShapeFormer. A TSDF can be converted to a point cloud. Train ShapeFormer to map point clouds to implicit TSDF fields. (3) TSDF diffusion with the PVD style of completion sampling. That is, at sampling time, always replace the noisy values with the actual TSDF values in the partially scanned cells. This way, only the "missing" regions are denoised and completed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Authors include a comprehensive discussion on method limitations and potential societal impact in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
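The PVD-style completion sampling sketched in the reviewer's question (3) — overwrite the observed cells with their (noised) ground-truth TSDF values at every reverse step, so only the missing region is denoised — can be written as a toy loop. The shrink-toward-zero "denoiser" and linear noise schedule below are illustrative assumptions standing in for a learned score model:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, t):
    # toy "denoiser": shrink toward zero (stand-in for a learned score model)
    return 0.9 * x

def pvd_style_completion(x_partial, mask, denoise_step, T=50):
    # Masked completion sampling: at every reverse step, cells observed in the
    # partial scan are overwritten with their noised ground-truth TSDF values,
    # so only the "missing" regions are actually denoised and completed.
    x = rng.normal(size=x_partial.shape)          # start from pure noise
    for t in range(T, 0, -1):
        x = denoise_step(x, t)                    # one reverse-diffusion step
        sigma_t = t / T                           # toy noise schedule
        noised_known = x_partial + sigma_t * rng.normal(size=x_partial.shape)
        x = np.where(mask, noised_known, x)       # keep known cells pinned
    return x

partial = np.ones((4, 4))        # toy "known" TSDF values
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True                  # top half observed
out = pvd_style_completion(partial, mask, denoise)
print(out.shape)  # (4, 4)
```

Note that the observed region ends up near its ground-truth values while the unobserved region is whatever the denoiser produces; the authors' rebuttal argues that this replacement scheme also re-injects the noise of the partial scan, which motivates their separate conditioning branch.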
Rebuttal 1: Rebuttal: Dear Reviewer Ct9H, In the following, we address all comments in the review. Our additional results validate that our method achieves a clear improvement over the newly-suggested baselines. We are happy to include these experiments in the final revision of the paper. **Q1: Add comparisons with the suggested baselines.** A1: Comprehensive comparisons with ConvONet, ShapeFormer, and PVD are presented in **Tables R1**, **R2**, and **R3** within *the response to all reviewers*, located at the top of this webpage. These results show the superior performance of our method. Regarding the suggested 3D-ILG [1], it mainly focuses on the representation for shape generation rather than the specific completion task. As 3D-ILG also supports shape completion with a conditional generation paradigm, we compare it with our method under aligned data and experimental setups. As presented in **Table R4**, our method achieves much more accurate results compared to 3D-ILG. We'll include **Table R4** in our supplement in the revision.

**Table R4:** Comparisons of completion accuracy on the 3D-EPN benchmark, evaluated by average $l_1$-error ($\downarrow$) across eight object classes.

| **Method** | **Avg. $l_1$-err. ($\downarrow$)** |
|-|:-:|
| 3D-ILG [1] | 0.165 |
| DiffComplete (Ours) | **0.053** |

**Q2: Is the proposed method a deployment of ControlNet on the PatchComplete benchmark?** A2: Although the paradigm of injecting conditional features takes inspiration from ControlNet, DiffComplete has critical distinctions that justify our contributions. First, DiffComplete differs from ControlNet in several key areas. **(i) Task:** ControlNet tackles the 2D text-to-image generation task; making it work on our 3D completion task is non-trivial. To this end, we design an appropriate volumetric representation and 3D networks. **(ii) Motivation:** The motivation of ControlNet is to finetune pretrained large diffusion models, while we aim to train a specific diffusion model.
This leads to different training strategies. **(iii) Training Strategy:** ControlNet utilizes a "trainable copy" initialization, but our experiments found that training from scratch is the most effective way. Please see Table 9 in the supplement for details. **(iv) Design Details:** We discard "zero convolution", a critical component in ControlNet, as we do not require the pre-training process. We also directly embed the original 3D shape representation rather than operating in latent space. Second, DiffComplete offers new features and insights beyond ControlNet. **(i)** Our method further supports multiple inputs to improve the completion accuracy. **(ii)** We delve deeper into the feature injection mechanism, observing that altering the feature aggregation level finely controls the trade-off between completion accuracy and diversity. This finding can be leveraged to adjust the model's performance, as described in Section 4.5 of the main paper. **Q3: The used resolution ($32^3$) is not impressive.** A3: We are able to scale up the resolution through various available strategies. For instance, we can apply the gradient accumulation technique to break the large batch into smaller chunks, ensuring each fits within the GPU's memory constraints. Such a technique effectively facilitates the training of DiffComplete at higher resolutions. Other beneficial options would be improving computation efficiency, such as employing a lightweight backbone network (*e.g.*, TriVol [2]) or leveraging the autoencoding approach (*e.g.*, from [3]) for shape compression. These approaches could be incorporated into our generic pipeline to complete high-resolution shapes even on smaller GPUs. **Q4: Will training diffusion models with PVD paradigm achieve better results than DiffComplete?** A4: DiffComplete attains much better performance than PVD, as shown in **Tables R1**, **R2**, and **R3** within *the response to all reviewers* with detailed analysis. 
In addition, our method can flexibly support multiple conditional inputs, while this may not be feasible for the PVD pipeline. **Q5: Minor writing problems.** A5: Thanks for pointing out these wording issues and typos. We will carefully polish and revise the paper. **(i)** Though AutoSDF employs an autoencoder, it should be classified as an autoregressive model. We'll rectify this. **(ii)** Our goal is to illustrate that certain 3D diffusion models (Diffusion-SDF [3] and DiffRF [4]) do not employ incomplete-to-complete ground-truth pairs during training. The related sentences (lines 112-117) will be revised as follows: "For conditional shape completion, both DiffRF and Diffusion-SDF adopt a masked diffusion strategy to fill in missing regions cropped out by 3D boxes. However, their training processes do not leverage paired incomplete-to-complete ground truths, which may prevent them from accurately learning the completion rules. Contrarily, our method explicitly uses the scan pairs for conditional training." **(iii)** Thanks for the suggestions. We've already fixed it. --- **References** [1] Zhang, et al. 3DILG: Irregular Latent Grids for 3D Generative Modeling. In NeurIPS 2022. [2] Hu, et al. TriVol: Point Cloud Rendering via Triple Volumes. In CVPR 2023. [3] Li, et al. Diffusion-SDF: Text-to-Shape via Voxelized Diffusion. In CVPR 2023. [4] Muller, et al. DiffRF: Rendering-Guided 3D Radiance Field Diffusion. In CVPR 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' clarification and the additional experiments. In general, the rebuttal addresses most of my concerns. For Q2&A2, after seeing the authors' explanation, I agree that adapting ControlNet for the 3D completion problem requires non-trivial effort. For Q1&A1, I appreciate the authors' effort in the analysis and the extra quantitative comparisons. It would be better to show the comparisons visually, especially for ConvONet.
In conclusion, I will raise my rating and am looking forward to seeing these updates incorporated in the revised paper. --- Reply to Comment 1.1.1: Title: Author Response to Reviewer Ct9H Comment: Dear Reviewer Ct9H, Thank you for acknowledging our methodology and considering a score increase. In response to your suggestion, we have made additional visual comparisons with ConvONet, ShapeFormer, and PVD. The qualitative results showcase our method's superior completion quality for both known and unseen object categories. Due to the conference guidelines that prohibit external links in the rebuttal box, the visualizations cannot be displayed here. Yet, we've shared an anonymous link to our figures in the "Official Comment" section at the top of the review page for the AC's reference. We will incorporate these visual results into Figures 3 and 4 of the revised paper and release our code to facilitate reproducibility. We hope this reply addresses your remaining concerns and we look forward to your final decision.
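A3 in the rebuttal above mentions gradient accumulation as a way to scale training to higher resolutions. A minimal numerical sketch (a toy least-squares model, not the authors' training code) shows why it works: averaging the gradients of equal-sized chunks reproduces the full-batch gradient, so memory per step shrinks without changing the update:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_fn(w, batch):
    # gradient of the mean least-squares loss ||batch @ w - 1||^2 / n (toy model)
    return 2 * batch.T @ (batch @ w - 1.0) / len(batch)

def accumulated_grad(w, big_batch, n_chunks):
    # split one large batch into equal chunks that fit in memory,
    # compute each chunk's gradient, and average before the single update
    chunks = np.array_split(big_batch, n_chunks)
    return np.mean([grad_fn(w, c) for c in chunks], axis=0)

w = np.zeros(8)
batch = rng.normal(size=(64, 8))
g_full = grad_fn(w, batch)
g_acc = accumulated_grad(w, batch, n_chunks=4)
print(np.allclose(g_full, g_acc))  # accumulated gradient matches the full-batch one
```

In a deep-learning framework the same pattern is usually expressed by calling backward on each chunk and stepping the optimizer only once every `n_chunks` chunks; the arithmetic is identical when the chunks are equal-sized.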
Spontaneous symmetry breaking in generative diffusion models
Accept (poster)
Summary: The paper explores the generative dynamics of diffusion models. It proposes a spontaneous-symmetry-breaking perspective and a Gaussian late initialization scheme, achieving better fidelity and diversity in the generated images. Strengths: [S1] The paper supports its claims both theoretically and empirically. [S2] The paper convincingly shows that the proposed approach improves diversity through comparisons. Weaknesses: [W1] They do not discuss the limitations and drawbacks of their model. [W2] Other generative models, such as VAEs, GANs, regression models, etc., are not discussed. At least, it is expected that they be mentioned in the related work section, as the authors claim that the proposed approach has a broader impact on generative models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: [Q1] In Table 1, why are the dataset and denoising step (n) repeated? You could put them in the first column and it might be enough. So, instead of making subtables, can you make them a single table? [Q2] In Table 1 and Figure 7, why didn't you include other generative models such as GANs in the comparison? [Q3] Is Table 1 an ablation study to show the effect of the GLS? If so, where is the fidelity comparison of the proposed approach? [Q4] In the discussion of Table 1, you do not discuss the results so as to indicate the spontaneous symmetry breaking. What is the reason for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: [L1] It is only applicable to diffusion models. [L2] The paper ignores the other types of generative models in both the related work and the experiments. It is required to compare the proposed approach with existing generative models. [L3] The paper does not discuss the drawbacks of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback and are happy to hear that our work supports our claims both theoretically and empirically. In the following, we will address the main issues raised by the reviewer. ### Point 1: Discussion of other generative models Q: *“Other generative models, such as VAEs, GANs, regression models, etc., are not discussed. At least, it is expected that they be mentioned in the related work section, as the authors claim that the proposed approach has a broader impact on generative models.”* A: In the revised version, we will add a sentence discussing other generative models in the introduction and a more extensive discussion about how to generalize these insights to other models in the discussion section. We indeed focused our attention on generative diffusion models. This is due to the fact that these models are radically different from other generative approaches such as VAEs and GANs, and it is not straightforward to extend these insights to those models. However, given the current importance of diffusion models in the generative modeling literature, we believe that the insights developed by carefully analyzing these models can have a broad impact on generative modeling. ### Point 2: Experimental comparison with other generative models Q: *“In Table 1 and Figure 7, why didn't you include other generative models such as GANs in the comparison?”* A: Our aim is 1) to demonstrate that spontaneous symmetry breaking phenomena are ubiquitous in the dynamics of generative diffusion models and 2) to show that this insight can lead to faster, higher-quality, and more diverse sampling in diffusion models. Given these aims, it is unclear what we would learn by including other non-diffusion baselines. The relative performance of these models compared with diffusion models has already been discussed in the literature [1, 2] and is orthogonal to our claims.
[1] Diffusion Models Beat GANs on Image Synthesis, https://arxiv.org/abs/2105.05233 [2] Improved Denoising Diffusion Probabilistic Models, https://arxiv.org/abs/2102.09672 ### Point 3: Table 1 Q: *“In the discussion of Table 1, you do not discuss the results so as to indicate the spontaneous symmetry breaking. What is the reason for this?”* A: Unfortunately, we are not sure we have understood the question. We show the existence of the symmetry breaking in the FID curves in Fig. 4 and in the splitting of the potential obtained from the model in Fig. 1 (right side) and in Figs. 13-14 of the Supplementary Material. Table 1 reports the result of the late initialization experiment. As we explain in the paper, the existence of the symmetry breaking predicts that late initialization will produce more efficient fast sampling, which is confirmed in Table 1. We hope this clarifies it. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the rebuttal. I think it is good to enlarge the discussion by adding other (types of) generative models to present a complete picture and allow others to have better insight into how the idea can be generalized. Besides, I carefully read the rebuttal and the other reviews and rebuttals and reconsidered my rating as borderline accept.
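The late-initialization experiment discussed in the rebuttal (Table 1 of the paper) can be sketched on a 1D toy dataset: fit a Gaussian to the forward marginal at the critical time and start the (shorter) reverse process from it instead of from N(0, 1) at t = 1. The noise schedule, critical time, and cluster placement below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy bimodal data: two clusters, one per "symmetric" choice
data = np.concatenate([rng.normal(-2, 0.1, 500), rng.normal(2, 0.1, 500)])

def forward_marginal(t):
    # VP-style forward process: x_t = sqrt(a_t) x_0 + sqrt(1 - a_t) eps
    a = np.exp(-4 * t)  # toy schedule (assumption)
    return np.sqrt(a) * data + np.sqrt(1 - a) * rng.normal(size=data.size)

# Gaussian late initialization (sketch): fit a Gaussian to the forward
# marginal at an assumed critical time t_c, then draw the starting points
# for the remaining reverse steps from that Gaussian.
t_c = 0.6
x_tc = forward_marginal(t_c)
mu, sigma = x_tc.mean(), x_tc.std()
late_init = rng.normal(mu, sigma, size=1000)  # states to denoise from t_c down to 0
print(late_init.shape)  # (1000,)
```

The premise being tested is that before the symmetry breaks, the marginal is close enough to Gaussian that skipping the early reverse steps loses little, which is what the FID numbers in Table 1 are reported to confirm.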
Summary: The authors approach the reverse diffusion process in score-based generative modeling from the perspective of spontaneous symmetry breaking, which may explain why generation is minimally affected by "late initializations" (e.g. initializing x_t at t=T, but simulating the reverse process starting at some t < T). Motivated by this framework, the authors propose a late-start scheme and show improved performance compared to some baselines. Strengths: **Exciting direction of study.** The authors propose an exciting perspective on the diffusion model in terms of spontaneous symmetry breaking in physical systems. **Initially promising empirical results.** The empirical results appear to be promising. (However, they do not presently provide much in the way of concrete proof that the symmetry breaking framework provides tangible improvements on generation speed or quality, see "Weaknesses"). Weaknesses: **Motivation.** The motivation of the work should be made more clear. If it is to improve generation speed, the model should be compared against fast generation models, such as Model Distillation, Consistency Models, and EDM. If it is to generate better samples, then the model should be compared against standard diffusion models such as [4] or [5]. **Empirical Results.** The empirical results are not very convincing. All models achieve FIDs that are far from state-of-the-art. It is not clear at this stage whether improvements come directly from the proposed approach, and whether the improvements will remain when models are tuned to be more competitive with respect to the state of the art. **Claims.** The claim (up to 3x FID improvements) feels overstated. The authors do not compare against state-of-the-art fast samplers, such as Model Distillation [1], Consistency Models [2], and EDM [3]. **Soundness.** I have some concerns over the soundness of the claims in the paper. - The existence of stable points is only shown for toy examples (e.g. Secs. 3.1 and 3.2).
Even in these toy examples, it is not clear to me how the stable points will affect the result of the reverse diffusion process for most initial conditions of the reverse process. Showing the existence of stable points at the origin and the original data points implies only the existence of a finite set of stable points, which is essentially a probability-zero subset of the entire space. Diffusion models are initialized from a standard normal, so x_T can be very far from the origin (or any of the stable points). So is it true that the spontaneous symmetry breaking model is relevant here, even in theory? - While the second toy example is more realistic (Sec 3.2), and the authors claim that the assumptions required are mild, I do not believe they hold for any usual image distributions (e.g. MNIST, CIFAR10, ImageNet, CelebFaces, etc.). - Finally, the authors connect the theory of symmetry breaking to diffusion modeling with a single piece of empirical evidence: The FID of samples drawn with late start diffusion is relatively stable up to a critical point in time t_c. I can see how this is a necessary condition to show the existence of symmetry breaking. But how is it sufficient? In summary, while the theory of spontaneous symmetry breaking is very interesting and thought-provoking, I am not yet convinced that it is presently a useful perspective in diffusion-based generative modeling. **Minor edits.** 39: the theory weak and electromagnetic -> the theory of weak and electromagnetic 108: Hessian matrices (of what?) [1] Progressive Distillation for Fast Sampling of Diffusion Models. https://arxiv.org/abs/2202.00512 [2] Consistency Models. https://arxiv.org/abs/2303.01469 [3] Elucidating the Design Space of Diffusion-Based Generative Models. https://arxiv.org/abs/2206.00364 [4] Denoising Diffusion Probabilistic Models.
https://arxiv.org/abs/2006.11239 [5] Score-Based Generative Modeling through Stochastic Differential Equations.https://arxiv.org/abs/2011.13456 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See "Weaknesses" section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See "Weaknesses" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful to the reviewer for the comprehensive and insightful feedback. In what follows, we'll address the main concerns and questions. ### Point 1: Motivation Q: *“The motivation of the work should be made more clear. If it is to improve generation speed, the model should be compared against fast generation models, such as Model Distillation, Consistency Models, and EDM. If it is to generate better samples, then the model should be compared against standard diffusion models such as [4] or [5].”* A: We acknowledge that we were not entirely clear in the original manuscript and we will change the text accordingly. Our primary goal is to demonstrate that spontaneous symmetry breaking is a near-universal consequence of the dynamics of generative diffusion models. Our result establishes a direct connection between a dominant class of generative models in machine learning and a large body of theory and numerical methodologies developed in physics, which can be leveraged to improve our understanding and mastery of generative AI. As we argue in the general reply to all reviewers, the goal of our empirical study on fast samplers is to show how this theoretical understanding can be leveraged in practice and it can indeed lead to rather impressive results. However, we believe that the level of optimization needed to optimize SOTA methods goes beyond the scientific scope of this paper. ### Point 2: Comparison with distillation models Q: *“the model should be compared against fast generation models, such as Model Distillation, Consistency Models, and EDM”* A: Distillation models aim at skipping the generative dynamics in order to obtain one or few step sampling, often using a re-trained model. Since our aim is to study the generative dynamics, we decided to not focus on this class of methods. 
That said, it is worth noting that distillation models still rely on the DDIM sampler in their training process, a sampler that we have employed and improved upon in our paper. ### Point 3: Relevance of the fixed-points Q: *“The existence of stable points is only shown for toy examples (e.g. Secs. 3.1 and 3.2). Even in these toy examples, it is not clear to me how the stable points will affect the result of the reverse diffusion process for most initial conditions of the reverse process. Showing the existence of stable points at the origin and the original data points implies only the existence of a finite set of stable points, which is essentially a probability-zero subset of the entire space. Diffusion models are initialized from a standard normal, so x_T can be very far from the origin (or any of the stable points). So is it true that the spontaneous symmetry breaking model is relevant here, even in theory?”* A: This is a very insightful and subtle point. Indeed, the reviewer is entirely correct in pointing out that, at least in the early stage, the fixed-point in itself does not seem to be very relevant. What actually matters is the shape of the potential around the fixed-point. The main reason to study the bifurcation in the fixed-points is that it implies a dramatic change of shape in the potential, which splits into multiple wells, each representing a subset of the original symmetry group. This change of shape is the hallmark of a spontaneous symmetry breaking, as visualized in Figure 2 in the attached pdf. As a secondary point, note that for $T-t < T-t_c$ the fixed-points are not always isolated. In fact, if the data spans a d-dimensional manifold with uniform probability, for $T-t$ tending to $0$, each point in the manifold becomes a fixed-point. Finally, in a discrete dataset, for $T-t$ tending to zero, the fixed-points will generally acquire a non-zero measure since the magnitude of the score pointing at them tends to infinity.
### Point 4: Assumptions in discrete datasets Q: *“While the second toy example is more realistic (Sec 3.2), and the authors claim that the assumptions required are mild, I do not believe they hold for any usual image distributions (e.g. MNIST, CIFAR10, ImageNet, CelebFaces, etc.).”* A: We would like to point out that we did not claim that these assumptions are met in real datasets, but instead that it is straightforward to induce them (at least approximately) by normalization. For example, subtracting the mean (centering) is commonly done as a preparatory step in many algorithms without loss of generality, since the mean can be added back after generating the centered data. In general, the centering assumption is only needed to make sure that the fixed-point is the origin and to make the analysis simpler. Concerning the constraint on the Euclidean norm, it is well known that the Euclidean norm often concentrates around a single value in high dimensions, meaning that the data approximately ‘live’ in a hyper-spherical annulus. Generally speaking, while the constraint is not exactly met, we think that it is relevant for understanding high-dimensional data. ### Point 5: Empirical evidence Q: *“Finally, the authors connect the theory of symmetry breaking to diffusion modeling with a single piece of empirical evidence: The FID of samples drawn with late start diffusion is relatively stable up to a critical point in time t_c. I can see how this is a necessary condition to show the existence of symmetry breaking. But how is it sufficient?”* A: The late start FID curves are not the only empirical result in the paper. Using the trained network, we also directly evaluate the potential on arcs connecting two trajectories and we show that the potential has a single minimum initially and then splits into two separate minima. This can be seen in Figure 1 and in the Supplementary Material in Figures 13-15.
These changes in the shape of the potential provide very direct empirical evidence of the symmetry breaking phenomenon in trained models.
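The single-well-to-double-well split described in this rebuttal can be reproduced numerically in a two-Dirac toy setting like the paper's Sec. 3.1. The sketch below evaluates the exact Gaussian-smoothed potential $-\log p_t$ of data at $\pm 1$ (an illustrative stand-in, not the authors' trained model) and counts its wells at two noise levels:

```python
import numpy as np

def neg_log_p(x, s):
    # unnormalized smoothed density of a two-Dirac dataset at +/-1 with
    # Gaussian noise of std s; the constant factor does not move the minima
    p = 0.5 * (np.exp(-(x - 1) ** 2 / (2 * s ** 2)) +
               np.exp(-(x + 1) ** 2 / (2 * s ** 2)))
    return -np.log(p)

def num_minima(s, grid=np.linspace(-3, 3, 2001)):
    # count strict interior local minima of the potential -log p_t on a grid
    u = neg_log_p(grid, s)
    interior = u[1:-1]
    return int(np.sum((interior < u[:-2]) & (interior < u[2:])))

# High noise (early reverse time): one well at the origin.
# Low noise (past the critical time): the well splits, one per data point.
print(num_minima(s=2.0), num_minima(s=0.3))  # 1 2
```

For this symmetric two-point mixture the split happens as the noise scale drops below the half-distance between the data points, which is the discrete analogue of the critical time $t_c$ discussed above.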
Summary: The authors investigate theoretically and empirically when (in terms of diffusion timestep) certain symmetries in a diffusion process are broken, corresponding to when "choices" are made about which qualitative features a generated data point should have. Their findings suggest that there is no symmetry breaking occurring early in the process. They follow this up by showing that the early dynamics can be replaced by a sample from a multivariate Gaussian without meaningful degradation of the perceptual quality of generated data. Strengths: - This perspective on the diffusion process is novel as far as I know. - Understanding when in the diffusion process different symmetries are broken is likely to be very helpful for future work designing better diffusion-based models. - Their technique for better fast sampling yields surprisingly large improvements. Although they acknowledge that it does not immediately scale to high-dimensional data, it provides good support for their analysis. Weaknesses: - There is a fairly large gap between the simple example in Section 3.1, where we could reason analytically about when the symmetry is broken, and the realistic case in Section 3.2, where the only symmetries considered were the identity of each data point. Do the authors think it would be possible or meaningful to do this type of analysis on other features that could be extracted from the data (e.g. the identity of a person in an image)?
- The proposed Gaussian initialization improves fast samplers in terms of FID and the authors show is helpful in terms of maintaining the identity of discrete variables, but it is important to note that replacing the early diffusion process with a Gaussian may still lead to large changes in the distribution. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses (mainly the first one) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Next, we will address the highlighted concerns and questions. ### Point 1: Gap between theory and experiments Q: *“There is a fairly large gap between the simple example in Section 3.1, where we could reason analytically about when the symmetry is broken, to the realistic case in Section 3.2, where the only symmetries considered were the identity of each data point. Do the authors think it would be possible or meaningful to do this type of analysis on other features that could be extracted from the data (e.g. the identity of a person in an image)?”* A: There is indeed a substantial gap between what we can show theoretically in analytically tractable models with simple symmetry groups and the actual (often approximate) symmetries in real-world datasets. We definitely agree that a substantial amount of work can be done in characterizing the symmetry breaking for more complex symmetry groups (e.g. translational and rotational symmetries) and also, as the reviewer suggested, on symmetry transformation learned directly from a dataset. However, these are significant and complex endeavors that go beyond the scope of this first paper. While we understand the perspective that this might be seen as a weakness, in fact, we do believe this is a significant strength of our paper, as it opens several fascinating research directions. ### Point 2: Discrete and continuous symmetries Q: *“This analysis of symmetry breaking makes sense for "discrete" variables, like cluster index. It does not significantly help to provide insights about when values of "continuous variables" like e.g. background colour of an image are selected.”* A: In the main text, we focused on discrete symmetry groups as they show the phenomenon in a clear and simple way. However, our symmetry breaking framework can deal with both continuous and discrete symmetry groups. 
In fact, we discussed the case of the continuous $SO(N)$ symmetry breaking in the Supplementary Material in section A.3. In order to make this point clear, we will move this analysis to the main text. We will also include a new paragraph discussing this point explicitly. ### Point 3: Deviation from Gaussianity Q: *"The proposed Gaussian initialization improves fast samplers in terms of FID, and the authors show it is helpful for maintaining the identity of discrete variables, but it is important to note that replacing the early diffusion process with a Gaussian may still lead to large changes in the distribution".* A: While it is true that the distribution will not be exactly Gaussian, our theoretical and, more importantly, experimental results show that the deviations are minor during the first phase. To further address this point, we performed Gaussianity tests on forward samples as a function of time. Due to Anderson's theorem, both forward and generated samples have the same marginal statistics. The results of these tests across several datasets are graphically illustrated in Figure 1 of the attached PDF. --- Rebuttal Comment 1.1: Comment: Thanks for the response, which addressed my concerns. I will keep my rating of 6.
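The Gaussianity testing described in Point 3 above (Shapiro-Wilk-style tests on forward samples as a function of time) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the schedule `a = exp(-t)` and the two-Dirac toy data are our own assumptions.

```python
import numpy as np
from scipy.stats import shapiro

def forward_sample(x0, t, rng):
    # VP-style forward marginal: x_t = sqrt(a) * x0 + sqrt(1 - a) * noise,
    # with an illustrative schedule a = exp(-t) (an assumption, not the paper's).
    a = np.exp(-t)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = rng.choice([-1.0, 1.0], size=2000)  # toy dataset: two Dirac deltas at +/-1

# Early in the forward process the bimodal structure is still visible,
# so a Shapiro-Wilk test strongly rejects Gaussianity ...
_, p_early = shapiro(forward_sample(x0, 0.01, rng))
# ... while late in the process the marginal is close to N(0, 1).
_, p_late = shapiro(forward_sample(x0, 5.0, rng))
```

Sweeping `t` over a grid and plotting the p-values locates the time beyond which the Gaussian approximation stops being tenable, which is the quantity of interest for a Gaussian late start.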
Summary: The authors of this work propose a new method to accelerate the sampling of diffusion models. First, the authors define fixed points of the reverse process as points where the drift function is 0. Then, the authors claim that if two paths of stable fixed points intersect, then it is the noise that can make the system jump from one path to the other. The authors argue that there are phase transitions happening in the sampling of diffusion models: until a certain time, the stable paths intersect and hence the system jumps between potential paths. After a point in the reverse diffusion, there are no more intersections in the paths. Even if we don't initialize exactly on the path, the stability ensures that the differential equations are mean-reverting and will correct any introduced errors. Hence, the authors propose to start the sampling from a point in the middle of the diffusion and accelerate the sampling. Strengths: The introduced framework is very interesting and novel. The example with the two Dirac functions is very pedagogical and its generalization to a uniform distribution over many discrete points follows naturally. The experimental evidence supports that this phenomenon is indeed happening in trained diffusion models. The proposed method surpasses the baselines for a low number of function evaluations. The paper is well-written and the approach can be implemented easily. I expect that the paper will be of great interest to the research community and the audience of NeurIPS. Weaknesses: The theoretical analysis has some simplifying assumptions that it is not clear to what extent they alter what happens in practice. Specifically, the authors analyze uniform distributions over Dirac functions. The distributions observed in practice have a more complicated structure and it is not clear how we can argue anything about the fixed points of a model given access to its (learned) score-function. 
A weakness of the approach is that to start the sampling from some intermediate diffusion time, one needs to estimate the parameters of a multivariate Gaussian. This can be a particularly challenging task and it doesn't scale well for high dimensions because of the quadratic parameters needed for the covariance. The authors acknowledge this limitation. Experimentally, the authors mostly consider what happens in the low NFEs regime. More detailed comparisons would be useful. If the method performs poorly (relative to the baselines) for higher NFEs, it is important to acknowledge it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * It seems to me that the method can be used *together* with other approaches for accelerating sampling (since it is only changing the initial point). It would strengthen the paper if the authors included experiments to show this. * Can the authors comment on whether we can find fixed points given a trained diffusion model? * I am puzzled by what happens to the story that this paper builds once we think about the deterministic samplers. In such settings, stable points would correspond to no movement at all. If the system becomes mean-reverting at some point, wouldn't that mean that we stop moving once we reach a fixed point? * It would strengthen the paper to include comparisons with EDM and other more recent sampling methods. * It would also help to include results for higher NFEs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer for the detailed and insightful review. In what follows, we will address the main issues and questions. ### Point 1: Analytically tractable models Q: *“The theoretical analysis has some simplifying assumptions that it is not clear to what extent they alter what happens in practice. Specifically, the authors analyze uniform distributions over Dirac functions.”* A: The Z2 symmetry breaking in the one-dimensional model was chosen for its pedagogical value and the fact that it highlights all the main conceptual points. However, this form of binary symmetry breaking is relevant in many situations, for example, if the data spans two topologically separated manifolds. Of course, in real models the Z2 symmetry will be a small subgroup of the overall symmetry group. ### Point 2: Studying the fixed points in trained models Q: *"The distributions observed in practice have a more complicated structure and it is not clear how we can argue anything about the fixed points of a model given access to its (learned) score-function.” “Can the authors comment on whether we can find fixed points given a trained diffusion model?”* A: In theory, having access to a fully learned score function allows us to analyze its fixed points, which can be found by gradient descent. Specifically, this could be used to find the original fixed point, to determine its stability by evaluating the Hessian, and consequently to detect the onset of a symmetry breaking in a real-world model. Unfortunately, however, this analysis is not going to be reliable in actual trained models, since the fixed point itself is outside the training range and the shape of the potential in that region is not properly trained. This is due to the, perhaps paradoxical, fact that the fixed points have a vanishingly low probability of being visited, as the samples concentrate in a fixed-variance annulus around them (see reply to Reviewer FFjV, Point 3). 
However, what ultimately matters is the shape of the potential, which we have studied directly in trained models along variance-preserving arcs connecting pairs of data points, where we found the expected change of shape (see Figure 1 and, in the Supplementary Material, Figures 13-15). ### Point 3: Scalability of the Gaussian late initialization Q: *"A weakness of the approach is that to start the sampling from some intermediate diffusion time, one needs to estimate the parameters of a multivariate Gaussian. This can be a particularly challenging task and it doesn't scale well for high dimensions because of the quadratic parameters needed for the covariance. The authors acknowledge this limitation."* A: Indeed, the GLS as used in the paper is meant as a proof of principle to showcase the importance of an appropriate initialization in order to preserve diversity. That said, it is rather straightforward to make the approach scalable, either by PCA, Fourier analysis or, even better, by replacing the Gaussian initialization with another more efficient model such as a VAE. ### Point 4: Performance for a large number of samples Q: *“Experimentally, the authors mostly consider what happens in the low NFEs regime. More detailed comparisons would be useful. If the method performs poorly (relative to the baselines) for higher NFEs, it is important to acknowledge it.”* *“It would also help to include results for higher NFEs.”* A: Certainly, it is important to present results at higher NFEs. Thank you for pointing that out. In light of your feedback, we have incorporated extra experiments for 20, 50, and 100 denoising steps. You can view these results in Table 1 on the attached PDF. Notably, we observe that the behavior obtained at low NFEs persists at higher NFEs. Consequently, our GLS scheme enhances performance over fast samplers, like DDIM and PNDM, as well as the standard DDPM. 
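The Gaussian late start discussed in Point 3 (diffuse the data to an intermediate time, fit a full-covariance Gaussian, sample initial states from it) can be sketched as below. This is an illustrative sketch, not the authors' implementation; the schedule `a = exp(-t)` and the function name are our own assumptions.

```python
import numpy as np

def gaussian_late_start(data, t_init, rng, n_samples):
    # Forward-diffuse the data to time t_init (illustrative VP schedule
    # a = exp(-t); an assumption, not the paper's exact schedule).
    a = np.exp(-t_init)
    xt = np.sqrt(a) * data + np.sqrt(1.0 - a) * rng.standard_normal(data.shape)
    # Fit the full-covariance Gaussian: O(d^2) parameters, which is the
    # scalability bottleneck acknowledged above (PCA or a VAE could replace it).
    mu = xt.mean(axis=0)
    cov = np.cov(xt, rowvar=False)
    # Draw initial states for the (shortened) reverse process.
    return rng.multivariate_normal(mu, cov, size=n_samples)
```

Starting the reverse sampler from these draws, instead of from the standard normal prior, is what allows the first (inert) phase of the diffusion to be skipped.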
### Point 5: Deterministic samplers Q: *“I am puzzled by what happens to the story that this paper builds once we think about the deterministic samplers. In such settings, stable points would correspond to no movement at all. If the system becomes mean-reverting at some point, wouldn't that mean that we stop moving once we reach a fixed point?”* A: This is a very interesting point. Since the deterministic samplers are designed to exactly track the marginals of the stochastic sampler, a change in the stochastic dynamics will also appear in the deterministic case. However, the symmetry breaking looks very different from the point of view of the deterministic samplers. Specifically, in the deterministic ODEs, the correction term makes the drift vanish prior to the symmetry breaking. Therefore, in a deterministic sampler the first phase is ‘irrelevant’ not because of the mean-reverting dynamics, but just because there is no significant temporal evolution prior to that point. Therefore, with good approximation every point is a fixed-point in the first phase of the deterministic ODE. This can be clearly seen in Fig. 7(k) and Fig 16 (c,f,i,l,o) in the Supplementary Material. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the rebuttal. I will keep my score which recommends acceptance. Please incorporate the rebuttal discussion in the main paper, if possible.
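Both claims in the rebuttal above, that fixed points of the reverse drift can be located numerically (Point 2) and that the deterministic sampler's drift nearly vanishes before the symmetry breaking (Point 5), can be checked in the one-dimensional two-Dirac model. The closed-form score below follows from the Gaussian-mixture forward marginal; the specific `a` values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def score(x, a):
    # Exact score of the forward marginal for data at +/-1 under a VP
    # process: a mixture of N(+sqrt(a), 1 - a) and N(-sqrt(a), 1 - a).
    m, s2 = np.sqrt(a), 1.0 - a
    return (m * np.tanh(m * x / s2) - x) / s2

def reverse_drift(x, a):
    # Reversed-time drift of the stochastic sampler (up to a time rescaling).
    return 0.5 * x + score(x, a)

# After the symmetry breaking (here a = 0.9 > 1/2): x = 0 is unstable and a
# stable fixed point sits near the data point at +1; locate it by root-finding.
x_star = brentq(lambda x: reverse_drift(x, 0.9), 1e-3, 2.0)

# Probability-flow (deterministic) reversed drift: x/2 + score/2. Before the
# symmetry breaking it nearly vanishes, so the first phase is essentially inert.
pf_drift = 0.5 * 0.5 + 0.5 * score(0.5, 0.01)
```

Before the symmetry breaking (a < 1/2) the stochastic reverse drift is negative for all x > 0, i.e. everything is pulled back toward the single fixed point at the origin, while the probability-flow drift is already close to zero.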
Rebuttal 1: Rebuttal: Dear reviewers, We wish to thank you for the detailed analysis of our paper and for all the encouraging and constructive comments. We are extremely grateful for your work, as it will make our manuscript substantially better. Several important questions were raised, and we provided extensive answers under the individual reviews. In particular, we would like to point to the discussion of the importance of the fixed points in the reply to reviewer FFjV (point 3) and the analysis of the deterministic samplers in the reply to reviewer J7iR (point 5). Upon acceptance, we will incorporate these discussions in the updated manuscript. In what follows, we will outline our conclusions from the reviews, addressing both the strengths and weaknesses of our work as pointed out by the reviewers. We have also included a summary of all the additional analysis that we conducted during the rebuttal period in response to questions and observations raised by the reviewers. ### **Main strengths** We are happy to see that all reviewers agreed that our main claim is well supported by strong theoretical and empirical evidence. This is corroborated by the fact that two reviewers (sN4H and kBdj) marked the soundness of the paper as excellent, a distinction that makes us proud, given the ambitious nature of our claims. We are also pleased to see that a majority of reviewers agree that our work is novel and important, as expressed, for example, by the following quotes from reviewers J7iR and kBdj: > *“I expect that the paper will be of great interest to the research community and the audience of NeurIPS.”* (reviewer J7iR) , > *“Understanding when in the diffusion process different symmetries are broken is likely to be very helpful for future work designing better diffusion-based models.”* (reviewer kBdj) , and even by the more critical reviewer FFjV: > *“the theory of spontaneous symmetry breaking is very interesting and thought-provoking”* (reviewer FFjV) . 
### **Main weaknesses** The main concern, most prominently expressed by reviewer FFjV, is that the improvements in sample quality and diversity due to the Gaussian late initialization scheme might not generalize to state-of-the-art (SOTA) fast-sampling approaches such as progressive distillation and consistency models. While we acknowledge that this is a fair criticism, we would like to emphasize that the primary motivation of this paper is not to propose a SOTA fast sampler. The primary goal of the paper is instead to demonstrate that the generative dynamics of diffusion models can be understood as spontaneous symmetry breaking, a phenomenon that is ubiquitous in physical systems. This is in itself a very ambitious endeavor, as it establishes a direct connection between generative AI and some of the deepest aspects of fundamental physics (e.g. the standard model and the statistical mechanics of critical phenomena), which might in turn lead to substantial methodological and theoretical developments in both fields. The purpose of our sampling experiments is indeed to demonstrate that understanding this phenomenon can directly lead to algorithmic improvements, thereby providing evidence for its usefulness. However, reaching the boundaries of the state of the art requires a level of optimization that we believe goes beyond the scope of our paper. In this sense, our sampling experiments should be interpreted as a proof of principle and a starting point for future developments. ### **Summary of new experimental results** Several questions and observations raised by the reviewers motivated us to run new experiments during this rebuttal period, which will be included in the revised version of the manuscript. In the accompanying PDF, we provide expanded analyses addressing our reviewers' questions. - Reviewer kBdj's concern about the Gaussian nature of the approximate distribution is addressed in Figure 1, which provides a Gaussianity analysis via the Shapiro-Wilk test. 
The figures corroborate the validity of a Gaussian distribution up until the symmetry breaking’s critical point, beyond which the distribution rapidly ceases to be Gaussian. - In answer to the comments made by reviewer J7iR, Table 1 expands on the robustness of our improvements in the context of higher numbers of function evaluations (NFEs). Here, we present results for 20, 50, and 100 denoising steps for datasets including MNIST, CIFAR10, and CelebA64. These results confirm that our improvements persist in various NFE regimes. - Lastly, to address reviewer FFjV's concern about the importance of the fixed points, Figure 2 provides a visualization of the bifurcation of the fixed points (estimated numerically) in the one-dimensional model, together with a visualization of the vector field provided by the score function. The figure shows how the bifurcation is associated with a dramatic shift in the vector field, which determines the dynamics of the system away from the fixed points themselves. Pdf: /pdf/891b2a29d3596f631ba802063c751bdc066ac902.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors examine denoising diffusion models and their dynamics through the lens of spontaneous symmetry breaking. They characterize the dynamics of the denoising process into two stages: in the beginning, samples orbit a central fixed point, and after a critical amount of time the denoising of a ‘selected’ sample begins. They use insights gained from theory as well as practical examples to develop a novel fast sampling method they call “Gaussian late start” (GLS). Strengths: This was a very well written paper. The authors motivate the issue very well, and they additionally elucidate the concept of spontaneous symmetry breaking in a way that is easy to understand. One of the biggest strengths was the foundational example laid out in Sec. 3.1 — this example helped fill in a lot of gaps I had up until that point, and made me really appreciate the authors’ characterization of the denoising dynamics into two separate stages. The experiment of section 4.1, followed by its extension in section 4.2, does appear to give credence to thinking of the denoising dynamics as having a spontaneous symmetry breaking characteristic. This also leads in nicely to the Gaussian late start sampling method, which is intuitive, practical, and appears to work well. Overall, the paper presents a compelling idea, motivates it through practical toy examples as well as theory, and then provides a novel, efficient way of sampling. Weaknesses: It seems like this is not a problem constrained solely to diffusion models used for image generation, but for any data modality. In that case, it might be good to empirically demonstrate this for another data modality. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: This may be a silly question, but I was wondering if the authors thought if the datasets clustered and the sampler was initialized using the statistics of each cluster — do you expect the samples drawn to be similar to the cluster whose statistics were used to initialize the sampler? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: As noted, GLS as-is is slightly impractical, but there is room for future research to develop fast approximate methods in the same vein. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wish to thank the reviewer for the positive and insightful review. We are happy to hear that you found the one-dimensional example insightful, as it certainly shaped our understanding of this fascinating phenomenon. To further solidify the intuition, in the updated manuscript we will also add a figure visualizing the fixed points and showing how the score changes after the bifurcation (see Figure 2 in the attached pdf). In the following, we will address some of your questions. ### Point 1: Other modalities Q: *“It seems like this is not a problem constrained solely to diffusion models used for image generation, but for any data modality. In that case, it might be good to empirically demonstrate this for another data modality.”* A: We agree that the phenomenon should not be limited to image datasets. The choice of the experiments mainly reflected our field of expertise. We also agree that it would be useful to see results in other modalities, such as audio and language, and we are planning to pursue these research directions in the future. ### Point 2: Cluster initialization Q: *“This may be a silly question, but I was wondering if the authors thought if the datasets clustered and the sampler was initialized using the statistics of each cluster — do you expect the samples drawn to be similar to the cluster whose statistics were used to initialize the sampler?”* A: Interesting point. Prior to the symmetry breaking, all clusters are merged into a single mode (around a fixed point). Therefore, at least if we are talking about stochastic samplers, the initial value will not correlate with the final cluster assignments. The situation is different for deterministic samplers. In this case, the initialization scheme will likely work. However, we believe that it risks inducing significant overfitting. --- Rebuttal 2: Comment: Thank you to the authors for answering my questions. I maintain my original score which is to accept.
null
null
null
null
null
null
CARE: Modeling Interacting Dynamics Under Temporal Environmental Variation
Accept (poster)
Summary: The paper aims to model out-of-distribution dynamics of environments with interacting entities using a seq-to-seq style graph neural net architecture. The proposed model attempts to capture the invariant aspects of the dynamical environment via a context embedding that follows a neural ordinary differential equation. The paper reports results on a number of physical systems modeling tasks, as well as some theoretical aspects of the proposed approach. Strengths: The paper is very well written. Figure 2 describes the methodology simply and clearly. The paper reports a large set of experiments and shows consistent improvement over a large set of benchmarks, though I think it misses the most essential ones (see my comments below). The paper exercises a systematic scientific writing approach where the key assumptions are pointed out and their most important analytical conclusions, such as model consistency, are analyzed, although with rather straightforward proof techniques. Weaknesses: The proposed method is novel per se. It is also intuitive and well-justified, but it appears to put together a number of existing tools in the most straightforward way one can think of. While I am convinced by the quality of the proposed solution, I am a bit skeptical about its scientific value, i.e. how exactly it enhances our knowledge base. The conceptual novelty of the proposed method over some existing graph-based probabilistic ODE approaches such as IGP-ODE [60] and [Ref1] is not clarified. It is also not obvious to the reader why these methods, which have suitable specs for handling out-of-distribution data due to rigorous uncertainty modeling, should not be among the list of models in comparison. For instance, IGP-ODE also models interaction dynamics, reports disentanglement results with the motivation of out-of-distribution detection, and places the dynamics of context variables at its center. 
[Ref1] Look et al., Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Although the paper motivates the proposed methodology with the primary application of generalization over out-of-distribution cases, I am not able to point out how the proposed experiments test this aspect and according to which metric. The results tables appear to quantify model performance only in in-distribution use cases, i.e. both training and test data are assumed to follow the same dynamics. Is my understanding correct? If not, how exactly is the out-of-distribution data acquired and what is the actual difference between the train and test splits that makes the test observations out-of-distribution? Since the answers to the above questions play a critical role in my evaluation, for now I set my grade to borderline. It is likely to swing based on further interactions with the authors and other reviewers. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Section 6 of the paper discusses the limitations of the proposed approach in sufficient detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper and for your insightful review. We address your comments in the following. > Q1. The proposed method is novel per se. It is also intuitive and well-justified, but it appears to put together a number of existing tools in the most straightforward way one can think of. While I am convinced by the quality of the proposed solution, I am a bit skeptical about its scientific value, i.e. how exactly it enhances our knowledge base. A1. Thanks for your acknowledgement of our novelty. Our method follows directly from the proposed probabilistic model, which may make it appear straightforward. The scientific value of our proposed method is two-fold: - **Real-world Applications.** Our approach is tailored to dynamical systems affected by fluctuating environments. Such scenarios are prevalent in real-world applications. Examples include particle-based systems experiencing variable temperatures or unsteady flows influenced by different Reynolds numbers. By incorporating context variables, we can tackle these complex, real-world dynamical systems. - **Disentangling Representations.** A salient feature of our method is its ability to distinguish between object-specific and context representations. This disentanglement offers deeper insights into the impact of shifting environments on dynamical systems. We believe this perspective can serve as a crucial foundation for subsequent research in the area. > Q2. The conceptual novelty of the proposed method over some existing graph-based probabilistic ODE approaches such as IGP-ODE [60] and [Ref1] is not clarified. It is also not obvious to the reader why these methods, which have suitable specs for handling out-of-distribution data due to rigorous uncertainty modeling, should not be among the list of models in comparison. 
For instance, IGP-ODE also models interaction dynamics, reports disentanglement results with the motivation of out-of-distribution detection, and places the dynamics of context variables at its center. A2. Thanks for your comment. We first compare these two methods with the proposed method. IGP-ODE combines a latent Gaussian process with ODEs, while GDSSM [Ref1] utilizes graph neural networks with a Gaussian mixture model. The differences between these two works and ours lie in three points: - **Different Objectives**. Our method primarily focuses on addressing the intricacies of dynamical systems in the presence of fluctuating environments. In contrast, both IGP-ODE and GDSSM do not encompass this specific challenge. - **Different Methodology**: Our method introduces a context variable to model the dynamics of environments based on a probabilistic model, while these two methods focus on modeling the dynamics of objects using Gaussian processes and a Gaussian mixture model, respectively. - **Different Efficiency**. Our method is tailored to efficiently cater to both particle-based and mesh-based systems, even when these encompass a vast array of nodes. Conversely, models like IGP-ODE and GDSSM concentrate on vehicular and kinematic systems, leveraging intricate modeling techniques through Gaussian processes and the Gaussian mixture model. In our practical experiments, both these methods faced out-of-memory (OOM) challenges in our setup. Specifically, these models encountered OOM issues with datasets exceeding 30 nodes, whereas our datasets consistently feature thousands of nodes. | Number of nodes | IGP-ODE | GDSSM | |-----------------|-----------|----------| | 10 | 23.57GB | 19.65GB | | 20 | 62.54GB | 68.23GB | | 30 | OOM | OOM | > Q3. 
Although the paper motivates the proposed methodology with the primary application of generalization over out-of-distribution cases, I am not able to point out how the proposed experiments test this aspect and according to which metric. The results tables appear to quantify model performance only in in-distribution use cases, i.e. both training and test data are assumed to follow the same dynamics. Is my understanding correct? If not, how exactly is the out-of-distribution data acquired and what is the actual difference between the train and test splits that makes the test observations out-of-distribution? A3. Thanks for your comment. Indeed, in particle-based systems, individual samples often exhibit different initial temperatures and laws of change, leading to distinct dynamics. As a result, the training and test data inherently possess varying dynamic contexts, which in turn introduces different dynamics. Furthermore, within individual samples, the ever-changing temperature introduces temporal distribution shifts, thereby causing the data distribution to evolve over time. It is this temporal variability and context-driven change that we refer to when discussing "out-of-distribution." To ensure greater clarity, we will rephrase our approach as addressing "context-varying dynamics" to better convey the challenges which our method seeks to tackle. In light of these responses, we hope we have addressed your concerns, and hope you will consider raising your score. If there are any additional notable points of concern that we have not yet addressed, please do not hesitate to share them, and we will promptly attend to those points. **Reference** [Ref1] Look et al., Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems. --- Rebuttal Comment 1.1: Title: Concerns addressed Comment: Thanks, my concerns have been addressed, the major one being the conceptual confusion about OOD. 
Also interesting additional result about the memory footprint of the alternative methods. I raise my score to 6 to acknowledge the contribution, not higher to account for the slight incremental nature of it. --- Reply to Comment 1.1.1: Title: Thanks for your feedback and raising the score! Comment: Thanks again for your feedback and increasing the rating! We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts on reviewing our paper, your insightful comments and support.
Summary: The paper proposes a new model architecture for modeling multi-agent dynamics. The model consists of three parts (encoder, dynamics, decoder), where the encoder initializes the latent state of each agent and the context, the dynamics is modeled via NeuralODEs, and the decoder is a standard MLP. New to the paper is that the model explicitly considers a context variable that evolves over time. Strengths: ### New Architecture The explicit modeling of the context variable and its temporal evolution is new to me. I like the idea and the experiments support its benefits. ### Experiments The experiments in the paper are exhaustive and show the advantage of the new method compared to competitors. I also like the ablation study that shows the merits of the individual contributions. Weaknesses: I think the paper slightly oversells what it is doing: ### Temporal distribution shift The paper is called "modeling interacting dynamics under temporal distribution shift" but there is very little about temporal distribution shift in the data. I would have expected that for instance the temperature in Sec. 5.1 is different between training and test data. If this is the case, the authors should state it more explicitly in the experiments. If not, I would suggest to tone it down a bit. ### Probabilistic Modeling I like the new model architecture and it seems principled to me. However, the paper states in the abstract "provide a probabilistic view for out-of-distribution dynamics", while the learnt model is completely deterministic. I would also suggest here to not overstate the contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Interpretability Can you please comment on the interpretability of the context variable? I think it is not and this should also be clearly stated in the manuscript or demonstrated otherwise. 
### Naive solutions I could also come up with a naive solution in which the context variable is modeled as the (N+1)-th object in the graph that is connected to all other variables but does not have any observations. How would that compare to the aforementioned solution? How does a static context variable compare to the dynamic solution? Static context variables have for instance also been considered in [1], which should be cited. ### References [1] Yıldız, Çağatay, Melih Kandemir, and Barbara Rakitsch. "Learning interacting dynamical systems with latent Gaussian process ODEs." Advances in Neural Information Processing Systems 35 (2022): 9188-9200. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments, and your support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concerns and provide additional clarification. > Q1: The paper is called "modeling interacting dynamics under temporal distribution shift" but there is very little about temporal distribution shift in the data. I would have expected that for instance the temperature in Sec. 5.1 is different between training and test data. If this is the case, the authors should state it more explicitly in the experiments. If not, I would suggest toning it down a bit. A1. Thanks for your comment. Indeed, in particle-based systems, individual samples often exhibit different initial temperatures and laws of change, leading to distinct dynamics. As a result, the training and test data inherently possess varying dynamic contexts, which in turn introduce different dynamics. We acknowledge the potential lack of clarity on this aspect in the paper and will tone it down to make it clearer. > Q2: I like the new model architecture and it seems principled to me. However, the paper states in the abstract "provide a probabilistic view for out-of-distribution dynamics", while the learnt model is completely deterministic. I would also suggest not overstating the contribution here. A2. Thanks for your comment. We take your suggestion and will adjust the phrasing to more accurately state "provide a view for context-varying dynamics". We value your feedback and strive for clarity and accuracy in our work. > Q3: Can you please comment on the interpretability of the context variable? I think it is not interpretable, and this should also be clearly stated in the manuscript or demonstrated otherwise. A3. Thanks for your comment. We have visualized the first 10 dimensions of the context variable in Figure A.
From this visualization, we can see that the context variable experiences rapid changes over time. This variation supports our assertion that a dynamic context variable is crucial for adapting to the shifting environments found in dynamical systems. Moreover, the complex dynamics of the context variable indicate that varying environments (e.g., temperature) can have complicated impacts on the dynamical systems. > Q4. I could also come up with a naive solution in which the context variable is modeled as the (N+1)-th object in the graph that is connected to all other variables but does not have any observations. How would that compare to the aforementioned solution? How does a static context variable compare to the dynamic solution? Static context variables have for instance also been considered in [1] which should be cited. A4. Thanks for your comment. Following your suggestion, we have added two model variants below: - CARE-O, which includes the (N+1)-th object in the graph connected to all other variables; - CARE-S, which utilizes a static context variable instead. The compared performance on two datasets is recorded below. From the results, we observe that the full model performs better than CARE-O, which shows that directly combining heterogeneous nodes, i.e., objects and contexts, on a simple graph could overlook critical information, e.g., the gradients of node representations in Eq. 11. Moreover, we find that the model relying on a static context variable, i.e., CARE-S, performs worse since it struggles to capture the dynamic nature of varying environments. We will definitely cite [1] in our revised version.
| Dataset | Lennard-Jones Potential | | | CylinderFlow | | | |----------|:-----------------------:|:----:|:----:|:------------:|:----:|:----:| | Variable | $v_x$ | $v_y$ | $v_z$ | $v_x$ | $v_y$ | $p$ | | CARE-O | 6.69 | 6.80 | 6.73 | 4.27 | 38.4 | 14.2 | | CARE-S | 7.96 | 8.18 | 8.01 | 5.32 | 39.6 | 15.8 | | CARE | 5.75 | 5.91 | 5.82 | 3.95 | 37.8 | 13.9 | Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Hi all, I thank the authors for addressing my questions, and, particularly, for running the additional experiments. I have no further questions. --- Reply to Comment 1.1.1: Title: Thanks again for your feedback! Comment: Thanks again for your feedback! We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts in reviewing our paper, your insightful comments and support.
Summary: The main idea of this work is to propose a novel approach called Context-attended Graph ODE (CARE) for modeling interacting dynamical systems under temporal distribution shift. The paper formalizes the problem of temporal distribution shift in interacting dynamics modeling and proposes a probabilistic framework to understand the relationships between trajectories and contexts. Experimental results demonstrate the exceptional performance of the CARE model, outperforming state-of-the-art methods in accurately predicting long-term trajectories, even in the presence of environmental variations. Strengths: - Novel approach: The CARE model introduced in this work presents a novel methodology by incorporating continuous context variations and system states into a coupled ODE system for modeling interacting dynamical systems under temporal distribution shift. - Superior performance: Extensive experimental results demonstrate that the CARE model outperforms state-of-the-art approaches in accurately predicting long-term trajectories, even in the presence of environmental variations. - Clarity: The paper effectively summarizes its contributions, including problem formalization, novel methodology, and comprehensive experiments. This clarity allows readers to readily grasp the significance of the research. - Extensive experiments: The paper conducts a thorough evaluation of the CARE model by performing experiments on diverse dynamical systems. The results highlight the superiority of the proposed approach compared to state-of-the-art methods. Additionally, the paper includes an ablation study and a parameter sensitivity analysis to further validate the effectiveness of the CARE model. Weaknesses: I don't see any weakness in the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Where is Lemma 4.2? - How to set the threshold mentioned in Line 159? 
- Will dynamic graph updating with a larger $\delta s$ lead to many indirect connections, and how will it affect the results? - How is the performance of the proposed CARE on dynamical systems with more nodes, let's say, 10, 100 and more? - As shown in Table 2, CARE still suffers from accumulated error on longer time series. Do you have any ideas on how to decrease the accumulated error? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments, and your support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your major concerns and provide additional clarification. > Q1: Where is Lemma 4.2? A1: Thanks for the comment. Lemma 4.2 can be found on line 206, and its corresponding proof is detailed in Appendix C. > Q2: How to set the threshold mentioned in Line 159? A2: Thanks for the comment. When constructing the graph structure, we set the threshold at the 30th percentile of all distances. To further validate the robustness of this parameter choice, we experiment with percentiles in {25, 30, 35} on two datasets. The results are shown below, which demonstrate the robustness of our parameter selection. | Percentile | 25 | | | 30 | | | 35 | | | |-------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|------|------| | Lennard-Jones Potential | | | | | | | | | | | Variable | $v_x$ | $v_y$ | $v_z$ | $v_x$ | $v_y$ | $v_z$ | $v_x$ | $v_y$ | $v_z$ | | Ours | 5.98 | 6.07 | 6.05 | 5.75 | 5.91 | 5.82 | 5.93 | 6.06 | 6.02 | | CylinderFlow | | | | | | | | | | | Variable | $v_x$ | $v_y$ | $p$ | $v_x$ | $v_y$ | $p$ | $v_x$ | $v_y$ | $p$ | | Ours | 4.13 | 38.2 | 13.9 | 3.95 | 37.8 | 13.9 | 4.08 | 37.9 | 13.9 | > Q3: Will dynamic graph updating with a larger $\Delta_s$ lead to many indirect connections, and how will it affect the results? A3: Thanks for the comment. Indeed, an excessively large $\Delta_s$ might introduce a number of indirect connections, which in turn could degrade performance. To empirically validate this, we have conducted experiments. As evidenced in Fig. 5(c), it is clear that a large $\Delta_s$ (exceeding 20) decreases performance. > Q4: How is the performance of the proposed CARE on dynamical systems with more nodes, let's say, 10, 100 and more? A4. Thanks for the comment.
Actually, the experiments we conducted already involve systems with a significant number of nodes, well over the figures you mentioned. Specifically, the datasets and their respective node counts are as follows: | Dataset | Number of Nodes | |--------------------------------|-----------------| | Lennard-Jones Potential | 1000 | | 3-body Stillinger-Weber Potential | 1000 | | CylinderFlow | 9200 | | Airfoil | 10720 | We'll ensure these details are more clearly stated in our revised manuscript. > Q5: As shown in Table 2, CARE still suffers from accumulated error on longer time series. Do you have any ideas on how to decrease the accumulated error? A5. Thanks for the comment. Error accumulation is a longstanding problem in long-term prediction. To mitigate this issue, we're considering several strategies for our future research: - **Enhanced Model Capacity**. We can develop more advanced graph ODE models with higher capacity, e.g., second-order ODEs and augmented ODEs. These advanced models could fit the spatial data better, which has the potential to decrease the accumulated error. - **Adaptive Settings**. We can utilize active learning and online learning to obtain more observations, which would provide valuable feedback to enable further fine-tuning and adjustments during long-term predictions. - **Ensemble Learning**. Incorporating multiple models with varying time horizons can be a promising strategy. By combining their strengths, we can potentially minimize both the bias and variance of the prediction, further decreasing accumulated error. Thanks again for appreciating our work and for your constructive suggestions. Please let us know if you have further questions. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I would like to thank the authors for the response. My questions are fully addressed. --- Reply to Comment 1.1.1: Title: Thanks again for your feedback! Comment: Thanks again for your feedback!
We are pleased to know that our responses have addressed your concerns. We really appreciate your efforts in reviewing our paper, your insightful comments and support.
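As an aside, the distance-percentile rule described in A2 of the rebuttal above (connect node pairs whose pairwise distance falls at or below the chosen percentile) can be illustrated with a minimal pure-Python sketch. The particle positions and the helper name `build_graph` are hypothetical, for illustration only; this is not the authors' implementation.

```python
import math
from itertools import combinations

def build_graph(positions, percentile=30.0):
    """Connect node pairs whose Euclidean distance is at or below the
    given percentile of all pairwise distances (a sketch of the rule
    described in A2; self-pairs are excluded by construction)."""
    pairs = list(combinations(range(len(positions)), 2))
    dists = {(i, j): math.dist(positions[i], positions[j]) for i, j in pairs}
    ordered = sorted(dists.values())
    # index of the requested percentile among all pairwise distances
    k = max(0, min(len(ordered) - 1, int(len(ordered) * percentile / 100)))
    threshold = ordered[k]
    return {(i, j) for (i, j), d in dists.items() if d <= threshold}

# Three nearby particles plus one distant outlier: only the two
# unit-distance pairs fall under the 30th-percentile threshold.
edges = build_graph([(0, 0), (0, 1), (0, 2), (10, 0)], percentile=30)
# edges == {(0, 1), (1, 2)}
```

Raising the percentile loosens the threshold and admits longer-range edges, which mirrors the {25, 30, 35} sensitivity study in the rebuttal table.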
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We thank you for your careful reviews and constructive suggestions. We acknowledge the positive comments such as "Novel approach" (Reviewer ZaqZ), “Superior performance” (Reviewer ZaqZ), "Clarity" (Reviewer ZaqZ), "Extensive experiments” (Reviewer ZaqZ), “I like the idea and the experiments support its benefits.” (Reviewer P3pg), “The experiments in the paper are exhaustive” (Reviewer P3pg), “The proposed method is novel per se.” (Reviewer QmzE), “The paper is very well written” (Reviewer QmzE), “reports a large set of experiments and shows consistent improvement” (Reviewer QmzE). We have also responded to your questions point by point. The figure results are attached in the PDF. Please let us know if you have any follow-up questions. We will be happy to answer them. Best regards, the Authors Pdf: /pdf/fe3ee0969d41afe34e5630f0d87200f88da27bd9.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation
Accept (poster)
Summary: The paper works on the problem of visual RL for dexterous manipulation with pre-trained representations. Specifically, the authors propose to freeze the pre-trained convolutional layers and adapt only the BatchNorm layers on a small amount of in-domain data, and then learn the policy based on the pre-trained features with EMA updates. Experiments are performed on 12 simulation tasks. Strengths: The idea of leveraging large-scale pretraining to boost the performance of visual RL is interesting and well-motivated; The proposed method for adapting pre-trained features for dexterous manipulation is simple and straightforward; The paper is well-written and easy to read. Weaknesses: For some tasks, the proposed method seems on par with or slightly worse than baselines (e.g., Door, Relocate Mug, Relocate Mustard Bottle); The necessity of stage 3 in the pipeline can be called into question since the performance of the pipeline without stage 3 appears to be comparable to the performance of the full pipeline in the Hammer task. No limitations or failure modes are presented in the paper. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In Fig. 6 (b), why would larger or smaller momentums lead to better performance than an intermediate one? Could the authors give more theoretical analysis on this? It seems like the standard BatchNorm without EMA already achieves comparable performance. It would be great if more theoretical guidance on choosing the momentum value were provided instead of relying on empirical study; Further clarification or analysis from the authors regarding the significance and contribution of stage 3 would be valuable to better understand its role in achieving improved results; In Fig. 8, it would be better to also present the scales for the X-axis and Y-axis. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: No limitations or failure modes are presented in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions. We address each of your comments in the following. **Q1:** For some tasks, the proposed method seems on par or slightly worse than baselines (e.g., Door, Relocate Mug, Relocate Mustard Bottle); **A1:** In the field of deep reinforcement learning, it is inherently challenging for a single approach to excel across all tasks. In our experiments, although VC-1 slightly outperforms our method on tasks like Relocate Mug and Relocate Mustard Bottle, it significantly lags behind on others, including Relocate Tomato Soup Can and Relocate Foam Brick. We agree with the reviewer that the pursuit of a universally optimal representation is critical. Nevertheless, our method delivers robust performance across a spectrum of tasks, only matching baseline levels on a few. We believe the results demonstrate our advantage in both the number of tasks and average performance. **Q2:** The necessity of stage 3 in the pipeline can be called into question since the performance of the pipeline without stage 3 appears to be comparable to the performance of the full pipeline in the Hammer task. **A2:** To further show the functionality of our Stage 3, we conducted a more comprehensive evaluation during the rebuttal phase. Our experiments are conducted across all 7 tasks where the momentum m > 0. The results are given in Figure 3 of the rebuttal PDF file. It is observed that our simple adaptation in Stage 3 is helpful for all these tasks, especially for Relocate Mustard Bottle and Relocate Large Clamp. In our initial submission, due to computational limits, we did not ablate on all the tasks and only reported results on Hammer. **Q3:** No limitations or failure modes are presented in the paper. **A3:** We have described the limitations in the conclusion section. Our framework also has failure modes.
For instance, in Figure 2 of the rebuttal file, on the Relocate Box task, removing Stage 2 proves to be more effective than incorporating it. This suggests that merely combining Stage 1 with Stage 3 might be a sufficiently effective approach. **Q4:** In Fig. 6 (b), why would larger or smaller momentums lead to better performance than an intermediate one? Could authors give more theoretical analysis on this? **A4:** Intuitively, a larger momentum means that the model values the current and recent data points more and assigns them larger weights; this could suit cases where the policy learns the task very quickly and thus the incoming data changes quickly. In our experiments, the methods converge faster on *Hammer* than on other tasks, and thus the momentum should be larger (m=0.1). Following these intuitions, we set the momentum by additionally considering empirical performance. A rigorous theoretical analysis from which the momentum could be selected would be valuable; this is not the focus of this work but could be an interesting problem for future research. **Q5:** It seems like the standard BatchNorm without EMA already achieves comparable performance. It would be great if more theoretical guidance on choosing the momentum value is provided instead of using empirical study; **A5:** See A2 and A4. **Q6:** Further clarification or analysis from the authors regarding the significance and contribution of stage 3 would be valuable to better understand its role in achieving improved results; **A6:** See A2. **Q7:** In Fig. 8, it would be better to also present the scales for X-axis and Y-axis. **A7:** We have updated Figure 8 with X- and Y-axis scales. The new figure is also given as Figure 5 of the rebuttal file. We thank the reviewer for this suggestion.
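The momentum trade-off discussed in A4 can be made concrete with a toy exponential-moving-average sketch. The momentum values and the shifted data stream below are assumptions chosen for illustration; this is not the authors' code.

```python
def ema_update(running, batch_value, momentum):
    """Exponential moving average as used for BatchNorm running statistics:
    a larger momentum weights the most recent batch more heavily."""
    return (1.0 - momentum) * running + momentum * batch_value

# Toy illustration: the tracked statistic shifts from 0.0 to 1.0,
# mimicking a distribution change as the policy improves.
running_slow, running_fast = 0.0, 0.0
for batch_mean in [1.0] * 10:  # ten batches after the shift
    running_slow = ema_update(running_slow, batch_mean, momentum=0.01)
    running_fast = ema_update(running_fast, batch_mean, momentum=0.1)

# The larger momentum tracks the shifted statistic much faster:
# running_fast = 1 - 0.9**10 ≈ 0.651, running_slow ≈ 0.096.
```

This matches the intuition in A4: when the incoming data changes quickly (fast-converging tasks), a larger momentum keeps the running statistics closer to the current distribution, at the cost of noisier estimates.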
**Please do not hesitate to let us know if you have any additional comments.** --- Rebuttal Comment 1.1: Title: POST-REBUTTAL Comment: I extend my gratitude to the authors for sharing their feedback. The additional results and analysis effectively alleviate my prior concerns regarding the technical details of the methodology.
Summary: The paper proposes H-InDex for dexterous manipulation tasks with a 30-DoF robotic hand in simulation. The method consists of three stages: the first stage adopts a pre-trained network that is trained on a 3D human hand pose estimation dataset; the second stage adapts the representation with self-supervised keypoint detection, which updates only 0.18\% of the entire model's parameters; and the final stage learns an RL policy on top of the adapted module while also adapting the representation using an exponential moving average to handle the changing distribution during training. The method is extensively evaluated on 12 different visual dexterous manipulation tasks and is demonstrated to outperform the current SoTA baselines by a 16.8\% absolute improvement, also achieving superior sample efficiency in 10 out of the 12 tasks. The authors also tested the effectiveness of the proposed human hand prior by comparing the method with several different priors and showed that the hand prior from human movies does improve performance. Finally, ablations are conducted to show the effects of each stage, how the adaptation works, qualitative results on keypoint detection, and a visualization of the adaptation in BatchNorm layers. Overall, the paper is very well written and organized, and the reviewer enjoyed reading this work. However, there are some unclear descriptions and conclusions that cannot be drawn from the experiments. The reviewer would be happy to raise the score if the authors could answer the questions and/or clarify the reviewer's misunderstanding. Strengths: - The idea of using a prior from human hand pose estimation is interesting, and the experimental results support the soundness of the method. - Only adapting the limited parameters by changing the statistics of the BatchNorm layers is also interesting and could be useful for other tasks that involve domain transfer, such as sim2real.
- The experiments are extensively conducted and they support the idea of the proposed method. However, there are some unclear descriptions and/or conclusions that are summarized in the Weaknesses section. Weaknesses: - The authors draw some conclusions that are not empirically supported or require more experiments. The reviewer thinks the authors need to tone down their claims and/or cite adequate prior works to justify the claims. For example, - L.145-146: 'While finetuning only a small portion of parameters, it empirically outperforms both a frozen model and a fully finetuned model.' Please show the numbers, perhaps in the appendix. - L.209,212: 'with appropriate domain knowledge, ConvNets can still outperform ViT', 'these observations highlight the task-dependent nature of the strengths exhibited by ConvNets and ViTs.' The reviewer thinks these are rather the nature of algorithms, not feature extractors (which is also just a hypothesis, without any evidence). To draw this conclusion, the authors need to do experiments with the same backbone networks, the same algorithms, and a large enough number of random seeds. Without those, the reviewer does not think these claims are valid. - L.273-276: 'For the shallow layers, the distributions of the adapted models closely resemble those of the pre-trained models. However, as we move deeper into the layers, noticeable differences emerge. We attribute this disparity to the fact that dissimilarities between human hands and robot hands extend beyond low-level features like color and texture. Instead, they encompass higher-level features such as dynamics and structure.' At the very least, please cite relevant papers that analyze the adaptation between two domains. Especially, it would be very difficult to claim that 'they encompass higher-level features such as dynamics and structure.'. For what reasons did the authors draw this conclusion? - Fig.7: It is very hard to understand whether the keypoints are reasonable or not.
For example, the bottom red star of the leftmost figure does not look meaningful. - Fig.6: Why do the authors specifically choose the three environments; in particular, why is only (c) conducted on 'Place Inside'? Since the differences between the proposed method and the others look minor for all three ablations, readers may feel the environments were cherry-picked. The reviewer thinks the authors need to show more results on different environments (these can be in the appendix if space does not allow it). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Fig.2: The data distribution changes between stages 2 and 3 are a bit confusing. The left human-to-robot one should happen in stage 2 and the right one should happen in stage 3, correct? If so, the reviewer thinks it would be clearer to enclose the left one with blue dots and the right one with green dots. Otherwise, it looks like the adaptations are independent of the proposed three stages, which is confusing. - Fig.2 and Sec.4: In stage 2, what do the dots between 'keypoint decoder' and 'reconstruct tgt view' mean? The reviewer read the implementation details shown in the appendix, but it does not explain the 'dots' part. - The ratio of parameter changes is written as 0.36\% in the abstract and conclusion but written as 0.18\% in the other parts. Which is correct, or does the reviewer misunderstand something? - Fig.7: The reviewer cannot see the videos of how keypoints move in the video. - Sec.5.2: The reviewer is curious about the performance of the BC-only policy while adapting the domain gap using expert trajectories. Does it correspond to the performance at environment steps being zero in Figure 4? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - As mentioned in the conclusion section, the method is tested on the same objects in the training set. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions. We address each of your comments in the following. We omit the full questions due to space limits. **Q1:** L.145-146 ..... **A1:** We have conducted a more comprehensive evaluation of our Stage 2 and the results are given in Figure 2 and Figure 4 of the rebuttal file. It is clearly shown that trivially finetuning all parameters can lead to suboptimal performance (Figure 2), and so can skipping finetuning entirely (Figure 4). **Q2:** L.209,212:..... **A2:** We would like to clarify that the statements mentioned might have been slightly misunderstood, and we want to explain further here. ‘with appropriate domain knowledge, ConvNets can still outperform ViT’ With such statements, we are not trying to show that ConvNets are better than ViTs in general. *Instead, we want to show the advantage of our method*: that a feature representation with a ConvNet architecture can beat other strong ViT-based methods, including MVP and VC-1. ‘these observations highlight the task-dependent nature of the strengths exhibited by ConvNets and ViTs.’ This statement says that on different tasks, different architectures show very different results. For example, on *Relocate Tomato Soup Can*, RRL is much better than VC-1, while on *Relocate Mug*, it is the opposite. We agree with the reviewer that the statements were not made clearly enough to fully express our observations, and we will be more careful with these claims. **Q3:** L.273-276:..... **A3:** Previous studies in interpretability have shown that deeper layers in neural networks encode more high-level information. For example, Bau et al. [2] show that color and texture concepts dominate at lower layers while more object and part detectors emerge in deeper layers; the visualizations in [3, 4] show the increase in complexity and variation in higher layers, compared with the simpler components of lower layers.
In our experiments, we observe that the adaptation transforms the deep-layer features more, and we thus hypothesize that our method adapts the distribution of human hand features to the distribution of robot hand features, *especially in the higher-level parts*. **Q4:** Fig.7: .... **A4:** We apologize for the unclear explanation in the paper and provide more explanation here. The term *keypoints* in our study refers to landmarks as defined in [1], which identify the most critical pixels for image reconstruction. As a result, some of these points might be located in unexpected positions. Additionally, the quantity of keypoints is preset (we use n=30 in our paper), implying the potential for redundant or idle points. **Q5:** Fig.6: ..... **A5:** We acknowledge the insufficiency of the ablations and we want to respectfully emphasize that our ablation results are not cherry-picked; instead, the task is randomly sampled. During the rebuttal phase, we provide a much more comprehensive evaluation of our Stage 2 and Stage 3. The results are given in the PDF file and clearly show the advantage of our method across tasks. We are also committed to releasing our code. **Q6:** Fig.2: ..... **A6:** We thank the reviewer for the detailed suggestion on improving the paper. We have updated Figure 2 as suggested. The updated version is also given in the PDF file. **Q7:** Fig.2 and Sec.4: ..... **A7:** We omit the complex self-supervised learning process in Figure 2 due to space limitations and explain it briefly in Section 4. Here we would like to give a more intuitive explanation. More details can be found in [1]. At a high level, the self-supervised learning task used in our paper is to reconstruct the target image, given a source image and a target image. The target image provides the keypoints, and the source image provides the appearance.
For example, if we have an image with a robot hand and the background (this is the source image), we could provide the new positions of the robot hand (encoded from the target image), and the information in the source image would be sufficient to reconstruct the target image in which the robot hand is in the new position. **Q8:** The ratio of parameter changes.... **A8:** In Stage 2, the learnable parameters (weight, bias) in BatchNorm layers are updated, which is 0.18% of the entire network. In Stage 3, the moving mean and variance statistics in BatchNorm are updated, which is also 0.18% of the entire network. These two parts together are 0.36%. **Q9:** Fig.7: The reviewer cannot see the videos of how keypoints move in the video. **A9:** The videos were displayed on our anonymous website at the time of the initial submission. They can be played correctly in the Chrome browser on a MacBook or iPhone. **Q10:** Sec.5.2.... **A10:** Yes. The performance of the BC-only policy corresponds to environment steps being zero. **Please do not hesitate to let us know if you have any additional comments.** [1] Jakab, Tomas, et al. "Unsupervised learning of object landmarks through conditional image generation." Advances in Neural Information Processing Systems 31 (2018). [2] Bau, David, et al. "Network dissection: Quantifying interpretability of deep visual representations." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. [3] Yosinski, Jason, et al. "Understanding neural networks through deep visualization." arXiv preprint arXiv:1506.06579 (2015). [4] Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13. Springer International Publishing, 2014. --- Rebuttal Comment 1.1: Title: Reply by the reviewer Comment: > QA1,2,3,6,7,8,9,10 The reviewer wants to thank the authors for the clarifications.
Some other QAs still remain unclear. > QA4: "the quantity of keypoints is preset (we use n=30 in our paper), implying the potential for redundant or idle points." I think explicitly writing this in the paper would be better. Otherwise, readers (like me) will be confused. > QA5 "We acknowledge the insufficiency of the ablations and we want to respectfully emphasize that our ablation results are not cherry-picked; instead, the task is randomly sampled. During the rebuttal phase, we give a much more comprehensive evaluation of our Stage 2 and Stage 3. The results are given in the PDF file and clearly show the advantage of our method across tasks. We are also committed to releasing our codes." I don't think Fig.9 answers the original question, while still being very informative and comprehensive. Can the authors provide more results on ablations? Fig.9 is not related to Fig.6. --- Reply to Comment 1.1.1: Title: Thank you for the feedback! Comment: We really appreciate your feedback! We would like to answer your questions as follows: --- **QA4:** "the quantity of keypoints is preset (we use n=30 in our paper), implying the potential for redundant or idle points." I think explicitly writing this in the paper would be better. Otherwise, readers (like me) will be confused. **A:** Yes, we agree with the reviewer, and we will add this detail about Stage 2 in our revised paper. Thank you for the suggestion. --- **QA5:** “We acknowledge the insufficiency of the ablations and we want to respectfully emphasize that our ablation results are not cherry-picked; instead, the task is randomly sampled. During the rebuttal phase, we give a much more comprehensive evaluation of our Stage 2 and Stage 3. The results are given in the PDF file and clearly show the advantage of our method across tasks. We are also committed to releasing our codes." I don't think Fig.9 answers the original question, while still being very informative and comprehensive. 
Can the authors provide more results on ablations? Fig.9 is not related to Fig.6. **A:** Yes, Figure 9 (found in the supplementary file) is not related to the reviewer’s question, and this was not our intention. This figure (Fig. 9) was presented in our initial submission. For the rebuttal phase, we conducted substantially more ablation studies, and the results can be found in the attachment under **Author Rebuttal by Authors**. We kindly ask the reviewer to consider our updated results, which comprehensively show the advantages of our proposed Stage 2 and Stage 3. --- **As the rebuttal phase draws to a close, we sincerely hope our responses have addressed the reviewer's concerns. We kindly hope the reviewer could raise the score if the concerns are addressed. If the reviewer has any additional comments, please do not hesitate to let us know.**
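As a side note on the A8 parameter accounting (Stage 2 adapts the BatchNorm affine parameters, Stage 3 the BatchNorm running statistics, each reported as 0.18% of the network), the bookkeeping can be sketched as follows. All layer sizes below are made up for illustration and are not the paper's actual network:

```python
# Hypothetical accounting: a BatchNorm layer with C channels holds 2*C
# learnable parameters (weight, bias; adapted in Stage 2) and 2*C running
# statistics (mean, variance; adapted in Stage 3).
other_params = 1_000_000        # all non-BatchNorm parameters (made up)
bn_channels = [64, 128, 256]    # BatchNorm layer widths (made up)

affine = sum(2 * c for c in bn_channels)   # Stage 2: weight + bias
stats = sum(2 * c for c in bn_channels)    # Stage 3: running mean + variance
total = other_params + affine              # size of the entire network

stage2_ratio = affine / total
stage3_ratio = stats / total
# Each stage touches the same number of values, so the two ratios coincide,
# mirroring the paper's "0.18% + 0.18% = 0.36%" bookkeeping.
```

Because weight/bias and mean/variance come in matching pairs per channel, the equality of the two ratios holds for any channel configuration.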
Summary: This paper studies a three stage framework for adapting pre-trained visual representations for dexterous manipulation tasks. Specifically, this work proposes only adapting a subset of network parameters in stages 2 and 3. The approach is tested on 12 manipulation tasks from Adroit and DexMV. These experiments demonstrate that the proposed techniques lead to more sample efficient learning. Furthermore, the overall approach outperforms alternative pre-training methods from recent work. Strengths: - This paper demonstrates that using a visual backbone pretrained for (human) hand-pose estimation leads to improvements in learning downstream dexterous manipulation tasks over strong baselines from recent work. These results suggest that this inductive bias might be missing from recently proposed methods. - This work proposes adapting a specific subset of parameters during stages 2 and 3 of training, and demonstrates that this leads to more efficient learning. - The paper is clearly written and most of the details of the proposed approach are clear. Weaknesses: Large parts of the proposed approach are directly applied from prior work including the pretrained visual representation (stage 1) and the learning objective from stage 2. This is clearly stated in the manuscript and appendix, and is not in itself an issue. However, this means the contributions of this work are primarily in the parameter efficient techniques used in stage 2 and 3. - Unfortunately, the results in Figure 6 indicate that stage 2 may improve sample efficiency, but may not improve asymptotic performance. Thus, it is unclear if this is a significant contribution. - Furthermore, the sample efficiency gains (Figure 6) are only demonstrated on two tasks (Hammer and Place Inside), so it is unclear how general these findings are. - Similarly, when compared to full finetuning (Figure 6c) it is not clear if asymptotic performance is improved and the sample efficiency gains are minimal. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does asymptotic performance compare in the ablations in Figure 6? - Do the sample efficiency improvements generalize to other tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Yes, the authors discuss limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions. We address each of your comments in the following. **Q1:** Large parts of the proposed approach are directly applied from prior work including the pretrained visual representation (stage 1) and the learning objective from stage 2. This is clearly stated in the manuscript and appendix, and is not in itself an issue. However, this means the contributions of this work are primarily in the parameter efficient techniques used in stage 2 and 3. **A1:** We thank the reviewer for the insightful comments and for acknowledging our contributions in stages 2 and 3 of the proposed approach. We greatly value your feedback. However, we would like to respectfully point out that Stage 1, the use of a representation pre-trained for 3D human hand pose estimation, is also one of our main contributions. While it is true that Stage 1 simply employs an existing model, **the inductive bias we introduce is non-trivial yet intuitive**. Many recent works rely on unsupervised learning on egocentric data to pretrain a new task-agnostic model, such as MVP [1,4], R3M [2], VIP [3], and RPT [5]. However, we show that, with proper usage, an existing pre-trained model from another domain can be sufficient. **Moreover, it is not straightforward to adapt a pre-trained model from a different domain.** As shown in Figure 5 of the paper, some other models that share a similar intuition, such as AlphaPose, fail to achieve reasonable results. And as shown in our new ablation results (Figure 2, Figure 3, and Figure 4 in the rebuttal file), incorrectly using the pre-trained model does not lead to reasonable performance. **Our framework, combining all three stages together and correctly adapting the pre-trained model, unleashes the potential of the pre-trained encoder.** We believe our Stage 1 and the effort of combining all three stages effectively are also core contributions of our work. 
[1] Xiao, Tete, et al. "Masked visual pre-training for motor control." arXiv preprint arXiv:2203.06173 (2022). [2] Nair, Suraj, et al. "R3m: A universal visual representation for robot manipulation." arXiv preprint arXiv:2203.12601 (2022). [3] Ma, Yecheng Jason, et al. "Vip: Towards universal visual reward and representation via value-implicit pre-training." arXiv preprint arXiv:2210.00030 (2022). [4] Radosavovic, Ilija, et al. "Real-world robot learning with masked visual pre-training." Conference on Robot Learning. PMLR, 2023. [5] Radosavovic, Ilija, et al. "Robot Learning with Sensorimotor Pre-training." arXiv preprint arXiv:2306.10007 (2023). **Q2:** Unfortunately, the results in Figure 6 indicate that stage 2 may improve sample efficiency, but may not improve asymptotic performance. Thus, it is unclear if this is a significant contribution. Furthermore, the sample efficiency gains (Figure 6) are only demonstrated on two tasks (Hammer and Place Inside), so it is unclear how general these findings are. Similarly, when compared to full finetuning (Figure 6c) it is not clear if asymptotic performance is improved and the sample efficiency gains are minimal. How does asymptotic performance compare in the ablations in Figure 6? Do the sample efficiency improvements generalize to other tasks? **A2:** During the rebuttal phase, we have conducted a more comprehensive evaluation of our proposed Stage 2 and Stage 3, across almost all tasks, as shown in our rebuttal PDF file (Figure 2, Figure 3, and Figure 4). In Figure 2, we conduct experiments across all 12 tasks and show that adapting all parameters in Stage 2 is not better than our proposed method in Stage 2, which only adapts 0.18% parameters. It is also observed that on Relocate Box, adapting all parameters is better, but this phenomenon only appears in one task. In Figure 3, we conduct experiments across all 7 tasks that use m>0 in Stage 3. 
It is shown that our adaptation in Stage 3 also makes a solid contribution to the final results achieved. In Figure 4, we show that directly removing Stage 2, as opposed to full finetuning (which is shown in Figure 2), is also harmful. This observation is natural since we use in-domain data to finetune the pre-trained model, and it generally shows that incorrectly finetuning the model (Figure 2) can hurt more than not finetuning the model at all (Figure 4). In response to the reviewer's comment on asymptotic performance, we present the convergence results in Figure 7. This indicates that our method reaches peak performance at 2M steps, so we saw no need to train for further steps. Overall, we have enhanced the ablations during the rebuttal phase, and we hope that these new results show the advantages of our proposed techniques in improving the pre-trained vision model for downstream RL tasks. **Please do not hesitate to let us know if you have any additional comments.** --- Rebuttal 2: Title: Thank you for the review and awaiting your response Comment: We sincerely thank you again for your efforts in reviewing our paper and for the suggestions. We believe that we have resolved all the concerns mentioned in the review. Should there be any additional concerns, we are more than happy to address them! Thank you very much! --- Rebuttal Comment 2.1: Title: Thanks for the response Comment: Thank you for the response. Most of my questions are resolved, but I am still somewhat concerned about asymptotic performance. I will update my score accordingly. In Figure 7 in the rebuttal pdf it appears that with enough environment steps, "stage 1" and "stage 1+2+3" may converge. This would of course mean that stages 2 and 3 are not needed. That said, it seems likely that the "stage 1" curve will saturate at or below the "stage 1+2" curve. While this remains an open, empirical question, these additional ablations are encouraging. 
--- Reply to Comment 2.1.1: Title: Thank you for the feedback! We address your questions as below. Comment: We thank the reviewer for the feedback. We agree with the reviewer that it would be good to also see results with more steps for *Stage 1 only*. During the rebuttal phase, we also ran *Stage 1 only* for longer. We provide an updated version of Figure 7 ([link](https://drive.google.com/file/d/1erAasSzVJOowc2z8HVvBwaeOJ_E49OXo/view?usp=sharing)). We observe that our method achieves the highest score with only 2M steps, while *Stage 1 only* gets similar or slightly worse results with 6M steps. Also, it should be acknowledged that on Hammer, *Stage 1 only* is already strong, which is exactly our original motivation for exploring this pre-trained representation. Additionally, we kindly hope the reviewer could also consider our new ablation results in Figure 2, Figure 3, and Figure 4 of the rebuttal file. We show that on various tasks, removing Stage 2 or Stage 3 leads to sub-optimal performance. Consequently, Stage 2 and Stage 3 are not only necessary but also crucial for our final performance, rather than "not needed". We will update all these results in our future revision. **We believe all our contributions are solid and demonstrated across diverse challenging dexterous manipulation tasks**. If the reviewer has any additional comments, please do not hesitate to let us know and we are happy and eager to address them.
Summary: This paper introduces a framework called H-InDex that uses human hand-inspired visual representation learning to solve complex manipulation tasks with reinforcement learning. The framework consists of three stages and outperforms other methods in challenging dexterous manipulation tasks. Strengths: * A novel and effective method to perform dexterous hand manipulation using visual RL. * Extensive experiments and ablation study on various tasks. * A convincing framework exploiting pre-trained vision models for robot learning. Weaknesses: * The method seems able to handle only a single object in each task, while the vision model should be able to generalize to category-level objects. * The RL action space, reward design, robotic control parameters, and the coordinate system used in the paper are unclear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Are the proprioceptive states of the robotic hand (e.g. joint states) considered in stage 3? * How does this work encode the object information in the compact representations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments and suggestions. We address each of your comments in the following. **Q1:** The method seems able to handle only a single object in each task, while the vision model should be able to generalize to category-level objects. **A1:** Achieving generalization across objects is not trivial, especially when the training objects are fixed and not diverse. In this work, all our tasks have only one single object, and we do not put our focus on generalizable manipulation (clearly stated in the limitation part). Instead of generalizable manipulation, one good property of the pre-trained visual representation is visual generalization, such as generalizing to unseen scenes. To show our model’s visual generalization ability, we change the visual backgrounds and evaluate our H-InDex and VC-1 on the Relocate Potted Meat Can task. The results are given in Table 2 of the rebuttal file and the visualizations are also given in Figure 6. We observe that, though VC-1 has slightly better training performance on this task, our method presents better robustness on these new scenes, indicating the strong visual generalization ability of H-InDex. **Q2:** The RL action space, reward design, robotic control parameters, and the coordinate system used in the paper are unclear. **A2:** In this work, all our tasks follow prior works (Adroit [1] and DexMV [2]) and we do not make specific changes to the tasks, such as the action space and reward design. The action space is 30-dimensional and the actions control the joints of a five-finger robot hand, i.e., the Adroit hand. We use a dense reward for RL and the reward design is task-specific. We will add more task details in our revised version. **Q3:** Are the proprioceptive states of the robotic hand (e.g. joint states) considered in stage 3? **A3:** Yes, and the robot hand states are only used in Stage 3 (the RL stage). 
In our paper, we also mention the usage of robot hand states in Section 3 (L102). In addition, the usage of robot hand states is a common practice in visual-based robot learning [3, 4, 5], and all our baseline methods also use robot states for fair comparison. **Q4:** How does this work encode the object information in the compact representations? **A4:** The object information is directly encoded from the visual observations. We do not use any other information such as the object bounding box, the object coordinates, and so on. This makes the learning process extremely challenging, especially for dexterous manipulation. **Please do not hesitate to let us know if you have any additional comments.** [1] Rajeswaran, Aravind, et al. "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations." arXiv preprint arXiv:1709.10087 (2017). [2] Qin, Yuzhe, et al. "Dexmv: Imitation learning for dexterous manipulation from human videos." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [3] Shah, Rutav, and Vikash Kumar. "Rrl: Resnet as representation for reinforcement learning." arXiv preprint arXiv:2107.03380 (2021). [4] Xiao, Tete, et al. "Masked visual pre-training for motor control." arXiv preprint arXiv:2203.06173 (2022). [5] Radosavovic, Ilija, et al. "Real-world robot learning with masked visual pre-training." Conference on Robot Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response and will maintain my ratings. --- Reply to Comment 1.1.1: Title: Thank you for maintaining the score as weak accept! Comment: We sincerely appreciate your feedback and understand from your comments that our rating stands at *Weak Accept*. However, we observed that the score in the review has been adjusted to *Borderline Accept*. We wonder if this might have been an unintended change. If that's the case, could the reviewer kindly revert the score? 
We have diligently addressed all the concerns previously mentioned. If there are any further issues or feedback, we are happy and eager to address them promptly.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments. We have addressed all your individual comments. We thank the reviewers for acknowledging the novelty and the empirical evaluation of our work – “a novel and effective method” (GX9c), “experiments are extensively conducted” (rofX), “interesting and well-motivated” (di2q), “improvements over strong baselines” (s3to). Additional experiments were also conducted during the rebuttal phase to support our proposed method (given in the PDF file), as suggested by the reviewers. **EXP1: Ablation on Stage 2 (adapting 0.18% of parameters or adapting 100% of parameters)** in reply to Reviewer s3to and Reviewer rofX. Results are given in Figure 2. We compare *only adapting 0.18% of parameters in Stage 2* (our proposed method) with *adapting all parameters*, across all 12 tasks. It is clearly shown that finetuning all parameters can lead to suboptimal performance and even makes agents fail to learn on *Door, Relocate Large Clamp, and Relocate Potted Meat Can*. We also observe that on the Relocate Box task, finetuning 100% of parameters is surprisingly more effective, but considering its high variance and low average scores across the 12 tasks, the advantage of our proposed adaptation is clear. **EXP2: Ablation on Stage 3 (m > 0 or m = 0)** in reply to Reviewers s3to, rofX, and di2q. Results are given in Figure 3. We compare setting *m > 0* with *m = 0* across all 7 tasks where m is set to be larger than 0. It is observed that adding this simple technique (updating statistics with momentum) during the RL phase can have a large impact. The empirical results emphasize the importance of correct adaptation when applying the pre-trained model. 
We do not conduct more ablations on the remaining 5 tasks that originally use m=0, since we currently do not find a suitable m\in {0.1, 0.01, 0.001} that could outperform m=0, which means the visual representation is already good enough without adaptation in the RL phase. **EXP3: Ablation on Stage 2 (with or without adaptation)** in reply to Reviewer s3to and Reviewer rofX. In addition to the experiments on adapting 100% of parameters, we also show more results without Stage 2 in Figure 4. The experiments cover all 6 kinds of tasks in our paper, which we believe is extensive enough to show the necessity of our Stage 2. It is observed that, except on the Door task (where results are similar), our method with Stage 2 outperforms the variant without Stage 2 across all other tasks. **EXP4: Visual generalization to unseen backgrounds** in reply to Reviewer GX9c. Results are given in Table 2 and visualizations are given in Figure 6. The experiments are conducted on the Relocate Potted Meat Can task, where the best baseline VC-1 is slightly better than our H-InDex. However, we show that when the backgrounds change from seen to unseen, VC-1 is distracted more than H-InDex: H-InDex achieves an average score of 475.04 across 9 new scenes while VC-1 achieves only 392.56. This experiment shows that even after adaptation, our visual representation does not simply overfit to the training data and retains good generalization ability. Again, we thank the reviewers for their constructive feedback. We believe that all individual comments have been addressed, but are happy to address any further comments from reviewers. Pdf: /pdf/b2c79eef40e45bc8a2d161b6731c4ea6590641cb.pdf
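The Stage 3 technique discussed in EXP2 (updating BatchNorm statistics with momentum m during the RL phase) can be sketched as follows. The exact update rule in the paper is not spelled out here; this sketch assumes the conventional BatchNorm momentum formulation:

```python
def update_running_stats(running_mean, running_var, batch_mean, batch_var, m):
    """Blend pre-trained BatchNorm statistics with the current batch's
    statistics using momentum m. Setting m = 0 freezes the pre-trained
    statistics entirely (the variant used on the other 5 tasks)."""
    new_mean = (1.0 - m) * running_mean + m * batch_mean
    new_var = (1.0 - m) * running_var + m * batch_var
    return new_mean, new_var

# m = 0: statistics stay exactly at their pre-trained values
frozen = update_running_stats(0.5, 1.0, 0.9, 2.0, 0.0)

# m = 0.01: statistics drift slowly toward the RL observation distribution
drifted = update_running_stats(0.5, 1.0, 0.9, 2.0, 0.01)
```

Only these buffered statistics move in Stage 3; no gradient-based weight update is involved, which is why the adaptation stays parameter-efficient.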
NeurIPS_2023_submissions_huggingface
2023
K-Nearest-Neighbor Local Sampling Based Conditional Independence Testing
Accept (poster)
Summary: This paper argues that conditional independence (CI) testing becomes challenging because of high-dimensional conditioning variables and limited data samples. To address these issues, the authors propose a testing approach incorporating a class-based conditional mutual information (CMI) and a $k$-nearest-neighbor local sampling strategy. Theoretical analysis demonstrates its asymptotic control of type I error and consistency against all alternative hypotheses. Extensive empirical results on synthetic and real data show that the proposed method achieves computational efficiency, decent performance under different scenarios, and robustness towards heavy-tailed data. Strengths: This work is motivated by two significant challenges of CI testing, i.e., high-dimensional conditioning variables and limited data samples. The authors do a good job of motivating this work, and the proposed testing approach is simple but effective. Extensive empirical results show that the proposed method achieves a better trade-off between type I error rate and testing power. In addition, the paper is very well written and easy to understand. Weaknesses: My main doubts/concerns regarding the paper are the following: - Lines 104-105 claim that the $k$-nearest-neighbor local sampling strategy is an alternative to the binning strategy. However, this paper does not provide a comparison between the proposed testing approach and the methods employing a binning strategy [1,2]. - In Figure 3, the results on synthetic data show that the performance is sensitive to the selection of $k$. Therefore, it is unreasonable to apply $k=7$ in all experiments, especially on real data. Minor: - Empirical analysis shows that the proposed method achieves a better trade-off between type I error rate and testing power. It is hard to say that the proposed method outperforms existing SOTA methods. [1] I. Kim, et al., "Local Permutation Tests for Conditional Independence", The Annals of Statistics 2022. [2] D. 
Margaritis, "Distribution-free Learning of Bayesian Network Structure in Continuous Domains", AAAI 2005. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Can the authors add more discussions on the difference between the $k$-nearest-neighbor local sampling strategy and the binning strategy? What are the advantages of the $k$-nearest-neighbor local sampling strategy? 2. Can the authors provide the optimal $k$ on real data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have addressed the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful review! **Weakness 1**. The binning strategy can only handle conditioning variables $Z$ with very few dimensions. Please see the response for Question 1. We therefore did not consider the comparison with the methods employing a binning strategy in the paper. **Weakness 2**. Thank you for this important question. Conditional independence testing is an unsupervised problem, since the ground truth information regarding conditional independence or non-conditional independence is unavailable. Therefore, the cross-validation method does not work for determining the optimal $k$ in practical scenarios. The task of selecting the optimal $k$ appears to be quite challenging in the context of CIT. We will delve into developing methods to identify the optimal parameter $k$ in our future research. Nevertheless, utilizing $k=7$ in the experiments and the two real data analyses yields very good performances, even though it may not represent the optimal choice for $k$. It is noted that, in the works related to CIT, such as Runge (2018), Bellot and van der Schaar (2019), and Scetbon et al. (2022), the hyperparameters are chosen using synthetic data and then applied to other experiments and real data analyses. **Weakness (Minor)**: When assessing the effectiveness of a test, it's crucial that it showcases a higher statistical power compared to other methods, while simultaneously effectively controlling the type I error. On one hand, if a test fails to control the type I error well, it cannot be considered as reliable. On the other hand, when a test successfully controls the type I error, its power is then compared to that of other tests. If it exhibits superior power, it can be confidently regarded as more favorable and superior. Based on this perspective, it is observed from the experiments that our method outperforms the commonly used competitive SOTA methods. 
For this statement, we will replace “existing SOTA methods” with “the commonly used competitive SOTA methods”. **Question 1**. The binning strategy can only handle conditioning variables $Z$ with very few dimensions. Kim et al. (2022) proposed two binning strategies. The first binning strategy is to partition the conditioning variable $Z$ into $M$ bins of equal size. Then, we permute the X-samples in each bin randomly and perform local permutation tests. The second binning strategy is to partition $Z$ into $M$ coarser bins and further partition each coarser bin into fine bins in which permutation occurs. Kim et al. (2022) used both strategies to process univariate $Z$ in all of the experiments. Margaritis (2005) adopted a recursive-median binning strategy and cannot handle a high-dimensional conditioning variable $Z$ either. In fact, we have conducted simulation studies and confirmed that the binning strategy can only handle conditioning variables $Z$ with very few dimensions. We now explain the limitations of the binning strategy in handling a high-dimensional conditioning variable $Z$. Binning serves as a technique to partition samples that exhibit notable similarity with respect to variable $Z$ into distinct bins. Shuffling $X$ within each bin maintains the relationship between $X$ and $Z$, as opposed to directly permuting $X$ throughout the entire dataset. Nevertheless, employing the binning method in high-dimensional scenarios poses challenges (Berrett et al., 2020). One challenge stems from the intricate task of selecting bins where the samples within each bin exhibit notable similarity concerning the variable $Z$. Another challenge arises from the generation of an excessive number of bins for high-dimensional $Z$. For instance, consider a scenario where the dimension of $Z$ is $100$. By applying the first binning strategy described in Kim et al. (2022) to partition each dimension of $Z$, we would generate an excessive number of bins for $Z$. 
Specifically, the number of bins is $M^{100}$. Even with a small value of $M=2$, the binary binning technique would produce a staggering $2^{100}$ bins. As highlighted by Kim et al. (2022), redundant bins can result in a reduction of test power. More specifically, a substantial number of bins within the high-dimensional space of Z consist of just one or two samples. On one hand, as per the definition of $U$-statistics in the computation of test statistics in Kim et al. (2022), only bins comprising a minimum of $4$ samples are included in the computation of $U$-statistics. Consequently, those bins containing just one or two samples do not contribute to the calculation of the test statistics. This leads to an underutilization of samples in these bins. On the other hand, significant similarity among various permutations can arise when numerous bins consist of merely one or two samples. This similarity can result in insufficient statistical power, and may also undermine effective control of the type I error rate. Different from the binning strategy involving permutation, the k-nearest-neighbor local sampling strategy serves as an alternative sampling approach. This approach equips us with pseudo samples $\widetilde{\pmb{X}}^{(b)}$ ($b=1,\cdots,B$) that exhibit considerable dissimilarity while closely approximating the true conditional distribution in terms of the total variation distance, as evidenced by our Lemma 1. Especially in scenarios with high-dimensional $Z$, the k-nearest-neighbor local sampling strategy, compared to the permutation-based binning approach, effectively resolves concerns like maintaining the relationship between $X$ and $Z$ for pseudo sample $\tilde{X}$, as well as addressing issues of sample underutilization and pronounced similarity among permutations. This results in adequate statistical power and good control of the type I error rate within the context of high-dimensional $Z$. **Question 2**. Please see our response to Weakness 2. 
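The bin-count argument above can be made concrete with a short sketch; the numbers mirror the $M=2$, 100-dimensional example from the rebuttal and are purely illustrative:

```python
def n_bins(M, d):
    """Number of bins produced when each of d dimensions of Z is split
    into M equal intervals (the first strategy described in Kim et al., 2022)."""
    return M ** d

# Binary binning of a 100-dimensional Z: 2**100 bins, so with n in the
# hundreds almost every bin holds at most one or two samples, which the
# U-statistic computation then discards.
bins_high_dim = n_bins(2, 100)

# The k-nearest-neighbor local sampling strategy instead forms one k-sized
# neighborhood per sample: n neighborhoods in total, regardless of dim(Z).
n_neighborhoods = 500  # e.g., n = 500 samples
```

The contrast is between exponential growth in the dimension of $Z$ for binning versus linear growth in the sample size $n$ for local sampling, which is why the latter keeps every sample in use.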
--- Rebuttal Comment 1.1: Title: Official Comment by Reviewer P7Sn Comment: Dear authors, thanks for the clarifications. The comments and references from the authors solve my primary concerns, and I've updated my score. --- Reply to Comment 1.1.1: Title: Thank Reviewer P7Sn Comment: Dear Reviewer P7Sn, Thanks for your positive feedback and raising the score. Authors
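For readers who want the local sampling step in code, here is a minimal sketch of how pseudo samples $\widetilde{X}$ might be drawn; the paper's exact procedure (tie-breaking, sampling with or without replacement, and the treatment of the point itself) may differ from this illustrative version:

```python
import random

def knn_local_sample(X, Z, k, rng=None):
    """Draw one pseudo sample X~_i per data point by picking an X-value
    uniformly from the k nearest neighbors of Z_i (excluding i itself),
    so the X-Z relationship is locally preserved."""
    rng = rng or random.Random()
    n = len(Z)
    X_tilde = []
    for i in range(n):
        # indices sorted by squared Euclidean distance to Z_i
        order = sorted(range(n),
                       key=lambda j: sum((a - b) ** 2 for a, b in zip(Z[i], Z[j])))
        neighbors = [j for j in order if j != i][:k]
        X_tilde.append(X[rng.choice(neighbors)])
    return X_tilde

# Toy usage: 1-D X, 2-D Z, k = 3. Each pseudo value comes from a nearby Z.
rng = random.Random(0)
Z = [(z, z) for z in range(10)]
X = [float(z) for z in range(10)]
X_tilde = knn_local_sample(X, Z, k=3, rng=rng)
```

The O(n^2 log n) brute-force neighbor search here is only for clarity; a KD-tree or ball-tree would be used at scale.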
Summary: In this paper, the authors present a method for conditional independence (CI) testing based on a conditional mutual information (CMI) estimator. This CMI estimator is based on a classifier over 1-NN samples. From the CMI estimates, the authors present a hypothesis testing procedure for CI based on multiple iterations over estimates of CMI on K-NN samples. The authors present a theoretical analysis of their approach, and the empirical results show that their approach has low Type 1 error and high power. The empirical section also shows that their approach remains consistent as the number of dimensions of the conditioning variables increases. Strengths: The paper is well written and technically sound. The contribution builds on previous approaches but is an original method for the important task of CI testing. The method is very promising given the theoretical and empirical results, providing balanced detection of TP and TN links in real-world datasets as shown by the top F-scores. The use of a single classifier for the CMI estimation is advantageous as the CI test is usually executed multiple times for graph detection. Weaknesses: I find the paper really compelling and useful in many other ML tasks. However, I believe the empirical evaluation could have gone a step further. In particular, I believe a distillation procedure would be helpful to show which part of your approach contributes to the significant results. More precisely, it would be great to see how good your CMI estimator is. A comparison against two-classifier approaches and even a synthetic dataset where you have access to the ground truth conditional distributions would be very useful. This could show how tight the bound is given your estimator. Furthermore, it would be nice to see the performance of your approach given different classifiers, e.g., DNNs or Random forest methods as mentioned in your paper. In the presentation, I would make the figures larger. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I might have missed it, but it wasn't clear to me which classifier you used in your experiments? What do you mean by $V_0^{test}$ in line 7 of alg. 2? Is that any instance over the union of the test sets in f and g? How long does your approach take compared to the others? After all, a large number of repetitions $B$ could add up significantly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I don't expect any potential negative societal impact from this work Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review! **Weakness 1**. To show which parts of our approach contribute to the significant results, we have done some additional experiments as per your suggestions. Specifically, the classifiers we evaluate include DNNs, XGBoost, and Random forest. The simulation dataset is generated according to 'Scenario I: The post-nonlinear model' in the paper. We maintain the distribution as the standard Gaussian, with the sample size $n=500$ and the dimensions of $Z$ set to $10$ and $50$. First, we assess the effects of different classifiers on CMI estimation. Under the null hypothesis $H_0$, the true CMI value is zero. We calculate the mean squared errors (MSEs) of the CMI estimators using different classifiers under $H_0$. These MSE values are averaged over 100 independent trials. It is observed from Table 3 in the pdf file (attached in the global response) that the MSEs of the CMI estimators based on the three classifiers exhibit remarkable similarity. Furthermore, these three classifier-based CMI estimation methods all yield estimators that closely approximate the true value of $0$ under the null hypothesis. However, calculating the true value of CMI under $H_1$ is highly complex. Therefore, we do not compute the MSE in this case. Instead, we further utilize the thresholding approach to evaluate the performances. To be specific, we conduct the experiment independently 100 times under the assumption of $H_0$. While $H_0$ being true implies $I(X;Y|Z)=0$, the CMI estimator from a finite-sized dataset seldom equals zero due to statistical variation. We simply set the threshold at $0.1$ for the CMI estimator: if $\hat{I}>0.1$, we reject $H_0$, and we then count the instances of falsely rejecting $H_0$. Similarly, under $H_1$, we threshold the CMI estimator at $0.1$ and record the number of correct rejections of $H_0$. Table 4 in the pdf file (attached in the global response) records the results obtained.
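For illustration, the threshold-based counting described above (reject $H_0$ whenever $\hat{I}>0.1$, and count rejections over repeated trials) can be sketched in a few lines of Python; the function name and the example estimates below are ours, purely illustrative:

```python
import numpy as np

def threshold_ci_test(cmi_estimates, threshold=0.1):
    """Count rejections of H0 given CMI estimates from repeated trials.

    `cmi_estimates` holds one CMI estimate per independent trial; H0 is
    rejected whenever the estimate exceeds `threshold`.  Under H0 these
    rejections are type I errors; under H1 they count toward the power.
    """
    cmi_estimates = np.asarray(cmi_estimates)
    return int(np.sum(cmi_estimates > threshold))

# Hypothetical estimates from five trials under H0 (true CMI = 0):
estimates_h0 = [0.02, 0.15, 0.08, 0.01, 0.12]
print(threshold_ci_test(estimates_h0))  # 2 estimates exceed 0.1
```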
We observed that XGBoost-CMI has the highest empirical type I errors $0.12 (0.19)$ and empirical testing power $0.85 (0.83)$, and DNNs-CMI has the lowest empirical type I error $0.05 (0.07)$ and empirical testing power $0.38 (0.12)$. In practice, choosing an appropriate threshold value presents a significant challenge due to the inherent difficulty in accurately estimating the statistical variance of the CMI estimator. Therefore, relying on a threshold-based scheme for testing conditional independence is unwise. To enhance the effectiveness of CI testing based on the classifier-CMI estimator, we propose in this paper applying the k-nearest-neighbor local sampling strategy to approximate the conditional distribution $X|Z$ that encodes the null hypothesis. This scheme takes into account the statistical variation inherent in the CMI estimation. It is not only simple and effective, but also has a theoretical guarantee (see Theorem 2). Second, we employ the same dataset utilized for CMI estimation to evaluate the performance of our CMI estimation method using various classifiers equipped with the k-nearest-neighbor local sampling approach. We present the type I error rate, testing power, and timing performance for a single test for each method in Table 5 in the pdf file (attached in the global response). These results are averaged over 100 independent trials. Upon examination of the results in Tables 4 and 5 in the pdf file (attached in the global response), we observe that when using the threshold value as the basis for testing conditional independence, the XGBoost-CMI method exhibits an inflated type I error, while the DNNs-CMI method demonstrates low testing power. However, when incorporating the k-nearest-neighbor local sampling scheme, the performance of both methods improves.
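The k-nearest-neighbor local sampling idea mentioned above can be sketched as follows: each $X_i$ is replaced by the $X$ value of a uniformly chosen k-nearest neighbor of $Z_i$, approximating a draw from $p(x|z)$. This is a simplified illustration assuming Euclidean distance on $Z$, not the exact Algorithm 3 in the paper:

```python
import numpy as np

def knn_local_sample(X, Z, k=5, rng=None):
    """Draw one pseudo-sample of X: swap each X_i with the X value of a
    uniformly chosen k-nearest neighbor of Z_i (the point itself excluded).

    Approximates sampling from p(x | z) without knowing it explicitly.
    """
    rng = np.random.default_rng(rng)
    n = Z.shape[0]
    # Pairwise squared Euclidean distances between conditioning vectors.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nbrs = np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest neighbors
    picks = nbrs[np.arange(n), rng.integers(0, k, size=n)]
    return X[picks]

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 10))
X = Z[:, 0] + 0.1 * rng.normal(size=100)  # X depends on Z only
X_tilde = knn_local_sample(X, Z, k=5, rng=1)
print(X_tilde.shape)  # (100,)
```

Since the pseudo-sample reuses observed $X$ values from points with nearby $Z$, it preserves the $X$-$Z$ dependence while breaking any residual $X$-$Y$ dependence, which is exactly what the null hypothesis requires.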
Specifically, the empirical type I error rate of XGBoost-CMI decreases from $0.12 (0.17)$ to $0.06 (0.06)$, and its empirical testing power increases from $0.85 (0.83)$ to $0.97 (0.96)$. Moreover, the empirical testing power of DNNs-CMI increases from $0.38 (0.12)$ to $0.85 (0.34)$. These results indicate that both a good classifier-based CMI estimator and the k-nearest-neighbor local sampling strategy are key factors contributing to the significant outcomes achieved by our test. **Weakness 2**. Thanks for your suggestion. We will make the figures larger in the final version. **Question 1**. The XGBoost classifier was used in all of our experiments. **Question 2**. After completing the classifier training in Algorithm 2 (line 7), we proceed to compute the prediction probability $P(l=1|w)$ for each feature $w$ in the test set $V^{test}$. The label of the sample $w$ (i.e., $l=1$ or $l=0$) is not required when computing $P(l=1|w)$. However, since $V^{test}$ contains labels for all samples, we introduce a new symbol $V_{0}^{test}$, which includes all the features in $V^{test}$ but excludes the labels. $V_{0}^{test}$ is essentially the union of the test sets in $f$ and $g$. **Question 3**. We report the timing performance of all methods for a single test in Figure 7 of the Supplementary Materials. Our test is found to be highly computationally efficient even when dealing with large sample sizes and high-dimensional conditioning sets. In contrast, CMIknn and CCIT are impractical for sample sizes exceeding 1000, and LPCIT for dimensions of $Z$ higher than 50, due to their prohibitively long running times. For example, when $n=2000$ and $dz=80$, a single test of our method takes 42 seconds, GCIT 12 seconds, CCIT 173 seconds, KCIT 13 seconds, NNSCIT 31 seconds, CMIknn 305 seconds and LPCIT 188 seconds. In our approach, we set $B=200$. --- Rebuttal Comment 1.1: Comment: I'm satisfied with the comments of the authors and I will keep my scoring as is.
--- Reply to Comment 1.1.1: Title: Thank Reviewer 671y Comment: Dear Reviewer 671y, Thanks for your positive feedback. Authors
Summary: In this paper, the authors propose a novel CMI estimator with a classifier-based approach. The estimated CMI is then used for conditional independence testing. This approach achieves asymptotic type I & II control and does not require prior assumptions on the distribution, types of correlations, or knowledge of the null distribution. Empirical results show promising performance. Strengths: 1. This paper is well-written. 2. The proposed CMI is very natural for testing CI. The algorithm is easy to implement given any classifier. 3. The proposed method is backed up by theory (Theorems 3 & 4). Weaknesses: 1. As the authors have pointed out, prior works for estimating MI (e.g. Suzuki et al., 2008 and [3]) using density ratios do exist, and a conditional version of such an estimator is not exactly a huge leap forward (but no small feat either). 2. The control of type I & II errors is asymptotic, and it is unclear how useful they are when the sample size is small. The claims of Theorems 3 and 4 are unsurprising (due to the consistency of KNN and the smoothness of the conditional distribution). If the proof is non-trivial, please point it out. Suzuki et al., 2008 Approximating Mutual Information by Maximum Likelihood Density Ratio Estimation Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why is the CMI estimated using classifiers rather than directly optimising (4) like f-GAN? Why not directly use the maximum of (4) to approximate the CMI? This way, you would not need to split the dataset into training and testing sets, right? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful review! **Weakness 1**. The Maximum Likelihood Mutual Information (MLMI) method proposed by Suzuki et al. (2008) directly models the density ratio and avoids density estimation. MLMI estimates the density ratio using maximum likelihood and formulates it as a convex optimization problem using Legendre-Fenchel duality. We may extend this to the conditional version. Nevertheless, the MLMI-based method heavily depends on selecting appropriate basis functions that can effectively capture the information in $Z$, and it can face challenges due to the curse of dimensionality (Mukherjee et al., 2020). In contrast, our approach employs classifier-based estimation of the likelihood ratio, offering a solution capable of addressing high-dimensional challenges. **Weakness 2**. First, in the simulation study, we present some results when the sample size is $n=300$, as can be seen in Figures 1 and 4. Even when the sample size is small, our proposed test controls the type I error well and achieves adequate testing power. Note that GCIT and CMIknn consider a minimum sample size of 500, and DGCIT, NNSCIT and CCIT all consider a minimum sample size of 1000. Second, we discuss the non-trivial points in the type-I error control of our method in Theorem 3. If the conditional distribution of $X|Z$ is known, the p-value is calculated by $$p:=\frac{1+\sum_{b=1}^{B}{1}(\widetilde{\mbox{CMI}}{}^{(b)}\geq \widehat{\mbox{CMI}})} {1+B},$$ as can be seen in Equation (8) in the paper. Because of the exchangeability of the $B+1$ triples $(\pmb{X}, \pmb{Y}, \pmb{Z}),(\pmb{X}^{(1)}, \pmb{Y}, \pmb{Z}),\cdots,(\pmb{X}^{(B)}, \pmb{Y}, \pmb{Z})$ under $H_0$, this p-value is valid and $P(p\leq \alpha |H_{0})\leq \alpha$ holds for any given $\alpha \in (0,1)$. However, in our case, $X|Z$ is unknown. $\widetilde{\pmb{X}}$ is drawn from $\widehat{p}(\cdot|\pmb{Z})$ instead of $p(\cdot|\pmb{Z})$.
Therefore, the existing argument based on the assumption of a known conditional distribution $X|Z$ does not apply in this context. So, in the proof of Theorem 3, we introduce an additional sample $\acute{\pmb{X}}$ drawn from $\widehat{p}(\cdot|\pmb{Z})$ independently of $\pmb{Y}$. We proceed to independently and identically draw $\acute{\pmb{X}}^{(1)}, \cdots, \acute{\pmb{X}}^{(B)}$ using the $k$-nearest neighbor local sampling mechanism based on $(\acute{\pmb{X}}, \pmb{Y}, \pmb{Z})$. Define $\chi\_{\alpha}^B:=\\{(\pmb{x},\pmb{x}^{(1)},\ldots , \pmb{x}^{(B)})\Big| \left[1+\sum\_{b=1}^B1(T(\pmb{x}^{(b)},\pmb{Y},\pmb{Z})\geq T(\pmb{x},\pmb{Y},\pmb{Z}))\right]\big/(1+B)\leq \alpha \\}$. The type I error of our test conditionally on $\pmb{Y}$ and $\pmb{Z}$ can be expressed as $P((\pmb{X},\widetilde{\pmb{X}}^{(1)},\ldots, \widetilde{\pmb{X}}^{(B)})\in \chi_{\alpha}^B|\pmb{Y},\pmb{Z})$. This expression can be decomposed into the sum of $P((\acute{\pmb{X}},\acute{\pmb{X}}^{(1)},\ldots ,\acute{\pmb{X}}^{(B)})\in \chi_{\alpha}^B|\pmb{Y},\pmb{Z})$ and $P((\pmb{X},\widetilde{\pmb{X}}^{(1)},\ldots ,\widetilde{\pmb{X}}^{(B)})\in \chi_{\alpha}^B|\pmb{Y},\pmb{Z})-P((\acute{\pmb{X}},\acute{\pmb{X}}^{(1)},\ldots ,\acute{\pmb{X}}^{(B)})\in \chi_{\alpha}^B|\pmb{Y},\pmb{Z})$. On one hand, the exchangeability of $B+1$ triples $(\acute{\pmb{X}}, \pmb{Y}, \pmb{Z}),(\acute{\pmb{X}}^{(1)}, \pmb{Y}, \pmb{Z}),\cdots,(\acute{\pmb{X}}^{(B)}, \pmb{Y}, \pmb{Z})$ under $H_0$, conditioned on $\pmb{Y}$ and $\pmb{Z}$, leads to the first term being smaller than $\alpha$. On the other hand, $(\acute{\pmb{X}}^{(1)},\ldots ,\acute{\pmb{X}}^{(B)})$, conditioned on $\acute{\pmb{X}},\pmb{Y}$ and $\pmb{Z}$, is generated from the same mechanism as $(\widetilde{\pmb{X}}^{(1)},\ldots ,\widetilde{\pmb{X}}^{(B)})$, conditioned on $\pmb{X},\pmb{Y}$ and $\pmb{Z}$. 
In other words, for all $\pmb{x}\in \mathbb{R}^n$, $((\widetilde{\pmb{X}}^{(1)},\ldots ,\widetilde{\pmb{X}}^{(B)})|\pmb{X}=\pmb{x},\pmb{Y},\pmb{Z})\overset{\mathrm{d}}{=}((\acute{\pmb{X}}^{(1)},\ldots ,\acute{\pmb{X}}^{(B)})|\acute{\pmb{X}}=\pmb{x},\pmb{Y},\pmb{Z})$. By Lemma 6 in the Supplementary Materials, we can bound the second term by the total variation distance between $\widehat{p}(\cdot|\pmb{Z})$ and $p(\cdot|\pmb{Z})$. Third, we discuss the non-trivial points in the power analysis of our method in Theorem 4. We utilize the Markov inequality to establish a connection between the power analysis and the consistency of the $\mbox{CMI}$ estimator. Mukherjee et al. (2020) considered the scenario where $p(x,z)p(y|z)$ (denoted as $g$) is known when establishing consistency. However, this distribution is unknown in practice. We investigate the consistency when incorporating the $1$-NN sampling (Algorithm 1) to approximate it. In this extension, we denote the joint density of $(X,Y',Z)$ generated through $1$-NN sampling as $\phi$. The CMI estimator $\widehat{\mbox{CMI}}$ based on Algorithm 2 is referred to as $\widehat{D}\_{KL}^{(n)}(f||\phi)$, and the true CMI of $(X,Y,Z)$ is denoted as $\mbox{CMI}:=I(X;Y|Z)=D\_{KL}(f||g)$. The key aspect of this extension involves decomposing $\widehat{D}\_{KL}^{(n)}(f||\phi)-D\_{KL}(f||g)$ into two components: $\widehat{D}\_{KL}^{(n)}(f||\phi)-D\_{KL}(f||\phi)$ and $D\_{KL}(f||\phi)-D\_{KL}(f||g)$. The first term is shown to be $o\_p(1)$ according to Mukherjee et al. (2020). By leveraging properties of point-wise minimizers of the binary cross-entropy loss, Jensen's inequality, and Lagrange's mean value theorem, we bound the second term by $d\_{TV}(\phi,g)$, which is also $o(1)$. The combination of these two results leads to the conclusion that $\widehat{D}\_{KL}^{(n)}(f||\phi)-D\_{KL}(f||g)=o_p(1)$, which confirms the consistency of our CMI estimator.
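The p-value from Equation (8) discussed above is a simple rank statistic over the $B$ resampled CMI estimates. A minimal sketch (function and variable names are ours, for illustration only):

```python
import numpy as np

def resampling_p_value(cmi_hat, cmi_resampled):
    """Rank-based p-value: p = (1 + #{b : CMI^(b) >= CMI_hat}) / (1 + B).

    `cmi_hat` is the CMI estimate on the observed data; `cmi_resampled`
    holds B estimates computed on pseudo-samples drawn under H0.
    """
    cmi_resampled = np.asarray(cmi_resampled)
    return (1 + np.sum(cmi_resampled >= cmi_hat)) / (1 + cmi_resampled.size)

# Observed estimate 0.30 against B = 4 pseudo-sample estimates,
# exactly one of which is at least as large:
print(resampling_p_value(0.30, [0.05, 0.35, 0.10, 0.02]))  # 0.4
```

The "+1" terms in both numerator and denominator make the p-value valid (never zero) under exchangeability of the observed and resampled statistics.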
**Question 1**: Please refer to the global response for details regarding the question "Why is the CMI estimated using classifiers rather than directly optimising (4) like f-GAN? Why not directly use the maximum of (4) to approximate the CMI?". --- Rebuttal Comment 1.1: Title: Thanks for Replying Comment: > Nevertheless, the MLMI-based method heavily depends on selecting appropriate basis functions that can effectively capture the information in Z, and it can face challenges due to the curse of dimensionality (Mukherjee et al., 2020). True. Thanks for clarifying. > Second, we discuss the non-trivial points in the type-I error control Thanks! These all make sense to me. I hope that in the revision, the authors can explain how their results are linked with Assumptions 1 and 2 (highlighting the necessity of smoothness in this context). In response to the satisfactory response to Weakness 1 and 3, I will raise my score by 1. --- Reply to Comment 1.1.1: Title: Thank Reviewer xTib Comment: Dear Reviewer xTib, Thanks for your positive feedback. We will incorporate your suggestions into the paper during the revision process. Thanks, Authors
Summary: The paper proposes a novel method to test conditional independence (CI) by estimating conditional mutual information (CMI). A kNN local sampling strategy is employed to empirically sample from an unknown conditional distribution. The method is proved to have the type-I error controlled and power converging to 1. The main contribution of the novel method consists of three points. First, it addresses challenges posed by high-dimensional conditioning variables and limited data samples. Second, it operates without assumptions about distribution forms or feature dependencies. Third, it avoids dataset splitting. Empirically, it is computationally efficient. Strengths: (1) Good originality. Using kNN as a sampling method to circumvent prior knowledge on the conditional distribution $p_{X|Z}(x|z)$ is a novel idea. (2) The algorithm is a high-quality one, in terms of enhanced computational efficiency. (3) The simulation settings are a nice touch to showcase that the novel method operates without assumptions about distribution forms. Weaknesses: (1) The paper claims an improvement upon the current literature under high-dimensional settings. But this contribution is not reflected in either theoretical results or simulations. Specifically, in the simulations the sample size is 500, while the dimension of $Z$ is at most 100. The results are still in the low-dimensional regime. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) The paper claims novelty in avoiding sample splitting. It seems to me that Algorithms 1 & 2 are still splitting data into $V_1$ and $V_2$. Does this splitting not weaken test power? (2) More runtime analysis would be appreciated. The paper's claim of enhanced computational complexity sounds legit, but requires more evidence to support it. (3) How do $\beta$ and $c_{d_Z}$ in Assumptions 1 and 2 affect the convergence rates in Theorems 2 and 4? (4) Line 180: "Furthermore, it intuitively suggests that our test can achieve high power under $H_1$".
This claim was based on $\hat{\text{CMI}}$ being positive whp. This quantity, however, is replaced by $\tilde{\text{CMI}}^{(b)}$ in practice. Is the latter still positive whp? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper claims that their novel method operates without assumptions about distribution forms or feature dependencies. However, the two assumptions on smoothness do not sound mild. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful review! **Weakness**: Please refer to the global response for details regarding the claim “high-dimensional conditioning variables". Further, we conduct additional experiments to assess the performance of our tests in higher-dimensional scenarios. We generate a simulation dataset based on 'Scenario I: The post-nonlinear model' in the paper. We keep the distribution as the standard Gaussian, maintain the sample size of $n=500$, and set the dimension of $Z$ to 200 and 500. We present the type I error rate and testing power in Table 1 in the pdf file attached in the global response, where the results are averaged over 100 independent trials. It is shown that our test maintains good control of the type I error and achieves high testing power. Notably, since LPCIT is very time-consuming when the dimension of $Z$ is high, we do not report the results of this test. **Question 1**. Please refer to the global response for details regarding the claim “our method avoids sample splitting". **Question 2**. First, we provide the total runtime and the time taken for a single test run for all methods on the real ABALONE dataset ($n=4177$) and the Flow-Cytometry dataset ($n=1755$). Table 2 in the pdf file attached in the global response shows that our test maintains a high level of computational efficiency even when handling large sample sizes (e.g., $n=4177$). Second, please refer to Lines 152-158 in the paper for more details. Third, we find that our method has a computational complexity of $O(B\cdot d_z \cdot n\log n)$. Further comparisons of computational complexity with other methods will be conducted in our future research. **Question 3**. In Theorem 2, $\beta$ is employed to establish an upper bound on the KL divergence between $p(x|Z)$ and $p(x|Z\_n^{(l)})$: $$D_{KL}\\{p(x|Z)||p(x|Z\_{n}^{(l)})\\}=\frac{1}{2}(Z_{n}^{(l)}-Z)^{T}I_{a}(Z)(Z_{n}^{(l)}-Z),$$ where $0\leq \lambda_{\max}(I_{a}(z))\leq \beta$.
Therefore, given that $\beta$ is finite, the convergence rate of $d\_{TV}\\{p(x|Z),\widehat{p}(x|Z)\\}$ is determined solely by the convergence rate of $\Vert Z\_{n}^{(l)}-Z\Vert\_{2}^{2}$ and remains unaffected by $\beta$, as indicated by the inequalities: \begin{align*} d\_{TV}\\{p(x|Z),\widehat{p}(x|Z)\\} \leq \sqrt{\frac{D_{KL}\\{p(x|Z),\widehat{p}(x|Z)\\}}{2}},\\ D\_{KL}\\{p(x|Z),\widehat{p}(x|Z)\\} = \sum\_{l=1}^{k} I\\{\xi = l\\} D\_{KL}\\{p(x|Z)||p(x|Z\_{n}^{(l)})\\}. \end{align*} In the context of power analysis as presented in Theorem 4, it is noteworthy that most CIT methods lack theoretical results pertaining to testing power. Examples include works by Runge et al. (2018), Berrett et al. (2020), and Li et al. (2023). Even when theoretical analyses of testing power are conducted, they never give the convergence rate of testing power. For instance, the method proposed by Shi et al. (2021) achieves consistency only against a subset of alternatives in $H_1$. Wang et al. (2022) specifically consider the high-dimensional linear model and analyze the asymptotic power under local alternatives. In our work, we achieve the consistency of our test against all alternatives stated in $H_1$ when analyzing testing power. This consistency heavily relies on the consistency of the CMI estimator. The CMI estimator's consistency hinges on whether the joint density of $(X,Y',Z)$ generated by the 1-NN sampling (Algorithm 1), denoted as $\phi$, approximates $p(x,z)p(y|z)$ (denoted as $g$) well, in terms of TV distance. To bound the TV distance between $\phi$ and $g$, we utilize $\beta$ and $c\_{d_z}$. 
Precisely, $d_{TV}(\phi,g)\leq b(n)$, where $$b(n)=\frac{1}{2}\sqrt{\frac{\beta}{4} \frac{C_5 2^{1/d_z} \Gamma (1/d_z)}{(n\gamma_{d_z})^{1/d_z}d_z}+\frac{\beta \epsilon_1 G(2c_{d_z}\epsilon_1^2)}{4}}+\exp \bigg{(}-\frac{1}{2}n\gamma\_{d_z}c\_{d_z}\epsilon_1^{d_z+2}\bigg{)}+G(2c\_{d_z}\epsilon_1^2),$$ where $\epsilon_1$ is small enough, $\Gamma(\cdot)$ is the gamma function, and $\gamma_{d}$ is the volume of the unit radius $l_2$ ball in $\mathbb{R}^d$. As indicated in Theorem 1 of Sen et al. (2017), as long as $\beta$ and $c\_{d_z}$ are finite (Gao et al., 2017), $d\_{TV}(\phi,g)$ can be small enough when $n$ is large enough. This ensures the consistency of our test against all alternatives stated in $H_1$. [1] Wang W., Janson, L. A high-dimensional power analysis of the conditional randomization test and knockoffs, Biometrika, 2022. **Question 4**. We have proved that $\widehat{\mbox{CMI}}$ is a consistent estimator of $\mbox{CMI}$ based on the original data set $(\pmb{X},\pmb{Y},\pmb{Z})$. Because the $\mbox{CMI}$ is greater than zero under $H_1$, $\widehat{\mbox{CMI}}$ is positive whp. When the distribution of $X|Z$ is known, we can draw pseudo samples $\pmb{X}^{(b)}$. In this case, the estimator $\widetilde{\mbox{CMI}}^{(b)}$ based on $(\pmb{X}^{(b)},\pmb{Y},\pmb{Z})$ converges to zero, but it is not necessarily positive. As a result, the $p$-value calculated using $p:=\\{1+\sum_{b=1}^{B}{1}(\widetilde{\mbox{CMI}}{}^{(b)}\geq \widehat{\mbox{CMI}})\\}/(1+B)$ is highly likely to be very small, which indicates the consistency of our test against all alternatives stated in $H_1$. In practice, the distribution of $X|Z$ is unknown. We use the $k$-NN local sampling strategy (Algorithm 3) to draw pseudo samples $\widetilde{\pmb{X}}^{(b)}$. In the proof of Theorem 4, we have demonstrated that $\widehat{\mbox{CMI}}^{(b)}$ calculated based on $(\widetilde{\pmb{X}}^{(b)},\pmb{Y},\pmb{Z})$ converges to zero, though it is not necessarily a positive value.
Consequently, the $p$-value calculated using $p:=\\{1+\sum_{b=1}^{B}{1}(\widehat{\mbox{CMI}}{}^{(b)}\geq \widehat{\mbox{CMI}})\\}/(1+B)$ is very small with high probability, indicating our test can achieve high power under $H_1$. **Limitation**: Please see the global response for details regarding the claim “our method operates without assumptions about distribution forms or feature dependencies". --- Rebuttal Comment 1.1: Comment: I appreciate the clarification from the authors. All my questions are answered. The weakness that I raised seems to be well explained. I'll raise my score by 1. --- Reply to Comment 1.1.1: Title: Thank Reviewer BxQV Comment: Dear Reviewer BxQV, Thanks for your positive feedback. Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed and considerate feedback. We offer the following clarifications. **A PDF file including 5 tables is attached.** 1. About the claim "**our method avoids sample splitting**", we emphasize that our proposed method **allows the entire dataset to be used in computing $\widehat{\mbox{CMI}}$**. The method in Li et al. (2023) requires splitting the dataset into two parts, and thus the dataset used for calculating the test statistics comprises only one-third of the total samples. This reduces the statistical power of the test, particularly when working with small datasets. In contrast, our proposed procedure uses the entire dataset in computing $\widehat{\mbox{CMI}}$, thereby avoiding the loss of testing power. In the computation of the CMI estimator using the entire dataset, we adopt a classifier-based approach. This approach necessitates the utilization of two datasets for a supervised classification task. As a result, the data splitting in this context occurs naturally and does not result in any loss of testing power. 2. Regarding the claim "**high-dimensional conditioning variables**", we have the following response. Recently, Cai et al. (2022), Kim et al. (2022), and Ai et al. (2022) have primarily concentrated on low-dimensional cases where the dimension of $Z$ does not exceed 2. Additionally, Runge (2018) employed a sample size of 500, while restricting the dimension of $Z$ to a maximum of 8. CI testing is a truly challenging task when handling high-dimensional conditioning variable sets (Scetbon et al., 2022). The concept "high dimensional" in CI testing refers to the scenario where the dimension of $Z$ is not low or is relatively high given the sample size (Scetbon et al., 2022). This concept is different from that of high-dimensional regression, where some sparsity assumptions may be needed. For example, Sen et al. (2017) and Scetbon et al.
(2022) both claimed they can handle the high-dimensionality of $Z$. In their simulation studies, Sen et al. (2017) set the sample size to be 1000 with the dimension of $Z$ being at most 150, and Scetbon et al. (2022) set the sample size to be 1000 with the dimension of $Z$ being at most 50. They also did not consider the case where the dimension of $Z$ diverges to infinity with the sample size in their theoretical results. In Figure 1 of our paper, we present the favorable performance of our tests for $n=300$ and the dimension of $Z$ equal to 80. In response to Reviewer BxQV, we also consider the dimension of $Z$ as high as $500$ when $n=500$. In contrast to previous works, the dimension of $Z$ we consider is relatively high. [1] Cai et al., A Distribution Free Conditional Independence Test with Applications to Causal Discovery, JMLR 2022. [2] Ai et al., Testing Unconditional and Conditional Independence Via Mutual Information, Journal of Econometrics 2022. 3. Regarding the claim "**our method operates without assumptions about distribution forms or feature dependencies**", we emphasize that, unlike Candes et al. (2018), our method operates successfully in practical applications without requiring the true distribution of $X|Z$ to be provided. We propose using the k-nearest-neighbor local sampling strategy to approximate $p\_{X|Z}(x|z)$. We establish in Section 4 that the total variation distance between the true distribution of $X|Z$ and the distribution of samples generated by the k-nearest-neighbor local sampling strategy tends to zero in probability as $n$ goes to infinity. However, in the theoretical part, we need Assumptions 1 and 2 to facilitate our proof. The two assumptions have been included in Gao et al. (2016, 2017) and Sen et al. (2017), even though they may not be the weakest conditions. Moreover, we can validate Assumption 1 when ($X, Z$) follows the multivariate Gaussian distribution (MVD) and Assumption 2 when $Z$ follows the MVD. 4.
About the question "**Why is the CMI estimated using classifiers rather than directly optimising (4) like f-GAN? Why not directly use the maximum of (4) to approximate the CMI?**", we have the following response. Belghazi et al. (2018) introduced a mutual information neural estimator (MINE) by maximizing equation (4) directly. We obtained the conditional version of this estimator to estimate CMI, referred to as C-MINE. This estimator does not require dataset splitting into training and testing sets. However, through simulations, we observed that C-MINE has a high variance and its optimization becomes unstable in high-dimensional settings. Similar conclusions have been drawn by Poole et al. (2019) and Mukherjee et al. (2020). Furthermore, C-MINE exhibits higher computational complexity than our classifier-based CMI estimator. As outlined in Algorithm 3, to obtain a single p-value, we need to estimate $B+1$ CMI values. Therefore, any method that can offer fast and accurate estimation will yield substantial benefits. Furthermore, we note that Mondal et al. (2020) directly optimized (4) like f-GAN to estimate CMI. However, the training of GANs is often challenging, with the risk of collapse if hyperparameters and regularizers are not carefully chosen (Dhariwal and Nichol, 2021). In contrast, our classifier-based CMI estimation method relies on a simple binary classifier and performs effectively even in high-dimensional scenarios. Importantly, our classifier-based approach demonstrates remarkable computational efficiency compared to the direct optimization of equation (4) by f-GAN. [1] Belghazi et al., Mutual Information Neural Estimation, ICML 2018. [2] Poole et al., On Variational Bounds of Mutual Information, ICML 2019. [3] Kumar Mondal et al., C-MI-GAN: Estimation of Conditional Mutual Information Using MinMax Formulation, UAI 2020. [4] Prafulla Dhariwal and Alex Nichol, Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021.
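To illustrate the classifier-based estimation discussed above, here is a minimal sketch of a plug-in KL estimator: with balanced classes, a binary classifier's log-odds approximate $\log(f/g)$, and averaging them over held-out $f$-samples estimates $D_{KL}(f||g)$. A tiny gradient-descent logistic regression stands in for the XGBoost classifier used in our experiments; this is a simplified illustration, not the exact Algorithm 2:

```python
import numpy as np

def fit_logistic(W, y, lr=0.5, steps=1000):
    """Tiny logistic regression via gradient descent (our stand-in
    classifier; the paper's experiments use XGBoost instead)."""
    Wb = np.c_[W, np.ones(len(W))]            # append an intercept column
    beta = np.zeros(Wb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Wb @ beta))
        beta -= lr * Wb.T @ (p - y) / len(y)  # average logistic gradient
    return beta

def classifier_kl_estimate(samples_f, samples_g, seed=0):
    """Plug-in KL(f || g): train on half the pooled data, then average the
    classifier's log-odds over the held-out f-samples.  Taking f as the
    joint (X,Y,Z) and g as the 1-NN pseudo-joint (X,Y',Z) turns this
    into a CMI estimate."""
    rng = np.random.default_rng(seed)
    W = np.vstack([samples_f, samples_g])
    y = np.r_[np.ones(len(samples_f)), np.zeros(len(samples_g))]
    idx = rng.permutation(len(W))
    train, test = idx[: len(W) // 2], idx[len(W) // 2:]
    beta = fit_logistic(W[train], y[train])
    log_odds = np.c_[W[test], np.ones(len(test))] @ beta
    return log_odds[y[test] == 1].mean()      # average log-ratio over f

rng = np.random.default_rng(0)
f = rng.normal(1.0, 1.0, size=(2000, 1))      # N(1,1)
g = rng.normal(0.0, 1.0, size=(2000, 1))      # N(0,1); true KL(f||g) = 0.5
print(classifier_kl_estimate(f, g))           # typically close to 0.5
```

The train/test split here is the "natural" splitting mentioned in point 1: the classifier must be trained and evaluated on disjoint samples, but every original data point still contributes to the estimate.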
Pdf: /pdf/21ee61b49429638184f7f9b05a6fd9c54df5422b.pdf
NeurIPS_2023_submissions_huggingface
2023
SPA: A Graph Spectral Alignment Perspective for Domain Adaptation
Accept (poster)
Summary: The authors present a new graph Spectral Alignment (SPA) framework for domain adaptation. The method consists of two main components: (i) a coarse graph alignment mechanism that uses a novel spectral regularizer to align domain graphs, and (ii) a fine-grained message propagation module that employs a neighbor-aware self-training mechanism to enhance discriminability in the target domain. Extensive experiments validate the effectiveness of SPA. Strengths: The paper is exceptionally well-organized, with a clear and concise presentation of the proposed method. The approach itself is highly innovative and addresses a critical tradeoff in graph alignment. The authors have done an excellent job of explaining the core components of their method, including the coarse graph alignment mechanism and the fine-grained message propagation module. Weaknesses: I have a few concerns regarding some details in the paper that I would like to address: 1. I noticed that the Neighbor-aware Propagation is abbreviated as "L_nas" in the paper. However, I think that "L_nap" would be a more appropriate abbreviation. 2. In Table 4, where the results are presented without L_gsa and L_nas, the average accuracy is reported to be 68.5%. However, I am confused as to why this result is higher than the accuracy reported for CDAN in Table 2. 3. I am also curious about the performance of SPA when it is not combined with existing methods such as CDAN. Have you conducted any experiments to evaluate the single SPA's performance in isolation? 4. I am curious to know whether the graph in question is considered to be heavy. Additionally, I would like to inquire about the number of parameters that are present within the graph. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No limitations are included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback, and we would like to address your concerns in detail below: > **[Weakness 1]** "[...] Neighbor-aware Propagation is abbreviated as "L_nas" in the paper. However, I think that "L_nap" would be a more appropriate abbreviation." **[Answer]** Thanks for your suggestion. We will modify this abbreviation for better presentation in the revised version. > **[Weakness 2]** "In Table 4, where the results are presented without L_gsa and L_nas, the average accuracy is reported to be 68.5%. However, I am confused as to why this result is higher than the accuracy reported for CDAN in Table 2." **[Answer]** In Table 2, following previous works [7, 12, 40], we compare against the performance of CDAN [44] reported in the original paper. Besides, just as Lines 182-184 state, we employ the standard cross-entropy loss with label-smoothing regularization [61] in our implementation. This label-smoothing regularization is directly applied to $L_{cls}$, which is not related to $L_{gsa}$ or $L_{nas}$. In this way, with this useful mechanism, even though the CDAN-based model is without $L_{gsa}$ and $L_{nas}$ in the first section of Table 4, the experimental results are better than the originally reported performance. In addition, following previous work [40], we also adopt a coefficient warm-up trick during the training process, which also helps improve the performance. More implementation details can be found in the code in our supplementary materials. > **[Weakness 3]** "I am also curious about the performance of SPA when it is not combined with existing methods such as CDAN. Have you conducted any experiments to evaluate the single SPA's performance in isolation?" **[Answer]** In Table 4, we point out that the experiments of the Ablation Study and Parameter Sensitivity are based on CDAN [44]. Other ablation studies default to DANN [17], which is exactly what our Eq. (5) $L_{total}$ describes.
Here, we provide more experimental details based on CDAN and DANN under the OfficeHome dataset setting, choosing Gaussian similarity as the metric function. | OfficeHome | A2C | A2P | A2R | C2A | C2P | C2R | P2A | P2C | P2R | R2A | R2C | R2P | avg | | ----------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | CDAN w/ $L_{rwk}$ | 59.8 | 79.1 | 84.1 | 74.5 | 79.9 | 82.1 | 73.1 | 57.9 | 85.0 | 77.8 | 61.4 | 87.8 | 75.2 | | CDAN w/ $L_{sym}$ | 59.9 | 79.1 | 84.4 | 74.9 | 79.1 | 81.9 | 72.4 | 58.4 | 84.9 | 77.9 | 61.2 | 87.7 | 75.1 | | DANN w/ $L_{rwk}$ | 60.4 | 79.7 | 85.0 | 73.6 | 81.3 | 82.1 | 71.9 | 58.0 | 85.2 | 77.4 | 61.0 | 88.1 | 75.3 | | DANN w/ $L_{sym}$ | 60.0 | 79.8 | 84.8 | 73.5 | 80.8 | 82.5 | 72.0 | 57.5 | 84.4 | 76.9 | 60.7 | 87.77 | 75.1 | From the above experimental results, we can see that our method works well without CDAN. > **[Weakness 4]** "I am curious to know whether the graph in question is considered to be heavy. Additionally, I would like to inquire about the number of parameters that are present within the graph." **[Answer]** Thanks for your thoughtful review. We can control the number of $k$-nearest neighbors for the graph, which is a hyper-parameter. Besides, during the graph construction process, we can also choose different metric functions for the edge weights. These experiments are offered in the Robustness Analysis in the experiments and appendix. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Ecsg, We have already heard back from Reviewer AnYf, Reviewer pjo2 and Reviewer Yk67. Do you have any further comments, concerns or questions towards our paper? Feel free to let us know. We are quite near the end of the discussion period :( -- The authors --- Rebuttal Comment 1.2: Title: Official Comment of Submission13641 by Reviewer Ecsg Comment: Thank you for answering my questions. My problems are solved, and I think the paper should be accepted.
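The rebuttal above notes that the graph is controlled by a $k$-nearest-neighbor hyper-parameter and a choice of metric function for the edge weights. A minimal sketch of such a batch-wise k-NN graph construction (illustrative only; the function name, the Gaussian bandwidth `sigma`, and the symmetrization step are assumptions, not the authors' implementation):

```python
import numpy as np

def knn_graph(feats, k=2, metric="gaussian", sigma=1.0):
    """Build a batch-wise k-nearest-neighbor graph with a chosen
    edge-weight metric (Gaussian or cosine similarity)."""
    if metric == "gaussian":
        # squared Euclidean distances between all pairs of embeddings
        d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2.0 * sigma ** 2))
    else:  # cosine similarity
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        W = f @ f.T
    np.fill_diagonal(W, 0.0)              # no self-loops
    A = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]   # k strongest neighbors per node
    rows = np.arange(len(feats))[:, None]
    A[rows, idx] = W[rows, idx]
    return np.maximum(A, A.T)             # symmetrize the adjacency matrix
```

The number of "parameters" of such a graph is just the nonzero edge weights, which is bounded by the batch size times $k$.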
Summary: The paper proposes a domain adaptation method called Spectral Alignment (SPA). The proposed SPA utilizes the spectral distance, defined as the difference between eigenvalues of the Laplacian matrix, to quantify the distribution gap. Then SPA employs this distance as a regularizer for domain alignment. Additionally, SPA incorporates pseudo-labeling for feature aggregation, enhancing the similarity between semantically related nodes. The experimental results on multiple datasets verify the effectiveness of the proposed algorithm. Strengths: Different from the previous domain adaptation methods, the paper introduces the spectral distance to measure the domain gap. In addition, the experiment results demonstrate the efficacy of this idea. Weaknesses: 1. The contribution of the paper is not clear. First, the paper directly employs a spectral distance from [1,20,59] without technical development to measure and narrow the distribution gap. Second, feature aggregation with pseudo labels has been widely used in domain adaptation. So what is the difference between the proposed method and previous work? 2. Figure 1 does not effectively demonstrate the motivation and innovative aspects of this paper. 3. Eq. (5) computes the total loss without consideration of loss scale. Specifically, Eq. (5) adopts four types of loss terms, L_{cls}, L_{adv}, L_{gsa}, L_{nas}. However, there are no trade-off parameters to balance the different losses. I am not sure about the convergence of the proposed method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The difference to the previous methods. 2. The convergence analysis of the proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors say that "The current method for constructing graph spectra is only a starting point and may be inadequate for more difficult scenarios such as universal domain adaptation. Additionally, this method is currently limited to visual classification tasks , and more sophisticated and generic methods to object detection or semantic segmentation are expected in the future" in line 324. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback, and we would like to address your concerns in detail below: > **[Weakness 1 + Question 1]** "The contribution of the paper is not clear. First, the paper directly employs a spectral distance from [1,20,59] without technical development to measure and narrow the distribution gap. Second, the feature aggregation with pseudo labels has been widely used in domain adaptation. So what the different of the proposed method to the previous work?" **[Answer]** Thanks for the feedback. First of all, the proposed definition of spectral distance is not the same as the existing distances [1,20,59]. In practice, different metric functions and graph Laplacians can lead to different model performance. More details on choosing different metric functions and graph Laplacians can be found in our experiments. Besides, our dynamic graphs motivate us to adopt a graph propagation method. We did not directly apply the classic propagation method to our model because of its computation cost. We optimized this classic method with batch-wise operations. We also made many technical improvements to jointly combine the graph spectral alignment and neighbor-aware propagation and pursue a balance between the transferability and discriminability of the model. > **[Weakness 2]** Figure 1 does not effectively demonstrate the motivation and innovative aspects of this paper. **[Answer]** Thanks for your thoughtful suggestion. This figure is mainly used to illustrate the constitution of our framework. We will polish this pipeline to highlight our motivation and novelty in the revised version. > **[Weakness 3 + Question 2]** "Eq. (5) compute the total loss without consideration of loss scale. Specifically, Eq. (5) adopts four types of loss terms, L_{cls},L_{adv},L_{gsa},L_{nas}. However, there is no trade-off parameters to balance different losses. I am not sure the convergence of the proposed method. The convergence analysis of the proposed method."
**[Answer]** Thanks for pointing out this problem. In our implementation, we indeed use loss scales. In Eq. (3), we also write out $\alpha$, which is a coefficient term properly designed to grow along with the iterations to mitigate the noise in the pseudo-labels at early iterations and avoid error accumulation. In the revised version, we will fix the scale coefficients for the other loss terms in Eq. (5). To analyze the loss scales of SPA, here are more sensitivity-analysis experiments based on DANN [17] under the OfficeHome dataset setting, with Gaussian similarity chosen as the metric function. The experimental results are shown in the following tables. Fixing the coefficient of $L_{nas}$ = 0.2, the coefficient of $L_{gsa}$ changes from 0.1 to 0.9. | The coefficient of $L_{gsa}$ = | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | $\Delta$ | | ------- | ---- | ---- | ---- | ---- | ---- | -------- | | A2C | 60.0 | 59.8 | 60.3 | 60.1 | 59.0 | 1.3 | | C2R | 82.4 | 82.1 | 82.3 | 81.8 | 82.0 | 0.6 | Fixing the coefficient of $L_{gsa}$ = 1.0, the coefficient of $L_{nas}$ changes from 0.1 to 0.9. | The coefficient of $L_{nas}$ = | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 | $\Delta$ | | ------- | ---- | ---- | ---- | ---- | ---- | -------- | | A2C | 59.0 | 60.4 | 59.4 | 58.9 | 58.3 | 2.1 | | C2R | 81.0 | 82.1 | 82.8 | 83.4 | 83.2 | 2.4 | From this series of results, we can find that on the OfficeHome dataset, different coefficient choices lead to similar results, which means that SPA is insensitive to these coefficients. Besides, we also visualize the curves of training loss and accuracy to support the convergence of our method, and we put these figures in the .pdf file. We hope these extra experiments and analysis can address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for your reply and your efforts in the additional experiments.
However, I would not change my score since the authors did not provide a theoretical analysis to illustrate the differences between the algorithm in this article and previous work. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reply. As stated in both the paper and our first reply, the theoretical analysis of spectral graph alignment is indeed not presented. However, we argue that a theory connecting graph spectra and graph matching is missing not only in our paper but also in the entire field of graph matching, so making a theoretical advancement is difficult and certainly beyond the scope of this paper. We thank the reviewer for pointing out the related works above. Nevertheless, we went through all these papers [1, 20, 59], and we believe they mostly belong to a different field of the mathematical sciences. By contrast, among the DA papers published in recent years [7, 13, 30, 47, 80, 82], very few actually approached the problem as a graph matching problem from the theoretical aspect. Currently, the entire field of graph matching pays more attention to optimal solutions, leaving a relatively blank understanding of learning and generalization. Putting these points together, we believe resolving this issue is better suited to a set of future works. Nevertheless, we are still immensely grateful for the kind words and positivity the reviewer has expressed.
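For readers following this thread, the spectral distance under discussion can be sketched as below (a minimal illustration assuming the symmetrically normalized Laplacian and an L2 distance between sorted eigenvalue spectra; the paper's exact definition may differ):

```python
import numpy as np

def sym_laplacian(A):
    """Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def spectral_distance(A_s, A_t):
    """Quantify the gap between a source-domain graph and a target-domain
    graph as the distance between their sorted Laplacian eigenvalue spectra."""
    ev_s = np.sort(np.linalg.eigvalsh(sym_laplacian(A_s)))
    ev_t = np.sort(np.linalg.eigvalsh(sym_laplacian(A_t)))
    return float(np.linalg.norm(ev_s - ev_t))
```

Minimizing such a distance between per-domain batch graphs pushes their spectra, and hence their coarse topological structure, closer together.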
Summary: The paper proposed a graph SPectral Alignment (SPA) framework which uses a coarse graph spectral alignment and a neighbor-aware self-training mechanism. The authors compared the proposed method with different UDA methods using several benchmark datasets. Strengths: The paper is generally well-written and clear. The results show the superior performance of the method in terms of accuracy on different datasets. Weaknesses: However, the paper misses some details that need to be completed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Please add more details on how to construct a dynamic graph, especially for the weighted edges and metric function. 2. Please explain the reason why decreasing the distance of graphs is useful for solving the DA problem. 3. Line 121, ``with n vertices can be from the spectrum of any other graph with n vertices". What is the meaning of n? Do both graphs have to require n? 4. Why use weighted k-Nearest-Neighbor classification to obtain the pseudo-labels? 5. Please add more analysis about the results of the experiments, such as Line 226, ``The experiments shows SPA consistently ourperforms than various of DA methods". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks sincerely for the constructive reviews, and we have made great efforts to address all these concerns. > **[Weakness 1]** "Please add more details that how to construct a dynamic graph, especially for weighted edge and metric function." **[Answer]** Thanks for your suggestion. We put the different choices of metric function in the robustness analysis in the experiments and leave out some repeated information because of page limits. In the revised version, we will follow your suggestion to add more details and analysis in Section 3.1 and the experiments. > **[Weakness 2]** Please explain the reason why decreasing the distance of graphs is useful for solving da problem. **[Answer]** Thanks for your suggestion. The final goal of domain adaptation is to learn domain-invariant representations (Line 20). In our paper, we first construct dynamic graphs within the source domain and target domain separately and then utilize a spectral distance to project these graphs into the spectral space. After spectral alignment, source domain graphs are closer to target domain graphs in the latent space (Lines 144-145). With message propagation, this alignment is further encouraged. In this way, the whole framework can perform well in the domain adaptation scenario. > **[Weakness 3]** Line 121, ``with n vertices can be from the spectrum of any other graph with n vertices". what's the meaning of n? Do both graphs have to require n? **[Answer]** Thanks for your question. In our paper, the node size 'n' is exactly the batch size, which is a hyper-parameter. Under our setting, following the definition of a classic graph matching problem, source domain graphs always have the same 'n' as target domain graphs. Graph matching aims at finding the vertex correspondence between two unlabeled graphs that maximizes the total edge weight correlation.
In other words, we expect two randomly sampled subsets from the source/target domains to share similar intrinsic topological properties in the latent space and thus have great transferability. There are also some papers that study inexact graph matching, which allows matching two graphs of different sizes. We leave the extension to inexact graph matching for future work. > **[Weakness 4]** "Why uses weighed k-Nearest-Neighbor classification to obtain the pseudo-label?" **[Answer]** Thanks for your question. We'd like to show the superiority of our kNN pseudo-labeling algorithm from two aspects. **Motivation**: we can regard the weighted kNN method as a type of regularization. Our source domain graph and target domain graph are each within a single domain and follow the homophily assumption. Compared with the single-point pseudo-labeling method, we believe the kNN method is able to encourage the smoothness of predictions among neighbors and hence achieve better performance. **Technique**: after graph spectral alignment, the rich structure information is coarsely transferred to the target domain and then we hope to align the fine-grained intra-domain information (Lines 146-147). Message propagation is encouraged to perform this alignment. The classic label propagation method can be represented as $\mathbf{Z} = (\mathbf{I} - \alpha \mathbf{A})^{-1} \mathbf{Y}$. The rows of $\mathbf{Y}$ corresponding to labeled examples are one-hot encoded labels and the rest are zero. The $\alpha \in [0,1)$ is a parameter. The class prediction for an unlabeled example $x_i$ is $\hat{y} = \arg \max_{j} z_{i, j}$, where $z_{i, j}$ is the $(i, j)$ element of the matrix $\mathbf{Z}$. However, the computation of the propagation matrix $\mathbf{Z}$ is usually performed on all data samples, and implicitly requires access to all embedding vectors at each step of computation.
Therefore, we focus on batch-wise operations and avoid solving the above linear system of message propagation by using the weighted kNN classification algorithm [72] to generate pseudo-labels for the target domain graphs. > **[Weakness 5]** "Please add more analysis about the results of experiments, such as Line 226, 'The experiments shows SPA consistently outperforms than various of DA methods'." **[Answer]** Thanks for your suggestion. We leave out some information because of page limits, and more analysis of the experimental results can be found in the appendix. In the revised version, we will follow your suggestion to add more details and analysis for better presentation. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response and have changed the rating score. --- Reply to Comment 1.1.1: Comment: Thank you for the support!
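The batch-wise weighted kNN pseudo-labeling described in this reply can be sketched as follows (illustrative; the cosine similarity and similarity-weighted voting are assumptions rather than the exact algorithm of [72]):

```python
import numpy as np

def knn_pseudo_labels(src_feats, src_labels, tgt_feats, k=3):
    """Assign each target sample the class with the largest
    similarity-weighted vote among its k nearest source neighbors,
    avoiding the (I - alpha*A)^{-1} Y linear system of full propagation."""
    num_classes = int(src_labels.max()) + 1
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sim = t @ s.T                            # (n_tgt, n_src) cosine similarities
    idx = np.argsort(-sim, axis=1)[:, :k]    # k most similar source samples
    votes = np.zeros((len(tgt_feats), num_classes))
    for i, neigh in enumerate(idx):
        for j in neigh:
            votes[i, src_labels[j]] += sim[i, j]   # weighted class vote
    return votes.argmax(axis=1)
```

Because only the current batch's embeddings are touched, the cost stays bounded by the batch size rather than the full dataset.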
Summary: This paper proposes a novel graph spectral alignment framework for unsupervised domain adaptation, jointly balancing the inter-domain transferability and the intra-domain discriminability. First, they cast the domain adaptation problem to graph primitives by composing a coarse graph alignment and aligning domain graphs in eigenspaces. Second, they perform a fine-grained message passing in the target domain via a neighbor-aware self-training mechanism. The first alignment method gives rise to coarse-grained topological structures transfer across domains but in a more intrinsic way than restrictive point-wise matching. With the help of fine-grained neighbor-aware propagation, the whole framework is able to refine the transferred topological structure to produce a discriminative domain classifier. Strengths: 1. This paper is easy to follow with clear writing, concise figures, and well-structured formulation in problem definition and methods. 2. The research problem is essential and motivation is clear. This paper gives a new perspective for the essential problem of how to find a suitable utilization of intra-domain information and inter-domain information in unsupervised domain adaptation. 3. The proposed graph spectral alignment method is an attractive method in domain adaptation, which works well and inspires follow-up research. 4. The experiment part is comprehensive, including 4 commonly used datasets in domain adaptation. Weaknesses: 1. To support the claim of balancing transferability and discriminability, the authors offer the figures of visualized features. More ablation studies to express the balance can be offered. 2. The introduction of this paper is less informative. More technical details of the proposed method can be provided there to improve legibility, as well as more comparative description of previous works. 3. Some grammars need to be improved. Line 120-122, the long sentence with two predicates. Line 241, ‘achieve’ should add ‘s’. 
Line 265, ‘presents’ should remove ‘s’. Line 270, ‘also use’ should add ‘s’. Line 283, ‘, By’ should use period. Line 289-292, ‘propose’, ‘formulate’, ‘calculate’ should add ‘s’. Line 297, ‘comes out’ should remove ‘s’. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The questions are mentioned in weakness. More ablation studies can be offered. More information can be added to introduction. 2. For related work part, more latest references about graph methods can be added. 3. For the Laplacian matrices, you report the results based on two normalized types. Did you try any other types? 4. The authors mentioned the relation between kNN pseudo-labeling and graph propagation, which is confusing for me. Can you explain more on this? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors have addressed the limitations of their work at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for these thoughtful comments, and we would like to address your concerns in detail below: > **[Weakness 1 + Question 1]** "To support the claim of balancing transferability and discriminability, the authors offer the figures of visualized features. More ablation studies to express the balance can be offered." **[Answer]** Thank you for expressing your concern regarding the balance between the transferability and discriminability of our method. We offer feature visualizations of the C->R setting of the OfficeHome dataset in the main paper, and more visualized features of the A->D and A->W settings of the Office31 dataset and the A->C setting of the OfficeHome dataset in the appendix. Besides, we offer a Transferability and Discriminability section in the appendix. We use the A-distance [27] to measure the distribution discrepancy, which shows that our SPA model achieves a lower generalization error. Furthermore, following previous work [3], we also report the source accuracy and target accuracy specifically in Figure 2b and Figure 2c in the appendix, which illustrate that our SPA can always achieve higher target accuracy. Combined, all of these experimental results reveal that our SPA enhances transferability while still keeping strong discriminability. We hope this can address your concern. > **[Weakness 2 + Question 1]** "The introduction of this paper is less informative. More technical details of the proposed method can be provided there to improve legibility, as well as more comparative description of previous works. [...] More information can be added to introduction." **[Answer]** Thanks for your suggestion. We leave out some information because of page limits. More comparisons with previous works are in the related work section. We will add more details and highlights to the introduction in the revised version as the reviewer suggests. > **[Weakness 3]** "Some grammars need to be improved. Line 120-122, [...]" **[Answer]** Thank you for pointing out these grammar problems.
We will fix these grammar issues for clearer presentation in the revised version. > **[Question 2]** For related work part, more latest references about graph methods can be added. **[Answer]** Thank you for your suggestion. Although our method focuses mainly on graph matching and alignment methods for domain adaptation, we will improve our references with more papers about graph data mining methods or graph neural networks for domain adaptation, as suggested by the reviewer. > **[Question 3]** For the Laplacian matrices, you report the results based on two normalized types. Did you try any other types? **[Answer]** Thanks for your question. We choose the random walk Laplacian matrix and the symmetrically normalized Laplacian matrix for the robustness analysis in the experiment section. These two types of graph Laplacians are commonly used among graph methods because they have very good properties, such as being symmetric, positive semi-definite, and having non-negative eigenvalues, which is an important basis for the following spectral alignment. Furthermore, here we offer some extra experimental results for the unnormalized Laplacian matrix $L = D - A$ with cosine similarity and Gaussian similarity respectively in the following: | OfficeHome | A2C | A2P | A2R | C2A | C2P | C2R | P2A | P2C | P2R | R2A | R2C | R2P | avg | | ---------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | w/ $cos$ | 42.1 | 67.9 | 71.9 | 38.8 | 51.2 | 55.7 | 55.8 | 32.4 | 71.9 | 66.1 | 48.7 | 80.2 | 56.9 | | w/ $gas$ | 50.2 | 65.7 | 75.4 | 56.3 | 53.3 | 61.1 | 63.2 | 44.7 | 66.8 | 64.2 | 47.5 | 72.1 | 60.1 | These experimental results also illustrate that the normalized graph Laplacians with good properties perform better than the unnormalized one. > **[Question 4]** The authors mentioned the relation between kNN pseudo-labeling and graph propagation, which is confusing for me. Can you explain more on this?
**[Answer]** Thanks for your thoughtful review. The graph propagation method can be represented as $\mathbf{Z} = (\mathbf{I} - \alpha \mathbf{A})^{-1} \mathbf{Y}$. The rows of $\mathbf{Y}$ corresponding to labeled examples are one-hot encoded labels and the rest are zeros. The $\alpha \in [0,1)$ is a parameter. The class prediction for an unlabeled example $x_i$ is $\hat{y} = \arg \max_{j} z_{i, j}$, where $z_{i, j}$ is the $(i, j)$ element of the matrix $\mathbf{Z}$. Based on our constructed graphs, it is intuitive for us to adopt graph propagation. However, the computation of $\mathbf{Z}$ is performed on all data samples, and implicitly requires access to all embedding vectors at each step of computation. If the size of the dataset increases, it would be intractable to finish the computation of the matrix $\mathbf{Z}$. We focus on producing labels for the unlabeled target domain graphs in each iteration. Therefore, we avoid solving the above linear system of classic label propagation by using the weighted kNN classification algorithm [72] to generate pseudo-labels for the target domain graphs. --- Rebuttal Comment 1.1: Comment: Thanks for your reply.
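The Laplacian variants referenced in this thread (random-walk, symmetrically normalized, and the unnormalized $L = D - A$) can be computed as in the following sketch (illustrative only; assumes a dense adjacency matrix with no isolated nodes):

```python
import numpy as np

def laplacians(A):
    """Return the three graph Laplacian variants:
    L = D - A,  L_rw = I - D^{-1} A,  L_sym = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)                     # node degrees (assumed nonzero)
    L = np.diag(d) - A                    # unnormalized Laplacian
    L_rw = np.eye(len(A)) - A / d[:, None]
    d_is = 1.0 / np.sqrt(d)
    L_sym = np.eye(len(A)) - (A * d_is[:, None]) * d_is[None, :]
    return L, L_rw, L_sym
```

The symmetric variant is positive semi-definite with non-negative eigenvalues, and the random-walk variant is similar to it (same spectrum), which is the property the rebuttal cites as the basis for spectral alignment.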
Rebuttal 1: Rebuttal: We thank the reviewers for their positive views about different aspects of our work, which say that: * our proposed method is: "an attractive method which works well and inspires follow-up research" (Reviewer AnYf), "different from the previous domain adaptation methods" (Reviewer Yk67), "highly innovative" (Reviewer Ecsg) * our motivation is: "essential and clear" (Reviewer AnYf), "addresses a critical trade-off in graph alignment" (Reviewer Ecsg) * our formulation is: "well-structured formulation in problem definition and methods" (Reviewer AnYf), "an excellent job" (Reviewer Ecsg) * our presentation is: "easy to follow" (Reviewer 54MB, Reviewer AnYf), "well-written and clear" (Reviewer pjo2), "clear writing, concise figures" (Reviewer AnYf), "a clear and concise presentation of the proposed method" (Reviewer Ecsg) * our experiment is: "pretty good" (Reviewer 54MB), "comprehensive" (Reviewer AnYf), "superior" (Reviewer pjo2), "the experiment results demonstrate the efficacy of this idea" (Reviewer Yk67) We also thank the reviewers for all the other detailed and insightful comments. We would like to address their concerns in detail below. Pdf: /pdf/93262bf41997cef83cf364270958859ac068fdc1.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This work proposed a graph spectral alignment model for unsupervised domain adaptation. They evaluated on several benchmarks and show good performance, especially on DomainNet and Office-Home, compared with existing UDA methods. Generally, the paper is easy to follow. Strengths: The paper is generally easy to follow and the experimental results on DomainNet and Office-Home are pretty good. Weaknesses: The novelty of this work is limited. The graph alignment is simply extended from BSP [12] and Graph Matching like SIGMA [39]. The Neighbor-aware Self-training Mechanism is similar to embedding propagation. In this sense, there is limited novelty in terms of the methodology contribution. The performance on other benchmarks like Office-31 and VisDA seems no better than others. The visualization of feature embedding is not informative enough to validate the graph alignment. It is essential to visualize the domain-wise graphs, and how are they aligned? This can show more insights. For example, in Office-Home, their model performs better when A is the target domain. Then what is the key reason? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What are the cross-domain graphs? How are they aligned? What is the key difference in terms of the two proposed terms? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The experimental results are not diverse, mainly on classification performance. The model is incremental over existing methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable time and review, and we have made great efforts to address all these concerns below: > **[Weakness 1]** "The novelty of this work is limited. The graph alignment is simply extended from BSP [12] and Graph Matching like SIGMA [39]. The Neighbor-aware Self-training Mechanism is similar to embedding propagation. In this sense, there is limited novelty in terms of the methodology contribution." **[Answer]** Thanks for the feedback. As we stated in the paper (Lines 295-297 and Line 316), both works ([9] and [39]) are indeed related to ours. The key finding of BSP [9] is that the eigenvectors with the largest singular values will dominate the feature transferability; thus, BSP penalizes the largest singular values to improve transferability. SIGMA [39] focuses on the category-level adaptation of object detection. Technically, their bipartite graph matching is a type of exact graph matching, requiring multiple matching stages for nodes and edges respectively. Despite that, as is stated by multiple previously published papers [9, 12, 32], the very core and crucial challenge of DA (and UDA) is attributed to balancing inter-domain transferability and intra-domain discriminability (in our paper: Lines 23-27). Approaching this balancing problem is not trivial since remarkable transferability is usually gained at the expense of worse discriminability (in our paper: Lines 30-31). Our work's major contribution centers on this core problem, as we develop a joint adjusting mechanism of a graph spectral alignment and a message propagation module. To make the graph propagation feasible in UDA, we present a novel batch-wise propagation operator and develop the self-training propagation method, which is not a direct application of existing methods.
In comparison with the two works the reviewer mentioned, while we do believe they make substantial contributions to the community, we believe neither work concerns this essential balancing problem. > **[Weakness 2]** "The performance on other benchmarks like Office-31 and VisDA seem not better than others." **[Answer]** First, while the reviewer (rightfully) points this out, we may still want to list the performance gain of our approach overall: +2.6% on OfficeHome, +0.5% on VisDA, neck-and-neck performance with the current SOTA [48] on Office-31, and **most notably** the large-scale DomainNet with +8.6%. The reviewer also partially credited our empirical performance, as written in the Strengths section of the review :) Given that, we urge the reviewer to reconsider the empirical aspect of our approach, especially the significant performance gain on the large-scale benchmark (DomainNet). This result is somewhat rare, as we posit that a fair number of prior papers did not report the original DomainNet in their experiments. > **[Weakness 3 + Questions]** "The visualization of feature embedding is not informative enough to validate the graph alignment. It is essential to visualize the domain-wise graphs. What are the cross-domain graphs? How are they aligned? What is key difference in terms of the proposed two terms?" **[Answer]** There seems to be some misunderstanding here. The graphs we build in our approach always belong to an individual domain. As such, we do not have domain-wise or cross-domain graphs in that sense. We kindly ask the reviewer to revisit Lines 96-104 for the graph definition, as well as the visualization plots in both the experiments and the appendix. > **[Weakness 4]** "For example, in Office-Home, their model performs better when A is target domain. Then what is the key reason?" **[Answer]** This is a good observation.
As we can see from Table 2, the performances across different setups do exhibit some patterns: C2A > A2C, A2P > P2A, A2R > R2A, as rightfully pointed out by the reviewer. However, all the baselines in Table 2 exhibit the same pattern --- not just our approach. We postulate that these closely aligned results reveal intrinsic properties of these public datasets --- and, perhaps more importantly, a difference in difficulty across the different domain-transfer pairs. > **[Limitations]** "The experimental results are not diverse, mainly on classification performance." **[Answer]** Diving into the literature, most papers published at past ML venues (such as ICML, ICLR and NeurIPS) on DA, DG or UDA [9, 12, 17, 18, 31, 32, 44, 48] have commonly adopted the benchmarks we report in our paper. We believe this is because this line of work mostly focuses on the fundamental ML problem behind UDA, rather than a specific task. On the other hand, we do acknowledge that there are papers covering other, more diverse tasks such as object detection [39], semantic segmentation [36], or NLP [8, 10]. Different from the ML line of work, these papers are dedicated to a specific task rather than promoting the generality of the method. That said, given the standard empirical protocol of UDA, we believe our experiments suffice to establish our contribution. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 54MB, Hi! Since the beginning of the discussion, we have already heard back from three other reviewers (AnYf, pjo2, Yk67). And luckily, it seems that all the other reviewers have reached a consensus of positivity towards our paper :) We humbly ask the reviewer whether there are any other questions, concerns, comments or responses. Feel free to post them while we can still clarify and help, before this discussion phase ends, and it will end soon :( Thank you! -- The authors
null
null
null
null
null
null
ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion
Accept (poster)
Summary: The paper introduces ProtoDiff, a novel addition to the family of Prototypical Network methods that uses a diffusion model to obtain better class prototypes at inference time. Prototypical Networks perform classification with metric-based comparisons between examples and “class prototypes”, variables computed based on support sets of class examples. In ProtoDiff, a “vanilla” prototype is first computed based on the class support set. This vanilla prototype is then used as conditional input to a diffusion model, computing over T steps a residual between the vanilla prototype and a better, “diffused” prototype for the class. This diffused prototype is then used to perform traditional prototypical network classification. To train the diffusion model at meta-training time, one has to produce “overfitted” prototypes for tasks, based on both the query set and support set for the task. The authors then demonstrate the effectiveness of their approach with a series of experiments for within-domain few-shot learning, cross-domain few-shot learning, and few-task few-shot learning. Strengths: The combination of prototypical networks and diffusion models is interesting: applying insights from recent developments in diffusion models to the field of meta-learning is still under-explored. The paper is clearly presented, the procedure itself is a novel combination of existing ideas, and is clearly motivated and easily argued for. The experiments seem to support the claim that the diffusion model specifically is what is needed to edge out any remaining performance missing from the original prototypical network approach, in the form of a residual. Weaknesses: Overall, the paper doesn’t have any glaring flaw, except that the experimental gains seem marginal. In practice, the idea behind the paper seems to be about squeezing any additional percentage point of performance out of the existing technique of prototypical networks. 
This is especially apparent with the fact that the diffusion model is used to produce a residual, and is also itself conditioned on the “vanilla” prototype. As for related work, it would be interesting to explore the connection with *Nava et al. (2022). Meta-Learning via Classifier (-free) Diffusion Guidance.* (https://arxiv.org/abs/2210.08942). They also apply diffusion models in the context of meta-learning, using a similar metric classification technique for the downstream tasks, even though they approach the problem with the goal of conditionally diffusing a model’s parameters, instead of class prototypes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would ask the authors to further elaborate on the positioning of their work within the meta-learning literature, arguing for the choice of conditioning the diffusion model on the existing “vanilla” prototype, and comparing it with possible alternative choices, such as specifically learnt class embeddings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors satisfactorily discuss the limitations of their work in the main paper text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
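The metric-based classification the summary above describes (class-mean prototypes, queries assigned by distance) can be sketched as follows. This is our own minimal illustration of standard Prototypical Network inference, not the authors' code, and all names are ours:

```python
import torch

def prototypical_logits(support_feats, support_labels, query_feats, num_classes):
    """Class prototypes are the means of support embeddings; queries are
    classified by softmax over negative squared Euclidean distances."""
    prototypes = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])  # [num_classes, dim]
    dists = torch.cdist(query_feats, prototypes) ** 2  # [num_queries, num_classes]
    return -dists  # feed to softmax / cross-entropy

# tiny usage example with two well-separated clusters
support = torch.tensor([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels = torch.tensor([0, 0, 1, 1])
queries = torch.tensor([[0.05, 0.05], [5.0, 5.05]])
preds = prototypical_logits(support, labels, queries, 2).argmax(dim=1)
# preds should be [0, 1]
```

In ProtoDiff, the `prototypes` computed this way are the "vanilla" prototypes that the diffusion model then refines before the same distance-based classification is applied.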
Rebuttal 1: Rebuttal: ***Thank you for your time, effort, and constructive comments.*** * **Weakness** Answer: Thank you for the thoughtful feedback and the reference to the work by Nava et al. We acknowledge the concern that the experimental gains might seem marginal and that the focus appears on incremental improvements within prototypical networks. However, it is essential to highlight that applying a diffusion model in ProtoDiff represents a novel way to generate prototypes, moving beyond simple performance squeezing. By using a task-guided diffusion model to produce a residual and allowing for task-specific prototype generation, ProtoDiff opens a new avenue in prototype learning. This approach offers a subtle but significant departure from conventional methods, allowing for more robust and nuanced class representations. As for the connection with the paper by Nava et al., we agree that exploring similarities and differences between our approach and theirs would enrich our work. While our focus is on the diffusion of class prototypes, their method applies diffusion models in a broader context of the model's parameters. We will review this paper in depth and include a discussion in our revised manuscript to clarify how ProtoDiff fits within this broader research landscape. * **Questions** Answer: **Position in the Meta-Learning Landscape**: As far as we know, our ProtoDiff is the first approach that integrates a diffusion model for generating prototypes in few-shot meta-learning. This design choice facilitates seamless integration with any metric-based meta-learning method for mutual benefit. **Rationale for Conditioning**: We utilize the “vanilla” prototype as a guiding condition, gradually enabling the diffusion model to generate class-specific prototypes. We also explored two alternative strategies for conditioning in the following table: one without any conditions and another with learned class embeddings. 
When not conditioned, the performance tends to be arbitrary due to the absence of class-specific cues during diffusion. On the other hand, employing learned class embeddings as a condition yielded subpar results compared to the vanilla prototype, potentially due to outliers in each class's learned embeddings. We plan to elaborate on these insights in our revised manuscript.

|Conditions|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|W/o conditions|20.27$\pm$0.25|22.32$\pm$0.15|
|Learnt class embeddings|66.17$\pm$0.22|81.17$\pm$0.16|
|Vanilla prototype|66.63$\pm$0.21|83.48$\pm$0.15|

--- Rebuttal Comment 1.1: Title: Reply to the Authors Comment: We thank the Authors for their detailed response, which addresses the concerns I raised. I look forward to a broader comparison with the research landscape in the final version. Meanwhile, I am willing to raise my score to a 7. --- Reply to Comment 1.1.1: Comment: We appreciate your feedback and are pleased to hear that our detailed response has addressed your concerns. We will certainly incorporate a broader comparison with the research landscape in the final version of the manuscript. Your willingness to raise the score to a 7 is greatly appreciated.
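The conditioning choice discussed in this rebuttal (the vanilla prototype fed to the denoiser alongside the noisy residual and timestep) can be sketched as follows. This is purely our own illustration: the paper reports a GPT-2-style transformer denoiser, whereas a small MLP stands in here, and all names are hypothetical:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Toy conditional denoiser: given a noisy prototype residual, the vanilla
    prototype as condition, and the diffusion timestep, predict the injected
    noise. Dropping the condition from the concatenation would correspond to
    the "w/o conditions" ablation row."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, noisy_residual, vanilla_proto, t):
        # broadcast the timestep as one extra scalar feature per prototype
        t_feat = torch.full_like(noisy_residual[..., :1], float(t))
        x = torch.cat([noisy_residual, vanilla_proto, t_feat], dim=-1)
        return self.net(x)

model = ResidualDenoiser(dim=8)
out = model(torch.randn(5, 8), torch.randn(5, 8), 10)
```

Swapping `vanilla_proto` for a learned per-class embedding table would give the "learnt class embeddings" variant of the ablation.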
Summary: This paper proposes using the diffusion model to learn a prototypical network. During the meta-train stage, an overfitted prototype (by fine-tuning the classifier until it gets a probability of 1 for the task) is obtained first; then, given a vanilla prototype (by simply averaging the features) as input, the diffusion model learns to denoise a random prototype into the overfitted prototype. This is because the overfitted prototype is unknown during the meta-test stage; the diffusion model is therefore used to obtain this overfitted prototype by using the vanilla prototype as conditional information. Strengths: - the whole design of leveraging the diffusion model to predict the prototype is quite interesting and novel. That is, the overfitted prototype is indeed better than the vanilla prototype (it can be seen as an "optimal" prototype for a given task), but obtaining it is not possible during the meta-test stage because the query set is not available. Hence, using the diffusion model to capture the distribution of how a vanilla prototype can be turned into an overfitted prototype makes good sense. Weaknesses: - as the diffusion model is data-hungry, I believe when the number of available tasks is low and not diverse, this proposed model may not perform better than a conventional model. The author should also include this analysis to make the paper more complete. - the proposed model has increased training time due to overfitting the prototype and a high number of timesteps to predict the prototype, as mentioned by the author. The author should also include this analysis to let readers know how much the time increases over conventional methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I believe Eq.8 has a typo. - What is the ratio of the increased training time over the conventional model? - What is the case when the number of available tasks is low and not diverse? Will the conventional model perform better or not? 
- How can this model be applied in production compared to a conventional model? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - the author mentioned two limitations; one is that the proposed model requires a substantial number of timesteps to sample the prototype during the meta-test stage. - secondly, obtaining the overfitted prototype requires increased training time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Thank you for your positive feedback and for recommending acceptance of our paper.*** * **Fewer and less diverse available tasks** Answer: We acknowledge the reviewer's point regarding the data-intensive nature of diffusion models. Indeed, performance can be impacted when faced with a limited number of training tasks or less diverse tasks. Nevertheless, our original manuscript's Table 4 offers insights into cross-domain few-shot learning, indicating that even in less diverse scenarios, ProtoDiff maintains competitive results. Table 5 further illustrates our model's resilience under the constraints of the few-task few-shot learning framework. In addition, our approach consistently stands its ground against top-tier competitors, including MetaModulation (Sun et al.). We are confident these findings effectively address the issues highlighted and will provide more details in our revised version.

|miniImageNet|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|MetaModulation (Sun et al.)|43.21|57.26|
|MLTI + ProtoDiff|44.75|58.18|

Sun et al. "MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks." ICML 2023. * **Increased time over conventional methods** Answer: We appreciate your suggestion regarding the increased training time of the proposed model, which is a consequence of overfitting the prototype and the high number of timesteps required for predicting the prototype. To provide a clear comparison, we have analyzed the time consumed during both the meta-training and meta-testing phases of ProtoDiff and compared these results with the respective times for ProtoNet. In per-task wall-clock time, ProtoDiff is slower than ProtoNet by factors of $5\times$ in meta-training and $15\times$ in meta-testing. Numerical results are provided in our response to *Reviewer ZGD4*, who asked the same question. 
We will include this time analysis in the revised manuscript to understand our approach's efficiency comprehensively. * **Eq.8 typo** Answer: Correct, the summation symbol $\sum$ is missing over the task $T_i$. We have rectified this and will ensure a thorough review of all notations and equations in our manuscript to prevent any such oversights in the final draft. * **The ratio of the increased training time** Answer: The training time for our proposed model, ProtoDiff, compared to the conventional model, is approximately 2.5 times longer. This increase is primarily due to the overfitting of the prototype and the diffusion process to predict the prototype. * **The number of available tasks is low and not diverse** Answer: In scenarios where the number of tasks is limited and lacks diversity, the performance of conventional models might be questioned. However, in Table 4, our experiments focus on cross-domain few-shot classification and have already shown ProtoDiff's capabilities even when tasks are not diverse and exhibit domain shifts. Further, our experiments in Table 5 delved into the few-task few-shot challenges, where the number of available tasks is low. In both these situations, ProtoDiff consistently emerges as one of the leading models, underscoring its effectiveness under such constraints. * **How can this model be applied in production?** Answer: Our ProtoDiff is designed to be adaptable in its application to any prototype-based meta-learning method, enhancing performance across different challenges, such as conventional few-shot learning, cross-domain few-shot learning, and few-task few-shot learning. The versatility of ProtoDiff allows it to handle various few-shot scenarios, providing a potential edge over standard methods. Such adaptability can be beneficial in diverse production environments. 
* **Limitations** Answer: We have included the comparisons for the effect of the number of timesteps (see response to *reviewer ZGD4*) and increased training time. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I agree with the author's rebuttal. The only concern left for me is the increased training time which is an obvious tradeoff of this method. I believe this paper can inspire future work to reduce/improve this. I recommend an accept for this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and recommendation to accept our paper. We acknowledge the concern regarding the increased training time, and we are motivated to explore more efficient implementations and methods in our future work, inspired by your insightful comments.
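The "overfitted prototype" step whose training-time cost is discussed in this thread can be sketched as follows. This is our own minimal illustration, not the authors' code: prototypes are fine-tuned by gradient descent on the task's labeled features (support plus query at meta-training time), and the step count and learning rate are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def overfit_prototypes(init_protos, feats, labels, steps=50, lr=0.05):
    """Fine-tune class prototypes on a task's labeled features via a
    distance-based classification loss, approximating the task-optimal
    ("overfitted") prototypes used as diffusion targets."""
    protos = init_protos.clone().requires_grad_(True)
    opt = torch.optim.SGD([protos], lr=lr)
    for _ in range(steps):
        # nearest-prototype logits: negative squared Euclidean distances
        sq_dists = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        loss = F.cross_entropy(-sq_dists, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return protos.detach()
```

Because this inner loop runs for every meta-training task, it is a natural source of the increased training time the rebuttal reports.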
Summary: This paper proposes a prototype-based meta-learning method called ProtoDiff. During the meta-learning stage, ProtoDiff introduces a task-guided diffusion process, learning a generative process conditioned on the vanilla prototype for a more robust prototype representation. ProtoDiff is extensively validated on within-domain, cross-domain, and few-task few-shot classification. Strengths: (1) Diffusion models have been very successful generative models, attracting great interest very recently. It is interesting to see the authors introduce diffusion models into the few-shot learning regime, addressing the limitation of non-robust prototype estimation given a limited number of training examples. (2) The proposed ProtoDiff is extensively evaluated on a variety of few-shot learning problems, including within-domain, cross-domain, and few-task few-shot classification. The ablation study is carefully and thoroughly designed for validating the effectiveness. Weaknesses: (1) The diffusion model for prototype learning is the key in the proposed ProtoDiff; however, some technical details on the diffusion model are missing. About the architecture: how many layers are used in the encoder, what is the dimension of the linear transform, the number of heads in the attention module, and the hidden dimensionality of the MLP? How will the proposed ProtoDiff perform if these design parameters vary? About the diffusion process: how do you select T? How do different values of T affect the performance and speed of the proposed ProtoDiff? (2) Some other concerns. In ProtoDiff, the ground truth prototypes, called overfitted prototypes, are obtained by finetuning the network for several steps. Besides that strategy, have you ever considered, during meta-training, using more support images (e.g. 8 or 16) by which one may get more accurate prototypes? As suggested in ProtoNet, it is beneficial to train with a higher way than will be used at meta-test. 
In Table 3, the baseline of ProtoNet is SetFeat; therefore, I suggest the authors change “This paper” to “SetFeat+ProtoNet”. How do you implement ProtoNet in Tables 4 and 5? Is it based on SetFeat or ProtoNet? Please clarify. In Equation (10), $\mathbf{z}$ is not defined; the subscript $t$ of $\sum$ is missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "Weaknesses" section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Though the authors discuss the limitations of the proposed ProtoDiff, i.e., heavy computational cost, it is not clear how expensive ProtoDiff is compared to the baselines. I suggest the authors provide wall-clock time of meta-training/meta-testing of ProtoDiff against ProtoNet. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Thank you for your time, effort, and constructive comments.*** (1) **Technical details are missing** Answer: Our ProtoDiff model is constructed using a design inspired by GPT-2. This includes a 12-layer transformer, a 512-dimensional linear transformation, an attention mechanism with 16 heads, and an MLP with a hidden dimensionality of 512. These configurations are in line with GPT-2's default parameters. The configuration files can be accessed in our code repository for a more detailed parameter setup. Our experiments in the subsequent tables highlight that our model achieves optimal performance with these settings. We will ensure this information is included in the revised manuscript for clarity.

|Transformer structures|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|3-layer|62.17$\pm$0.25|78.93$\pm$0.17|
|6-layer|63.25$\pm$0.22|79.63$\pm$0.15|
|9-layer|65.21$\pm$0.21|80.13$\pm$0.18|
|12-layer|66.63$\pm$0.21|83.48$\pm$0.15|
|15-layer|64.15$\pm$0.25|80.93$\pm$0.18|

|Number of heads|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|1|63.28$\pm$0.24|79.97$\pm$0.15|
|4|64.23$\pm$0.22|80.13$\pm$0.14|
|8|65.91$\pm$0.23|81.91$\pm$0.16|
|16|66.63$\pm$0.21|83.48$\pm$0.15|

We have selected the diffusion time step **T** for the diffusion process to be 100. We've adopted the DDIM sampling strategy to accelerate prediction with $dim(\tau) = 10$. This effectively reduces the number of sampling steps to just 10. We've conducted comparative experiments using varying diffusion times and intermediate intervals $dim(\tau)$. As presented in the subsequent tables, we observe that as the diffusion timesteps increase, both the performance and the inference time increase. Simultaneously, when $dim(\tau)$ is increased, the inference time decreases, making the process more efficient. 
|$dim(\tau)=10$, Timesteps|1-shot|5-shot|1-shot inference time|5-shot inference time|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|T = 10|63.98$\pm$0.24|80.12$\pm$0.15|1ms|2ms|
|T = 50|65.93$\pm$0.21|82.93$\pm$0.17|5ms|7ms|
|T = 100|66.63$\pm$0.21|83.48$\pm$0.15|9ms|14ms|
|T = 500|66.65$\pm$0.23|83.59$\pm$0.14|53ms|93ms|
|T = 1000|66.74$\pm$0.20|83.97$\pm$0.17|102ms|192ms|

|$dim(\tau)$, T=100|1-shot|5-shot|1-shot inference time|5-shot inference time|
|:-----:|:-----:|:-----:|:-----:|:-----:|
|1|66.78$\pm$0.21|83.62$\pm$0.14|99ms|192ms|
|5|66.15$\pm$0.23|83.44$\pm$0.16|45ms|92ms|
|10|66.63$\pm$0.21|83.48$\pm$0.15|9ms|14ms|
|20|66.12$\pm$0.21|82.79$\pm$0.13|5ms|7ms|
|100|63.15$\pm$0.23|81.27$\pm$0.14|1ms|2ms|

(2) **More support images** Answer: We appreciate your suggestion of using more support images during meta-training to obtain more accurate prototypes, as also recommended in ProtoNet. To shed more light on this, we've conducted an experimental comparison using different numbers of support sets during meta-training. The results in the subsequent table illustrate that augmenting the number of support images for each class during the meta-training phase enhances performance across various shots. We intend to integrate these findings into the revised version of our manuscript.

|Number of support sets for each class|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|1|66.63$\pm$0.21|73.12$\pm$0.15|
|5|70.25$\pm$0.22|83.48$\pm$0.13|
|8|73.91$\pm$0.21|84.72$\pm$0.13|
|16|82.21$\pm$0.23|84.98$\pm$0.17|

**Clarified Tables 4 and 5** Answer: Thank you for your keen suggestion. In our revision, we will change the labels accordingly. For Table 4, the implementation is based on AFA [19]. Hence we will appropriately change "This paper" to "AFA + ProtoDiff". Similarly, in Table 5, the implementation is based on MLTI [51], and the label "This paper" will be replaced with "MLTI + ProtoDiff." **Equation typo** Answer: Thank you for your careful observation. In Equation (10), **z** should be denoted as $\mathbf{z_t}$. 
Also, you're correct about the missing subscript for the summation symbol $\sum$. The subscript should be $(x, y) \sim Q$. Our revised manuscript will correct these typos to ensure clarity and precision. (3) **Limitations** Answer: We have compiled the wall-clock time for both the meta-training and meta-testing phases of ProtoDiff and compared these against the respective times for ProtoNet. In per-task wall-clock time, ProtoDiff is slower than ProtoNet by factors of $5\times$ in meta-training and $15\times$ in meta-testing. As part of future work, we will investigate and address this limitation to further enhance the efficiency of our approach. We will incorporate these details into the revised manuscript for a more comprehensive comparison.

|Meta-train|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|ProtoNet|0.6ms|1.2ms|
|ProtoDiff|3ms|5.3ms|

|Meta-test|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|ProtoNet|0.6ms|1.2ms|
|ProtoDiff|9ms|14ms|

--- Rebuttal Comment 1.1: Title: Reply to Rebuttal by Authors Comment: I thank the authors for their careful responses, which addressed most of my concerns. I hope the authors include, in the modified version, the discussion and analysis herein about technical details and ablation on diffusion models, more accurate estimation of prototypes with more support images, and wall-clock time of meta-training/test. Despite the fact that the proposed ProtoDiff has a heavy computational cost, I think it is interesting and inspiring to introduce a diffusion model for robust prototype estimation in few-shot learning. I opt to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful review and your support for our work. 
We appreciate your suggestions and will definitely incorporate the detailed technical analysis, ablation on diffusion models, refined prototype estimation with additional support images, and the wall-clock time of meta-training/test in the revised version of the paper, thereby addressing the computational cost concern of ProtoDiff.
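The DDIM-style acceleration discussed in this thread, evaluating only an evenly-strided subsequence $\tau$ of the T training timesteps, can be sketched as follows. This is our own illustration, assuming $dim(\tau)$ acts as the sampling interval, which is consistent with the reported speedups (larger interval, fewer denoising steps, faster inference):

```python
def ddim_timesteps(num_train_steps, interval):
    """Timesteps visited during accelerated DDIM sampling, ordered from
    noisy to clean. With num_train_steps=100 and interval=10, only 10
    denoising steps are run instead of 100."""
    return list(range(0, num_train_steps, interval))[::-1]

# e.g. T=100, dim(tau)=10 -> 10 steps: [90, 80, ..., 0]
steps = ddim_timesteps(100, 10)
```

With `interval=100` only a single step remains, matching the fastest (and least accurate) row of the $dim(\tau)$ ablation; with `interval=1` all 100 steps are run.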
Summary: This paper tries to improve Prototypical Network in few-shot learning. Specifically, the authors use a diffusion model to predict optimal prototypes from initial prototypes. In each iteration of the meta-training phase, the optimal prototypes are obtained by several inner loops on the current task, then a diffusion model takes the initial prototypes as input, and predicts the difference between the optimal and initial prototypes. The final loss is used to meta-learn the feature extractor and the diffusion model. The experiments show that the proposed method can improve ProtoNet to some degree. Strengths: - Introducing diffusion models into few-shot learning and using them to translate features is interesting. - The use of residual prototype learning is clever. Weaknesses: - In essence, the key idea is to learn a module to predict optimal prototypes from initial prototypes. Thus this module is a set-to-set function, and does not need to be stochastic, e.g., an MLP or transformer. From this point, there is no motivation for using generative models like diffusion models, considering that no probabilistic inference is used during training or testing. The motivation of this paper is thus very weak. - In fact, back in 2019, the FEAT algorithm [1] already exploited such an idea, using a shallow transformer to transform initial prototypes into near-optimal prototypes without using any probabilistic models. While being much more efficient (no diffusion operations, no inner gradient loops, and far fewer parameters) and being less tuned for hyperparameters, FEAT can achieve performance comparable to ProtoDiff. Similarly, CrossTransformer [2] also uses a shallow transformer to transform initial prototypes into near-optimal prototypes, but in addition, considers spatial information. Thus the novelty and the value of the paper are quite limited. - The writing of the method section needs to be significantly improved. 
Before looking at Algorithms 1-2 in the appendix, I struggled to understand the pipeline of the method, and some notations are confusing. For example, there are clear mistakes in the notations in lines 112-116. - In Table 1, the result of Classifier-Baseline is borrowed from Meta-Baseline. Instead, the authors need to show the results of their own pre-trained model (trained with CE loss on the whole training set) before the use of ProtoDiff to show how much improvement their method brings. - In Table 2, only generative models are considered. As stated above, more deterministic feature transformation methods should be compared. [1] Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions. CVPR 2020. [2] CrossTransformers: spatially-aware few-shot transfer. NeurIPS 2020. ======Post-rebuttal======= The authors have addressed most of the concerns during the rebuttal. Thus I raise my score from 3->7. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weakness above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have stated some of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***Thank you for your time, effort, and constructive comments.*** * **Motivation of ProtoDiff** Answer: We acknowledge your insight regarding using a stochastic or diffusion model. Our choice to employ a generative model like the diffusion process extends beyond simply predicting optimal prototypes from initial ones. It also incorporates uncertainty into the few-shot learning process, which can be vital when dealing with limited samples. We have also conducted experiments using only an MLP or a transformer instead of a generative model; see the following table.

|miniImageNet|1-shot|5-shot|
|:-----:|:-----:|:-----:|
|Vanilla|63.17|79.26|
|MLP|64.15|80.23|
|Transformer|64.97|81.28|
|Diffusion|66.63|83.48|

While an MLP or transformer can improve performance over vanilla prototypes, our experiments reveal that ProtoDiff outperforms these alternatives. Incorporating uncertainty through the generative model provides a nuanced advantage in capturing the underlying distribution, resulting in better performance. We will detail these experimental comparisons and further clarify the motivation behind our approach in the revised manuscript. * **Limited novelty** Answer: While FEAT and CrossTransformer also utilize a function to transform vanilla prototypes into near-optimal ones, they lack the intermediate supervision provided by "overfitted" prototypes, which can guide the transformed prototype toward an optimal state. Our model uniquely learns an "overfitted" prototype first and then leverages a diffusion model to capture this learning process, ensuring a more refined transition to the optimal prototype. Beyond the diffusion process that generates diffused prototypes, our model provides a new insight: leveraging the “overfitted” prototype as supervision. This approach enables our system to utilize the overfitted prototype as guidance, thus implementing meta-learning as a continuous learning process. 
We will incorporate these papers into our discussion of related work and into our comparisons. We believe this is of community interest. * **Writing of the method** Answer: We regret that the notations in the method section were confusing, and we appreciate your pointing out the specific mistakes in lines 112-116. We have taken your feedback to heart, fixed these typos, and conducted a thorough review of all notations and equations in the manuscript. * **Pre-trained model** Answer: We agree that it is essential to showcase the results of our own pre-trained model before applying ProtoDiff to provide a clear comparison and demonstrate our method's improvements. In response to your suggestion, we have prepared the results of our pre-trained model (trained with CE loss on the whole training set) and will present it in a new table. This information will be included in the revised manuscript to provide a more transparent and comprehensive evaluation of ProtoDiff.

|||mini 1-shot|mini 5-shot|tiered 1-shot|tiered 5-shot|800 1-shot|800 5-shot|
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
|Pretrained from Chen et al.|Classifier-Baseline|58.91|77.76|68.07|83.74|86.07|96.14|
|Pretrained from Chen et al.|ProtoDiff|66.63|83.48|72.95|85.15|92.13|98.21|
|Pretrained from ours|Classifier-Baseline|58.75|77.86|68.15|83.95|85.97|96.03|
|Pretrained from ours|ProtoDiff|66.49|83.51|73.01|85.72|92.05|98.11|

* **More deterministic feature transformation methods results** Answer: We have conducted experiments using an MLP and a transformer instead of generative models; see the first question (**Motivation of ProtoDiff**). While an MLP or transformer can improve performance over vanilla prototypes, our experiments reveal that ProtoDiff outperforms these alternatives. --- Rebuttal Comment 1.1: Title: Response to Authors Rebuttal Comment: Thanks for the rebuttal. I totally agree that uncertainty is vital when dealing with limited samples. 
However, there is no "useful" uncertainty in this paper due to design choices. Let me detail it below. Uncertainty only makes sense when multiple samples drawn from the posterior distribution are used for inference (as is done in variational inference). However, in this paper, during inference, only one prototype is produced for each class, thus there is no probabilistic inference, i.e., the distribution built for the prototype is of no use at all, and the uncertainty is not taken into account; furthermore, it can be easily inferred that the optimal prototype becomes the best choice if only one prototype is produced. So in principle, in the authors' method, the posterior prototype distribution should be a point estimate of the optimal one, so there is, in fact, no uncertainty in your method. The strong performance only comes from the auxiliary deep architecture (used for the diffusion process) for estimating the optimal prototypes, which does not need uncertainty at all. Besides, your baseline using a transformer is far below that reported by FEAT [1], a method identical to adding a transformer layer for estimating the optimal prototype. I have re-implemented FEAT, and it can exactly match the reported performance, which is very similar to ProtoDiff. This indicates that the generative modeling used in this paper is not needed, at least when used without probabilistic inference. I know that, different from FEAT, ProtoDiff learns to fit the overfitted prototype explicitly. However, due to the mechanism of meta-learning, FEAT implicitly does such a process (it meta-learns to map prototypes to optimal ones, the "supervision/guidance" in your rebuttal), and its performance matches ProtoDiff closely. I think it is interesting to take this mapping process out explicitly, but I still do not think there is anything new here. [1] Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions. 
CVPR 2020 --- Reply to Comment 1.1.1: Title: Thank you for the response. Comment: We thank the reviewer for their engagement and for encouraging us to sharpen our contribution and its justification. **Uncertainty in diffusion and ProtoDiff**: Traditional diffusion models, during the sampling phase, introduce randomness at every time step by drawing random noise from a standard normal distribution to diffuse the sample. This mechanism injects uncertainty into the diffusion process at each step (full details in Jonathan Ho et al., Denoising Diffusion Probabilistic Models, NeurIPS 2020). Our ProtoDiff follows a similar strategy. At each time step, we sample different random noise values to diffuse our prototypes, ensuring that uncertainty is introduced in each step of the prototype generation. While it is true that we generate a single prototype for each class during inference, it's important to note that the inherent uncertainty introduced during the diffusion process ensures robustness and generalization in this prototype (see Figure 7). Multiple samples at the final step could indeed introduce additional uncertainty. However, we believe the cumulative uncertainty introduced during each step of the diffusion process suffices, and further sampling might increase the computational complexity without a proportionate gain in performance. **Differences and complementarity with FEAT**: Thank you for the insightful comment. Allow us to clarify some misunderstandings and highlight the difference in our approach compared to FEAT. The way our Transformer and ProtoDiff construct a new prototype is fundamentally different from FEAT. FEAT leverages samples from *all categories* within a task to generate a prototype, capitalizing on the intrinsic inter-class information to derive a more discriminative prototype. In contrast, our ProtoDiff, while using the overfitted prototype as supervision, only employs samples from a *single category* to generate the new prototype. 
This means we are not tapping into the potential informative context provided by samples from other categories. Additionally, our ProtoDiff employs the diffusion model to progressively generate the prototype, whereas FEAT does not utilize any generative model for prototype creation. Naturally, we may adopt FEAT’s strategy of utilizing samples from all categories to produce a new prototype. The results with ResNet-12, as presented in the following table, are revealing. Our performance with a transformer based on an overfitted prototype (with all categories) indeed experiences a slight enhancement over FEAT. However, when the diffusion process is integrated, there is a notable improvement of 2.19% and 3.11% in the 1-shot and 5-shot scenarios on miniImageNet. We will include these results in the main paper. |miniImageNet|1-shot|5-shot| |:-----:|:-----:|:-----:| |ProtoNet|63.17|79.26| |FEAT|66.78|82.05| |FEAT (Our reimplementation)|66.93|82.41| |FEAT + overfitted prototype|67.68|83.18| |FEAT + ProtoDiff|68.97|85.16| We believe our approach, while drawing parallels with FEAT, introduces unique considerations in generative prototype creation. We hope this response clarifies our position as well as the novelty of our contributions.
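To make the per-step noise injection described in this thread concrete, here is a minimal, self-contained sketch of DDPM-style ancestral sampling for a prototype vector. The linear schedule, the 64-dimensional prototype, and the zero-returning `predict_noise` placeholder are illustrative assumptions, not ProtoDiff's actual trained network or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50                                    # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(z, t):
    """Placeholder for the learned denoising network (hypothetical).
    A trained model would predict the noise added at step t."""
    return np.zeros_like(z)

def sample_prototype(dim=64):
    """DDPM ancestral sampling: fresh Gaussian noise is drawn at every
    reverse step, which is where the per-step uncertainty enters."""
    z = rng.standard_normal(dim)          # start from pure noise
    for t in reversed(range(T)):
        eps_hat = predict_noise(z, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (z - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                         # no noise injected at the final step
            z = mean + np.sqrt(betas[t]) * rng.standard_normal(dim)
        else:
            z = mean
    return z

# Two runs give different prototypes, illustrating the sampler's stochasticity.
p1, p2 = sample_prototype(), sample_prototype()
print(np.allclose(p1, p2))  # False
```

Even with a fixed (here trivial) denoiser, the noise drawn at each reverse step makes every sampled prototype different, which is the cumulative uncertainty the rebuttal refers to.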
NeurIPS_2023_submissions_huggingface
2023
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting
Accept (poster)
Summary: This research focuses on distributed learning problems and introduces a novel approach that incorporates three key elements: variance reduction, compression, and partial participation. It extends the DASHA method to address the partial participation scenario in both finite-sum and stochastic settings. In the context of partial participation, the proposed method achieves state-of-the-art communication complexity. Furthermore, in the scenario involving variance reduction and partial participation (without compression), it achieves optimal oracle complexity without relying on strict bounded gradient assumptions. Strengths: The extension of DASHA to accommodate the partial participation setting is a nontrivial addition. The analysis involving partial participation introduces additional complexity, but this work successfully establishes state-of-the-art bounds (although not that surprising). Weaknesses: The presentation can be further improved in several ways to enhance clarity and readability. Firstly, instead of listing the assumptions and results for all three settings (full gradient, finite sum, stochastic) consecutively, it would be beneficial to present each setting individually with a particular focus on the stochastic setting, which is the main focus of this work. This approach would make it easier for the reader to understand and follow the discussion. Additionally, by adopting this approach, it would be possible to cover the results for both the nonconvex and PL condition cases in the main text. Furthermore, there are a few minor typos that need to be addressed. The set $\mathbb{R}^d$ is written as $\mathbb{R}$ in various places, specifically lines 35, 44, 51, and 59. Additionally, in line 164, "what" should be replaced with "which." Finally, in line 405, "dependeces" should be corrected to "dependencies." Another aspect that requires attention is the presentation of the algorithm. Currently, it lacks motivation and clarity. 
For instance, it is unclear how step (9) is derived to address the issue in steps (7) to (8), where it is mentioned that these updates are adapted to ensure the validity of the proof. Providing a clearer explanation or justification for step (9) would greatly improve the understanding of the algorithm and its purpose. Although the specific setup presented in this study has not been previously explored, and its analysis poses challenges, it can be viewed as a minor blend of previous setups, and the results can be considered as a slight extension of the work done in DASHA. Finally, the scope of the experiments conducted in this research is somewhat limited, as they do not encompass the more intriguing nonconvex neural network scenarios. Furthermore, there is a notable absence of numerical comparisons with other methods, such as FRECON or any other method within the full gradient setup. While I understand that space constraints may be a factor, with careful organization, it may be possible to allocate some room for such comparisons. One possible approach could involve dedicating the main body solely to the stochastic setting, while relegating the finite-sum case to the appendix. This would create additional space for the inclusion of numerical comparisons and provide a more comprehensive evaluation of the proposed approach. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: After examining equations (7)-(8), it is mentioned that step (7) requires further updating if the same idea as in the full gradient step is utilized. The question arises as to why this is necessary. Would the method still function if only the participating nodes updated $h$ while the others retained the previous values? Essentially, is this a limitation of the analysis or would the method diverge under such circumstances? Another query pertains to practical applications that satisfy Assumption 6. Is there a specific real-world scenario that meets this criterion? 
Additionally, it is worth exploring whether achieving state-of-the-art bounds in the stochastic setting is possible without relying on Assumption 6. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! > The presentation can be further improved in several ways to enhance clarity and readability. ... We agree. One can see that we separate assumptions in the parts "1. Gradient Setting.," "2. Finite-Sum Setting," and "3. Stochastic Setting," which begin at Lines 33, 38, and 43, respectively. We tried to separate assumptions for each particular setting. > Although the specific setup presented in this study has not been previously explored, and its analysis poses challenges, it can be viewed as a minor blend of previous setups, and the results can be considered as a slight extension of the work done in DASHA. 1. The **partial participation setting** with **stochastic gradients** was explored in many previous papers, including FedAvg, FedPAGE, MIME, and CE-LSGD. However, these methods have weaknesses (see Tables 1 and 2). 2. The **partial participation setting** with **stochastic gradients** and **compression** was explored in FRECON, MARINA and DASHA (gradient setting only). Our paper also considers the **partial participation setting** with **stochastic gradients** and **compression,** so we consider the setup that was explored in the papers from 2., and partially explored in the papers from 1. > Finally, the scope of the experiments conducted in this research is somewhat limited ... Furthermore, there is a notable absence of numerical comparisons with other methods, such as FRECON or any other method within the full gradient setup. While I understand that space constraints may be a factor, with careful organization, it may be possible to allocate some room for such comparisons. ... **Note that we added new experiments that are presented in a separate file in the "global" comment to all reviewers called "Author Rebuttal by Authors".** Click "Revisions" if you can not find the pdf. There we compare our new method with other baselines, MARINA and FRECON. Also, in Section A, we compare with DASHA. 
If our paper gets accepted, we will be allowed an additional content page, where we will be happy to add our numerical experiments. > Another aspect that requires attention is the presentation of the algorithm. Currently, it lacks motivation and clarity. For instance, it is unclear how step (9) is derived to address the issue in steps (7) to (8), where it is mentioned that these updates are adapted to ensure the validity of the proof. ... > After examining equations (7)-(8), it is mentioned that step (7) requires further updating if the same idea as in the full gradient step is utilized. The question arises as to why this is necessary. ... In Lines 162-167, we discuss that DASHA-MVR *can not* work in the partial participation scenario because $h_{i}^{t+1}$ requires the points $x^{t+1}$ and $x^{t}$ to make the update (7). We can not evaluate (7) if the $i^{th}$ node does not participate. The step (9) resolves this problem because $h_{i}^{t+1} = h_{i}^{t}$ when the $i^{th}$ node does not participate. However, we can not use (7) anymore if the $i^{th}$ node does not participate because **the proofs do not work** in such a case. So we redesigned (7) and obtained (9) to make the new proofs work. Let us give an intuition of how we came up with (9). Imagine that we do not have partial participation. Then, using the results from DASHA, we know that (7) works: $h_i^{t+1} = \nabla f_i(x^{t+1};\xi^{t+1}_{i}) + (1 - b) (h_i^t - \nabla f_i(x^{t};\xi_i^{t+1})).$ ( * ) Now consider the partial participation setting. If we understood the question correctly, the reviewer asks if the following strategy will work: $h_i^{t+1} = \begin{cases}\nabla f_i(x^{t+1};\xi^{t+1}_{i}) + (1 - b) (h_i^t - \nabla f_i(x^{t};\xi_i^{t+1})),& p \\\\ h_i^{t}, & 1 - p\end{cases}$ ( ** ) Let us take the expectation of $h_i^{t+1}$ w.r.t. the partial participation randomness. 
We get $$E_p [h_i^{t+1}] = p (\nabla f_i(x^{t+1};\xi^{t+1}_{i}) + (1 - b) (h_i^t - \nabla f_i(x^{t};\xi_i^{t+1}))) + (1 - p) h_i^{t}$$ $$=p \nabla f_i(x^{t+1};\xi^{t+1}_i) + p (1 - b) (h_i^t - \nabla f_i(x^{t};\xi_i^{t+1})) + (1 - p) h_i^{t}.$$ Note that this $E_p [h_i^{t+1}]$ is not equal to the r.h.s. of ( * ). At the same time, if we take $h_i^{t+1}$ from (9), one can show that $E_p [h_i^{t+1}]$ equals ( * )! This indicates that we can not use ( ** ) in the partial participation setting, and the right choice is (9). > Another query pertains to practical applications that satisfy Assumption 6. Is there a specific real-world scenario that meets this criterion? Additionally, it is worth exploring whether achieving state-of-the-art bounds in the stoch. setting is possible without relying on Assumption 6. Assumption 6 is crucial to use the variance reduction technique and obtain the $1 / \varepsilon^{3/2}$ rate. This assumption is the mean-squared smoothness property. It was shown in \[1\] that it is crucial for getting the $1 / \varepsilon^{3/2}$ rate. Without this assumption, one can only get the $1 / \varepsilon^{2}$ rate (the rate of the vanilla SGD method). In the context of ML and FL problems, this assumption is not strong. For instance, for logistic regression problems, this assumption holds. In other words, this assumption requires the smoothness of $f_i(x, \xi)$ w.r.t. $x,$ which holds for most ML problems. > Regarding the Main theorems, there is uncertainty surrounding the meaning of $\omega$. Does it represent any positive scalar, or is there a specific definition or interpretation attributed to it? $\omega \geq 0$ is the variance of the compressor from Definition 1. 
For instance, consider the Rand$K$ compressor that takes $K$ random values of a vector scaled by $d / K.$ Using the definition, one can show that this compressor is unbiased and $\omega = \frac{d}{K} - 1.$ So the more we compress (the smaller $K$), the larger $\omega$ is. If, say, $K = d$ (we don't compress), then $\omega = 0.$ Since we use compressors in our method, $\omega$ affects our convergence rates. \[1\]: Arjevani et al. Lower Bounds for Non-Convex Stochastic Optimization --- Rebuttal Comment 1.1: Comment: I'd like to confirm that I've reviewed the rebuttal. The authors have appropriately taken into account my comments. I suggest including the reasoning behind the derivation of equation (9) for better understanding. Lastly, I maintain the same evaluation because, in my view, while the contributions are valuable, the paper holds a moderately significant impact. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you for your comments and time!
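As a quick sanity check of the Rand$K$ claims in the thread above, the compressor can be implemented in a few lines and its unbiasedness and variance $\omega = d/K - 1$ verified by Monte Carlo. This is an illustrative sketch, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(x, k):
    """Rand-K: keep K uniformly random coordinates, scale them by d/K."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

d, k = 100, 10
omega = d / k - 1                     # = 9 for these sizes
x = rng.standard_normal(d)

n = 50_000
mean_acc = np.zeros(d)
mse_acc = 0.0
for _ in range(n):
    c = rand_k(x, k)
    mean_acc += c
    mse_acc += ((c - x) ** 2).sum()
mean_acc /= n
mse_acc /= n

print(np.abs(mean_acc - x).max())          # ~0: E[C(x)] = x (unbiased)
print(mse_acc / (omega * (x ** 2).sum()))  # ~1: E||C(x)-x||^2 = omega * ||x||^2
```

For Rand$K$ the variance bound $\mathbb{E}\|C(x)-x\|^2 \le \omega \|x\|^2$ in fact holds with equality, which is why the second ratio concentrates around 1.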
Summary: This paper introduces a novel method for distributed optimization and federated learning that integrates three critical components: variance reduction of stochastic gradients, partial participation, and compressed communication. The proposed method is shown to have optimal oracle complexity and leading-edge communication complexity in a setting of partial participation. A significant advantage of this method is that it effectively blends variance reduction and partial participation, thus providing optimal oracle complexity without the necessity of all nodes' participation or the bounded gradients assumption. Strengths: One of the primary strengths of this paper is the integration of three crucial elements in distributed optimization and federated learning. By encompassing variance reduction of stochastic gradients, partial participation, and compressed communication into a single method, the authors have created a holistic approach that takes into account the multifaceted nature of distributed optimization. The authors demonstrate that their method achieves optimal oracle complexity, which is an important measure of the efficiency of an algorithm. This is a significant contribution, suggesting that the method has a high degree of efficiency in processing the data. Weaknesses: Despite the significant theoretical contributions, this paper falls short in one crucial aspect, which restricts its overall impact and practical applicability. Lack of Empirical Validation: The major limitation of this paper is the absence of empirical experiments to substantiate the theoretical findings. While the theoretical results are compelling, they need to be complemented with practical validation to establish the practicality and effectiveness of the proposed method. 
The lack of experimental results makes it difficult to ascertain the method's performance in real-world scenarios, to gauge its scalability with varying data sizes and distributions, and to compare it with existing methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Despite the significant theoretical contributions, this paper falls short in one crucial aspect, which restricts its overall impact and practical applicability. > Lack of Empirical Validation: The major limitation of this paper is the absence of empirical experiments to substantiate the theoretical findings. While the theoretical results are compelling, they need to be complemented with practical validation to establish the practicality and effectiveness of the proposed method. The lack of experimental results makes it difficult to ascertain the method's performance in real-world scenarios, to gauge its scalability with varying data sizes and distributions, and to compare it with existing methods. We agree experiments are important, and that is why we have experiments in Section A. **Note that we added new experiments that are presented in a separate file in the "global" comment to all reviewers called "Author Rebuttal by Authors".** Click "Revisions" if you can not find the pdf. We compare our new method with other baselines, MARINA and FRECON, that support compressed communication in the partial participation setting. We show that our new method requires significantly fewer communication rounds to find an $\varepsilon$-stationary point. We believe that these experiments, together with the experiments from Section A in the main paper, give us strong evidence that our method has excellent practical performance. Moreover, please see Section A. The DASHA \[1\] method is considered to be the current SOTA method in *the full participation setting.* So, it is reasonable to compare our new method with DASHA. However, our method supports partial participation, and from the discussions in Section 6, we know that partial participation should degrade the convergence rate by a factor of up to $1 / p.$ When $p = 1$ ($s = 100$, full participation regime) we observe that our method and DASHA have almost the same convergence rates. 
Then we take $p < 1$ by taking the number of clients $s$ smaller and observe that the convergence rates degrade by a factor of at most $1/p$. This is the expected dependence because some clients do not participate. In total, we compare our new method with three previous baselines that support compressed communication with different datasets and settings. If our paper gets accepted, we will be allowed an additional content page, where we will be happy to add our numerical experiments. *In view of the extra experiments, we kindly ask the reviewer to reconsider the score. If the reviewer thinks that there should be some other experiments that would enhance the paper, then let us know.* \[1\]: Tyurin A, et al. DASHA: Distributed nonconvex optimization with communication compression and optimal oracle complexity (ICLR 2023) --- Rebuttal Comment 1.1: Comment: Thanks for the extra experiments. I have updated my rating accordingly. --- Reply to Comment 1.1.1: Title: thanks! Comment: Thanks, much appreciated!
Summary: This manuscript considers distributed non-convex optimization in the federated learning setting. It proposes an algorithm, DASHA-PP, that brings three important features of federated learning together: i) variance reduction, ii) compressed communication, and iii) partial participation. The authors derive the oracle and communication complexities of finding an $\epsilon$-stationary point of the smooth non-convex objective. It is the first method that includes those three important features. The algorithm presented in the paper is built upon DASHA, presented in [1]. In particular, in its form in [1], DASHA could work in the partial participation case only when the exact gradient oracle was available. In this manuscript, the authors extended the partial participation property of DASHA to the stochastic and finite-sum settings. [1] Tyurin, Alexander and Peter Richtárik. “DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization.” ArXiv abs/2202.01268 (2022): n. pag. Strengths: * I think the paper is very well-written. * The problem the manuscript considers is an important problem for the large-scale federated learning setting, where including all three features, i.e., partial participation/compressed communication/stochastic gradient, is inevitable. Weaknesses: * The main technical contribution of the manuscript over [1] is the update rule given in Eq. (9). Similar update rule adjustments are frequently used in the bandit optimization literature, cf [Section 2, 2] and [Section 6, 2]. * Besides the new update rule, it seems to me that the mathematical derivations in the manuscript are very similar to those in [1]. In that sense, I find the contribution of the manuscript rather incremental. [1] Tyurin, Alexander and Peter Richtárik. 
“DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization.” ArXiv abs/2202.01268 (2022): n. pag. [2] Cesa-Bianchi, Nicolò and Gábor Lugosi. “Prediction, learning, and games.” (2006). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! > Besides the new update rule, it seems to me that the mathematical derivations in the manuscript are very similar to those in [1]. In that sense, I find the contribution of the manuscript is rather incremental. In Lines 168-182, we describe the difference between the proof techniques. The partial participation setting required us to rethink the proofs from the previous methods. Note that we had two challenges: *developing the method* and *proving convergence.* When you have a developed method and you know that it is correct, it is much easier to prove the convergence rate (though it may still be very challenging). However, at the beginning of our research journey, we had neither the method nor the proof technique. > The main technical contribution of the manuscript over [1] is the update rule given in Eq. (9). Similar update rule adjustments are frequently used in bandit optimization literature, cf [Section 2, 2] and [Section 6, 2]. * We agree that there are many connections in mathematics. One can also argue that there is a connection to a Markov chain with infinitely many states, where we move to the next state with probability $p,$ or stay at the same state with probability $1 - p.$ * But we respectfully disagree that this is a weakness of our paper because our paper and \[2\] solve completely different problems. Even if we knew about \[2\], how could it help us to design the first step (the update rules of $h^{t+1}_i$ and $g^{t+1}_i$) of (9) from our paper? The connections between (9) and LABEL EFFICIENT FORECASTER from [2, p. 130] are not straightforward except for the fact that they both choose a step based on a random variable. Please let us know if you have any questions. \[1\] Tyurin, Alexander and Peter Richtárik. “DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization.” ArXiv abs/2202.01268 (2022): n. pag. 
\[2\] Cesa-Bianchi, Nicolò and Gábor Lugosi. “Prediction, learning, and games.” (2006) --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I keep my overall acceptance score.
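The probability-$p$ switching discussed in this thread relies on a standard trick: when an update is applied only with probability $p$, keeping the estimator unbiased typically requires scaling the applied increment by $1/p$. Here is a hedged scalar Monte Carlo sketch of a generic correction of this kind; it illustrates the expectation argument, not the paper's exact Eq. (9):

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.3              # participation probability
h = 2.0              # current estimator h_i^t (illustrative scalar)
target = 5.0         # the full-participation update value, r.h.s. of ( * )

n = 1_000_000
participate = rng.random(n) < p

# Naive rule ( ** ): apply the update only on participating rounds.
naive = np.where(participate, target, h)
print(naive.mean())        # ~ p*target + (1-p)*h = 2.9, i.e. biased

# A generic 1/p-scaled correction (hypothetical, not necessarily the
# paper's exact Eq. (9)): move 1/p of the way to the target when
# participating, stay otherwise. Unbiased by construction.
corrected = np.where(participate, h + (target - h) / p, h)
print(corrected.mean())    # ~ target = 5.0
```

The naive rule's expectation mixes the old value in, exactly as in the $E_p[h_i^{t+1}]$ computation in the earlier rebuttal, while the $1/p$-scaled increment recovers the full-participation target in expectation.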
Summary: The paper injects an existing algorithm for distributed optimisation in the compressed and variance reduced setting with a mechanism for partial participation of nodes. Theoretical guarantees are given in the general non-convex setting, but also under the PL condition. The theoretical results are verified empirically in a classification problem. Strengths: The paper considers the setting of partial participation, which is of importance in distributed optimisation. The setting of partial participation considered is general enough to include at least two specific partial participation strategies. The theoretical results match the state of the art guarantees for variance reduced algorithms with limited communication up to scaling of $1/p_a$, where $p_a$ governs the probability of some node to participate in the gradient estimation during some iteration of the algorithm. I find this result elegant and I like that this scaling is pretty much observed in the experiments. The authors give some intuition on why passing from the full to the partial participation setting is non-trivial. Weaknesses: The paper is very technical (if one wishes to follow proofs of the claims) and builds on an extensive amount of previous work, thus it is difficult to be appreciated by readers that do not belong to this specific research community. The appendix is about 90 (!) pages and a significant portion of them is essential to the proper understanding of this work, thus it's quite hard to review it in the context of NeurIPS (the results are believable though). Since the topic is distributed computing with limited communication, I would expect a wider experimental analysis of the proposed algorithms in various tasks (it would also be good for some of the experiments to be in the main text). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I struggle a bit to understand the intuition behind Algorithms 3 and 4. 
For Algorithm 3, I don't immediately see the connection with the PAGE algorithm, and, for Algorithm 4, I don't see which algorithm it tries to generalise. Could you elaborate? Is it possible to show that the chosen batch sizes of $B=O((L_{max}/\hat L)^2)$ and $O((L_{\sigma}/\hat L)^2)$ are at most $n$? Otherwise, there is a chance that such a batch size is not available. Minor: there are some issues with the writing, e.g.: "it is not trivial in each order one should apply the expectations" "We are not fighting for the full generality" " The main goal of our paper is to develop a method for the nonconvex distributed optimization" "From the beginning of federated learning era, the partial participation..." I would recommend that the authors go through the text once more and improve the quality of writing. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss limitations of their work, but I cannot see any profound ones. Perhaps the issue of batch sizes specified in my questions deserves a bit more attention. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! > The paper is very technical (if one wishes to follow proofs of the claims) and builds to an extensive amount of previous work, thus it is difficult to be appreciated by readers that do not belong to this specific research community. The appendix is about 90 (!) pages and a significant portion of them is essential to the proper understanding of this work, thus it's quite hard to review it in the context of neurips (the results are believable though). Since the topic is distributed computing with limited communication, I would expect a wider experimental analysis of the proposed algorithms in various tasks (also would be good some of the experiments to be in the main text). * The flow of our proofs is quite standard for the literature on optimization methods. One can look at the proofs of celebrated optimization methods (e.g. \[1\]). We agree that the appendix is large and admit it in Lines 183-187. However, the size of the appendix is justified by the number of new methods that we developed. And, in Lines 183-187, we suggest how to approach the proofs. * Our main goal was to develop methods with strong *theoretical* guarantees. Once we've done that, we expect that the developed methods will have the best performance in the analyzed setting with any loss functions, datasets, and parameters. **Note that we added new experiments that are presented in a separate file in the "global" comment to all reviewers called "Author Rebuttal by Authors".** Click "Revisions" if you can not find the pdf. > I struggle a bit to understand the intuition behind Algorithms 3 and 4. For algorithm 3, I don't see immediately the connection with the PAGE algorithm and, for Algorithm 4, I don't see which algorithm tries to generalise. Could you elaborate? Regarding Algorithm 4, it does not try to generalize any algorithm because it is somewhat new. 
However, it has connections to ZeroSARAH \[2\], which has stronger assumptions on the functions $f_i$ (see Table 2). For Algorithm 3, one can compare it to Line 4 of Algorithm 1 in \[3\]: we also use the probability switching technique. However, Algorithm 3 is not exactly PAGE because, with probability $p$, Algorithm 3 calculates $\nabla f_i(x^{t+1}) - \nabla f_i(x^{t}) - \frac{b}{p} (h^t_i - \nabla f_i(x^{t}))$ instead of $\nabla f_i(x^{t+1}).$ This modification is necessary for the partial participation setting. > Is it possible to show that the chosen batch sizes ... In the nonconvex case, these quantities can be as large as possible (we cannot bound them by $n$; note that they are lower bounded by $1$). However, we never say that $B = (L_{\max} / \widehat{L})^2.$ Instead, we only say that $B = O ((L_{\max} / \widehat{L})^2),$ which means that $B \leq C \cdot (L_{\max} / \widehat{L})^2$ for some constant $C \geq 1$ (this is the definition of Big-O notation). So it is always possible to choose a batch size $B.$ \[1\]: G. Lan. First-order and stochastic optimization methods for machine learning \[2\]: Z. Li et al. ZeroSARAH: Efficient nonconvex finite-sum optimization with zero full gradient computation \[3\]: Z. Li et al. PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I am satisfied by it and keep my overall acceptance score.
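To make the probability-switching idea from the rebuttal above concrete, here is a toy, generic PAGE-style sketch (this is not the paper's Algorithm 3 and omits its partial-participation correction term; the quadratic test problem, all names, and all constants are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy finite-sum problem: grad_i(x) = H_i @ x - b_i, with each
# H_i a small symmetric perturbation of the identity so gradient steps contract.
n, d = 20, 5
S = rng.normal(size=(n, d, d))
H = np.eye(d) + 0.1 * (S + S.transpose(0, 2, 1)) / 2
b = rng.normal(size=(n, d))

def grad_i(i, x):
    return H[i] @ x - b[i]

def full_grad(x):
    return np.mean([grad_i(i, x) for i in range(n)], axis=0)

def page_step(x, g, lr=0.5, p=0.5, batch=4):
    """One PAGE-style step: move along the current gradient estimate g, then
    with probability p recompute the full gradient; otherwise reuse g plus a
    cheap minibatch correction mean_i [grad_i(x_new) - grad_i(x_old)]."""
    x_new = x - lr * g
    if rng.random() < p:
        g_new = full_grad(x_new)
    else:
        idx = rng.choice(n, size=batch, replace=False)
        g_new = g + np.mean([grad_i(i, x_new) - grad_i(i, x) for i in idx], axis=0)
    return x_new, g_new

x = np.zeros(d)
g = full_grad(x)
g0_norm = np.linalg.norm(g)
for _ in range(200):
    x, g = page_step(x, g)
```

The rebuttal's variant differs in the low-probability branch, where it uses $\nabla f_i(x^{t+1}) - \nabla f_i(x^{t}) - \frac{b}{p}(h^t_i - \nabla f_i(x^{t}))$ to handle partial participation; this sketch only shows the plain switching structure.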
Rebuttal 1: Rebuttal: Dear AC and Reviewers, We are now presenting new **extra experiments**. See the PDF attached to this rebuttal. If you do not see the PDF here, click "Revisions" after "Author Rebuttal by Authors," and there you will find a link to the PDF. In these experiments, we compare our new method with the previous theoretical state-of-the-art baselines (FRECON and MARINA) that work with the **partial participation setting** and **compression**. We choose only those baselines that support compressed communication from Table 1. We compare the algorithms on two datasets (real-sim and MNIST) and in stochastic and finite-sum settings. We show that our new method requires significantly fewer communication rounds to find an $\varepsilon$-stationary point. **We believe that these experiments, together with the experiments from Section A in the main paper, give us strong evidence that our method has an excellent practical performance.** Even more importantly, we have a theory that provides new state-of-the-art convergence guarantees in the considered settings. Thank you! Please let us know if you have any additional questions or suggestions. Pdf: /pdf/19353319f58cd810923f460fd76dc1da64320f41.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Robust Learning for Smoothed Online Convex Optimization with Feedback Delay
Accept (poster)
Summary: This paper studies Smoothed Online Convex Optimization (SOCO) problems where the goal is to minimize the sum of a per-round hitting cost and a switching cost that penalizes temporal changes in actions. The most significant difference from previous studies is that they assume a delayed hitting cost and a nonlinear multi-step switching cost, which is considerably more general than before and thus makes it technically difficult to provide theoretically guaranteed methods. Furthermore, they also want to use ML-based online optimizers to enhance performance. To that end, they developed a meta-algorithm that robustifies a given online optimizer by projecting its suggested action onto a robustified action set computed from a given expert method and historical observations. They provide a worst-case analysis for the cost of the resulting algorithm, based on which they further establish a robustness guarantee and consistency result in the language of $CR$-competitiveness (see Definition 4) once a condition is satisfied. In the appendix, they explore the empirical performance of the proposed algorithm using a case study of battery management and validate its robustness. Strengths: 1. The studied setting is general enough and can incorporate popular neural networks as a component. 2. The worst-case theoretical analysis seems new and the proof seems correct. 3. Though put in the appendix, the experiments are well conducted with detailed descriptions of the experiment setup and chosen parameters, and a careful analysis of the experiment results. I appreciate the experiments a lot. 4. The paper is mostly well written and the analysis is always to the point. Weaknesses: However, there are some parts not clear enough. 1. The term ``multi-step nonlinear memory'' appears abruptly in line 38. I think a few additional explanations would help readers digest this concept on a first read. 2. In line 200, the author didn't explain the meaning of $\text{cost}(x_{1:T})$. 3. 
Line 353, it is unclear to me what the notation $\text{Rob}_{\lambda} \left(\tilde{x}_{1: T} \right)$ means. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Intuitively, once establishing the worst-case bound uniformly in Theorem 4.1, one could easily get an average version (because the average case is always no worse than the worst case). Hence, it is unclear to me why the author bothered to introduce another result to bound the average cost in Theorem 4.2. 2. In Appendix D, the author spent much space explaining how to compute $\nabla_W \mathrm{cost}(x_{1:T})$. The reason is that the operator of projecting ML predictions into a robust set cannot be easily differentiated as typical neural network layers. So why bother to use this projection method? Why not use a primal-dual method that provides a projection-free alternative? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See the weakness part and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Average cost bound:` The reviewer is correct that an average cost bound can be established based on the worst cost bound. We provide Theorem 4.2 to confirm that it’s beneficial to train the ML model in a robustification-aware manner according to Eqn. (6). Additionally, it implies that the robustification step is more valuable in terms of bounding the average cost when the ML model is not well trained (e.g., due to inadequate training data). More explanations of the average cost analysis are available in Lines 368-387. `Differentiable robustification:` While adding a regularizer is an alternative to differentiating the constrained optimization (i.e., the robustification step in Line 5 of Algorithm 1), we need to properly tune the weight (i.e., Lagrangian multiplier or dual variable) to meet the constraint. Thus, we directly consider differentiating the robustification step to be consistent with our algorithm design. The robustification step can be viewed as an implicit layer, which is also differentiable. Specifically, to facilitate standard training using back-propagation, we have derived the gradients in Appendix D. Note that differentiating an implicit layer is commonly considered in the literature (e.g., the reference at the bottom of this response). `Multi-step nonlinear memory:` It means the current action appears in the future switching costs for multiple steps, and hence penalizes temporal changes in actions over multiple steps. For example, given the position of a robot as the action for motion planning, a two-step memory captures the acceleration (see details in Ref [2] at the bottom of this response). We’ll explain this in the revised paper. `Notation:` $cost(x_{1:T})$ means the total cost given a sequence of actions $x_{1:T}$. 
The notation in Line 353 means the training loss is defined directly in terms of the robustified actions, rather than the pre-robustification ML predictions $\tilde{x}_{1:T}$. Hence, we refer to training using this loss as robustification-aware training (Line 348). **Reference:** [1] Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J. Zico Kolter. Differentiable convex optimization layers. NeurIPS 2019. [2] Shi, Guanya, Yiheng Lin, Soon-Jo Chung, Yisong Yue, and Adam Wierman. Online optimization with memory and competitive control. NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. And I keep my score currently. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and hope that the reviewer's concerns have been satisfactorily addressed. We'd also be glad to engage in more discussions to address any remaining concerns that the reviewer may have.
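To make the projection-based robustification discussed in this exchange concrete, here is a deliberately simplified sketch. It assumes the robust action set is a plain Euclidean ball around the expert's action, which is an invented stand-in: the paper's actual set is defined by reservation-cost constraints and would be computed with a convex solver rather than in closed form.

```python
import numpy as np

def robustify(ml_action, expert_action, radius):
    """Project an untrusted ML prediction onto a ball of the given radius
    around the expert's action (a hypothetical stand-in for the paper's
    robustified action set, which uses reservation-cost constraints)."""
    diff = ml_action - expert_action
    dist = np.linalg.norm(diff)
    if dist <= radius:
        return ml_action  # prediction already satisfies the robustness budget
    return expert_action + radius * diff / dist  # clip back toward the expert

expert = np.array([1.0, 0.0])
ml = np.array([4.0, 4.0])
safe = robustify(ml, expert, radius=1.0)  # lands at distance 1 from the expert
```

For a ball the projection has a closed form with a simple almost-everywhere derivative, which is the kind of property exploited when back-propagating through a robustification step; for general convex constraints one would instead differentiate through the solver, as in the differentiable convex optimization layers reference above.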
Summary: This paper considers the SOCO problem with multi-step nonlinear switching costs and feedback delay. The authors propose the RCL algorithm, which projects the ML predictions onto a robust action set determined by the expert’s predictions. They show that RCL maintains $(1+\lambda)$-competitiveness against the expert while exploiting the potential of ML predictions. Moreover, they identify a sufficient condition under which RCL achieves finite robustness and 1-consistency simultaneously. They also give an upper bound on the average cost of RCL, which reveals the advantage of training the ML model in a robustification-aware manner. Finally, they validate the theoretical results using a case study of battery management. Strengths: - Challenging setup. This paper considers SOCO with hitting cost feedback delay and multi-step nonlinear memory in the switching cost, which is very hard to deal with. - New algorithm design and new proof approach. The intuition behind their algorithm is clearly explained in line 232-240. Theorem 4.1 is proven by considering a new reservation cost, which decouples the dependency of the online action on the history. - Good theorems with adequate explanation. The paragraphs following Theorem 4.1, Corollary 4.1.1 and Theorem 4.2 give comprehensive analysis of the results. Weaknesses: - Assumes known switching costs and smoothness constants, together with convexity and smoothness assumptions on the hitting and switching costs. - The robustification-aware ML model is hard to train using standard back-propagation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - A few lines explaining why the cost difference and the switching cost difference can be upper bounded by $H$ and $G$, respectively, would help understanding. - How can the robustified action set be efficiently computed? If it cannot be efficiently computed, how do you learn that set in your experiment? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have already mentioned in the paper (line 408-417). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Explanation of cost difference bounds:` The smoothness of the cost functions implies that, given a bounded difference in two actions, their cost difference is also bounded (formally stated in Lemma C.1 in the appendix). Thus, when RCL chooses a different action than the expert, the future cost difference can be bounded in terms of the action difference shown in $H$ and $G$ in Section 4.1. `Computing the robust action set:` The constrained problem in (1) for the robust action set is convex, and hence it can be efficiently solved using standard convex solvers without a significant computational burden at runtime. For example, in our experiment, each testing process (including 1400+ inferences) takes less than a second on a laptop (Line 586 in the appendix). `Assumptions:` Along with known switching costs and constants, the convexity and smoothness assumptions on the costs are standard in the SOCO literature. These assumptions are needed for theoretical analysis and can be good approximations of realistic scenarios in practice. `Training using back-propagation:` The robustification step in Line 5 of Algorithm 1 can be viewed as an implicit layer, which can also be differentiated. Specifically, to facilitate standard training using back-propagation, we have derived the gradients in Appendix D. Note that differentiating an implicit layer is commonly considered in the literature (e.g., the reference at the bottom of this response). **Reference:** Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J. Zico Kolter. Differentiable convex optimization layers. NeurIPS 2019. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I have read the rebuttal and the other reviews. And I will keep my score. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and hope that the reviewer's concerns have been satisfactorily addressed. We'd also be glad to engage in more discussions to address any remaining concerns that the reviewer may have.
Summary: This paper considers the smoothed online convex optimization problem, in which, in addition to the standard hitting cost incurred per step, the algorithm also incurs a switching cost which relates to changes in its chosen action. The paper considers switching costs which are multi-step (relate to the previous p time steps). The paper presents an ML-augmented algorithm for the problem, which given a possibly-inaccurate predictor and a robust expert, is able to utilize the predictions from the predictor to the degree to which they are accurate, while not exceeding the cost of the expert times some small factor. This is done by taking the predictions and projecting them onto a space of possible actions dictated by the expert. The paper also discusses how to adjust the training of an ML model to take into account the fact that the model's predictions are then projected onto the actions allowed by the expert. Strengths: The model, and the concept underlying the algorithm, are natural. I liked the figure in Appendix A, perhaps it should be part of the main body of the paper. Weaknesses: In my opinion, the writing of the paper makes it hard to gain an intuitive understanding of the contributions of the paper within the scope of reasonable reviewing. For example, consider Theorem 4.1; excluding the robustness term, it is very hard to understand the significance of this theorem. Perhaps this bound seems simple and intuitive to experts more familiar with this problem. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: none Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Significance:` We’ll revise the paper to improve its presentation. Our contribution is a novel ML-augmented algorithm that exploits ML predictions to improve the average performance while bounding the worst-case performance in a general SOCO setting with hitting cost feedback delay and multi-step non-linear switching costs. The feedback delays introduce additional uncertainties, and multi-step memory in the switching cost means the current action can affect future costs in multiple steps. Both make it substantially challenging to construct a robust action set. In fact, even without ML predictions, addressing the hitting cost feedback delay and multi-step nonlinear memory is already challenging and the subject of an independent study [26]. Our algorithm design and analysis (Theorem 4.1) are substantially new and differ from those used in simple SOCO settings. More specifically, the proof of Theorem 4.1 relies on a novel and crucial technique that removes the dependency of $x_t$ on the history in the online decision process. The second term in the bound in Theorem 4.1 shows how well RCL follows the ML predictions given $\lambda>0$. Specifically, when $\lambda$ increases, RCL will be closer to ML (see the definition of $\Delta(\lambda)$ in Line 252). Moreover, it shows that RCL stays closer to the better-performing expert for guaranteed competitiveness when the expert’s cost is lower, and vice versa. **To summarize, our work is the first learning-augmented algorithm for the challenging SOCO setting with feedback delay and multi-step switching costs. It makes a novel and significant contribution to the growing SOCO literature.** --- Rebuttal Comment 1.1: Comment: Thank you for your answer. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and hope that the significance and novelty of our work have been clarified. 
We'd also be glad to engage in more discussions to address any remaining concerns that the reviewer may have.
Summary: This paper studies the Smoothed Online Convex Optimization with multi-step nonlinear switching costs and feedback delay. In this setting, they propose Robustness-Constrained Learning (RCL), which combines existing online algorithm with a novel reservation cost to robustify untrusted ML predictions. Theoretically, they provide both worst-case and average-case guarantees for RCL. Finally, they provide some experiments to evaluate RCL. Strengths: 1. The paper is well written and organized. The paper thoroughly explains the challenges of incorporating ML predictions in their setting and emphasizes the importance of developing novel algorithmic techniques to overcome these difficulties. 2. The theoretical results are sound and look rigorous. The worst-case bound of RCL is novel. Weaknesses: 1. The weakness of the paper lies in the design of reservation costs, which may seem somewhat technical for analysis purposes. Additionally, the discussion on the optimization complexity for solving the constrained convex problem is missing, leaving a gap in understanding the practical implications of the proposed approach. 2. The paper's experimental results should be included in the main part to enhance confidence in the efficiency of RCL. Presenting empirical comparisons of RCL with other methods would demonstrate its advantages and substantiate the claims made in the paper. This would provide valuable evidence of the proposed algorithm's performance and practical benefits. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There are no limitations with regards to negative societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Reservation cost and complexity:` If we choose an action different from the expert without a reservation cost, it’s possible that the competitiveness against the expert is violated (an example is provided in Lines 210-217). Thus, for guaranteed competitiveness, our novel reservation costs are designed to hedge against any possible uncertainties due to hitting cost feedback delays and multi-step non-linear memory in the switching costs. It bounds the maximum future cost difference in terms of the difference between our actual action and the expert's action. We’ll revise the presentation for the reservation cost to improve its readability. The constrained problem in (1) for computing the robust action set is convex, and hence it can be efficiently solved using standard convex solvers without a significant computational burden at runtime. For example, in our experiment, each testing process (including 1400+ inferences) takes less than a second on a laptop (Line 586 in the appendix). `Simulation results:` Our experimental results for EV charging station management can be found in Appendix B. Our results highlight the advantage of RCL in terms of robustness guarantees compared to pure ML models, as well as the empirical benefit of training a robustification-aware ML model in terms of the average cost. Per the reviewer’s suggestion, we’ll include some of the key results in the main body of the paper for better understanding. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. After reading the rebuttal and the other reviews, I decided to keep my original score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and hope that the reviewer's concerns have been satisfactorily addressed. We'd also be glad to engage in more discussions to address any remaining concerns that the reviewer may have.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies a problem, namely, Smoothed Online Convex Optimization (SOCO) with feedback delay and multi-step nonlinear memory, proposing a machine-learning-augmented online algorithm (RCL) with certain theoretical guarantees. This algorithm implicitly hedges between the decisions made by online and offline predictors, using a new constrained action set onto which the algorithm projects its decisions. This constrained set requires the decisions within it to achieve a trade-off between the hitting cost and the switching cost. By this design, this work achieves the first worst-case-type cost bound for a machine-learning-augmented algorithm under the given setting, and proposes a sufficient condition for the proposed algorithm to attain finite robustness and 1-consistency simultaneously. Moreover, the paper demonstrates that the algorithm RCL benefits from robustification-aware training and offers an average cost guarantee. Comprehensive experiments are conducted to support these results. Strengths: The writing of this paper is good with detailed explanations of the achieved results and the motivation is well substantiated by the experiments. The proposed algorithm is clear and easy to understand. The inherent nature of delayed feedback in the considered problem might pose an obstacle for algorithm design and analysis, and this paper proposes a solution for it. Weaknesses: I harbor some reservations regarding the applicability and scalability of the theoretical findings presented in this paper. Please refer to the "Questions" section below for further details. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. This paper claims that it provides a sufficient condition to achieve the finite robustness and 1-consistency simultaneously (line 63). But Corollary 4.1.1 suggests that this condition might be applicable only to the proposed algorithm. 
Moreover, this condition requires a per-round lower bound for the switching cost of the expert algorithm. Does this condition hold for a broad class of expert algorithms? Could the sufficient condition proposed in Corollary 4.1.1 be suitable for a more general ML-augmented algorithm to realize the best-of-both-worlds guarantee? 2. I am confused about the term "worst-case" used in the discussion about Theorem 4.1. Based on my understanding, the worst-case bound should account for the upper bound of the minimax ratio. It would be beneficial if the authors could discuss the relationship between the minimax bound and the bound in Theorem 4.1. 3. There are concerns regarding the existence and convexity of $\mathcal{X}_t$. Does the solution of Eq. (1) below line 223 always exist? It appears from Corollary 4.1.1 that the action set $\mathcal{X}$ is finite. If that is the case, then the "robustified" action set defined at line 223 may not exhibit convexity due to its discrete nature. Can the authors answer the above two questions? 4. Would the authors be able to provide further discussion concerning the technical contributions made in addressing the issues related to feedback delay, especially in comparison with previous works? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Not much. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Sufficient condition:` The prior study [1] proves an "impossibility" result: 1-consistency and finite robustness are not achievable simultaneously without further assumptions due to the fundamental hardness of the SOCO problem (even without delay feedback or multi-step memory). Our result in Corollary 4.1.1 shows a sufficient condition for our algorithm (RCL) to achieve 1-consistency and finite robustness simultaneously. There might exist other conditions under which alternative algorithms proposed in the future may also achieve 1-consistency and finite robustness simultaneously, but our work provides the first condition in the field and advances the result in [1]. Our sufficient condition is not unrealistic in practice (Line 329-332). In fact, the condition is violated by an expert (not necessarily the best-known competitive expert iROBD) only in dummy cases (i.e., its action that results in zero switching cost also has zero hitting cost). Even when the condition is not satisfied, our main result in Theorem 4.1 still bounds the cost of RCL and is consistent with the "impossibility" result in [1]. `Minimax and Theorem 4.1:` In online optimization, the competitive ratio is typically defined as the maximum cost ratio of one algorithm to another (Definition 2). Thus, the "worst case" means the case that results in the maximum cost ratio. "Minimax" arises when minimizing the competitive ratio (i.e., minimizing the maximum cost ratio). The "worst case" in Line 256 means that our bound in Theorem 4.1 applies for any possible problem instances, including those worst-case instances that have the maximum cost ratios. `Existence and convexity of` $\mathcal{X}_t$: An improperly-constructed robust action set can result in empty solution sets (example in Lines 207-217). 
Thus, we make contributions by designing a reservation cost that ensures there always exists a non-empty solution set regardless of future uncertainties. This is also stated in Lines 241-242 and proved in Lemma C.4 in the appendix. As in the standard SOCO literature (e.g., [1]), the action set $\mathcal{X}$ is assumed to be a subset of $R^n$ (Line 116), and its size $|\mathcal{X}|$ can be measured by the maximum norm distance between two points in $\mathcal{X}$ (e.g., [3,4]). Thus, along with convex costs, the set $\mathcal{X}_t$ is also convex. We'll clarify this point in our revision. `Feedback delay`: The feedback delays introduce significant uncertainties for robustness guarantees, as we cannot immediately evaluate the costs of our algorithm and the expert. Thus, for robustness guarantees, we have to consider additional risks due to not following the expert's action and bound the maximum cost difference in Eq. (2) by exploiting the cost structures. The prior study [2] also considers multi-step feedback delays, but it has a more restrictive model where all the feedbacks have the same delay (whereas we allow different delays of up to $q$ steps in Definition 1 in Line 154-157). Most importantly, [2] is a purely expert algorithm and does not consider an ML prediction. To our knowledge, our work is the first to consider learning-augmented algorithms for the challenging setting SOCO with feedback delays. **References:** [1] Daan Rutten, Nicolas Christianson, Debankur Mukherjee, and Adam Wierman. 2023. Smoothed Online Optimization with Unreliable Predictions. Proc. ACM Meas. Anal. Comput. Syst. 7, 1, Article 12 (March 2023). [2] Weici Pan, Guanya Shi, Yiheng Lin, and Adam Wierman. 2022. Online Optimization with Feedback Delay and Nonlinear Switching Cost. Proc. ACM Meas. Anal. Comput. Syst. 6, 1, Article 17 (March 2022). [3] Marek Bukáček, Pavel Hrabák, Milan Krbálek. 2018. Microscopic Travel Time Analysis of Bottleneck Experiments. 
Transportmetrica A Transport Science. [4] https://engineering.purdue.edu/ChanGroup/ECE302/files/Slide_4_01.pdf --- Rebuttal Comment 1.1: Comment: We thank you for your valuable time and effort in reviewing our paper. We hope that our responses have satisfactorily addressed your concerns and provided a better clarification of our contribution. We would genuinely appreciate your responses or comments should there be any remaining concerns, and are more than happy to address them. Once again, thank you for your valuable time in reviewing our paper.
Summary: This paper studies Smoothed Online Convex Optimization with multi-step nonlinear switching costs and feedback delay. They propose an ML-augmented online algorithm named RCL that combines ML predictions with an expert online algorithm. The authors show that RCL can guarantee $(1 + \lambda)$ competitive ratio against any expert while improving the average-case performance. They show the effectiveness of RCL using battery management as a case study. Strengths: 1. Learning augmented algorithms is an important paradigm for designing more practical algorithms. Incorporating ML-augmented algorithms with multi-step nonlinear switching costs and feedback delay is a new and challenging problem. 2. The algorithm is simple and intuitive to understand. The authors provide a detailed analysis of the algorithm and the proof is rigorous. 3. This paper is well-written and easy to follow. The authors provide a detailed literature review and a clear introduction to the problem setting. The authors also provide a detailed discussion of the experimental results. Weaknesses: 1. The algorithmic contributions of the paper are a bit weak. The idea of constructing reservation costs to hedge against any possible uncertainties and then solving a constrained convex problem to project the ML prediction into a robust action set is quite standard in ML-augmented algorithms. 2. There lacks a lower bound on the trade-offs between robustness and average performance in ML-augmented algorithms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it possible for the reservation cost design to be overly conservative, resulting in the constrained optimization problem yielding an empty solution set? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your efforts and time in reviewing our paper. `Algorithm design and novelty:` It’s essential to construct a robust action set to hedge against future uncertainties for guaranteed robustness in learning-augmented online algorithms. Nonetheless, how to do so is very challenging. In our problem, the feedback delays introduce additional uncertainties, and multi-step memory in the switching cost means the current action can affect future costs in multiple steps. Both make it substantially challenging to construct a robust action set. In fact, even without ML predictions, addressing the hitting cost feedback delay and multi-step nonlinear memory is already challenging and merits an independent study. For the first time, our carefully-designed reservation cost guarantees the competitiveness of unreliable ML predictions in this challenging SOCO setting. To the best of our knowledge, this is a novel and significant contribution to the SOCO literature. `Empty solution set:` An improperly-constructed robust action set can result in empty solution sets (example in Lines 207-217). This also further reinforces our previous point that "how to do so (i.e., constructing a robust action set) is very challenging". Therefore, we make contributions by designing a reservation cost that ensures there always exists a **non-empty** solution set regardless of future uncertainties. This is also stated in Lines 241-242 and proved in Lemma C.4 in the appendix. `Lower bound:` The performance analysis for learning-augmented online algorithms involves a tradeoff between following the ML predictions for average performance improvement and following the expert for robustness (as governed by $\lambda>0$ in our paper). To the best of our knowledge, the optimal (or "lower bound") tradeoff still remains an open challenge in learning-augmented algorithms, except for a small number of simple online problems.
Our learning-augmented algorithm is the first for the general SOCO setting with delayed feedback and multi-step memory. Our result in Theorem 4.1 bounds the performance of RCL, and shows the clear insight that the robustness parameter $\lambda>0$ governs the tradeoff between following ML predictions and the expert's actions. While it's important to characterize the Pareto-optimal tradeoff curve, we leave it as future work that the learning-augmented algorithm community can explore (Line 415). --- Rebuttal Comment 1.1: Comment: After reviewing the rebuttal and considering the other reviewers' feedback, I have decided to maintain my original score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our rebuttal and hope that the reviewer's concerns (e.g., non-empty solution sets guaranteed by our novel algorithm design) have been satisfactorily addressed. We welcome any additional comments the reviewer may have and would be glad to address them accordingly.
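As an illustration of the projection idea debated in the review and rebuttal above, here is a toy one-dimensional sketch of clipping an ML prediction into a robust set around the expert's action. This is our own simplification, not the authors' RCL algorithm (which solves a constrained convex program built from reservation costs); all names are hypothetical.

```python
def robustify(ml_action, expert_action, lam):
    """Toy stand-in for a robustification step: keep the ML
    prediction unchanged if it already lies within a radius-`lam`
    interval around the expert's action; otherwise clip it to the
    nearest boundary of that interval."""
    lo, hi = expert_action - lam, expert_action + lam
    return min(max(ml_action, lo), hi)
```

When the ML prediction already lies in the robust set it is used unchanged, which is how such schemes retain the average-case benefit of good predictions while bounding the damage from bad ones.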
null
null
null
null
Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms
Accept (poster)
Summary: The paper considers the problem of Quadratic Hessian bandit and establishes both lower and upper bounds on algorithm performance to provide a characterization of simple regret in this setting. The authors also propose an algorithm that is agnostic to the knowledge of the Hessian. Strengths: The paper considers a new problem on quadratic bandits and provides a complete characterization of simple regret in the setting. The completeness and tightness of the results paint a clear picture of the results about this new problem. Weaknesses: One thing that the paper needs to improve upon is the writing. The exposition lacks a flow and proofs of the theorems seem to be all over the place. Some points that I realized can be improved: 1. The proof of Theorem 2.1 can be improved. It took me quite a bit of effort to understand the steps, mainly because some intermediate steps were missing. The missing steps were non-trivial in the sense that if they are missing, it is not easy for the reader to make the connection unless they have several results handy. I would suggest to please ensure that the proof is largely self-contained and that it can be understood by the reader without actually having them work out intermediate steps. If it cannot be accommodated in the main text, you can move some intermediate results to the Appendix or at least provide a citation of some of the results being used in the intermediate steps, e.g., the MMSE estimator. Also I think you are missing an "$L_k$" in equation 7. 2. The statement of Theorem 2.1 is not clear. From the text leading up to it and the proof, it seems that it is a statement about the lower bound. But there is no lower bound anywhere in the statement of the theorem (apart from the one that is implicit in the limit for Gaussian noise). I would suggest to please make the statement of the theorem clear and precise. 3. Similarly, the way Theorem 2.4 is phrased, it is (almost) immediate from the current statement of Theorem 2.1.
I think in addition to universal optimality you also need to mention that this algorithm does not need the prior information about the Hessian. This not only makes the theorem clearer but also stronger. In addition to the above comments, I encourage the authors to please check the exposition for consistency and flow. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: The choice of $x_k$ and the lower bound on $T$ in Theorem 2.2. (line 67) are rather interesting. Do you have an intuitive/physical explanation of why that particular expression comes up or what that represents? I haven't come across that combination of $\lambda_k^{-3/2}$ and $\lambda^{-1/2}$ anywhere else and it might have something fundamentally insightful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the suggestions and below are the detailed responses. Q1: Details for the Proof of Theorem 2.1. A1: We will provide additional details for the intermediate steps, including adding a reference for the optimality of MMSE estimators. We will also correct the typo in equation (7). Q2: Clarity on the statement of Theorem 2.1. A2: We would first like to clarify that Theorem 2.1 contains both upper bounds and lower bounds, only that the proof of the upper bound is much simpler. We decided to include the upper bound as it can be compared to Theorem 2.4. However, we can better emphasize the lower bound by revising the last sentence to be “..., i.e., a matching lower bound implies the following equality”. Q3: Clarity on the statement of Theorem 2.4. A3: Indeed, in Theorem 2.4 we are exactly providing an algorithm that is independent of the Hessian (which turns out to be non-trivial compared to its counterpart in Theorem 2.1). We will revise the statement to “There exists an algorithm $\mathcal{A}$, which does not depend on the Hessian parameter $A$, such that for $A$ being any PSD matrix, the achieved minimax…” to better clarify this fact. Q4: Choice of $x_k$ and the transition threshold of $T$. A4: Intuitively $x_k$ is selected such that the sample complexity for learning the sign of $x_k$ is proportional to the penalty of an incorrect estimation, so that the performance of an algorithm on these special cases is roughly independent of how many samples are used in each direction. The explicit exponents ($1/2$ and $3/2$) are a result of the quadratic growth of the objective function. We believe that if the function class is instead in the form of $f=\sum \lambda_k (x_k-x_{0,k})^{\alpha}$ for some general $\alpha$, then the correct $|x_k|$ for the hard instances will be proportional to $\lambda_{k}^{-\frac{3}{\alpha+2}}$. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your response.
I will maintain my score and suggest the authors to edit the manuscript based on my earlier comments should it go through. I have no further questions.
Summary: This paper studies quadratic stochastic zeroth-order optimization and provides lower and upper bounds on the minimax simple regret. First, the authors provide an asymptotically tight upper bound of $\frac{1}{2}({\rm Tr}(A^{-1/2}))^2$ where $A$ is the Hessian of the quadratic. Then, the authors provide asymptotic lower bounds which take two distinct expressions depending on properties of $A$. Next, the authors show the existence of a universally optimal algorithm, meaning that it asymptotically achieves the Hessian-dependent bounds without accessing the Hessian information. Strengths: Clear problem statement and result presentation. Weaknesses: See **Questions**. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - "Energy allocation" seems to be the same as "energy spectrum" defined in Section 3.1. Please formally define energy allocation or unify the terminology. - Line 207: serve --> served. - Line 195: remove "an". - Please formally define Truncated Diff (projecting the diff onto [-t, t]) and other simple operations in the algorithms (Simple Project, etc.). - Regarding Section 4.1, are there potential ways to do better than applying the 1D estimator poly(d) times to obtain an estimate for $A$? Are there more efficient joint estimators that make use of $A$ being p.s.d.? If so, it would be helpful to point out (without incorporating them in this paper's analysis as it does not seem necessary). It might still be useful in future finite-sample analysis. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors defined the scope of this paper to be within specific problem setups. This work is largely theoretical and does not have potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments, and will revise the identified typos. Below are the point-to-point responses to the comments. Q1: Energy allocation vs energy spectrum A1: Energy spectrum refers to the particular function defined above line 86, and we used energy allocation to refer to the act of designing the algorithm to match the energy spectrum to any particular vector (e.g., assign $2t_k$ samples to the $k$th dimension in Algorithm 1). This will be better clarified in the revised version. Q2: Formal definition of subroutines A2: Thanks for the suggestion; we will add the formal definitions in the corresponding algorithm environments. For example, we will revise the pseudo code of the truncated diff as follows: - For $k\leftarrow 1$ to $t$ + Let $y_+$, $y_-$ each be a sample of $f$ at $x_0$, $x_1$, respectively + Compute the projection of the difference $y_+-y_-$ to the interval $[-t^{0.5}, t^{0.5}]$, i.e., let $z_k=...$ - End for - Return the average $\frac{1}{t} \sum_{k=1}^t z_k$ Q3: Better ways to estimate the Hessian? A3: Instead of estimating the Hessian entrywise, there are several possible alternatives, e.g., see the following paper [A]. However, for quadratic $f$, Hessian estimation entrywise is order-optimal (in terms of squared Frobenius norm of the error): As a lower bound, learning the Hessian is harder than learning a vector of length $d(d+1)/2$ by observing its unit projections under Gaussian noise (because sampling any $f=x^{\intercal}Ax$ essentially returns a noisy version of a projection of its $d(d+1)/2$ independent entries), hence we need at least $\Omega(d^4/\epsilon^2)$ samples to achieve a squared estimation error of $\epsilon^2$. This exactly matches the error achieved by entry-wise estimation. For the same reason, the PSD property will not reduce the optimal sample complexity beyond a constant factor.
[A] Balasubramanian and Ghadimi, “Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points” --- Rebuttal Comment 1.1: Comment: Thanks for the explanation on the complexity of Hessian estimation, truncated diff steps and energy terminologies. They look good to me.
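The truncated-diff subroutine described in the rebuttal above could be implemented roughly as follows. This is a hedged illustration, not the authors' code; the function name, argument names, and the Gaussian default for the noise are our own assumptions, chosen only to mirror the pseudocode ($x_0$, $x_1$ become `x_plus`, `x_minus`).

```python
import random

def truncated_diff(f, x_plus, x_minus, t, noise=lambda: random.gauss(0, 1)):
    """Estimate f(x_plus) - f(x_minus) from t pairs of noisy samples,
    projecting each pairwise difference onto [-sqrt(t), sqrt(t)] so
    that heavy-tailed noise cannot dominate the average."""
    bound = t ** 0.5
    total = 0.0
    for _ in range(t):
        y_plus = f(x_plus) + noise()    # noisy zeroth-order query at x_plus
        y_minus = f(x_minus) + noise()  # noisy zeroth-order query at x_minus
        total += max(-bound, min(bound, y_plus - y_minus))  # truncated z_k
    return total / t
```

The truncation level growing like $\sqrt{t}$ is what lets the averaged estimator concentrate even when individual noise samples lack higher moments, which is the role the rebuttal attributes to this subroutine.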
Summary: The paper considers the black-box optimization problem of quadratic functions specified with PSD matrices. The paper shows matching upper and lower sample complexity bounds depending on the matrices. For the upper bound, the paper proposes algorithms. Strengths: The paper gives tight upper and lower sample complexity bounds for the black-box optimization of quadratic functions. The strength of the result is that the bounds depend on the PSD matrix, which means the bounds are instance-dependent. These results recover previous results as well. Weaknesses: I feel that the focus of the paper is a bit narrow. So far, I do not know any motivating situation where we need to solve some black-box optimization of quadratic functions. Although the results are technically solid, their motivation and practical usefulness are not strong enough. The paper would be better evaluated at more theory/optimization-oriented conferences/journals. Or, if the proposed algorithms were more practical, say, working for general convex functions but working better when the function is quadratic (best-of-both-worlds type algorithms), I would evaluate the paper higher. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Does the proposed algorithm work for general convex functions? I read other reviews and rebuttal comments. The rebuttal answers my question sufficiently. But I would keep my score the same since my concern about the motivation is not resolved. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As raised above, the technical results are solid, but practical usefulness is limited.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. Convex quadratic functions depict the local geometry of strongly convex functions near the critical point, and their sample complexity remains an open question in the online learning community. Although our result does not directly apply to general convex functions, fully characterizing the sample complexity in this important (quadratic) case can serve as a stepping stone that provides essential techniques and a better understanding of how to achieve optimal sample complexities in general cases.
Summary: This paper studies zero-th order quadratic optimization with bandit feedback. It provides a comprehensive analysis of the optimal sample complexity, which depends on the Hessian of the objective function. There are two contributions. Firstly, it establishes lower bounds on Hessian-dependent complexities using the concept of energy allocation, capturing the interaction between the search algorithm and the geometry of the problem. The matching upper bound is achieved through optimizing the energy spectrum. Secondly, an algorithm is presented that achieves asymptotically optimal sample complexities for all Hessian instances, independent of the Hessian. These optimal sample complexities remain valid even for heavy-tailed additive noise distributions, enabled by a truncation method. Strengths: 1. If the result is correct, it provides matching upper and lower bounds for quadratic bandits, which is pretty nice 2. The results hold even for heavy-tailed noise with the help of truncation. Weaknesses: 1. The definition of the effective dimension $k^*$ does not look correct 2. The proposed algorithm needs to know a lot of problem-related quantities, which is not practical 3. There is no experimental evaluation Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is $dimA$? It is not defined. Is it just $d$? In Line 67, in the definition of the effective dimension $k^*$, do you miss a fractional number notion? i.e., $/$ Otherwise, it would be an ill-defined dimension. The construction of the lower bound is a bit strange. When you construct x_0 in this way, it will naturally lead to the definition of the effective dim $k^*$. I’m wondering if the current lower bound is truly essential, or just an artifact? In my opinion, the non-asymptotic lower bound (Theorem 2.2) is more interesting, why don’t you present it instead of proving a weaker (and asymptotic) lower bound in the main paper? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The zero-th order quadratic optimization problem is a bit simple. But it is a good starting point The lower bound proof technique lacks generality. It might be simplified by the generalized le cam’s method. See the textbook “Bandit algorithms” Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments. We have now provided the following point-to-point response. Q1: The definition of dim$A$? A1: In the submitted version, our result is stated for dim$A=d$, where dim refers to the dimension of the Euclidean space over which $A$ is defined. While our result holds true under this definition, it later came to our awareness that Theorem 2.2 still holds if dim$A$ instead denotes the rank of $A$, and the proof follows almost the same arguments. We plan to let dim$A$ denote the rank of $A$ in the revised version and update the proof steps accordingly. Q2: The definition of $k^*$ in line 67? A2: We would like to clarify that the condition on $T$ in line 67 is correct under the convention of $0^{-1}=+\infty$. So, $k^*$ is always no greater than the rank of $A$. As we plan to update the definition of dim$A$ to be the rank of $A$, this clarity issue is automatically resolved as $\lambda_k^{-\frac{1}{2}}$ for $k\in\{1,2,...,\text{dim} A\}$ is always finite. Q3: Is the current lower bound truly essential or just an artifact? A3: First, we would like to clarify that the provided bounds themselves are fundamental as they are matched by upper bounds, and the choice of $k^*$ only affects how the bounds are presented. The transition threshold of $T$ in Theorem 2.2 (or the definition of $k^*$) is fundamental to some degree, as in certain regimes, this specific choice of $k^*$ is required for the bound to be order-wise tight. There could be potential flexibility in how $k^*$ is selected for certain cases, but we choose to present one concrete example in our theorem for brevity. Q4: Why prove a weaker (and asymptotic) result in the main paper instead of the non-asymptotic lower bound (Theorem 2.2)? A4: First, we would like to clarify that Theorem 2.1 is not merely a strictly weaker version of Theorem 2.2, as it characterizes the exact constant factor for the minimax asymptotic regret.
Besides, the main paper focuses on the proof for the asymptotic case for the following reasons: 1. The proof of Theorem 2.1 contains all essential ingredients (Theorem 2.2 is proved following exactly the same construction ideas, but with additional technical details in the analysis); 2. the statement of Theorem 2.1 is simpler compared to the non-asymptotic case, so the readers are not lost in technical details; 3. Theorem 2.1 is essential for understanding the significance of Theorem 2.4. Q5: Comparison to Le Cam's method. A5: We would like to thank the reviewer for pointing out the reference. Compared to classical methods, such as Le Cam's method or general lower-bounding approaches for bandit algorithms, a key distinction in our proof is to construct hard instances and provide analysis tailored for each Hessian instance. To the best of our knowledge, our proof (especially the analysis in Section 3.3) may not be directly obtained from earlier approaches, and we believe our proposed techniques can be a stepping stone to be applied to more general settings. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors have partly addressed my concerns regarding dim$A$ and $k^*$. I am raising my score by +1 accordingly.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning
Accept (poster)
Summary: This paper introduces KIAN, a method for leveraging multiple sub-optimal policies to solve reinforcement learning tasks. The paper starts out by introducing a new Knowledge-Grounded MDP, which adds a set of external knowledge policies to the traditional MDP framework. To leverage these external knowledge policies, KIAN learns an embedding for each knowledge policy (including an inner knowledge policy for the current agent). Then at each step, the current state is mapped to a query embedding that can be used to find the policy that is best equipped to take an action at that timestep. Strengths: * The paper is structured well and is easy to read. * The authors demonstrated great attention to detail by motivating the theorems and equations with intuition before defining them concretely. * The paper is very technically sound. Weaknesses: * > In such a case, whenever the knowledge set is updated by adding or replacing policies, prior methods require relearning the entire multi-policy fusion process, even if the current task is similar to the previous one. This is because their designs of knowledge representations are intertwined with the knowledge-fusing mechanism, which restricts the number of policies in the knowledge set from being changed. * It would be good to point to specific prior works that suffer from this problem so the reader can build intuition. * There are no visualizations of the tasks used for evaluation. Adding pictures of the environments would help readers understand what the agent needs to do. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * I don't fully understand the incremental property. In the introduction, the authors state that "Humans do not need to relearn how to navigate the entire knowledge set from scratch when they remove outdated strategies or add new ones." But then Definition 3.5 states that an incremental agent has two properties: 1.
The agent can use different knowledge policies to solve a single KGMDP 2. Given a sequence of KGMDPs, the agent can solve them with different knowledge sets. * In its current form, definition 3.5 does not specify the case where an agent adds or removes a specific policy from its knowledge set. I think a condition should be added to highlight how an incremental agent should be able to leverage a sequence of knowledge sets that differ by one or more policies. * How do the initial external knowledge policies perform on the given tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discussed broader impact and future work in Appendix D, but no limitations were addressed. Could the authors please add limitations to either Appendix D or the conclusion? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
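The embedding-based attentive action prediction summarized in the review above might look roughly like the following. This is a speculative sketch based only on the review's description, not the paper's KIAN implementation; the function names and the dot-product scoring are our own assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def fuse_actions(query, policy_keys, candidate_actions):
    """Score each knowledge policy by the dot product between the
    state's query embedding and that policy's key embedding, then
    return the attention-weighted mixture of the candidate actions."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in policy_keys]
    weights = softmax(scores)
    dim = len(candidate_actions[0])
    return [sum(w * a[i] for w, a in zip(weights, candidate_actions))
            for i in range(dim)]
```

Because each policy contributes only a key embedding and a candidate action, adding or removing a policy changes the length of the lists without requiring any structural change to the fusion step, which is consistent with the incremental property the review discusses.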
Rebuttal 1: Rebuttal: 1. *"In such a case, whenever the knowledge set is updated by adding or replacing policies, prior methods require relearning the entire multi-policy fusion process, even if the current task is similar to the previous one. This is because their designs of knowledge representations are intertwined with the knowledge-fusing mechanism, which restricts the number of policies in the knowledge set from being changed."* *It would be good to point to specific prior works that suffer from this problem so the reader can build intuition.* Thank you for the suggestion! We will add citations (e.g., for KoGuN and A2T) after the statement. 2. *There are no visualizations of the tasks used for evaluation. Adding pictures of the environments would help readers understand what the agent needs to do.* Thank you for the suggestion! We provided some screenshots of the environments used for evaluation in Figures 6 and 10 in the additional-result document and will add them to the final paper. 3. *I don't fully understand the incremental property. In the introduction. the authors state that "Humans do not need to relearn how to navigate the entire knowledge set from scratch when they remove outdated strategies or add new ones." But then Definition 3.5 states that an incremental agent has two properties:* 3.a) The agent can use different knowledge policies to solve a single KGMDP 3.b) Given a sequence of KGMDPs, the agent can solve them with different knowledge sets. *In its current form, definition 3.5 does not specify the case where an agent adds or removes a specific policy from its knowledge set. I think a condition should be added to highlight how an incremental agent should be able to leverage a sequence of knowledge sets that differ by one or more policies.* Thank you for the suggestion! We assumed changing $\mathcal{G}$ includes the operations of adding/removing knowledge policies. We will add concrete conditions of these operations in Definition 3.5. 4. 
*How do the initial external knowledge policies perform on the given tasks?* The initial external knowledge policies barely succeeded in most of the tasks we evaluated. These results are plotted in Figure 3 with the label “BC”. 5. *The authors discussed broader impact and future work in Appendix D, but no limitations were addressed. Could the authors please add limitations to either Appendix D or the conclusion?* Yes! We have discussed the limitations of this work. Please refer to “General Response - Limitations”. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thank you for addressing my concerns! I can't see Figures 6 and 10 in the supplementary material nor the latest paper revision. --- Reply to Comment 1.1.1: Title: Thank you for letting us know! Comment: Thank you for letting us know! There should be a link to the pdf with new experiments in the General Author Rebuttal. However, the link disappears when we check OpenReview today. We will let the program chairs and area chairs know about this.
Summary: The authors of this work introduce Knowledge-Grounded RL (KGRL), an RL framework that combines multiple knowledge policies to achieve human-like efficiency and flexibility in learning. They propose a new actor architecture called Knowledge-Inclusive Attention Network (KIAN) that enables arbitrary combinations and replacements of external policies through embedding-based attentive action prediction. KIAN also addresses entropy imbalance, a challenge in KGRL, addressing exploration in the environment. Strengths: Firstly, the paper addresses a relevant and interesting problem in the field of reinforcement learning (RL) by improving sample efficiency from arbitrary external policies and enabling knowledge transfer skills. Furthermore, the paper is a well-written work that effectively conveys the ideas and findings in a clear and concise manner. The authors demonstrate excellent writing skills, employing appropriate terminology and organizing the content in a manner that enhances readability and comprehensibility. In addition, the paper effectively establishes the motivation and position of the study in the existing literature. The authors articulate the significance and relevance of the research problem, demonstrating a strong understanding of the field. They situate their work within the broader scholarly context, highlighting how their study fills a gap in knowledge or builds upon prior research. The methodology employed in the research is clearly described, allowing readers to understand and replicate the study. The authors provide a detailed and comprehensive explanation of the experimental and theoretical approach used, supported by well-designed figures and diagrams. These visual aids enhance the understanding of the methods employed and facilitate better comprehension of the research. Lastly, the authors provide solid mathematical understanding for their proposed method. Collectively, these strengths highlight the quality of the paper. 
Its focus on addressing a relevant and interesting problem, combined with its well-written content, clear methodology, positioning in the literature, and mathematical rigor, makes it an interesting contribution. Weaknesses: The paper has one notable weakness that should be addressed to enhance the overall quality. The evaluation presented in the paper is quite limited. The results obtained for the OpenAI robotics task only offer slight support for the proposed method. The authors should consider expanding the evaluation to provide a more comprehensive assessment of the proposed method's performance. Additionally, the use of only 3 or 5 seeds in the experiments may be insufficient, especially when considering the large error bars observed in some models. The authors should consider increasing the number of seeds to improve the statistical robustness of the results. Moreover, the paper lacks clarity regarding the error bars shown in the plots. It is not explicitly stated what these error bars represent, which hinders the interpretation and understanding of the presented data. Lastly, the paper would greatly benefit from conducting ablation studies to investigate the effects of various factors on the proposed method's performance. Specifically, the authors could consider performing ablation studies on - the influence and distribution of attention/weights of actors, - the impact of random/irrelevant policies in $\mathcal{G}$, - the impact of the (near) optimal policy in $\mathcal{G}$, - the impact of a larger set of knowledge policies $\mathcal{G}$, - the effects of different types of knowledge policies, - and an investigation of the mentioned entropy balance issue that the authors specifically address in their method. These ablation studies would provide valuable insights into the individual contributions and impacts of these factors on the overall approach.
Overall, addressing these weaknesses would significantly improve the research paper, clarifying important aspects such as error bars, conducting relevant ablation studies, ensuring consistency in reporting variances, and strengthening the empirical evidence for the proposed method. ### Minor - I assume ori-KIAN is KIAN with the original $\mathcal{G}$; this should be mentioned in the text. - l.314 mentions less variance, but the tables do not show variances for individual experiments Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the above-mentioned ablation studies. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations have not been discussed by the authors. The above-mentioned ablation studies will probably help to identify the shortcomings of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *The use of only 3 or 5 seeds in the experiments may be insufficient, especially when considering the large error bars observed in some models. The authors should consider increasing the number of seeds to improve the statistical robustness of the results.* Thank you for the suggestion! We have increased the number of random seeds run for all experiments. Please refer to “General Response - Experimental Significance”. 2. *The paper would greatly benefit from conducting ablation studies to investigate the effects of various factors on the proposed method's performance.* Thank you for the suggestion! We have conducted a series of ablation studies. Please refer to “General Response - Ablation Studies and More Analyses”. 3. *The paper lacks clarity regarding the error bars shown in the plots. It is not explicitly stated what these error bars represent, which hinders the interpretation and understanding of the presented data.* Thank you for pointing out the clarity issue! Each error band in all figures is a 95% confidence interval. In addition, we found that the colors for all algorithms in Figure 3 make it difficult to tell the error bands of KIAN, especially for OpenAI robotic tasks. Thus, we changed the colors of all curves in Figure 5. We will update the explanation of the error bands and figures in the final paper. 4. *The results obtained for the OpenAI robotics task only offer a slight support for the proposed method.* Actually, for Pick-and-Place and Push in Figure 3 (Figure 5) as well as Push->Pick-and-Place in Figure 4, KIAN converges much faster and has much smaller error bars compared to all other baselines. Also, in Tables 1 and 2 (zero-shot transfer experiments), KIAN’s performance is generally the best or very close to the best performance for OpenAI robotic tasks. Finally, in Appendix E, we showed that KIAN performs better in zero-shot transfer experiments with small variances for all OpenAI robotic tasks. 
We believe that these results suggest that KIAN is also effective for continuous-control tasks. 5. *I assume ori-KIAN is KIAN with the original , this should be mentioned in the text.* Thank you for pointing out this clarity issue! Yes, ori-KIAN is KIAN without using previously learned query and keys, which is the same setting as used in Figure 3 (Figure 5). We will clarify this in the final paper. 6. *l.314 mentions less variance, but tables do not show variances for individual experiments* Thank you for pointing out this clarity issue! We were limited by space when submitting the original paper, so we moved the tables with variances to Appendix E. We will clarify this in the final paper. 7. *The limitations have not been discussed by the authors. The above mentioned ablation studies will probably help to identify the shortcomings of the proposed method.* We have added limitations of our proposed method! Please refer to “General Response - Limitations”. 8. *The influence and distribution of attention/weights of actors:* Figure 6 in the additional-result document shows the weights of each knowledge policy after training. The results demonstrate different behaviors of KIAN in discrete decision-making and continuous control tasks. 8.a) Discrete decision-making: An agent learns to attend to different knowledge policies according to the current situation. Specifically, in a DoorKey-8x8 environment, the agent chooses to sequentially “pick up the key”, “open the door”, “search for the goal”, and “reach the goal”. Note that the attention to the inner policy does not disappear throughout the entire episode. This shows that the inner policy helps an agent explore the environment. 8.b) Continuous control: An agent learns to attend to only the inner knowledge policy. This result shows that KIAN enables effective knowledge transfer between external and inner policies. 
We believe this difference from the behaviors in discrete decision-making can be attributed to (1) how an action is sampled in continuous control tasks and (2) the suboptimality of external policies: Since an action is sampled by Gumbel softmax, only the action suggested by one knowledge policy will be applied, even though in probability, all knowledge policies are fused together. As a result, the agent may find it more efficient to let the inner policy first copy the useful skills from suboptimal external policies and focus on the inner policy to complete a task. --- Rebuttal 2: Title: Reviewer Reponse Requested Comment: Hello Reviewer, The authors have made efforts to address your comments on their work via the rebuttal. Part of the NeurIPS review process is participating meaningfully in the rebuttal phase to help ensure quality. Please read and respond to the author's comments today, latest tomorrow, to give everyone time to respond and reach proper conclusions. Thank you for your assistance in making NeurIPS a great conference for our community. --- Rebuttal Comment 2.1: Title: Answer to rebuttal Comment: I thank the reviewers for providing the additional ablation studies, I have changed my score accordingly.
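The rebuttal point above notes that, with Gumbel-softmax sampling, only one knowledge policy's suggested action is applied at each step, even though all policies are fused in probability. A minimal numpy sketch of that selection step (the weights and actions below are hypothetical, not taken from the paper):

```python
import numpy as np

def gumbel_max_select(weights, rng):
    """Sample one policy index via the Gumbel-max trick.

    Distributionally equivalent to Categorical(weights): add i.i.d.
    Gumbel(0, 1) noise to the log-weights and take the argmax.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=len(weights))))
    return int(np.argmax(np.log(weights) + gumbel))

rng = np.random.default_rng(0)
weights = np.array([0.7, 0.2, 0.1])  # attention over 3 knowledge policies
actions = np.array([[1.0, 0.0],      # each policy's suggested action
                    [0.0, 1.0],
                    [-1.0, 0.0]])

counts = np.zeros(3)
for _ in range(10_000):
    k = gumbel_max_select(weights, rng)
    counts[k] += 1
    applied_action = actions[k]  # only the chosen policy's action is applied

print(counts / 10_000)  # empirical selection frequencies, close to `weights`
```

This matches the rebuttal's observation: per step the fusion is hard (exactly one policy acts), and the mixture holds only in distribution.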
Summary: Humans can learn by aggregating external knowledge from others’ policies of attempting a task. While prior studies in RL have incorporated external knowledge policies for sample efficiency, there still remains a generalization problem to be solved: agents have difficulty performing arbitrary combinations and replacements of external policies. The authors propose a new actor architecture for Knowledge-Grounded RL (KGRL), Knowledge-Inclusive Attention Network (KIAN), which allows free knowledge rearrangement due to embedding-based attentive action prediction. KIAN addresses entropy imbalance as well. The authors demonstrate in experiments that KIAN outperforms other methods incorporating external knowledge policies under different environmental setups. Strengths: The authors clearly define the problem as how RL can be grounded on any given set of external knowledge policies to achieve knowledge-acquirable, sample-efficient, generalizable, compositional, and incremental properties. The proposed method of KIAN is clearly described. They use knowledge keys, and a query performs an attention operation to determine how an agent integrates all policies. A solution to the entropy-imbalance issue that arises when integrating external policies is proposed as well. Weaknesses: Generally, this work has many related works, but it tackles the unique challenge of fusing knowledge policies with different state and action spaces. Limitations of the proposed method are not clear based on the experimental results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: More experiments directly connected with this challenging problem would more clearly support your claims. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Some insights and analysis about limitations based on the experimental environments more directly connected with the target problem are expected to be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. *Generally this work has more related works than in the related work section, such as multi-task imitation learning, etc.* We understand that other works not listed in the related work section may seem to be addressing a similar problem as KGRL. However, we would like to point out that their central point is quite different from that of KGRL: a) What is our goal? Investigate (1) how existing policies can be fused to help an agent learn new tasks more efficiently and (2) how this learning process can be flexible, i.e., by freely reusing and updating the external knowledge-policy set without relearning how to navigate the entire policy set. b) What is not our goal? b.1) Hierarchical learning: We specifically discussed the differences between HRL and KGRL in Section 2. While HRL aims to decompose a complex task into a hierarchy of sub-tasks and learn a sub-policy for each sub-task, KGRL aims to address a task by observing and fusing a given set of external policies. b.2) Multi-task and continual learning: While multi-task RL aims to learn a set of closely related tasks concurrently [49], and continual RL aims to learn a sequence of tasks without forgetting previously learned skills [50,51], KGRL aims to learn a single task more efficiently and flexibly with the help of existing policies from any source, while maintaining this efficiency and flexibility across a sequence of tasks. According to our goal, the works listed in Section 2 are better aligned with the scope of this work. However, we will add this discussion to the Appendix in the final paper! 2. *In addition to focusing on the fusion problem of a given set of external policies, more explored approaches would be better. The formulation of the problem from a broader view of the use of external policies would be better.* Without a more concrete description of “explored approaches” and “broader view”, we find it difficult to respond to these comments. 
You seem to suggest a different scope for this work. However, we respectfully disagree that a different scope would be better. We would like to restate our motivation for investigating KGRL: Unlike humans, current RL agents lack the ability to efficiently leverage external policies from different sources to help them learn new tasks. In addition, they lack the flexibility to learn with a changeable knowledge-policy set. Without this ability and flexibility, lots of resources will be wasted on (1) relearning the skills that are already developed by someone else and (2) re-exploring how different external policies may help an agent learn a given task. Therefore, we dedicate ourselves to studying KGRL, i.e., how an agent can be knowledge-acquirable, sample efficient, generalizable, compositional, and incremental. 3. *Some insights and analysis about limitations are expected to be included.* We have added insights/analyses and limitations of our proposed method! Please refer to “General Response - Ablation Studies and More Analyses” for insights/analyses and “General Response - Limitations” for limitations. **Reference:** [49] Vithayathil Varghese, Nelson, and Qusay H. Mahmoud. "A survey of multi-task deep reinforcement learning." Electronics 9.9 (2020): 1363. [50] Rolnick, David, et al. "Experience replay for continual learning." Advances in Neural Information Processing Systems 32 (2019). [51] Khetarpal, Khimya, et al. "Towards continual reinforcement learning: A review and perspectives." Journal of Artificial Intelligence Research 75 (2022): 1401-1476. --- Rebuttal 2: Title: Reviewer Feedback requested Comment: Hello Reviewer, The authors have made efforts to address your comments on their work via the rebuttal. Part of the NeurIPS review process is participating meaningfully in the rebuttal phase to help ensure quality. Please read and respond to the author's comments today, latest tomorrow, to give everyone time to respond and reach proper conclusions. 
Thank you for your assistance in making NeurIPS a great conference for our community. --- Rebuttal Comment 2.1: Comment: Thank you for the responses. I have read the reviews of the other reviewers and the authors' responses. Also, I have read the related works and appendix again. One of my misunderstandings of the paper came from misunderstanding your problem of fusing knowledge policies with different state and action spaces, which enables efficient learning across a broader range of applications. That concern is now resolved. My only other concern, about limited evaluation, has also been addressed. Finally, I have increased my score.
Summary: This paper defines the Knowledge-Grounded RL setting, a general RL setting for integrating knowledge (in the form of policies) into a policy to learn new tasks efficiently. Essentially this setting is similar to the MDP setting except that the agent is also given a set of knowledge policies to utilize. The paper also introduces a system/architecture within this KGRL setting, called Knowledge-Inclusive Attention Network (KIAN). The aim is to improve RL that is grounded on external knowledge policies. The paper outlines five desirable human-like properties they desire in their agents: knowledge-acquirable, sample-efficient, generalizable, compositional, and incremental. Moreover, they formally define these so that they are measurable within the KGRL setting (e.g., for evaluating algorithms on these dimensions). Previous methods typically intertwine knowledge representation and knowledge fusion, thereby restricting their ability to adapt to varying numbers of policies and losing flexibility. KIAN is developed with more flexibility, separating knowledge representation from knowledge fusion. KIAN consists of three components: a policy that learns a strategy (similar to a normal policy), called the internal policy; embeddings that represent the given knowledge (or external) policies; and a query that performs attentive action prediction to fuse the internal and external policies. KIAN also solves other issues that can occur in entropy-regularized KGRL. Entropy-regularized RL is common, but the authors show that in the KGRL setting issues can arise through entropy regularization where only a select few policies are selected, counterproductively reducing diversity in policy usage. The authors show that in the KGRL setting the agent will pay more attention to the policy with large entropy and, in continuous control, will rely on the internal policy extensively. The paper introduces modifications so that this does not occur in KIAN. 
The authors show results on both MiniGrid and robotics tasks and demonstrate sample efficiency, generalizability, as well as compositional and incremental learning. Strengths: The paper is mostly well-written and well-explained. The method makes sense and is a well-thought-out architecture. I like how the authors address the entropy imbalance problem. I like that the authors define and quantify the behaviors they would like in the agent. The results do seem to demonstrate their method is effective. Weaknesses: While I think the experiments are good, with appropriate baselines and good environments to test the agent’s capabilities, I am concerned about statistical significance. In particular, only 5 seeds are run, and the performance benefit in many cases is minimal, which may quite possibly be attributed to randomness. While I do believe the method outperforms the baselines, I cannot say so with a lot of confidence to merit inclusion at NeurIPS. If the authors can run more seeds, especially given the large variance, it would dramatically improve their results. Qualms: KGRL is consistently referred to as an RL framework, which it is, but the connotation can be misconstrued as being a framework “for” RL, implying it is a solution method for RL problems. I would recommend calling it a “setting” rather than a framework. Indeed, I was confused temporarily as a reader, especially when KGRL is stated as being “introduced” by this paper (as opposed to “described” or “outlined”). Nits: Typo, Line 345: “border” should be “broader”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In line 33, it is stated (for incremental): “Humans do not need to relearn how to navigate the entire knowledge set from scratch when they remove outdated strategies or add new ones”. Do humans truly “remove” outdated strategies? 
In line 222, the paper states: “However, fusing multiple policies as equation (3) will make an agent biased toward a small set of knowledge policies when exploring the environment.” I am somewhat confused, equation (3) as is does not seem to have this issue. I thought this only occurs in the MaxEnt KGRL setting as introduced in the next section. Can the authors please clarify this? The authors state the entropy imbalance problem is a property of the maximum entropy KGRL setting. I want to clarify that this is only shown for the specific case of policies in the form of equation (3), correct? How could the policy embeddings/knowledge keys be learned? Is Gymnasium Robotics used for the experiments? Or the older OpenAI codebase? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: To me it is unclear how in more sophisticated settings how the knowledge keys (or policy embeddings) would be learned. This seems like a bottleneck to scalability that is not well-addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
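The entropy-imbalance question raised in this review can be illustrated numerically: the entropy of a fused mixture policy grows monotonically as more weight is placed on its highest-entropy component, so maximizing the fused policy's entropy biases the attention toward that one policy. A small numpy check with hypothetical distributions (not taken from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

uniform = np.ones(4) / 4                     # high-entropy knowledge policy
peaked = np.array([0.97, 0.01, 0.01, 0.01])  # near-deterministic knowledge policy

# Entropy of the fused policy as weight w shifts toward the uniform component.
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mix = w * uniform + (1 - w) * peaked
    print(f"w={w:.2f}  H={entropy(mix):.3f}")  # H increases monotonically with w
```

Under entropy regularization, gradient ascent on the fused policy's entropy therefore pushes the mixture weight toward the high-entropy policy, which is the bias the paper's modifications are meant to counteract.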
Rebuttal 1: Rebuttal: 1. *I am concerned about statistical significance. In particular, only 5 seeds are run, and the performance benefit in many cases is minimal, which may quite possibly be attributed to randomness.* Thank you for pointing this out! We have increased the number of random seeds run for all experiments. Please refer to “General Response - Experimental Significance”. 2. *Qualms: KGRL is consistently referred to as an RL framework, which it is, but the connotation can be misconstrued as being a framework “for” RL, implying it is a solution method for RL problems. I would recommend calling it a “setting” rather than a framework. Indeed, I was confused temporarily as a reader, especially when KGRL is stated as being “introduced” by this paper (as opposed to “described” or “outlined”).* Thank you for the suggestion! We agree that “setting” is a better term for this work and will update our final paper accordingly! 3. *Nits Typo Line 345: “border” should be: “broader”* Thank you for pointing out the typo! We will correct it in the paper :) 4. *In line 33, it is stated (for incremental): “Humans do not need to relearn how to navigate the entire knowledge set from scratch when they remove outdated strategies or add new ones”. Do humans truly “remove” outdated strategies?* We believe that when encountering a task, humans may collect a knowledge-policy set from a large pool of policies that they think might be useful for the task. If we find a policy in this smaller knowledge set that is no longer helpful for solving the task, we might remove it from the set. 5. *In line 222, the paper states: “However, fusing multiple policies as equation (3) will make an agent biased toward a small set of knowledge policies when exploring the environment.” I am somewhat confused, equation (3) as is does not seem to have this issue. I thought this only occurs in the MaxEnt KGRL setting as introduced in the next section. Can the authors please clarify this? 
The authors state the entropy imbalance problem is a property of the maximum entropy KGRL setting. I want to clarify that this is only shown for the specific case of policies in the form of equation (3), correct?* Yes, you are right! Fusing multiple policies with equation (3) will make an agent biased toward a small set of knowledge policies when exploring the environment by maximizing the entire policy’s entropy. We will clarify this point in the final paper. 6. *How could the policy embeddings/knowledge keys be learned?* They are jointly (end-to-end) learned with other components in KIAN. The gradient signals calculated by an RL algorithm, e.g., policy gradient, will be back-propagated to all policy embeddings. We provided the complete learning algorithm in Appendix A.6. 7. *Is Gymnasium Robotics used for the experiments? Or the older OpenAI codebase?* We use the older OpenAI codebase. The version we use is included in the Python package “gym”, version 0.21.0. 8. *To me it is unclear how, in more sophisticated settings, the knowledge keys (or policy embeddings) would be learned. This seems like a bottleneck to scalability that is not well-addressed.* Each knowledge key is a trainable variable that can be end-to-end learned with other components of KIAN. Since each knowledge key is a point lying in an embedding space, adding more knowledge policies simply amounts to adding more points to the embedding space. As a result, KIAN can be easily scaled with a large group of external knowledge policies. --- Rebuttal Comment 1.1: Title: Initial Response to Authors Comment: Thank you for answering all of my questions and making a faithful attempt to address my points.
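Rebuttal points 6 and 8 above describe knowledge keys as trainable embeddings that a state-dependent query attends over, so adding a policy just adds a key. A minimal numpy sketch of that mechanism (the dimensions and the single linear query map are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, state_dim = 8, 5
keys = rng.normal(size=(4, d))         # trainable knowledge keys: 3 external + 1 inner policy
W_q = rng.normal(size=(d, state_dim))  # trainable query map (a single linear layer here)

def attention_weights(state, keys):
    query = W_q @ state           # embed the state into the key space
    return softmax(keys @ query)  # one fusion weight per knowledge policy

state = rng.normal(size=state_dim)
w = attention_weights(state, keys)
print(w.shape, w.sum())           # 4 positive weights summing to ~1

# Incremental property: adding one policy = appending one key; the shapes of
# the query computation are unchanged, so nothing needs to be restructured.
keys = np.vstack([keys, rng.normal(size=d)])
print(attention_weights(state, keys).shape)  # now 5 weights
```

In training, both `keys` and `W_q` would receive gradients from the RL objective, which is the end-to-end learning the rebuttal describes.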
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful feedback! We address the shared reviewers’ comments below and will incorporate all feedback in the final paper. ### Experimental Significance (Reviewer 7i4w and CuzY) We have increased the number of random seeds to 10 for each experiment within the given time frame. We are still running more random seeds. Also, we changed the colors of all curves in Figure 5 in the additional-result document to make the error bands easier to read. Each error band in all figures is a 95% confidence interval. We will update the figures and explanation of the error bands in the final paper. The results in Figure 5 show that even after adding more random seeds, KIAN converges much faster and reaches higher rewards for all tasks. In addition, KIAN’s error bands are small, indicating that its improved performance is statistically significant. We observe similar results when running more random seeds for Figure 4. We apologize that due to time and resource limitations, we are not able to provide an updated plot for Figure 4 on time. Yet, we will update the results in the final paper. ### Ablation Studies and More Analyses (Reviewer 2LpZ and CuzY) * The impact of random/irrelevant policies in $\mathcal{G}$: Figure 7 in the additional-result document shows the learning curves of KIAN trained with 3 relevant, 6 (3 relevant + 3 irrelevant), and 9 (3 relevant + 6 irrelevant) external policies for MiniGrid Unlock. The results demonstrate that adding redundant/irrelevant knowledge policies slightly slows down the convergence speed, but KIAN still reaches high rewards with small error bands. This slight convergence-speed decrease is expected since the agent needs to (1) learn to ignore the policies that are not helpful for the task and (2) navigate through a larger set of external knowledge policies. 
* The impact of the (near) optimal policy in $\mathcal{G}$: Figure 8 in the additional-result document shows the results of running FetchReach-v1 with an optimal external knowledge policy. We did this ablation study on FetchReach-v1 since we can only obtain the optimal policy for this environment without doing further training. The results demonstrate that with optimal external policies, SAC-RL, SAC-KoGuN, and SAC-KIAN achieve similar performance, but SAC-KIAN still performs slightly better than others. * The impact of a larger set of knowledge policies $\mathcal{G}$: In Figure 7 in the additional-result document, we can also see the effects of learning with a larger set of external policies. The results show that the convergence speed slows down when the knowledge set is expanded with more policies that are irrelevant to the task. We believe that one of the reasons for slower convergence speed is that the agent needs to navigate through more policies before it knows which ones are useful for the task. Therefore, when the knowledge-policy set is really large, KGRL methods, including KoGuN, A2T, and KIAN, are not guaranteed to be more efficient than RL methods. We will also discuss this point in the limitations. * The effects of different types of knowledge policies: Actually, KIAN’s results in Figure 4 (in the original paper) are run with an external knowledge set that includes two different types of knowledge policies: if-else-based programs and neural networks. The results demonstrate that mixing different types of knowledge policies will not hurt performance at all. This observation is as we expected since KIAN only cares about the state-action mappings provided by each knowledge policy instead of how this mapping is generated. * An investigation of the mentioned entropy balance issue: Figure 9 in the additional-result document shows the results of KIAN with and without the entropy imbalance issue addressed. 
The results show that without addressing the issue with equation (9), KIAN cannot fully benefit from the guidance provided by external policies. In addition, KIAN may perform even worse than RL with no external knowledge policies (see the results of FetchPickAndPlace and FetchPush). The reason is provided by Proposition 4.3. These results demonstrate that (1) entropy imbalance is indeed an issue for KGRL and (2) our proposed method can largely mitigate this issue. ### Limitations (Reviewer 2LpZ, CuzY, K32E) Here are the limitations we will summarize in Conclusion/Appendix D: * If the external knowledge set is very large and contains irrelevant policies, the efficiency of KIAN, and other existing KGRL methods, may decrease. This is discussed in “General Response - 2.4”. * The current design of KIAN requires external and inner policies to share the same state and action space. Therefore, if a policy for Task A includes a skill that can benefit Task B, but Tasks A and B have different state and action spaces, the policy cannot be used as an external policy for Task B. This can limit the number of existing policies that can be used to help learn a new task. Pdf: /pdf/a3f2f3d59210e4e83efe1b0efe5a1ac8ef6b20a2.pdf
NeurIPS_2023_submissions_huggingface
2023
A General Framework for Robust G-Invariance in G-Equivariant Networks
Accept (poster)
Summary: The paper introduces a novel method to enhance robustness and group-invariance in group-equivariant convolutional neural networks (G-CNNs), named the G-triple correlation (G-TC) layer. This approach uses the concept of triple correlation on groups, a polynomial invariant map that is also complete, unlike many other invariant maps like the max. This "completeness" means the map removes only the variations caused by group actions while retaining the rest of the signal's structure, contributing to the G-TC layer's strong robustness, particularly against invariance-based adversarial attacks. The authors also state that the G-TC layer results in improved classification accuracy over the standard Max G-Pooling in G-CNN architectures. An efficient implementation method for any discretized group is provided, and benefits are demonstrated on various groups and datasets. The paper provides the context, highlighting the central role of the pooling operation in CNNs. The pooling operation has remained unchanged over the years, and while it serves its purpose of local-to-global coarse-graining of structure, it fails to construct robust representations that are invariant only to irrelevant visual changes. The paper then goes on to explain the role of group-equivariant convolutional networks (G-CNNs) and how they exploit group theory to achieve precise generalized group-equivariance. However, the pooling operation in G-CNNs is still susceptible to the lossiness of the max operation. To tackle this, the authors propose uncoupling the invariance and coarse-graining operations and introducing a new method for robust G-invariance through the group-invariant triple correlation. The authors demonstrate the superior performance of this approach through experiments showing improved classification accuracy. Strengths: The paper tackles the loss of information of the pooling operation in equivariant architectures and how to circumvent it. 
I really appreciated the fresh take on the importance of pooling and how even a very standard architectural block can still be improved. As a result, the paper brought to my attention the triple correlation operator, which to my knowledge has yet to be extensively used in ML applications. Thus, this paper has a novel contribution that has the potential to benefit the community. Furthermore, I found the overall presentation and motivation of this paper exceptionally clear, and thus it was an easy paper to read and parse. Finally, the empirical results do suggest that the introduced G-TC layer does have some benefits---although this analysis is a bit underdeveloped. Weaknesses: There are a few notable weaknesses of this paper. The first is that the proposed approach works only on discrete groups due to the pairwise nature of the G-TC layer. Many of the current breeds of equivariant networks have found application domains with continuous groups such as $SO(3)$, $SE(3)$, etc., but the current setup does not seem easy to extend here. Can the authors comment on this, or on whether this could be a direction for future work? Furthermore, the scaling of this is $O(|G|^2)$---I understand the reduction by a factor of 2---but this is still quadratic scaling. This means that many groups of potential interest are eliminated. For example, one finite group that is not considered in this paper but could possibly be used is the symmetric group of $N$ elements---i.e. $S_N$. But I believe the computational complexity here makes it not possible to scale this up to permutation networks that also have invariance. Can the authors comment on this? The background section on signals should really be in the established language of fiber bundles. This has already been done multiple times in equivariant ML papers; see E(2)-CNN, ESCNN, and "A General Theory of Equivariant CNNs on Homogeneous Spaces" (Cohen et al., 2018). 
The experiments in this paper are very limited, covering only toy datasets such as MNIST. I believe there is an avenue for multiple more important datasets that are larger scale. For example, E(2)-CNN uses CIFAR-10, but even this dataset is missing here. I encourage the authors to try larger-scale image datasets to show the benefits of their proposed approach. **Minor**: - Typo: Eqn. 4, LHS: $g$ is written as $\tilde{g}$, but the paragraph below assumes regular $g$ Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please include a plot with a runtime analysis for the training of a G-TC network vs. a regular G-CNN with max pooling. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
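For reference, the G-triple-correlation discussed throughout this review is $T_f(g_1, g_2) = \sum_{g \in G} f(g)\,f(g g_1)\,f(g g_2)$. A minimal sketch for the cyclic group $\mathbb{Z}_n$, illustrating both the invariance and the reviewer's $O(|G|^2)$ scaling concern (an illustration only, not the authors' implementation):

```python
import numpy as np

def triple_correlation(f):
    """Triple correlation of a signal on the cyclic group Z_n:
    T[g1, g2] = sum_g f[g] * f[(g + g1) % n] * f[(g + g2) % n].
    The output has |G|^2 entries -- the quadratic cost the review notes;
    the symmetry T[g1, g2] == T[g2, g1] gives the factor-of-2 reduction.
    """
    n = len(f)
    T = np.empty((n, n))
    for g1 in range(n):
        for g2 in range(n):
            T[g1, g2] = sum(f[g] * f[(g + g1) % n] * f[(g + g2) % n]
                            for g in range(n))
    return T

f = np.array([1.0, 3.0, -2.0, 0.5, 4.0])
shifted = np.roll(f, 2)  # group action: cyclic shift by 2

# Invariance: the triple correlation is unchanged by the group action, while
# (unlike a max over the orbit) it retains the rest of the signal's structure.
print(np.allclose(triple_correlation(f), triple_correlation(shifted)))
```

The double loop makes the quadratic cost explicit: for $|G| = n$, the layer produces an $n \times n$ invariant feature map per input signal.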
Rebuttal 1: Rebuttal: Dear Reviewer aA5W, We thank you for the thoughtful review. We respond to your comments and questions below: **Discrete Groups** We thank the reviewer for raising the very interesting question of how to generalize our method to continuous groups. We first note that many of the papers that make use of continuous groups require choosing a discretization. In our paper, we apply our model to the cases of $SO(3)$ and $O(3)$ using the octahedral discretization of the groups, which is standardly used in many G-CNN papers. Steerable G-CNNs are the exception, as their theory applies to continuous groups. In the global response on Steerable G-CNNs, we show that the G-TC Layer can be naturally generalized to this setting. However, we note that the pairwise nature of the G-TC still requires us to discretize $G$. We have brainstormed a few ideas on how to overcome this. We emphasize that these ideas could also help reduce the computational cost of the G-TC layer, since the computation would no longer depend on the size of a discrete group. - Idea 1: Consider the (continuous) Lie group $H$. The G-TC on steerable functions leverages the action of $H$; we could instead consider the action of its Lie algebra $\mathfrak{h}$. We would then consider the generators of $\mathfrak{h}$ and show that computing the G-TC on the generators is enough. However, the complicated equation of the G-TC possibly makes this approach intractable. - Idea 2: We could take inspiration from steerable CNNs, which express any (learnable) equivariant convolutional filter bank as (learnable) coefficients on an equivariant basis computed offline. Similarly, we could think of expressing the G-TC as (fixed) coefficients on a basis of invariant operators. However, the convolution is a linear operator while the G-TC is not. Consequently, the linear algebra used for steerable CNNs does not apply. - Idea 3: Lastly, we could think about using implicit representations. 
Instead of computing the values of the G-TC on any possible pair $(g_1, g_2)$, we could instead represent it as a neural network, depending on $f$, that takes $(g_1, g_2)$ as input. However, it is unclear whether this would be a convenient data structure to work with. It is possible that the complexity cost of the TC would only be shifted into a computational cost of the NN. Although the generalization to the continuous case is not immediately obvious, we believe the above ideas could provide fruitful paths for exploration in future work. We believe, nonetheless, that presenting the theory for discrete groups is an important first step. In equivariant deep learning, the paper *Group Equivariant Convolutional Networks* (Cohen & Welling, 2016) introduced the G-CNN for discrete groups, and only later did the paper *Steerable CNNs* (Cohen & Welling, 2017) present a theory that applies to continuous groups (while the authors still used the octahedral discretization in their experiments). **Computational Complexity \& Runtime Analysis** Please see the global response on computational complexity. **Fiber Bundle Formulation** We agree with the suggestion to present the background in the language of fiber bundles, as it has become standard in the G-CNN literature. We did not do so in the first draft to avoid overcomplication for the reader, but think it will be worth it for consistency, and plan to make these edits in the final draft of the paper. Thank you for the suggestion! **Toy Datasets** We agree that the datasets we use here (MNIST and ModelNet10) are relatively simplistic. We made the decision to use simple datasets in our analyses because we can exactly control their mathematical structure. We generate our synthetic datasets by applying the actions of specific groups ($SO(2), O(2), SO(3), O(3)$) to these data exemplars. Thus, we know exactly the groups that structure the data, and can formulate theoretical expectations for the application of the G-TC layer. 
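To make this generation process concrete, here is a minimal numpy sketch (ours, for illustration only; the paper uses finer discretizations of $SO(2)$, $O(2)$, etc.) that builds the orbit of an image under the exact 90-degree rotation subgroup $C_4$, where the group action involves no interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))  # stand-in for a single data exemplar (e.g. an MNIST digit)

# The orbit of the image under C_4 (rotations by multiples of 90 degrees).
# C_4 is a toy discretization of SO(2); it keeps the group action exact.
orbit = [np.rot90(image, k) for k in range(4)]

assert len(orbit) == 4
# Applying the generator four times returns the identity:
assert np.array_equal(np.rot90(image, 4), image)

# A synthetic dataset labels every element of an orbit with the same class:
dataset = [(x, 0) for x in orbit]
```

Because the group that structures the data is known exactly, one can formulate precise theoretical expectations for any G-invariant layer applied to such a dataset.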
The CIFAR image dataset, by contrast, possesses many variations that are not attributable to group actions -- e.g. lighting changes, general differences in appearance across exemplars from the same category. Thus, theoretically, one expects less benefit from the use of group-invariant and group-equivariant structures with this dataset. Our goal in this work was to demonstrate empirically the theoretically expected properties of our proposed layer, which we think we have accomplished with the use of simple, controlled datasets. **Typo** Thanks for catching this! We’ve updated it in the paper. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: I thank the authors for their detailed responses to my questions. The ideas about generalizing to Steerable CNNs are intriguing but still a bit raw. I am happy with most of their response and I will maintain my current score. Update: I've changed my mind after another read through the paper. I'm updating my score from 6->7 due to the fresh perspective on equivariance this paper brings. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer aA5W, We thank you very much for your engagement with our rebuttal and appreciate the reconsideration of your score. Your review was very helpful for us and has already strengthened this work. We will gladly incorporate these clarifications into the final version of the paper. All the best, Submission8266 Authors
Summary: This paper proposes a $G$-triple-correlation ($G$-TC) pooling layer in order to achieve group invariance in group-equivariant convolutional neural networks (G-CNNs). Compared to the widely used $G$-pooling layers, the proposed $G$-TC layer is supposed to be "complete" in the sense that it removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. In other words, the $G$-TC layer is injective over different group orbits. The authors claim that this property endows the $G$-TC layer with strong robustness, leading to resistance to invariance-based adversarial attacks. Strengths: 1. The paper is relatively well written and well organized. It is easy to read. Weaknesses: 1. The proposed method is only applicable to "regular" G-CNNs, where feature maps are signals over a group. It is hard to see how this can be extended to steerable G-CNNs, where feature maps can be arbitrary fields. 2. One particular disadvantage of "regular" G-CNNs is their computational and memory burden -- one has to physically store a function over the group $G$, which, after discretization, could be very large. The feature dimension after the proposed $G$-TC layer increases from $|G|$ to $|G|^2$. Even if there are potential ways to reduce the computational cost of the $G$-TC layer, as speculated by the authors, this memory cost cannot be circumvented. 3. The authors have claimed that the "completeness" of the $G$-TC layer leads to robustness to invariance-based adversarial attacks. However, there is neither theoretical nor empirical evidence to verify this claim. 4. Also, I am not completely convinced why "completeness" is so important. $G$-pooling is typically adopted close to the end of the model, by which time "coarse-graining" of the features is usually intentional. 5. Also, can completeness be achieved by simple registration of the feature maps (e.g., according to the max-magnitude function value over the group)? 
In this case, the signal can stay at the length of $|G|$. 6. The experimental setup of this paper is non-canonical. There are too many adjustments to the baseline (e.g., the same filter size as the input, only one $G$-Conv block, etc.). To make the results more convincing, the authors should simply replace the last $G$-max-pooling layer of an E(2)-CNN with the proposed $G$-TC layer. I understand the authors are not trying to achieve SOTA, but the results displayed in Table 1 are simply too unconvincing. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Axj4, Thank you for the time you’ve taken to read and review our paper. We address your comments and questions below: **Generalization to Steerable G-CNNs** Please see the global response on steerable G-CNNs. **Computational Complexity** Please see the global response on computational complexity. **Completeness & Robustness** *Theoretical evidence*. The completeness of the triple-correlation (TC) is well-established in the signal processing literature: It was first demonstrated by Yellott & Iverson (1992) for the case of the translation-invariant TC, and later generalized to the TC on compact groups by Kakarala (2009). We provide a statement of the completeness property in Section 3.2 Proposition 2, and point the reader to the original references for the proof, which is lengthy and requires substantial space and background to establish. As the G-TC Layer directly performs a triple correlation, it follows that the G-TC layer is also complete. Since the G-TC Layer is complete, it follows definitionally that it is robust to invariance-based adversarial attacks. In an invariance-based adversarial attack, the goal is to find inputs that are non-identical (up to group action) that yield identical output representations. Completeness states that the only inputs that yield identical outputs are identical up to group action. Thus completeness directly implies robustness to invariance-based adversarial attacks. Since this line of reasoning was not clear to the reviewer from our paper, we plan to make it much more explicit in the main body of the paper. We will also include an appendix that replicates the proof of completeness. We provide *empirical evidence* for the completeness of the G-TC layer in Section 5.2 and Figure 2, where we perform invariance-based adversarial attacks on the models trained in Section 5.1. We find empirically that our G-TC model is complete, whereas the Max G-Pool model is not. 
That is, when we optimize inputs to yield points identical to a target image in the output space of the G-TC model, every optimized input is identical to the target image up to group action. This analysis was first established by Sanborn et al. (ICLR 2023) and provides an empirical demonstration that the theoretically expected completeness property holds for the trained model. As this empirical demonstration of completeness is presented in the main paper along with a figure illustrating the results, we are not sure why it was missed by the reviewer. Do you have recommendations on how to improve its presentation? **The Importance of Completeness.** A key point that we emphasize in our paper is the distinction between *coarse-graining* and *invariance*. Standardly, pooling has been used in CNNs to accomplish both objectives. However, the concepts of coarse-graining and invariance are distinct and serve different purposes. We elaborate on this distinction in paragraphs 1-4 of the introduction. In our paper, we introduce a layer that focuses on the problem of invariance – *not* coarse-graining. Spatial coarse-graining is still important, and standard pooling approaches can be used to achieve this. However, at the end of a G-CNN, the goal is to achieve invariance to the group action, to remove the G-equivariance preserved throughout the layers of the network, and to output a representation that is "canonicalized." It is in this setting that completeness becomes important. The standard approach for achieving invariance at the output of a G-CNN is to perform Max G-Pooling, which returns the maximum output value over the group. This operation loses substantial information, as many non-identical inputs may yield identical max outputs. We provide an illustration of this problem in Figure 1 of the paper. Empirically, we demonstrate that the lossiness of Max G-Pooling results in lower classification accuracy than the G-TC model when all else remains fixed. 
Thus, model performance reflects a tangible and substantial benefit of completeness. **Registration of Feature Maps.** The idea of registering the feature maps is an interesting alternative to the TC approach we take here. It sounds like your suggestion is something like: take the maximum value of the output over the group, and use that to perform registration – e.g., shift the outputs so that the max value appears in the first position of the vector (or something along these lines). This is an interesting idea worth exploring in future work. We foresee a few possible issues that may arise with this approach: First, if there are multiple equal (or similar) maximum values, there is an ambiguity as to how to register. In practice, due to variations in data, images from the same category that are not exactly identical up to transformation will yield different feature vectors, and so the maximum is likely not a reliable guide to finding the canonical registration. Nevertheless, it would be worth experimenting to see if these issues matter in practice. Generally speaking, registration is a heuristic-based approach. Our work takes a mathematically motivated approach that possesses theoretical guarantees. Future work may seek to test its limits: Are the theoretical gains worth the computational cost in comparison to a good-enough heuristic-based approach? **Non-Canonical Experimental Setup** In this work, we chose to test our new layer in what we see to be the simplest setting: a model possessing a single G-Conv layer with no translational convolution (only convolution on the group $G$). We agree it is a somewhat non-canonical setup. However, since we used an identical setting for both models and changed only the pooling method between them, we can still draw valid conclusions about the impact of that change. Our results show an unambiguous, significant increase in accuracy with our pooling layer, across all datasets, averaged over many random seeds. 
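To make the lossiness contrast concrete, here is a minimal numpy sketch (ours, not code from the paper) for the simplest setting, the cyclic translation group $\mathbb{Z}_4$ acting on 1D signals: two signals that lie in different orbits share the same max-pooled output, while their triple correlations differ.

```python
import numpy as np

def triple_correlation(f):
    """Translation triple correlation T(a1, a2) = sum_t f(t) f(t+a1) f(t+a2),
    with all indices taken modulo n (i.e., over the cyclic group Z_n)."""
    n = len(f)
    shifts = np.stack([np.roll(f, -a) for a in range(n)])  # shifts[a, t] = f(t + a)
    return np.einsum('t,at,bt->ab', f, shifts, shifts)

f1 = np.array([1.0, 0.2, 0.5, 0.1])
f2 = np.array([1.0, 0.5, 0.2, 0.1])  # same values, different order; not a cyclic shift of f1

# Max G-pooling collapses the two signals to the same output...
assert f1.max() == f2.max()
# ...but the triple correlation separates them (they lie in different orbits):
assert not np.allclose(triple_correlation(f1), triple_correlation(f2))
# ...while remaining exactly invariant within an orbit:
assert np.allclose(triple_correlation(np.roll(f1, 1)), triple_correlation(f1))
```

The G-TC layer in the paper applies the same construction over the group acting on the feature maps rather than over raw translations; the sketch only illustrates the lossiness/completeness distinction.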
--- Rebuttal Comment 1.1: Comment: I'd like to express my gratitude to the authors for their thorough response. I am particularly grateful for the comprehensive clarification on steerable networks. Nevertheless, I have lingering concerns regarding three areas: Completeness Consideration: I recognize that "excessive invariance," like the one introduced by max G-pooling, might result in different group orbits producing the same response. Yet, is this truly a significant issue in applications such as image classification? After all, different group orbits representing the same digit should indeed be classified as that particular digit. There is a potential argument that striving for "completeness" may consume excessive memory resources to retain redundant data in the triple correlation. Robustness Inquiry: The significance of Figure 2 in supporting the "completeness" assertion is evident to me. What remains unclear, however, is its association with "invariance-based adversarial robustness". My understanding suggests that an adversarially robust model should consistently produce comparable outputs for input group orbits that are closely related. This seems different from preventing distinct group orbits from generating identical outputs. Experimental Approach: I understand the rationale behind employing a non-canonical experimental setting to demonstrate the advantages of the proposed layer. However, to truly highlight its value in practical applications, wouldn't it be more compelling to conduct the experiments under standard conditions? Furthermore, in relation to my first point, as networks become deeper, could it be that the importance of "completeness" diminishes? It might be more desirable for different group orbits of the same digit to produce analogous outcomes. In conclusion, while I hold the authors' comprehensive response in high regard, I have yet to be fully persuaded of the paper's overarching significance. 
--- Reply to Comment 1.1.1: Title: Completeness Comment: We thank the reviewer for engaging with our rebuttal, and again express our appreciation for the interesting points brought up in the review, which have motivated the generalization to steerable networks and improved the quality of the work. We hope to answer remaining questions here, and we are happy to discuss any points further throughout the discussion period. **Completeness** Your question about completeness in the context of image classification highlights a distinction that we may have not made sufficiently clear. Thank you; we think this clarification will significantly strengthen the paper. We summarize your question as this: *“In image classification we want different examples of the same digit drawn in different styles to yield the same network output (the correct label). Completeness ensures that digits drawn in different styles will map to different outputs (orbit separation). Is this not too stringent for the problem of image classification? Are we wasting computational resources on the wrong problem?”* We highlight that there are two sources of variation that need to be eliminated in order to solve the Rotated MNIST task: (1) the action of the group (rotation); and (2) the style variation. The purpose of pooling over the group in G-CNNs (G-Pooling – Max or TC) is to remove the first kind of variation. The second kind of variation is removed by transforming the data into the hierarchical feature space defined by filters in each layer of the network. If the model is trained well, it will learn a good feature space that removes style variation. G-Pooling is applied at the end of the G-Conv network, after the image has been transformed through multiple G-Conv layers. G-Pooling operates on a *new transformed signal*–not the original image. Ideally, the network has learned a good set of features, so style is removed and all images of a particular digit class yield similar representations. 
However, since the network is G-equivariant, the rotation still remains (in a homomorphic sense). Thus, at this stage, we should (ideally) have ten “orbits”--one for each digit class. For each style-removed digit class, there is some vector that abstractly represents it. The orbit consists of all “rotated” versions of that vector–where "rotated" is in the sense of the group representation of SO(2) acting on the new feature space. Now, we want to remove the rotation and “canonicalize” the signal at the output of the network. One heuristic we can use is to take the maximum value of each filter output, since this will be invariant to rotation. This is Max G-Pooling. But, this lossy operation throws away information and destroys much of the network’s representation of the signal structure. Consequently, we can construct adversarial examples that yield the same network output and class label but look like *random noise*–shown in Figure 2 of the paper. The TC guarantees that we preserve *all* information about the signal, removing *only* the equivariant bit, thus “quotienting” out the group and achieving G-invariance in a completely lossless fashion, so that these kinds of adversarial examples cannot be created. This is shown in Figure 2 of the paper for the TC network. We show that the only inputs that yield identical outputs at this stage of the network are equivalent up to group action. This is far from a waste of resources; it results in concrete gains in classification accuracy (Table 1). Completeness is not “just” there to keep digit orbits separate. It is also there to *ensure robustness*, to prevent random noise patterns from being confused for digits and ensure that representations within the network are meaningful. 
So, to directly answer your question, we emphasize that the purpose of completeness here is to simply keep all of the information that is already in the last layer of the network, instead of throwing much of it away in a coarse and destructive manner via the max operation. It is not too stringent for the task of image classification; it does not require that different styles of the same digit remain separated in the output space. The network preceding the G-TC removes the style variation and collapses exemplars from the same class, leaving the G-TC layer the task of selectively removing the rotation without destroying the representation.
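The lossless quotienting described above can be illustrated in the simplest possible setting (a sketch of ours, using the cyclic translation group $\mathbb{Z}_8$ in place of the learned rotation representation): every element of a signal's orbit maps to the identical TC output, while signals outside the orbit are separated.

```python
import numpy as np

def triple_correlation(f):
    # Translation TC over Z_n: T(a1, a2) = sum_t f(t) f(t+a1) f(t+a2), indices mod n
    n = len(f)
    return np.array([[sum(f[t] * f[(t + a1) % n] * f[(t + a2) % n] for t in range(n))
                      for a2 in range(n)] for a1 in range(n)])

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
T = triple_correlation(f)

# Every signal in the orbit of f under Z_8 yields the *identical* output,
# i.e., the group action is quotiented out exactly:
for shift in range(8):
    assert np.allclose(triple_correlation(np.roll(f, shift)), T)

# whereas a signal outside the orbit is separated:
assert not np.allclose(triple_correlation(f + 0.1), T)
```

In other words, the TC output is constant on each orbit and (by completeness) differs across orbits, which is exactly the canonicalization property discussed above.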
Summary: This paper introduces a concept called G-triple-correlation (G-TC) layers, which are used in combination with G-equivariant convolutional layers to achieve G-invariant mapping functions. In contrast to conventional methods such as max or average G-pooling, which result in information loss, the proposed G-TC layers are completely identifiable: they selectively remove only the variations caused by group actions while preserving all the essential content information in images. The invariance and completeness of G-TC layers are proven under certain mild assumptions. The proposed G-TC layers are empirically validated on some representative examples, e.g., SO(2)-MNIST, O(2)-MNIST, SO(3)-ModelNet, and O(3)-ModelNet. Strengths: * The paper has a clear motivation, i.e., replacing the excessive invariance (G-pooling) with a more informative and precise approach (G-TC). * The proposed method is technically sound. The assumptions used in this paper seem to be mild. The useful theoretical properties (G-invariance, completeness, uniqueness) and the computational complexity of the proposed method are nicely presented. * The paper is generally well written and easy to follow. Figure 1 is a nice overview of the motivation of this paper. * The experimental results are convincing, especially for the cases of 3-dimensional groups. Weaknesses: - As the authors mentioned, the main bottleneck of the proposed G-TC method is its computational cost. Although the paper includes a theoretical analysis of the time complexity of G-TC, it would be beneficial if the study also presented experimental comparisons of the training and inference computational costs. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - In line 302, the authors mention: "For the Max G-Pool model, ReLU is used as the nonlinearity. Given the third-order nonlinearity of the TC, we omit the nonlinearity in the G-Conv block in the TC Model." 
How would the performance be affected if ReLU is applied to G-Conv with G-TC? - In line 303, "The *G-Pool* layer increases the dimensionality of the output of the G-Conv block". It seems that *G-TC* is correct? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Computational cost might be a bottleneck for the proposed G-TC method. This limitation is adequately addresed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer NbDh, We thank you for the thoughtful review, and address your comments and questions below: **Computational Complexity**. Please see the global response on computational complexity. **ReLU & Nonlinearity**. For the G-TC model, we remove the ReLU that is standardly performed after the convolution because it *breaks completeness*. By pushing all negative values to zero, the ReLU operation eliminates information that the TC computation needs to retain completeness. Importantly, we find that removing the ReLU for the G-TC model doesn't reduce classification performance. Rather, it improves it, due to the completeness property. The completeness can be observed in the adversarial example experiments. For a G-TC model without a ReLU nonlinearity, no out-of-orbit adversarial examples can be found (i.e., Figure 2 of the paper). By contrast, when ReLU is used directly prior to the G-TC layer, adversarial examples can be found – in line with theoretical expectations. **G-Pool Typo**. Thanks for catching this! We have updated it in the paper. --- Rebuttal Comment 1.1: Comment: Dear Reviewer NbDh, As we near the end of the discussion period, we would like to ask if there are any remaining questions or concerns we could help resolve or clarify. Do let us know! We've appreciated your thoughts on our work and would be happy to discuss anything further. All the best, Submission8266 Authors
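The point that ReLU breaks completeness can be seen in a toy numpy sketch (ours, not from the paper), using the translation triple correlation on $\mathbb{Z}_4$: two signals in different orbits are separated by the TC, but collapse to the same output once their negative values are zeroed by a preceding ReLU.

```python
import numpy as np

def triple_correlation(f):
    # Translation TC over Z_n: T(a1, a2) = sum_t f(t) f(t+a1) f(t+a2), indices mod n
    n = len(f)
    shifts = np.stack([np.roll(f, -a) for a in range(n)])
    return np.einsum('t,at,bt->ab', f, shifts, shifts)

relu = lambda x: np.maximum(x, 0.0)

# Two signals that differ only in their negative values...
f = np.array([1.0, -0.5, 0.3, -0.2])
g = np.array([1.0, -0.8, 0.3, -0.1])

# ...are separated by the TC applied directly (they lie in different orbits):
assert not np.allclose(triple_correlation(f), triple_correlation(g))

# ...but collapse to identical outputs once a ReLU is applied first,
# since the ReLU destroys the information the TC needs for completeness:
assert np.allclose(triple_correlation(relu(f)), triple_correlation(relu(g)))
```

This mirrors the adversarial-example observation above: with a ReLU before the invariant layer, distinct inputs can no longer be distinguished by the TC output.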
Summary: This work proposes to use the triple correlation to achieve group invariance. Unlike max pooling, the triple correlation preserves the entire information content of the original signal, which may improve the quality of solutions for downstream tasks. Experiments show the effectiveness of the proposed method. Strengths: 1. The proposed method looks very natural. The proposed triple correlation layer achieves group invariance and at the same time preserves the entire information content of the original signal. As far as I understand from this paper, it seems to be the only group-invariant operation that enjoys this nice property in the deep learning literature. 2. The results from experiments look quite encouraging. Weaknesses: 1. The novelty of the proposed method seems to be quite limited. It uses a well-known operation and applies it to NNs. 2. The computational complexity of the proposed layer looks quite high. Although the authors are able to save half of the operations by using symmetry, the order of computation is still O(|G|^2). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Since the triple correlation corresponds to the bispectrum in frequency space, would it make more sense to optimize in Fourier space (or polar Fourier space)? 2. Is the proposed method robust to noisy images? If I look at (8), multiplying three noisy terms may magnify the noise. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer E2w2, Thank you for your review. We appreciate your attention and address your comments and questions below: **Novelty.** Although the triple correlation in its translation-invariant form has a long history in signal processing, it is relatively unknown in deep learning. Google Scholar reveals only a single paper (from the 90s) that incorporates the triple correlation (TC) into neural networks, in a very different context ([Delopoulos et al., 1994](https://ieeexplore.ieee.org/abstract/document/286911?casa_token=6t3n5fTLMdEAAAAA:BpPufERn_ymV6UkxKtcd3GTb48yVSpfQtoP728NdQIAfcHCU3vOcC3YPxwgovRP3NhfkaO4YXA)). A handful of papers incorporate the bispectrum (BS) into deep learning, but largely as a pre-processing step rather than an architectural feature – with the exception of [Sanborn et al. (ICLR 2023)](https://arxiv.org/abs/2209.03416). All of these papers use the generic translation-invariant form of the TC/BS – with the exception, again, of Sanborn et al. (2023), which is the first to make use of the general group-invariant BS/TC, applied to the problem of learning groups from data. Our work is the first to draw the connection between the group-invariant TC and the problem of *pooling and robustness in CNNs*. Here, we propose a novel computational primitive that exploits the structure of the G-convolution to construct invariant maps that significantly improve model accuracy and robustness over standard methods. Although the TC is not new, many great ideas have come from applying the right mathematics to a problem in computer science – such as the idea of a CNN itself :) **Computational Complexity.** Please see the global response regarding computational complexity. **Fourier Space.** Indeed, mathematically equivalent computations could be constructed in Fourier space using the bispectrum, the dual of the triple correlation in the frequency domain. 
The primary reason for performing the computation in the spatial domain is compatibility with G-convolutional architectures, which live in the spatial domain. Spatial convolution is the dominant paradigm for computer vision, and only a handful of architectures perform equivalent computations in the frequency domain (e.g., Clebsch-Gordan Nets, Kondor et al., 2018). Nonetheless, our pooling method could be formulated in Fourier space for use in these architectures. In fact, Fourier space has some advantages with regard to computational complexity. In particular, for commutative groups, it permits an $O(|G|)$ reduction in the computation of the bispectrum (see [Kondor (2008) Group Theoretic Methods in Machine Learning, pg. 85](https://www.proquest.com/openview/d8ae7d9e47c87bb0acb05bf06ae8b6aa/1?pq-origsite=gscholar&cbl=18750)). For this paper, we chose the spatial formulation to maintain relevance to the existing literature. However, we believe the Fourier formulation is a promising avenue for future work. **Robustness to Noise.** Great question. The TC actually has special properties with respect to noise. In particular, the TC of a Gaussian signal is exactly zero. For this reason, it was originally used as a way to measure the "non-Gaussianity" of a signal (see [Signal Processing with Higher-Order Spectra](https://ieeexplore.ieee.org/abstract/document/221324?casa_token=o59mtEpxSygAAAAA:TTSrggJvk8YjlWE_6UhREEMiZugsAm3OzD9OhCeyOTmp58nmk8-Lfn_zIaWpYScHKsjCzP7rNw) for more details). Thus, the TC has the nice property that it eliminates (Gaussian) noise. More generally, so long as the magnitude of the noise is $<1$, it will be reduced by the triple product rather than amplified. Keeping noise within this range can be encouraged with appropriate data normalization. **Questions for the Reviewer:** We note that you provided ratings of "2" (fair) for both soundness and contribution. 
Are there concrete results or analyses you would like to see that would improve the quality of our paper in these categories? Again, we thank you for your time and attention! We look forward to discussing these topics further. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response; it resolves all my concerns. After reading all the comments and responses, especially some major concerns from reviewer Axj4, I prefer not to change my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer E2w2, Thank you very much for reading our rebuttal. Your review and critical attention have already been helpful to our work. Regarding the concerns of Reviewer Axj4, we recommend that you take a look at our latest replies to them. We believe there was some confusion about completeness and the adversarial experiments that we were able to clarify. Axj4's questions indicate that these points were not clear enough in the original paper, so we intend to incorporate our responses into the final version, which will strengthen the paper considerably. We are thus quite grateful for this discourse. Please let us know if you have any further questions. We appreciate your engagement with our work! All the best, Submission8266 Authors
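As a concrete footnote to the Fourier-space point in the thread above (a sketch of ours, for the cyclic translation group $\mathbb{Z}_n$): the 2D DFT of the triple correlation is exactly the bispectrum, so the spatial and frequency formulations can be checked against each other numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
f = rng.standard_normal(n)
idx = np.arange(n)

# Spatial domain: T(a1, a2) = sum_t f(t) f(t+a1) f(t+a2), indices mod n.
shifts = f[(idx[:, None] + idx[None, :]) % n]  # shifts[a, t] = f(t + a)
T = np.einsum('t,at,bt->ab', f, shifts, shifts)

# Frequency domain: bispectrum B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)).
F = np.fft.fft(f)
B = np.outer(F, F) * np.conj(F[(idx[:, None] + idx[None, :]) % n])

# The 2D DFT of the triple correlation equals the bispectrum:
assert np.allclose(np.fft.fft2(T), B)
```

This is the duality referenced in the exchange above: either form could serve as the invariant layer, with the frequency form offering complexity advantages for commutative groups.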
Rebuttal 1: Rebuttal: We thank the four reviewers for their time and attention. We have provided responses to each reviewer for unique points made in the review. Here, we highlight a few points that appear across reviews, to avoid redundancy. **Generalization to Steerable G-CNNs** We thank reviewer Axj4 for bringing up the very interesting idea of generalizing the G-TC to steerable G-CNNs, which emerges naturally from our framework -- as described here. Consider a group $G$ that is the semi-direct product $G = \mathbb{Z}^2 \rtimes H$ of the group of translations $\mathbb{Z}^2$ and a group $H$ of transformations that fixes the origin $0 \in \mathbb{Z}^2$. Consider the feature map $\Theta: \mathbb{Z}^2 \to \mathbb{R}^K$ that is the output of a steerable G-CNN. Here, $\Theta$ is a field that transforms according to a representation $\pi$ induced by a representation $\rho$ of $H$ on the fiber $\mathbb{R}^K$ (Steerable CNNs, Cohen & Welling, 2017). The G-TC can be applied to $\Theta$ by replacing the regular representation with the induced representation in Eq. (6). Specifically, replace any $\Theta(g)$, i.e. the scalar value of the feature map at group element $g$, by $\pi(g)(\Theta)(x)$, i.e. the vector value of the feature map at $x$ after a group element $g$ has acted on it via the representation $\pi$: $$T(\Theta) = \int_G \pi(g)(\Theta)(x)^\dagger \cdot \pi(g_1 g)(\Theta)(x) \cdot \pi(g_2 g)(\Theta)(x) \, dg$$ Instead of computing the TC for each scalar coordinate $k$ of $\Theta$, we directly compute it as a vector. The formulation does not depend on the choice of $x$ by homogeneity of the domain $\mathbb{Z}^2$ for the group $G$. Importantly, this TC is invariant to the action of the induced representation: the proof leverages the same arguments as those used in our Appendix. Consider a feature map $f_2 = \pi(g_0)[f]$ obtained from the action of $g_0$ on a feature map $f$, and show that $TC(f_2) = TC(f)$.
The key ingredient of the proof is the change of variable within the integral $\int_G$, which follows the semi-direct product structure of $G$: $h' = h h_0$ and $t' = \phi(h) t_0 + t$, where $\phi(h)$ is a matrix representing $h$ that acts on $\mathbb{Z}^2$ via matrix multiplication: e.g. a rotation matrix $R = \phi(r)$ in the case of $SE(n)$. This concludes the proof. In short, the definition of the G-TC and the proof of its invariance naturally emerge by replacing the regular representation with the induced representation. We thank the reviewer for this remark and would be happy to highlight this natural extension to steerable G-CNNs as an avenue for future work. **Computational Cost** Indeed, the TC has quadratic complexity that scales with the size of the group: $O(|G|^2)$. We acknowledge and describe this as a limitation in the paper. We note that it is quadratic in the size of the *group* -- not the size of the input. Practical use of the TC can be limited to “small” groups, which still enjoy the benefits provided by our approach. For rotation groups, this means limiting to rotations discretized into multiples of $d$ degrees. This is the framework used in many equivariant neural networks (Group Equivariant Convolutional Networks, Cohen & Welling, 2016; the experiments of Steerable CNNs, Cohen & Welling, 2017; etc.). For translation groups, this means limiting to images defined on a small grid, or to feature maps that have been coarse-grained and are now defined on a small grid. Coarse-graining is typically applied during earlier stages of the network, whereas the group-invariant G-TC layer is applied at the end of the network, at which point the signal is heavily coarse-grained and substantially reduced in size. Many groups of interest can be discretized coarsely in G-CNNs without considerable loss of accuracy, which makes the computational cost of the G-TC less impactful in practice.
This can be seen in the empirical time required for 1,000 training epochs for the G-TC and standard Max G-Pool models. For this analysis, we use the octahedral (rotational and full) models trained on the ModelNet10 dataset, averaged over all random seeds, and trained on an NVIDIA GeForce GTX 1080 Ti 12GB GPU. 1k-epoch training times can be found below: **Rotational Octahedral |G| = 24** - Max G-Pool: 22.5 minutes - G-TC: 37.5 minutes **Full Octahedral |G| = 48** - Max G-Pool: 29 minutes - G-TC: 76 minutes We note that the octahedral groups are relatively large for discretized groups used in practice. Still, the G-TC models only result in 1.67x and 2.62x increases in training time over the Max G-Pool models. We believe that there are many scenarios in which the strong improvements in classification accuracy (Table 1) and the completeness of the model (Figure 2) will be worth this increased cost. One reviewer (aA5W) asked whether our approach could be applied to the symmetric group $S_n$ of permutations of $n$ elements, given its computational complexity. The size of the symmetric group itself is $n!$. This typically limits *any* computation on the full group, except in the regime of small $n$. Indeed, the TC would be faced with $O((n!)^2)$ computations. Thus, we acknowledge that any exponentially-sized group will pose a problem for the TC, except in the regime of small $n$. Nonetheless, many groups used in practice--such as the rotation, translation, and reflection groups--are continuous groups that possess natural discretizations that limit their size and thus the complexity of the TC in practice. We would be happy to further emphasize this limitation in the paper, if the reviewers feel that it would help readers and future users decide how best to integrate the G-TC into their applications. Additionally, we refer the reviewers to our answer to aA5W regarding the extension to continuous groups.
There, we have included ideas that have the potential to reduce the computational cost in future work.
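The group-level invariance argued above can also be checked numerically for a small non-abelian group. A minimal numpy sketch (illustrative only; it uses a scalar signal on $S_3$ with the regular representation rather than an induced representation):

```python
import numpy as np
from itertools import permutations

# G-TC over a finite group via its Cayley table:
#   T(f)[g1, g2] = sum_g f(g) * f(g1 g) * f(g2 g)
# We use S_3 (order 6) as a small non-abelian example.
elems = list(permutations(range(3)))
idx = {p: i for i, p in enumerate(elems)}

def compose(a, b):
    """(a * b)(x) = a(b(x)) -- composition of permutations."""
    return tuple(a[b[x]] for x in range(3))

cayley = np.array([[idx[compose(a, b)] for b in elems] for a in elems])

def g_tc(f):
    n = len(f)
    T = np.empty((n, n))
    for i1 in range(n):
        for i2 in range(n):
            T[i1, i2] = sum(f[g] * f[cayley[i1, g]] * f[cayley[i2, g]]
                            for g in range(n))
    return T

rng = np.random.default_rng(1)
f = rng.normal(size=6)
g0 = 4                            # an arbitrary group element
f_translated = f[cayley[:, g0]]   # f_2(g) = f(g g0), a group-translated signal
# g_tc(f) == g_tc(f_translated): the G-TC is invariant to group translation.
```

The proof of invariance is exactly the change of variable $u = g g_0$ inside the sum, mirroring the integral argument above.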
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Single-Pass Pivot Algorithm for Correlation Clustering. Keep it simple!
Accept (poster)
Summary: The paper shows a single-pass semi-streaming algorithm for correlation clustering (minimizing disagreements) which computes a $3+\epsilon$ approximate solution using $O(n/\epsilon)$ words. This should be compared with: * a 5-approximate algorithm using $O(n)$ words * a $3+\epsilon$ approximate algorithm using $O(n \log n)$ words The algorithm is a variant of the PIVOT algorithm combined with a simple sparsification technique which keeps $1/\epsilon$ edges per node. Strengths: * The paper improves the space usage for an important problem * The presentation is clear. Actually the entire paper, including all proofs (excl. references), fits in 6 pages Weaknesses: While I appreciate the theoretical result and the simplicity, the improvement over the previous results seems quite small: i.e. it can be seen as either slightly reducing the approximation factor or the space per vertex from $O(\log n)$ to $O(1/\epsilon)$. I'm not convinced that the problem of correlation clustering in a streaming setting is popular enough for this kind of improvement to be of sufficient interest to the NeurIPS audience Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I'm afraid I can't think of a question that could affect my recommendation Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
null
Summary: The paper studies the correlation clustering problem in the semi-streaming model obtaining a $(3+\varepsilon)$-approximation to the problem with $O(\frac{n}{\varepsilon})$ words of memory. This is an improvement over [CKLPU'22] who gave the same approximation with $O(n\log n)$ words of memory (I'm not sure of the dependence on $\varepsilon$) and over [BCMT'22] who gave a 5-approximation with $O(n)$ words of memory. Moreover, the algorithm is extremely simple. It picks a random ranking of the vertices and for each vertex keeps a list of its $k=O(1/\varepsilon)$ highest ranked neighbors during the streaming phase. In post-processing the algorithm goes through the vertices according to their rank, for each vertex $u$ finding the highest ranked vertex $v$ in its list such that $v=u$ or $v$ is a pivot. In the former case, $u$ is declared a pivot; in the latter case, $u$ is assigned to the cluster of $v$. If no such $v$ exists, $u$ is put in a singleton cluster. The analysis describes an equivalent offline algorithm which is analysed partly via an analogue to the Pivot algorithm from [ACN'08] and partly via a martingale argument for singleton vertices relating their cost to that of the Pivot algorithm. Strengths: The paper is very well written and the proofs are neat and readily checked to be correct. The algorithm is simple, and I think it is nice and rare to see a short paper with such a clean idea. Furthermore, since correlation clustering is of interest to many people in the NeurIPS community, I think the paper should be accepted. Weaknesses: Perhaps the authors could have considered running some experiments on the quality of the clustering output by their algorithm. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What's the dependence on $\varepsilon$ in [CKLPU'22]? l66: ".... has $O(n)$ words of memory where $k$ is a constant": Do you mean $O(nk)$?
l131: I think the runtime bound can be improved to $O((k\log k \log n)n+m)$ or something like that. Indeed, with the random order, you only need to update the priority queue with probability $k/i$ when the $i$'th neighbor arrives. l142: You define $U_t$ to be the unclustered vertices but later on in the martingale argument (l206, l209, l216, ...) you refer to $U_t$ as the set of clustered vertices. I think the indicator in the potential should be $1(v\notin U_t)$. l195-196: Can you define $\Delta OPT_t$ more clearly. When you say the optimal cost of the edges settled by PIVOT, you mean the cost of these edges in the optimal solution, right? l212 "...must be a pivot chosen among non-clustered...": Couldn't it also be because its counter increased to $k$? It doesn't seem to matter for the argument? Equation above l217: The first inequality is an equality, right? And $v\in U_{n+1}$ should be $v\notin U_{n+1}$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The algorithm by [CKLPU'22] uses $n \log(n)/\epsilon$ words of memory. 2. The memory usage is $O(kn)$. It is linear in $n$ if $k$ is fixed. 3. __l131: I think the runtime bound can be improved to ...__ Thank you! You are right. 4. That's correct. 5. You are right. 6. __When you say the optimal cost of the edges settled by PIVOT, you mean the cost of these edges in the optimal solution, right?__ That's correct. We will revise the wording to make it clear. 7. __l212 "...must be a pivot chosen among non-clustered...": Couldn't it also be because its counter increased to $k$? It doesn't seem to matter for the argument?__ We will more carefully explain what happens when the counter is increased to $k$. This case should be handled on l214, because in order for the counter $K_t(v)$ to become equal to $k$, it first needs to increase by 1. 8. That's correct. Thank you!
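For concreteness, the single-pass variant of Pivot described in the review above can be rendered as a short Python sketch (a hypothetical illustration based on the reviewers' description, not the authors' C++ reference implementation):

```python
import random

def streaming_pivot(n, edges, k, seed=0):
    """Single-pass Pivot sketch: keep, per vertex, its k highest-ranked
    positive neighbors during the stream; then cluster greedily in rank order."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    rank = {v: i for i, v in enumerate(order)}  # lower rank = higher priority

    # Streaming phase: per-vertex list of the k highest-ranked neighbors.
    best = [[] for _ in range(n)]
    def keep(u, v):
        best[u].append(v)
        best[u].sort(key=rank.__getitem__)
        del best[u][k:]
    for u, v in edges:                           # stream of '+' edges
        keep(u, v)
        keep(v, u)

    # Post-processing: visit vertices in rank order.
    pivot = [False] * n
    cluster = [None] * n                         # cluster[v] = its pivot
    for u in order:
        # top-k candidates among u itself and its stored neighbors
        cand = sorted(best[u] + [u], key=rank.__getitem__)[:k]
        v = next((w for w in cand if w == u or pivot[w]), None)
        if v is None:          # k neighbors precede u and none is a pivot
            cluster[u] = u     # "singleton" cluster
        elif v == u:
            pivot[u] = True
            cluster[u] = u
        else:
            cluster[u] = cluster[v]
    return cluster
```

On a complete '+' triangle this recovers a single cluster, and with no positive edges every vertex becomes its own pivot, matching the behavior of the original Pivot.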
Summary: This paper demonstrates that the celebrated Pivot algorithm for (unweighted, complete) correlation clustering (with the min disagreements objective) can be adapted to the single-pass, semi-streaming setting, to yield a $(3+\varepsilon)$-approximation in expectation using $O(n/\varepsilon)$ words of memory. (Note that the original Pivot algorithm gives a 3-approximation in expectation.) This improves on previous works that show a $(3+\varepsilon)$-approximation using $O(n\log n)$ words of memory, and a 5-approximation using $O(n)$ words of memory. In the single-pass, semi-streaming setting, edges and their labels arrive online adversarially and the stream can only be read once. It is assumed that only positive edges arrive and the remaining edges in the complete graph are implicitly negative. The Pivot algorithm chooses a random permutation of the vertices, visits each vertex in this order, and if a vertex is unclustered, makes that vertex a "pivot." The pivot grabs its unclustered positive neighbors from the remaining sequence and forms a cluster. The algorithm proposed in this paper tweaks the Pivot algorithm by again choosing a random permutation of the vertices, but for each vertex it only retains positive edges to the top-$k$ ranked neighbors in this permutation. It then runs Pivot on this subgraph, using the chosen random permutation. The analysis rests on controlling the cost of (positive) edges cut by special vertices called "singleton vertices." These are vertices $u$ whose top-$k$ positive neighbors all come before $u$, but none are chosen as pivots, so $u$ ends up in a singleton cluster. (Controlling the cost of other edges is done by reusing the analysis of Pivot, since these edges are cut by the pivot vertices.) The crux of the argument controlling the cost of singleton vertices is to show that a certain potential function is always positive in expectation, which in turn comes from showing it is a submartingale.
Strengths: - The algorithm is a pleasingly simple variant of the Pivot algorithm. It lends insight into just how much "wiggle room" there is in the original Pivot algorithm, as it might a priori seem surprising that we only need retain the top-k ranked neighbors of each vertex. - The analysis (particularly the main lemma, which designs a novel potential function and shows it is a submartingale) is quite elegant. Moreover, it is substantially simpler than analyses in the aforementioned previous works on the semi-streaming setting. - This paper contributes to the growing literature on how the Pivot algorithm can be adapted to other models of computation (in addition to streaming, parallel models such as MapReduce). While the result itself is not a significant improvement over known results, and the algorithm is of comparable simplicity to previous ones, it provides a much cleaner analysis. Weaknesses: While the paper is overall clear and well-written, it would be useful to give a bit more background comparing the proposed algorithm to those in prior work in the semi-streaming setting. Some typos / omissions in the writing: - U_t on pg. 6 is seemingly meant as the complement of how it is defined on pg. 4 - D^+(v) = |N(v)| on pg. 5 - It would be good to add a line stating formally what it means to "cut" at an edge (although it is clear from context) And a suggestion: The use of the term singleton is a bit confusing, since, as the authors mention, not all singleton clusters correspond to "singletons" as defined in the Pivot selection and clustering section of Figure 1. It may be worth just using a distinct term altogether for such vertices. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions! We will revise the introduction and include more background on the prior work. We will also use a different name for "singletons". Finally, we will fix the _typos/omissions_. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. While it is true that the actual result is perhaps an incremental improvement, I like the analysis a lot and through it the insight into the Pivot algorithm gained. I maintain my evaluation.
Summary: This paper provides a streaming algorithm for correlation clustering on complete graphs that uses $O(n/\epsilon)$ space and has an approximation factor of $3+\epsilon$. The algorithm is based on the combinatorial Pivot algorithm. The prior results in the streaming setting give a 5-approximation in the better space of $O(n)$, and another result gives a $3+\epsilon$ approximation in space $O(n \log n)$. They are also based on the combinatorial Pivot algorithm. The best offline approximation for the problem achieves a factor below 2, but the best approximation factor based on the combinatorial Pivot algorithm achieves a factor of 3. Strengths: Matches the best offline approximation based on the combinatorial Pivot algorithm The algorithm is simple and thus practical to implement, and the proof is very clean. Weaknesses: They provide code in the supplementary material but no experimental results comparing this algorithm with the prior works. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Minor comments: Page 1, sings -> signs In description of Algorithm 2, did you mean U_1 = V as opposed to U_1=U? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: You are right $U_1 = V$. Thank you!
Rebuttal 1: Rebuttal: We thank the anonymous reviewers for their valuable comments. We are grateful to them for carefully reading our paper, giving us helpful suggestions, and providing a list of typos. We will address all the feedback we received in the final version of this paper and, of course, fix the typos and add necessary clarifications. We believe that our paper offers significant theoretical and practical advancements in the study of the Correlation Clustering problem. The contribution of our work is twofold: (1) Our algorithm demonstrates improved memory usage compared to both streaming and non-streaming algorithms previously introduced. (2) It is significantly simpler than other Correlation Clustering algorithms, except for the original Pivot algorithm. This simplicity makes our algorithm easy to implement. Our reference implementation is concise, consisting of fewer than 50 lines of C++ code. Moreover, our proof is just 3-4 pages long, which makes it accessible and comprehensible. The reason why we highlight the simplicity of our algorithm is that, in practice, people prefer to use simple algorithms whenever possible. There are many known sophisticated algorithms for Correlation Clustering, but, in practice, the algorithm of choice is, due to its simplicity, the original Pivot algorithm. In the case of streaming algorithms, our algorithm is both better and substantially simpler than other algorithms. Our algorithm can also be used in the non-streaming setting when the data is stored in a file. In this case, the advantage of our algorithm is that it uses much less memory than the standard Pivot. We conducted experiments with two synthetic data sets. The plot (provided in our response as a separate PDF file) shows that the performance of our variant of Pivot is essentially the same as the performance of the original Pivot for $k \geq 4$. 
These data sets are based on the K-nearest neighbor (K-nn) graphs for points generated through a Gaussian mixture model. We normalized the cost such that the standard Pivot algorithm's cost is set at 1. After running our Pivot variant 100 times on each graph, we averaged the costs. Please note that $k$ denotes the parameter of our algorithm, while $K$ denotes the number of nearest neighbors in the K-nn graph. Pdf: /pdf/001eb816d1ace8c79e776bf6900f2325b4ea44e2.pdf
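The kind of synthetic K-nn instance described above could be generated along these lines (an illustrative numpy sketch; the exact mixture parameters of our experiments differ):

```python
import numpy as np

def knn_positive_edges(points, K):
    """Symmetric K-nearest-neighbor graph; its edges are the '+' edges,
    and all remaining pairs are implicitly '-'."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    edges = set()
    for u in range(len(points)):
        for v in np.argsort(d[u])[:K]:
            edges.add((min(u, int(v)), max(u, int(v))))
    return sorted(edges)

rng = np.random.default_rng(0)
# Gaussian mixture: two well-separated clusters of 2-D points.
pts = np.concatenate([rng.normal(0.0, 0.5, size=(20, 2)),
                      rng.normal(10.0, 0.5, size=(20, 2))])
edges = knn_positive_edges(pts, K=5)
# With this separation, every '+' edge stays within one mixture component,
# so a good clustering should recover the two components.
```

The resulting edge list can then be fed, in arbitrary order, as the stream of positive edges to the algorithm.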
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper considers the problem of correlation clustering on a complete graph, where all the edges are assigned either "+" or "-". The goal is to come up with a clustering that maximizes agreement with the edge labels. This is a well-studied problem in approximation algorithms. There is an elegant 3-approximation due to ACN called PIVOT: pick unclustered vertices in random order, create a cluster for each such vertex and all unclustered neighbors that have a + edge to it. The analysis is similarly simple and elegant. This paper considers the semi-streaming model, where the edges arrive one at a time. It is not obvious how the ACN version of PIVOT works in this model, since we do not know all the neighbors of a vertex v when processing it. A recent paper due to Behnezhad et al. gives a 5-approximation for a variant of PIVOT. Their algorithm gives a 5-approximation in $O(n)$ space, but its analysis is subtle and a little tedious. This paper presents another variant of PIVOT which gives a $3+\epsilon$ approximation, again in $O(n/\epsilon)$ space. The analysis is arguably simpler (but to my taste, not as simple as the original PIVOT algorithm). Previously, there was a $3+\epsilon$ algorithm, but that needed $O(n \log n)$ space. Strengths: It is nice to have both a simple algorithm and a simple analysis for a very natural problem. This paper deserves to be published somewhere. Weaknesses: The contribution feels somewhat incremental. We had a simple algorithm that gave a 5-approximation in $O(n)$ space, as well as a $3+\epsilon$ approximation in $O(n \log n)$ space. This paper gives $3+\epsilon$ with a simple algorithm and analysis, and $O(n/\epsilon)$ space. In some sense, it gets the best of all worlds. I am not entirely convinced that either the space reduction or the factor improvement ranks as an important enough contribution at a top ML venue like NeurIPS. For calibration, I don't think this would get into a top-tier theory venue, but I'd be fine with accepting it to a next-tier conference.
If some area expert thinks this is the case, I'd be open to changing my mind. Technical Quality: 3 good Clarity: 3 good Questions for Authors: When you say that the stream only consists of positive edges, it is worth pointing out that you are working on the complete graph and the remainder of the edges are assumed to be negative. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __When you say that the stream only consists of positive edges, it is worth pointing out that you are working on the complete graph and the remainder of the edges are assumed to be negative.__ Thank you! That's correct. We will clarify this point in the final version of this paper.
null
null
null
null
null
null
Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning
Accept (poster)
Summary: This paper thoroughly studies a particular undesired aspect of state-of-the-art self-supervised learning models: deja-vu memorization, where the models end up memorizing specific parts of the image instead of learning just a semantically meaningful association. The authors provide extensive empirical results to show when SSL models might memorize, and what aspects of training might minimize this effect. Strengths: * The authors introduce a novel idea and testing methodology for SSL models * Provides comprehensive empirical results on different aspects of SSL modeling, training, and memorization * Additional results on ablations and using different representations and aspects of the setup help understand the authors' proposed method better * Clear presentation of results and setup. Weaknesses: * The authors do not seem to discuss the impact of different augmentations on memorization, if any. Augmentations are very important for SSL performance, maybe they impact memorization? * It is hard to judge how confidently we can estimate the memorization if deploying. We compare to only one other reference SSL model. * The importance of using the generative model is not quite clear. From the result figures, the kNN images themselves seem to do enough. * Only shows using ImageNet data. No way to know if it will generalize to other datasets as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How do different augmentations used by SSLs impact memorization? * Is using just one other SSL model enough to give confident results of memorization? Should we include more reference SSL models to increase confidence? * How many samples would be needed in the X public dataset? Could be helpful to understand the scale if deployed. * When is the generative model actually needed, when is it necessary beyond just looking at the k nearest neighbors? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * The lack of a way to estimate the confidence or uncertainty in the memorization score. * Although shown for many models, the results are shown only for a single dataset. Using more datasets could have been nice, maybe using CelebA to show background info is memorized to learn a person's identity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading our work, and for your suggestions/questions! We have attempted to answer your primary questions and concerns below. Please let us know if you have any further questions, or if anything is unclear. **Authors do not seem to talk about the impact of different augmentations on memorization, if any. Augmentations are very important for SSL performance, maybe they impact memorization?** In our setting, the most important augmentation is random cropping. If we don't use random cropping, there is no way for the network to learn to associate a specific patch with a given image. In fact, we also show in the paper some memorization occurring with a classifier which is trained only with random cropping. Concerning SSL, it is true that additional color-based augmentations like color jitter are used, which make the task of mapping the representations of different crops much harder. As requested, we ran experiments in which we change the SSL data augmentations. For the first one we removed the color-jitter augmentation from the SSL augmentations (leaving cropping, grayscale, solarization, and blur), while for the second one we kept only cropping as the data augmentation. | | All Augmentations | Without ColorJitter | Only Cropping | |---|---|---|---| | KNN Accuracy | 40.6 | 26.6 | 18.2 | | DejaVu Score | 33.2 | 24.9 | 12.9 | --- As you can see, removing augmentations decreases the KNN accuracy of the models, as expected. As a consequence, it also reduces the deja vu score. The issue is that it is not really possible to disentangle the effect of augmentations on deja vu memorization from their effect on generalization. SSL models need strong augmentations, and if we remove them, we simply learn poor representations that cannot be leveraged for any downstream task. **It is hard to judge how confidently we can estimate the memorization if deploying.
We compare to only one other reference SSL model.** Our method measures model $A$’s memorization of set $A$ and model $B$’s memorization of set $B$, and reports the average. We consistently find that the degree of memorization of these two disjoint sets are nearly equal. Given that both model $A$ and $B$ appear to show systematic memorization of their disjoint training examples, we believe that this confidently reveals the existence of memorization in training. To show memorization of a single example, we agree that one would need to train several reference models (to e.g. confirm that the reference model rarely guesses a foreground Swan given a water background). However, this would significantly increase the computational demand of the test. We feel that our test confidently captures the existence of memorization without untenable computational demands. **The importance of using the generative model is not quite clear. From the result figures, the kNN images themselves seem to do enough.** Thank you for your feedback on this point. The generative model offers a single reconstruction by averaging the kNN examples. It is helpful to know that this did not add much to your read of the paper. We will edit the paper to de-emphasize the generative model reconstructions. **[...] only shows using ImageNet data. No way to know if it will generalize to other datasets as well.** Our choice of a curated dataset like ImageNet was intentional for two reasons. First, ImageNet comes with bounding box annotations that allow us to precisely separate the background (e.g. water) from the labeled foreground object (e.g. swan), allowing us to run our quantitative memorization tests. When bounding boxes are not available, we propose a heuristic test (see Appendix A.5) that simply takes a corner crop of the image which most likely removes the foreground object. We show that this heuristic offers a good lower bound of memorization. 
Second, ImageNet is largely deduplicated, allowing us to show memorization that cannot be fixed by better dataset curation. Prior works on dataset reconstruction, e.g. [1, 2], show near-exact memorization that mostly occurs with highly duplicated examples, and can be largely avoided with deduplication. We feel that this is a significant contribution of our work. **[...] maybe using CelebA to show background info is memorized to learn person's identity.** We agree that such a study is important; however, there are privacy and legal concerns in play here. Consent was never obtained from the people appearing in the CelebA dataset, which might pose legal and ethical concerns (and blurring the faces in CelebA would just make the dataset unusable). Using ImageNet, we have a large diversity of images, which offers significantly more variety than smaller-scale datasets like CelebA. [1] Carlini, Nicholas, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, et al. "Extracting Training Data from Large Language Models." ArXiv:2012.07805 [Cs], December 14, 2020. http://arxiv.org/abs/2012.07805. [2] Carlini, N., Hayes, J., Nasr, M., Jagielski, M., Sehwag, V., Tramer, F., ... & Wallace, E. (2023). Extracting training data from diffusion models. arXiv preprint arXiv:2301.13188.
While CelebA is a tricky dataset to use for works on privacy, it would indeed be interesting to see future results on datasets where such private information is definitively being leaked, e.g., medical image data (maybe CheXpert?), and whether a crop of an image (e.g., some hospital imaging device artifact) ends up leaking chest result data of individuals scanned by that machine. This is not to take away from the results shown. Finally, while the authors already show the impact of the SSL algorithm chosen and the data augmentations on Deja-vu, I am quite interested to see future work on memorization in self-supervised methods and how other training choices impact Deja-vu, e.g., imposing adversarial robustness [1,2]. [1] Kim, Minseon, Jihoon Tack, and Sung Ju Hwang. "Adversarial self-supervised contrastive learning." Advances in Neural Information Processing Systems 33 (2020): 2983-2994. [2] Chen, Tianlong, et al. "Adversarial robustness: From self-supervised pre-training to fine-tuning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
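As an aside, the corner-crop heuristic mentioned earlier in this thread (the Appendix A.5 test that takes a corner crop likely to exclude the foreground object) is simple to sketch. This is an illustrative implementation only; the crop fraction of 0.3 is an assumed value, not one taken from the paper:

```python
import numpy as np

def corner_crop(image: np.ndarray, frac: float = 0.3) -> np.ndarray:
    """Return the top-left corner crop of an (H, W, C) image array.

    Idea (per the rebuttal's heuristic): foreground objects rarely sit
    in the extreme corner, so a small corner crop is likely to contain
    only background. The 0.3 fraction is an assumption for this sketch.
    """
    h, w = image.shape[:2]
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    return image[:ch, :cw]

# Example: a 224x224 RGB image yields a 67x67 background patch.
img = np.zeros((224, 224, 3), dtype=np.uint8)
patch = corner_crop(img)
print(patch.shape)  # (67, 67, 3)
```

Such a crop would then be embedded and fed to the same label-inference test as the bounding-box-based crops, giving the lower bound on memorization the rebuttal describes.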
Summary: This paper shows how self-supervised learning (SSL) models can memorize information about their training data in ways that are not intended. SSL is a technique for learning representations of unlabeled data by training a model to solve a "pretext" task; for example, we might take training images A, B, and C and crop them in different ways, producing sets of patches (A1,...,An), (B1,...,Bn), and (C1,...,Cn). The algorithm tries to find an embedding of these patches that makes Ai similar to Aj, while minimizing the similarity between Ai and Bj. These embeddings are used for a variety of downstream tasks. This paper shows that the representations of training patches can contain surprising information about the training data. In the headlining example, the authors take an image of a black swan from the training data, extract a water-only patch, and produce an embedding of this patch. Then a generative model, which (ideally? see below) has no access to the training data, produces an image from this embedding: it contains a (different?) black swan. The water-only patch contained no information about the swan, so the SSL model "remembered" what was in the image the patch came from. The trick fails when the same patch is run through an SSL model that was trained on different data: it happens only because the patch was seen in the training data. The paper has two methods for detecting such memorization. The first predicts the class label given the patch's representation. The second, mentioned above, visually inspects images generated from the representation. In both cases the classification/generation is done by a model trained on a different data set. 
The label-prediction method demonstrates quantitatively that the representation contains information about the label, while the visual inspection reveals finer details: the paper contains a great example of how SSL models can learn and memorize the distinction between European and American badgers, even though both share an ImageNet label. Strengths: If the authors resolve my dataset question below, I am happy to call this a very clean and original demonstration on a significant and relevant topic. It is definitely a paper to generate good discussion. The paper performs extensive experiments, showing how the results change as they turn various knobs. The quantitative and qualitative techniques complement each other nicely. The work lays out clear directions for future work. Weaknesses: As I understand from Appendices A.2.1 and A.2.3, "private" sets A and B come from ImageNet 1k, which is a subset of ImageNet. The "public" set X is a face-blurred version of ImageNet. This appears to contradict the experimental setup of Section 3.1 and, if it is the case, means I am not sure how to interpret the results. I could be mistaken and am open to raising my score. EDIT: the rebuttal adequately addressed these concerns. I was initially confused by the use of the word "reconstruction." For example, Figure 1 talks about "reconstruction of an SSL training image," but the generated image looks quite different from the training image. It seems you use the word differently than recent work does [1-5], where it means producing an example that is pixel-by-pixel, or feature-by-feature, similar to the input. I suggest using a different word or clarifying the issue earlier in the paper. I think synthetic-data experiments might strengthen the paper. They might nail down specific aspects of this phenomenon. One might better probe the types of memorized information. (Of course, easier suggested than done.) 
I find the related work section adequate, but I think Section 3, "Defining Deja Vu Memorization," would benefit from clearer connections between prior work and the presented definitions. In particular, the definition at line 111 has similarities to the notion of memorization used in [2], which also attempts to capture the idea of memorization of specific information about individual examples, beyond what one can infer from the data distribution. The definition for testing methodology, at line 130, is similar to the definition of memorization used in [6,7]. [1] Balle, Borja, Giovanni Cherubin, and Jamie Hayes. "Reconstructing training data with informed adversaries." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022. [2] Brown, Gavin, et al. "When is memorization of irrelevant training data necessary for high-accuracy learning?." Proceedings of the 53rd annual ACM SIGACT symposium on theory of computing. 2021. [3] Haim, Niv, et al. "Reconstructing training data from trained neural networks." Advances in Neural Information Processing Systems 35 (2022): 22911-22924. [4] Guo, Chuan, et al. "Bounding training data reconstruction in private (deep) learning." International Conference on Machine Learning. PMLR, 2022. [5] Hayes, Jamie, Saeed Mahloujifar, and Borja Balle. "Bounding Training Data Reconstruction in DP-SGD." arXiv preprint arXiv:2302.07225 (2023). [6] Feldman, Vitaly. "Does learning require memorization? a short tale about a long tail." Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. 2020. [7] Feldman, Vitaly, and Chiyuan Zhang. "What neural networks memorize and why: Discovering the long tail via influence estimation." Advances in Neural Information Processing Systems 33 (2020): 2881-2891. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Section 3.1 says the datasets A, B, and X are disjoint. Appendix A.2.1 says that A and B overlap. 
Furthermore, as mentioned above, my understanding is that X is a face-blurred version of ImageNet, and thus X might contain (possibly blurred) examples from A and B. Can you clarify this? Could you lay out what the "adversary," the person conducting the privacy attack, has access to? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the related work references! We can indeed answer your questions pertaining to the dataset splits. We will edit the paper to make these points more precise. Please do let us know if we can provide any further clarifications. **Section 3.1 says the datasets A, B, and X are disjoint. Appendix A.2.1 says that A and B overlap.** To be clear: $A$ and $B$ are always disjoint! We are very sorry about the confusion on this crucial point due to the lack of clarity in the appendix. Here is the full picture: there are 1,281,167 training images in the ImageNet data. Of these images, only 456,567 have bounding box annotations (which are needed to compute the Deja Vu score). The private sets $A$ and $B$ are sampled from these 456,567 bounding-box-annotated images in such a way that $A$ and $B$ are disjoint. If we remove the 456,567 annotated images from the 1,281,167 training images, we get 824,600 remaining images without annotations, which never overlap with $A$ or $B$. From this set of roughly 825K images, we took 500K images as our public set $X$. So now we have three non-overlapping sets $A$, $B$, and $X$. Then, if we remove the 500K public set images from the 824,600 images without annotations, we are left with 324,600 images that are in neither $A$, $B$, nor $X$. For simplicity, let us call this set of remaining 324,600 images the set $C$. We have thus split the entire ImageNet training set into four non-overlapping splits called $A$, $B$, $C$, and $X$. When running our experiment with a small number of training images, we only use the set $A$ to train SSL_A and the set $B$ to train SSL_B, and then use the set $X$ as a public set for evaluation. However, to run larger-scale experiments, we augment the training data for SSL_A and SSL_B with images sampled from the set $C$. Here, SSL_A will still be trained on set $A$, but it will be augmented with images from set $C$. 
The same goes for SSL_B, which will still be trained on the set $B$ but augmented with images from the set $C$. As such, the images sampled from the set $C$ to train SSL_A and to train SSL_B might overlap. However, this is not an issue, since the evaluation is done using only images from the bounding-box-annotated sets $A$ and $B$, which never overlap (and, to be clear, $A$ and $B$ do not overlap with $C$). We would like to thank you again for raising the lack of clarity in the appendix. We will be much more straightforward in the next revision of the paper; we hope that the introduction of the set $C$ will help to clarify the experimental setup in the appendix. Please let us know if there are any remaining concerns on this very important point. **On face-blurred ImageNet** The face-blurred ImageNet is the same thing as ImageNet, which is the same as ImageNet-1K (we inadvertently used different words to refer to the same dataset). Since this is not a different dataset, it is impossible that $X$ contains images from $A$ and $B$. We used only one dataset in our work, and that is ImageNet (in which people's faces were blurred for privacy reasons). It is helpful to know this was unclear, since it is indeed foundational to our assertions of memorization. **I was initially confused by the use of the word "reconstruction." For example, Figure 1 talks about "reconstruction of an SSL training image," but the generated image looks quite different from the training image. It seems you use the word differently than recent work does [1-5], where it means producing an example that is pixel-by-pixel, or feature-by-feature, similar to the input. I suggest using a different word or clarifying the issue earlier in the paper.** This is helpful feedback. It is true that we do not aim to reconstruct the images pixel-by-pixel. We may be better off using "partial reconstruction," in which the aim is to reconstruct memorized information (foreground) from a given crop (background). 
We feel that partial reconstruction of *thousands* of training images is a contribution. **Could you lay out what the "adversary," the person conducting the privacy attack, has access to?** We mention twice that Deja Vu memorization could imply adversarial advantage. We mean that an adversary with access to partial information of a training image (e.g. a crop of the background, or an individual’s face) could use the model to infer the remainder of the image, possibly with the help of a public dataset like the disjoint set $X$. In this case, the adversary could have access to the same information that our test does, as represented by the red box of Figure 2’s Inference Pipeline. It takes in the set $X$, the background crop, and the model, and returns inferences about the remainder of the training image. We will make this clear where we mention the adversarial implications of memorization. However, it is worth noting that in this paper we focus on the detection of memorization in SSL, which is distinct from a methodical study of practical adversarial risk. We do believe our discovery of memorization has implications for privacy and is fodder for future security work, but we are careful not to make strong claims quantifying the exact risk imposed by this memorization. As such, we intentionally focus this paper on identifying memorization and not on defining a practical adversary. **I find the related work section adequate, but I think Section 3, "Defining Deja Vu Memorization," would benefit from clearer connections between prior work and the presented definitions.** Thank you for taking the time to share these additional references. We will work them into our related work section to more accurately position our notion of memorization. **I think synthetic-data experiments might strengthen the paper.** Thank you for your suggestion; can you explain the type of synthetic-data experiment you have in mind? 
Any detail on the different types of memorized information you are referring to would be helpful. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed comment. Based on your clarifications about the data splitting, I am raising my score to 7. Either "partial reconstruction" or "reconstruction" are fine: I think the paper would benefit from more quickly pointing out the difference from prior use of that word. For instance, in the caption of Figure 1, you talk about the "association of this *specific* patch of water (pink square) to this *specific* foreground object (a black swan)." But the first use of "specific" refers to a set of pixels while the second refers to a particular type of object. I wholeheartedly agree that extracting this type of association is a contribution! By "adversary," I meant the same thing as you mean by "test." This text description of the red box in Figure 2 is what I was looking for. I used the word to mean "whoever is extracting information" and understand that your contributions are different from attacks on real-world systems. Reflecting upon synthetic data, I don't have a strong suggestion. One might place patches (such as colored squares) in the background of images and try to determine the color of the square from a patch. However, even if this is successful it's not clear how much it adds to your story, which already recovers strong information other than the label.
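As a sanity check, the dataset-split sizes laid out in the rebuttal above (sets $A$/$B$ from the annotated images, public set $X$ and leftover set $C$ from the unannotated ones) can be reproduced in a few lines; the counts are taken directly from the rebuttal:

```python
# Counts as stated in the rebuttal above.
total_train = 1_281_167   # ImageNet training images
with_boxes = 456_567      # images with bounding-box annotations (A and B are drawn from these)
without_boxes = total_train - with_boxes  # images that can never overlap A or B
public_x = 500_000        # public set X, drawn from the unannotated images
leftover_c = without_boxes - public_x     # set C, extra training data for SSL_A / SSL_B

print(without_boxes)  # 824600
print(leftover_c)     # 324600
```

The four resulting splits ($A$, $B$ inside the 456,567 annotated images; $X$ and $C$ inside the 824,600 unannotated ones) are pairwise disjoint by construction, which is the point the rebuttal clarifies.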
Summary: Self-Supervised Learning algorithms provide a label-free alternative for representation learning and are widely used in vision and language. In this work, the authors try to understand the information content of representations learned by SSL algorithms through the lens of memorization. In particular, they define a "deja-vu" memorization notion that measures how much a given sample from the training set can be reconstructed from a pretrained model. The authors focus on joint-embedding-based approaches, exploring the extent of such memorization during pretraining by evaluating label-inference accuracy on peripheral crops of a subset of training images ($A$). A "deja-vu" score at p% confidence measures the gap in accuracy between a target model (trained on $A$) and a reference model (trained on a different subset $B$). The gap essentially measures the extent to which exposing particular samples during training makes it easier to predict the label from partial background information. Building on these metrics, the authors present interesting results comparing different SSL algorithms under different training configurations. Notably, long pretraining increases the risk of memorization, while the size of the dataset is not as consequential (Fig 4/5). With a sequence of ablations, the authors establish empirical findings and provide design recommendations that minimize deja-vu memorization while maintaining high performance. Strengths: Some strengths of the work are outlined below: **Originality**: The authors examine the representation quality in SSL from a privacy perspective. The work focuses on train-time characteristics and memorization for non-generative SSL algorithms, in line with similar works in language/generative models. The authors propose a novel leave-one-out mechanism for evaluating the extent of memorization (compared to correlation) of specific training-set examples across SSL algorithms. 
**Significance**: Understanding the information content of representations learned by black-box SSL algorithms is important as these algorithms become widely used in safety-critical applications. In particular, the paper makes a solid case for understanding the privacy risk induced by long-pretraining routines in existing SSL algorithms. **Quality**: The work is well-motivated and grounded in real-world challenging questions related to measuring the quality of representations learned in SSL algorithms. **Clarity**: The paper is well written, with definitions and pedagogical examples that motivate the core insights of the work. The authors include clear, well-labeled figures and plots accompanying their observations. **Reproducibility**: All experiments are performed on public datasets and pretrained models, and sufficient details are provided to reproduce the core results of the paper. The authors are encouraged to release the accompanying code at their earliest convenience. Weaknesses: Some weaknesses of the work are outlined below: **Explainability**: The authors have shared some interesting empirical findings about the effectiveness of SSL algorithms in representing data and their ability to recall examples from the training set from partial side information. Although the authors demonstrate that established methods like guillotine regularization reduce the risk of memorization, the current version of the study could benefit from more discussion on the underlying mechanisms behind these mitigation techniques. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Although the authors primarily concentrate on joint-embedding-based techniques, it is probable that reconstruction-based methods are also susceptible to this type of memorization. To enable a fair comparison across the SSL algorithm family, it would be useful to include a baseline such as MAE. 2. 
The focus of the authors is on the linear probe as a potential measure of representation quality. However, they have shown that this metric is inadequate in capturing the full extent of deja-vu memorization. It would be valuable to determine if this inconsistency still exists even when the model is fine-tuned, which involves adapting the backbone as well, in an end-to-end approach. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper explores the privacy-risk profile of SSL algorithms through the lens of training-set memorization. The authors are strongly encouraged to discuss their proposed algorithm's limitations (including assumptions and failure modes). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for giving our paper a detailed read, as is clear from your thorough summary! We are happy to hear the strengths you saw in our work, and we try to address some of your primary questions below. Please let us know if you have any follow-up questions. **Explainability: [...] Although the authors demonstrate that established methods like guillotine regularization reduce the risk of memorization, the current version of the study could benefit from more discussion on the underlying mechanisms behind these mitigation techniques.** In this work, we wanted to alert the research community to the Deja Vu phenomenon and show some situations in which it is amplified during SSL training. It is not yet clear what the exact underlying mechanisms inducing this phenomenon are. We are keen to study this in follow-up work. The guillotine regularization authors have shown that most of the over-fitting with respect to the downstream task lies in the last layers of the network. Consequently, when training an SSL model, the last layers will be heavily biased (because of the training criterion) to match the representations of different patches into the same embedding. This is not the case for the intermediate layers of the network, which are free to assign different embeddings to different crops of a given image (which might also be better for generalization). However, there are still some mysteries: we do not know why the BYOL criterion is so much more robust with respect to Deja Vu memorization in comparison to Barlow Twins or VICReg. Clearly, what causes Deja Vu is subtle and complex. We believe further work is needed to fully assess the extent of this phenomenon and its origin. **It would be useful to include a baseline such as MAE** This work focuses on introducing a new modality of memorization in SSL (Deja Vu memorization) and demonstrating it empirically in SSL joint-embedding models. 
We considered including MAE in our experiments, but it is unclear how to make a fair comparison with those models. MAE is trained with a decoder which generates images in pixel space. The decoder is directly trained to reconstruct training examples given patches, so there is no need to introduce another reconstruction method. This being the case, measuring Deja Vu with MAE would require an entirely different methodology than our KNN-based decoder scheme. Hence for this work, we focus on joint-embedding methods. We will modify the introduction to clarify our focus on joint-embedding models for the scope of this work. **It would be valuable to determine if this inconsistency still exists even when the model is fine-tuned, which involves adapting the backbone as well, in an end-to-end approach.** The extent of Deja Vu memorization in a fine-tuning setting is an interesting question. We ran experiments with fine-tuned models. In this setup, we used a pretrained VICReg and fine-tuned it for 20 epochs (for both SSL_A and SSL_B). We present the results in the table below:

| Epoch | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Train Acc. | 60.0 | 67.2 | 73.5 | 78.0 | 81.5 | 84.9 | 87.1 | 88.9 | 90.2 | 91.5 | 91.8 |
| KNN Acc. | 40 | 33 | 35.8 | 36.6 | 37.4 | 37.5 | 37.9 | 38.1 | 38.2 | 38.4 | 38.7 |
| DejaVu Score | 33.2 | 23.4 | 24.7 | 25.9 | 26.5 | 27.3 | 28 | 28.4 | 31.2 | 29.9 | 32.6 |

Our tests show that the training accuracy increases significantly, while the KNN accuracy does not change much. It is also interesting to note that immediately after the first few epochs of fine-tuning, the Deja Vu score decreases significantly, but it goes up again later in training. SSL joint-embedding models are often used in a frozen setting (in contrast to MAE-type models, which are often fine-tuned); however, we will gladly add these fine-tuning results in the appendix. 
Please let us know if there is anything else you would want to see in this setup.
Summary: This work investigates to what degree self-supervised models exhibit memorization of particular samples in the training set. The authors demonstrate that a simple crop of an image, comprising only the background, is enough for an adversary to learn the associated class label of the foreground object. Moreover, a conditional generative model can be trained, allowing the adversary to reproduce an image that looks very similar to the original. Such memorization capacity can lead to privacy issues, and the authors demonstrate that such behaviour occurs in a broad range of models, while at the same time, this effect is not visible in common metrics such as probing accuracy. Strengths: 1. The paper is very well written and builds upon the simple but very interesting idea of memorization in self-supervised models. Such an investigation seems timely, given the similar recent efforts in large language models. 2. The experiments are very well designed, clearly disentangling the notion of correlation (i.e. actually learning something generalisable about the sample) and pure memorization (i.e. only memorizing the specifics of the sample). The splitting of the training data into two sets and training two separate models is a very neat way to measure this effect, which would otherwise remain very difficult to disentangle. 3. It is very interesting that supervised and self-supervised methods behave very differently in terms of memorization. In general, studying the difference between the two learning approaches is very worthwhile and timely. Weaknesses: 1. I struggle to really see the privacy concern here. How likely is it really for someone to have access to only a crop of an image, while not being able to access the full image? The authors present the example of an image of a face of a person, which could then be potentially used to infer where that person is. But again, in what situation do I have access to the face of an image without the rest of it? 
As the authors also demonstrate, such an image has to be present in the training data; it is not enough to simply use another photo of a person and then have the model reveal another image of the same person. I also believe that this scenario is different from the language model case, where simply “guessing” a good query as input for the model to complete seems to be more realistic due to the smaller search space and potential knowledge of similar queries. I hope the authors can comment on this. 2. This is more minor, but while I really like the experimental setup, I’m still not sure if memorization and generalization are completely disentangled. Assume that you have a model that clearly memorizes images but also performs very poorly at the downstream task, i.e. its linear probe is around random guessing. Due to the complete lack of generalization, it will be tough to establish memorization since the nearest neighbours will all share no correlation. So from a purely theoretical perspective, this seems a bit unsatisfying. Of course in practice, no generalization is usually not the case, so from a practical viewpoint this makes complete sense. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What happens if you perform the nearest neighbour class inference on the training set (excluding the image itself)? This would circumvent the issue outlined above, but of course defeats the practicality of the method. Still seems like an interesting experiment though. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad to hear that you found our work timely and our experiments interesting. Please let us know if our responses below answer your primary questions. **I struggle to really see the privacy concern here. How likely is it really for someone to have access to only a crop of an image, while not being able to access the full image?** Thank you for your comment. We would like to emphasize that this paper centers on finding systematic memorization of training data by SSL models. We do suggest that memorization could imply privacy risks, but a rigorous study of practical adversaries would require a complete follow-up work and is not a primary focus of this work (the discussion around privacy spans only a few lines in the paper). If there are any sentences in the paper that seemed to misconstrue this, please let us know and we will be glad to clarify the contributions. We acknowledge that the claims made in the related-work paragraph on privacy risks (and also the sentence in the introduction about associating a face to an activity) might have been too strong without further privacy analysis. You are right that the comparison with NLP is not as straightforward, since it is easier to find a sequence of words that might have been in the training set than a sequence of pixels. We plan to update the paper accordingly to ensure it makes no misleading assertions or comparisons. To provide a simple example of the kind of disclosure that motivated our earlier analogy with NLP, consider an individual’s public social media profile picture that is a crop of their face taken from a photo of them on vacation. If the full (private) image was used in SSL model training, the public profile face crop could be used to query the model and recover information about the remainder of the image, disclosing the vacation location. However, we recognize now that this potential setup might be more restricted than the ones used in NLP. 
We hope that modifying the misleading sentences around the privacy implications will resolve your concerns. **As the authors also demonstrate, such an image has to be present in the training data, it is not enough to simply use another photo of a person and then the model reveals another image of the same person.** A core component of studying memorization is showing how models retain excessive information about training set examples. It would be interesting to see how this changes for slight variations of training set examples, and we will consider this in follow-up work. **I’m still not sure if memorization and generalization are completely disentangled [...if the model’s] linear probe is around random guessing [...] it will be tough to establish memorization since the nearest neighbours will all share no correlation. What happens if you perform the nearest neighbour class inference on the training set (excluding the image itself)?** There are many ways a model could memorize its training data. We do not claim that Deja Vu completely disentangles generalization from memorization in Self-Supervised Learning. We claim that one modality of memorization in SSL can be revealed with our Deja Vu method. As you mentioned, a model could probably store a lot of information about training examples in its weights without producing meaningful embeddings. In this work, we focus on the model’s embeddings, which we assume are meaningful (at least enough to give good KNN performance). Providing a more theoretical perspective on memorization might also be challenging, since there might be many ways to define memorization in a DNN. In addition, defining generalization in a self-supervised setting is not straightforward (are you considering generalization on a pretext task or a downstream task, and if so, which ones?). Concerning using the training set $A$ instead of the public set $X$: it would indeed defeat the purpose of removing training-set access from the test. 
Moreover, using the training set for decoding would not quite disentangle memorization from performance, since it still requires the encodings of similar training set images (e.g. swans) to be close together. However, we still ran the experiment as requested and found a small drop in the DejaVu score. We used our VICReg model, which showed a DejaVu score of 33.2, replaced the public set $X$ with the set $A$ when doing KNN (excluding the image itself in each case), and got a DejaVu score of 29.7 (the difference can mostly be explained by the set $A$ being much smaller than the set $X$). We think that adding such results to the paper might add more confusion than needed; however, if you think this is an important point that should be in the paper, we would gladly discuss it further. --- Rebuttal Comment 1.1: Comment: I thank the authors for engaging with my comments! **Privacy concern:** Makes sense to me, re-writing this part of the paper and honestly stressing the difference to the NLP scenario would clarify things a lot without diminishing the contributions in my opinion. I completely agree that the discovered phenomenon is interesting in itself. **Recognizing test image:** If the authors rewrite the privacy section accordingly, there is also no need for such an experiment in this work. **Memorization vs generalization:** Thanks for running this experiment, I see the concerns of the authors now and agree that this might only foster confusion since the setting is already a bit complex. It is still encouraging though that this experiment results in very similar memorization scores. In light of the answers and conditional on the re-writing of the section regarding privacy concerns, I have raised my score.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thoughtful remarks on our work. Reviewer H9Sq raised an important clarification question, asking whether sets $A$ and $B$ are disjoint. We want to clarify for everyone that indeed our sets $A$ and $B$ are always disjoint (which is fundamental in our analysis). We also want to highlight that the core of our contribution is the detection of memorization in SSL, which is distinct from a methodical study of practical adversarial risk. We do believe our discovery of memorization might have implications for privacy and is fodder for future security work, but are careful to not make strong claims quantifying the exact risk imposed by this memorization. As such, we intentionally focus this paper on identifying memorization and not on defining a practical adversary. We recognize that the claims in our paragraph on the privacy implications of Deja Vu memorization might have been too strong without providing further empirical evidence. We will be careful to update this paragraph accordingly. We were pleased to see the thorough and detailed summaries written by each reviewer, underscoring their careful read of the work. We have attempted to respond to each of your primary questions below. Please respond if you have any follow-up questions. We are eager to make our contributions as clear as possible, and your help is invaluable!
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression
Accept (poster)
Summary: The authors address the problem of kernel ridge regression with a finite-rank kernel, in the under-parametrized regime. They prove sharper upper bounds on the test error compared to previously existing works. Strengths: The discussion is very well-written and easy to follow, with a number of illustrations being provided to set up the problem. Careful and detailed comparison to previous work is given, making it easy to understand the novelty of the present manuscript. Overall, the addressed problem is an important one, and I believe the derived bound will be of interest to the community. I have not checked the proof, and therefore give a low confidence score. Weaknesses: I do not have any major concern. A minor remark is that while all experimental details are provided in Appendix D, it would be good to add at least a short description of the target functions and regularization used in Fig.1 and 2 in the main text or in the corresponding captions. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - (minor) While the comparison with the bound of [6] in Fig. 2 is compelling, is there a reason why the bound of [30] is not also plotted? Its addition, or a brief sentence justifying its non-inclusion, would be welcome. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Experimental details **\[review\]** *” A minor remark is that while all experimental details are provided in Appendix D, it would be good to add at least a short description of the target functions and regularization used in Fig.1 and 2 in the main text or in the corresponding captions.”* **\[answer\]** Thank you for your suggestion; we will describe the experiments in more detail in the revised edition. ## Not plotting the Rademacher bound **\[review\]** *”While the comparison with the bound of [6] in Fig. 2 is compelling, is there a reason why the bound of [30] is not also plotted? ”* **\[answer\]** The Rademacher bound is independent of $\lambda$; hence, if we pick a very small $\lambda$, its learning curve as a function of $N$ would be far higher than ours and Bach’s. We will indeed add a sentence about this in the revised version; thank you for pointing this out. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I thank the authors for the explanations. I wish to stand by my original score, and stress once again that my assessment is limited by my inability to check the proof, whence the low confidence. --- Reply to Comment 1.1.1: Comment: Thank you for your positive rating and brief comments on our paper. We greatly appreciate your time and consideration.
Summary: The paper highlights the inadequacy of existing statistical learning guarantees for general kernel regressors when applied to finite-rank kernels. The authors have addressed this issue by developing non-asymptotic upper and lower bounds for the test error of finite-rank kernel ridge regression. These new bounds are more precise than the previously established ones and are applicable for any regularization parameters. This research provides a more dependable framework for utilizing finite-rank kernels in machine learning applications. Strengths: 1. The paper addresses an important gap in the current understanding of machine learning algorithms by developing more accurate and reliable bounds for finite-rank kernel ridge regression, which is frequently used in practical applications. 2. The research provides a more precise and dependable framework for using finite-rank kernels in machine learning problems, which could result in better performance and more efficient algorithms. Weaknesses: 1. The paper only considers under-parameterized regime. 2. Low bound is a main argument of this paper, but all details are given in the Appendix. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It looks a little bit weird that you don't need to take $\lambda=O(N^\theta)$ for some $\theta<0$ to balance the bias and variance. Any insights for this? 2. In Theorem 4.1 and Corollary 4.3.1, what are the mild conditions on the kernel K? I cannot find the mild condition on input distribution either. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Lower bound **\[review\]** *”Low bound is a main argument of this paper, but all details are given in the Appendix.”* **\[answer\]** Yes, we should have stated the lower bound in the main theorem explicitly. As mentioned in responses to other reviewers, it is in hindsight very clear to us that the lower bound should have been duly highlighted in the main text; we will do so in the revision. ## Dependence of $\lambda$ w.r.t. $N$ **\[review\]** *”It looks a little bit weird that you don't need to take $\lambda=O(N^\theta)$ for some $\theta<0$...?”* **\[answer\]** Indeed, this is one of the key contributions of our work. Compared with [Rudi et al. (2015)](https://papers.nips.cc/paper_files/paper/2015/hash/03e0704b5690a2dee1861dc3ad3316c9-Abstract.html), [Rudi and Rosasco (2017)](https://proceedings.neurips.cc/paper_files/paper/2017/file/61b1fb3f59e28c67f3925f3c79be81a1-Paper.pdf) and [Bach (2023)](https://www.di.ens.fr/~fbach/ltfp_book.pdf), in our theorem the regularization $\lambda$ can be chosen independently of the sample size $N$ or the kernel rank $M$. ## Balancing bias and variance **\[review\]** *”It looks a little bit weird that you don't need to take $\lambda=O(N^\theta)$ for some $\theta<0$ to balance the bias and variance. Any insights for this?”* **\[answer\]** For simplicity in the main theorem, we have suppressed the dependency on $\lambda$ in the variance by upper bounding the term $S=\sum_{k=1}^M\frac{\lambda\_k^2}{(\lambda\_k+\lambda)^2}$ by $M$. Please see Proposition C.14 for a more detailed approximation of the variance. Hence it is possible to derive an optimal $\lambda>0$ balancing the bias-variance tradeoff, which depends on the kernel spectrum. Since we have the lower bound as well, such a $\lambda$ would be minimax optimal. Traditional analysis suggests $\lambda$ should be of the order of $\sqrt{\sigma}$, but our detailed analysis can also relate the choice of $\lambda$ to the spectrum $\lambda_k$. 
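As a small numerical illustration of this point (with a hypothetical polynomially decaying spectrum, not taken from the paper's experiments), one can check that the $\lambda$-dependent factor $S$ always lies below both the crude bound $M$ and the effective dimension $\mathcal{N}_M(\lambda)$, so a $\lambda$ balancing the two terms depends on the spectrum:

```python
import numpy as np

def variance_factor(spectrum, lam):
    """S = sum_k lambda_k^2 / (lambda_k + lam)^2, the lambda-dependent factor
    in the variance that the main theorem crudely upper-bounds by M."""
    return np.sum(spectrum**2 / (spectrum + lam) ** 2)

def effective_dimension(spectrum, lam):
    """N_M(lam) = sum_k lambda_k / (lambda_k + lam)."""
    return np.sum(spectrum / (spectrum + lam))

M = 50
spectrum = 1.0 / np.arange(1, M + 1) ** 2  # hypothetical decaying spectrum

for lam in (1e-3, 1e-1, 1.0):
    S, n_eff = variance_factor(spectrum, lam), effective_dimension(spectrum, lam)
    assert S <= n_eff <= M  # S is sharper than both classical quantities
    print(f"lambda={lam:g}: S={S:.3f}, N_M(lambda)={n_eff:.3f}, M={M}")
```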
We will put this as a possible future research direction. ## Mild condition **\[review\]** *”In Theorem 4.1 and Corollary 4.3.1, what are the mild conditions on the kernel K? ”* **\[answer\]** The mild condition on the kernel $K$ is indeed Assumption C.15, stating for $x\sim\rho$, the distribution $\psi\_k(x)$ is sub-Gaussian with sub-Gaussian norm bounded from above, where $\psi\_k$ is any eigenfunction of $K$. Note that for compact input space $\mathcal{X}$ and finite rank $M$, Assumption C.15 always holds, since any bounded distribution is sub-Gaussian. This assumption is often used in KRR results, for example, in [Tsigler and Bartlett (2023)](https://www.jmlr.org/papers/v24/22-1398.html). --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank the authors for addressing my concerns. I keep my score unchanged.
Summary: This paper analyzes the test error of ridge regression with a finite-rank kernel. The finite-rank kernel appears in several practical settings such as transfer learning and random neural networks. The authors provide a new analysis in this setting using tools from random matrix theory. A detailed comparison to other generalization bounds is presented. Strengths: 1. New generalization results in a practical and challenging setting. 2. New analysis techniques. Weaknesses: 1. The comparison to standard generalization results is not clear (Eq. (12)). Both bounds scale as $\sqrt{\frac{\log(n)}{n}}$. It is not clear which one is tighter. 2. The technical details of the proof are not given and the novelty of the analysis is not explained in detail. Instead, most of the paper is devoted to a discussion and experiments. 3. There are some unclear technical issues (see questions below). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The bound in Theorem 4.1 and the bound in Eq. (12) both scale as $\sqrt{\frac{\log(n)}{n}}$. Which one is tighter and why? What is C in Eq. (12)? 2. The test error definition is not standard: Shouldn’t the label be $y$ in the MSE and not $\tilde{f}$? Is the definition of the test error the same in other works? Also, the bias-variance decomposition does not seem to be standard. Usually the learned predictor is the same in the bias expression (e.g., see the Wikipedia definition of bias-variance decomposition), whereas here the predictor is different: it changes from $f_{Z,\lambda}$ to $f_{(X,f(X)),\lambda}$. Is this standard for analyzing kernel methods? 3. Where in the analysis is the under-parameterized assumption ($M <N$) needed? 4. In Eq. (10) if $\tilde{\gamma}_{>M} = 0$, the bias is exactly zero for all $N$? Is this a mistake? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not discussed in detail. Some of the limitations are mentioned throughout the paper. Maybe it would be helpful to add a limitations section summarizing all limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Technical details and novelty **\[review\]** *”The technical details of the proof are not given and the novelty of the analysis is not explained in detail.”* **\[answer\]** We will refactor the revised paper so that these things are stated more clearly and more prominently, while relegating some of the numerics to the appendices. We apologize that, due to space constraints, the paper lacks a paragraph dedicated to technical details. In short, the main technical tool we used in the paper is the concentration of the spectrum of a centered random matrix with sub-Gaussian entries, followed by the Neumann expansion of a matrix inverse. Another novel aspect is that we demonstrate how this same technique can also be used to derive the lower bound for the test error. ## Rademacher bound in Equation (12) **\[review\]** *”The bound in Theorem 4.1 and the bound in Eq. (12) both scale as $\frac{\log N}{N}$. Which one is tighter and why? What is C in Eq. (12)?”* **\[answer\]** Here is Theorem 10.7 from [Mohri et al. (2018)](https://cs.nyu.edu/~mohri/mlbook/): > Let $K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ be a PDS kernel, $\mathbf{\Phi}:\mathcal{X}\to\mathcal{H}$ a feature mapping associated to $K$, and $H=\\{ x\mapsto\mathbf{w}\cdot\mathbf{\Phi}:\\|\mathbf{w}\\|\_\mathcal{H}\leq\Lambda \\}$. Assume there exists $r\>0$ such that $K(x,x)\leq r$ and $\|f(x)\|\leq \Lambda r$ for all $x\in\mathcal{X}$. Then for any $\delta>0$, with probability at least $1-\delta$, the following holds for all $h\in H$: $$ R(h) \leq \hat{R}(h) + \frac{8r^2\Lambda^2}{\sqrt{N}}\Big(1+\frac{1}{2}\sqrt{\frac{\log \frac{1}{\delta}}{2}}\Big), $$ > where $R(h)=\mathbb{E}_x[(h(x)-\tilde{f}(x))^2]$ is the test error and $\hat{R}(h)=\frac{1}{N}\sum\_{i=1}^N[(h(x\_i)-\tilde{f}(x\_i))^2]$ is the train error. 
Our bound in Theorem 4.1 is tighter than equation (12) in two ways: first, the decay rate of the bias bound in equation (10) is $\frac{\log N}{N}$ as $\lambda\to0$, which one cannot obtain from the Rademacher bound, since it is insensitive to $\lambda$; second, the total decay in the variance bound in equation (9) is $\frac{1}{N}\cdot\sqrt{\frac{\log N}{N}}$. ## Test error and bias-variance decomposition **\[review\]** *”The test error definition is not standard… Is the definition of the test error the same in other works? Also, the bias-variance decomposition does not seem to be standard… Is this standard for analyzing kernel methods?”* **\[answer\]** Our definition of test error measures the difference between the regressor trained on a noisy sample and the (noiseless) target function, which is standard in KRR research, and is equivalent to, for example, the generalization error in [Mohri et al. (2018)](https://cs.nyu.edu/~mohri/mlbook/), the integrated square risk in [Liang (2019)](https://arxiv.org/abs/1808.00387), the out-of-sample risk in [Hastie et al. (2020)](https://arxiv.org/abs/1903.08560), and the excess risk in [Bach (2023)](https://www.di.ens.fr/~fbach/ltfp_book.pdf). Like previous works, we choose this definition because the noise comes only from the noisy sample (train set), not from the test point. Indeed, we have $$ \mathbb{E}\_{x,y} [(f\_{\mathbf{Z},\lambda}(x)-y)^2] = \mathbb{E}\_{x,\epsilon}[(f\_{\mathbf{Z},\lambda}(x)-\tilde{f}(x)-\epsilon)^2] = \mathbb{E}\_{x}[(f\_{\mathbf{Z},\lambda}(x)-\tilde{f}(x))^2] - 2\mathbb{E}\_{x,\epsilon}[\epsilon(f\_{\mathbf{Z},\lambda}(x)-\tilde{f}(x))] +\mathbb{E}\_{x,\epsilon}[\epsilon^2] $$ $$ = \mathbb{E}\_{x}[(f\_{\mathbf{Z},\lambda}(x)-\tilde{f}(x))^2] + \sigma^2 $$ since $\epsilon$ is noise with mean zero and variance $\sigma^2$, independent of $x$. Note that we average over all test points $x$ but fix the sample $\mathbf{Z}$ (as done in prior work, e.g. 
[Bach (2023)](https://www.di.ens.fr/~fbach/ltfp_book.pdf)), and our main theorem gives a high-probability bound, over the sampling of $\mathbf{Z}$, on the average mean-square distance between the regressor and the target function. Our bias-variance decomposition is also standard and performed as in the above-cited works, where we separate out the effect of the sample noise on the test error as the variance. We will add a remark in the revision. ## Under-parametrization **\[review\]** *”Where in the analysis is the under-parameterized assumption ($M<N$) needed?”* **\[answer\]** The requirement that $N>M$ ensures the fluctuation matrix $\mathbf{\Delta}\in\mathbb{R}^{M\times M}$ is full-rank with operator norm that converges to zero as $N\to\infty$; we then argue via the Neumann series expansion. Please see Section C.2 in the appendix for details. ## Consistent case ($\tilde{\gamma}_{ > M}=0$) **\[review\]** *”In Eq. (10) if $\tilde{\gamma}\_{ > M}=0 $, the bias is exactly zero for all $N$? Is this a mistake?”* **\[answer\]** Note that for $\tilde{\gamma}\_{ > M} = 0 $, the target function $\tilde{f}$ belongs to the finite-dimensional RKHS $\mathcal{H}$. Indeed, the bias measures the fit of the regressor to a noiseless sample $(\mathbf{X},\tilde{f}(\mathbf{X}))$, and if there is no regularization, the regressor is \begin{equation*} f_{(\mathbf{X},\tilde{f}(\mathbf{X})),\lambda} = \arg \min\_{f\in\mathcal{H}} \sum_{i=1}^N(f(x\_i)-\tilde{f}(x\_i))^2 \end{equation*} which is clearly equal to $\tilde{f}$; hence the bias $\mathbb{E}\_x[(f\_{(\mathbf{X},\tilde{f}(\mathbf{X})),\lambda}-\tilde{f})^2]=0$, and this is no mistake. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors addressed my concerns. I am raising the score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your positive feedback and rating on our paper. Your insightful comments have immensely contributed to the improvement of our work. 
We are grateful for your time and expertise, and we are thrilled that our paper resonated positively with you. Once again, thank you for your valuable contribution to the refinement of our paper.
Summary: The paper studies kernel ridge regression in the non-asymptotic setting. The authors give upper and lower bounds for the bias and variance terms, respectively. The authors argue that the results improve upon those in Bach 2023. Strengths: The paper is well-structured. The authors give rigorous proofs, followed by careful experiments. Weaknesses: 1. The paper requires further improvement and polishing in writing. To name a few, line 57-58; line 107-110. 2. The paper would benefit from a more consistent and standardized use of symbols and notations. For example, in line 90, it would be better to use L_2(\rho) instead of L_\rho^2. In line 92, the notation of \tilde{f}(\textbf{X}) is not proper. In Definition 3.1, it would be better to include the decreasing order of eigenvalues in the statement rather than adding an additional remark 3.2. In line 200-207, notation K^{(\infty)} 3. As mentioned in Bach 2023, more refined bounds can be found in Rudi et al. 2015, Rudi and Rosasco 2017. However, the authors failed to mention them and other related results in the comparison. In the absence of such comparisons, it is hard to tell the novelty and improvements of the current submission. 4. The dependency of \lambda seems to be incorrect for the variance term. 5. Given Corollary 4.3.1 in the submission, I cannot see significant improvements against those in Bach 2023. Also, the authors did not give a proper explanation for considering \lambda goes to 0. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Could the authors mention the lower bound for the problem? With optimal choice of \lambda, we can derive the optimal upper bound for the test error. However, the current result does not show a bias-variance tradeoff with respect to \lambda. Could the authors explain? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Polishing **\[review\]** *"The paper requires further improvement and polishing in writing."* **\[answer\]** Thank you for pointing out certain issues with the writing; we will address them in the revised edition. ## More references **\[review\]** *"As mentioned in Bach 2023, more refined bounds can be found in Rudi et al. 2015, Rudi and Rosasco 2017. However, the authors failed to mention them... it is hard to tell the novelty and improvements of the current submission."* **\[answer\]** We agree with the comment made by the reviewer and will discuss both papers [Rudi et al. (2015)](https://papers.nips.cc/paper_files/paper/2015/hash/03e0704b5690a2dee1861dc3ad3316c9-Abstract.html) and [Rudi and Rosasco (2017)](https://proceedings.neurips.cc/paper_files/paper/2017/file/61b1fb3f59e28c67f3925f3c79be81a1-Paper.pdf) as related work in the revised edition. The citations were not included in the current submission because their focus is on the upper bound of the test error for specific sketching algorithms (Nyström approximation and random features), while our work covers any under-parametrized finite-rank kernel ridge regression. One main contribution is that we provide both a tighter upper bound and a matching lower bound (they derive the same convergence rate on the upper bound up to constants, but do not derive a lower bound). Another major difference is that our bound works for any regularization $\lambda$, while the mentioned prior works require it to depend on the sample size $N$ (we refer the reviewer, for instance, to Theorem 1 in [Rudi and Rosasco (2017)](https://arxiv.org/abs/1602.04474)). ## Dependence on $\lambda$ **\[review\]** *"The dependency of $\lambda$ seems to be incorrect for the variance term."* **\[answer\]** For simplicity in the main theorem, we have suppressed the dependence on $\lambda$ in the variance by upper bounding the term $S=\sum_{k=1}^M\frac{\lambda_k^2}{(\lambda_k+\lambda)^2}$ by $M$. 
Please see Proposition C.14 for a more detailed bound on the variance. Indeed, this sum $S$ is less than the so-called effective dimension $\mathcal{N}_M(\lambda) =\sum \frac{\lambda_k}{\lambda_k+\lambda}$, which appears in the upper bound of the variance in Lemma 6 of [Rudi and Rosasco (2017)](https://arxiv.org/abs/1602.04474). Ultimately, our bound is sharper. ## Contribution of Corollary 4.3.1 **\[review\]** *"Given Corollary 4.3.1 in the submission, I cannot see significant improvements against those in Bach 2023."* **\[answer\]** The three major improvements are: a lower bound (that matches the upper bound up to constants), a better dependence on $\lambda$ (note that prior bounds are vacuous for small $\lambda$; see details above), and tighter leading coefficients. Please also see our experimental results (Figure 2), which clearly show significant improvements of our bounds. ## Ridgeless regression **\[review\]** *"Also, the authors did not give a proper explanation for considering $\lambda$ goes to 0."* **\[answer\]** Stating the case where $\lambda\to0$ is a way to understand the role of regularization. It also gives some insight into the behavior of the bound for small (but non-zero) values of $\lambda$, for which our bound is also tighter than prior work. In practical scenarios, excessive $\ell_2$-regularization can adversely affect test error, making smaller values of $\lambda$ equally noteworthy. ## Lower bound **\[review\]** *"Could the authors mention the lower bound for the problem?"* **\[answer\]** Yes, we have addressed the lower bounds in Theorems C.20-21 in the appendix, and this is to our knowledge novel. We will do as you suggest and explicitly state them in the main theorem. ## Optimal choice of $\lambda$ **\[review\]** *"With optimal choice of $\lambda$, we can derive the optimal upper bound for the test error. However, the current result does not show a bias-variance tradeoff with respect to $\lambda$. 
Could the authors explain?"* **\[answer\]** From the previous paragraphs, it is possible to derive an optimal $\lambda>0$ balancing the bias-variance tradeoff, which depends on the kernel spectrum. We will put this as a possible future research direction. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response! However, it seems the authors misunderstood the lower bound mentioned in my review. I was not talking about the lower bounds for these "upper bounds". The lower bound that I referred to is that for the estimation problem introduced in Sec 3.1, which can help understand whether the upper bounds are optimal in the minimax sense. I also cannot see the point why the authors consider these "lower bounds" in Thm C.20, C.21. In general the lower bound that I refer to can tell us how hard the estimation problem is no matter what estimation methods are employed. The "lower bounds" in Thm C.20, C.21 can only tell us how tight the analyses of the upper bounds in the submission are using the kernel ridge regressor. Also, as agreed by the authors, the papers by Rudi and the coauthors derived the same convergence rate on the upper bound up to constants. Given these facts, I would like to keep my original score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We extend our sincere gratitude for your valuable feedback on our manuscript and for engaging in a discussion with us. ## Minimax rate It appears that your inquiry pertains to the minimax rate, as described, for instance, in [Haghifam et al. (2023)](https://proceedings.mlr.press/v201/haghifam23a/haghifam23a.pdf). Is this indeed what you are referring to? While we concur that the minimax rate is a noteworthy concern, we wish to elucidate that the primary focus of our paper diverges from this particular aspect. For instance, the rates you mentioned are independent of the algorithm, whereas our focus lies in KRR. 
We are of the opinion that our exposition is clear and devoid of any ambiguity pertaining to this matter. Our paper undertakes the specific task of bounding the bias-variance of Kernel Ridge Regression (KRR), which is in itself an important topic in machine learning (other reviewers seem to agree on this). We would be pleased to incorporate a detailed discussion and include relevant citations if the reviewer could be more specific in their request. In case we have again misunderstood your feedback, we would be more than delighted to engage in further discussion on this matter. ## Rudi et al. A pivotal differentiating aspect of our work, in comparison to prior endeavors, is the universality of our proposed bound, which remains applicable across the entire spectrum of $\lambda$ values. This stands in stark contrast to the approach adopted by [Rudi and Rosasco (2017)](https://arxiv.org/abs/1602.04474), where $\lambda$ has to scale in proportion to $1/N$. Another aspect to consider is that [Rudi and Rosasco (2017)](https://arxiv.org/abs/1602.04474) do not establish a lower bound, a topic we delve into further in the subsequent explanation. ## Why do we care about the lower bound? This element serves to elucidate the precision of the upper bound for KRR, an aspect we believe constitutes a noteworthy contribution. It is pertinent to note that preceding works have primarily focused on upper bounds, making our emphasis on a comprehensive analysis a distinctive one. The importance of the lower bound is also discussed among other researchers, like [Tsigler and Bartlett (2023)](https://www.jmlr.org/papers/v24/22-1398.html). Once again, we express our sincere appreciation for your time and attention, and we hope that you might reconsider your final rating.
Rebuttal 1: Rebuttal: **General response to all the reviewers:** Thank you for your comprehensive review of our paper. We sincerely appreciate the time and effort you have dedicated to evaluating our work. Below, we outline the primary issues highlighted by the reviewers based on our interpretation of their feedback. We would be grateful if the reviewers could indicate if they have any additional inquiries or if they find our responses satisfactory. ### Lower bound of the test error We acknowledge your valuable feedback regarding the need for emphasizing the lower bound of the test error in the main text. This aspect represents a significant and novel contribution of our research since we are not aware of other prior work deriving a lower bound that matches the upper bound. We will ensure that it receives proper attention in the revised version. ### Dependence of $\lambda$ In our paper, the regularization $\lambda$ can be chosen to be any positive number independent of $N$, in contrast to previous works (please see individual rebuttals for more details). This is also a novel contribution of our research. ### Technical details Additionally, we understand the importance of providing more technical details in the main text to aid readers in understanding the overall proof procedure. We provide further details in the detailed answers and we will make the necessary improvements to address this concern. We kindly request you to please refer to our detailed rebuttals to the individual reviews provided below. Once again, we sincerely thank you for your thoughtful evaluation and valuable suggestions.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper gives a refined analysis of the test error of kernel ridge regression with a finite-rank kernel in the underparametrized regime, where the rank M of the kernel is smaller than the number of data points N. The analysis improves upon previous ones by providing tighter non-asymptotic upper and lower bounds, which remain tight even as the ridge parameter goes to zero. When the target function lies in the RKHS, the bounds show that estimation is consistent as N goes to infinity. Strengths: The paper is well written and clear. The authors state the contributions clearly and compare carefully with previous work. Establishing tight bounds for this basic learning setting is an important problem. Weaknesses: No major weaknesses, just some minor typos. I would also suggest that the authors include the explicit lower bounds in the main paper. Given that this is a major focus of the paper, it should not be required for the reader to go to the appendix. It would also be nice for the appendix to give a bit clearer account of the technical differences in the proof techniques of their bounds versus previous ones. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Can the authors elaborate on the technical difficulties in extending beyond the finite-rank setting, or even when M ~ N? Can analogous results be shown when the kernel is approximately low rank? - It is unclear whether realistic NTK or random features will fall in this underparametrized regime. Can the authors elaborate on whether this should be the case? Is it possible, in more realistic NTK settings, to truncate the kernel and apply the bounds? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No limitations. 
The authors give some directions for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Explicit lower bound **\[review\]** *“I would also suggest that the authors include the explicit lower bounds in the main paper. Given that this is a major focus of the paper, the reader should not be required to go to the appendix.”* **\[answer\]** We concur with the reviewer's viewpoint and, consequently, plan to relocate the lower bound to the main theorem and move the proof technique section to the appendix. ## Technical tool **\[review\]** *“It would also be nice for the appendix to give a bit clearer account of the technical differences in the proof techniques of their bounds versus previous ones.”* **\[answer\]** The main technical tools we used in the paper are 1) more careful algebraic manipulations when dealing with terms involving the regularizer $\lambda$ and 2) a concentration of the spectrum of a centered random matrix with sub-Gaussian entries followed by the Neumann series expansion of a matrix inverse. To this end, we require that $M<N$, so that the fluctuation matrix $\mathbf{\Delta}\in\mathbb{R}^{M\times M}$ is of full rank with operator norm converging to zero as $N\to\infty$; we then argue with a Neumann series expansion in $\mathbf{\Delta}$, instead of in the terms with leading factor $\lambda$. Hence, unlike previous work, our result holds for any $\lambda>0$, which can be chosen independently of $N$. See Section C.2 in the appendix for details. ## Under-parametrized regime **\[review\]** *"Can the authors elaborate on the technical difficulties in extending beyond the finite-rank setting, or even when M ~ N?"* **\[answer\]** In the main theorem, we require $N$ to be exponential in $M$ in order to have a decay of the order $\frac{\log N}{N}$. But as indicated in the remark of Lemma C.17 (lines 547 - 551) in the appendix, we can relax the requirement so that $N$ need only be polynomial in $M$, at the cost of a slower decay rate. 
For the case $M=N$, the well-known variance explosion would happen unless we regularize, that is, unless $\lambda > 0$. Since this paper focuses on bounding the test error with a general $\lambda$, we refer the readers to [Bach (2023)](https://www.di.ens.fr/~fbach/ltfp_book.pdf) for the test error bound for $M\geq N,\ \lambda > c$. For the over-parametrized regime ($M>N$), where the kernel is approximately low-rank, our strategy does not apply, but it is one of our future research directions. Note that there are already some published non-asymptotic results; for instance, see [Tsigler and Bartlett (2023)](https://www.jmlr.org/papers/v24/22-1398.html). ## Practical use **\[review\]** *"It is unclear whether realistic NTK or random features will fall in this underparametrized regime. Can the authors elaborate on whether this should be the case? Is it possible to experiment in more realistic NTK settings to truncate the kernel and apply the bounds?"* **\[answer\]** The NTK of any finite-width (shallow) fully-connected network is finite-dimensional with random eigenfunctions. For simplicity, we truncate the NTK (of an infinite-width network) by selecting only the first few eigenfunctions in the experiment as a toy example. In general, as long as the sample size $N$ is larger than the number of trainable parameters in the network, our result can be applied. In practical scenarios like fine-tuning or transfer learning in large language models, most of the pre-trained parameters are fixed except the last layer (see Section 2) or smaller low-rank matrices as in the popular LoRA approach [Hu et al. (2021)](https://arxiv.org/pdf/2106.09685.pdf). In this case, the training is simply a kernel ridge regression in the under-parametrized regime. We regard further analysis of such specific model trainings as a future research direction. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing clarifications. I will keep my original score.
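To make the under-parametrized setting discussed above concrete, here is a minimal numpy sketch (ours, not the authors' code) of kernel ridge regression with a rank-$M$ feature-map kernel, $M < N$, solved in the $M$-dimensional primal; the cosine feature map, target function, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, lam = 500, 20, 0.1  # sample size, kernel rank (M < N), ridge parameter

# Rank-M feature map phi(x) in R^M, so k(x, x') = phi(x)^T phi(x') has rank <= M.
W = rng.normal(size=(M, 1))

def phi(x):
    return np.cos(x @ W.T)  # maps (n, 1) inputs to an (n, M) feature matrix

def f_star(x):
    return np.sin(3.0 * x).ravel()  # illustrative target function

X = rng.uniform(-1.0, 1.0, size=(N, 1))
y = f_star(X) + 0.1 * rng.normal(size=N)

# Kernel ridge regression solved in the M-dimensional primal; by the standard
# primal-dual identity this equals k(x, X)(K + lam*I)^{-1} y for this kernel.
Phi = phi(X)  # (N, M)
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ y)

X_test = rng.uniform(-1.0, 1.0, size=(2000, 1))
test_err = np.mean((phi(X_test) @ alpha - f_star(X_test)) ** 2)
```

Because $M < N$, the $M \times M$ system above stays well conditioned even as $\lambda \to 0$, which is the feature of the regime that the rebuttal's Neumann-series argument exploits.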
Nominality Score Conditioned Time Series Anomaly Detection by Point/Sequential Reconstruction
Accept (poster)
Summary: TSAD techniques are targeted towards detecting either point or contextual anomalies, but often struggle to adequately capture both simultaneously. In this work, the authors propose a novel reconstruction-based AD technique that introduces the notion of a nominality score and a subsequent induced anomaly score. The method achieves a better trade-off between detecting point and contextual anomalies than competing techniques, and outperforms them over a variety of benchmark multivariate datasets. Strengths: - The problem statement is interesting and relevant, and I appreciate the use of visualisations such as Figure 1, which convey the difficulty of the task. - I appreciated that the model has several components that could be tweaked, such as the choice of soft or hard gate function, and the ablation study does a good job in highlighting their individual contributions. I would however have liked to see more discussion or synthetic examples that demonstrate whether there are certain properties of the monitored data that can guide the selection (beyond empirical comparison). Weaknesses: - I found the presentation and clarity to be quite poor overall, as a result of which the contributions are sometimes difficult to follow properly. Section 2.2 introduces a lot of similar notations that are difficult to keep track of – a figure could be very helpful here for conveying the differences between the various time series and deviations. - The authors rely on results reported in earlier papers as performance measures for several competing techniques. While I appreciate that it may be time-consuming to re-implement and re-run all experiments, I do worry about potential inconsistencies in the experimental set-up that may involuntarily skew the comparison. 
The authors mention how if the results from the original paper are unavailable, *“we search for the highest reported values among other publications.”* I can see this as introducing several inconsistencies, especially since the source of the result isn’t reported alongside the performance figures presented in the table. - I would have liked there to be a dedicated *Related Work* section in the main paper in order to better understand how the work fits alongside other reconstruction-based techniques. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: I would appreciate if the authors could respond to the concerns listed in the *Weaknesses* section, and either defend their approach, or indicate how they can improve on the existing submission. Although I find the problem investigated in the paper to be interesting and well-motivated, I don’t think that the submission is ready for publication in its current state, and believe it would strongly benefit from another revision/round of reviews. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere gratitude to Reviewer *UPqe* for their detailed review and valuable insights. We will address the raised points in the revised version accordingly. #### 1. More discussion or synthetic examples that demonstrate whether there are certain properties of the monitored data that can guide the selection (beyond empirical comparison). > The difficulty in choosing these parameters stems from the absence of anomalies in the training dataset. The selection of soft or hard gate functions, the value for $d$, and the ratio for $\theta_N$ largely depend on how we presume the anomaly will occur based on domain knowledge. For example, if the anomaly is expected to occur abruptly and significantly, there would be a clear gap between the distribution of nominality scores for normal and anomaly data (**rebuttal Figure 2 (a)**). In this case, a hard gate function should be chosen as it allows anomaly scores to propagate through time points without reduction, as long as an anomaly time point has a nominality score lower than $\theta_N$. Conversely, if the anomaly occurs progressively, the distributions of nominality scores are likely to overlap (**rebuttal Figure 2 (b)**). Here, a soft gate function will be more appropriate to prevent excessive accumulation of anomaly scores on normal time points, reducing false positives. A dataset could contain both abrupt and progressive anomalies. However, based on (main text) **Table 3**, it is evident that using a soft gate function generally yields better performance compared to a hard gate function. This suggests that the distributions of nominality scores are predominantly overlapped, which is also evident in (main text) **Figure 3** and **rebuttal Figure 3**. Another heuristic for selecting whether to use the point- or sequence-based reconstruction error as $A(\cdot)$ is to consider the dataset's dimensionality. 
In an experiment conducted on a univariate dataset, we demonstrated that a sequence-based model outperforms a point-based model significantly (**rebuttal Figure 1**). This is because when dimensionality reduces, extracting meaningful statistical structure from a single time point becomes more challenging. We will include relevant additional discussions in the revision. #### 2. Section 2.2 introduces a lot of similar notations that are difficult to keep track of – a figure could be very helpful here for conveying the differences between the various time series and deviations. > (main text) **Figure 2 (a)** illustrates the relationships for the introduced time series notations, including **x**$^0_t$, **x**$^p_t$, **x**$^c_t$, Δ**x**$^0_t$, etc. These notations are placed beside (main text) **Figure 2 (b)** to accommodate page limit requirements. In the revised version, we will reorganize the figures to arrange them more closely to **Section 2.2**. #### 3. The source of the result isn’t reported alongside the performance figures presented in the table. Potential inconsistencies in the experimental set-up that may involuntarily skew the comparison. > The source of the data is reported in **Supplementary E**, where we indicate how we obtain the data values and reference the literature where the values are reported. We are aware of the inconsistencies arising from different sources. Therefore, we ensure that we compare against the best baseline values reported. We also acknowledge other time series anomaly detection literature that utilizes a similar data populating method [11, 18, 23]. #### 4. A dedicated Related Work section is required in the main paper to better understand how the work fits alongside other reconstruction-based techniques. > We will include a section for related work in the revised version similar to the following: Reconstruction-based techniques have seen a diversity of approaches over the years. 
The simplest reconstruction technique involves training a separate model sequentially for each channel using UAE [11]. One type of improvement focuses on network architecture, with notable examples like LSTM-VAE [36], MAD-GAN [37], MSCRED [32], OmniAnomaly [9], and TranAD [41]. Additionally, hybrid architectures like DAGMM [35] and MTAD-GAT [38] have been proposed. Another type aims to improve the anomaly score instead of using the original reconstruction error. Designing this anomaly score involves a considerable amount of art, given the high diversity of methods for calculating it across studies. For instance, USAD uses two weighted reconstruction errors [39], OmniAnomaly employs the "reconstruction probability" as an alternative anomaly score [9], MTAD-GAT combines forecasting error and reconstruction probability [38], and TranAD uses an integrated reconstruction error and discriminator loss as the anomaly score [41]. In the context of network architecture, our method utilizes a straightforward performer-based encoder-decoder structure without incorporating any specialized components. Based on our insight into the point-sequential detection tradeoff, our approach stands out due to the introduction of nominality and induced anomaly scores. This integration enables us to seamlessly combine point-based and sequence-based reconstruction errors, resulting in competitive performance. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you for the detailed replies to all reviews. I appreciate the additional review of related work presented in your reply. As acknowledged by the authors, there are several refinements and improvements that can be applied to elevate the quality of the paper. As a result, I do not feel inclined to argue for its acceptance, but will raise my score to a borderline accept.
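The soft/hard gate distinction discussed in the rebuttal above can be illustrated with a small schematic sketch. The two functional forms below are hypothetical stand-ins chosen only to match the rebuttal's verbal description (all-or-nothing propagation below $\theta_N$ vs. gradual attenuation); they are not the paper's actual Eqs. (8) and (11).

```python
import numpy as np

def hard_gate(nominality, theta):
    # Binary gate: anomaly scores propagate without reduction whenever the
    # nominality score falls below theta (hypothetical stand-in for a hard gate).
    return (nominality < theta).astype(float)

def soft_gate(nominality, theta, temperature=0.05):
    # Smooth gate: propagation is gradually attenuated as the nominality score
    # approaches and exceeds theta, limiting accumulation of anomaly scores on
    # normal time points (hypothetical stand-in for a soft gate).
    return 1.0 / (1.0 + np.exp((nominality - theta) / temperature))

scores = np.linspace(0.0, 1.0, 5)
hard = hard_gate(scores, 0.5)  # abrupt cut-off at theta
soft = soft_gate(scores, 0.5)  # gradual attenuation around theta
```

With abrupt anomalies and well-separated nominality distributions (rebuttal Figure 2 (a)), the hard gate's all-or-nothing propagation is harmless; with progressive anomalies whose distributions overlap (rebuttal Figure 2 (b)), the soft gate's attenuation helps reduce false positives.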
Summary: This paper proposes an unsupervised time series anomaly detection algorithm called NPSR (Nominality score conditioned TSAD by Point/Sequential Reconstruction), which combines both point-based and sequence-based reconstruction models. Specifically, it proposes a nominality score, which is the ratio of a contextual deviation (or in-distribution deviation) to the total deviation (which is assumed to be the sum of the in-distribution deviation and the out-of-distribution deviation). The contextual deviation and the in-distribution deviation are computed using the sequence-based and point-based reconstruction models. Based on the nominality score and anomaly score (computed using the point-based reconstruction model) in the neighborhood, the induced anomaly score is further proposed by considering the temporal relationship. Some theoretical results for the proposed algorithm are provided. Experiments on several benchmark time series anomaly detection datasets are performed to demonstrate the performance of the proposed algorithm in comparison with several state-of-the-art algorithms. Strengths: 1. This paper studies an important and interesting problem, i.e., how to detect anomalies in time series data without labels. 2. The proposed algorithm combines both point-based and sequence-based reconstruction models, achieving quite good performance on several time series anomaly detection benchmark datasets. 3. Some theoretical results are provided for the proposed algorithm. 4. The paper is generally well written and the presentation is clear. Weaknesses: 1. Overall, this paper proposes a heuristic-based unsupervised time series anomaly detection algorithm. The overall pipeline in the Algorithm looks OK to me, but from the experiments it seems the most important part is the point-based reconstruction. In other words, the rest of the proposed algorithm may be simplified. The ablation studies in Table 3 also partially confirm this. 2. Flawed experiments. 
The F1 scores reported in Table 2 for other algorithms, e.g., Anomaly Transformer, are not consistent with the results reported in the literature, such as [33]. More justification is required. 3. The theoretical results look sound, but may not be useful in practice. In other words, can you justify the value of these results in real-world time series anomaly detection applications? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Table 2, the performance of several algorithms, e.g., Anomaly Transformer, is significantly lower than that reported in the literature. More justification is required. In Figure 4(b), the anomalies to the right of t=14800 are not detected by $M_{pt}+M_{seq}$, but are detected by $M_{seq}$, resulting in a false negative. Can you provide some explanations? 2. The proposed algorithm is an unsupervised algorithm. Thus, how to adapt the algorithm to different datasets remains unclear to me (besides the threshold used in adjusting the F1). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere appreciation to Reviewer *gJSP* for their thorough review and valuable insights. The points raised will be duly addressed in the revised version. #### 1. From experiments it seems the most important part is the point-based reconstruction. In other words, the rest of the proposed algorithm may be simplified. > Instead of focusing solely on point-based reconstruction, we believe that the most important idea is to achieve a balance between effectively utilizing both point- and sequence-based models and carefully selecting the appropriate parameters. As an extreme example, we optimize our algorithm on the Mackey-Glass anomaly benchmark (MGAB), which is a univariate time series dataset, and find that sequence-based models can significantly outperform point-based models when using their reconstruction errors as $A(\cdot)$. As in **rebuttal Figure 1**, we see that sequence-based models can correctly identify the anomalies, whereas point-based models did not learn at all. This is reasonable, as point-based models only consider a single time point. In general, the lower the dimensionality of the dataset, the harder it gets for a point-based model to learn statistically meaningful representations. Moreover, looking at the high-dimensional datasets reported in (main text) **Table 3**, the induced anomaly score remains important for some datasets: For the MSL dataset, F$1^*$ improves by 0.1 (F$1^*=0.467$) compared to using the point-based reconstruction (F$1^*=0.366$). In **rebuttal Figure 3**, F$1^*$ improves by 0.047 (F$1^*=0.673$) compared to the point-based anomaly score (F$1^*=0.626$). Since the amount of improvement depends on the statistical structure of the test data, it is still useful to consider using both sequence-based and point-based reconstruction in general. We will include relevant additional discussions in the revision. #### 2. Flawed experiments. 
The F1 scores reported in Table 2 for other algorithms, e.g., Anomaly Transformer, are not consistent with results reported in the literature, such as [33]. In Table 2, the performance of several algorithms, e.g., Anomaly Transformer, is significantly lower than that reported in the literature. More justification is required. > In (main text) **Table 2**, we present the best F1 scores *without point-adjustment (PA)*, while [33] reports the scores *with PA*. PA involves adjusting predictions based on the true labels. However, as this method has been shown to be a flawed approach that can overestimate performance in previous studies [11, 23, 24, 34], we do not endorse its use. Nevertheless, we have included both PA and non-PA data in **Supplementary D**. Our results demonstrate that even simple heuristic methods can appear to achieve state-of-the-art performance when using PA. We see that our method still achieves competitive performance when using PA, but again, the community is moving away from this flawed metric/approach. #### 3. The theoretical results look sound, but may not be useful in practice. In other words, can you justify the value of these results in real-world time series anomaly detection applications? > Let's consider (main text) **Figure 3**. If we set $\theta_N=2$ for SWaT (or $\theta_N=10$ for WADI), which ensures that no anomaly points have a nominality score higher than $\theta_N$, and set $d=1$, then according to **Claim 2**, the performance using the induced anomaly score will surpass that of using the original anomaly score. This shows that if we set a proper $\theta_N$, we can always obtain better or equal performance using the induced anomaly score. We can also see from **rebuttal Figure 3 (d)** that **Claim 2** always holds when $d=1$ and $\theta_N$ is set such that no anomaly points have a nominality score above $\theta_N$. 
At present, we are using a certain quantile of the nominality score of the training data (e.g., 98.5%) to set $\theta_N$. Estimating the optimal values for $\theta_N$ and $d$ will be important future work. #### 4. In Figure 4 (b), the anomalies on the right of t = 14800 are not detected by $M_{pt}+M_{seq}$, but detected by $M_{seq}$, resulting in a false negative. Can you provide some explanations? > The anomaly section between $t=14800$ and $14900$ contains relatively more contextual than point anomalies. This anomaly is described as *Damage 1 MV 001 and raw water pump to drain Elevated Reservoir tank.* When solely considering a time point, $M_{pt}$ struggles to recognize the anomaly. However, $M_{seq}$ can effectively detect anomalous time-dependent relationships, leading to a higher anomaly score using $M_{seq}$. Since the reconstruction error of $M_{pt}$ is used as $A(\cdot)$ in (main text) **Figure 4**, we lose the advantage of effectively utilizing that of $M_{seq}$. This results in an induced anomaly score that is not high enough to reach the threshold. An important future direction would be to explore how to select $A(\cdot)$ among multiple models. #### 5. The proposed algorithm is an unsupervised algorithm. Thus, how to adapt the algorithm to different datasets remains unclear to me (besides the threshold used in adjusting the F1). > The difficulty of choosing these parameters is due to the absence of anomalies in the training data and the real-time nature of the algorithm. However, leveraging domain knowledge allows us to make assumptions about anomalies. The selection of parameters, including the choice between soft or hard gate functions, the value for $d$, and the ratio for $\theta_N$, mostly depends on how we presume the anomaly will occur. For instance, $d$ controls the distance that anomaly scores may propagate. This value should be higher if we presume the average anomaly length is long and vice versa. 
Discussions regarding how to choose a soft or hard gate function can be found in our rebuttal replies to reviewer *UPqe* (*Question 1*). We did not discuss it here due to the character limit. We will include relevant discussions in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and extra experiments, particularly the comparison with Anomaly Transformer, which is one of my core concerns for this paper. Also, the theoretical results confirm the soundness of the induced anomaly score in comparison with the original anomaly score. In terms of how to apply the proposed algorithm in practical applications, it is also clearly stated in the rebuttal, though I am not fully satisfied. Overall, the proposed algorithm sounds good with theoretical support, but the support cannot benefit the practical usage and adoption of the proposed algorithm. I am not saying it is a flaw, but if it could advance in this direction, it would significantly improve the adoption of the proposed algorithm, instead of simply being published. Like Reviewer UPqe, I do not feel fully convinced for acceptance, and my current rating is between 4 and 5.
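As a schematic companion to the exchange above, the sketch below computes a nominality score as the ratio of the in-distribution (contextual) deviation to the total deviation per time point, following the review summary's description; the paper's exact definition in Eq. (1) may differ, and the residual inputs here are placeholders for the outputs of the sequence- and point-based reconstruction models.

```python
import numpy as np

def nominality_score(delta_in, delta_out, eps=1e-12):
    # delta_in / delta_out: (T, D) in-distribution and out-of-distribution
    # deviation estimates per time point (placeholders for the reconstruction
    # residuals of the sequence- and point-based models).
    d_in = np.linalg.norm(delta_in, axis=-1)
    d_out = np.linalg.norm(delta_out, axis=-1)
    # Normal points (d_out near 0) score near 1; points dominated by the
    # out-of-distribution deviation score near 0.
    return d_in / (d_in + d_out + eps)
```

Time points whose nominality score falls below $\theta_N$ are then the ones whose anomaly scores are allowed to propagate to their neighbors when forming the induced anomaly score.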
Summary: This paper proposed a novel idea to handle the time-series anomaly detection problem by calculating the "nominality score" (concerns on this name are in the question part, and I'll keep using this name in the following review) and the induced anomaly score. The F1 score can be mathematically proved to improve using the proposed scores. The authors then constructed the point-based and sequence-based reconstruction models to estimate the anomaly score, and experiments on several datasets demonstrate the superior performance. Strengths: 1. The proposed method is novel and provides a new angle to address the anomaly detection problem. Without directly considering how "abnormal" a point is, this work starts by thinking about what is "normal" and induces an anomaly score based on the nominality score. 2. The authors provided both theoretical and experimental support for the proposed methods, which is reasonable. 3. The writing of the paper is clear and easy to follow. Weaknesses: 1. A major concern is on the expectation of improvement by using $\hat A(\cdot ;g_{\theta_2})$. While Claim 1 and Claim 2 as well as the proofs give a guarantee that using this method won't result in a worse F1 score, they say nothing about the gap with/without the method. Intuitively, the expected improvement may be related to $\alpha$ in Eq. (2), $d$, the gate function, the number of data points, and other factors. The actual case could be that even if $\alpha$ is larger than 1 by a margin (say $\alpha=1.5$, 2, or 5), the expected number of points with $N(t)\leq \theta$ among abnormal points is still a small portion of the whole dataset, making the proposed method useless in practical use. The analysis of a bound or expectation is missing. 2. The correctness of the method relies on several hypotheses, typically: 1) the distribution of abnormal points $\Delta x_{t,a}^p$ has a larger variance than that of normal points $\Delta x_{t, n}^p$. 
This is used to guarantee that the nominality scores of normal points are larger than those of abnormal points; 2) the threshold is well selected so that the nominality score of all the normal points (with $y_t=0$) passes the threshold, while a significant number of abnormal points cannot pass the threshold. While the hypotheses are reasonable theoretically, I wonder whether they still hold when the scores are estimated and calculated based on the outputs from neural networks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. I wonder whether the word "nominality" is a misuse, although it is used across the whole paper. The antonym of the word "anomaly" is "normality", and "nominality" is the noun form of "nominal", which means "existing in name only; far below the real value or cost", and I think it's not related to the topic at all. 2. How is the backbone of the model (i.e., the performer-based autoencoder) chosen? Will that cause an unfair comparison between the proposed method and the baselines due to the neural model's approximation ability? 3. The supplementary file mentions how the best threshold is chosen, but in the Section 3.4 ablation study the threshold is set to 98.5% of the nominality score from the training data. Why, and how will the choice of a different threshold influence the performance (i.e., is the method sensitive to the choice of hyperparameters)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No apparent limitations or negative societal impact observed by the reviewer. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our heartfelt gratitude to Reviewer *EBhj* for conducting a comprehensive review and providing valuable insights. We will take into account the raised points and incorporate them appropriately in the revised version. #### 1. The analysis of a bound or expectation of improvement for the induced anomaly score is missing. > To compute the bound or expectation for this scenario, certain statistical assumptions need to be made regarding the time series, as demonstrated in the toy example provided in **Section 2.3**. In the context of unsupervised anomaly detection, we can only observe normal data in the training set, which restricts us to modeling the distribution of normal data. Any deviation from this distribution may be labeled as an anomaly. Due to the lack of constraints on anomalies, establishing a universal bound on the potential improvement becomes highly challenging. #### 2. While the hypotheses are reasonable theoretically, I wonder whether they still hold when the scores are estimated and calculated based on the outputs from neural networks. > The hypotheses will be influenced by the underlying statistical structure of the time series and our choice of threshold, but not by the neural network approximations. By using **(1)**, **(8)**, **(11)**, Δ**x**$^c_t$, Δ**x**$^p_t$, and $\theta_N$, we can calculate the gate function outputs $g_{\theta_N}(N(\cdot))$, thereby inducing any $A(\cdot)$ and computing a superior $\hat{A}(\cdot)$ (provided that $\theta_N$ satisfies the condition in either **Claim 1** or **2**). We approximate Δ**x**$^c_t$ and Δ**x**$^p_t$ using neural networks, but the accuracy of these approximations will not affect the claims since there always exists some $\theta_N$ that satisfies the conditions (but it will certainly affect the performance). The toy example presented in **Section 2.3** serves as a case study to verify the appropriateness of the nominality score defined in **(1)**. 
We do not assume that real-world datasets have an i.i.d. normal distribution. #### 3. Misuse of the word “nominality” > We use the term **nominal** due to its usage in measurement (*From a philosophical viewpoint, nominal value represents an accepted condition, which is a goal or an approximation, as opposed to the real value, which is always present.*). We did not use the term **normality** due to its association with the normal distribution (*In statistics, **normality tests** are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.*) (Source: Wikipedia) #### 4. How is the backbone of the model (i.e., the performer-based autoencoder) chosen? Will that cause an unfair comparison between the proposed method and the baselines due to the neural model's approximation ability? > We opt for performer-based models due to their high efficiency compared to the original transformer model [27]. Changing the architecture is also a viable option. Our approach remains feasible as long as we have both a point-based and a sequence-based reconstruction model. We do not anticipate that our model will lead to unfair comparisons. Similar to this work, most reconstruction-based baselines [11, 32, 36, 39, Zhou et al.] that utilize neural networks propose a combined workflow from designing the model architecture to calculating the anomaly score. As long as we maintain consistency in data preprocessing and metrics, our comparison method is standardized. - Bin Zhou, Shenghua Liu, Bryan Hooi, Xueqi Cheng, and Jing Ye. 2019. *BeatGAN: anomalous rhythm detection using adversarially generated time series*. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19). AAAI Press, 4433–4439. #### 5. 
The supplementary file mentions how the best threshold is chosen, but in the Section 3.4 ablation study the threshold is set to 98.5% of the nominality score from the training data. Why, and how will the choice of a different threshold influence the performance (i.e., is the method sensitive to the choice of hyperparameters)? > To recap, there are two thresholds: $\theta_a$ and $\theta_N$. $\theta_a$ is determined automatically by **Supplementary (3)**. $\theta_N$ is a parameter for $g_{\theta_N}(N(\cdot))$, when using either a soft **(8)** or hard **(11)** gate function. **Rebuttal Figure 3 (a)** shows the nominality scores vs. anomaly scores (from point-based reconstruction errors) for the WADI dataset using the parameters shown in **Supplementary Table 1**. Here, we compare the best F1 scores and corresponding false positive rates using soft/hard gate functions at different $\theta_N$. We see that our method is sensitive to both $\theta_N$ and the choice of gate function. If $\theta_N$ is set too high, it will result in an excessive accumulation of anomaly scores on normal time points, leading to an increase in false positives (**rebuttal Figure 3 (c)**). Conversely, if $\theta_N$ is set too low, no anomaly scores will propagate through time points, and the induced anomaly score will approach the original anomaly score (**rebuttal Figure 3 (b)**). The value of 98.5% for $\theta_N$ is an empirical choice that performs well across most datasets. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: I thank the authors for the detailed responses to my questions. Most of my concerns are well addressed. And I'm sorry that I didn't foresee the difficulty of deriving a bound or expectation of the improvement from applying your induced anomaly score. The best result I could obtain under your hypotheses (2)~(4), using the hard gate function (11) with $d=1$ and threshold $\theta$ set to cover 100% of anomaly points, is in a rather complex form. 
The improvement seems to depend on the data dimension $D$, the ratio $\alpha$ of the standard deviation of normal to abnormal points, the size of the dataset, and the proportion of anomalies. It seems that increasing $\alpha$, decreasing the anomaly proportion, and, sadly, decreasing the size of the dataset (assuming it stays large) will improve the result. This means the method may not be effective for extremely large datasets containing many anomalies. I'm not sure whether it would be easier under other assumptions, but I admit this is beyond the scope of this paper, and I hope to see further analysis in future work. Figure 3 in the PDF provided in the global response gave me empirical intuition for the possible improvement. I don't have more concerns at the current stage. Thus, I'll raise my score to accept.
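The $\theta_N$ sensitivity discussed in the rebuttal above can be illustrated with a small toy sketch. This is a hypothetical illustration, not the paper's exact gate equations (8)/(11): it assumes a sigmoid-style soft gate and a step-style hard gate, with $\theta_N$ placed at the 98.5th percentile of synthetic training nominality scores, mirroring the empirical choice discussed.

```python
import numpy as np

# Hypothetical illustration of the theta_N sensitivity (NOT the paper's exact
# gate equations (8)/(11)): a sigmoid-style soft gate and a step-style hard
# gate, with theta_N placed at the 98.5th percentile of training nominality
# scores -- the empirical choice mentioned in the rebuttal.
def soft_gate(n, theta, temperature=0.1):
    # Smoothly shuts off propagation as n rises past theta.
    return 1.0 / (1.0 + np.exp((n - theta) / temperature))

def hard_gate(n, theta):
    # All-or-nothing version: propagate only while n < theta.
    return (n < theta).astype(float)

rng = np.random.default_rng(0)
train_nominality = rng.normal(1.0, 0.2, size=10_000)  # synthetic "training" scores
theta_N = np.percentile(train_nominality, 98.5)

test_nominality = np.array([0.8, 1.0, theta_N - 1e-6, theta_N + 1e-6, 2.0])
print(hard_gate(test_nominality, theta_N))  # -> [1. 1. 1. 0. 0.]
print(soft_gate(test_nominality, theta_N))  # smooth transition around theta_N
```

Raising $\theta_N$ lets more points through the gate (more score accumulation, hence more false positives), while lowering it blocks propagation entirely — the two failure modes reported for rebuttal Figure 3.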
Summary: Basically, this paper aims to consider the anomalousness of data points both from a point perspective, which is independent of the temporal relationships in the data, and a contextual perspective, which reflects temporal relationships in the data. The paper derives an induced anomaly score that considers both. Strengths: This paper addresses an important problem of considering both point and contextual anomalies. The paper provides good theoretical background for their work and convincing experimental results, showing particular anomalies that their method finds that are not found by methods that look for either point-based or contextual anomalies. Weaknesses: The datasets on which testing is performed have rather high anomaly rates. The authors should experiment with some datasets with much lower anomaly rates, perhaps by leaving out some anomalies in the datasets that they use. Post rebuttal comment: Thanks to the authors for the responses. While the anomaly rates that the authors use are worth testing, it is quite common to use lower anomaly rates, and such lower rates would constitute a better test of the performance of the new algorithm. With regard to removing anomalies from datasets, one can remove individual time series that are identified as anomalous. Technical Quality: 3 good Clarity: 3 good Questions for Authors: At the start of section 2.2, $\mathbf{x}_t^0$ is part of the observed dataset but is also defined to be $\mathbf{x}_t^c + \Delta\mathbf{x}_t^p$. This cannot be correct. Did you mean to define $\Delta\mathbf{x}_t^p$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitations are described on the work. Societal impact material is not relevant here. 
However, a description of future work should be provided based on any patterns in the errors that the presented algorithm makes. --- Post rebuttal addition: Thanks to the authors for pointing out the discussions on limitations in the supplementary material. However, such material is critical to understanding the nature of the contribution, and so needs to be in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere appreciation to Reviewer *m3iW* for conducting a thorough review and offering valuable insights. The raised points will be thoughtfully considered and integrated into the revised version. #### 1. The datasets on which testing is performed have rather high anomaly rates. The authors should experiment with some datasets with much lower anomaly rates, perhaps by leaving out some anomalies in the datasets that they use. > In the context of other time series anomaly detection literature, the datasets utilized in our work are widely used, and the anomaly rates are similar. The following contains a list of literature using the datasets: - SWaT – [10, 11, 12, 18, 23, 24, 26, 33, 37, 39, 40, 41] Supplementary [20] - WADI – [10, 11, 12, 23, 37, 39, 40, 41] Supplementary [20] - PSM – [24, 33] - MSL – [9, 11, 18, 23, 24, 25, 33, 38, 39, 40, 41] - SMAP – [9, 11, 18, 23, 24, 25, 33, 38, 39, 40, 41] - SMD – [9, 10, 11, 23, 24, 25, 33, 39, 41] Supplementary [20] > In considering removal of some anomalies from existing datasets, it is unclear how to appropriately do that – i.e., results may depend strongly on how missing values are imputed, how to remove sequence-related anomalies particularly in multivariate cases, etc. We can note, as future work, extensions to explore how detection rates extend to very rare anomaly rate scenarios. #### 2. At the start of section 2.2, $\mathbf{x}_t^0$ is part of the observed dataset but is also defined to be $\mathbf{x}_t^c + \Delta\mathbf{x}_t^p$. This cannot be correct. Did you mean to define $\Delta\mathbf{x}_t^p$? > Yes, we are defining $\Delta\mathbf{x}_t^p$ and expressing that, by definition ($\triangleq$), $\mathbf{x}_t^0$ equals $\mathbf{x}_t^c + \Delta\mathbf{x}_t^p$. We will update the equation and include it in the revision. #### 3. No limitations are described on the work. > We respectfully point out that we have presented the limitations in **Supplementary G**. #### 4. 
A description of future work should be provided based on any patterns in the errors that the presented algorithm makes. > In the revised version, we will expand the limitations section in the supplementary to include relevant discussions. For instance, in (main text) **Figure 4 (b)**, there exists a false negative between $t=14800$ and $t=14900$ when using the induced anomaly score. According to the WADI dataset, this is a 14.26-minute anomaly described as “Damage 1 MV 001 and raw water pump to drain Elevated Reservoir tank.” When examining a single time point, $M_{pt}$ struggles to recognize the anomaly. However, $M_{seq}$ can detect anomalous time-dependent relationships between the time points. This observation indicates that the section contains relatively more contextual anomalies than point anomalies, resulting in a higher anomaly score using $M_{seq}$. Since we use the reconstruction error of $M_{pt}$ as $A(\cdot)$, we lose the advantage of effectively utilizing the reconstruction error of $M_{seq}$, leading to an induced anomaly score that does not reach the threshold. An important future direction would be to explore how to select the model used for calculating $A(\cdot)$.
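The interplay between $M_{pt}$ and $M_{seq}$ described in this answer can be sketched with a toy example. The formulas below are hypothetical stand-ins (NPSR's actual nominality score combines the reconstruction errors differently); the point is only that a point anomaly and a contextual anomaly show up in different error streams.

```python
import numpy as np

# Hypothetical toy (NOT NPSR's actual formulas): a point-based model M_pt and
# a sequence-based model M_seq produce separate reconstruction errors, and
# they disagree on which time points are anomalous.
def scores(err_pt, err_seq, eps=1e-8):
    anomaly = err_pt                          # A(t): point-based error only
    ratio = (err_seq + eps) / (err_pt + eps)  # large where only M_seq objects
    return anomaly, ratio

err_pt  = np.array([0.1, 0.1, 2.0, 0.1])  # M_pt flags t=2 (a point anomaly)
err_seq = np.array([0.1, 1.5, 0.2, 0.1])  # M_seq flags t=1 (a contextual anomaly)
A, R = scores(err_pt, err_seq)
print(int(A.argmax()), int(R.argmax()))  # -> 2 1
```

Because $A(\cdot)$ here uses only $M_{pt}$'s error, a contextual anomaly like $t=1$ can stay below threshold — the false-negative pattern described for Figure 4 (b) — which is why the choice of model for computing $A(\cdot)$ is flagged as future work.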
Rebuttal 1: Rebuttal: We sincerely express our gratitude to all the reviewers for their thorough reviews and valuable insights. A significant hurdle in time series anomaly detection involves effectively modeling time-dependent relationships and detecting both contextual and point anomalies accurately. To tackle this issue, we propose an unsupervised framework – *NPSR* – that utilizes both point and sequence-based reconstruction models. We introduce a *nominality score* derived from the ratio of a combined value of the reconstruction errors, and a subsequent derivation of an *induced anomaly score*. We provide theoretical evidence supporting the superiority of the induced anomaly score under specific conditions. Extensive experiments on various public datasets demonstrate that NPSR outperforms most state-of-the-art baselines. Overall, the reviews provide constructive feedback and praise the paper's novelty and relevance. They also raise concerns regarding experimental comparisons, clarity of presentation, and the need for addressing certain theoretical aspects of the proposed method. We diligently addressed all the concerns raised by providing ample evidence and the requested results. The raised points will be thoughtfully considered and integrated into the revised version. Here is the summary of the revisions: 1. **Parameter Selection (Reviewers UPqe, gJSP, EBhj)**: We have thoroughly discussed heuristics for selecting the gate function (**rebuttal Figure 2**), $d$, and $\theta_N$. Additionally, we have provided two additional experimental results (**rebuttal Figure 1** and **3**) to support our viewpoints. 2. **Theoretical Justifications (Reviewers gJSP, EBhj, m3iW)**: We have presented an example where, according to **Claim 2**, the induced anomaly score demonstrates provable superior performance compared to the anomaly score (**rebuttal Figure 3**). Furthermore, we have clarified the conditions under which these hypotheses hold. 3. 
**Algorithm Justifications (Reviewers gJSP, EBhj)**: To underscore the importance of utilizing both point and sequence-based models, we have included an additional example (**rebuttal Figure 1**). Moreover, we have discussed the fairness of our algorithm in comparison to other studies. 4. **Missing Sections/Figures (Reviewers UPqe, m3iW)**: In response to the feedback, we will add the Related Works section to the main text, and we will extend the Limitations section in the supplementary material to include future works that can mitigate errors made by the presented algorithm. 5. **Flawed Experiments (Reviewer gJSP)**: We have identified a bug with calculating $\theta_a$ for Anomaly Transformer. The bug has been fixed, and the results have been updated. The results for other algorithms remain consistent. Please refer to the **updated Table 2** in the global response. We wish to emphasize that we ***do not use*** the *point-adjustment* method (a method for adjusting predictions according to true labels), which is used in some other literature. 6. **Experiment Results Explanation (Reviewer gJSP)**: In the revision, we have extended our discussion about the false negatives appearing in (main text) **Figure 4 (b)** to provide a more comprehensive explanation. Additionally, we have noticed that some important information is present in the supplementary material but not in the main text. In response to this, we will rearrange certain aspects in the revision to ensure all pertinent information is appropriately included in the main text. We are enthusiastic about addressing any further questions or inquiries from the reviewers to ensure the continued improvement and refinement of our work. Pdf: /pdf/200b15f49a5e98cf120a028532f984ca5c0e973c.pdf
NeurIPS_2023_submissions_huggingface
2023
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
Accept (oral)
Summary: 1. The authors prove that transformer models can perform several machine learning tasks, such as least squares, ridge regression, lasso and learning of GLMs. 2. The authors prove that transformer models can select an algorithm from a pool of possible ones, in two scenarios, post ICL validation and pre ICL testing, and thus can perform in context learning. 3. The authors construct a transformer model with near Bayes-optimal performance on linear models with mixed noise levels 4. The authors prove polynomial sample complexity results for pretraining transformers to perform ICL. 5. Experimentally, the authors show that transformer models can indeed perform algorithm selection in context. Strengths: 1. The level of clarity and writing is very good. 2. The theoretical contributions are insightful 3. The experimental results support the theoretical arguments 4. The scope of the work is broad and deep. Weaknesses: I do not recognize any major weakness Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: -- Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: -- Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the strong positive feedback on our paper!
Summary: They take an in-depth empirical and also theoretical analysis of the in-context learning abilities of transformers on various tasks. Their theoretical analysis of the learned algorithms is especially strong. The work in my eyes presents an extension of previous works in this field to more complex ICL problems (like least squares, ridge regression, Lasso). They identify mechanisms which transformers can learn by which a number of ICL tasks can be performed. Strengths: The submission is technically sound and very detailed in its explanations and supplements. The work extends previous studies on in-context learning. They provide in-depth and very relevant analyses for a number of simple ICL tasks. I do, however, wonder in which ways this work can be of practical relevance. For language modelling, the tasks and architectural sizes are quite different to these evaluated toy tasks. For learning optimization methods, it would be interesting to provide more general bounds for ICL analysis which could be used to transfer to more relevant tasks. Also, it would be interesting to see if the provided analyses could be used to make modifications to the transformer architecture. I understand the scope of this work is huge already and this work might exactly enable these kinds of follow-ups. (Capability of transformers for ICL) In my eyes it seems rather obvious that the demonstrated capabilities can be learned through transformers, given that these architectures are able to model far more complex tasks. (Architectural choices for ICL) From a practical perspective the suggestions for optimal choices seem less relevant as of now (e.g. which architectural parameters work well in practice for modelling certain ICL aspects), since architectural choices are studied in isolation and only for toy problems. However, theoretical upper bounds on architectural choices seem interesting and seem to provide interesting ways in which to modify and analyse architectures for enhanced ICL. 
Weaknesses: I do at times wonder if the theoretical derivations on the capabilities of in-context learning do hold up in practice. Take for example Pre-ICL. The authors motivate this with an example where the hyperparameter in a linear regression varies. I would expect the transformer to be able to learn to generalize these parameters, given a number of different hyperparameters and pretraining rounds. This would not be akin to algorithm selection but rather a learned mechanism that can learn across hyperparameters. While the work might just give upper bounds here, I imagine these are quite hard to transfer into practice. The capability of transformers to learn various tasks in in-context learning has been known for quite some time. I think some relevant prior work on "Prior-data fitted networks" is missing, which demonstrates in-context learning with transformers before the term "in-context learning" came up: Transformers Can Do Bayesian Inference (ICLR 2022): This work studies if Transformers can in-context learn algorithmic tasks (such as Gaussian Processes and MLPs), to my knowledge the first work to do so (e.g. prior to Garg and Akyürek) and with an eye on the Bayesian approximation learned in ICL. They show e.g. that Gaussian Processes and MLPs can be approximated by transformers, demonstrating hyperparameter selection and deriving the Bayesian approximation capabilities of transformers. Statistical Foundations of Prior-Data Fitted Networks (ICML 2023) TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second (ICLR 2023): Studies the ability of in-context learning for tabular datasets. It also evaluates task mixtures and shows that transformers are able to model complex algorithmic tasks through ICL. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: (Pretraining rounds for excess risk) The authors provide derivations for multiple tasks in terms of the pretraining rounds needed to train transformers to within a given excess risk of a baseline. These results are interesting and demonstrate that for a variety of tasks such bounds can be found. I wonder whether such rules can be made up for more general classes of tasks, such that it would be easier to transfer these results to other tasks? Can polynomial guarantees, e.g., be derived given some computational complexity of the baseline algorithm and task? Did you study what happens when the pretrained transformer is applied to out-of-distribution data, e.g. linear pretraining applied to sparse linear tasks (algorithmically out of distribution) or when the input data is drawn from another distribution (e.g. higher variance, collinearity, ..)? Are there limits to the number of algorithmic tasks selected by transformers in the selection algorithm? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not clearly addressed; those could include: Theoretical analyses are hard to apply to more realistic and complex real-world tasks. The analyses are difficult to perform even for the simple tasks shown in this work, and are probably infeasible for most complex tasks? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback on our paper. We respond to the specific questions as follows. > …Authors motivate this with an example where the hyperparameter in a linear regression varies. I would expect the transformer to be able to learn to generalize these parameters, given a number of different hyperparameters and pretraining rounds. This would not be akin to algorithm selection but rather a learned mechanism that can learn across hyperparameters. While the work might just give upper bounds here, I imagine these are quite hard to transfer into practice. We would appreciate it if the reviewer could clarify this question or provide some concrete examples, as we were unsure whether the following interpretation was correct. By “generalizing these hyperparameters”, did the reviewer mean e.g. generalizing from a finite set of $\lambda$ (as in Theorem 11) to a continuous range of $\lambda$'s? In other words, can a transformer pretrained on finitely many $\lambda$'s perform near-optimal algorithm selection over continuous $\lambda$? In that case, we believe it may be possible to construct a transformer that achieves this. The technical difficulty would be in the statistical analysis of ridge regression about best $\lambda$ selection (rather than the transformer construction). While we did not pursue this, we believe this could be an interesting direction for future work. > …relevant prior work on "Prior-data fitted networks" is missing, that demonstrates in-context learning with transformers before the term "in-context learning" came up. … We thank the reviewer for pointing out the missing references on Prior-data Fitted Networks (PFNs), and will properly cite them in our revision. Overall, these works on PFNs indeed demonstrate the in-context learning capability (Bayesian optimality) of transformers in various settings. 
We believe the experiments in these works and the theory in our paper complement each other: Our results essentially show that transformers can efficiently approximate a broad class of ICL algorithms, and we give theoretical constructions for concrete ICL algorithms, such as linear regression, Lasso, and gradient descent on neural networks. Such results for the expressive power and the Bayes-optimality of the resulting transformers were not established in the PFN literature. Also, our results are not restricted to the Bayesian setting and can be broadly applied to provide frequentist in-context prediction guarantees for transformers. > (Pretraining rounds for excess risk) the authors provide derivations of multiple tasks in terms of the pretraining rounds needed to train transformers with an excess risk to a given baseline… whether such rules can be made up for more general classes of tasks, such that it would be easier to transfer these results to other tasks? Can polynomial guarantees, e.g. be derived given some computational complexity of the baseline algorithm and task? We believe there may indeed be some general conditions for tasks that are learnable in-context by transformers. For example, for any task that i) can be efficiently learned by gradient descent, and ii) whose gradient is approximable by attention layers, we should be able to construct transformers to do ICL and obtain similar polynomial sample complexity guarantees as in our paper. Concretely spelling out such general conditions, and identifying alternative conditions (different from GD), would be an important question for future work. > … what happens when the pretrained transformer is applied to out-of-distribution data? e.g. linear pretraining applied to sparse linear tasks (algorithmically out of distribution) or when the input data is drawn from another distribution (e.g. higher variance, collinearity, ..) Most of our approximation results (e.g. 
Theorem 4, Corollary 5, Theorem 7, Theorem 11) apply to any such OOD scenario, since the transformers we construct work on any in-context dataset with mild boundedness and non-degeneracy assumptions, with no distributional assumptions required. On the other hand, our pretraining results (Section 5) and experiments focus on the in-distribution setting. Experimentally, Garg et al. (2022) have conducted extensive empirical studies of ICL in OOD scenarios, with mixed results. Theoretically, we believe characterizing the OOD behavior of pretrained transformers is a major open problem and may require new insights and techniques. > Are there limits to the number of algorithmic tasks selected by transformers in the selection algorithm? Our transformer constructions do not have a hard limit on the number of tasks. A larger number of tasks $K$ would require a larger hidden token dimension $D$ to store all the intermediate results, as well as a larger $N$ in order for the validation losses to be faithful estimates of the true losses. See Theorem 11 (formal version in Theorem J.1) for an example. > Limitations are not clearly addressed, those could include: Theoretical analyses are hard to apply to more realistic and complex real-world tasks… We agree that complex real-world tasks could be much more challenging than our setting, especially if the task distributions are more complicated or even consist of language data rather than real-valued data (as in our setting). We will add a discussion on this and other limitations of our work in our revision. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. Concerning "generalizing these hyperparameters": That is what I meant indeed. I imagine when a transformer is tested for $\lambda_1, \lambda_2, \dots$ it might be able to generalize to $\lambda_{\text{new}}$, at least within the bounds of the $\lambda$'s seen. This might not work perfectly, but an approximation should be learned. 
I have to say this is less of a criticism; this would be another work and does not fit the scope of yours. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you for the response. Yes, we agree that generalizing to $\lambda_{\text{new}}$ would be an interesting direction for future work.
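The post-ICL-validation mechanism discussed in this thread — selecting among base algorithms such as ridge regression with different $\lambda$ by comparing validation losses on the in-context examples — corresponds to the classical procedure below, sketched in plain NumPy as an assumed reference implementation (not the transformer construction itself).

```python
import numpy as np

# Assumed reference implementation of post-ICL validation for ridge
# regression: fit each candidate lambda on a train split of the in-context
# examples, then select the lambda with the smallest validation loss.
def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def select_lambda(X, y, lambdas, n_train):
    Xtr, ytr = X[:n_train], y[:n_train]
    Xva, yva = X[n_train:], y[n_train:]
    losses = [np.mean((Xva @ ridge_fit(Xtr, ytr, lam) - yva) ** 2)
              for lam in lambdas]
    return lambdas[int(np.argmin(losses))]

rng = np.random.default_rng(0)
d, n = 5, 200
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)  # low-noise linear task
lam = select_lambda(X, y, [1e-3, 1.0, 100.0], n_train=150)
print(lam)  # heavy regularization (100.0) loses on this low-noise task
```

Extending the finite grid to a continuous range of $\lambda$, as discussed above, would change the statistical analysis of best-$\lambda$ selection rather than this selection loop.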
Summary: This paper theoretically investigates the in-context learning ability of transformers from the approximation view. The authors derive the error bound of approximating least squares, ridge regression, and Lasso algorithms with a transformer. The corresponding statistical properties, i.e., the convergence rate and Bayes suboptimality, are derived for transformers. In addition, the authors show that transformers are also able to implement algorithm selection. Simulation results are provided to verify the theoretical findings. Strengths: This paper presents solid error bounds for transformers approximating different kinds of algorithms, which are interesting for the approximation theory of neural networks. The corresponding convergence rates for different algorithms are relevant for explaining the in-context learning behavior of transformers. In addition, the approximation results for the algorithm selection unit are novel in the investigation of in-context learning. Weaknesses: 1. Although this paper provides extensive parameter assignments to show that transformers can implement a series of algorithms, few experimental results are provided to corroborate these constructions. For example, the authors construct transformers that implement one step of gradient descent with several layers and repeat this pattern. However, such a periodic pattern of network parameters and intermediate results is not shown in the simulation section. Consequently, it becomes challenging to assert that transformers indeed implement the algorithms as stated in the paper. 2. The authors analyze attention with the ReLU activation function, although this particular activation function is rarely employed in practical designs of attention. To justify the necessity of analyzing ReLU instead of softmax, it would be beneficial to provide reasoning, considering that previous studies, such as [1] and [2], predominantly adhere to the softmax activation function. 3. 
Given the approximation results in the paper for transformers with ReLU activation and the fact that Multi-Layer Perceptrons (MLPs) are universal approximators of continuous functions, it would be beneficial to discuss whether the results in this paper imply that MLPs can also implement the algorithms in the paper and consequently possess the in-context learning ability. 4. In Corollary 6, the paper derives the estimation error of approximate ridge regression in the Bayesian setting, but such a Bayesian result is missing for Lasso. It would be helpful to discuss the absence of Bayesian analysis for Lasso. 5. The generalization error bound in section 5 needs more discussion. Previous works, such as [3] and [4], have derived generalization error bounds for transformers. The novelty of the generalization bound in this paper needs more discussion. I am happy to change my evaluation if the authors can answer the above questions. [1] Von Oswald J, Niklasson E, Randazzo E, et al. Transformers learn in-context by gradient descent. arXiv preprint arXiv:2212.07677, 2022. [2] Akyürek E, Schuurmans D, Andreas J, et al. What learning algorithm is in-context learning? Investigations with linear models. arXiv preprint arXiv:2211.15661, 2022. [3] Edelman B L, Goel S, Kakade S, et al. Inductive biases and variable creation in self-attention mechanisms. International Conference on Machine Learning. PMLR, 2022: 5793-5831. [4] Zhang Y, Liu B, Cai Q, et al. An Analysis of Attention via the Lens of Exchangeability and Latent Variable Models. arXiv preprint arXiv:2212.14852, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The questions are listed in the previous section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation section is missing in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback on our paper. We respond to the specific questions as follows. > …few experimental results are provided to corroborate these constructions. For example, the authors construct transformers that implement one step of gradient descent with several layers and repeat this pattern. However, such a periodic pattern of network parameters and intermediate results is not shown in the simulation section. Consequently, it becomes challenging to assert that transformers indeed implement the algorithms as stated in the paper. We did not mean to show that empirically learned transformers implement gradient descent (GD) in their intermediate layers, and we suspect that alternative mechanisms could exist. GD is rather **one possible mechanism** we use in theory for constructing the transformers. > The authors analyze attention with the ReLU activation function, although this particular activation function is rarely employed in practical designs of attention. To justify the necessity of analyzing ReLU instead of softmax, it would be beneficial to provide reasoning, considering that previous studies, such as [1] and [2], predominantly adhere to the softmax activation function. For the purpose of our theory, we believe there is no essential difference between ReLU and softmax. We believe our constructions can be generalized to standard softmax attention with some additional technical treatments. Our choice of ReLU was merely to simplify certain arguments. Experimentally, we tried out exactly the ReLU architecture we used in our theory (cf. Line 289-291 & Appendix M.1) and it performs well. Other recent studies (e.g. Shen et al. 2023) have also found transformers with ReLU attention to perform well. K. Shen, J. Guo, X. Tan, S. Tang, R. Wang, and J. Bian. A Study on ReLU and Softmax in Transformer. arXiv preprint arXiv:2302.06461, 2023. 
Re [1,2], we remark that [1] only used **linear attentions** instead of softmax, and the constructions in [2] used softmax in its “saturating regime” where it is approximately a **hard max** (cf. their arXiv version Appendix C.4.1, Page 19). Therefore, neither paper should count as "predominantly using the softmax activation" in a strict sense. > whether the results in this paper imply that MLP can also implement the algorithms in the paper and consequently possess the in-context learning ability. By “MLP”, did the reviewer mean an MLP over the concatenated input (a vector in $\mathbb{R}^{dN}$)? In that case, by the universal approximation property, MLPs can also approximate the ICL algorithms we considered. However, such an MLP would be **significantly more inefficient** (with much larger depth and width) compared with our constructions, as i) the input dimension (and thus the number of parameters) already scales linearly with the sequence length, whereas the size of our transformers do not; and ii) we used the attention layers in efficient ways to implement various operations such as in-context gradient descent (see e.g. Proposition E.1 for an example where the attention structure is crucial), which are unclear how to implement efficiently by MLPs. We will add a discussion about this point in our revision. Feel free to let us know if we understood the meaning of the "MLP" correctly. > In Corollary 6, the paper derives the estimation error of approximate ridge regression in Bayesian setting, but such Bayesian result is missing for Lasso. It will be helpful to discussion such absence of Bayesian analysis for Lasso. A Bayesian result for Lasso analogous to Corollary 6 holds directly for linear models with the Laplacian prior (by applying Theorem 7). We stated the result for ridge regression for convenience only (instead of for any fundamental reason), as our main focus was on the later results on adaptive algorithm selection building on these “base” algorithms. 
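The in-context gradient descent mechanism mentioned earlier in this rebuttal — one GD step implemented per group of layers, so that depth plays the role of iteration count — can be sketched in plain NumPy. This is the algorithm the theory constructs transformers to emulate, not a claim about what empirically trained transformers learn (as the rebuttal emphasizes).

```python
import numpy as np

# Sketch of the in-context GD mechanism used in the theoretical constructions
# (one gradient step per group of layers, so depth ~ iteration count). This is
# the algorithm the transformers are *constructed* to emulate -- not a claim
# about what empirically trained transformers learn.
def icl_gd_least_squares(X, y, num_layers, lr):
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(num_layers):           # each "layer" performs one GD step
        grad = X.T @ (X @ w - y) / n      # gradient of the in-context LS loss
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_star = np.array([1.0, -2.0, 0.5])
y = X @ w_star                            # noiseless in-context examples
w_hat = icl_gd_least_squares(X, y, num_layers=500, lr=0.1)
print(np.allclose(w_hat, w_star, atol=1e-3))  # -> True
```

With enough "layers" (GD steps) the iterate converges to the in-context least-squares solution, which is the sense in which depth trades off against approximation accuracy in the constructions.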
> The generalization error bound in Section 5 needs more discussion. Previous works, such as [3] and [4], have derived generalization error bounds for transformers. The novelty of the generalization bound in this paper needs more discussion. The sample complexity results in Section 5 (& Appendix M) can be viewed as corollaries of our efficient transformer constructions in Sections 3 & 4, where we combine the constructions with standard generalization analysis to derive the sample complexities for pretraining. We did not claim technical novelty in the generalization analysis part. Our techniques are indeed standard (controlling Lipschitz constants + chaining arguments), and alternative techniques like [3,4] may also work here. The novelty is rather in the efficient transformer constructions in Sections 3 & 4, which ensure that the final sample complexity in Section 5 is mild. Results like [3,4] did not provide such concrete constructions for ICL algorithms, and thus cannot by themselves yield our sample complexity results. We will add a discussion of these points (space permitting) in our revision. > The limitation section is missing in this paper. We appreciate the suggestion, and will properly discuss the limitations of our work in our revision. --- We thank the reviewer again for reading our response. We would appreciate it if the reviewer could consider raising the score, if our rebuttal has addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. Given the rebuttal, I have the following concerns: (Explanations of possible mechanisms and ReLU activation) Thanks for explaining the perspective from which the results in this paper are meaningful. I now accept the approximation view in this paper to study ICL. Given this perspective, the explanation in the rebuttal of the softmax in [2] seems pale. 
Although they mainly use the saturation part of softmax, this is one possible mechanism to understand softmax, viewed from the perspective explained in the rebuttal. I think a detailed discussion about this would be very helpful. In addition, the authors mentioned that the results can be generalized to softmax attention. Could the authors provide intuition about how to do this? I think this is unclear from the techniques in the current paper. (ICL in MLP) Thanks for the detailed explanations of the potential problems in MLP. I am sorry for not providing a clear definition of MLP. In fact, I would like to discuss the RNN, which is a universal approximator. Could the authors discuss the ICL ability of RNNs? In my personal opinion, a very important question related to the main theme of this paper is: are the constructions in the paper a sufficient condition for ICL, i.e., will any NN that can approximate these algorithms have ICL ability, or are they only a hypothesis to explain the ICL ability of transformers? If the latter is correct, could the authors comment on how to verify this hypothesis? The answer to this question is not clear after reading this paper. (Novelty of pretraining results) The authors mentioned that `We provide the first line of results for pretraining transformers to perform the various ICL tasks above, from polynomially many training sequences (Section 5 & Appendix K).' in the introduction. It seems that the pretraining result is one of the main contributions of this paper. If the novelty is not in the technical part, as mentioned in the rebuttal, I think the contribution here needs more discussion. --- Reply to Comment 1.1.1: Title: Response to further questions Comment: We thank the reviewer for the response. We respond to the additional questions as follows. 
**(Explanations of possible mechanisms and ReLU activation)** > the explanation in the rebuttal of the softmax in [2] seems pale… this is one possible mechanism to understand softmax… a detailed discussion about this will be very helpful. Apart from the “saturating regime”, the way [2] used softmax is very different from the way we used ReLU. They used softmax attention, in combination with the MLP layers, to approximate various low-level operations such as “aff, mul, mov”, and used these to approximate the gradient of the square loss. A single gradient step requires multiple low-level operations concatenated, and consequently they need a 9-layer transformer (cf. their Appendix A) to approximate a single SGD step. By contrast, we directly use a single attention layer with ReLU activation to approximate a full-batch GD step, and our construction works for a broader class of convex losses (cf. our Proposition E.1). Technically, our construction uses the attention structure directly and efficiently to approximate gradients, unlike [2]. We will add a discussion about this in our revision. > the authors mentioned that the results can be generalized to softmax attention. Could the authors provide the intuitions about how to do this? Our construction can be generalized to softmax attention as follows: we can use the softmax—in conjunction with a specific positional encoding in the input—to implement a (tokenwise) sigmoid activation, and then approximate the gradients using the sigmoid in place of the ReLU. See, for example, Giannou et al. (ICML 2023; Lemma 5) for implementing sigmoids from softmax attention. The argument would be essentially the same as ours after obtaining the sigmoid, but overall more tedious than our construction using the ReLU. A. Giannou, S. Rajput, J.-y. Sohn, K. Lee, J. D. Lee, and D. Papailiopoulos. Looped transformers as programmable computers. arXiv preprint arXiv:2301.13196, 2023. 
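As a toy numerical illustration of the point above (our own sketch, not the paper's exact construction): the identity $t = \mathrm{relu}(t) - \mathrm{relu}(-t)$ lets two ReLU "heads" reproduce the linear residual $\langle w, x_i\rangle - y_i$ exactly, so a single attention-like aggregation computes a full-batch GD step for least squares. All names, dimensions, and step sizes here are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 64, 5
X = rng.normal(size=(N, d))       # in-context examples x_1..x_N as tokens
w_star = rng.normal(size=d)
y = X @ w_star                    # noiseless labels for this toy example

def relu(t):
    return np.maximum(t, 0.0)

def attention_gd_step(w, X, y, eta):
    # ReLU "attention scores": t = relu(t) - relu(-t) recovers <w, x_i> - y_i
    s = relu(X @ w - y) - relu(-(X @ w - y))
    # value aggregation sum_i s_i * x_i gives the full-batch gradient
    grad = (2.0 / X.shape[0]) * (X.T @ s)
    return w - eta * grad

w = np.zeros(d)
for _ in range(200):
    w = attention_gd_step(w, X, y, eta=0.1)

print(np.linalg.norm(w - w_star))  # converges to the least-squares solution
```

The point of the sketch is only that the score computation and the value aggregation each fit in one attention-style operation; the paper's construction handles a broader class of convex losses via similar ReLU approximations.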
**(ICL in MLP)** > In fact, I would like to discuss the RNN, which is an universal approximator. Could authors discuss the ICL ability of RNN? RNNs could approximate ICL algorithms better than vanilla MLPs, due to their suitability for processing sequential inputs. However, for implementing the ICL algorithms in our paper, we believe **RNNs would still be much more inefficient than transformers**. One reason is that RNNs (in their basic form) consist only of matrix-vector products with fixed weight matrices, and lack the attention mechanism through which input tokens can interact with each other. Consequently, key mechanisms we used, such as in-context gradient descent (cf. Theorem 9), would be much harder to implement with RNNs than with transformers. A simple analogy is the dot product $\langle x_1, x_2\rangle$: RNNs with layers of the form $\sigma(W[x_1; x_2])$ may approximate this function very inefficiently (by invoking universal approximation results in high dimension), whereas transformers can approximate it efficiently using the attention mechanism, with $x_1$ as the key and $x_2$ as the value. > the constructions in the paper are a sufficient condition for ICL, i.e., any NN that can approximate these algorithms will have ICL ability, or they are only a hypothesis to explain the ICL ability of transformer? Our paper merely provides upper bounds for transformers to do ICL. While our constructions *suggest* that MLPs/RNNs are unlikely to match transformers in the efficiency of doing ICL, strictly speaking, our upper bounds do not determine how these alternative architectures will fare. One way to investigate this further is to establish formal lower bounds for these alternative architectures. This would be an interesting direction for future work, but we believe it is out of scope for this paper and does not undermine our contributions, as transformers themselves are already widely used and have demonstrated remarkable ICL capabilities.
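The dot-product analogy above can be checked in two lines (a toy sketch of ours, assuming identity query/key maps): the pre-softmax attention score between two tokens is already the bilinear term that a fixed-weight layer $\sigma(W[x_1; x_2])$ can only approximate.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
x1, x2 = rng.normal(size=d), rng.normal(size=d)

# Attention head with identity query/key maps: the pre-softmax score
# q . k is exactly the dot product <x1, x2> -- no approximation needed.
W_Q, W_K = np.eye(d), np.eye(d)
score = (W_Q @ x1) @ (W_K @ x2)

print(score - x1 @ x2)  # exactly zero
```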
Summary: In-context learning (ICL) is a setting in which transformers can learn to perform new tasks when prompted with training and test examples. This work advances the understanding of the capabilities of ICL. In particular, this paper proves that transformers can implement a broad class of standard machine learning algorithms, such as ridge regression, Lasso, etc. Moreover, ICL can perform in-context algorithm selection, i.e., select simple algorithms to learn a more complex algorithm. The authors also show that, using their proposed method, they can construct a transformer that performs nearly Bayes-optimal ICL on noisy linear models with mixed noise levels. Strengths: * This paper expands our understanding of ICL. It shows that transformers can implement a plethora of standard ML tasks while requiring only mild bounds on the number of layers and heads. * The paper extends the analysis to in-context algorithm selection and provides two algorithm selection mechanisms. The proposed mechanisms are used to construct a transformer that can perform Bayes-optimal ICL on noisy linear models. * This work opens up new directions to explore (theoretically and empirically) optimal ICL constructions on other problems. * The paper is well written and easy to follow. Weaknesses: * It would be good to know how results such as those in [1] compare to the ones obtained in this paper. [1] Are transformers universal approximators of sequence-to-sequence functions? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What was the reasoning behind choosing Bayes-optimal ICL on noisy linear models with mixed noise levels as an example to showcase the usefulness of in-context algorithm selection? Was that the most complex method for which something like Theorem 12 could be proven? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback on our paper. We respond to the specific questions as follows. > How results such as those in [1] compares to the one obtained in this paper. Our approximation results and [1] are very different. [1] shows that transformers are universal approximators for a large class of sequence-to-sequence functions. However, their result is for generic continuous seq-to-seq functions, and their number of layers is exponential in the worst case (cf. Section 4.4 of [1]). By contrast, **our results are for specific target functions (ICL algorithms) but much more efficient**—the number of layers, heads, and weight norms depends only polynomially on the relevant problem parameters and the desired approximation accuracy. This is because our constructions utilize the special structure of the ICL algorithms (our approximation targets). > The reasoning behind choosing Bayes-optimal ICL on noisy linear models with mixed noise levels. There is no fundamental reason behind our choice of noisy linear models with mixed noise levels. Rather, we picked this setting as it is solvable by transformers via algorithm selection, and the setting itself is harder than that of Akyurek et al. (2022), which studies a single noise level. > Was that the most complex method for which something like Theorem 12 could be proven? If we understand correctly, you mean the most complex “setting” (data-generating model)? We believe results like Theorem 12 hold as long as 1) the model is a **mixture model**; 2) Bayes-optimal ICL can be done for each component of the mixture; and 3) $N$ is sufficiently large so that the validation losses are accurate estimates of the true population losses. Under these conditions, algorithm selection (post-ICL validation) can be used to do nearly Bayes-optimal ICL. An example is a mixture of generalized linear models with different link functions. 
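The "post-ICL validation" mechanism described above can be sketched in a few lines (our own toy illustration, not the transformer construction; the candidate noise levels, the unit-prior-variance matching $\lambda = \sigma^2$, and all helper names are our assumptions): run one base algorithm per mixture component, then pick the one with the smallest loss on held-out in-context examples.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_train, n_val = 4, 60, 40
sigma = 0.5                              # true (unknown) noise level
w_star = rng.normal(size=d)
X = rng.normal(size=(n_train + n_val, d))
y = X @ w_star + sigma * rng.normal(size=n_train + n_val)
Xtr, ytr, Xva, yva = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

def ridge(X, y, lam):
    # closed-form ridge regression solution
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

candidate_sigmas = [0.1, 0.5, 2.0]       # the mixture components' noise levels
best_w, best_loss = None, np.inf
for s in candidate_sigmas:
    lam = s ** 2                         # lambda matched to each noise level
    w = ridge(Xtr, ytr, lam)             # one "base" algorithm per component
    loss = np.mean((Xva @ w - yva) ** 2) # validation loss on held-out examples
    if loss < best_loss:
        best_w, best_loss = w, loss      # algorithm selection step

print(best_loss)  # close to the Bayes error sigma^2 for the true component
```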
--- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for clarifying my doubts. I have no further questions regarding the paper, and I stand by my assessment that the paper is technically solid with high impact. PS: One idea/further extension could be to prove a seq-to-seq result like that in [1], but in context. Note that this does not impact the assessment of this paper. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you for the response and the positive feedback on our paper! Re extension: yes, we agree extending our in-context learning results to a seq-to-seq setting (predicting at every token) would be an interesting direction for future work.
NeurIPS_2023_submissions_huggingface
2023
Toward Re-Identifying Any Animal
Accept (poster)
Summary: The paper focuses on the ReID problem. Different from previous works, which mainly focus on persons and vehicles, this paper aims to re-identify any animal. To realize this, the authors construct a new dataset that contains many animals. The authors also propose a visual guidance generator, a textual guidance generator, and a text-guided attentive module. By integrating the power of CLIP and GPT-4, the authors claim that they achieve re-identifying "ANY" animals. 20230808 update: after checking the cons raised by other reviewers, I still lean toward rejecting the paper. I kindly ask the fellow reviewers to check my concerns, thanks. Strengths: Personally, I appreciate the paper and spent a whole day on it as well as its supplementary material. What makes me interested in this paper is the following: (1) The interesting idea: re-identifying any animal is very interesting and promising to me. It has more impact than re-identifying people and vehicles. I think it is a step toward re-identifying anything. (2) The good writing: I can fully understand what the authors want to express by reading only once. The writing is smooth and excellent. Weaknesses: Although the paper is very interesting and the topic the authors discuss is very promising, I have some major concerns. Below please find my comments on these concerns. Please correct me if I'm wrong. (1) The very small dataset. Building a dataset is very admirable in the data-centric AI era. However, I think the significance of the dataset is limited. As you claim in the title, you want to reID any animal; however, in Sup. Tables 1 and 2, there are only 2671 identities in your proposed dataset. The number of identities is even much smaller than vehicle datasets (Vehicle) and person datasets (MSMT17). In the ReID task, each identity is regarded as one class. To achieve "ANY", you should include a very large number of identities, for example, 267,100. 
As expected, with such a small number, you can only freeze the CLIP backbone and use the GPT-4 API (I see that fine-tuning results in worse performance). In conclusion, I doubt the value of the proposed Wildlife-71. What I expect is a very large re-identification dataset, with methods trained ONLY on it to achieve "ANY". So here, my suggestion is to enlarge the dataset by 100 times and see what happens. (2) The proposed method lacks novelty. I understand that it is acceptable for the main contribution to be a dataset, with the proposed method regarded merely as a strong baseline to benchmark the proposed dataset. However, as I stated in (1), I think the value of the dataset is limited. The method part is built on existing works: 1. The CLIP and VPT: you only regard the support images as prompts to tune efficiently. This is not novel and easy to do. 2. The text-guided attentive module: there are many methods [1-3] used to aggregate information from vision and language. May I know what new insights there are considering these prior works? So here, my suggestion is to focus only on the training method on your proposed dataset. Model pre-training on a huge dataset (assuming you have labeled one) is very attractive. I see you only have two TITAN GPUs; using at least 8*A100 is more suitable for the "ANY" task. I apologize in advance if you do not have these. (3) The insufficient and unconvincing experiments. I know the paper contains a bold idea; however, I find: 1. With the help of the strongest GPT-4 and CLIP, your method only achieves about +3% over other famous ReID and DG-ReID methods (Table 1). In my opinion, the improvement is too marginal, even with the strongest tools. 2. I would suggest adding comparisons with existing aggregation modules [4-6], which are lacking in this paper. 3. The design of the visual guidance generator: I see that you compare the tuning method in your proposed method with others. 
What I am concerned about is whether there are any other kinds of visual generators. Do they perform well? 4. I am curious about how the support images are selected in the test process. Do you assume there are labels? Do you use any matching methods? Comparisons are needed. 5. During the test, you concatenate the 4 visual features. How much computational burden does that incur? Please show the inference speed compared with other methods. If the added computational cost is significant compared with the accuracy gain, it is hard to judge the advantage of this system. It would be very good if the authors could add the suggested experiments. [1] Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone. [2] Language As Queries for Referring Video Object Segmentation. [3] mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections. [4] Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models. [5] Non-local Neural Networks (you may adapt it to utilize text). [6] A Text Attention Network for Spatial Deformation Robust Scene Text Image Super-resolution. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Some small questions: 1. What is the meaning of "20" in Line 199? 2. How many prompts are used in each ViT layer? 3. Remove the Duke dataset due to its invasion of privacy. 4. If you want to work on reIDing anything, how can you obtain the dataset? Have you considered a synthetic dataset? 5. Do you think it is possible to reID any animal without any help from tools like GPT-4? I am intrigued by vision foundation (big) models; I would like you to focus on this. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Data: For the open-world problem, there are two mainstream solutions. One is using a huge dataset to train a strong deep model, such as SAM. The other is transfer-learning methods, e.g., UniDetector [A]. These methods employ a dataset of a certain volume to gather knowledge and increase the models' adaptability, so that the accumulated knowledge can be adapted to handle novel instances they encounter. In this study, we chose the second, data-efficient, solution, since collecting a huge wildlife dataset could hardly be achieved in the foreseeable future. Since a ReID dataset requires multiple frames for each instance, its collection cannot be achieved by simply crawling web images as SAM did. So, we collected wildlife ReID data from videos. But, due to the rarity of wildlife, we found that for each category, only an average of 50 usable videos could be collected on YouTube. Thus, gathering your expected dataset would require laborers to identify, track, and photograph over 260K animals in the wild. Given the limited economic value of wildlife, in the foreseeable future there could hardly be any institution that would afford such costs. Although Wildlife-71 may not be large enough to train a foundation model, its value should not be overlooked. As shown in Table 4, compared with existing datasets, Wildlife-71 can provide ReID models with effective knowledge to identify animals. Besides, incorporating the textual knowledge derived from GPT-4 and our proposed category adaptation modules, the knowledge inside Wildlife-71 can be further adapted to unseen animals, which indicates the value of Wildlife-71 in supporting the second solution. Finally, our objective is toward (not achieved) re-identifying any animal. If you feel this is over-claimed, we are happy to replace it, e.g., with "Toward category-generalizable wildlife re-identification". 2. Novelty: In terms of the overall design, our UniReID offers two pivotal insights. 
The first is deriving textual guidance from LLMs into the ReID model, empowering it with adequate knowledge for generalization. As R@9xKB also notes, this solution could inspire future research using LLMs. The second is empowering ReID models with sufficient adaptability to adapt accumulated knowledge to handle unseen categories. These insights could benefit both ReID-AW and analogous cross-category instance retrieval scenarios. Regarding technical details: **1)** Our visual prompt is not a tuning strategy. Unlike VPT methods that need to train a set of prompt vectors for each target task, our visual guidance generator (ViGG) can generate category-specific prompts through a fully forward process. As far as we know, this is the first work to construct visual prompts in such an adaptive and tuning-free manner. **2)** Rather than revising the attention module, our major consideration in incorporating textual prompts is to address the inherent modality gap between them and images, and to ensure they can accurately associate with visual cues without explicit semantic annotations (e.g., segmentation maps). For this issue, we propose to use the CLIP model, adeptly trained to bridge visual and textual elements, embedding them into a shared hidden space for the subsequent attention operation. 3. Performance: GPT-4 and CLIP are large language and vision-language models. They suffer significant modality gaps with fine-grained visual tasks. Hence, their application to ReID-AW is not straightforward and does not automatically yield improvement. As shown in Figure 3, a simple application of CLIP to ReID-AW yields unsatisfactory results. To address this, we propose the visual and textual prompting strategy, which helps to bridge this gap and make use of the knowledge in these models. The effort involved in this process should not be overlooked. 4. Aggregation: We replaced our textual attentive module with these three aggregation modules and evaluated them on ReID-AW. [4] achieves 60.4% CMC-1 (vs. 
UniReID's 61.4%). [5] replaces the attentive module with a transformer decoder, achieving comparable performance, i.e., 61.1%. [6] introduces a Gaussian filter to capture salient clues comprehensively, which slightly increases the CMC-1 of UniReID by 0.2%. 5. Generators: We replaced ViGG with two other hyper-networks, namely, the fusion net [B] and the MLP net [C]. [B] uses 64 learnable vectors as templates and then uses features of support images to predict a set of fusion weights to aggregate these templates into visual prompts. [C] receives the features of support images and adaptively embeds them into visual prompts via an MLP. With ViGG, we were devoted to integrating the benefits of both methods above: we use a set of trainable vectors to accumulate knowledge and a transformer layer to calibrate them into category-specific prompts under the guidance of support images. Hence, UniReID surpasses the versions using the above two methods by 1.0% and 0.8% CMC-1, respectively. 6. Support images: For each test set, we randomly select a triplet of images and designate them as the support data, which, once designated, remains consistent across all experiments. 7. Speed: Please refer to Answer 2 for R@9xKB. 8. "20": 20 is the number of prompts used in each ViT layer. 9. Duke: We will remove the experiments on Duke. 10. Synthetic: Yes. Although models like diffusion can spawn diverse instances, they struggle to produce images of the same identity across varying viewpoints and poses. Recently, we have discerned that advancements in 3D generation methods may offer a solution. Such methods can create an array of 3D models, thus securing inter-identity diversity. The 3D models can also be manipulated to generate images of the same identities. 
[A] Detecting everything in the open world: Towards universal object detection CVPR23\ [B] Dynamic convolution: Attention over convolution kernels CVPR20\ [C] Decoupled dynamic filter networks CVPR21 --- Rebuttal Comment 1.1: Title: 1: Data Comment: **Authors rebuttal** 1.Data: For the open-world problem, there are two mainstream solutions. One is using huge dataset to train a strong deep model, such as SAM. The other one is transfer-learning methods, e.g., UniDetetcor [A]. These methods employ a dataset with a certain volume to gather knowledge and increase the models' adaptability to adapt the accumulated knowledge to handle novel instances they encountered. In this study, we chose the second, data-efficient, solution, since collecting a huge wildlife dataset could hardly be achieved in the foreseeable future. Since ReID dataset requires multiple frames for each instance, its collection could not be achieved by simply crawling web images as SAM. So, we collect wildlife ReID data from videos. But, due to the rarity of wildlife, we found that for each category, only an average of 50 available videos could be collected on Youtube. Thus, gathering your excepted dataset requires laborers to identify, track, and photograph over 260K animals in the wild. Given the limited economic value of wildlife, in the foreseeable future, there could hardly be any institution to afford such costs. Although Wildlife-71 may be not large enough to train a foundation model, its value should not be overlooked. As shown in Table 4, compared with existing datasets, wildlife-71 could provide ReID models with effective knowledge to identify animals. Besides, incorporating the textual knowledge derived from GPT-4 and our proposed category adaptation modules, the knowledge inside wildlife-71 could be further adapted to unseen animals, which indicates the value of wildlife-71 to support the second solution. Finally, our objective is toward (not achieved) re-identifying any animal. 
If you feel it's over-claimed, we are happy to replace it, e.g., "Toward category-generalizable wildlife re-identification". **My opinion** 1. The paper UniDetector you mentioned also uses a very large dataset (*These contributions allow UniDetector to detect over 7k categories, the largest measurable category size so far, with only about 500 classes participating in training.*). Admittedly it does not train on a very large dataset, but the test categories/data are much larger than yours and therefore more meaningful. For the current version, I do not see your methods evaluated on such a big dataset. Therefore, I consider the contribution limited. 2. Admittedly, collecting such a big dataset is very expensive, but I do not think it is impossible. For example, the collection of the OpenImages dataset by Google was much harder than the dataset I suggested. 3. Although you want to under-claim, the performance is too low. I do not think it has any practical use. 4. For the synthetic data, I mean using an engine such as Unity to build it. For example, please refer to RandPerson and ClonePerson. **In conclusion, the rebuttal from the data view does not satisfy me. I suggest a Reject in the aspect of data.** I am not sure the authors will see this because: **Error: NeurIPS 2023 Conference Submission3282 Authors must not be readers of the comment** --- Reply to Comment 1.1.1: Comment: While we believe that accumulating huge data to train a large model is not the only way to solve open-world problems, your views on this issue seem firm. From our perspective: 1. As you notice, UniDetector is also an annotation-efficient model, which uses only images belonging to 500 categories to train a generalizable model. This indicates that our transfer-learning solution is also valuable and feasible. 
Regarding UniDetector's 7,000 test categories, this is attributable to the fact that detection is not as fine-grained a task as ReID, and detection already has many existing multi-category datasets like Visual Genome for evaluation. In contrast, for our Wildlife-71 dataset, even facing the scarcity of wildlife data and the fine-grained nature of the ReID task, we have gathered data from 7 categories (4 listed in the paper and 3 newly added), which can also assess our model's generalization. This effort should not be regarded as a limitation of this work. 2. **Firstly**, the volume of data isn't the sole metric for evaluating a dataset's value, and our Wildlife-71 encompasses over 100K images, larger than most ReID benchmarks (Table 1 in the Appendix). **Secondly**, given the limited economic value of wildlife, in the foreseeable future there could hardly be any institution that would afford the costs of building a wildlife ReID dataset of the same volume as OpenImages. However, we contend that research should not be evaluated solely on its economic merits. The social merit of wildlife re-identification technology, especially in areas like endangered population monitoring and migration pattern analysis, holds substantial value, driving our thorough research. 3. **Firstly**, the open-world problem is a challenging task that needs to be solved gradually and collaboratively by community researchers, rather than addressed in one step. For instance, the acknowledged UniDetector also brings only limited improvement (3.8 AP in its Table 4) over previous works. In comparison, our UniReID brings a 4.5 CMC-1 improvement over the top-performing generalizable ReID model DTIN-Net. **Secondly**, as the pioneer ``toward'' (we did not claim we have achieved) re-identifying any animal, we introduce the significant ReID-AW task, construct a benchmark encompassing over 100,000 images (outstripping the size of most existing ReID datasets), establish an evaluation protocol, and give our technical solution. 
We posit that our efforts could introduce this essential task to the re-identification community, and it should not be fully negated based solely on dataset-volume considerations. Meanwhile, we believe that, with the collaborative efforts of the community, both the Wildlife-71 dataset and the ReID-AW task will achieve notable progress. 4. Notably, to produce synthetic data for each wildlife category using tools like Unity, we would have to gather tens of thousands of 3D scans per wildlife category to create a 3D parametric model similar to SMPL. This process could arguably be more time-consuming than collecting ReID data itself. Consequently, we suggest leveraging NeRF-based 3D generation models like [1], as we discussed in our response, which can leverage 2D data that is comparatively easier to amass. However, given the imperfection of these methods, they cannot yet be directly employed. [1] Wang Z, et al. “ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation.”, arXiv. Title: Response to the concern of R@ABAL about Data --- Rebuttal Comment 1.2: Title: 2. Novelty Comment: **Authors rebuttal** 2.Novelty: In terms of the overall design, our UniReID offers two pivotal insights. The first is deriving textual guidance from LLMs into the ReID model, empowering it with adequate knowledge for generalization. As R@9xKB also notes, this solution could inspire future research using LLMs. The second is empowering ReID models with sufficient adaptability to adapt accumulated knowledge to handle unseen categories. These insights could benefit both ReID-AW and analogous cross-category instance retrieval scenarios. For technical details, 1) Our visual prompt is not a tuning strategy. Unlike VPT methods that need to train a set of prompt vectors for each target task, our visual guidance generator (ViGG) could generate category-specific prompts through a fully forward process. 
As far as we know, this is the first work to construct visual prompts in such an adaptive and tuning-free manner. 2) Rather than revising the attention module, our major consideration when incorporating textual prompts is to address the inherent modality gap between them and images, and to ensure they can accurately associate with visual cues without explicit semantic annotations (e.g., segmentation maps). For this issue, we propose to use the CLIP model, adeptly trained to bridge visual and textual elements, embedding them into a shared hidden space for the subsequent attention operation. **My opinion** Different people see the novelty from different views. I acknowledge the contributions: (1) prompts in an in-context learning setting; (2) the aggregation of text and image. But they are not convincing enough for me. I recommend a **Weak Reject** here for the novelty aspect. I am not sure the authors will see this because: **Error: NeurIPS 2023 Conference Submission3282 Authors must not be readers of the comment** --- Reply to Comment 1.2.1: Comment: I'm sorry, but your statement "Different people see the novelty from different views." is very confusing and somewhat subjective. As you admitted, the utilization of category-adaptive features and the incorporation of text-guided representations to integrate pre-existing category-related knowledge from large language models inside our model is novel in the ReID community and could be extended to analogous cross-category instance-level retrieval scenarios. Meanwhile, from our perspective, these two points potentially provide more technical insights for future research than directly using huge data. Title: Response to the concern of R@ABAL about Novelty --- Rebuttal Comment 1.3: Title: 3.Performance Comment: **Authors rebuttal** 3.Performance: GPT-4 and CLIP are large language and vision-language models. They suffer significant modality gaps with fine-grained visual tasks. 
Hence, their application to ReID-AW is not straightforward and does not automatically yield improvement. As shown in Figure 3, a naive application of CLIP to ReID-AW yields unsatisfactory results. To address this, we propose the visual and textual prompting strategy, which helps to bridge this gap and make use of the knowledge in these models. The effort involved in this process should not be overlooked. **My opinion** The authors do not reply to my comments about the poor improvement. I acknowledge that directly using CLIP and GPT-4 yields no improvement. But that should not justify using such powerful tools while only gaining a poor improvement. **Strong Reject** for the experiment part. I am not sure the authors will see this because: **Error: NeurIPS 2023 Conference Submission3282 Authors must not be readers of the comment** --- Reply to Comment 1.3.1: Title: Response to the concern of R@ABAL about Performance Comment: **Firstly**, as far as we know, there is no theory proving that the use of CLIP and GPT-4 will necessarily lead to striking improvements on open-world visual tasks. Notably, another work, UniDetector, which uses CLIP to address the open-world detection task, also brings only a limited improvement (3.8 AP in its Table 4). **Secondly**, as you note, directly using CLIP and GPT-4 yields no improvement on ReID-AW. Hence, the efforts we have made to make them work should not be overlooked (Table 2). --- Rebuttal Comment 1.4: Title: Others Comment: Others are clear. My main concerns lie in **(1) the contributed data (2) the novelty (3) the performance**. I sincerely ask my fellow reviewers and AC to consider this. Thanks --- Rebuttal 2: Title: Questions Comment: Besides the responses to your rebuttal, I am considering: what is the practical meaning of re-identifying any **ANIMALS**? I cannot imagine a use of your system. 
--- Rebuttal Comment 2.1: Title: Response to the Questions of R@ABAL Comment: "Besides the responses to your rebuttal, I am considering: what is the practical meaning of re-identifying any ANIMALS? I cannot imagine a use of your system." We firmly contend that research should not be evaluated solely on its economic merits. The social merits of wildlife re-identification technology, especially in areas like **endangered population monitoring** and **migration pattern analysis**, hold substantial value, driving our thorough research.
Summary: This paper introduces a new task called "Re-identify Any Animal in the Wild" (ReID-AW) which aims to develop a ReID model capable of handling any unseen wildlife category it encounters. To address this challenge, the authors created a comprehensive dataset called Wildlife-71, which includes ReID data from 71 different wildlife categories, and developed a universal re-identification model named UniReID specifically for the ReID-AW task. The authors employed a dynamic prompting mechanism using category-specific visual prompts to enhance model adaptability and leverage explicit semantic knowledge derived from GPT-4 to focus on regions useful for distinguishing individuals within the target category. UniReID showcases promising performance in handling arbitrary wildlife categories, offering significant advancements in the field of ReID for wildlife conservation and research purposes. Strengths: 1. This paper presents a new generic task compared to object re-identification and provides a dataset including 71 categories. 2. For this task, the authors propose a generic re-identification model that combines visual and textual features to re-identify categories that were not seen during model training. Weaknesses: 1. The authors only evaluated performance comparisons on categories that had not been seen during training. However, comparisons on seen categories, such as pedestrians, against other methods are missing. 2. Some minor issues: the authors do not list the contributions of this paper in an organized manner, and random erasing lacks a corresponding citation [1]. **References** [1] Zhong Z, Zheng L, Kang G, et al. Random erasing data augmentation. Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(07): 13001-13008. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. 
The authors indicate that "The rationale behind this strategy is that emphasizing cross-category distinguishing could guide ReID models to concentrate more on coarse-grained clues while ignoring fine-grained clues that are actually beneficial for intra-category identification". The current understanding of intra-class and inter-instance differences in the ReID task is limited, as previous studies focused solely on the differences between classes. Hence, it remains unclear how these differences could affect the model's performance. Additionally, while the results indicate better performance on unseen classes, the accuracy of the model on the seen classes is questionable. Why not consider all variations within and between classes to gain a comprehensive understanding of the ReID task and improve the accuracy of the model? 2. The authors give an interesting example in the introduction using chameleons; why not use this as a test set? Does the dataset contain that category, and how does the model perform on it? The authors have proposed an interesting task, and I think the dataset is very important for the re-identification community; however, there are some doubts about the construction of the dataset and the inputs to the method. I will raise the score if the authors solve my problems well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This seems to limit the use of the model because the model must use a triplet as the visual input. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1.**Performance comparisons in seen categories.**\ To address this concern, we evaluated our UniReID model and the top-performing competitor, DTIN-Net [A], on two seen categories: alligator and morgan. Here, we need to clarify that we are continually augmenting our Wildlife-71 dataset and have amassed over 1K new wildlife identities. These newly acquired data are used to construct the test set above. In these categories, our UniReID achieves 71.2% and 55.8% CMC-1 accuracy, respectively. Meanwhile, DTIN-Net achieves 67.8% and 52.7%, demonstrating that our UniReID model outperforms DTIN-Net by 3.3% on average, which indicates the superiority of our model. For a detailed comparison in the context of the person ReID task, please refer to Section 2 of the Appendix. 2.**The authors do not list the contributions of this paper in an organized manner, and random erasing lacks corresponding citations [1].**\ Thanks for your suggestions. We will refine our paper to list our contributions and include this citation. Specifically, the contributions would be summarised as follows: 1) In this research, we propose a novel task named "Re-identify Any Animal in the Wild" (ReID-AW). The objective of ReID-AW is to develop a universal ReID model capable of handling any unseen wildlife category it encounters. 2) To equip the universal ReID model with sufficient knowledge for generalization, we construct a diverse dataset named Wildlife-71 for training purposes, which includes ReID data from 71 different wildlife categories. To the best of our knowledge, Wildlife-71 is the first ReID dataset encompassing multiple object categories. 3) In addition to the dataset, we also develop a novel ReID framework named UniReID for the ReID-AW task. Within our UniReID, a visual dynamic prompting mechanism and a textual attentive module are introduced, which are responsible for leveraging visual and textual guidance to adapt our model toward the target category. 
3.**The authors indicate that "The rationale behind this strategy is that ...".**\ Firstly, in terms of experimental results, removing our category-specific sampling strategy results in a 2.7% CMC-1 degradation in our final model. The reason behind this might be that the knowledge employed by ReID models to distinguish instances from varied categories can be ineffective, or even detrimental, when applied to identifying instances within the same category. Intuitively, if we concentrate the ReID model on differentiating instances between horses and cows during the training stage, it could regard coarse-grained clues, such as the presence or absence of horns, as key distinguishing clues. However, such coarse-grained clues are ineffective for fine-grained intra-category identification among horses. Additionally, it is worth mentioning that this observation has also been made by Jiao et al. [A] in the context of the domain generalization person ReID task. They discovered that when training a ReID model to differentiate samples from different domains, the model could be misled into considering coarse-grained cues like backgrounds and seasons as primary factors, which, however, are ineffective for person ReID. Consequently, they introduced a domain-specific sampling method, which aligns in principle with our approach. 4.**Evaluation on the chameleon category.**\ In the previously submitted version of Wildlife-71, we had not included the chameleon category. Until now, as we continuously expand our dataset, we have amassed 134 distinct chameleon identities with various colors. Here, we utilized these newly acquired data as a test set and assessed the performance of our UniReID specifically on the chameleon category. In comparison with the top-performing competitor, DTIN-Net [A], our UniReID model demonstrated a 6.6% improvement in CMC-1 performance (68.9% vs. 62.3%), which indicates the superiority of our model. [A] Jiao BL, et al. 
Dynamically transformed instance normalization network for generalizable person re-identification. ECCV, 2022. --- Rebuttal Comment 1.1: Title: The author's response addresses most of my concerns and the authors are encouraged to provide some statistics on the final dataset Comment: Upon carefully reviewing the comments from both the reviewers and the authors' responses, my positive evaluation of this paper remains unchanged. The primary factor behind my positive stance is that this paper presents an intriguing task; however, it is possible that the claim of re-identifying any animal in the given topic may be overly ambitious, as pointed out by Reviewer#ABAL. Similar to the concerns expressed by Reviewer#ABAL, my primary concern pertains to the dataset provided by the authors. The authors have encountered challenges regarding the size of the provided dataset; nevertheless, they have included various intriguing categories, including chameleons. Notwithstanding the limited size of the current dataset, the authors are actively expanding its volume. Hence, I kindly request the authors to furnish statistical information regarding the final dataset (the version at the time of the paper's official publication, if it is accepted), thereby enabling all reviewers to reevaluate the significance of this study. --- Reply to Comment 1.1.1: Comment: Thanks for your support and acknowledgment. We will continue to expand our Wildlife-71 dataset, whether our paper is accepted or not. Besides, we will provide the latest statistical information on our dataset in an extended version of our final paper.
Summary: This paper proposes a new task called "Re-identify Any Animal in the Wild" (ReID-AW) and creates a comprehensive dataset called Wildlife-71 which is used to evaluate ReID-AW methods. Furthermore, the authors present a universal re-identification model named UniReID specifically for the ReID-AW task. This model receives dual-modal guidance, i.e., visual and textual guidance, to facilitate adaptation to the target category. Experimental results show that the UniReID model considerably surpasses all compared methods. Strengths: 1) New task: The authors propose a practical task named "Re-identify Any Animal in the Wild" and are the first to create a comprehensive dataset encompassing multiple object categories. 2) Integration of new technologies: The method uses a large-scale pre-trained model and visual prompt tuning to improve model performance. 3) Comprehensive experiments: The method yields good performance under multiple settings. Weaknesses: **Major Concerns** 1) My major concern is that the proposed framework is a good combination of many popular techniques: **a) GPT-4; b) multi-modal CLIP; c) Visual prompt tuning [10]; And focuses on a new dataset for d) animal re-identification.** a) The use of GPT-4 seems to just generate 4 fixed text phrases. This operation could simply be replaced by any other manual annotation or a search engine, including Wikipedia. The use of CLIP in b) and the generation of visual prompts in c) seem interesting, while using text guidance for visual features is quite common. For d), there exist many datasets focusing on animals, including tigers and fish. Thus, why not directly use these datasets, considering they are relatively large? And I am still confused why the proposed method only focuses on animal re-identification, and not other objects including vehicles or human beings. 2) Some details are not clear to me. As a re-identification task, how do the authors generate category-specific prompts for unseen IDs? 
In my view, the unseen IDs during inference do not have learnable specific prompts for further tuning. This design seems unnatural in the re-identification task. 3) There are also some other works using CLIP models for the same task of re-identification. This paper could incorporate them into a comparison, or at least a discussion. [A] *Li, S., Sun, L., & Li, Q. (2023, June). CLIP-ReID: Exploiting Vision-Language Model for Image Re-identification without Concrete Text Labels. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 1, pp. 1405-1413).* **Other Minor Concerns** 4) GPT-4 is not an open-source model, so the textual guidance generator relying on GPT-4 may be limited in its use. 5) The authors claim that "we aim to identify instances within the same category where inter-class divergence can be very subtle". This statement is not clearly explained, and the authors do not discuss it in detail. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section. The authors could respond to why these techniques are necessary and why they are distinctive for only the animal re-identification task. Or is animal re-ID just a newly proposed setting to attract research attention? I cannot see why this new setting is distinctive and should only be solved by these new techniques. Due to time limitations, the authors could focus on my major concerns as a matter of priority. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation is clearly stated after the conclusion. The authors claimed that they will incorporate other categories in future work, which is also one of my major concerns for this manuscript. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1.**Why the techniques are necessary and why it is distinctive for animals**\ Sorry for causing confusion. Here, we need to clarify that the techniques encompassed within our model are not designed to fit animal objects but to address the primary challenge of the ReID-AW task, i.e., identifying instances within unseen categories. It is worth noting that existing ReID methods are primarily designed and trained to handle instances within a specific category. For instance, a tiger ReID model [B] can only identify tiger instances. In this work, we attempt to transcend this category boundary and train, on limited seen animal categories, a universal ReID model that can generalize to unseen animal categories in the wild. To the best of our knowledge, research on such cross-category ReID remains an unexplored domain and presents a considerable challenge. Besides, we need to clarify that while our evaluation primarily centers on animal ReID, the method we present holds inherent potential for extension to a broader array of cross-category instance-level retrieval (CCIR) challenges. The foundational elements established in our approach offer two pivotal insights that can be harnessed for analogous CCIR scenarios: first, the utilization of category-adaptive representations, and second, the incorporation of text-guided representations to integrate pre-existing category-related knowledge from large language models (LLMs). 2.**Combination of popular techniques**\ Please note that our model is not merely a naive combination of existing techniques; instead, it is carefully constructed based on two key insights. Specifically, our first insight is to incorporate textual guidance from LLMs into the ReID model, empowering it with adequate knowledge to facilitate generalization. As R@9xKB also notes, this solution could inspire future research using LLMs. 
The second insight is to empower the ReID model with sufficient adaptability to efficiently adapt accumulated knowledge to handle unseen target categories. We believe that these two insights could not only address ReID-AW but also be harnessed for analogous CCIR scenarios. \ Regarding some specific concerns, we address them below. 1) GPT-4 and multi-modal CLIP: Leveraging textual knowledge from GPT-4 is non-trivial. Unlike the visual guidance widely used in existing ReID models, e.g., keypoints [B], this textual guidance cannot be directly applied to image features due to the inherent modality gap. To alleviate this gap and ensure that textual prompts can accurately associate with visual clues without explicit semantic annotations (e.g., segmentation maps), we propose to employ the CLIP model, adeptly trained to bridge visual and textual elements, embedding them into a shared hidden space for the subsequent attention operation. 2) Animal ReID datasets: Existing animal ReID datasets focus solely on a few categories. The limited knowledge contained in these datasets is insufficient to support the training of a universal ReID model that can generalize to unseen animal categories. Therefore, we construct a more comprehensive dataset, Wildlife-71, which contains 71 different animal categories, to train the universal ReID model. For the existing animal datasets, i.e., Zebra, Seal, **Tiger**, and Giraffe, we use them as test sets to evaluate the category generalization capability of ReID models. 3) Besides, our UniReID model does not overfit to animal objects. As shown in Section 2 of the Appendix, our UniReID also achieves state-of-the-art performance on the domain generalization person ReID task. 3.**Prompts for unseen IDs**\ During the inference phase, when deploying our UniReID model to address a specific target (unseen) category, such as the tiger, we initially provide a triplet of tiger images as support. 
Acquiring such minimal support data is feasible in real-world scenarios. Leveraging these support images, our visual guidance generator (elaborated in Section 3.3) can produce category-specific visual prompts shared across all target tiger images through a one-time, **fully forward process**. Armed with these category-specific prompts, our UniReID model can efficiently adapt to the tiger category without any training or tuning. 4.**CLIP-ReID**\ We compare our UniReID with the CLIP-ReID model [C] under the ReID-AW setting. Across the four test sets, our UniReID surpasses the CLIP-ReID model by 6.3% CMC-1 on average (61.4 vs. 55.1). The reason could be that CLIP-ReID focuses on using the knowledge embedded within CLIP but falls short in adapting the accumulated knowledge to novel categories, which is addressed by the visual and textual prompting strategy of our UniReID. 5.**Relying on GPT-4**\ Our UniReID does not rely on the usage of GPT-4. In fact, replacing GPT-4 with GPT-3, our UniReID still outperforms the best competitor, DTIN-Net [A], by 4.1% CMC-1 on average under ReID-AW. 6.**Explanation of "we aim to identify..."**\ This statement is intended to clarify the primary distinction between few-shot classification (FSC) and our ReID-AW. Specifically, our ReID-AW represents a more fine-grained task compared to FSC. Compared with FSC, which seeks only to determine the category of a target instance (*e.g.*, distinguishing between an elephant and a mouse), our ReID-AW goes one step further, diving into each specific category to recognize the unique identity of a target instance (*e.g.*, identifying one particular elephant among other elephants). As a result, the inter-class (*i.e.*, inter-identity) divergence in ReID-AW is much smaller than the inter-class (*i.e.*, inter-category) divergence in FSC, which inherently makes ReID-AW a more challenging task. [A] Dynamically transformed instance normalization network for generalizable person re-identification. 
ECCV22.\ [B] Part-pose guided amur tiger re-identification. ICCVW19.\ [C] CLIP-ReID: Exploiting Vision-Language Model for Image Re-identification without Concrete Text Labels. AAAI23. --- Rebuttal Comment 1.1: Title: Supplementary illustration of our insight behind adopting GPT-4 and the designed visual prompts. Comment: Due to space limitations, we did not illustrate our insight behind adopting GPT-4 and our designed visual prompts in Answer 2. Here, we would like to offer supplementary illustrations. 1) **GPT-4**: In fact, choosing GPT-4 is based on two of its superiorities. Firstly, GPT-4, as one of the most advanced LLMs, possesses substantial knowledge beneficial for category generalization. Secondly, after the GPT-4 API was released, our model can derive knowledge on the fly in a question-answering manner (as designed in Section 3.4) without any manual operation. 2) **Visual prompt tuning**: As you kindly note, the visual guidance strategy within our UniReID also represents a significant innovation. Different from existing visual prompt tuning methods [A], which require training a unique set of prompt vectors for each downstream task, our visual guidance generator can produce category-specific prompts via a **fully forward process**. To the best of our knowledge, this is the first attempt to formulate visual prompts in such an adaptive and tuning-free fashion. This innovation enables our UniReID model to efficiently adapt to unseen categories. [A] Jia M, et al. ``Visual prompt tuning'', ECCV, 2022. --- Rebuttal Comment 1.2: Title: Reply to rebuttals Comment: I appreciate the authors' detailed responses and the new results added to address my concerns. I carefully read the discussions of the other reviewers and the authors' responses. At this stage of discussion, I still have serious doubts about the novelty of this paper. 
From my point of view, the use of large language models such as GPT-4 is a highlight of this paper, and it is also the key technology behind why the proposed method can recognize "any" animal, if I understand correctly. However, by using GPT-4, the proposed technique seems to just change the original category word into a description of an object and input it into the CLIP model. For example, the original word description for "tiger" has been changed to a description like "some kind of four-legged reptile mammal". At this point, it seems that any linguistic dictionary could be used as a substitute for GPT-4 for any known human thing. Therefore, I wonder what the authors and other reviewers think about this issue. It's hard for me to call it an effective highlight or novelty. And it is hard to say it is an innovation in machine learning or computer vision. As other reviewers mentioned, one crucial issue for Re-ID tasks is their usability in the real world, and I feel that the authors did not fully convince the reviewers of this. --- Reply to Comment 1.2.1: Title: Response to Concerns of R@LLxx Comment: First, we would like to extend our sincere gratitude for your careful review and comments. 1.**The necessity of LLMs**: We are sorry for causing confusion about the role of LLMs in our approach. For the utilization of textual prompts given by the LLMs, our goal is to equip our UniReID model with the **knowledge about which specific clues are discriminative for identifying individuals within the target category** (like "Unique facial markings" and "distinctive ear shapes" for the panda, as shown in Figure 2), rather than a description of the general characteristics of the target category (like "tiger is a four-legged mammal"). With such fine-grained textual prompts, we can effectively guide our UniReID model to use appropriate knowledge and capture actually discriminative clues (via attention) for identifying target-category individuals. 
Besides, it is worth noting that providing such fine-grained knowledge is non-trivial: it requires a systematic understanding and summary of existing knowledge related to target categories and **could hardly be obtained straightforwardly from existing linguistic dictionaries**. Meanwhile, considering that ReID-AW is an open-world problem aimed at generalizing to unpredictable and unseen target categories, it is also not feasible to manually conduct exhaustive summarization and annotation for all potential target categories. To address this issue, in this work we observe that the significantly developed LLMs offer a promising solution. Specifically, the rich knowledge and reasoning ability of LLMs enable them to summarize their learned knowledge related to target categories into textual prompts about discriminative clues, which can be used as guidance to adapt ReID models. As far as we know, in the ReID domain, our UniReID makes the first attempt to construct category-adaptive and discriminative features to identify individuals of unseen categories leveraging knowledge from LLMs, which we believe could provide insights for a broader array of cross-category instance-level retrieval challenges. Meanwhile, as **R@9xKB** mentioned, "This can inspire researchers in using GPT". 2.**Usability of ReID-AW**: Regarding the usability of the ReID-AW task, as mentioned by **R@5HdD**, it could be valuable in the field of wildlife conservation. Specifically, a major motivation driving us to propose ReID-AW is to address some crucial tasks in protecting endangered wildlife, e.g., endangered population monitoring. For these endangered wildlife categories, a dataset with enough samples to train a category-specific ReID model can hardly be collected due to their rarity. 
Based on this observation, we construct our Wildlife-71 dataset and propose the ReID-AW task, which aims to develop a ReID model trained on our collected 67 wildlife categories that could generalize to unseen wildlife categories. We believe such models could be useful for relevant wildlife conservation tasks (as mentioned in our abstract and introduction). Finally, we hope that the respected reviewer, when assessing the research and practical value of our ReID-AW, could take into account the positive comments from other reviewers and consider the efforts we have made to introduce it to the community.
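The text-guided attentive operation discussed throughout this thread (textual prompts and visual tokens embedded into a shared CLIP-style space, with visual tokens attending to the prompts) can be sketched roughly as follows. This is only an illustrative approximation with random features and made-up dimensions, not the authors' actual UniReID implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_attention(visual_tokens, text_prompts):
    """Visual tokens attend to textual prompt embeddings that live in the
    same (CLIP-style) shared space; the attended text features are added
    back to the visual tokens as guidance."""
    d = visual_tokens.shape[-1]
    scores = visual_tokens @ text_prompts.T / np.sqrt(d)  # (num_vis, num_txt)
    attn = softmax(scores, axis=-1)                       # rows sum to 1
    return visual_tokens + attn @ text_prompts            # residual guidance

rng = np.random.default_rng(0)
vis = rng.standard_normal((5, 8))  # 5 visual patch tokens, dim 8
txt = rng.standard_normal((4, 8))  # 4 textual prompts (e.g., LLM-derived clues)
out = text_guided_attention(vis, txt)
print(out.shape)  # prints (5, 8)
```

In UniReID, as the rebuttals note, such category-specific prompt features would be computed once per target category and then reused across all query images of that category.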
Summary: This paper extends the common person or vehicle ReID task to re-identifying any animal in the wild, called the ReID-AW task. First, the authors propose an animal ReID dataset for the ReID-AW task that contains 71 different categories. Then, the authors also propose a new method for this task to tackle its specific challenges. Specifically, the authors propose to use GPT-4 to generate prompts to guide attention learning on the visual embedding, which can inspire researchers in using GPT. Strengths: This paper extends the common person or vehicle ReID task to re-identifying any animal in the wild, called the ReID-AW task. First, the authors propose an animal ReID dataset for the ReID-AW task that contains 71 different categories. Then, the authors also propose a new method for this task to tackle its specific challenges. Specifically, the authors propose to use GPT-4 to generate prompts to guide attention learning on the visual embedding, which can inspire researchers in using GPT. Weaknesses: --- In the experiments, only "zebra", "seal", "giraffe", and "tiger" are used as test sets. It would be better to evaluate the performance by setting other categories as test sets. --- Compared with other methods, the proposed method may suffer from high computation complexity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: --- What is the difference between using GPT-4 and other LLMs such as GPT-3 or LLaMA? --- If some instances change along with time, will the model work? --- The authors use 4 text prompts in this method. Will using more prompts work better? --- Will the model trained on Wildlife-71 work well on a person ReID dataset? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Evaluate the performance on other categories.**\ Thank you for your suggestion. To extend our test set, we have further incorporated new instances from two seen categories, *i.e.*, alligator and morgan, as well as from an unseen category named chameleon. On these three categories, our UniReID achieves CMC-1 accuracies of 71.2%, 55.8%, and 68.9%, respectively. Compared with the top-performing competitor, DTIN-Net [A], which respectively achieves 67.8%, 52.7%, and 63.3% CMC-1 accuracy, our UniReID markedly outperforms it, which indicates its superiority. 2. **Comparison of computational complexity with other methods.**\ Compared with the TransReID [B] method, which uses the same ViT backbone as our UniReID, the floating-point operations in our model increase by only 1.4 GMACs (19.7 vs. 18.3). Additionally, the time taken to process and infer a single query image (tested on the Zebra dataset) shows a minor increase of 0.002 seconds (0.044 vs. 0.046), which we consider acceptable. It is important to highlight that, for a specific target category, both the visual and textual prompts are identical across all input images of that category. This means their features need to be computed only once and can then be reused for all test images. Apart from this, the additional computational cost of our UniReID model stems merely from the text-guided attentive module. 3. **The difference between using GPT-4 and other LLMs such as GPT-3 or LLaMA.**\ As one of the most advanced language models, GPT-4 is trained on more data than GPT-3 and LLaMA. Nonetheless, it is crucial to note that the superiority of our UniReID does not depend solely on the usage of GPT-4. In fact, replacing GPT-4 with GPT-3, our UniReID suffers only an average 0.3% mAP degradation (42.5 vs. 42.2). 4. **If some instances change over time, will the model work?**\ Our UniReID possesses the capability to adapt to varying inputs. 
Crucially, UniReID can adapt to instances from any specific target category via a fully forward process, guided by the provided visual and textual prompts, without any need for fine-tuning. Consequently, if target categories and instances change over time, a mere modification of the visual and textual prompts can adapt our model accordingly. This adaptability is a significant advantage of our UniReID model. 5. **Will using more text prompts work better?**\ To respond to this issue, we increased the number of textual prompts to 6 and 8, respectively, and utilized them to train our UniReID. From the results, we find that using 6 and 8 prompts brings only an average 0.3% and 0.2% mAP improvement, respectively, to our UniReID. Upon analyzing the generated prompts, we observed that compelling GPT-4 to produce an excessive number of prompts could cause them to concentrate on less distinctive clues, which can hardly result in performance improvement. 6. **Will the model trained on Wildlife-71 work well on a person ReID dataset?**\ To address this concern, we evaluated our UniReID model, trained on the Wildlife-71 dataset, using the test set of the Market-1501 dataset. Compared to the UniReID model trained on the person ReID dataset (details are given in Section 2 of the Appendix), the model trained on Wildlife-71 shows a 29.2% decrease in CMC-1 performance (72.6 vs. 43.4). This decline can be attributed to the significant appearance and semantic differences between wildlife and pedestrians. The knowledge accumulated on Wildlife-71, such as patterns of fur, feathers, or horns, can hardly generalize to the pedestrian instances in Market-1501. [A] Jiao BL, et al. Dynamically transformed instance normalization network for generalizable person re-identification. ECCV, 2022.\ [B] He ST, et al. TransReID: Transformer-based object re-identification. ICCV, 2021.
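The prompt-feature caching described in point 2 of the rebuttal can be sketched as follows. This is a hypothetical illustration, not UniReID's actual code: the encoder functions, cache layout, and 8-dimensional embeddings are stand-ins we invented, since the model's real interfaces are not given in the rebuttal.

```python
import numpy as np

# Hypothetical stand-ins for UniReID's prompt encoders; dummy random
# embeddings are used because the real model is not available here.
def encode_text_prompts(prompts):
    rng = np.random.default_rng(abs(hash(tuple(prompts))) % (2**32))
    return rng.normal(size=(len(prompts), 8))

def encode_visual_prompt(category):
    rng = np.random.default_rng(abs(hash(category)) % (2**32))
    return rng.normal(size=8)

_prompt_cache = {}

def category_prompt_features(category, text_prompts):
    # Prompts are fixed per target category, so their features can be
    # computed once and then reused for every query image of that category.
    if category not in _prompt_cache:
        _prompt_cache[category] = (
            encode_visual_prompt(category),
            encode_text_prompts(text_prompts),
        )
    return _prompt_cache[category]

feats_a = category_prompt_features("zebra", ["stripe pattern", "mane shape"])
feats_b = category_prompt_features("zebra", ["stripe pattern", "mane shape"])
assert feats_a is feats_b  # second call hits the cache; no recomputation
```

Because the prompts are fixed per category, the second lookup returns the cached features without re-running either encoder, which is consistent with the rebuttal's claim that the extra inference cost stays small.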
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Towards a Better Theoretical Understanding of Independent Subnetwork Training
Reject
Summary: The authors provided a theoretical analysis for independent subnetwork training (IST) and a convergence analysis for the case where communication compression is present. The authors discussed the scenario where bias is not present, providing two analyses for the homogeneous and heterogeneous cases respectively, before extending their theorems to the case with bias. Strengths: 1. Sections 3 and 4 are well written and clear; they break the scenarios down in an intuitive way and present the theorems and outlines clearly. Weaknesses: 1. The introduction could be more straightforward and dive straight into the main technical contributions of the work. It was not clear why the problem is well motivated and what the main technical hurdles are until reading Section 2 and onwards. A clearer presentation in the intro would make the paper much more readable and better motivated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The authors mentioned that prior work on the convergence of IST focuses "on overparameterized single hidden layer neural networks with ReLU activations". It is not entirely clear to me why the authors considered the quadratic form and what the tradeoff is between their work and prior work in this regard. A more thorough explanation would be appreciated. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is theoretical and does not seem to have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LM8J, Thanks for the time and effort devoted to our paper. We appreciate the positive evaluation of our work. # Responses to questions > The authors mentioned that prior work on the convergence of IST focuses "on overparameterized single hidden layer neural networks with ReLU activations". It is not entirely clear to me why the authors considered the quadratic form and what the tradeoff is between their work and prior work in this regard. A more thorough explanation would be appreciated. We have tried to clarify the key differences between our approaches in the general [response](https://openreview.net/forum?id=ldulVsMDDk&noteId=aaNJQt5Ksk) to all reviewers. ## Comments on Weaknesses > The introduction could be more straightforward and dive straight into the main technical contributions of the work. It was not clear why the problem is well motivated and what the main technical hurdles are until reading Section 2 and onwards. A clearer presentation in the intro would make the paper much more readable and better motivated. Thank you for the suggestion. We will try to modify the introduction according to your proposal. Best regards, Authors
Summary: The paper provides a theoretical analysis of the convergence properties of Independent Subnetwork Training (IST), for distributed Stochastic Gradient Descent (SGD) optimization with a quadratic loss function. The analysis considers both the cases of homogeneous and heterogeneous distributed scenarios, without restrictive assumptions on the gradient estimator. The work characterizes situations where IST converges very efficiently, and cases where it does not converge to the optimal solution but to an irreducible neighbourhood. Experimental results that validate the theory are provided in the Appendix. Strengths: The paper provides a solid analytical treatment of the important problem of distributed optimization with reduced communication overhead by means of Independent Subnetwork Training (IST). Compared to previous work, the analysis of the paper does not rely on the restrictive assumption of a bounded stochastic gradient norm. The paper is well written – the exposition is clear, and the material is well structured and well presented. Weaknesses: The work considers distributed Stochastic Gradient Descent (SGD) training with a quadratic loss. As mentioned by the authors in Section 3, a simple quadratic loss function has been used in other work to analyze properties of neural networks. While this loss function can still provide interesting theoretical insights, it would be valuable to extend the analysis and the experimental results to more generally used loss functions. Minor comments: -- Line 17: "drives from" may be changed to "derives from". -- Equation after line 217: it seems that the first part of the equation "$\mathbb{E}[g^k] = \bar{\mathbf{L}}^{-1} \bar{\mathbf{L}} x^k \pm \bar{\mathbf{L}}^{-1} \bar{b} - \frac{1}{\sqrt{n}} \tilde{\mathbf{D} b} = $ ..." should be rewritten as "$\mathbb{E}[g^k] = \bar{\mathbf{L}}^{-1} \bar{\mathbf{L}} x^k - \frac{1}{\sqrt{n}} \tilde{\mathbf{D} b} = $ ...". 
-- Equation after line 221: it seems that the first part of the equation "$\mathbb{E}[x^{k+1}] = x^k - \gamma \mathbb{E}[g^k] = $ ..." should be "$\mathbb{E}[x^{k+1}] = \mathbb{E}[x^k] - \gamma \mathbb{E}[g^k] = $ ...". Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be valuable if the authors could consider a discussion or possible additional experiments to extend some of the results and insights presented in the paper to the case of distributed optimization with more generally used loss functions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work relates to distributed training of large-scale models, which generally correspond to significant power consumption and CO2 emissions. However, the IST method studied in the paper aims at allowing distributed training with reduced communication overhead, corresponding to potentially reduced power consumption. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LM8J, Thanks for the time and effort devoted to our paper. We greatly value the careful reading of the material and the positive evaluation of our work. # Responses to questions > It would be valuable if the authors could consider a discussion or possible additional experiments to extend some of the results and insights presented in the paper to the case of distributed optimization with more generally used loss functions. Thank you for this suggestion. Let us comment on the matter of generalizing our results to non-quadratic settings. First, we note that there have already been attempts [45, 48] to analyze similar classes of methods for a different class of loss functions. Their results are discussed in Section 4.3 and Appendix C in more detail. We find their convergence bounds unsatisfying from a theoretical viewpoint due to overly restrictive additional assumptions (e.g., on the bounded gradient norm or sparsification parameter), which lead to vacuous bounds in certain cases. That is why we decided to take a step back and start with a quadratic problem setting, which allowed us to perform a meaningful theoretical analysis and reveal the advantages and limitations of IST. Generalizing our results to a non-quadratic setting is currently open, and it is not entirely clear how difficult the problem is. When we tried to resolve this issue ourselves, we found that we are not aware of a theoretical framework that allows performing the analysis for a general class of $L$-smooth functions, due to the challenges (e.g., the biased gradient estimator) mentioned in Section 2.1 and before Section 4.3. ## Clarifications on “minor comments” > -- Equation after line 221: Thank you for this comment. Let us clear this up. On line 221, we mention “proper conditioning”, which means applying the following conditional expectation $\mathbb{E}\left[\cdot | x^k \right]$. 
That being said, a more formal form of equation (28) on line 221 is $$ \mathbb{E}\left[x^{k+1} | x^k\right] = \mathbb{E}\left[x^k\right]-\gamma \mathbb{E}\left[g^k | x^k\right]. $$ So there is no randomness in $x^k$ on Line 221. > -- Equation after line 217: Thank you for the suggestion. Let us clarify what is meant there. The transformation that is done there is based on the idea of “smart” zero: adding and subtracting the same term $$ \mathbb{E}\left[g^k | x^k\right] = \overline{\mathbf{L}}^{-1} \overline{\mathbf{L}} x^k - \frac{1}{\sqrt{n}} \widetilde{\mathbf{D} \mathrm{b}} = \overline{\mathbf{L}}^{-1} \overline{\mathbf{L}} x^k - \overline{\mathbf{L}}^{-1} \overline{\mathrm{b}} -\frac{1}{\sqrt{n}} \widetilde{\mathbf{D} \mathrm{b}} + \overline{\mathbf{L}}^{-1} \overline{\mathrm{b}} = \overline{\mathbf{L}}^{-1} \nabla f\left(x^k\right) - \frac{1}{\sqrt{n}} \widetilde{\mathbf{D b}} + \overline{\mathbf{L}}^{-1} \overline{\mathrm{b}}. $$ Best regards, Authors --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing the points raised in my review and for proposing to clarify the exposition. I confirm that I am satisfied with their answers provided by the authors in the rebuttal. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thanks!
Summary: Independent Subnetwork Training (IST) is a technique that divides a neural network into smaller independent subnetworks, trains them in a distributed parallel way, and aggregates the results of each independent subnetwork to update the weights of the whole model. This paper aims to analyze the behavior of IST theoretically. Specifically, it considers a quadratic model trained by IST. It conducts convergence analysis under both homogeneous and heterogeneous scenarios and shows that IST can only converge to an irreducible neighborhood of the optimal solution. Strengths: This is probably the first work providing a thorough theoretical analysis of IST. Weaknesses: 1. This paper should include a more comprehensive motivation for the theoretical study of IST. This could involve discussing the potential limitations of current IST architectures and how a theoretical analysis can guide future modifications to improve their performance. By doing so, reviewers will have a clearer understanding of the significance of the paper's findings and how they can be applied in practice. 2. The main body of this paper does not contain any experimental results. The authors should include some key experiments to validate their theoretical analysis. 3. The authors should consider expanding the scope of their experiments beyond quadratic models to include other types of models that are commonly used in SOTA IST papers, e.g., ResNet and Graph Convolutional Networks, as listed in this paper's references. This would allow reviewers to better understand the generality of the paper's findings and how they can be applied to real-world applications. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How can the findings of this work help the future design of IST architectures? 
In addition, the authors should include more experiments on SOTA IST applications, e.g., ResNet and Graph Convolutional Networks, to indicate the generality and significance of this paper's findings. 2. The authors should clarify the assumptions made in the permutation example in Section 3.1. Specifically, they should explain the case where n=d^2 and the use of the Perm-1 sketch, since according to Definition 2, Perm-q refers to d=qn, which would lead to n=d=1, a naïve configuration. By clarifying the assumptions made in this section, the reader can better understand the example and its implications for IST. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Future works are discussed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer wecA, Thanks for the time and effort devoted to our paper. # Responses to questions > How can the findings of this work help the future design of IST architectures? In addition, the authors should include more experiments on SOTA IST applications, e.g., ResNet and Graph Convolutional Networks, to indicate the generality and significance of this paper's findings. 1. Our main goal is to formalize the IST problem setting mathematically and provide a meaningful convergence theory. We are making the first step in rigorously understanding this family of algorithms that combine model and data parallelism, which has almost never been studied. This work does not aim to suggest a new neural network architecture for IST; we focus on analyzing the optimization (and not generalization) aspects. Our insights reveal the advantages and limitations of IST: - In the interpolation case (often considered to apply to modern Deep Learning models), IST can perform very efficiently in both homogeneous and heterogeneous scenarios. - In the more general case, we show that naïve/vanilla IST’s convergence heavily depends on the heterogeneity level, which may require decreasing the step size later throughout the optimization. While in a data center (local cluster) setting this can be fixed by access to shared data, in federated learning (very heterogeneous) a different method may be needed. We believe that to provide guidance for the future design of IST, it is first important to understand its basic and fundamental properties. There are plenty of works, mentioned in the Introduction of our paper, that explore the experimental aspects of IST. The motivation for this work is therefore the lack of understanding of the theoretical properties of methods that perform training with compressed models. 
Additionally, we would like to highlight that the main contribution of our work is on the theory side, which is why in our experiments we focus on well-controlled settings that satisfy the assumptions in our paper, to provide evidence that our theory translates into observable predictions. These are well-designed experiments that do support our theory and main claims. Since our results guarantee that the methods work, we do not need to test them extensively on large or complicated datasets and models to show that they do (which is clearly necessary for heuristics not supported by any theory). Our goal was not to claim practical SOTA on some benchmarks - that's not what our paper is about. > The authors should clarify the assumptions made in the permutation example in Section 3.1. Specifically, they should explain the case where n=d^2 and the use of the Perm-1 sketch, since according to Definition 2, Perm-q refers to d=qn, which would lead to n=d=1, a naïve configuration. By clarifying the assumptions made in this section, the reader can better understand the example and its implications for IST. 2. Thank you for pointing this out. Please note that the superscript 2 in $n=d^2$ is not a power but a footnote link. So $n=d$ is meant there; thus, there is no problem leading to the “naïve configuration” $n=d=1$. We have already fixed this issue in the revision of the paper. ## Comments on Weaknesses > The main body of this paper does not contain any experimental results. The authors should include some key experiments to validate their theoretical analysis. We would like to highlight that the title of our work is _“Towards a Better **Theoretical** Understanding of IST”_. That is why, in the main part of the paper, we focus on contributions coming from the theoretical analysis. Another reason is the limited space allowed by the conference submission format. 
If the reviewer believes that moving the experimental results to the main part will strengthen the points we are making, we can do this in the camera-ready version. Regards, Authors --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. The contributions and limitations of this work are clear now. The reviewer does not have further questions. --- Reply to Comment 1.1.1: Title: No further questions Comment: Thanks for confirming. If you are satisfied with the replies, please consider raising your score accordingly. Thanks! authors
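To make the Perm-1 configuration discussed in the rebuttal above concrete, here is a toy construction of permutation sketches. This is our own sketch following the standard permutation-compressor idea; the paper's exact Definition 2 (and its scaling convention) may differ.

```python
import numpy as np

def perm1_sketches(d, rng):
    # Perm-1 with n = d workers: a random permutation assigns each of the
    # d coordinates to exactly one worker; worker i's sketch C_i keeps only
    # its coordinate, rescaled by n so the sketches average to the identity.
    n = d
    perm = rng.permutation(d)
    sketches = []
    for i in range(n):
        C = np.zeros((d, d))
        C[perm[i], perm[i]] = n
        sketches.append(C)
    return sketches

rng = np.random.default_rng(0)
d = 4
Cs = perm1_sketches(d, rng)
assert np.allclose(sum(Cs) / d, np.eye(d))  # (1/n) * sum_i C_i = I
```

Each sketch is rank one and the collection covers every coordinate exactly once, which is why the average of the sketches recovers the identity matrix deterministically.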
Summary: This submission presents a theoretical analysis for the independent subnetwork training (IST) algorithm for training models under both data and model parallel settings. The convergence guarantees are analyzed for quadratic loss functions, when using permutation sketches as the model compressors. Strengths: - Optimization using data and model parallelization is an important practical problem, for which more theoretical insights are welcome - The submission presents a fairly thorough analysis of the convergence guarantees for IST for the quadratic loss function analyzed - The authors highlight some limitations of IST, which would be useful to be aware of (e.g. the fact that in the general case of the quadratic function the algorithm does not converge to the solution) - The submission is overall well written, and the authors are careful to introduce the setup and all the assumptions used in their analysis Weaknesses: - Overall the analysis presented in this submission is very limited, as it only deals with a specific type of loss function, which is not a practical instance where model parallelization would be useful - While the authors argue that the quadratic model has been previously used in the literature for studying neural networks (lines 140-144), in my understanding this model still relies on a Taylor approximation for the loss function of a neural network (for non-linear models). It is therefore not clear how the error introduced through this approximation would translate into the convergence analysis presented in the submission - The authors use additional simplifying assumptions, such as the fact that each node can compute the true gradient of its submodel, which would be infeasible in the case of large datasets. Additionally, the results are only presented for Perm-1 sketches when the number of nodes matches the dimension of the model, which is again not a practical use-case. 
While the authors argue that their results can generalize beyond these limitations, a more general formulation is not provided in the submission. - Other works have analyzed the convergence guarantees for IST; notably, [28] has done so for a one-hidden-layer network with ReLU activations, which is a more general case than the one from this submission. It is not clear what additional insights are presented here compared to the previous work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the authors please detail how their choice of the quadratic loss function ties to practical applications of model parallelism (e.g. for neural networks)? - In Equation (12), do the matrices $L_i$ also need to be positive semi-definite? - For Theorems 1 and 2, can the authors please describe how the results would change when using an unbiased gradient estimator, instead of the full gradient for each node? - Are there any situations, without preconditioning, which would satisfy the conditions from Theorem 1, for model parallelization? The Identity example from Section 3.1 seems to only apply to data parallelization. - For Equation (23), I am confused by the notation for the scaling coefficient. Should it be $[(L_i)_{\pi_i, \pi_i}]^{-1/2 }$ instead? - From the analysis in Section 3.1.2, it looks like the fact that $C_i$ is biased does not have any effect on the bound. Can the authors provide more insights on why that is the case? - In Appendix B.4 the authors show generalizations for $d>n$ and for different sketches. I think these would be useful to highlight in the main submission, since they correspond to a more practical setting. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I believe the authors have properly addressed the limitations of their analysis. However, I am not convinced that the contributions presented in this submission are broad enough, which ultimately motivates my score. --------------------------------- **Edited after rebuttal** After reading the authors' answers, and the other reviews, I decided to increase my overall rating to 5, as well as the scores for Soundness, Presentation and Contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 87cd, Thanks for your time and effort. Let us respond to the mentioned weaknesses and questions separately. # Responses to questions > Can the authors please detail how their choice of the quadratic loss function ties to practical applications of model parallelism (e.g. for neural networks)? The quadratic loss function is one of the most common (along with cross-entropy) choices of loss function in supervised machine learning, and neural networks are not an exception. Please note that some of the reasons why the quadratic model was chosen are listed in the main paper. We would like to bring your attention to the fact that even for such a “simple” problem, the analysis is very non-trivial because the resulting gradient estimator is biased. We show that even in a homogeneous interpolation case, the algorithm may not converge. Our main goal is to study the properties of the method used for training large models with combined model and data parallelism. We believe that the quadratic problem is very demonstrative for our purposes. In addition, as of today, we are not aware of any optimization theory for deep neural networks that does not simplify the actual practical setting. > In Equation (12), do the matrices $L_i$ also need to be positive semi-definite? No, it is not necessary. For Theorem 1, we require the average matrix $\bar{\mathbf{L}} = \frac{1}{n} \sum_{i=1}^n \mathbf{L}_i$ to be positive definite. Later, for heterogeneous sketch preconditioning (Sec. 3.1.2), we also need the existence of $\mathbf{D}_i^{-1/2}$ for $\mathbf{D}_i = \mathrm{Diag}(\mathbf{L}_i)$, which is more general than positive semi-definiteness of every $\mathbf{L}_i$. > For Theorems 1 and 2, can the authors please describe how the results would change when using an unbiased gradient estimator, instead of the full gradient for each node? 
We would like to note that the current form of the question probably does not allow a proper answer, as we are not aware of any analysis of Stochastic Gradient Descent (SGD)-type methods under only an unbiasedness assumption. However, if the gradient estimator $g(x)$ enjoys a bounded variance property (one of the most used in the stochastic optimization literature), $\mathbb{E} \|g(x) - \mathbb{E} g(x)\|^2 \leq \delta^2$, then our results can be easily extended. Namely, such local gradient estimators will introduce an additional neighborhood term $\gamma \delta^2$ in our convergence bounds. This can be obtained using the bias-variance decomposition equation (38). > Are there any situations, without preconditioning, which would satisfy the conditions from Theorem 1, for model parallelization? No, we are not aware of any such cases beyond the ones studied in our work. We want to note that if random (instead of permutation) sparsification is used, it will lead to similar challenges related to satisfying inequality (14) and Remark 1. > For Equation (23) ... should it be $[(L_i)_{\pi_i, \pi_i}]^{-1/2 }$ instead? Yes, you are correct. Thank you for spotting this. We have already fixed it in the revision of the paper. > Can the authors provide more insights on why biased $C_i$ does not have any effect on the bound in Section 3.1.2? At first, we found this result quite surprising, as such good convergence bounds indeed seem unusual for a method with a biased gradient estimator. To share some insight and our intuition, we would like to note that our sketch preconditioning results in the following gradient estimator: $\mathbb{E} g^k = \mathbb{E} \overline{\mathbf{B}}^k x^k = x^k,$ which can be viewed as an optimal preconditioning of the true gradient of the quadratic (interpolated) problem: $\overline{\mathbf{L}}^{-1} \nabla f(x^k) = \overline{\mathbf{L}}^{-1} \overline{\mathbf{L}} x^k = x^k$. 
Thus, the standard permutation sketch only leaves (as non-zeros) the diagonal elements, which are then scaled by our modification with the local smoothness matrices $\mathbf{L}_i$. > I think it would be useful to highlight the generalizations for $d>n$ and for different sketches in the main submission, since they correspond to a more practical setting. Thank you for the suggestion. We originally left this material in the Appendix in order to simplify the presentation of the key results. ## Comments on Weaknesses > additional simplifying assumptions, such as the fact that each node can compute the true gradient of its submodel Please note that we made these simplifications with a particular purpose in mind. Our goal was not to analyze the problem closest to a practical setting but rather to focus on particular new (and challenging) properties of the considered formulation. Quite often, such “closest to practical” approaches unfortunately lead to very loose (sometimes even vacuous) bounds, obtained under restrictive (hard-to-check) assumptions introduced to facilitate the analysis, e.g., a bounded gradient norm and strong convexity (almost conflicting conditions). We believe that the contribution of this work includes the formalization of a novel theoretical setting, which basically has not been studied before. That is why it was essential for us to “isolate” the effects of homogeneous/heterogeneous distribution and of computations with respect to submodels. To illustrate our position, let us refer to the breakthrough optimization works on clipping [1] and local methods [2] (in addition to the works on GD with delayed updates and cyclical step sizes mentioned in our paper), which considered simple (full) gradient descent updates; this allowed the authors to focus on the particular challenges of the considered problem formulations and provided insights that led to an improved understanding of the corresponding methods. ___ [1] Zhang, Jingzhao, et al. 
"Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity." ICLR 2019. [2] Khaled, Ahmed, et al. "First analysis of local GD on heterogeneous data." arXiv preprint:1909.04715 (2019). Regards, Authors --- Rebuttal Comment 1.1: Comment: Thank you for your answers and clarifications! I have increased my rating from 4 to 5. Although I agree that this is a thorough theoretical analysis, I still believe the overall setting is limited. I would like to ask the authors to include in the next revision a more comprehensive discussion on how the insights from the current theoretical analysis could be used in a more practical and general setting.
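The bias-variance decomposition the authors invoke in this rebuttal (their equation (38)) is the identity $\mathbb{E}\|g-a\|^2 = \|\mathbb{E} g - a\|^2 + \mathbb{E}\|g - \mathbb{E} g\|^2$, which holds exactly even with empirical means substituted for expectations. A quick numerical check with placeholder values (the distribution and reference point below are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stochastic gradient samples: g = mu + Gaussian noise.
mu = np.array([1.0, -2.0, 0.5])
a = np.zeros(3)  # reference point (e.g., a stationary point)
g = mu + rng.normal(scale=0.3, size=(100_000, 3))

mse = np.mean(np.sum((g - a) ** 2, axis=1))               # E ||g - a||^2
bias_sq = np.sum((g.mean(axis=0) - a) ** 2)               # ||E g - a||^2
var = np.mean(np.sum((g - g.mean(axis=0)) ** 2, axis=1))  # E ||g - E g||^2

# The decomposition holds exactly for sample averages (up to float error),
# since the cross term vanishes when the empirical mean is used.
assert abs(mse - (bias_sq + var)) < 1e-8 * mse
```

In the rebuttal's argument, the variance part is what contributes the extra $\gamma\delta^2$ neighborhood term under a bounded variance assumption.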
Rebuttal 1:

Rebuttal: Dear Reviewers,

Firstly, we would like to thank the reviewers for their time reading our paper and for their feedback! This is all much appreciated. We are encouraged that the considered problem is found _important and practical_, needing theoretical insights (R. 87cd). We are pleased that the limitations we highlighted are deemed _useful to be aware of_ (R. 87cd). We are glad our analysis is recognized as a _solid analytical treatment_ (R. LM8J). Reviewers ijy7 and LM8J also acknowledged the _clarity of writing and good presentation_ of the material.

Concerning the comparison with the work of **Liao and Kyrillidis [28]**: we believe our works are so different that a detailed description of the distinctions does not belong in our submission. After reading the reviews, we may incorporate the response below into the Appendix of the revised version of our work. Next, we give detailed comments on the differences. `Disclaimer:` we try our best to briefly and accurately represent some of the previous work's findings.

### Assumptions

- The authors of [28] consider a "Single Hidden-Layer Neural Network with ReLU activations" and assume that the network's first-layer weights are initialized from $\mathcal{N}(0, \kappa^2 \mathbf{I})$ and that the weight vector of the second layer is initialized uniformly at random from $\{-1, 1\}$. In contrast, we do not make any assumptions on the initialized parameters $x$ (in our notation).
- The second difference concerns assumptions on the data. The paper [28] assumes that for every data point $(a_j, y_j)$, it holds that $\|a_j\|^2 = 1$ and $|y_j| \leq C-1$ for some constant $C \geq 1$. Moreover, for any $j \neq l$, the points $a_j, a_l$ are not co-aligned, i.e., $a_j \neq \xi a_l$ for any $\xi \in \mathbb{R}$. In contrast, we do not make any assumptions about the data apart from the ones on the matrices $\mathbf{L}_i$.
- In addition, the analysis in [28] assumes that the number of hidden nodes is greater than a certain quantity and that the distance of the NN's weights from initialization is uniformly bounded.

### Model

The authors of [28] consider a **regression (MSE) loss** function, a special case of quadratic loss, and **full gradient** computation. They provide guarantees for IST under a "simplified assumption that every worker has full data access", which corresponds to the homogeneous setting in our terminology.

### Analysis

Their analysis is based on the Neural Tangent Kernel (NTK) framework, which typically relies on a first-order Taylor approximation of the neural network output. This approximation depends on the finite-width NTK matrix $\mathbf{H}(k)$ at iteration $k$. In the overparameterized regime, the change of $\mathbf{H}(k)$ is small, so it stays close to the NTK at initialization, which is in turn approximated by the infinite-width NTK. In the end, this series of relaxations leads to the following **quadratic loss approximation**:
$$
f_{k+1} \approx f_k + \left\langle\nabla_{\mathbf{u}_k} f_k, \mathbf{u}_{k+1}-\mathbf{u}_k\right\rangle \approx f_k - \xi \eta\left\langle\mathbf{u}_k - \mathbf{y}, \mathbf{H}(k)\left(\mathbf{u}_k - \mathbf{y}\right)\right\rangle,
$$
which, combined with the assumption on the minimum eigenvalue $\lambda_{\min}(\mathbf{H}(k)) \geq \frac{\lambda_0}{2} > 0$, leads to a linear convergence result for a strongly convex problem. Moreover, the results in [28] show linear convergence of the model weights to a neighborhood of the solution, which is typical for strongly convex and smooth problems. In contrast, our results focus on reaching a stationary point (although we also provide some bounds for the iterates and loss convergence), which is more common in non-convex settings. Another difference is that they provide a high-probability convergence analysis.

We hope that there will be a productive discussion. Please do not hesitate to ask any further questions!
Best regards,
Paper 6457 Authors

___

[28] Fangshuo Liao and Anastasios Kyrillidis. "On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons." Transactions on Machine Learning Research, 2022.
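The linearized NTK dynamics described above can be checked numerically. Below is a minimal NumPy sketch (the matrix `H` is a fixed, synthetic positive-definite surrogate for the NTK, and the sizes are our own illustrative choices, not the setup of [28]): it runs the recursion $\mathbf{u}_{k+1} = \mathbf{u}_k - \eta \mathbf{H}(\mathbf{u}_k - \mathbf{y})$ and verifies the per-step linear contraction of the loss implied by $\lambda_{\min}(\mathbf{H}) > 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20
# A fixed PSD "NTK-like" matrix with strictly positive minimum eigenvalue
# (the shift by 0.5*I plays the role of the lambda_0/2 > 0 assumption).
A = rng.standard_normal((m, m))
H = A @ A.T / m + 0.5 * np.eye(m)
eigs = np.linalg.eigvalsh(H)
lam_min, lam_max = eigs[0], eigs[-1]

y = rng.standard_normal(m)        # targets
u = np.zeros(m)                   # network outputs at initialization
eta = 1.0 / lam_max               # step size

losses = []
for _ in range(200):
    losses.append(0.5 * np.dot(u - y, u - y))
    u = u - eta * H @ (u - y)     # linearized (NTK) gradient step

# Linear convergence: the loss contracts by at least (1 - eta * lam_min) per step.
rate = 1 - eta * lam_min
assert 0 < rate < 1
assert all(l1 <= rate * l0 + 1e-12 for l0, l1 in zip(losses, losses[1:]))
```

Since $\mathbf{u}_{k+1} - \mathbf{y} = (\mathbf{I} - \eta\mathbf{H})(\mathbf{u}_k - \mathbf{y})$, the error is multiplied each step by a matrix whose spectrum lies in $[0, 1 - \eta\lambda_{\min}]$, which is exactly the contraction the assertion checks.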
NeurIPS_2023_submissions_huggingface
2023