Record fields (all string unless noted): title, paper_decision, review_1, rebuttals_1, review_2, rebuttals_2, review_3, rebuttals_3, review_4, rebuttals_4, global_rebuttals, dataset_source, conference_year (int64), review_5, rebuttals_5, review_6, rebuttals_6, review_7, rebuttals_7, review_8, rebuttals_8
4M: Massively Multimodal Masked Modeling
Accept (spotlight)
Summary: The paper addresses the importance of a versatile model that is not limited to a single modality or task and proposes a multi-modal pre-training scheme called 4M. 4M is a single encoder-decoder architecture trained on a large set of image- and sequence-like modalities. The modalities, including text, images, and geometric and semantic modalities, are brought into a joint representation as tokens through modality-specific tokenizers. The training procedure relies on multi-modal masked training, with only a small set of tokens used as inputs and targets. Extensive experimentation shows that 4M can solve many common vision tasks out of the box, can be fine-tuned to unseen tasks, and can perform multi-modal controllable generation. Strengths: This publication has several strengths: 1) The writing is very clear and easy to understand. 2) The proposed approach is scalable across three key aspects: data (more training samples increase performance), architecture (performance improves with model size while training remains stable), and the training objective (it handles a growing number of modalities without incurring excessive computational costs). 3) Good experimental methodology with carefully designed ablations that justify architectural design decisions, especially the impact of input modalities and target tasks, the multi-modal masking strategy, and model and data scaling. 4) Very exhaustive, in-depth experimentation showcasing the key capabilities: zero-shot generalization to a diverse set of vision tasks, fine-tuning to unseen tasks, and multi-modal controllable generation. Weaknesses: 1) The paper seemingly lacks any quantitative evaluation of its generation capabilities or comparison with existing state-of-the-art methods. I'd appreciate it if the authors can elaborate on this. 2) Another concern is that the paper lacks any discussion on the robustness of the proposed approach to the quality of the datasets, since low-quality data is usually readily available compared to high-quality data. 
This will be critical for data scaling and will align with model scaling to 4M-XL and beyond. 3) Minor comment: I am curious if the authors have performed any out-of-distribution analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper in its current form lacks a discussion of the robustness of 4M to dataset quality as well as an evaluation of its generation capabilities. Please refer to the “Weaknesses” section for details. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the authors discuss the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer q9W9 for their positive feedback. We address the main concerns and questions in the following response: > The paper seemingly lacks any quantitative evaluation of its generation capabilities or comparison with existing state-of-the-art methods. I'd appreciate it if the authors can elaborate on this. For a quantitative analysis of 4M’s generative capabilities, please see Section 2 of the common response and the `PDF`. > Another concern is that the paper lacks any discussion on the robustness of the proposed approach to the quality of the datasets, since low-quality data is usually readily available compared to high-quality data. This will be critical for data scaling as well as will align with model scaling to 4M-XL and beyond. Assessing the influence that the “quality” of a dataset has on a pre-training strategy is a **highly interesting and important question**. Works like DataComp [1] aim to achieve this by developing filtering strategies for multi-modal datasets, but it is an **open question and research direction of its own**. To ablate the influence of the choice of pre-training dataset to some extent, **we pseudo labeled ImageNet-21K, as well as a 15M subset of COYO-700M** [2], and trained 4M-B models on each. Tab. 3 in the rebuttal `PDF` shows that the 4M-B models trained on these datasets achieve a **similar performance to the CC12M version, while still surpassing previously reported baselines**. To add to that, and related to reviewer RJ3x’s question on pseudo labeling quality, we argue that **the use of pseudo labeling is a strength of the approach as it is inherently more scalable than using high-quality off-the-shelf datasets**. Similar to the common practice in NLP of pre-training on large uncurated datasets and tuning the model with higher-quality data, it is conceivable that similar approaches can work well for 4M too. > Minor comment: I am curious if the authors have performed any out-of-distribution analysis. 
Please see Tab. 4 in the rebuttal `PDF` for an OOD analysis of several ImageNet-1k transfers to IN-A, IN-R, IN-S, IN-C, and IN-3DCC. **4M shows strong robustness to various OOD domains and corruptions, and is competitive with DeiT III**, which is a specialist model. Further, we show the **strong zero-shot performance of 4M** on surface normal, depth, and semantic segmentation in Tab. 1 in the rebuttal `PDF`. The performance is evaluated on the DIODE and COCO datasets which are not part of the 4M training dataset. **We note that on this OOD data, 4M matches or even surpasses the pseudo labeler networks and other strong baselines.** [1] DataComp: In search of the next generation of multimodal datasets, Gadre et al., 2023 [2] https://github.com/kakaobrain/coyo-dataset --- Rebuttal Comment 1.1: Comment: I appreciate the authors addressing my raised concerns about the quantitative evaluation of 4M's generation capabilities and also providing insights on the effect of dataset quality on pre-training strategy. I suggest the authors add the above results to the revised paper. I am happy to increase my rating.
Summary: The paper proposes a multi-modal masked modeling pre-training scheme (4M) that unifies several modalities – including text, images, geometric and semantic modalities, and neural network feature maps. The tokenization and masked modeling enable the efficient pre-training of 4M. The pre-trained model can 1) achieve reasonable fine-tuned performance on vision tasks, and 2) achieve conditional generation under different modalities. Strengths: This work is a good exercise in multi-modal masked modeling pre-training and achieves reasonable performance on both fine-tuned downstream tasks and generative tasks. Weaknesses: 1. This work is a combination of existing methods and lacks technical novelty. - Multi-modal masked modeling has been utilized by various existing works; e.g., MultiMAE has proven the feasibility of multi-modal masked modeling. Vision-NLP multi-modality is also explored by existing works, e.g., MAGVLT. - Tokenization is claimed to be a key part of the efficient pre-training of 4M, while tokenization has been widely used in existing works, and the tokenizers used in this work are mostly borrowed from other works. - The major contribution seems to be the multi-modal pre-training data based on the CC12M dataset. But there is no novelty in the pseudo-labeling of the data. 2. The performance is not impressive given the large-scale data used in this work. - The paper claims 4M can perform a diverse set of vision tasks out of the box, but no experimental results are shown in the manuscript. - In Tab. 1, the fine-tuned performance is relatively weak compared to other pre-trained methods. In addition, many stronger pre-trained methods are not included in this paper. - The conditional generation results are interesting, but there is no quantitative performance comparison with other methods such as ControlNet. 
Also, one drawback of 4M-based conditional generation is that the model accepts a fixed set of modalities after pre-training, while ControlNet can be easily extended to new modalities. - The paper claims pre-training efficiency as an advantage, yet no experimental result is given to prove this point. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Clarification of the novelty of this paper. 2. Performance comparisons and experimental results to support the claims in this paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The limitations of this work are mainly the lack of novelty and experimental results that cannot fully support the claims. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer QEFU for their feedback. We address the main concerns and questions in the following response: > This work is a combinational work of existing methods and lacks technical novelty. The multi-modal masked modeling has been utilized by various existing works. E.g. MultiMAE has proven the feasibility of multi-modal masked modeling. Vision-NLP multi-modal is also explored by existing works, e.g., MAGVLT. The tokenization is claimed to be a key part for the efficient pretraining of 4M, while the tokenization has been widely used in existing works, and the tokenizers used in this work are mostly borrowed from other works.

First, we want to clarify that the simplicity of our method is a highly desirable property given its competitiveness and novel capabilities. Nevertheless, **our approach includes several technical innovations**, which we believe distinguish 4M from existing methods in meaningful ways. - **Masking:** While 4M's multi-modal masking strategy is inspired by MultiMAE, we introduce several key changes that are crucial for scaling our models beyond the three image-like modalities of MultiMAE. For a comprehensive overview of these changes, please see our response to reviewer f31a. Additionally, please note that MAGVLT, although relevant, focuses only on text-image pairs and was published within two months of the submission deadline, making it concurrent work. - **Tokenization:** The novelty lies in 4M's ability to work with multiple modality-specific tokenizers. Unlike methods like Unified-IO, which operates on a single RGB image tokenizer, our approach enables scaling to modalities beyond those that can be represented as images, including neural network feature maps. This key distinction is not about the tokenizers themselves but how 4M leverages them to jointly operate on diverse modalities. 
- **Architecture:** 4M's architecture was intentionally designed to be as close as possible to a standard Transformer encoder-decoder to take advantage of their scalability and flexibility. However, we also had to include some crucial modifications to enable joint modeling of both image-like and sequence-like modalities within a single encoder-decoder architecture, as described in Section 2.2 of our paper. - **Importance of combinational work:** It's worth noting that the act of bringing together these specific methods in itself introduces novelty. **Our combination of techniques leads to new capabilities and improved results that wouldn't be possible otherwise**. For example, unlike MultiMAE, which can’t be used for generative tasks due to its lack of tokenization, 4M can function as a generative model while also showing much better transfer performance.

> The paper claims 4M can perform a diverse set of vision tasks out of the box, but no experimental results are shown in the manuscript

Please see Table 2 of the rebuttal `PDF` where we show the out-of-the-box (zero-shot) performance of 4M on surface normals, depth, and semantic segmentation on the DIODE and COCO datasets. On this data, **4M matches or even surpasses the pseudo labeler networks and other strong baselines**.

> In Tab. 1, the finetuned performance is relatively weak compared to other pretrained methods. In addition, many stronger pretrained methods are not included in this paper.

In our transfer study (Tab. 1 of our paper), **the pretraining methods used for comparison are recognized as strong models**. For example, MAE serves as the backbone for task-specific foundation models such as ViTDet [1] and SAM [2]. Note that 4M outperforms all reported baselines including MAE on all tasks except ImageNet classification. Furthermore, in Tab. 3 of the rebuttal `PDF`, we provide a comparison with DINOv2-Base, one of the strongest publicly available ViT-B models. 
However, please note that these models are not directly comparable, as DINOv2:
1. uses an **order of magnitude more training data.** Furthermore, the data was curated to be similar to evaluation datasets, including those used in our downstream tasks (IN1K, ADE20k, NYUv2). Thus, there is less of a distribution shift for DINOv2 when transferring to these datasets.
2. requires **orders of magnitude more compute.**
3. is distilled from a significantly larger model (DINOv2 ViT-g, 1.1B params), which gives a boost in performance compared to training from scratch.

While DINOv2 is able to attain slightly better performance on downstream tasks, 4M-B is still able to approach DINOv2’s performance on several tasks despite notable differences in computational cost and dataset size.

| Model | Dataset size | Compute cost (A100 hrs) |
| --- | --- | --- |
| 4M Base | 12M | 2300 |
| DINOv2 Base Distilled (from ViT-g) | 142M | 22000 + 5300 |

> The paper claims the pretraining efficiency is an advantage, yet no experimental result is given to prove this point.

The 4M training scheme produces a model that can predict any task from any subset of full or partial modalities — all in a single and highly efficient pre-training run. Input and target masking are crucial to make this work efficiently. Already MAE has shown that dropping masked tokens at the input level can significantly improve pre-training efficiency. MultiMAE further demonstrated its importance for multi-modal training, but as the number of tasks increases, so does the computational cost of the decoders since all masked and non-masked patches of all modalities are always decoded. 4M addresses this issue by decoding only a random subset of the masked tokens (as ablated in Appendix Tab. 17), at no cost to downstream performance. Furthermore, in Fig. 
2 of the rebuttal `PDF`, **we quantitatively demonstrate that target masking can significantly improve training efficiency**, especially when training on a large number of modalities or on modalities with a large sequence length. [1] Exploring Plain Vision Transformer Backbones for Object Detection, Li et al., 2022 [2] Segment Anything, Kirillov et al., 2023 --- Rebuttal Comment 1.1: Title: Thanks for your response. Comment: - After seeing the response, my major concern about the novelty remains. - MAE cannot be regarded as a strong baseline now since many pretraining methods have been proposed with much stronger performance. - On the good side, this work is a good exercise in combining existing methods to achieve multi-modal pre-training. - Considering that multi-modal pre-training is promising, I would like to raise my rating. --- Reply to Comment 1.1.1: Title: Rating update Comment: We are glad to hear that reviewer QEFU found the proposed multi-modal training promising and would like to increase their rating. We kindly remind the reviewer that the deadline is soon and the rating update needs to be done via editing the original review. We thank the reviewer once again for their feedback, which improved the quality of our work.
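To make the target-masking arithmetic discussed in the rebuttal above concrete, here is a minimal sketch. The token counts and budgets are hypothetical (not the paper's exact configuration); it only illustrates how decoding a random subset of masked tokens bounds the decoder's load independently of the number of modalities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical token counts: four image-like modalities at 14x14 = 196
# patch tokens each, plus two sequence modalities of up to 40 tokens.
modality_tokens = {"rgb": 196, "depth": 196, "normals": 196,
                   "semseg": 196, "caption": 40, "bboxes": 40}
total_tokens = sum(modality_tokens.values())  # 864

input_budget = 128   # tokens the encoder sees
target_budget = 128  # tokens the decoder predicts (target masking)

# Without target masking, the decoder handles every masked token.
decoded_without = total_tokens - input_budget  # 736

# With target masking, only a random subset of masked tokens is decoded.
all_positions = np.arange(total_tokens)
visible = rng.choice(all_positions, size=input_budget, replace=False)
masked = np.setdiff1d(all_positions, visible)
targets = rng.choice(masked, size=target_budget, replace=False)

print(decoded_without, len(targets))  # decoder load: 736 vs. 128 tokens
```

Adding more modalities grows `total_tokens` (and hence `decoded_without`) linearly, while the target-masked decoder load stays fixed at `target_budget`.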
Summary: The paper presents a foundation model for a variety of vision tasks. The authors show it can perform many key vision tasks out of the box and can also be fine-tuned to achieve highly competitive performance on unseen downstream tasks and input modalities. To handle the variety of modalities, the inputs/outputs are encoded into sequences of discrete tokens, and the model is trained on all the tasks simultaneously via a multi-modal masked modeling objective. Strengths: The paper presents strong results and a scalable method to perform a variety of vision tasks. The ablation study covers almost all the aspects of the model. The results indeed prove the superiority of multimodal training over the baselines, without any need for augmentations. The paper is well-written and paves the way for a variety of research questions about the interactions between different modalities. Weaknesses: While the tokenization method allows the model to train on a variety of tasks with a single architecture and cross-entropy loss, it also introduces quantization of the space of inputs/outputs. While this quantization does not harm the results for text, it might decrease the quality of the results for other domains (the segmentation boundaries might not be fully aligned with the objects, for example). This idea induces an upper bound on the performance of such an algorithm and should be discussed. One way to evaluate this upper bound is by encoding and decoding back the ground-truth results of different domains (e.g., quantizing the ground-truth segmentation masks and decoding them back), to verify the reconstruction quality and the downstream task-specific performance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: One suggestion is to mention in the related work other papers that deal differently with multimodal inputs/outputs for solving various vision tasks altogether. 
This line of work includes: - Wang et al., "Images Speak in Images: A Generalist Painter for In-Context Visual Learning", CVPR'23 - Bar et al. "Visual prompting via image inpainting", NeurIPS'22 One question that I had (and I am not sure how to evaluate) - what is more helpful for downstream performance - inputting during training tokens that correspond to the same image position but from different domains, or using more tokens from the same domain but from different locations in the image? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations of the paper are discussed and addressed in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
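The encode-decode upper-bound evaluation the reviewer proposes can be sketched as follows. A crude patch-majority quantizer stands in for a real tokenizer here (purely illustrative; the actual tokenizers are learned models), but the protocol is the same: round-trip the ground truth and measure how much the quantization alone costs:

```python
import numpy as np

def round_trip(mask, patch=16):
    """Crude stand-in for tokenizer encode/decode: each patch is
    collapsed to its majority class, then broadcast back to pixels."""
    h, w = mask.shape
    out = np.empty_like(mask)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = mask[i:i + patch, j:j + patch]
            vals, counts = np.unique(block, return_counts=True)
            out[i:i + patch, j:j + patch] = vals[np.argmax(counts)]
    return out

# Synthetic ground-truth segmentation: two classes split by a diagonal.
yy, xx = np.mgrid[0:224, 0:224]
gt = (xx + yy > 224).astype(np.int64)

recon = round_trip(gt)
pixel_acc = (recon == gt).mean()
print(f"round-trip pixel accuracy: {pixel_acc:.3f}")  # < 1.0: quantization ceiling
```

Any accuracy lost here is attributable purely to quantization, so the round-trip score upper-bounds what the downstream model can achieve on that metric.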
Rebuttal 1: Rebuttal: We thank reviewer qLD7 for their positive feedback. We address the main concerns and questions in the following response: > While this quantization does not harm the results for text, it might decrease the quality of the results for other domains (the segmentation boundaries might not be fully aligned with the objects for example). This idea induces an upper bound on the performance of such an algorithm and should be discussed. One way to evaluate this upper bound is by encoding and decoding back the ground-truth results of different domains (e.g. - quantizing the ground-truth segmentation masks and decoding them back), to verify the reconstruction quality and the downstream task-specific performance. Thank you for the suggestion. We **measure the reconstruction quality of the surface normal, depth, and semantic segmentation tokenizers** on 5000 CC12M validation images at a resolution of 224x224. Note that we cannot evaluate the performance of the tokenizers on datasets like DIODE or OASIS due to the presence of masks/holes in the dense labels. Since the tokenizers were trained on pseudo labeled data, they do not handle such masks. This is, however, not an inherent limitation, as the tokenizers could be trained with masked inputs, which we did for the Taskonomy and Hypersim tokenizers in the ablations. Fig. 1 in the rebuttal `PDF` shows qualitative examples and Tab. 1 (last row) shows reconstruction metrics. While measuring the reconstruction on CC12M is not fully comparable to the DIODE and COCO zero-shot performance of 4M and baselines shown in Tab. 1, we note that **the tokenizer reconstruction errors are of a significantly lower magnitude than the prediction errors from RGB**. This indicates that the tokenization is not a strong bottleneck when considering the difficulty of predicting these tasks from RGB. 
That said, tokenization may remove or change fine details present in images, which may not be as clear in these metrics but can be visually apparent (see Fig. 1). A remedy to this could be to perform tokenization on higher resolution images or to decrease the patch size. We also generally expect future advances in tokenizer training to translate directly to zero-shot and downstream performance improvements. > […] mention in the related work other papers that deal differently with multimodal inputs/outputs for solving various vision tasks altogether. Thank you for the suggestions, we’ll make sure to include a discussion of works that unify different vision tasks in this way in the camera ready version upon acceptance. > […] what is more helpful for downstream performance - inputting during training tokens that correspond to the same image position but from different domains, or using more tokens from the same domain but from different locations in the image? This is indeed an interesting question, and our ablation of the input and target mask sampling parameters $\alpha$ (see Appendix Tab. 15) provides a partial answer. Setting the input and target alphas to a very low value (e.g. 0.1) corresponds to sampling the number of tokens per modality from a “spiky” Dirichlet distribution, i.e. most of the time tokens are sampled from single modalities. This is close to the latter case mentioned, i.e. using more tokens from the same domain but from different locations in the image and predicting across domains. **This setting performs slightly worse on downstream transfers compared to more random mask sampling approaches (alphas >= 0.2), or mixtures of sampling strategies (see Appendix Tab. 18).** Note that on the other hand, the use of higher alphas (> 1.0) does not correspond to the former case (sampling tokens that correspond to the same image position but from different domains). 
Restricting the sampling in that way has been studied in masked video pre-training, notably spatio-temporal MAEs [1] and VideoMAE [2], and it has been observed that more random masking strategies perform similar or better. Extending this analysis to the multi-modal case is interesting future work. [1] Masked Autoencoders As Spatiotemporal Learners, Feichtenhofer et al., 2022 [2] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training, Tong et al., 2022 --- Rebuttal Comment 1.1: Comment: Thanks for the response. I don't have further comments. I will keep my rating.
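The Dirichlet mask sampling discussed above can be illustrated with a short sketch. The modality names and token budget are hypothetical, and the paper's exact sampling procedure may differ; the point is only that a low symmetric alpha concentrates the budget on few modalities ("spiky") while a high alpha spreads it evenly:

```python
import numpy as np

rng = np.random.default_rng(0)
modalities = ["rgb", "depth", "normals", "semseg", "caption"]
budget = 128  # total input tokens per sample

def sample_counts(alpha):
    """Sample per-modality token counts from a symmetric Dirichlet;
    low alpha concentrates the budget on one or two modalities."""
    props = rng.dirichlet([alpha] * len(modalities))
    counts = np.floor(props * budget).astype(int)
    counts[np.argmax(counts)] += budget - counts.sum()  # fix rounding remainder
    return dict(zip(modalities, counts))

spiky = sample_counts(0.1)   # most tokens drawn from a single modality
even = sample_counts(10.0)   # budget spread across modalities
print(spiky)
print(even)
```

Mixtures of masking strategies, as in the rebuttal, would correspond to drawing each sample's alpha (or whole sampling scheme) from a set of options.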
Summary: The paper presents a unified transformer model by using an effective multi-modal pre-training scheme. The authors propose to perform masked modeling across different modalities. This is made possible by unifying the representation space of the considered modalities by mapping them into discrete tokens and then performing multi-modal masked modeling on a small subset of tokens. Experimental results demonstrate several promising results. Strengths: - This paper is technically valid and interesting. By conditioning on arbitrary modalities, the model can have great potential for a variety of multimodal intelligence capabilities. - The authors present comprehensive experiments and ablations, providing insightful discussions. The paper can be a good reference for future researchers. - The paper is well-written and easy to follow. Weaknesses: - The multi-modal masking strategy is highly similar to prior works, like MultiMAE. The mask-modeling part of this paper is somewhat less interesting and less innovative. The innovation is more in the developed system framework. - I don't find other significant concerns in the proposed method. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discussed some of the limitations, but this reads more like a description of future work. No method is developed to address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer f31a for their positive feedback. We address the main concerns and questions in the following response: > The multi-modal masking strategy is highly similar to prior works, like MultiMAE. The mask-modeling part of this paper is somewhat less interesting and less innovative. The innovation is more in the developed system framework. While 4M’s multi-modal masking strategy is inspired by MultiMAE, 4M proposes several important changes that enable further scaling of multi-modal models beyond three image-like modalities: - MultiMAE’s masking strategy assumes that all modalities are image-like, in their case RGB, depth and semantic segmentation maps. Applying the same masking strategy to sequence modalities such as captions or bounding boxes would not allow 4M to generate these modalities at inference time. We therefore propose to **use span-masking [1] to both benefit from masked pre-training and enable generation of sequence modalities**. - MultiMAE requires separate decoders for each target modality, and **always decodes all input and all mask tokens**. The overhead of this is manageable with three modalities and shallow decoders, but may not scale to a much larger number of modalities. We propose target masking as a means to overcome this. For example, if we trained 4M without target masking, the number of tokens to decode would be around 1000, which would incur a significant compute overhead (see rebuttal `PDF` Fig. 2). In addition, our ablations in Appendix Tab. 17 show that **lower target masking budgets are more compute efficient** (for a fixed number of total training tokens). - In addition, we propose **mixtures of masking strategies** to train 4M on a diverse set of inputs and targets (see Appendix Tab. 18), with the resulting models striking a compromise between either masking scheme. > The authors discussed some of the limitations, but this is more like descriptions of future work. 
No method is developed to address the limitations. In the Conclusion and Limitations section of the main paper **we discuss several limitations** (limited number of modalities, tokenizer quality, dataset size and quality) and **propose potential solutions** to address them. In addition, we list here several other limitations and possible ways to overcome them: - **Text understanding**: Recent state-of-the-art text-to-image models like Imagen [2], Parti [3], Muse [4], or Stable Diffusion 2.1 [5] commonly train on powerful text encoders (e.g. T5-XXL or OpenCLIP-ViT/H) instead of classical text tokenizers to significantly improve their image generation fidelity and text understanding. 4M is trained directly on the text tokenizer, but we expect similar text-to-image improvements when training on LLM embeddings instead. - **Fine-grained editing**: Since 4M operates on discrete sets of tokens, in-painting is roughly constrained to the relatively coarse grid of tokens. Since every token affects an area slightly larger than its relative size on the image, the area that needs to be selected to remove a certain object is even larger. In addition, tokenization is a lossy process and destroys fine-grained details. Unlike diffusion models, we are not able to perform pixel-level edits. To remedy this, we can consider in-painting at a higher resolution (with a finer grid of tokens) or training specialized in-painting adapters. - **Alignment to downstream objectives**: 4M is trained by simply minimizing the cross-entropy loss between predicted and ground truth tokens. If this objective is misaligned with a certain downstream objective (e.g. aesthetics or a downstream task metric), we are not directly optimizing what we care about. To address this, we can consider fine-tuning the pre-trained 4M model using reinforcement learning on downstream objectives that are otherwise hard to optimize for directly. 
[6] - **Flexible image resolution**: Like most other models, 4M is trained on square images of a fixed resolution — in our case 224x224. We trained a token super-resolution model that can map 4M outputs to 448x448, but to work out of the box at different resolutions and aspect ratios, we can consider approaches similar to FlexiViT [7] or NaViT [8]. This can be done as a fine-tuning step after training the model at the base resolution. [1] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Raffel et al., 2019 [2] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, Saharia et al., 2022 [3] Scaling Autoregressive Models for Content-Rich Text-to-Image Generation, Yu et al., 2022 [4] Muse: Text-To-Image Generation via Masked Generative Transformers, Chang et al., 2023 [5] https://huggingface.co/stabilityai/stable-diffusion-2-1-base [6] Tuning computer vision models with task rewards, Pinto et al., 2023 [7] FlexiViT: One Model for All Patch Sizes, Beyer et al., 2022 [8] Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution, Dehghani et al., 2023 --- Rebuttal 2: Comment: Thanks for the response. I don't have further comments. I will keep my rating.
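A simplified illustration of the T5-style span masking the rebuttal above cites for sequence modalities (sentinel naming follows T5's `<extra_id_n>` convention; this is a sketch, not the 4M implementation):

```python
import random

random.seed(0)
SENTINELS = [f"<extra_id_{i}>" for i in range(10)]

def span_mask(tokens, mask_ratio=0.3, mean_span=3):
    """T5-style span corruption: random spans in the input are replaced
    by sentinel tokens; the target lists each sentinel and its span."""
    n_mask = max(1, int(len(tokens) * mask_ratio))  # masking budget
    inp, tgt, i, s = [], [], 0, 0
    while i < len(tokens):
        if n_mask > 0 and s < len(SENTINELS) and random.random() < mask_ratio:
            span = min(mean_span, n_mask, len(tokens) - i)
            inp.append(SENTINELS[s])
            tgt.append(SENTINELS[s])
            tgt.extend(tokens[i:i + span])
            i += span
            n_mask -= span
            s += 1
        else:
            inp.append(tokens[i])
            i += 1
    return inp, tgt

tokens = "a red car parked on a busy street".split()
inp, tgt = span_mask(tokens)
print(inp)  # masked input with sentinel placeholders
print(tgt)  # target: sentinels followed by the spans they replace
```

Because the decoder only predicts the target sequence, masked spans can be generated autoregressively at inference time, which is what enables generating sequence modalities such as captions.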
Rebuttal 1: Rebuttal: # Response to all reviewers We thank the reviewers for their insightful comments and are glad to hear that the reviewers found the paper to be **“technically valid and interesting”** with **“great potential for a variety of multimodal intelligence capabilities”** (f31a), provide **“insightful discussions on the design choices of the pre-training strategy”** (RJ3x), and appreciate the **“very exhaustive in-depth experimentation showcasing the key capabilities”** (q9W9). We are also glad that the reviewers recognized the **“strong results and a scalable method to perform a variety of vision tasks”** (qLD7) and commended our writing as being **“very clear and easy to understand / follow”** (q9W9, f31a) / **"well-written and paves the way for a variety of research questions about the interactions between different modalities"** (qLD7). ## 1. Additional results overview We address the reviewers’ remaining questions and concerns in the individual responses and rebuttal `PDF`. We discuss common questions on the generative capabilities after the following list of new experiments and major addressed questions: - QEFU: Out of the box (zero-shot) performance evaluation (`PDF` Tab. 1) - qLD7: Tokenizer reconstruction quality (`PDF` Tab. 1 and Fig. 1) - RJ3x, QEFU, q9W9: Quantitative evaluation of generative capabilities (`PDF` Tab. 2) - QEFU, q9W9: Comparison to strong baselines and robustness to dataset quality (`PDF` Tab. 3) - q9W9: OOD analysis (`PDF` Tab. 4) - QEFU: Pre-training efficiency (`PDF` Fig. 2) ## 2. Common questions > RJ3x, QEFU, q9W9: General quantitative evaluation of generative capabilities Please see Tab. 2 in the rebuttal `PDF` for a quantitative comparison of 4M across model sizes, a controlled text-to-image baseline, as well as Stable Diffusion 2.1. The metrics shown are computed on 30k subsets of CC12M and COCO validation sets, and we interpolate all generated images to 256x256. 
To perform a controlled comparison, we train a pure text-to-image variant of 4M-B, in spirit similar to Muse [1], for a total of 300B tokens on CC12M, and using the same RGB tokenizer as used for 4M. **4M trained on all modalities achieves comparable FID and CLIP scores to this specialist model, and at the same time can be conditioned on any pre-training modality and can solve several common vision tasks out of the box.** We also compare against the 512x512 base model of Stable Diffusion 2.1 (SD-2.1) [2] and observe a considerable gap to SotA generative models on OOD data. We note here, however, that **SD-2.1 was trained on datasets two orders of magnitude larger, and with a compute budget one order of magnitude larger, than what was used to train 4M-XL**. 4M is a general multi-modal pre-training strategy and considering the scaling curves of similar token-based text-to-image models like Muse or MAGE, we expect 4M’s generation quality to significantly improve given a similar data and compute regime, and better image tokenizers. > QEFU: Also, one drawback of 4M-based conditional generation is that the model accepts a fixed set of modalities after pre-training, while ControlNet can be easily extended to new modalities. Adapting multi-modal Transformers like 4M to new modalities / tasks is an exciting future research direction. Parameter-efficient fine-tuning techniques like Low-Rank Adaptation (LoRA) have been shown to work well on LLMs and diffusion models, and we would expect similar techniques to allow 4M to be efficiently adapted to additional modalities. Since our pre-training dataset is pseudo labeled, this would require only a dataset of the new modality and RGB images, which would also be the case for training a new ControlNet. [1] Muse: Text-To-Image Generation via Masked Generative Transformers, Chang et al., 2023 [2] https://huggingface.co/stabilityai/stable-diffusion-2-1-base Pdf: /pdf/5cfb976efabffabd16991289164ab563d8a8bba6.pdf
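The Low-Rank Adaptation idea mentioned in the rebuttal above can be sketched in a few lines. This is a minimal, hypothetical illustration: the layer sizes, rank, and variable names are illustrative assumptions, not part of 4M or any real LoRA implementation.

```python
import numpy as np

# Minimal sketch of Low-Rank Adaptation (LoRA): a frozen pre-trained weight
# matrix W is augmented with a trainable low-rank update B @ A, so only
# r * (d_in + d_out) new parameters are learned. Shapes here are toy choices.

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # rank r << d_in is what makes adaptation cheap

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def lora_forward(x):
    # With B initialized to zero, the adapted layer matches the frozen one.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization

full_params = d_out * d_in        # full fine-tuning of this layer: 4096
lora_params = r * (d_in + d_out)  # LoRA update for this layer: 512
```

At initialization the adapted model reproduces the frozen one exactly, so pre-trained behavior is preserved while only the small A and B matrices are trained on the new modality.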
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a multimodal pre-training framework named 4M, which employs the masked data modeling style to train a transformer encoder-decoder architecture that is capable of performing different downstream tasks. Experiments show that 4M delivers competitive transfer ability on these tasks compared with MAE / DEiT III / BEiT v2. Strengths: 1. The baseline settings in the experiments are fair and sound, especially the self-baselines to control other variables. 2. The ablation studies provide insightful discussions on the design choices of the pre-training strategy. Weaknesses: 1. The multi-modal and multi-task training of 4M needs datasets with all required modalities and labels. However, this kind of well-annotated dataset is hard to obtain and not scalable. This research employs pseudo labeling to extend existing image-text datasets such as CC12M. Therefore, the performance of the off-the-shelf labelers is important. The authors should provide more detailed and careful discussions and ablations on that. 2. The downstream tasks are limited, especially considering the target of this paper, i.e. ``massively pre-training''. Quantitative results on more diverse tasks / datasets should be examined, especially on the transfer ability to novel tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 2 of the ablation studies, why use loss instead of the corresponding task metrics as the measure? 2. Could the authors provide some quantitative results on the generative capabilities of 4M? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The model size and the data size could be further scaled up. The authors have discussed some limitations in their paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer RJ3x for their positive feedback. We address the main weaknesses, questions, and limitations in the following: > The multi-modal and multi-task training of 4M needs datasets with all required modalities and labels. However, this kind of well-annotated dataset is hard to obtain and not scalable. This research employs pseudo labeling to extend existing image-text datasets such as CC12M. Therefore, the performance of the off-the-shelf labelers is important. The authors should provide more detailed and careful discussions and ablations on that. - We generally agree with this summary. However, **pseudo labeling is inherently more scalable than other approaches** due to the high availability of RGB images and off-the-shelf models. Combining datasets with incomplete annotations (like UnifiedIO [1] does) is limited by the relatively smaller number of those datasets, and the end result is not an aligned large dataset -- however, our method is not incompatible with that approach (even for the fine-tuning phase), and it'd be an interesting experiment to run. Existing annotated multi-task datasets like Taskonomy [2], Omnidata [3], Hypersim [4], etc., are also too limited in terms of their domain. Overall, pseudo labeling is a good enabling strategy at the moment, and we acknowledge the value of training on incomplete data for the future. We will update the camera-ready with a discussion. - Performance of the off-the-shelf labelers: As shown in Tab. 1 in the rebuttal `PDF`, **the larger the 4M model, the more it approaches the performance of the original pseudo labeler, and may even slightly surpass it**. Indeed, we would expect higher-quality pseudo labels to translate well to better zero-shot performance. In addition, we use pseudo labeling to enable massively multi-modal pre-training, but it is conceivable that a fine-tuning stage with a much smaller dataset of high-quality data can significantly improve zero-shot performance. 
This practice of pre-training on large-scale, noisy and uncurated data and then tuning the model on a clean dataset or using reinforcement learning has worked well in the field of natural language processing (e.g., ChatGPT or GPT-4), and we expect a similar approach to work well for multi-modal foundation models too. > The downstream tasks are limited, especially considering the target of this paper, i.e. ``massively pre-training''. Quantitative results on more diverse tasks / datasets should be examined, especially on the transfer ability to novel tasks. We agree that the transfer learning study could benefit from a more diverse set of novel downstream tasks. That said, our ablation, which includes **data- and compute-controlled baselines of MAE (RGB->RGB) and BEiT v2 (RGB->CLIP) style models**, contains an extensive set of both **novel downstream tasks and datasets, spanning 35 tasks over 4 different datasets**. Many of the transfers contain input modalities and target tasks that were **not seen during 4M pre-training**. 4M pre-trained on all modalities performs **better than the baselines** on most of these tasks. > In Table 2 of the ablation studies, why use loss instead of the corresponding task metrics as the measure? The aim of Tab. 2 is to show **how well certain instantiations of 4M transfer to arbitrary new distributions of tokens**. Since downstream performance comes down to A) how well the tokens are able to represent the downstream tasks (i.e. tokenizer reconstruction error), and B) how well 4M is able to predict these tokens, we abstract away A for this ablation since the tokenizers are the same for all settings, and only report B. In addition, reporting the cross-entropy losses, or equivalently log-perplexity, is a standard practice in NLP, and aligns well with the aim of this ablation and with the way 4M is pre-trained and fine-tuned. 
Reporting the cross-entropy loss makes **comparison more uniform** and avoids having to scale and average wildly different task-specific metrics such as mAP, mIoU, MSE, etc. that are not comparable with each other. > Could the authors provide some quantitative results on the generative capabilities of 4M? For a quantitative analysis of 4M’s generative capabilities, please see Section 2 of the common response and the `PDF`. > The model size and the data size could be further scaled up. The authors have discussed some limitations in their paper. Scaling the model and dataset size beyond 4M-XL is out of scope for this paper, but we agree that 4M could benefit from exploration in that area. Our scaling trends (Fig. 5 in the main paper) and evidence from token-based text-to-image models like Muse [5] and Parti [6] paint a promising picture and suggest that scaling the data and model size is an exciting future direction. [1] Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, Lu et al., 2022 [2] Taskonomy: Disentangling Task Transfer Learning, Zamir et al., 2018 [3] Omnidata: A Scalable Pipeline for Making Multi-Task Mid-Level Vision Datasets from 3D Scans, Eftekhar et al., 2021 [4] Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding, Roberts et al., 2020 [5] Muse: Text-To-Image Generation via Masked Generative Transformers, Chang et al., 2023 [6] Scaling Autoregressive Models for Content-Rich Text-to-Image Generation, Yu et al., 2022 --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank the authors for their response. I maintain my score as "weak accept".
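The equivalence between cross-entropy loss and log-perplexity that the rebuttal above appeals to can be sketched in a few lines; the numbers below are toy values, not results from the paper.

```python
import math

# Per-token cross-entropy (in nats) and perplexity are two views of the same
# quantity: perplexity = exp(cross_entropy), so cross-entropy = log-perplexity.

def perplexity(cross_entropy_nats):
    return math.exp(cross_entropy_nats)

# A uniform distribution over V tokens has cross-entropy log(V),
# i.e. perplexity exactly V:
V = 1000
ce_uniform = math.log(V)
assert abs(perplexity(ce_uniform) - V) < 1e-6

# exp is monotone, so ranking models by cross-entropy or by perplexity
# gives the same ordering:
assert perplexity(2.0) < perplexity(2.5)
```

Because the mapping is monotone, reporting cross-entropy losses (as in Tab. 2 of the paper) preserves the same model ordering that perplexity would give.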
null
null
null
null
null
null
The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance
Accept (spotlight)
Summary: This paper proposes a variable importance framework, the Rashomon importance distribution (RID), that is robust both to the Rashomon effect and to dataset resampling. It motivates the need for such a framework, and surveys existing related methods. It then provides a detailed theoretical explanation of the proposed variable importance (a distribution) and a method for its estimation. The paper presents several experiments which demonstrate that RID can distinguish between important and unimportant/extraneous variables more reliably than previous methods. It then presents a case study in immunology (HIV research) wherein RID was able to identify the importance of a variable that had previously gone undetected, creating a lead for future investigation. Strengths: This is a very strong paper overall. The quality of the writing and overall presentation (figures, equations, etc.) is excellent, within the top 5% of papers. The authors are very familiar with the relevant prior work in the area, which is clearly demonstrated in the Related Work section. The technical contributions are important. They further strengthen the arguments for using simple, interpretable model classes (like sparse trees). The inclusion of a case study where the method creates real value in an important research area demonstrates its practical value. Weaknesses: The paper’s main weakness is its (current) lack of practical applicability to non-linear model classes outside of sparse trees. Because of this, the paper’s impact at a venue where most researchers (the vast majority perhaps) are working primarily with neural networks and transformers may be more limited. That said, I think this work along with the related work it highlights nonetheless merits visibility. Collectively, this vein of research creates a compelling argument for using inherently interpretable models, and has the potential to shift the default tools used by data science practitioners. 
Suggestions: The paper could be strengthened by running some of the experiments in Section 4 on real datasets for which a subset of the variables are known to be important/unimportant (extraneous ones may also be added). Some of the plot colors could be improved for greater visual clarity / contrast. For instance, the greens/blues in Figures 2 & 3 bleed into each other, especially when printed. It would be good to refer the reader to Appendix Section D.3 in Section 4.3 on the stability of RID, possibly putting the discussion on robustness to $\varepsilon$ there. In the main text, it would be good to note that a timing analysis was conducted, referring the reader to Section D.5. It would have been interesting to investigate the stability of RID across different dataset resampling methods (e.g. bootstrap vs. subsampling). It would be good to explicitly list some of the model classes for which the RID can currently be computed. Nitpicking a little… Line 140: Unless I’m mistaken, g* tells us P(X,Y), while f^* often aims to model P(Y|X), and needs to be used alongside some approximation of P(X) to be a surrogate for g*. Line 149: “the Rashomon set describes the set of good explanations for a single dataset” -- this is only true if the models themselves are interpretable. Lines 162 and 187: seem to describe a model’s variable importance score as though it is fixed (independent of the dataset draw), i.e. by describing it as being a quantity weighted by the number of datasets for which f is in the Rashomon set, rather than something that can vary with each dataset (and only included when f is in the Rashomon set). The mathematical notation clarifies this however. Line 286: “For real datasets” -- I think you mean the remaining synthetic datasets that include noise Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In Equation 2. 
(and others in the form of a ratio of model set cardinalities), did you consider a heterogenous weighting of the set members (the models), for instance a weight which decreases based on their deviance from optimality? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations could be emphasized a little more. For instance the claim in the abstract that “[Our framework] can be integrated with most existing model classes” is really only true in a theoretical sense. There are many model classes for which even estimating the Rashomon set is computationally prohibitive, let alone do so for hundreds of bootstrap resampled datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. We particularly appreciate the comments suggesting formatting/writing improvements; we do not specifically reference each of these in our response, but we do plan to integrate them into our camera-ready version of the work. We acknowledge reviewer gRCY’s concern that this work may not have the largest splash at a conference where neural networks and transformers currently dominate; however, NeurIPS has always valued a broad set of problems, including those in scientific and high-stakes domains (many of which are not neural network related). RID is likely to have a large impact in domains like genetics, biology, and ecology. Within the past year, Rashomon sets have become available for trees (which was a NeurIPS oral in 2022) and GAMs, and new ones are likely to come out soon, adding to the impact of this work. As we discussed in the general response, trees and GAMs are already used for a huge variety of applications and are as accurate as deep learning models for tabular datasets across domains. We like the idea of repeating the experiments from Section 4 on a real dataset with known important/unimportant variables, but we have had trouble finding a non-synthetic dataset in which we are certain which variables are important. Even if we add extraneous variables, it is difficult to repeat our experiments from Section 4 because a dataset may already contain some extraneous variables. As such, we would need to find a dataset where we are completely certain which variables are important and which are unimportant. That said, we are open to suggestions if any such datasets come to mind! We agree that it may be interesting to investigate the impact of different resampling methods on RID. While we have not had time to explore this experimentally, we believe that subsampling and bootstrapping should produce similar results. 
It has been shown that both subsampling and bootstrapping can induce stability (Basu et al., 2018; Meinshausen and Bühlmann, 2010; Bühlmann and Yu, 2002; Grandvalet 2006). Since both methods have been shown to be effective in the stability literature, we would expect both to be effective here. Additionally, with a large enough dataset, RID should not differ significantly when using bootstrapping or subsampling because of the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality. The DKW inequality states that as the number of observations increases, the empirical cumulative distribution function will converge to the cumulative distribution function (CDF) from which samples were drawn at a rate of $O(1/\sqrt{n})$. Then – by using the triangle inequality – we see that the distance between the empirical CDFs of a subsampled and a bootstrap sampled dataset would converge to zero at a rate of $O(1/\sqrt{n})$. Because RID is a function of these datasets, and the datasets are similar, the bootstrap and subsampled RIDs should be similar. We did consider heterogeneously weighting models while developing our framework, but we opted not to include an explicit weighting because our bootstrap framework implicitly weights for optimality. The truly closest-to-optimal models should generalize well and will therefore show up in many of the bootstrapped Rashomon sets; on the other hand, models that only fit a specific bootstrapped dataset well will only belong to a single bootstrapped Rashomon set. Therefore, near-optimal models’ variable importance estimates will contribute more to RID’s calculation than overfitting models that are far from optimal. If we were to heterogeneously weight the models' contributions by loss, we would inflate the measurements from overfitting models, which would skew our analyses. References Sumanta Basu, Karl Kumbier, James B Brown, and Bin Yu. Iterative random forests to discover predictive and stable high-order interactions. 
Proceedings of the National Academy of Sciences, 115(8):1943–1948, 2018. Nicolai Meinshausen and Peter Bühlmann. Stability selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 72(4):417–473, 2010. Peter Bühlmann and Bin Yu. Analyzing bagging. The Annals of Statistics, 30(4):927–961, 2002. Yves Grandvalet. Stability of bagged decision trees. In Proceedings of the XLIII Scientific Meeting of the Italian Statistical Society, pages 221–230. CLEUP, 2006. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed rebuttal. I will reiterate that I think this is a strong and important paper. As for real datasets with known important/unimportant variables, is there something from work in causality that could be repurposed? I'm also not sure you would need to know the importance of all the variables in advance. Even if you only knew the relative importance of two variables with a great deal of certainty, could they not just be compared to each other? --- Reply to Comment 1.1.1: Comment: We greatly appreciate the reviewer’s support, and continued engagement! Upon further consideration, we think repeating the analysis from section 4 of the paper on a non-synthetic dataset may be more challenging than we initially thought. Even in the causal literature, how exactly methods should be evaluated beyond synthetic data is an active area of research; Parikh et al. (2022) provides a useful overview of methods for tackling this problem and their shortcomings. Even if we were to use data from the gold standard of causal inference, randomized controlled trials, only the importance of the treatment variable is known in such datasets. All other variables may or may not be important. This said, the point about working from the relative importance of two variables is well taken. In this setting, we agree that we could evaluate whether the more important variable is assigned a higher importance than the less important variable. 
However, this kind of evaluation may be difficult to scale, as we would need to know the pairwise relative importance for each possible pair of variables to repeat the classification experiment from Figure 3 of the main paper. To our knowledge, the most thorough evaluation using real data that we can realistically perform is therefore the kind presented in Section 5, where the variables identified as most important by a method are directly validated against domain knowledge. This yields a coarser evaluation than is possible on synthetic data, but still helps determine whether a method can identify important variables. References Harsh Parikh, Carlos Varjao, Louise Xu, and Eric Tchetgen Tchetgen. Validating Causal Inference Methods. In International Conference on Machine Learning (pp. 17346-17358), 2022.
Summary: The paper studies the problem of quantifying variable importance in a stable manner. The authors argue that multiple models may explain a target outcome equally well for a given dataset. However, current methods to quantify variable importance only account for one of these models; therefore, without accounting for different explanations, different researchers may arrive at conflicting (and valid) conclusions. To solve this problem, the authors propose a framework to quantify variable importance that accounts for the set of all good models. The authors provide empirical and theoretical results to support their framework. Strengths: * The paper discusses an important and interesting problem of assigning variable importance while considering the whole set of good models and ensuring stability. This problem can interest the community studying the impacts of the Rashomon effect and, more broadly, the interpretability community. * The authors provide compelling experimental results on synthetic data, indicating that the proposed method can capture variable importance in the data-generating process. They also compare with other methods in the literature and show that RID performs better than or on par with state-of-the-art methods (Figure 3 top). * Theoretical results in Theorems 1 and 2 ensure that the bootstrap estimate converges to the value of interest. * The case study in Section 5 is fascinating. Unlike others in the literature, the authors demonstrate that their method associates a specific gene with HIV – a previously unknown relationship. Weaknesses: * The Rashomon set is still unknown for most model classes, making the application of the method infeasible using the tools presented in the paper. * Assumption 1 seems reasonable for model classes such as linear models and GAMs. However, how it will behave in more complicated classes is still unknown. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Are there any losses in using an approximation for the Rashomon set instead of the “True” empirical Rashomon set? Intuitively, there seems to be a tradeoff between the approximation for the Rashomon set and the RID. It would be interesting to have a result highlighting it in the main paper. * Can authors include an experiment like the one in Figure 3 top but for more complex data generation processes? For example, consider the ground truth DGP to be a deep neural network and calculate variable importance using decision trees. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors discuss the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. Since another reviewer also raised concerns about relatively few model classes having known Rashomon sets, we have addressed this issue in the combined response. We also address reviewer eXQc’s question about Assumption 1 in the combined response for similar reasons. Finally, we have addressed both of reviewer eXQc’s questions in the combined response in order to reference figures showing new results. --- Rebuttal Comment 1.1: Comment: I thank the authors for their careful and detailed answers in the combined response! I am increasing the presentation score to a maximum of 4.
Summary: This paper introduces a method for assessing variable importance in prediction, when the goal is to understand variable importance as defined with respect to the underlying data-generating process, as opposed to a specific model. To that end, a method is proposed which incorporates the concept of Rashomon sets (models whose performance is approximately optimal) and stability (e.g., considering Rashomon sets over different bootstrapped replications of the data) to construct a distribution of variable importance measures for each variable. Strengths: Overall, I rather enjoyed this paper, modulo some reservations that I outline in the "weaknesses" section. The presentation of the claimed contributions is clear, the contextualization to related work seems thoughtful, and the experiments support the main argument. First of all, I found much of the presentation to be quite clear. Figures 1 and 2, for instance, give a fairly clear summarization of the motivating problem and the corresponding method. Second, I found the technical contribution to be fairly clear relative to related work. Here, the main contribution appears to be the use of bootstrapping to incorporate finite-sample uncertainty and improve stability, relative to related work that considers Rashomon sets for variable importance (e.g., citation [16] in this work). Third, while the main contribution (focusing on "stability") was not quite as formalized as I might have liked, the experiments in Section 4 seem to provide compelling evidence that the proposed approach captures variable importance more reliably than baseline methods, including a fairly long list of alternative approaches. Weaknesses: There are a few points in this paper that I felt were somewhat unclear. I look forward to discussion with the authors during the response period, and I am willing to change my score. 
I focus here on motivation for the given approach, and justification for the claims of "stability", which I found lacking in places. Generally, I found the technical results to be fairly straightforward consequences of the assumptions. ### (W1) Motivation for "stable" variable importance somewhat unclear The main weakness of this paper, in my view, lies in the motivation. In several places, the importance (no pun intended) of finding the "ground truth" variable importance measures is stated as an obvious fact, without justification. For instance, on line 24 it is claimed that "Variable importance would ideally be measured as the importance of each variable to the data generating process". It is not clear to me why this claim should be obvious - rather, it seems reasonable that we might want to understand variable importance in the context of a specific model, trained on a specific dataset, to better understand what drives the predictions of that model from an explainability perspective. For instance, Fisher et al. 2019, cited in this paper as [16], give a fairly nuanced view when introducing the idea of Model Class Reliance (from my skim). As I understood that work, the motivation for establishing a range of possible values for variable importance (VI) stems not necessarily from a desire to understand "the data generating process", but from the desire to understand variable importance for a single model. The catch is that the *model of interest may be proprietary and not available to the user* (e.g., in recidivism prediction). In that setting, having upper and lower bounds on VI allows us to draw some conclusions (e.g., if the lower-bound is particularly high, we can conclude under some assumptions that the proprietary model depends on this feature). 
I think one could make a similar argument here, e.g., if the exact underlying dataset is also not available to us, we might be concerned about the sensitivity of our VI analysis to small differences in the data-generating process. I would appreciate some comment from the authors on this question of the general motivation for finding stable variable importance measures. ### (W2) Why consider distributions vs bounds? I had some trouble understanding the motivation for the Rashomon Importance Distribution. This distribution seems to be something like "averaging" over the bootstrap replicates. As I understood it, if a variable's importance is at most k in one bootstrap sample, for all models in the Rashomon set, but greater than k for all models in the Rashomon set in 99 other bootstrap samples, then we will average these to say that the CDF at k is 1/100. It wasn't clear to me how this approach would fit with the goal of establishing upper/lower bounds for importance measures. I suppose that one could do this using quantiles of the resulting distribution, but it's not entirely clear to me what that would be measuring. TL;DR: If I were interested in something like "the lower bound of the variable importance over the Rashomon set is higher than L with high probability", is that something that could be read out from the RID? That would seem like a more natural characterization of "stability" of results. As an aside, it appears that [16] (Fisher et al. 2019) gives finite-sample / high-probability bounds on the upper/lower bounds of variable importance over the Rashomon set. How should I think about the difference between the "stability" goal of this work (which is fundamentally a finite-sample concern) compared to the goal of providing high-probability bounds in that work? ### (W3) What is stability, exactly, and why should we expect this method to achieve it? 
Given that one of the main distinctions from prior work appears to be the focus on "stability", I was hoping to see some formal definition for what that means, and why the bootstrapping approach in this paper should be expected to achieve it. Presumably achieving stability is not the same as accurately estimating RID, since RID is defined in such a way that it can be estimated, for any sample size, with arbitrary precision. This fact is due to the definition of RID as an expectation over a bootstrapping distribution, with respect to our given dataset of size $n$ (see lines 158-161), allowing one to simply take more bootstrap replicates to estimate it precisely (as noted on lines 203-205). If stability is a problem caused by finite sample sizes, then presumably it cannot be solved by simply re-sampling a small dataset. The fact that the given strategy achieves stability seems to be taken as a given, e.g., RID is introduced on lines 150-151 as "we define a stable quantity for variable importance", it is said that "intuitively, this [Eq 2] provides greater weight to the importance of variables for stable models" (line 163), then it is concluded that "since we stably estimate the entire distribution of variable importance values, we can create stable point estimates" (lines 209-210), etc. I would be curious to get the thoughts of the authors on what stability means in this context, and why we should expect the approach in question to achieve it - is this simply a question of intuition, to be backed up by experiments? If so, please just clarify that upfront. ## Other minor points It would be nice to discuss Assumption 1 in a bit more detail, explaining why it is necessary, when we might expect it to hold, etc. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Here I have collected relevant questions that appear in my review above. As stated previously, I am willing to update my score given compelling answers to some of these questions. 1. 
(W1) How would you explain to a skeptical audience why we should be primarily concerned with variable importance measures that are not linked to a particular model? 2. (W2) If I were interested in making claims like "the lower bound of the variable importance over the Rashomon set is higher than L with high probability", is that something that could be read out from the RID? 3. (W2) It appears that [16] (Fisher et al. 2019) gives finite-sample / high-probability bounds on the upper/lower bounds of variable importance over the Rashomon set. How should I think about the difference between the "stability" goal of this work (which is fundamentally a finite-sample concern) compared to the goal of providing high-probability bounds in that work? 4. (W3) How would you formally define "stability" in this context, and why we should expect the approach in question to achieve it? Of these questions, I am most interested in the answers to 2-4. I understand that 1 is a bit more subjective, and despite appearing first, it has the smallest impact on my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. We respond to each concern below, following the same numbering used in the review. (W1) The goal of measuring variable importance for a particular model is different from ours. Scientists are often interested in understanding causal relationships between variables, but running randomized experiments is time-consuming and expensive. Given an observational dataset, we can use global variable importance measures to check if there is some relationship between two variables. If there is no relationship, we can be confident that there is little to no causal relationship. In this setting, the researcher is not interested in variable importance for a particular model but rather for the dataset as a whole. After isolating the handful of variables for which the variable importance is non-negligible, we can run randomized experiments to investigate true causal relationships. This is the use case in our case study: we trim the genes of interest in HIV load studies from 100 to five, considerably reducing the cost and time of further research. (W2.1) Yes, we can make this claim! This is the strategy we use to identify a handful of genes that are associated with high expression of Human Immunodeficiency Virus (HIV) RNA. Figure 5 displays the probability of the lower bound on conditional model reliance over Rashomon sets and data perturbations being greater than 0. This analysis could be repeated for any threshold value. (W2.2) We would first like to specify what we mean by stability. Stability is a desideratum of trustworthy analyses: a procedure is considered “stable” if small perturbations to the observed dataset (e.g., replacing a single observation) do not significantly change the computed statistic. Prior work has pointed out that there is wide agreement on the intuition behind stability, but very little on how to quantify it (Kalousis et al. 2005; Nogueira et al., 2017).
As such, in line with other stability research, we do not subscribe to a formal definition and treat stability as a general notion (Yu, 2013; Yu and Kumbier, 2020; Kalousis et al. 2005; Nogueira et al., 2017). Because variable importance is often used for high-stakes decision making, we need to ensure that our variable importance metrics are robust to such sampling issues. Otherwise, decision makers may be working with faulty, non-reproducible insights. While stability is fundamentally a finite-sample concern, we only ever work with finite-sample data in practice. Fisher et al.'s model class reliance (MCR) differs from ours in that (1) it does not consider stability issues, and (2) a range of min/max values is highly susceptible to outlier problems. As shown in Figure 1 (b) and in section D4 of the supplement (Figures 8-11), MCRs are scattered across these bootstrap iterations and do not generalize to different draws from the same DGP. These results suggest that using MCRs can lead to analyses that may not generalize well. In contrast, our intervals remain stable in both settings, as shown in Figure 4 and in section D4 of the supplement (Figures 8-11). Additionally, MCR only describes the min/max values that the model reliance could potentially be. These intervals may be less meaningful because they do not describe the interval of likely values of variable importance; outlier values of variable importance may drive MCRs to be very wide and not useful. In contrast, because we estimate a distribution of variable importance over Rashomon sets and bootstrap perturbations, we can compute measures like the highest probability regions (Figure 3 (bottom)), mean variable importance values (Figure 3 (top)), or the probability that each variable has variable importance above some target value (Figure 5). The distribution offers far more flexibility than the MCR. (W3) We discuss our definition of stability in our answer to W2.2.
The stability literature has often used subsampling/resampling methods, and our bootstrapping approach similarly yields stability. For example, Basu et al. (2018) use the bootstrapping procedure to find high-order interactions of features that are stable to data perturbations. Additionally, in the context of algorithmic stability (Bousquet and Elisseeff, 2002), bagging has been shown to stabilize decision tree algorithms (Buhlmann and Yu, 2002; Grandvalet, 2006). Further, we show empirically that RID’s intervals are far more similar between independently generated datasets than the other methods that account for the Rashomon effect in Figure 4, demonstrating that bootstrapping also stabilizes our variable importance measurements. References Sumanta Basu, Karl Kumbier, James B Brown, and Bin Yu. Iterative random forests to discover predictive and stable high-order interactions. Proceedings of the National Academy of Sciences, 115(8): 1943–1948, 2018. Olivier Bousquet and André Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2: 499–526, 2002. Peter Bühlmann and Bin Yu. Analyzing bagging. The Annals of Statistics, 30(4): 927–961, 2002. Yves Grandvalet. Stability of bagged decision trees. In Proceedings of the XLIII Scientific Meeting of the Italian Statistical Society, pages 221–230. CLEUP, 2006. Bin Yu and Karl Kumbier. Veridical data science. Proceedings of the National Academy of Sciences, 117(8): 3920–3929, 2020. Bin Yu. Stability. Bernoulli, 19(4): 1484–1500, 2013. Sarah Nogueira, Konstantinos Sechidis, and Gavin Brown. On the stability of feature selection algorithms. The Journal of Machine Learning Research, 18(1): 6345–6398, 2017. Alexandros Kalousis, Julien Prados, and Melanie Hilario. Stability of feature selection algorithms. In IEEE International Conference on Data Mining (ICDM’05), 2005.
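The resampling scheme described in the response — draw bootstrap replicates, compute variable importance over each replicate's Rashomon set, and average the per-replicate CDFs — can be sketched in a few lines of Python. Here `variable_importance_values` is a hypothetical stand-in for the expensive Rashomon-set computation; only the aggregation logic is illustrated.

```python
import random
import statistics

def variable_importance_values(sample):
    # Hypothetical stand-in for "variable importance of every model in the
    # Rashomon set of this sample"; a toy statistic plus a small spread to
    # mimic several near-optimal models reasoning slightly differently.
    base = statistics.mean(sample)
    return [base - 0.05, base, base + 0.05]

def rid_cdf(data, threshold, n_boot=2000, seed=0):
    """Empirical P(importance <= threshold), averaging the within-replicate
    CDF over bootstrap replicates (the outer expectation in Eq. 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]   # one bootstrap replicate
        vals = variable_importance_values(boot)   # VI over its Rashomon set
        total += sum(v <= threshold for v in vals) / len(vals)
    return total / n_boot
```

With a fixed seed the same replicates are reused across calls, so the estimated CDF is monotone in the threshold, and quantiles or tail probabilities (e.g., P(importance > 0)) can be read off directly.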
--- Rebuttal Comment 1.1: Title: Response Comment: Overall, I appreciated the thoughtful and detailed response of the authors. I've read the other reviews and responses, and I've updated my score accordingly (from borderline reject -> weak accept). There are some points that I think could be clarified further in the main paper, but I understand that space is at a premium. I'll summarize my reactions to the discussion re: the points I originally raised: (W1) I appreciate the point, and I would suggest clarifying this motivation upfront as trying to develop some causal hypotheses to test. As a minor nit, however, I would be careful about making claims like "if there is no relationship [in terms of variable importance], we can be confident that there is little to no causal relationship". For instance, if X1 -> X2 -> Y, then the variable importance of X1 may be zero (accounting for X2), but it still has a causal relationship with Y. It's fine in my view to use variable importance as a heuristic, but just be careful not to over-claim. (W2.1) Understood, thank you for the clarification. I do think there is some lingering confusion about what "distribution" we are discussing when making probability claims. **Is it fair to interpret this probability as literally "the probability over random bootstrap samples", which we hope is a good approximation of "the probability over draws from the data-generating process"?** (W2.2) Re: your definition of stability, thank you for the clarifications - please add these points to the introduction in some form, if space permits, particularly the first paragraph of your response. Clarifying that your informal definition of "stability" is a "general notion" but not a formal one, and one which you choose to assess via stability under re-sampling, would at least make this jump clearer for readers. See also my note under (W3).
(W3) You could consider defining the ideal notion of "stability" as the distribution of values over re-sampling from the population distribution, and then clarify that you are making a standard leap to treat re-sampling from the finite sample as re-sampling from the population distribution. You could further clarify that, while this is of course not an exact relationship, your experiments suggest that in some cases this works well enough. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful points and for your updated score. We greatly appreciate your continued active engagement in the reviewing process. We will make sure to clarify each of these points in the paper as much as space permits. Regarding (W2.1), it is fair to say that this probability is over random bootstrap samples, and we hope this is a good approximation of random draws from the data generating process. However, we would like to note that this hope is well motivated – the Dvoretzky–Kiefer–Wolfowitz inequality provides a probabilistic bound on how well the empirical data distribution mirrors the true data distribution, stating that the two converge at a $\sqrt{n}$ rate. Because a bootstrap sample can be thought of as a draw from the empirical data distribution, we expect the approximation to be strong for sufficient values of n.
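The Dvoretzky–Kiefer–Wolfowitz bound invoked in the reply has a simple closed form. The sketch below computes the half-width ε for which sup_x |F_n(x) − F(x)| ≤ ε holds with probability at least 1 − α (using Massart's tight constant 2 in the exponential bound):

```python
import math

def dkw_epsilon(n, alpha):
    """Half-width eps such that sup_x |F_n(x) - F(x)| <= eps with
    probability at least 1 - alpha, from the DKW inequality
    P(sup_x |F_n(x) - F(x)| > eps) <= 2 * exp(-2 * n * eps**2)."""
    return math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
```

At n = 1000 and α = 0.05 this gives ε ≈ 0.043, and quadrupling n halves ε — the √n convergence rate mentioned in the reply.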
Summary: The paper presents a new method to find important predictive variables for a set of good prediction models (Rashomon set). The Rashomon set is often unstable when some perturbations are added to the dataset. The present solution is based on an importance metric called the Rashomon Importance Distribution (RID). Bootstrap sampling is proposed to estimate the CDF of RID. The method assesses the importance of each variable using the CDF that takes into account the Rashomon sets from different bootstrapped samples. This reduces the instability in variable selection due to the instability of the underlying Rashomon set. Strengths: 1. Originality: The instability problem of variable selection for a Rashomon set is underexplored in the literature. The instability issue is more challenging than finding important variables for one model on one particular dataset. 2. Quality: The paper is relatively well written. The proposed method is intuitive. A set of simulations and real-data studies demonstrate the value of the proposed method in applications. 3. Clarity: The assumption of the proposed method is clear. Some basic consistency analysis is provided for the bootstrapping procedure. 4. Significance: The paper makes a good contribution to addressing the instability problem that deserves more attention from the field. Weaknesses: 1. Notation. For example, in equation (2), the probability on the LHS is not indexed by n but the expectation on the RHS is an empirical distribution on the n observations. This is very confusing for me. What exactly is the bootstrapping procedure estimating? Is the unknown expectation assuming n observations drawn from an unknown distribution, or something else? 2. Assumption 1. The connection between the Rashomon Importance Distribution for a particular feature j and the Rashomon Loss Distribution (RLD) is still not very convincing to me. The authors should provide more intuitive explanations in the main text (e.g. using a linear model as an example).
The current form of the assumption doesn't seem to hold for many model classes. 3. The method is demonstrated in terms of finding the important variables. But does the method increase the false discovery of many non-important variables? More clarification and experiments can be provided on this aspect. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. First, we would like to clarify the meaning of Equation 2. Equation 2 specifies the quantity we hope to estimate: the distribution of variable importance for all good models across all reasonable perturbations for the given dataset. However, considering all perturbations of the dataset is computationally intractable (note that there are ((2n-1) choose (n-1)) unique bootstrap samples for a dataset of size n), which is why we propose our bootstrapping procedure for estimating RID. We make no assumptions on the process generating the originally observed dataset; rather, we estimate this quantity for a fixed dataset. To make this notation clearer, we propose the following change: we will replace $RID_j(\varepsilon, \mathcal{F}, \ell; \lambda)$ with $RID_j(\mathcal{D}^{(n)}, \varepsilon, \mathcal{F}, \ell; \lambda).$ Partially in response to reviewer tvv9’s comment, we discuss Assumption 1 in the combined response. Finally, the RID framework does not necessarily increase false discovery rates. All of our simulated datasets include extraneous/irrelevant features. Chen’s DGP contains 6 irrelevant features, Friedman’s DGP contains 1 irrelevant feature, and the Monk DGPs contain 3 irrelevant features. The number of irrelevant covariates comes from the papers with these original simulation setups. Our experiments show that RID consistently produces lower variable importance values for unimportant variables than for important variables. Furthermore, the recovery experiments in Figure 2 (bottom) demonstrate that RID’s box-and-whisker range contains the true model reliance; for unimportant variables, the true model reliance is 0. Therefore, it is unlikely that RID would confuse an unimportant variable as seeming important.
Additionally, practitioners especially concerned with false discovery may integrate our framework with existing false discovery rate control methods (e.g., Benjamini-Hochberg or Knockoffs) and expect the first variables excluded to be the extraneous ones. Further, RID produces an entire distribution of variable importance, allowing practitioners to ask questions like “what is the probability each variable has a model reliance greater than 0?” By calibrating the lowest acceptable probability for calling a variable truly important, practitioners may control how many variables are considered “important”, further controlling false discovery rates.
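Of the false-discovery-control methods mentioned in the response, Benjamini-Hochberg is straightforward to sketch. This is a generic step-up implementation on ordinary p-values, shown only for illustration and not tied to RID's outputs:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return the (sorted) indices of hypotheses rejected at FDR level q.

    Step-up rule: sort p-values ascending and find the largest rank k with
    p_(k) <= k * q / m; reject the k hypotheses with the smallest p-values.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k_max = rank  # largest rank passing the criterion so far
    return sorted(order[:k_max])
```

For example, `benjamini_hochberg([0.001, 0.8, 0.01, 0.9], q=0.05)` rejects the hypotheses at indices 0 and 2 and keeps the two large p-values.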
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thoughtful and constructive feedback. We hope our response addresses some of the issues raised, and look forward to our continued discussion. Reviewers gRCY and eXQc noted that Rashomon sets can only be computed for a handful of model classes. We would like to note that generalized additive models (GAMs) and decision trees are two of the most effective model classes – often as accurate as neural networks – for tabular data, the setting we focus on in this paper. The papers that we cited (McTavish et al., 2022; Liu et al., 2022) showed performance at least as good as black box baselines for the challenging FICO dataset for the 2018 Explainable ML Challenge. It is also worth noting that the kind of variable importance we are interested in, variable importance for a DGP, is most relevant for tabular data. For example, in computer vision it is unlikely that any individual pixel may be meaningfully “important” or “unimportant”. Therefore, our framework is already useful for many practitioners. Further, the idea of computing Rashomon sets is quite new. As such, we expect the number of algorithms for computing Rashomon sets to continue to grow rapidly with time. Reviewers tvv9, zs1J, and eXQc noted that a more thorough justification of Assumption 1 is needed and that the current form of the assumption may not hold for many model classes. 
In response to these comments, we propose to relax the assumption from: If $\rho\left(RLD(\varepsilon, \mathcal{F}, \ell;\lambda), LD(\ell, n;\lambda) \right) \leq \gamma$, then $\rho\left(RID_j(\varepsilon, \mathcal{F}, \ell; \lambda), RID_j(\varepsilon, \{g^*\}, \ell ; \lambda) \right) \leq d(\gamma)$ for a monotonically increasing function $d: [0, \ell_{\max} - \ell_{\min}] \to [0, \phi_{\max} - \phi_{\min}]$ such that $d(0)=0.$ To the following relaxed assumption: $\rho\left(RLD(\varepsilon, \mathcal{F}, \ell;\lambda), LD(\ell, n;\lambda) \right) \to 0 \implies \rho\left(RID_j(\varepsilon, \mathcal{F}, \ell; \lambda), RID_j(\varepsilon, \{g^*\}, \ell ; \lambda) \right) \to 0$. The new assumption states that, as the distance between $RLD$ and $LD$ converges to 0, the distance between $RID_j(\varepsilon, \mathcal{F}, \ell; \lambda)$ and $RID_j(\varepsilon, \{g^*\}, \ell ; \lambda)$ will also converge to 0. We expect this to hold for many model classes and variable importance metrics since it simply requires that models that are extremely close to the DGP in terms of loss must reason on similar variables. Note that the examples given in Section C of the supplement also satisfy this modified assumption, as they show that a stronger version of it holds. Reviewer eXQc asked whether RID is sensitive to using estimated Rashomon sets (rather than exactly computed Rashomon sets, as in the main paper). Although we cannot yet give a definitive answer to this question, we conducted an experiment studying how consistent RID was when some of each Rashomon set is missing. In order to evaluate this, we randomly removed 25%, 50%, and 75% of models from each Rashomon set when computing RID for Friedman’s DGP. We used the same hyperparameter settings for Friedman’s DGP as in the main paper. We found that RID maintained equal performance in terms of importance classification and near-equal performance in recovery performance across all three settings.
Therefore, if estimation error is independent between bootstrap iterations and models are only omitted from the Rashomon set (rather than added), we believe that RID will function well with estimated Rashomon sets. Figure 1 of the attached documents illustrates this result. Reviewer eXQc suggested we evaluate RID using a more complicated data generation process, particularly a neural network. As such, we also evaluated the ability of RID to discriminate between extraneous and important variables for a DGP in the form of a neural network. Our DGP consisted of five fully connected layers with a rectified linear unit (ReLU) non-linearity between each pair. The first four layers used the weight matrix: $\begin{bmatrix} -3 & -2 & -1 & 1 & 2 & 3 \end{bmatrix}^T \begin{bmatrix} -1 & -0.9 & -0.8 & 0.8 & 0.9 & 1 \end{bmatrix}$. The final layer used the weight matrix: $ \begin{bmatrix} -1 & -0.9 & -0.8 & 0.8 & 0.9 & 1 \end{bmatrix}$. We generated 25 features uniformly at random between 0 and 1, and computed the outcome as a function of the first 6 features. Standard normal noise was added to the output of the DGP, and a binary label was constructed indicating whether the outcome was positive or negative. As shown in Figure 2 of the attached documents, we found that RID perfectly discriminates between extraneous and important features. However, RID did not recover all true MR values. We believe this is because sparse decision trees struggle to represent a dense neural network (which is something a user could figure out empirically and correct for). However, it is worth noting that this DGP is closer to an unrealistic “game-like” setting like chess, which requires a high-complexity model, than to a more practical setting like medical records, where not all information is observed and the outcome may be far from a deterministic function of the inputs. For such a dense DGP, another model class such as generalized additive models may be more appropriate.
Nonetheless, RID succeeds in identifying extraneous variables even when modeling a difficult DGP using a less appropriate model class. References Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer. Fast sparse decision tree optimization via reference ensembles. In AAAI Conference on Artificial Intelligence, 2022. Jiachang Liu, Chudi Zhong, Margo Seltzer, and Cynthia Rudin. Fast sparse classification for generalized linear and additive models. In Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. Pdf: /pdf/6c9793630bb6ad636cb4ad922dd1dbb443049e33.pdf
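The neural-network DGP from the additional experiment above can be reproduced almost verbatim from its description: a rank-one 6×6 weight matrix for the first four layers, ReLU between consecutive layers, a linear readout, standard normal noise, and a sign label. The matrix-times-column-vector convention in this sketch is an assumption not fixed by the text.

```python
import random

U = [-3, -2, -1, 1, 2, 3]
V = [-1, -0.9, -0.8, 0.8, 0.9, 1]
W = [[u * v for v in V] for u in U]           # rank-one 6x6 weight matrix

def relu(vec):
    return [max(0.0, v) for v in vec]

def matvec(mat, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in mat]

def dgp_sample(rng):
    """One draw from the described DGP: 25 uniform features, outcome a
    function of the first 6, plus standard normal noise, binarized."""
    x = [rng.random() for _ in range(25)]      # only the first 6 matter
    h = x[:6]
    for _ in range(4):                         # four W-layers, ReLU between
        h = relu(matvec(W, h))
    out = sum(w * hi for w, hi in zip(V, h))   # final 1x6 linear layer
    out += rng.gauss(0.0, 1.0)                 # standard normal noise
    return x, int(out > 0)                     # binary label: sign of outcome
```

Sampling repeatedly from `dgp_sample` yields a dataset in which features 7-25 are extraneous by construction, matching the setting of the reported experiment.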
NeurIPS_2023_submissions_huggingface
2023
L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors
Accept (spotlight)
Summary: In this paper, the authors propose a model to adapt pretrained Stable Diffusion for language-conditioned colorization. Specifically, they propose a luminance encoder that produces latents $y_{lum}$. This conditions the compression decoder and the denoiser of the diffusion model. The conditioning of the denoiser is done using a "Channel Extended Convolution block". Further, during sampling, they utilize gradients from a semantic segmentation model to help with instance differentiation. Comparisons are made to several language-conditioned and unconditional baselines across 2 datasets: extended COCO-Stuff and a (private) multi-instance dataset. They outperform these baselines on both quantitative metrics and Mechanical Turk. They show qualitative samples where they are able to colorize multi-level descriptions better than the baselines. Strengths: The results shown by their proposed approach are quite strong. The authors propose to augment a pretrained Stable Diffusion model to achieve language-conditioned colorization, which is novel and interesting. The authors provide adequate background information. Weaknesses: While the model works well, imho some novelty aspects are overclaimed and some simpler baselines are not ablated. I initially rate the paper slightly below borderline but I'm happy to update my rating if the authors address my concerns. See below for a detailed list of questions. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: **Most Important**: Amongst the standard training hyperparameters, the paper introduces $\lambda$ for sampling, the loss hyperparameters $\alpha$ and $\beta$, and the number of elements $N_{win}^2$ in the window for the loss in Eq 3). How are these hyperparameters tuned? Luminance-guided image compression ----------------------------------------- * The design of conditioning the “compression decoder” with the grayscale features makes sense.
To be more self-contained, I suggest that the authors a) briefly explain the architecture of the compression encoder + decoder and b) describe how exactly the conditioning from the luminance encoder features is incorporated in the compression decoder. * In Section 3.2, the authors introduce a discriminator loss but this is not explained or ablated. Is this introduced in this work or part of Stable Diffusion? * The authors introduce a loss that promotes smoothness by upweighting windows with high variance (Eq 3). The authors should ablate this weighting factor. Moreover, I think only the “luminance conditioning” is introduced in this work. So I suggest that the authors move the part that focuses on training the compression encoder+decoder with the loss functions into the preliminaries and focus only on the luminance conditioning in this subsection. Semantic-aligned latent representation ------------------------- * Imho, the novelty of this section is overclaimed. The “semantic-aligned” latent representation amounts to a zero-initialized convolution which takes as input the grayscale features and is added to the output of the denoising downsampling blocks. It is great that it works, but the authors should describe it as it is and not overclaim novelty. * Have the authors tried the obvious baseline that just concatenates $y_{lum}$ with $z_t$ at the inputs, with downsampling to match the shapes? Instance-aware sampling strategy ----------------------- * IIUC, the goal is to differentiate between different instances, and the output of a semantic segmentation model is used to provide this information as some sort of guidance or ground truth. If this is indeed the case, then the authors should ablate the simpler baseline, which is just to condition the denoising process on the output of a semantic segmentation model during training (as they do with the grayscale images). * What exactly is the semantic segmentation model used here?
The authors say (e.g., SAM), but it is not clear if SAM is used. * The authors should mention the shapes of $M_{est}$ and $M_{att}$ and how they are matched. * What is the role of the softmax? Since it is cross-attention with the text tokens, aren’t the attention values normalized across the #text tokens already? Results --------- * Can the authors confirm that, apart from the 2.9K descriptions that are removed, all images have color information? * The authors describe a multi-instance dataset. Are there plans to opensource this dataset? * In Figure 1, the authors describe three levels of colorization descriptions. The authors show some qualitative results per level; can they report some quantitative results as well? * Can the authors also report some of their failure cases, if any? Other ------- Figure 2 combines both the training and sampling loop of the diffusion process, so it is quite confusing. I suggest that the authors separate both and clearly label the diagram wherever the loss is applicable. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: I do not believe there is negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the very detailed review and suggestions. Given the character limit (6000), we have to make our response brief. For additional details, we welcome a more comprehensive discussion during the Author-Reviewer Discussions. Note Fig. S1-S13 and Tab. S1-S2 are included in the PDF attached to the global response. ### ***Most important*** - Q: How are these hyperparameters tuned? We follow Stable Diffusion [31] to set loss hyperparameters $\alpha=1.0$ and $\beta=0.5$, and empirically set hyperparameters $\lambda=0.1$ and $N_\mathrm{win}=7$. We visualize artifact maps with varying $N_\mathrm{win}$ to illustrate its practical role, as shown in Fig. S2. We further modulate $\lambda$ and present qualitative and quantitative results in Fig. S9 and Tab. S1, respectively. This demonstrates that the results are not sensitive to variations of $\lambda$ within a specific range. ### ***Luminance-guided image compression*** - Q: Explain the architecture of compression encoder/decoder and how luminance encoder features are incorporated. We implement the compression encoder and compression decoder with the same structure as Stable Diffusion [1], as shown in Fig. S1. As presented in Fig. 2 (a), the luminance encoder extracts multi-scale features from grayscale images (L133-135). Then, these features are added to corresponding scales of the compression decoder (L135-136). - Q: Explain discriminator loss in Sec. 3.2. We adopt exactly the same discriminator loss as Stable Diffusion [1], therefore we do not perform an ablation study for it. - Q: Ablating the weighting factor in Eq. 3. We apply an additional factor $\gamma$ to adjust the weight of $\mathcal{L}_\mathrm{rec}$ when training our model in the pixel space. The qualitative and quantitative results are shown in Fig. S10 and Tab. S1, respectively. This demonstrates that the results are also not sensitive to variations of $\gamma$ within a certain range.
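The exact form of Eq. 3 is not given in this exchange, so the following is only one plausible reading of "upweighting windows with high variance": a per-pixel weight map derived from the variance in an $N_\mathrm{win} \times N_\mathrm{win}$ neighborhood. The `1 + variance` weighting and the function name are assumptions, purely illustrative.

```python
import statistics

def window_variance_weights(img, n_win=7):
    """Per-pixel weights from local variance in an n_win x n_win window;
    higher-variance regions (candidate artifact locations) get larger
    weight. `img` is a 2-D list of luminance values."""
    h, w = len(img), len(img[0])
    r = n_win // 2
    weights = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [img[ii][jj]
                     for ii in range(max(0, i - r), min(h, i + r + 1))
                     for jj in range(max(0, j - r), min(w, j + r + 1))]
            weights[i][j] = 1.0 + statistics.pvariance(patch)
    return weights
```

On a flat region the weight stays at 1.0, while pixels near a sharp edge receive larger weights — consistent with the rebuttal's observation that a smaller window flags fewer but more precisely localized artifactual positions.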
### ***Semantic-aligned latent representation*** - Q: Novelty overclaim in Sec. 3.3. Thanks for pointing this out. Although the CEC block is mathematically equivalent to using a stack of convolutions to extract features and add their output to the downsampling block, we present it this way to underscore the core motivation and practical value of the module in establishing a semantically-aligned latent representation (See L159-160 and L165-168). We will revise this in the final version to tone down the claim of novelty, *e.g.*, removing ''novel'' in L157 and explaining the mathematical equivalence. - Q: The concatenation baseline. We conduct an additional ablation study ''concat''. Given the increase in the number of input channels, we extend the input channel of the first convolution layer. This modification leads to clear degradation in performance. We show the qualitative and quantitative results in Fig. S11 and Tab. S1, respectively. ### ***Instance-aware sampling strategy*** - Q: Segmentation guided baseline. We build a ''segmentation'' baseline, where estimated object contours are concatenated with luminance features. The qualitative and quantitative results are shown in Fig. S11 and Tab. S1, respectively. This demonstrates that our instance-aware sampling strategy (ISS) offers a more effective control strategy. - Q: Which semantic segmentation model is used? We use SAM [18] as the segmentation model when sampling colorization results. Note that our model could switch to an arbitrary segmentation model without finetuning with ISS. - Q: Shapes of $M^\mathrm{est}$ and $M^\mathrm{att}$. The shape of $M^\mathrm{att}$ is $\bar{h} \times \bar{w} \times N_\mathrm{obj}$, where $\bar{h}$ and $\bar{w}$ are the corresponding spatial resolution at the $l$-th CA block, and $N_\mathrm{obj}$ is the number of objects in the description. The shape of $M^\mathrm{est}$ is $H \times W \times N_\mathrm{obj}$, where $H$ and $W$ are spatial resolutions of the input grayscale image.
We downsample its spatial resolution to $\bar{h} \times \bar{w}$ to ensure compatibility with $M^\mathrm{att}$. - Q: The role of the softmax. We apologize for typos in L193-194, which should be rewritten as: L193 $\mathcal{M} \leftarrow \mathrm{Sigmoid}(M^\mathrm{att}_l)$ L194 $\hat{M}^\mathrm{att}_l \leftarrow M^\mathrm{att}_l - \lambda \nabla _\mathcal{M} \mathcal{L} _\mathrm{BCE}(\mathcal{M}, \hat{M}^\mathrm{est}_l)$ We integrate L193-194 into the attention mechanism. Specifically, we execute matrix multiplication to compute unnormalized attention maps, and then modify them (L193-194) followed by a softmax operation. This procedure aligns $M^\mathrm{att}_l$ with the downsampled estimated contours $\hat{M}^\mathrm{est}_l$. We will revise this in the final version. ### ***Result*** - Q: Whether all data have color information. Previous L-CoDe [41] and L-CoIns [5] ensure all descriptions of the extended COCO-Stuff and multi-instance datasets include color information. - Q: Release multi-instance dataset. L-CoIns [5] has released the multi-instance dataset. - Q: Per-level quantitative results. There is no distinct criterion to partition descriptions of the extended COCO-Stuff and multi-instance datasets into complete-level and partial-level representations. As such, we present quantitative results which include descriptions from both levels, as shown in Tab. 1 (left). We further show results with scarce-level descriptions in Tab. 1 (right) and Tab. 3 of the supplementary materials. - Q: Failure cases. Given that the estimated object contours are utilized solely in the latent space and subsequently become low-resolution, our method still encounters challenges when attempting to accurately colorize small objects with corresponding color descriptions. We provide failure cases in Fig. S12. ### ***Other*** - Q: Confusing Fig. 2. Thanks for the helpful suggestion. We have redesigned Fig. S13 to serve as the revised Fig.
2, enhancing the clarity of our pipeline. We will revise this in the final version.

---

Rebuttal Comment 1.1:

Title: Updated Rating

Comment: Thanks to the authors for the new experiments. I updated my rating to 7. Can the authors explain Fig. S2 more? Why is $N_\mathrm{win} = 7$ optimal?

---

Reply to Comment 1.1.1:

Title: Explanation of Fig. S2

Comment: Thanks for your comments. As shown in Fig. S2, we observe that a smaller $N_\mathrm{win}$ provides fewer, yet more accurate, artifactual positions. In contrast, a larger $N_\mathrm{win}$ presents a higher number of these positions, but at the expense of accuracy. To strike a balance, our approach is designed to maintain a reasonable level of accuracy while ensuring a sufficient number of evident artifactual positions.
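To make the attention-map manipulation described in this rebuttal (the corrected L193-194: sigmoid, a BCE gradient step toward the estimated contours, then softmax) concrete, here is a minimal numpy sketch. The helper name `manipulate_attention`, the step size $\lambda$, and the flattened `(h*w, n_obj)` shapes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def manipulate_attention(m_att, m_est, lam=0.1, eps=1e-6):
    """Nudge unnormalized attention logits toward estimated object contours.

    m_att: (h*w, n_obj) unnormalized cross-attention logits (hypothetical layout).
    m_est: (h*w, n_obj) downsampled contour masks with values in [0, 1].
    """
    m = np.clip(sigmoid(m_att), eps, 1.0 - eps)   # L193: map logits into [0, 1]
    # Analytic gradient of the binary cross-entropy loss w.r.t. m:
    # d/dm [-(t*log(m) + (1-t)*log(1-m))] = (m - t) / (m * (1 - m))
    grad = (m - m_est) / (m * (1.0 - m))
    m_att_hat = m_att - lam * grad                # L194: gradient step on the logits
    # Normalize over the object dimension with a softmax, as in the rebuttal.
    e = np.exp(m_att_hat - m_att_hat.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With zero logits and one-hot contour masks, the normalized attention at each position shifts toward the object whose contour covers that position, which is the intended alignment effect.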
Summary: The paper proposes a modification over Stable Diffusion to adapt it to perform language-based colorization of images with varying levels of description details. The method is based on three modifications. First, in addition to the auto-encoder used for SD, the authors add another encoder that is tasked with preserving the spatial features of the input image. Second, the convolutions in the downsampling layers of the UNet are replaced with a novel Channel-Extended Convolution to encourage the reliance on the encoded spatial features. Finally, a segmentation model is employed to encourage correct object-level color assignment. Extensive experiments are conducted to demonstrate the method's superiority over both language-conditioned colorization methods and automatic colorization methods. Strengths: - The authors conduct very extensive experiments, using various datasets and human evaluation. - Overall, the reviewer found the qualitative results to be convincing. - The idea of slightly modifying the Stable Diffusion architecture for other tasks is interesting and can possibly be generalized to other tasks. Weaknesses: Please note that the low confidence is due to the fact that the reviewer is not familiar with literature on image colorization, and therefore feels less confident in giving an assessment of this work. Readability: Overall, the reviewer found the method section to be a bit hard to follow. Intuitions are lacking in some parts, and familiarity with previous works is required to follow the explanations. - The writing of the preliminaries section on diffusion models is a bit confusing and inaccurate. For example, it starts with describing the inference (L. 106-107) but then turns to describe the training process (Eq. 1 and on). - Section 3.3 (Channel-Extended Convolution) is not entirely clear. A supporting figure demonstrating the VC vs. CEC would be beneficial to better understand the module, its motivation and novelty. 
- Section 3.4 heavily relies on familiarity with existing colorization methods, and the manipulation of the cross-attention maps is not clear to the reviewer (e.g., why employ a sigmoid non-linearity on the attention?) Evaluation: - The authors claim that ablating the Instance-aware Sampling Strategy (ISS) "significantly degrades the performance of the model to correctly assign colors to corresponding objects that have different descriptions." (L.256) However, the results in Fig. 5 and Tab. 1 appear to be pretty similar to the proposed method. - The comparisons to image editing methods are partial; however, given the extensive comparisons, this is not a major concern. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The authors mention the "ghosting effect" several times, but no explanation is given as to what it is (e.g., L. 153). - Why is the CEC block only used in the first half of the UNet (downsampling)? - How robust is the method to OOD inputs? For example, what happens if you train on COCO-Stuff and evaluate on multi-instance? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are discussed briefly. The reviewer thinks that a more in-depth comparison of runtime to SD and to the baselines is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable feedback. Given the character limit (6000), we have to keep our response brief. For additional details, we welcome a more comprehensive discussion during the Author-Reviewer Discussions. As per the rebuttal instructions, Fig. S1-S13 and Tab. S1-S2 are shown in the PDF attached to the global response.

### ***Readability***

- Q: Confusing preliminaries.
Thanks for the suggestions. To enhance the clarity and comprehensibility of the preliminaries section, we will incorporate additional background information and strategically reorganize the arrangement of the paragraphs in the final version.
- Q: Figure about VC vs. CEC.
The CEC block is designed to extend the channel number of the vanilla convolution (VC) block so that the model can utilize the extended channels to effectively capture the local structural semantics of the luminance in the latent space (L159-160). By initializing the weights of the extended channels to zero, the CEC block ensures that our model maintains functional equivalence to the pretrained generative model prior to training (L166-168). We provide a figure demonstrating VC vs. CEC in Fig. S6 for better clarity. In application, we feed the concatenation of resized luminance features and feature maps into our CEC block to extract joint feature maps.
- Q: Relying on existing methods.
Previous works assume that users provide comprehensive color descriptions for most of the objects in the image, which causes suboptimal performance (L25-26). Given the inherent ambiguity in the number of objects mentioned in any-level descriptions (L38-39), we leverage the pretrained cross-modality generative model (*i.e.*, Stable Diffusion [31]) to utilize its robust language understanding for mentioned objects and rich color priors for unmentioned ones (L39-41).
- Q: Cross-attention maps.
We apologize for the typos in L193-194, which should be rewritten as:

L193 $\mathcal{M} \leftarrow \mathrm{Sigmoid}(M^\mathrm{att}_l)$

L194 $\hat{M}^\mathrm{att}_l \leftarrow M^\mathrm{att}_l - \lambda \nabla _\mathcal{M} \mathcal{L} _\mathrm{BCE}(\mathcal{M}, \hat{M}^\mathrm{est}_l)$

We briefly introduce the manipulation procedure applied to the attention maps: *(i)* Matrix multiplication is first utilized to compute the unnormalized attention maps $M_l^\mathrm{att} \in \mathbb{R}^{\bar{h} \times \bar{w} \times N_\mathrm{obj}}$, where $\bar{h}$ and $\bar{w}$ are the spatial resolution at the $l$-th CA block, and $N_\mathrm{obj}$ is the number of objects in the description. *(ii)* Then, we apply L193-194 to manipulate the attention maps $M^\mathrm{att}_l$, utilizing the sigmoid function to confine the value range of the attention maps to $[0,1]$, the same value range as that of the estimated object contours $M^\mathrm{est}_l$. *(iii)* Finally, we apply a softmax operation to normalize the modified $\hat{M}^\mathrm{att}_l$. We will revise this in the final version.

### ***Evaluation***

- Q: Effectiveness of the Instance-aware Sampling Strategy (ISS).
The performance improvement from adopting ISS is notable. As presented in Tab. 1 (left), when provided with complete-level and partial-level descriptions, the absence of ISS (*W/o* ISS) results in a decline in PSNR from 25.97 to 25.32 (-0.65) on the extended COCO-Stuff dataset. Furthermore, for the multi-instance dataset, which provides samples featuring multiple instances with different visual characteristics within a single image, the PSNR drop becomes larger, from 25.51 to 24.57 (-0.94). Figure 5 reveals that *W/o* ISS may not correctly assign colors to corresponding objects that have different descriptions, *e.g.*, the woman on the right in the first row being incorrectly colorized in pink instead of dark blue, and the man's white pants in the second row being colorized yellow.
It is only in the context of scarce-level descriptions, which inherently lack meaningful color information for objects, that the model without ISS exhibits comparable performance. In these scenarios, the significance of ISS understandably diminishes.
- Q: The comparisons to image editing methods are partial.
We appreciate the reviewer's understanding. The primary objective of this study is to address persistent challenges in the colorization task, *i.e.*, colorizing images with descriptions of varying detail levels.

### ***Questions***

- Q: Ghosting effect.
This concept is recognized in the fields of photography and image restoration [R1]. In the context of our work, the ''ghosting effect'' means that the model synthesizes an image resembling a composite created from multiple blended images. *E.g.*, in the second row of Fig. 5, *W/o* SLR appears to produce an image of a man in a yellow shirt and subsequently merge it with the original grayscale image. We will explain this term clearly in the final version.
- Q: CEC block in downsampling.
We strategically apply the CEC block in the downsampling modules to expedite training convergence. Additionally, we conduct an ablation study in which we replace the VC blocks in both the upsampling and downsampling modules with CEC blocks, denoted ''upsampling''. The qualitative and quantitative results are shown in Fig. S7 and Tab. S1, respectively. This ablation results in comparable performance.
- Q: Robustness to OOD data.
We train models on the extended COCO-Stuff dataset and subsequently evaluate them on the multi-instance dataset, and vice versa. We name this experiment ''OOD'', and provide qualitative and quantitative results in Fig. S8 and Tab. S1, respectively. These results demonstrate a certain degree of robustness of our method.

### ***Limitations***

- Q: In-depth comparison of runtime to baselines.
We include the training and inference time for all methods in Tab. S2, which indicates that our method costs more inference time.
This could be mitigated by using advanced fast sampling methods (*e.g.*, DPM-Solver++).

[R1] YC Shih, D Krishnan, F Durand, and WT Freeman. Reflection removal using ghosting cues. In *CVPR*, 2015.
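The zero-initialization property of the CEC block discussed in the rebuttal above (functional equivalence to the pretrained model before training begins) can be illustrated with a small numpy sketch. For brevity, it models a 1x1 convolution as a per-pixel matrix multiply; all names and sizes here are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained "vanilla convolution": c_in input channels -> c_out output channels.
# A 1x1 convolution is just a per-pixel matrix multiply over flattened pixels.
c_in, c_ext, c_out, n_pix = 4, 2, 8, 16
w_pre = rng.normal(size=(c_out, c_in))   # frozen pretrained weights
feat = rng.normal(size=(c_in, n_pix))    # latent feature map (flattened pixels)
lum = rng.normal(size=(c_ext, n_pix))    # resized luminance features

# Channel-extended convolution: reuse the pretrained weights and zero-init
# the weights that act on the newly added luminance channels.
w_ext = np.concatenate([w_pre, np.zeros((c_out, c_ext))], axis=1)

out_vanilla = w_pre @ feat
out_cec = w_ext @ np.concatenate([feat, lum], axis=0)
# Because the extra weights start at zero, out_cec equals out_vanilla, so the
# extended block behaves exactly like the pretrained one before training.
```

During training, the zero block is free to learn nonzero weights, letting the model gradually exploit the luminance channels without disturbing the pretrained prior at initialization.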
Summary: This paper presents a novel approach to image colorization, which demonstrates superior performance among language-based colorization methods. The proposed model employs language descriptions at varying detail levels to produce high-quality, customizable colorized images by diffusion in the latent space. A key innovation is L-CAD's adaptive understanding of any-level descriptions, facilitating precise colorization based on user requests. To ensure proper spatial alignment with grayscale inputs and avoid ghosting effects, a luminance-guided compression module and a channel-extended convolution operator are introduced. An instance-aware sampling strategy was adopted from previous literature to enhance color assignment to objects. L-CAD exhibits very good results in both quantitative and qualitative experiments. Strengths: - The paper is well-written and easy to follow. - The problem that they tackle is interesting and valuable. - The results are promising and the ablation studies to disentangle different modules have been conducted. Weaknesses: - The method is a little complex and needs multiple modules to work properly. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can the authors please compare training and inference time between their method and the baselines? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - The authors have mentioned that their model is relatively slow. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and suggestions. As per the rebuttal instructions, Fig. S1-S13 and Tab. S1-S2 are shown in the PDF attached to the global response.

### ***Weaknesses***

- Q: The method is a little complex.
Previous works [4,5,6,24,41] implicitly assume that users provide comprehensive color descriptions for most of the objects in the image, which often leads to suboptimal performance, especially for objects without corresponding color descriptions (L24-26). To address this issue, we utilize Stable Diffusion [31]'s robust language understanding for mentioned objects and rich color priors for unmentioned ones (L38-41). Therefore, the model needs to be carefully designed to align with Stable Diffusion in the pixel space, the latent space, and the sampling strategy. As illustrated in Sec. 4.3, we demonstrate that every component of our model is indispensable for effectively performing language-based colorization with any-level descriptions.

### ***Questions***

- Q: Training and inference time.
Thanks for the suggestion; we present the training and inference time of all methods in Tab. S2.

### ***Limitations***

- Q: The model is relatively slow.
As illustrated in L278-279, this limitation could be mitigated by using advanced fast sampling methods (*i.e.*, DPM-Solver++ [R1]). We show qualitative and quantitative results using DPM-Solver++ in Fig. S5 and Tab. S1 of the attached PDF, respectively. The accelerated inference time is presented in Tab. S2.

[R1] C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2023.
Summary: This paper presents a well-engineered method for Stable Diffusion-based text-guided image colorization. The paper addresses the challenge of structural fidelity to the original grayscale image with additional conditioning modules in the denoiser architecture. The paper also proposes an instance-aware sampling procedure that uses object segmentations to encourage accurate color assignment. Extensive quantitative and qualitative comparisons show the proposed method outperforms the compared baselines. Strengths: - This is a well-engineered method. The design decisions appear reasonable and are justified with ablation studies. - The paper is well written, the ideas are clear and easy to understand. - The qualitative results are remarkable and quantitative comparisons show the advantage of the proposed method in standard benchmarks and user preference studies. Weaknesses: - The colorization baselines in Figure 1 and 4 look much weaker than in the original papers (e.g. [43] Fig. 6 and [40] Fig. 5). I wonder how the proposed method would compare in the referenced figures. - It would be interesting to include in the quantitative comparison the number of sampling steps required for each method, as this is a disadvantage of the diffusion-based methods compared to the GAN-based ones. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for this thoughtful review, and we are glad to see their positive assessment. As per the rebuttal instructions, Fig. S1-S13 and Tab. S1-S2 are shown in the PDF attached to the global response.

### ***Weaknesses***

- Q: Colorization baselines look much weaker.
For the performance assessment of [40] and [43], we adopt the publicly available pretrained weights (*i.e.*, [https://github.com/MenghanXia/DisentangledColorization](https://github.com/MenghanXia/DisentangledColorization) and [https://github.com/shuchenweng/CT2](https://github.com/shuchenweng/CT2)), which ensures a transparent and objective comparison. The scenarios displayed in Fig. 1 and Fig. 4 are particularly challenging due to the abundance of small objects with intricate textures. In our observations, when objects in grayscale images are difficult to recognize, [40] and [43] tend to produce undersaturated colorization results. Leveraging the robust language understanding and rich color priors of Stable Diffusion [31], our model can adeptly achieve vivid colorization, even in such challenging cases. To further show the superior performance of our proposed method, we present supplementary qualitative experiments against [40] and [43] in typical outdoor scenarios where they have been known to perform satisfactorily. Please refer to Fig. S4.
- Q: The number of sampling steps.
It is worth noting that, among the methods enumerated in Tab. 1 and Tab. 3, ours is the only one based on diffusion models. Therefore, we conduct an additional ablation study focusing on the impact of varying the number of sampling steps $N_\mathrm{step}$. The corresponding evaluation scores and colorization results are presented in Tab. S1 and Fig. S5, respectively. Furthermore, we demonstrate that our method can be accelerated by adopting advanced fast sampling methods (*i.e.*, DPM-Solver++ [R1]) while maintaining high-quality colorization results, as shown in Fig. S5.

[R1] C.
Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2023.
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and for acknowledging that the paper is well-written (Gu3m/Gyr2/whze), well-motivated (Gu3m), and easy to follow (Gyr2/whze); that the proposed method is effective (Gu3m), reasonable (Gyr2), interesting and generalizable (zuAx), and novel and interesting (oFt4); that the authors conduct extensive ablation and even human evaluation experiments (Gu3m/Gyr2/whze/zuAx); that the results are advantageous (Gyr2), promising (whze), convincing (zuAx), and quite strong (oFt4); that the tackled problem is interesting and valuable (whze); and that the background information is adequate (oFt4). We have carefully considered your comments and will take them into account to further improve the quality of our work. Please find below our responses to the specific concerns of each individual reviewer. Note that Fig. S1-S13 and Tab. S1-S2 can be found in the PDF attached to the global response. Once our paper is accepted, we will release the code and checkpoints to facilitate reproducibility and further research in this area. We remain committed to addressing any further questions or concerns from the reviewers promptly. Pdf: /pdf/f74bdaeb07182f62aebc10441d9cf222945a7442.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper addresses the problem of colorizing images with descriptions of diverse levels of detail. The key idea of the work is to propose a unified model that adaptively understands any-level descriptions by leveraging a pretrained cross-modality generative model. Additionally, the paper introduces modules that aid in preserving local spatial structures and prevent the ghosting effect by aligning with input conditions in both the pixel space and the latent space. Further, the paper presents an instance-aware sampling strategy to correctly assign colors to corresponding objects, enabling effective colorization in diverse and complex scenarios. The work demonstrates state-of-the-art performance among both automatic and language-based colorization methods. Strengths: The paper is well-written and well-motivated. Qualitative and quantitative comparisons to existing methods show the effectiveness of the proposed approach in language-based colorization and automatic colorization. User studies and ablation studies sufficiently justify the key conclusions made in the paper. Weaknesses: One of the key weaknesses of the paper is the readability of a few sub-sections. For example, I struggle to understand the detailed implementation/intuition for section 3.2, where the paper introduces luminance-guided image compression and how it helps preserve local structural semantics. It is not very intuitive as to how it can help bridge alignment between colorization results and grayscale images. Some more detailed intuitions could help readers to better follow this section. Similarly, the justifications for Equation 3 could be further elaborated. In lines 153-154, on the ghosting effects, I am curious to know if this is still the case when using region-based guidance (for example, see the two references below). I will revise my scores based on the clarifications in the rebuttal. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., ... & Wang, L. (2023). 
Reco: Region-controlled text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14246-14255). Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., ... & Lee, Y. J. (2023). Gligen: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22511-22521). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful and constructive feedback. As per the rebuttal instructions, Fig. S1-S13 and Tab. S1-S2 are shown in the PDF attached to the global response.

### ***Weaknesses***

- Q: Detailed implementation/intuition for Sec. 3.2.
Our primary intuition is to leverage Stable Diffusion [31]'s robust language understanding and rich color priors for language-based colorization with any-level descriptions (L125-126). To realize this goal, we analyze the architecture of Stable Diffusion. This method adopts a compression encoder $\mathcal{E}$ to encode an image $x$ from the pixel space into the latent space as $z = \mathcal{E}(x)$, and a compression decoder $\mathcal{D}$ to reconstruct the image as $\tilde{x}=\mathcal{D}(z)$ (L118-122). A critical observation is that Stable Diffusion lacks the ability to preserve the local spatial structures of input grayscale images (L128-129). To remedy this limitation, we propose an additional luminance encoder $\hat{\mathcal{E}}$ in the pixel space as a bridge to align colorization results with grayscale images (L130-132). As shown in Fig. 2 (a), the luminance encoder extracts multi-scale features from grayscale images, which preserve the local structural semantics of the grayscale images $\hat{\mathcal{E}}(x^{\mathrm{lum}})$ (L133-135). These features are added directly into the compression decoder, guiding its decoding process (L135-136). We implement the luminance encoder $\hat{\mathcal{E}}$ so that its architecture mirrors that of the compression encoder $\mathcal{E}$ of Stable Diffusion. The weights of the compression encoder $\mathcal{E}$ and compression decoder $\mathcal{D}$ are fixed to retain the prior knowledge of the pretrained model (L136-138). A visualization of the architecture of both the compression encoder and decoder is provided in Fig. S1. We will revise this in the final version for better readability. We will release the code, offering a more comprehensive understanding, once the paper is accepted. 
- Q: Equation 3 could be further elaborated.
When training the luminance encoder $\hat{\mathcal{E}}$, our goal is to identify and minimize the discrepancy between colorization results and grayscale images (L140-141). Since we observe that erroneous pixels significantly damage visual perception (L139-140), we estimate an artifact map $M^\mathrm{art}_{h,w}$. This map serves to indicate the probability of encountering artifacts at a specific spatial position $(h,w)$ within the colorized result $\tilde{x}$. Specifically, we calculate the residual between the ground-truth image and the colorized result as $\delta = x - \tilde{x}$ (L142-143). Next, we compute the variance of this residual within local square windows at each position (L143-144), as shown in Eq. 3. Given that artifacts typically exhibit high-frequency characteristics, areas with higher variances likely indicate where these artifacts are. Finally, we apply the artifact map as a weight on the image reconstruction loss (L146-147) to focus on the pixels where the model needs to further minimize the discrepancy. We further visualize the artifact map with different $N_{\mathrm{win}}$ to demonstrate the effectiveness of the estimated artifact map, as shown in Fig. S2. We will revise this in the final version for better readability.
- Q: Using region-based guidance.
To investigate whether object boxes could effectively mitigate the ghosting effects, we conduct two additional ablation studies by replacing modules in the latent space with the corresponding components from references [R1] and [R2]. These modifications are individually designated ''L-ReCo'' and ''L-GLIGEN''. To evaluate these ablation studies on the extended COCO-Stuff and multi-instance datasets, we employ DINOv2 [R3] to estimate object boxes. While the results indicate a reduction in ghosting effects, they remain visible. 
This is because object boxes can only offer a coarse-grained location of primary objects, which is far less precise compared to the fine-grained luminance features provided by grayscale images. We show qualitative and quantitative results in Fig. S3 and Tab. S1, respectively.

[R1] Z Yang, J Wang, Z Gan, L Li, K Lin, C Wu, N Duan, Z Liu, C Liu, M Zeng, and L Wang. ReCo: Region-controlled text-to-image generation. In *CVPR*, 2023.

[R2] Y Li, H Liu, Q Wu, F Mu, J Yang, J Gao, C Li, and YJ Lee. GLIGEN: Open-set grounded text-to-image generation. In *CVPR*, 2023.

[R3] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. DINOv2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.

---

Rebuttal Comment 1.1:

Comment: Thanks for clarifying my questions. I updated my review.
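The artifact-map computation from the Eq. 3 discussion above (variance of the residual within local square windows) can be sketched as follows. The function name, the single-channel simplification, and the reflect padding are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def artifact_map(gt, pred, n_win=7):
    """Estimate an artifact map as the local variance of the residual.

    gt, pred: (H, W) ground-truth and colorized images (single channel here
    for simplicity). n_win: side length of the local square window (N_win).
    """
    delta = gt - pred                      # residual between GT and result
    pad = n_win // 2
    padded = np.pad(delta, pad, mode="reflect")
    h, w = delta.shape
    out = np.empty_like(delta, dtype=float)
    for i in range(h):
        for j in range(w):
            # Variance of the residual inside the n_win x n_win window
            # centered at (i, j); high-frequency errors yield high variance.
            out[i, j] = padded[i:i + n_win, j:j + n_win].var()
    return out
```

Consistent with the Fig. S2 discussion, a smaller `n_win` localizes fewer but sharper artifactual positions, while a larger window flags more positions at coarser precision; the map then weights the reconstruction loss toward erroneous regions.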
BERT Lost Patience Won't Be Robust to Adversarial Slowdown
Accept (poster)
Summary: The paper considers adversarial slowdown attacks on multi-exit text classification models based on BERT. They propose an attack, Waffle, which adapts text adversarial example attacks to a slowdown objective. They measure the susceptibility of multi-exit models for GLUE to their attack, evaluate attack transferability, analyze their generated adversarial text, and discuss mitigations for this attack. Strengths: The evaluation is quite broad in the classification tasks considered, multi-exit models used, and baselines. I also appreciated the transferability analysis and the "UAP". I was interested to see the linguistic analysis of attacks. The linguistic markers mentioned seem plausible and I like the qualitative analysis in Table 3. However, I would also like to see some quantitative analysis of this property. The paper is the first I am aware of to consider adversarial slowdown on text classifiers. This is a natural problem and may be of interest as text models become increasingly large. Weaknesses: Reading the paper, I was surprised that I never saw an experiment's running time measured, since this is the motivation for the attack. I think this is a pretty important consideration, especially for defenses. If the running time of a defense (especially ChatGPT) is higher than the actual slowdown, there's no point in applying the countermeasure (this is never discussed in the subsection). The attack often creates incoherent text, as seen in Table 3. This seems to be a feature of text attacks, rather than a limitation specifically of the Waffle attack. However, it seems that the attack could be overfitting to a specific "distance metric" for the attack. Using some other distance functions, such as token/character edit distance, character replacement, as supported in TextAttack may also be useful to understand the generality of the attack. ChatGPT is likely to have data leakage here, making it a bad scientific baseline. 
I would encourage at least a discussion of this. This may be one reason why the more standard grammar checking tools are unable to correct the attacks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What running time impact do your adversarial slowdown attacks have? How much time do the grammar checking countermeasures take? How often do subject-predicate agreement and changing named entities result in slowdown? What fraction of successful slowdowns fall into these buckets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The running time of experiments is never measured, which is a limitation that is not mentioned in the paper. Data leakage in ChatGPT may also be a factor in the countermeasures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time to read and provide feedback. Below, we provide answers to your questions and concerns. We will also include them in the final version of our paper.

—

> (Question 1) What running time impact do your adversarial slowdown attacks have?

We acknowledge the importance of measuring our attack’s impact on the actual runtime of multi-exit models. We first show our attack’s impact on runtime in the results below. All samples are crafted using TextFooler with DeeBERT as the victim model (CLEAN denotes the clean inputs and WAFFLE their perturbed counterparts):

| Task | Efficacy (CLEAN → WAFFLE) | Runtime (CLEAN → WAFFLE) |
|------|---------------------------|--------------------------|
| QQP  | 0.36 → 0.22 | 7.5 s → 9.1 s |
| RTE  | 0.34 → 0.12 | 2.7 s → 3.4 s |
| MRPC | 0.35 → 0.09 | 3.7 s → 4.7 s |
| QNLI | 0.35 → 0.10 | 7.8 s → 10.7 s |
| CoLA | 0.34 → 0.13 | 7.8 s → 10.0 s |

As is evident, a reduction in efficacy is correlated with an increase in runtime. Our use of efficacy in the paper mainly comes down to it being hardware-agnostic, as the exit layer will not change between models run on different machines. This makes it a strong metric for quantifying the speed-up of multi-exit models, regardless of the exact experimental setup.

—

> (Question 2) How much time do the grammar-checking countermeasures take?

The runtime of the grammar-checking countermeasures explored in Sec. 7 is higher than the inference time of all multi-exit models our work considers. As the reviewer accurately points out, this would make them useless from a pure runtime standpoint. However, we reason that the purpose of using these defenses was primarily exploratory, aiming to understand further why specific text causes more slowdown and how modifying such text can revert this slowdown. Moreover, input sanitization is already used in commercial models. 
Claude-2 [1], a conversational model similar to ChatGPT, already employs input-filtering techniques, which we believe is a promising future work direction. We acknowledge the importance of defense mechanisms with lower computational cost and consider this to be an important area for future work. We will include this discussion in Appendix E. [1] https://claude.ai/ — > (Concern 1) The attack often creates incoherent text, as seen in Table 3. This seems to be a feature of text attacks, rather than a limitation specifically of the Waffle attack. However, it seems that the attack could be overfitting to a specific "distance metric" for the attack. Using some other distance functions, such as token/character edit distance, character replacement, as supported in TextAttack may also be useful to understand the generality of the attack. We acknowledge that incoherent texts are a limitation of adversarial attacks in the natural language processing domain, and WAFFLE is not exempt from this limitation. The specific distance metric used for WAFFLE depends on its underlying adversarial crafting algorithm; in the case of our experiments, we use the distance metrics in line with the original work of [1] and [2]. We clarify that WAFFLE itself has no distance metric, instead providing an objective function that adversarial crafting algorithms may utilize with their own respective distance metrics. We leave the exploration of different distance metrics as future work, as well as exploring newer adversarial crafting algorithms that boast greater sentence coherence (e.g., [3]). [1] Jin et al., Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment, arXiv 2019 [2] Yoo & Qi, Towards Improving Adversarial Training of NLP Models, ACL 2021 [3] Li et al., Contextualized Perturbation for Textual Adversarial Attack, ACL 2021 — > (Question 3) How often do subject-predicate agreement and changing named entities result in slowdown?
What fraction of successful slowdowns fall into these buckets? To stress the prevalence of subject-predicate disagreement and changing of named entities in samples crafted by WAFFLE, we follow the reviewer's advice and conduct an experiment that categorizes such samples into "buckets." Taking the top-100 slowdown-inducing adversarial texts from QQP crafted on DeeBERT, we quantify the number that contain at least one instance of subject-predicate disagreement or at least one instance of a changed named entity. 84% introduce some form of subject-predicate disagreement, and 31% change a named entity. This result is consistent with our linguistic analysis and speaks to the importance of these factors to BERT when performing inference. An important consideration is that not all samples contained a named entity, explaining why the prevalence of named entity changes was much lower despite consistently contributing to the slowdown. We thank the reviewer for considering this quantitative approach and will update our final paper to include this result. — > (Question 4) ChatGPT is likely to have data leakage here, making it a bad scientific baseline. I would encourage at least a discussion of this. This may be one reason why the more standard grammar checking tools are unable to correct the attacks. We thank the reviewer for bringing this to our attention. We acknowledge the likelihood of ChatGPT leaking data and will discuss the risks in our limitations and societal impact section (Appendix E). We did not mean to suggest ChatGPT is a practical defense (execution time aside). Instead, since other input sanitization (Grammarly) failed, we used ChatGPT as an accessible proof-of-concept of input sanitization via a conversational model, showing that such sanitization may be effective as a defense. We envision future work on evaluating the robustness and efficiency of input sanitization defenses using conversational models (with better controlled and known datasets).
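As an aside, for concreteness about the hardware-agnostic efficacy metric in our response to Question 1 above, a minimal sketch follows. The function name and exact normalization here are illustrative assumptions rather than the paper's precise formula; the point is that the score depends only on which exit layer each input takes, not on hardware:

```python
def efficacy(exit_layers, num_layers):
    """Illustrative hardware-agnostic efficacy score: the average
    fraction of the network's layers *saved* by early exits.
    0.0 means every input traversed all layers; values near 1.0
    mean very early exits. exit_layers: exit layer taken per input.
    NOTE: a simplified stand-in, not necessarily the exact metric."""
    return sum(1 - e / num_layers for e in exit_layers) / len(exit_layers)

# A slowdown attack pushes exits later, lowering efficacy:
clean = efficacy([4, 6, 8], num_layers=12)        # earlier exits
attacked = efficacy([10, 12, 12], num_layers=12)  # forced deep exits
```

Because the score is computed purely from exit indices, the same adversarial text yields the same efficacy drop on any machine, which is why we prefer it to raw wall-clock runtime.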
--- Rebuttal Comment 1.1: Title: Thank you Comment: I'm happy with the response and will increase my score. --- Reply to Comment 1.1.1: Title: Thank You Comment: Dear Reviewer 69Mr, We would like to thank you again for taking the time to read our rebuttal. We are happy that our response addresses your concerns and questions. We will make sure our responses are reflected in the final version of our paper.
Summary: This paper proposes WAFFLE, a slowdown attack that generates natural adversarial text to bypass early exits. Empirical results evaluate the robustness of multi-exit language models against adversarial slowdown. Strengths: 1. The paper is well-written and easy to follow. 2. The evaluation is comprehensive. Weaknesses: 1. It seems reference [1] does similar slowdown attacks, but [1] works in the computer vision domain. What are the differences between WAFFLE and [1]? 2. Does WAFFLE still work on non-transformer-based architectures, such as LSTMs? [1] Hong, S., Kaya, Y., Modoranu, I.V. and Dumitras, T., 2020, October. A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference. In International Conference on Learning Representations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the proposed method still work properly on other transformer architectures, such as GPT-2 or RoBERTa? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time to read and provide constructive feedback. Below, we answer the questions and concerns. We will also include this discussion in the final version of our paper. — > (Weakness 1) It seems reference [1] does similar slowdown attacks, but [1] works in the computer vision domain. What are the differences between WAFFLE and [1]? We first clarify that while our work and the work done in [1] both achieve slowdown against multi-exit models, differences in problem domain lead to differences in approach. (1) Against language models, we often do not have access to input gradients (which is straightforward in attacks against computer-vision models). We thus need to design a new slowdown objective compatible with non-gradient-based attacks. (2) We must bound the values of our slowdown objective within [0, 1]. We found that the objective used in the prior work [1] is unbounded on [0, ∞); thus, a straightforward adaptation of this objective for adversarial text-attack algorithms leads to unbounded perturbations, and the resulting text completely differs from the original one. (3) The attack against language models works with discrete text inputs; not all embedding-level perturbations we compute exist as words, and small changes to the input (words, characters) can result in large logit changes. We must search for candidate words (or word combinations) for substitution. Due to the space limits, we summarized the challenges in Line 74–78, but for clarity, we will include this discussion in the final version of our paper. [1] Hong et al., A Panda? No, It’s a Sloth: Slowdown Attacks on Adaptive Multi-exit NN Inference — > (Weakness 2) Does WAFFLE still work on non-transformer-based architectures, such as LSTMs? Given the nature of our attack, it is necessary that a victim model contain early exits. To the best of our knowledge, there is no prominent work on LSTM-based early-exit models for the natural language processing domain.
So, we have not tested on LSTM models. We focus on transformer-based architectures due to their effectiveness and future potential in the NLP domain. However, if there is a recommended LSTM model, we would be happy to investigate. — > (Question 1) Can the proposed method still work properly on other transformer architectures, such as GPT-2 or RoBERTa? We provide evidence of WAFFLE’s transferability between different models and architectures by testing three different model-architecture pairs in our main analysis in Sec. 4. Further combinations are tested in Sec. 5, which shows that WAFFLE transfers well between transformer-based models. To answer the reviewer's question explicitly, we tested several multi-exit models with RoBERTa and found slowdown results similar to the BERT versions. This is unsurprising because RoBERTa is essentially a BERT model but with a modified and improved set of hyperparameters. At the time of our implementation, GPT-like early-exit models were not yet prominent, and it would have taken significant time and resources to modify GPT-2 with early-exit methods and retrain the models. We acknowledge that some now exist, such as [1, 2], and results would be interesting to see. [1] Schuster et al., Confident Adaptive Language Modeling, NeurIPS 2022 [2] Din et al., Jump to Conclusions: Short-Cutting Transformers With Linear Transformations, arXiv 2023 --- Rebuttal Comment 1.1: Title: Thanks for explanation. Comment: Thanks for the detailed explanation. The reviewer is satisfied with the response.
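To make the bounded-objective point above concrete, here is a hypothetical sketch of a slowdown score that is bounded in [0, 1] by construction. This is an illustration of the design constraint we describe, not WAFFLE's exact objective: the idea is to reward pushing every exit's softmax toward the uniform distribution, so that no exit's confidence threshold fires early:

```python
import numpy as np

def bounded_slowdown_score(exit_logits):
    """Score in [0, 1]: 1.0 when every internal exit is maximally
    uncertain (uniform softmax), near 0.0 when some exit is fully
    confident. Bounded by construction, unlike an unnormalized
    objective living on [0, inf)."""
    scores = []
    for z in exit_logits:
        z = np.asarray(z, dtype=float)
        p = np.exp(z - z.max())     # numerically stable softmax
        p /= p.sum()
        k = len(p)
        # max prob ranges from 1/k (uniform) to 1 (certain);
        # rescale so uniform -> 1.0 and certain -> 0.0
        scores.append((1.0 - p.max()) / (1.0 - 1.0 / k))
    return float(np.mean(scores))
```

A black-box word-substitution attack can then maximize such a score directly, since it needs only forward-pass logits at each exit, never input gradients.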
Summary: The paper evaluates the robustness of multi-exit language models against specifically perturbed datapoints that induce adversarial slowdown and contributes to the literature on availability attacks. The paper targets language models, as opposed to the vision models that were explored in the existing literature. The paper presents WAFFLE, an attack that forces the early exits to be avoided and ultimately slows down the computation. The paper then explores the constructed examples and finds that they appear slightly more out of distribution. Strengths: + Interesting, important setting Weaknesses: + Unclear performance with respect to the related work Technical Quality: 3 good Clarity: 3 good Questions for Authors: Thank you very much for the paper, I enjoyed the read very much. I really only have two comments. 1. To the best of my knowledge, availability attacks against language models were previously done by Shumailov et al. with Sponge examples (EuroS&P, https://arxiv.org/abs/2006.03463) and similarly adopted by Boucher et al. (reference [1] in the paper). How would these attacks do in comparison to WAFFLE? 2. Would you expect to see more performance degradation if text were turned even more out of distribution? Are there cases where the ChatGPT defense would not work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time to read and provide feedback. Below, we provide answers to the questions and concerns. We will also include this discussion in the final version of our paper. — > (Question 1) To the best of my knowledge, availability attacks against language models were previously done by Shumailov et al. with Sponge examples (EuroS&P) and similarly adopted by Boucher et al (reference [1] in the paper). How would these attacks do in comparison to WAFFLE? We first clarify that our attack is the first of its kind: a slowdown attack that generates natural adversarial text that bypasses the early-exit layers of multi-exit models. Sponge examples [1] (and as used in Bad Characters [2]) are ‘resource’ attacks. They exploit computational properties of hardware or tokenization, e.g., input dimensionality and/or activation sparsity, to increase the inference runtime. In contrast, our attack is hardware-agnostic and targets multi-exit model architectures, a new algorithm for efficient language model computations. [1] Shumailov et al., Sponge Examples: Energy-Latency Attacks on Neural Networks, IEEE 2021 [2] Boucher et al., Bad characters: Imperceptible nlp attacks, IEEE 2022 — > (Question 2) Would you expect to see more performance degradation if text were turned even more out of distribution? Are there cases where ChatGPT defense would not work? Following our linguistic analysis in Sec. 6, we do suspect that further pushing samples towards out-of-distribution would further degrade performance. The quantification of an out-of-distribution sample may be non-trivial, but we believe some evidence of the previous claim is exhibited in Fig. 2. As we increase the attack success threshold, i.e. the score needed to satisfy our slowdown objective in Sec. 3.2, both the accuracy and efficacy of the victim models decrease. 
This could in part be due to these samples being pushed further out-of-distribution, a claim in alignment with our linguistic analysis. We also agree that it is an interesting question to ask to what extent conversational models like ChatGPT offer robustness to adversarial slowdown. However, it is beyond the scope of our work and requires future study. While ChatGPT shows some robustness to OOD and adversarial attacks in the prior work [2], we clarify that our claim is **not** that ChatGPT is robust to any attacks with input text perturbations. Instead, as a tool for realizing input sanitization, we observe that ChatGPT provides some effectiveness with fewer side-effects. We next envision future work on evaluating the robustness of input sanitization via conversational models against adaptive adversaries; recent work [3] would be a nice starting point. We will include this discussion in the final version of our paper. [2] Wang et al., On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective [3] Zou et al., Universal and Transferable Adversarial Attacks on Aligned Language Models --- Rebuttal Comment 1.1: Comment: Many thanks for your response. I am not sure I agree with the hardware-specificity discussion above and the distinction. One can think about auto-regressive models as a type of early-exit, where smaller sequences use less compute -- Sponge examples, with their focus on larger input/output sequences, cause the exit strategy to change for them. Nevertheless, as long as the discussion of the differences is included in the final draft, I am happy with bumping the score. --- Reply to Comment 1.1.1: Title: Thank You Comment: We would like to thank the reviewer again for taking the time to read our rebuttal. We agree that, in particular for the case of auto-regressive models, there may be similarities to sponge examples. We will make sure to update the paper with a discussion on sponge examples.
Summary: This paper evaluates the robustness of multi-exit language models against adversarial slowdown. The authors propose a slowdown attack that generates natural adversarial text to bypass early-exit points. They conduct a comprehensive evaluation of three multi-exit mechanisms using the GLUE benchmark and demonstrate that their attack significantly reduces the computational savings provided by these mechanisms in both white-box and black-box settings. The study reveals that more complex mechanisms are more vulnerable to adversarial slowdown. Adversarial training is found to be ineffective in countering the slowdown attack, while input sanitization with a conversational model like ChatGPT can effectively remove perturbations. The paper concludes by emphasizing the need for future research in developing efficient and robust multi-exit models. Strengths: - Proposed the first slowdown attack on language models. - Evaluate the proposed methods on different architectures (i.e., early-exit mechanisms) - Demonstrate the effectiveness of the methods in different threat models/attack settings (i.e., black-box, white-box) - Analysis of generated adversarial examples is conducted to provide further insights into the vulnerability of the model - Mitigation and defense methods are discussed to show that sanitization methods are more effective than the robust training method. Weaknesses: - Since the slowdown attack has been demonstrated on vision tasks, the challenge of adapting it to language models is not clear. - The proposed slowdown objective seems trivial in terms of novelty. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In general, this paper is well written, with a complete story and comprehensive analysis. The reviewer would appreciate it if the authors could improve on several perspectives: - Highlighting the differences between slowdown attacks on language and vision tasks to show the challenges this paper resolved.
- Providing some discussion regarding the linguistic analysis of the adversarial examples to provide actual insights for improving the language model/exit classifiers in the future. - Limitations of the paper should be discussed to shed light on potential future work along this direction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not have a section regarding the limitation discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time to read and provide valuable feedback. Below, we provide answers to the questions and concerns. We will also include this discussion in the final version of our paper. — > (Weakness 1 and Question 1) Highlighting the differences between slowdown attacks on language and vision tasks to show the challenges this paper resolved. We first clarify the unique challenges we addressed in developing our slowdown attacks, compared to ones developed for computer-vision models, as follows: (1) Against language models, we often do not have access to input gradients (which is straightforward in attacks against computer-vision models). We thus need to design a new slowdown objective compatible with non-gradient-based attacks. (2) We must bound the values of our slowdown objective within [0, 1]. We found that the objective used in the prior work [1] is unbounded on [0, ∞); thus, a straightforward adaptation of this objective for adversarial text-attack algorithms leads to unbounded perturbations, and the resulting text completely differs from the original one. (3) The attack against language models works with discrete text inputs; not all embedding-level perturbations we compute exist as words, and small changes to the input (words, characters) can result in large logit changes. We must search for candidate words (or word combinations) for substitution. Due to the space limits, we summarized the challenges in Line 74–78, but for clarity, we will include this discussion in the final version of our paper. [1] Hong et al., A Panda? No, It’s a Sloth: Slowdown Attacks on Adaptive Multi-exit NN Inference > (Weakness 2) The proposed slowdown objective seems trivial in terms of novelty. Thank you for your comment; we acknowledge that we could have been clearer on the novelty of our work and will expand the Appendix accordingly.
As we clarified in our response to the first question, due to the differences between CV and NLP, it was unclear how well the attacks would work, if at all. For space and readability reasons, we did not show all of the variations of the attack, search methods, and objective functions that we attempted in order to find the most effective methodology. Additionally, NLP has additional dimensions of semantics and sentence structure, which we analyze in order to understand why the attacks work and how to defend against them. — > (Question 2) Providing some discussion regarding the linguistic analysis of the adversarial examples to provide actual insights for improving the language model/exit classifiers in the future. We thank the reviewer for pointing this out. We acknowledge that due to space limitations, our connections to future work are unclear. We first clarify that our purpose in providing the linguistic analysis (Sec 6) is not just to characterize our attack but to shed light on future directions for building efficient and robust multi-exit language models. We highlight some of the less clear lessons Sec. 6 offers as follows, and we will also update the final version of our paper to better describe them. **Models Robust to Larger Perturbations May Not Be Robust to Smaller Perturbations** Conventional wisdom from studies in computer vision is that if an adversary leverages larger input perturbations (e.g., the perturbations are bounded to 16 pixels), their attack will be stronger than attacks with smaller input perturbations (e.g., 8 pixels). In other words, if we robustify a model against attacks perturbing 16 pixels at most, the model is also robust to the 8-pixel bounded perturbations. However, we show this is not true for our slowdown attacks. Investigating the adversarial texts generated from our “unbounded” slowdown attacks, we could not find a correlation between the attack strengths and the perturbation amounts.
This questions the effectiveness of adversarial training, a conventional defense that trains a model with bounded adversarial texts. The observation led to our first experiments on potential countermeasures, and we show that adversarial training is ineffective (and also causes undesirable consequences, e.g., the utility and efficacy loss of a model). For future work, this suggests adversarial training is still not mature and may need to utilize linguistic information. **Input Sanitization May Be A Promising Direction for Defeating Slowdown Attacks** Sec 6 offers an alternative insight for developing future defenses. We show that an adversary can exploit the subject-predicate mismatch to make a model less confident about the perturbed sample's prediction. This misalignment, while easy for humans to identify, is difficult for a target model to detect. Thus, in Sec 7, we propose to leverage models able to correct grammar errors, including the subject-predicate mismatches, for sanitizing inputs before they are fed to the target multi-exit models. However, input sanitization may be slow, which offsets the early-exit speedup, and we showed that some sanitization methods may not be effective either. This suggests future work on fast and effective input sanitization methods, for which we uncovered some key linguistic features that may be necessary. — > (Question 3) Limitations of the paper should be discussed to shed light on potential future work along this direction. The authors did not have a section regarding the limitation discussion. We kindly remind the reviewer that we discuss the limitations and future work in Appendix E. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation/clarifications. The reviewer is satisfied with the response besides a few comments: > Models Robust to Larger Perturbations May Not Be Robust to Smaller Perturbations This seems to be counter-intuitive. Usually a large perturbation is a superset of a smaller perturbation, right?
Did the reviewer miss anything for this insight? > Limitation sections It would always be a good practice to have a reference in the main paper for contents in the appendix, especially for important contents like discussions of limitations :) --- Reply to Comment 1.1.1: Title: Thank You and Our Response to Additional Questions Comment: Dear Reviewer KooQ, We first would like to thank you again for taking the time to read our rebuttal. > This seems to be counter-intuitive. Usually a large perturbation is a superset of a smaller perturbation, right? Did the reviewer miss anything for this insight? We clarify that the perturbations are “word-level” perturbations, i.e., how many “words” are perturbed in crafting adversarial texts. This is different from the “numerical perturbations” to the inputs that attacks against computer vision models use. We observe from our experiments in Sec 6 that it is important for the attacker to choose the “right” word(s) to cause slowdowns, rather than perturbing many words. We also point out that existing adversarial text attacks, e.g., [1, 2], correlate (or quantify) attack strength as the % of perturbed “words.” Nevertheless, our work highlights that it may not be the right metric, as even a single word can be sufficient to craft a strong adversarial text. > Limitation Sections We thank the reviewer for a great suggestion. We will make sure to have references to the Appendix in the main paper. --- We are happy to answer any further questions (or concerns); let us know. [1] Jin et al., Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. [2] Yoo et al., Towards Improving Adversarial Training of NLP Models
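As a concrete note on the word-level metric discussed above, here is a tiny sketch of how prior text attacks quantify attack strength as the fraction of perturbed words. The function name is illustrative, and the sketch assumes what word-substitution attacks guarantee: the word count is unchanged, so positions align one-to-one:

```python
def perturbed_word_fraction(original, adversarial):
    """Fraction of word positions changed between a clean sentence
    and its adversarial counterpart. Word-substitution attacks keep
    the word count fixed, so positions align one-to-one."""
    orig_words, adv_words = original.split(), adversarial.split()
    assert len(orig_words) == len(adv_words)
    changed = sum(o != a for o, a in zip(orig_words, adv_words))
    return changed / len(orig_words)
```

Our observation in Sec 6 is precisely that this fraction correlates poorly with slowdown: a single well-chosen substitution (fraction 1/n) can already yield a strong slowdown attack.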
NeurIPS_2023_submissions_huggingface
2023
Fast Trainable Projection for Robust Fine-tuning
Accept (poster)
Summary: This paper proposes a robust fine-tuning technique with the scalability and efficiency to achieve higher ID and OOD performance when transferring a pre-trained model to downstream tasks. A new projection-based fine-tuning algorithm, Fast Trainable Projection (FTP), is proposed for computationally efficient learning of per-layer projection constraints, with theoretical analyses through the lens of Lipschitz continuity. Extensive experiments show its effectiveness. Strengths: The paper is well-written and easy to read. The proposed FTP is reasonable and easy to implement with the provided pseudocode. The theoretical analysis is also appreciated. Extensive experiments on several downstream tasks and datasets are conducted to show the strong performance of the proposed method. Weaknesses: While the method is simple with strong performance, the novelty is somewhat limited. It seems like a direct extension of TPGM. Using the previous model and cached gradients to update the constraints is trivial to me and makes limited technical contributions. Gradient annealing is also like a trick. Can the theoretical analyses prove that the proposed FTP is more powerful than TPGM, given that the experimental results show the superior performance of FTP? The ablation study is insufficient. The ablation should be done on ProjUpdate and Gradient Annealing to show the effectiveness of each component. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I understand FTP is more efficient than TPGM. But why does FTP achieve better performance? Could the authors give an intuitive explanation? For the continual learning experiment, the comparison is somewhat unfair, since the proposed method needs to know the task id. (Page 9 line 288: we re-initialize FTP after each task and use the current model as the “pre-trained model” for the next task.) Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Well addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive feedback! * **Regarding novelty** The novelty of our method lies in its algorithmic improvement over the prior work TPGM in terms of efficiency, flexibility, and robustness. Successfully using the previous information to update internal parameters in FTP requires rigorous analysis. It requires careful caching of the correct model components from previous steps. For example, in Eq. 4, we store the previous “unconstrained” update $\tilde{W}_{t-1}$ instead of naively the previous update $W_{t-1}$, for mathematical correctness that is not immediately clear without analysis. While the end result is a simple algorithm (a strength), our contribution is to come up with this method and rigorously evaluate it, showing significant benefits as other reviewers have mentioned. The differences in performance can be partially attributed to the algorithmic differences between the two (see also the discussion in Appendix 7.3), but we agree that future work could help understand and design even better methods. In addition to this, the improvement of FTP is also in its efficiency and flexibility. For example, we demonstrated the regularization strength of FTP in continual learning experiments (Sec. 4.3). This is not possible with TPGM due to the lack of validation data under a continual learning setting. Hence such differences have practical import. Gradient annealing is the most direct way to adjust the regularization strength of FTP, much like hyper-parameters in other works. * **Regarding the theory** No, the theory does not indicate that FTP is better than TPGM. The improvement of FTP over TPGM is algorithmic in terms of efficiency and applicability, not theoretical. Nevertheless, section 3.3 introduces a general theory on why projection is useful for fine-tuning, a theoretical question that hasn’t been answered in prior works.
While you are right that it generally applies to all projection-based methods including FTP and TPGM, we believe it was important to include to progress this area empirically and theoretically. We will add text and organization to explicitly discuss this. * **Regarding the ablation study** The ProjUpdate component introduces the core update equation in the FTP optimization algorithm. It serves as the mathematical foundation for FTP and cannot be easily replaced by alternatives without breaking the core mechanism. Therefore, it is not immediately clear to us how to ablate this component. Nevertheless, we are more than happy to hear further discussion from the reviewer. On the choice of the gradient annealing hyper-parameter, we treat it as a hyper-parameter that regularizes the strength of the projection. In our experiments, we performed a hyper-parameter search over different annealing values and used the best one on validation data. Please refer to Appendix 7.4 and 7.5 to see the exact value used for each experiment. * **Regarding intuition on why FTP is better than TPGM** FTP improves TPGM algorithmically in terms of efficiency, in terms of flexibility by removing limiting factors such as the need for validation data and nested update loops, and in some cases in terms of OOD robustness. The differences in performance can be partially attributed to the algorithmic differences between the two, as well as simplified tuning (see also the discussion in Appendix 7.3), but we agree that future work could have the potential to understand and design even better methods. * **Regarding the continual learning experiments** Our comparison is absolutely fair because our method does not require knowledge of the task id during inference. It is privileged only to the knowledge of the task boundary, which is also required in LwF.MC [45], L2P [46], DualPrompt [47], CODA-P [44], EWC [48], and L2 [44].
Specifically, * LwF changes the reference regularization model at the task boundary and treats new and old logit heads differently. * DualPrompt and CODA-P initialize new model parameters at the task boundary. * L2P/DualPrompt/CODA-P treat old and new logits differently. * EWC and L2 use task boundaries to determine the regularization parameters. EWC actually performs significant regularization calculations at each task boundary. In other words, we do not use any additional information beyond what the other compared methods do. --- Rebuttal Comment 1.1: Comment: The rebuttal addresses my concerns and I would like to maintain my rating.
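The caching point in the rebuttal above (store the unconstrained update $\tilde{W}_{t-1}$, not the projected weights) can be sketched in a few lines. This is an illustrative single-layer, plain-gradient-descent reading with a fixed radius `gamma`; the function name and the flat-list weights are assumptions for the sketch, not the paper's exact Eq. 4 update:

```python
import math

def projected_update(w, w0, grad, lr, gamma):
    """One constrained fine-tuning step (simplified, single-layer
    sketch): take an unconstrained gradient step, then project the
    result back into a ball of radius gamma around the pre-trained
    weights w0. Returns both the projected weights and the
    unconstrained update w_tilde -- which, per the rebuttal, is the
    quantity that must be cached for the next step, not the
    projected weights."""
    w_tilde = [wi - lr * gi for wi, gi in zip(w, grad)]
    delta = [wt - w0i for wt, w0i in zip(w_tilde, w0)]
    norm = math.sqrt(sum(d * d for d in delta))
    scale = min(1.0, gamma / (norm + 1e-12))
    w_proj = [w0i + scale * d for w0i, d in zip(w0, delta)]
    return w_proj, w_tilde
```

With a large `gamma` this reduces to vanilla gradient descent; with a small `gamma` the weights stay inside a ball around `w0`, and the returned `w_tilde` is what a subsequent constraint update would consume.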
Summary: This paper introduces Fast Trainable Projection (FTP), a new algorithm aimed at improving the robustness and efficiency of fine-tuning pre-trained models. FTP achieves visible improvements in terms of computational speed and adaptability, demonstrated across various vision tasks and models, contributing to an average 35% speedup on benchmarks compared to prior works. The authors also provide a theoretical explanation for FTP's ability to maintain the robustness of pre-trained models through the lens of Lipschitz continuity. Strengths: 1. The paper is well-written; figures are well-made and very easy to read. 2. The problem being studied is important -- efficient fine-tuning of models is particularly relevant given the popularity of foundation models today. Weaknesses: 1. Limited to Vision Tasks: The experiments are largely focused on vision tasks. Additional experiments in other application areas such as natural language processing could further demonstrate the algorithm's effectiveness. 2. Does the method work for foundation-level models? For example, in vision, there's stable diffusion, segment anything etc. There are already very mature fine-tuning pipelines and benchmarks for these models that can be experimented with. If the proposed method works for these models, the impact will be significantly increased. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Does the proposed algorithm work on low-rank adaptation (LoRA)? Would be interesting to see related experiments for that. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive feedback! * **Regarding the limitation to vision tasks** Theoretically, our method applies to any fine-tuning problem with pre-trained weights. To further showcase the adaptability of our method, we conducted a quick experiment on NLP for the rebuttal. Specifically, we use the DistilBERT model [1], pre-trained on Wikipedia and BookCorpus, and fine-tune it on 1% of the IMDB dataset. We fine-tune the DistilBERT model in an unsupervised masked fashion and measure its performance on the IMDB dataset it was fine-tuned on and on the original Wikipedia dataset using perplexity (lower is better). We use the implementation open-sourced by Hugging Face [2] for this experiment (15% token masking rate, 50 epochs, and learning rate 5e-5). | | IMDB $\downarrow$ | Wikipedia $\downarrow$ | |---------|-------|-----------| | Vanilla | 11.05 | 9.94 | | FTP | **10.88** | **9.69** | We use 1% of the data to simulate overfitting due to a small amount of data. Compared to vanilla fine-tuning, FTP better avoids overfitting (lower perplexity on IMDB) and maintains more information from the original pre-training dataset (lower perplexity on Wikipedia). These observations corroborate our findings in other experiments. [1] Sanh, Victor, et al. "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter." arXiv preprint arXiv:1910.01108 (2019). [2] https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt * **Regarding other foundation models** Theoretically, yes. FTP is a generic projection-based optimization technique that can be integrated into existing optimizers to formulate new generic optimizers such as SGDP (sec.5). In this regard, theoretically, FTP can be applied to any fine-tuning problem (with pre-trained weights) in deep learning regardless of the underlying model architecture. 
In our experiments, we have successfully applied it to different deep learning models (CNNs and Transformers), tasks (classification and dense vision tasks), and even different learning paradigms (supervised, unsupervised and continual learning). While the rebuttal did not leave sufficient time to try additional foundation models, we agree this is a great area of future work to maximize impact, and we plan on doing so. * **Regarding LoRA.** No, the method does not apply to LoRA because LoRA adds additional weights that are not pre-trained and are randomly initialized. Our method relies on projection toward pre-trained weights; if a method introduces new model parameters, then our method does not apply. Nevertheless, we add experiments to compare against low-rank fine-tuning methods (as suggested by Reviewer 2kDG), Polyhistor [1] and LoRA [2], which use adapters and low-rank factorization. Specifically, we compare them on the PASCAL dense vision benchmarks in Sec.4.2. In the following table, we compare directly against the results reported by Polyhistor since we used its open-sourced code to benchmark FTP in our paper. | | Segmentation $\uparrow$ | Human Parts $\uparrow$ | Surface Normal $\downarrow$| |------------|--------------|-------------|----------------| | Vanilla | 66.03 | 62.21 | 18.98 | | LoRA | 70.12 | 57.73 | 18.96 | | Polyhistor | 70.87 | 59.54 | 17.47 | | FTP | **73.79** | **65.50** | **15.51** | We observe that on all fine-tuning tasks, FTP achieves better performance than LoRA and Polyhistor. [1] Liu, Yen-Cheng, et al. "Polyhistor: Parameter-efficient multi-task adaptation for dense vision tasks." Advances in Neural Information Processing Systems 35 (2022): 36889-36901. [2] Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).
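To make the incompatibility argument concrete, here is a minimal sketch of the standard LoRA parameterization (zero-initialized `B`, random `A`); the dimensions and the `matmul` helper are illustrative assumptions. Since only the newly added `A` and `B` are trained, there is no pre-trained counterpart of those parameters for a projection like FTP's to target:

```python
import random

random.seed(0)
d, r = 4, 2

# Frozen pre-trained weight W0 (d x d) and the two new LoRA factors:
# A is randomly initialized, B starts at zero, so B @ A == 0 at init.
W0 = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(r)]
B = [[0.0] * r for _ in range(d)]

def matmul(X, Y):
    """Plain nested-loop matrix product (illustrative helper)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

BA = matmul(B, A)
# Effective weight during LoRA fine-tuning; only A and B receive updates.
W_eff = [[W0[i][j] + BA[i][j] for j in range(d)] for i in range(d)]

# At initialization W_eff equals W0, but the trainable A and B have no
# pre-trained reference point to project toward -- the incompatibility
# the rebuttal points out.
```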
Summary: In this paper, the authors propose an efficient fine-tuning method for deep learning models that learns the constraint for each layer more efficiently. During neural network training, the proposed method optimizes the layer-wise constraint using the last batch of data, unlike a prior method (TPGM) that updates layer-wise constraints on a "validation" dataset. This new method is less computationally burdensome since the gradient of the constraint can be computed using the chain rule, which reuses the gradient stored for the last batch. The authors also propose a notation to describe fine-tuning robustness and provide theoretical justification for the proposed method under this notation. To demonstrate the effectiveness of the new method, the authors conduct a comprehensive list of experiments and show that the proposed method is not only more efficient but also has better performance. Strengths: -The writing in this paper is of high quality, as the authors have presented the methodology in a clear manner. -The related work section is well-discussed. The authors have categorized the existing fine-tuning studies into three categories: when, where, and how much to fine-tune. This is an insightful way to approach the current studies. -The authors have provided theoretical justification for the proposed method. -The experimental results are comprehensive, as the authors have included a list of different datasets and model architectures. The proposed method has been shown to outperform all baseline methods. Weaknesses: -My first concern is that the proposed method's novelty might be overshadowed due to its similarity to a previous method called TPGM. The major difference between the two seems to be the selection of batch data for updating the constraint parameter. The previous work used validation data, which may not be a common choice in the supervised learning setting, while this work uses the previous batch data. 
Although using the stored gradient indeed improves computational efficiency, the authors did not clearly state why this method has better performance than TPGM. The validation data and the other training data should have the same distribution. -The proposed difference function's meaning requires in-depth justification. While (Lipschitz) smoothness is usually associated with better performance and (adversarial) robustness for deep neural networks, it is not directly implied that the proposed difference function will improve the downstream performance of a fine-tuned model. Additionally, this notation only describes the difference in feature space; how does it describe the linear-probing fine-tuning method, which only changes the linear last layer? Can it be shown that this proposed notation indicates fine-tuning downstream performance? -The robustness notation here seems to be related to the OOD generalization capability. Are they the same thing here? Usually, robustness is used to describe the model’s performance under some unwanted perturbation (adversarial samples or corruption). -Low-rank fine-tuning methods have been very popular recently. Is it possible to compare the proposed method with them? Minor typo: Line 52, Liptschitz → Lipschitz Line 127, “Then FTP calculates the gradients”, is FTP here meant to be TPGM? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the pattern of the learned constraint for each layer with regard to training epochs? Do they converge during training? I am really curious about the statement of discrepancies here. The author mentioned multiple times that the discrepancy between the validation data and training data, or between different batches of data, enables the learning of meaningful projection constraints, but without further elaboration. Typically, the training and validation data should be subject to the same distribution. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! * **Regarding Novelty** The novelty of our method lies in its algorithmic improvement over the prior work TPGM in terms of efficiency, flexibility, and robustness. Successfully using previous information to update internal parameters in FTP requires rigorous analysis. It requires careful caching of the correct model components from previous steps. For example, in Eq.4, we store the previous “unconstrained” update $\tilde{W}_{t-1}$ rather than naively storing the previous update $W_{t-1}$, a requirement for mathematical correctness that is not immediately clear without analysis. While the end result is a simple algorithm (a strength), our contribution is to come up with this method and rigorously evaluate it, showing significant benefits as other reviewers have mentioned. The differences in performance can be partially attributed to the algorithmic differences between the two (see also the discussion in Appendix 7.3), but we agree that future work has the potential to understand and design even better methods. In addition to this, the improvement of FTP also lies in its efficiency and flexibility. For example, we demonstrated the regularization strength of FTP in continual learning experiments (sec.4.3). This is not possible with TPGM due to the lack of validation data under a continual learning setting. Hence such differences have practical import. * **Regarding the difference function** Optimizing the difference function improves the robustness of the fine-tuned model. To understand the logic, we need to combine Lemma 1 and Lemma 2. * Lemma 1 says that if we can minimize the right-hand side of the difference function, we can maximally maintain the robustness of the pre-trained model. * Lemma 2 says that minimizing the difference function, in fact, leads to a projection operation in the weight space. Therefore, the conclusion can be drawn that projection can lead to better fine-tuning robustness. 
The main contribution of the difference function is that, by virtue of Lemma 2, it establishes an equivalence between feature space and weight space, i.e., projection in weight space is equivalent to minimizing the difference function in feature space. This applies to any weight layer, including the last linear layer in a neural network. * **Regarding low-rank methods.** This is a great point, and for this rebuttal we add experiments to compare against low-rank fine-tuning methods, Polyhistor [1] and LoRA [2], which use adapters and low-rank factorization. Specifically, we compare them on the PASCAL dense vision benchmarks in Sec.4.2. In the following table, we compare directly against the results reported by Polyhistor since we used its open-sourced code to benchmark FTP in our paper. | | Segmentation $\uparrow$ | Human Parts $\uparrow$ | Surface Normal $\downarrow$| |------------|--------------|-------------|----------------| | Vanilla | 66.03 | 62.21 | 18.98 | | LoRA | 70.12 | 57.73 | 18.96 | | Polyhistor | 70.87 | 59.54 | 17.47 | | FTP | **73.79** | **65.50** | **15.51** | We observe that on all fine-tuning tasks, FTP achieves better performance than LoRA and Polyhistor. [1] Liu, Yen-Cheng, et al. "Polyhistor: Parameter-efficient multi-task adaptation for dense vision tasks." NeurIPS (2022). [2] Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." (2021). * **Regarding the FTP learned pattern** Please see the attached PDF for a visualization. The learned constraints grow slowly from a small value, e.g., 1e-8, at different paces for each layer and eventually converge. The pattern is highly similar to that of TPGM as reported in its paper [1]. We observed that layers closer to the output tend to have larger constraints (less constrained, more change allowed) whereas layers closer to the input tend to have smaller constraints (more constrained). 
This matches our intuition that early layers learn more general information and later layers learn task-specific information. The constraints do converge because they are calculated based on the current model gradient g_t, which decreases to zero as the learning rate drops (Eq.4). To show this qualitatively, we record the history of FTP constraints for the classification experiment in Appendix Tab.6. In this setting, we fine-tune a pre-trained ResNet50 on DomainNet real images. We visualize the learned constraints for each layer through time in the attached PDF file. There are two observations. 1) Early layers (dark colors) generally have smaller constraints than the later layers (light colors) throughout training. 2) Constraints grow from small to large and converge in the end. [1] Tian, Junjiao, et al. "Trainable Projected Gradient Method for Robust Fine-tuning." CVPR 2023. * **Regarding "discrepancies"** This is a great question. Here the discrepancies refer to the variances between sampled batches. Even though training data and validation data are sampled from the same distribution, each mini-batch can lead to very different directions of updates on the current model. Intuitively, we utilize these variances to “control” how much the model should deviate from the pre-trained weights. If two batches disagree more on where the gradients should go in a certain direction, then the projection will be stronger towards the pre-trained model. Mathematically, this is reflected by the dot product in the update equation (Eq.4) between $\mathbf{g}_t^{i,\intercal}$ and $-({\tilde{\mathbf{w}}_{t-1}^i}-\mathbf{w}_0^i)$, where the first quantity represents the gradients of the current batch and the second quantity represents the gradients of the previous batch. If the two batches disagree, then the product will be positive, which leads to smaller projection constraints (i.e., the gradient $\nabla\gamma_t$ is positive), i.e., stronger projection. 
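The dot-product signal described in the rebuttal above can be written out directly. A single-layer sketch with flat-list weights, following the sign convention as stated in the rebuttal; the function name is an assumption, and the paper applies an Adam-style update to the constraint rather than using this raw gradient:

```python
def constraint_gradient(g_t, w_tilde_prev, w0):
    """Sketch of the discrepancy signal from Eq. 4 as described in the
    rebuttal: the dot product of the current batch gradient g_t with
    -(w_tilde_prev - w0), where w_tilde_prev is the cached unconstrained
    update and w0 the pre-trained weights. Per the rebuttal's stated
    convention, a positive value (batches "disagree") shrinks the
    constraint gamma, i.e., projects more strongly toward w0."""
    return sum(g * -(wt - w) for g, wt, w in zip(g_t, w_tilde_prev, w0))
```

Treating the returned value as the gradient of gamma, a plain descent step on gamma would then decrease it when the value is positive and let it grow otherwise.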
--- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for their response as well as the additional experimental results. While I still hold some of the concerns I raised here (most importantly, the discrepancies between training/validation batches), most of my concerns have been addressed, and I will increase my score and lean toward acceptance. Thanks!
Summary: The paper presents a new algorithm for robust fine-tuning of pre-trained models, specifically focusing on maintaining out-of-distribution (OOD) robustness while achieving competitive in-distribution (ID) performance. The algorithm, dubbed Fast Trainable Projection (FTP), is designed to overcome the scalability and efficiency limitations of current methods, offering a 35% speedup compared to prior work (to be specific, TPGM). FTP achieves this by efficiently learning per-layer projection constraints during the fine-tuning process. The claimed contributions of this paper are: 1. The introduction of the FTP algorithm that significantly improves computational efficiency while learning projection constraints and fine-tuning the model simultaneously. This algorithm can be integrated with existing optimizers like SGD and AdamW and can be adopted as a new drop-in fine-tuning optimizer. 2. Empirical validation of the FTP algorithm's robustness on OOD datasets. They tested the algorithm across four vision tasks with five different pre-trained models, demonstrating robustness especially in scenarios with domain shifts and natural corruptions. The FTP algorithm also achieves state-of-the-art performance on a continual learning benchmark. 3. Theoretical explanation of FTP's robustness maintenance capabilities. The authors provide a mathematical perspective that explains why FTP is effective at preserving the robustness of pre-trained models. They explore this through the lens of Lipschitz continuity, taking into account both the feature space and weight space of a model. Strengths: The introduction of the Fast Trainable Projection (FTP) algorithm is an interesting contribution. It addresses the real-world limitations of scalability and efficiency associated with TPGM, making it a practical solution for various tasks. The robustness of the FTP algorithm has been validated with experiments across four vision tasks and five pre-trained models. 
Superior results on OOD datasets and state-of-the-art performance on a continual learning benchmark provide empirical support for the authors' claims. Weaknesses: 1. **Limited Scope:** The proposed method focuses on a specific trainable projected gradient method (TPGM) and aims to improve its efficiency. However, the extent of its practical application could be questioned, as it's not entirely clear how well it would generalize to other types of models or algorithms. Its significance might be seen as limited if it only optimizes a specific kind of model, especially when there are many alternative and possibly more efficient methods available. 2. **Lack of Efficiency:** Although FTP improves on TPGM's efficiency, it remains slower than many other methods, which raises questions about its practicality in real-world applications where computational resources and processing times are often crucial factors. 3. **Theoretical Contributions:** The theoretical contribution of this paper may be seen as somewhat trivial, as the authors basically just formalize an intuitive understanding of Lipschitz continuity in relation to model robustness. The assumption that robustness equates to Lipschitz continuity may oversimplify the complex nature of robustness in deep learning models. Real-world robustness often depends on various other factors such as data variance, architecture, and loss landscape, which aren't addressed in this paper. Furthermore, they fail to delve deeper into the specifics of how the generalization relates to pre-trained weights, which might have offered more novel insights. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the memory footprint and runtime overhead of the proposed method? How does the proposed method affect generalization of neural networks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback! * **Regarding limited scope** Flexibility and broad applicability are advantages of the proposed method FTP because FTP is a model-agnostic optimizer. Specifically, FTP is a generic projection-based optimization technique that can be integrated into existing optimizers such as Adam and SGD to formulate new generic optimizers such as AdamP and SGDP (Sec.5 and Appendix 7.7). In this regard, theoretically, FTP can be applied to any fine-tuning problem in deep learning regardless of the underlying model architecture. In our experiments, we have successfully applied it to the models below. **Note that we have added an entirely new task/model result for masked language learning, demonstrating the significant flexibility of our method.** * Different deep-learning models * CNNs (Tab.1, 2, 6) * Transformers (Tab.3, 4, 5, 7, 8) * Different tasks * Classification (Tab.1, 2, 6) * Dense classification * Segmentation (Tab.4) * Human parts segmentation (Tab.7) * Dense regression * Surface normal estimation (Tab.8) * Masked language learning * See discussion with Reviewer WZxc. * Different learning paradigms * Supervised learning (Tab.1, 2, 3, 4, 6, 7, 8) * Continual learning (Tab.5) * Unsupervised learning (see discussion with Reviewer WZxc) [1] Sanh, Victor, et al. "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter." (2019). [2] https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt * **Regarding efficiency** As mentioned by Reviewer Fz4t, the benefits of FTP are many-fold, including better OOD generalization and no need for held-out validation data. Further, in terms of efficiency, fine-tuning computation tends to be much smaller than large-scale pre-training, so some slowdown can be tolerated. More specifically, FTP’s efficiency can be optimized to suit different practical fine-tuning applications that vary in what is fine-tuned. 
To see this, we need to understand that FTP is an optimizer that can be applied to any gradient-descent-based fine-tuning problem. Just as we can fine-tune an entire model using SGD or fine-tune just the bias terms for faster speed (Partial Fusion in Tab.1 and Tab.2), FTP can be used to fine-tune just the bias terms as well, which leads to much better efficiency. For example, in the continual learning experiments (Sec.4.3), FTP is used only to fine-tune the QKV components in a Transformer model. To further demonstrate the point, for this rebuttal we show the efficiency of applying FTP to different parts of a model. Specifically, we adopt the same setting as in the segmentation experiment (Tab.4), which uses a Swin Transformer as the backbone. Here we only profile the time used by the optimization process, excluding data loading and the model forward pass, which are not part of FTP, to more directly demonstrate the efficiency of FTP. | s/it | Full Model | Bias Only | QKV Only | Decoder Only | |------|------------|-----------|----------|--------------| | Adam | 0.204 | 0.187 | 0.169 | 0.168 | | FTP | 0.314 | 0.280 | 0.188 | 0.186 | There are two important observations. 1) On average, FTP is only 30% slower than the vanilla optimizer, which is not a major bottleneck in most cases. 2) As the number of tuned parameters decreases, the speed difference between FTP and the vanilla optimizer further diminishes (only 10% slower when tuning only the decoder). **This means that in extreme cases where computational resources and processing times are critical, FTP is virtually as fast as vanilla optimizers, for example when fine-tuning only a single last linear layer as in linear probing.** * **Regarding Theory** Lipschitz continuity is a standard and popular mathematical measurement of robustness in the deep learning literature [1,2]. 
While it does not encompass all aspects of deep learning robustness, such as data variance, architectures, etc., it is a valuable mathematical tool for beginning to analyze the robustness of deep learning models. The generalization capability of a pre-trained model in its relationship to the pre-trained weights is explored in the prior work [3] through linear systems. Further, we believe our thorough empirical results for a practical algorithm bolster the practicality of the analysis, despite its limitations. In short, the better the pre-training datasets cover the downstream datasets, the more robust the fine-tuned model will be. [1] Weng, Tsui-Wei, et al. "Evaluating the robustness of neural networks: An extreme value theory approach." (2018). [2] Zhang, Bohang, et al. "Rethinking Lipschitz neural networks and certified robustness: A boolean function perspective." NeurIPS (2022): 19398-19413. [3] Tian, Junjiao, et al. "Trainable Projected Gradient Method for Robust Fine-tuning." CVPR. 2023. * **Regarding memory** The only major memory requirement is caching the previous gradient. This is a rather common requirement; for example, the Adam optimizer keeps running copies of aggregated first- and second-moment information from previous iterations. The memory consumed by storing the projection constraints is negligible since they are scalars, orders of magnitude smaller than the gradients. * **Regarding Generalization** FTP has a positive effect on the generalization of neural networks, as seen in our OOD generalization experiments in Tab.1 and Tab.2, where FTP achieves the best performance. The goal of the proposed optimizer is to improve the generalization ability of the fine-tuned model by maintaining the knowledge acquired during pre-training. In this regard, the effect on generalization should scale positively with the generalization ability of the pre-trained model. 
In other words, if we use a stronger pre-trained model, the positive effect of FTP on generalization will be even bigger. --- Rebuttal Comment 1.1: Comment: I have raised my score to appreciate the empirical results of this paper. The fact that it is just an add-on to a previous paper that is not widely used remains unchanged. And I insist that the "theoretical" analysis part should be removed if the paper is finally accepted. It indeed adds little value to the paper.
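The Lipschitz-continuity measure invoked throughout this exchange can be checked concretely for a single linear layer x ↦ Wx. A small stdlib sketch, using the Frobenius norm of W, which upper-bounds the layer's true Lipschitz constant (the spectral norm); the dimensions are illustrative:

```python
import math
import random

random.seed(1)
n = 5
W = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]

def apply(W, x):
    """Matrix-vector product: the linear layer x -> W x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def l2(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Frobenius norm of W: an easy-to-compute upper bound on the layer's
# Lipschitz constant (its spectral norm).
lip = math.sqrt(sum(wij * wij for row in W for wij in row))

lhs = l2([a - b for a, b in zip(apply(W, x1), apply(W, x2))])
rhs = lip * l2([a - b for a, b in zip(x1, x2)])
assert lhs <= rhs  # ||W x1 - W x2|| <= L * ||x1 - x2||
```

How far W drifts from its pre-trained value controls how much such feature-space bounds change, which is the kind of feature-space/weight-space link the Lemma 1/Lemma 2 argument above formalizes.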
Rebuttal 1: Rebuttal: We thank the reviewers for all the constructive feedback and positive comments on the real-world relevance (WZxc), strong performance (Fz4t), and extensive experiments (HDd6, 2kDG, yjp3). We have provided a detailed discussion for each of your questions below. For this rebuttal, we clarified some misunderstandings and introduced new experiments to address specific questions. To recap, we proposed FTP to improve the generalization and robustness of fine-tuned models. FTP is * An optimization technique that can be integrated into existing optimizers. * **Easy to use** in a plug-and-play fashion. * **Broadly applicable** to most fine-tuning problems regardless of the underlying tasks and models. * **More efficient** than prior works, achieving ~2x speedup. We have demonstrated its effectiveness across different deep learning models (CNNs and Transformers), tasks (classification and dense vision tasks), and even different learning paradigms (supervised, unsupervised and continual learning). New experiments: * Masked language modeling experiments (suggested by Reviewer WZxc) * The efficiency of FTP when fine-tuning different parts of a model (inspired by Reviewer HDd6) * Comparison to low-rank fine-tuning methods, e.g., LoRA (suggested by Reviewer 2kDG) * Visualization of FTP’s learned per-layer constraints with respect to training time (suggested by Reviewer 2kDG) All these experiments are included in the attached PDF file. Pdf: /pdf/d67478e3db0778ae88081cbf6aa345a511281fc1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents Fast Trainable Projection (FTP), an efficient fine-tuning algorithm. FTP learns a projection radius based on the current training batch. Through both analysis and experiments, the paper shows that FTP improves out-of-distribution robustness while maintaining in-distribution performance. Experiments also show that FTP can accelerate learning by up to 35% compared to previous methods. Strengths: - The idea seems to be a natural improvement following the lines of MARS-SP (project after an unconstrained GD step) and TPGM (learn a different radius for each layer). - Section 3.3 was clear and interesting; it establishes a connection between weight-space projection and "robustness" in terms of the Lipschitz constant. - The experimental results are strong, across DomainNet, ImageNet, and PASCAL fine-tuning experiments. Weaknesses: - A main claimed benefit of FTP (in the abstract and intro) is its computational efficiency. To what extent is computational cost a bottleneck in the experimental settings you consider? I think fine-tuning is generally considered to be pretty computationally light, especially compared to the training of the foundation model itself. Given the experimental results, maybe the more relevant benefits are (1) better OOD acc (2) no need for held-out val data like TPGM. - While section 3.3 was interesting, I'm not sure if it contributes to the main point of the paper (benefits of FTP over e.g. TPGM), since this simplified analysis really applies to all projection-based methods. Does this analysis motivate FTP over other ways of using projection for fine-tuning? 
minor typo: Fig 2 caption PorjUpdate -> ProjUpdate Technical Quality: 3 good Clarity: 3 good Questions for Authors: - There seem to be similarities between FTP and the general hyperparameter tuning method proposed in https://arxiv.org/abs/1909.13371, in that both methods delay updates with the current batch to optimize hyperparameters in an online fashion (gamma here corresponds to alpha there). Could the authors comment on how these two methods relate? - see other questions in "Weaknesses" above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. No particular negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and constructive feedback!! We have added additional results/visualization according to other reviewers' comments. We hope they would further strengthen your confidence in our method. * **Regarding Benefits of FTP** Indeed, fine-tuning is much less computationally heavy than pre-training. In our experiments, fine-tuning on ImageNet takes about 2 days using TPGM but roughly 1 day using FTP. The speed-up of FTP vs. TPGM is roughly 2x. Nevertheless, as the reviewer mentioned, FTP brings other benefits such as no need for held-out validation data. This is a major improvement because it allows us to integrate FTP into existing optimizers for much better adaptability. For example, we demonstrated the regularization strength of FTP on continual learning experiments (sec.4.3). This is not possible with TPGM due to the lack of validation data under a continual learning setting. Thank you for the suggestions; we will better emphasize all of the benefits in the revised version! * **Regarding Sec 3.3** Thank you for pointing out the typo. No, the theory does not indicate that FTP is better than TPGM. The improvement of FTP over TPGM is algorithmic in terms of efficiency and applicability, not theoretical. Nevertheless, section 3.3 introduces a general theory on why projection is useful for fine-tuning, a theoretical question that hasn’t been answered in prior works. While you are right that it generally applies to all projection-based methods including FTP and TPGM, we believe it was important to include to progress this area empirically and theoretically. We will add text and organization to explicitly discuss this. * **Regarding https://arxiv.org/abs/1909.13371** Thank you for bringing up this work, which is a very valuable reference for FTP. We will cite and discuss this work. 
The work introduces a smart backpropagation modification for computing "hyper-gradients" to optimize internal hyper-parameters of an existing optimizer. From this perspective, FTP can be seen as an extension of this idea to projection-based optimization, where the projection constraints are the hyper-parameters. Nevertheless, our novelty lies in two aspects. First, FTP introduces the projection operation as an integral component of the optimizer. Projection has previously been treated as a separate operation outside the optimizer's update [1,2]. It is not clear how to apply the idea of hyper-gradients unless projection is formalized as part of the gradient update, as in this paper. Second, the update of gamma uses the Adam update rule and is not simply gradient descent as in the referenced work. The Adam update rule smooths the updates of the projection parameters and enables us to apply the updated projection parameters to the current updated model without rolling back to the previous state, greatly saving computation (please see Appendix 7.3 for a detailed discussion). [1] Gouk, Henry, Timothy M. Hospedales, and Massimiliano Pontil. "Distance-based regularisation of deep networks for fine-tuning." arXiv preprint arXiv:2002.08253 (2020). [2] Tian, Junjiao, et al. "Trainable Projected Gradient Method for Robust Fine-tuning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
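To make the idea above concrete, here is a minimal, hypothetical sketch of a projected update in which the projection is part of the optimizer step and the projection parameter gamma is itself updated with an Adam-style rule. The softplus radius parameterization, the first-order surrogate used for the hypergradient, and all names are our illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def project(w, w0, gamma):
    """Project w onto an L2 ball of radius softplus(gamma) around w0."""
    delta = w - w0
    radius = softplus(gamma)
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return w.copy()
    return w0 + radius * delta / norm

def ftp_style_step(w, w0, grad, gamma, m, v, t,
                   lr=0.1, glr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One update: gradient step, hypergradient of the projection radius,
    Adam-style update of gamma, then projection with the *updated* radius
    applied to the current weights (no roll-back to the previous state)."""
    w_new = w - lr * grad                      # plain gradient step
    delta = w_new - w0
    norm = np.linalg.norm(delta)
    if norm > softplus(gamma):                 # constraint is active
        # d<grad, w_proj>/d radius = grad . (delta/norm), chained through
        # d softplus(gamma)/d gamma = sigmoid(gamma)
        g_gamma = (grad @ (delta / norm)) / (1.0 + np.exp(-gamma))
    else:
        g_gamma = 0.0
    t += 1                                     # Adam moments for gamma
    m = b1 * m + (1 - b1) * g_gamma
    v = b2 * v + (1 - b2) * g_gamma ** 2
    step = glr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    gamma = gamma - step
    return project(w_new, w0, gamma), gamma, m, v, t
```

Because gamma is updated with smoothed Adam moments rather than a raw hypergradient, the updated radius can be applied directly to the just-updated weights, which is the computation-saving property the rebuttal describes.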
MIM4DD: Mutual Information Maximization for Dataset Distillation
Accept (poster)
Summary: This paper proposes a mutual information maximization loss for dataset distillation. Specifically, the authors compute the mutual information between real and synthetic feature distributions across multiple layers by constructing positive and negative pairs. Experiments show that when plugged into state-of-the-art dataset distillation methods, clear performance improvements are achieved on multiple datasets. Strengths: 1. The idea of introducing mutual information into dataset distillation is natural and under-explored. It will be interesting work to researchers in this field. 2. The paper is well-written and technically solid. Necessary analysis on the learned synthetic data has been provided. 3. Remarkable performance improvements have been achieved when plugging the proposed loss into state-of-the-art distillation methods. Weaknesses: Incomplete experiment results: a) As a loss function, this paper lacks the experiment results on training with the MIM4DD loss independently. b) The ablation study on beta in Table 2 is incomplete, as larger beta has not been tested. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Typo: 1. Line 135 D_real* -> D_real 2. Left of Figure 2: Synthetic Data -> Real Data Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 7GDB, Thank you for the very constructive comments and support. ## Response to Weakness 1: Training with MIM4DD loss independently. Thank you for raising this concern. In response, we conducted supplementary experiments where we trained solely with the MIM4DD loss, without integrating it into any other methods. The results are as follows (for a detailed breakdown of the experimental settings, please refer to Table 1 in the main paper). | Method | CIFAR10 IPC-1 | CIFAR10 IPC-10 | CIFAR10 IPC-50 | CIFAR100 IPC-1 | CIFAR100 IPC-10 | |-------|-------|-------|-------|-------|-------| | only MIM4DD | 51.1 ± 0.2 | 63.2 ± 0.4 | 72.0 ± 0.2 | 24.8 ± 0.5 | 36.2 ± 0.8 | Our standalone MIM4DD performance was slightly below its results when utilized as a plug-and-play module, while it still shows superior performance. We surmise this marginal performance drop is due to the inherent overfitting tendencies associated with dataset distillation. Furthermore, the regularization property inherent to our method might also play a role in this observation (see Sec. 3.4 Regularization Property L300-309 in the main paper). ## Response to Weakness 2: Larger $\beta$ study. Thank you for highlighting this oversight. We have further expanded our ablation study on the parameter $\beta$. The additional results can be referenced in the following table (other settings are consistent with Table 2). From our extended analysis, we found that the optimal value for $\beta$ is indeed 2. | $\beta$ | Accuracy | |-------|-------| | 0.5 | 62.8 ± 0.6 | | 1.0 | 63.8 ± 0.6 | | 2.0 | **66.0 ± 0.5** | | 4.0 | 64.9 ± 0.4 | ## Response to Questions: Typos. Thanks for pointing out the typos; we will revise the manuscript. --- Rebuttal Comment 1.1: Comment: I have read the reviews and response. Thanks for supplementing the ablation study results, which will enhance the soundness. --- Reply to Comment 1.1.1: Title: Thank you to the reviewer! 
Comment: Dear Reviewer 7GDB, We sincerely appreciate your prompt response and are pleased that you found our additional experiments beneficial. We're thrilled that your score will be maintained. Thank you once more for your valuable inputs in enhancing our submission. Best regards, Authors
Summary: Dataset distillation aims to synthesize a small dataset with similar test performance to the original full dataset. This paper argues that previous works neglect information theory considerations, and argues that a well-designed information metric between variables is very important. Therefore, it introduces the concept of mutual information maximization and transforms it into a lower bound optimization problem at the feature map level, which improves the performance of some methods as an add-on module. Strengths: 1. The authors analyze the previous dataset distillation method from the perspective of information theory for the first time; 2. The authors provide a rigorous and reasonable mathematical formula derivation for how to convert the maximization of mutual information into the lower bound constraint of the sample feature map representation; Weaknesses: 1. The reason why the method works is not sufficient. The maximum mutual information describes the degree of correlation between two variables from the perspective of information theory, but does this mean that more classification information can be learned from the synthetic images for the model to improve the quality of distillation? 2. The method is not tested on more popular gradient matching or feature matching based dataset distillation frameworks (e.g. DC, DM, DSA, etc.) as a module. 3. The NCE loss mentioned in line 307 should correspond to the right of Figure 4. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In some datasets, there are bias and noise samples in the same category. For such datasets, is it necessary to use full original dataset information to maximize mutual information? 2. How does the module perform on high-resolution datasets such as TinyImageNet and ImageNet Subset? And if ipc is raised to 50 on cifar100, what is the result? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Weakness 1: Why mutual information should be introduced to dataset distillation (DD). Thank you for raising this insightful question regarding the connection between mutual information (MI) and the improved quality of distillation. **Theoretical Background:** At its core, Dataset Distillation (DD) can be conceptualized as a compression problem, with the primary goal being the maximization of preserved information from the original data. To this end, it's imperative to have a robust metric that can measure the degree of shared information between variables – a metric that has been notably absent in previous works. We chose to incorporate MI, a well-established metric in information theory, to steer the optimization of synthetic datasets. The power of MI in the domain of neural networks has been endorsed by the information bottleneck theory [4,5,6]. This theory underscores the principle of encoding input data into a compressed form that optimizes target prediction. Such encoding necessitates minimizing the MI between the input and its latent representation, while concurrently maximizing the MI between the output and this representation. Several works that have built upon this foundational theory have reported improved performance, substantiating the efficacy of MI in the realm of deep learning. More comprehensive discussions on this are available in the related work, Appendix B. **Empirical Evidence:** We took efforts to corroborate our theoretical assertions with empirical validations, particularly the examination of information flow through CKA analysis (Sec. 3.5). Responding to your astute query, we conducted an additional experiment during the rebuttal phase. We visualized the MI between the real and synthetic datasets, $I(D_{sys}, D_{real})$, utilizing methodologies from references [1-3]. 
The findings, which are illustrated in **Figure 1 of the supplementary one-page pdf**, reveal that datasets produced by our method exhibit higher MI and yield better accuracy performance. This directly supports the hypothesis that enhanced MI can indeed refine the quality of distillation. [1] Opening the black box of deep neural networks via information [2] MI Neural Estimation, ICML, 2018 [3] Deep VIB, ICLR, 2017 ## Response to Weakness 2: Comparisons to DC, DM, DSA. Thank you for your observation regarding the inclusion of popular gradient matching or feature matching-based dataset distillation frameworks in our evaluations. In our evaluation presented in Table 1, we have indeed compared our approach with a broad selection of recent DD methods from esteemed conferences like NeurIPS, ICLR, ICML, and CVPR. This includes the gradient matching or feature matching-based dataset distillation frameworks you mentioned, such as DC, DM, and DSA. It's essential to highlight that while DC, DM, and DSA were indeed state-of-the-art two years ago, the landscape has evolved. Recent works like MTT (CVPR2023) and BPTT (NeurIPS2022) now represent the new state-of-the-art in this domain. As evidenced in Table 1, MTT and BPTT surpass the performance of the earlier methods, including DC, DM, and DSA, by substantial margins - more than 10% absolute accuracy across all benchmarks. Furthermore, our method consistently outperforms all 12 recent DD methods, emphasizing its efficacy and relevance in the current landscape. Given this state of the field, we chose to add MIM4DD to MTT and BPTT rather than to DC, DM, and DSA. ## Response to Question 1: Dataset usage. Indeed, your observation regarding datasets that might contain bias and noisy samples within the same category is valid. In such cases, our approach still utilizes the entire dataset as input. 
Here's why: Dataset Distillation (DD) can fundamentally be viewed as a data compression problem, where the primary objective is to retain as much pertinent information from the original dataset as possible. In the context of deep learning, compression doesn't typically involve filtering out data. Instead, it focuses on representing data in a more concise form while retaining its essence. This principle applies to our method as well. By using the entire dataset, including its biases and noisy samples, we aim to ensure that the distilled dataset is a genuine representation of the original, capturing all its intricacies and nuances. ## Response to Question 2: Experiments on ImageNet subset. Thank you for pointing out the potential benefits of exploring more complex datasets and architectures. We acknowledge the importance of demonstrating the versatility of our method, and in fact, we have taken steps in that direction: - **Expanded Dataset:** We applied our method to ImageNet subsets, focusing on a limited number of classes, using the MTT codebase as a foundation. - **Increased Resolution & Depth:** Given the intricacies of high-resolution images, we tested our method on 128×128 subsets of ImageNet. Accommodating this higher resolution necessitated an architectural adaptation, leading us to employ a depth-5 ConvNet for these experiments. - **Specific Subsets:** ImageNette (assorted objects) and ImageWoof (dog breeds) are existing subsets designed to be easy and hard to learn, respectively. - **Results:** The outcomes of these experiments are showcased in Table 1 of the supplementary one-page PDF. Encouragingly, our method continued to demonstrate its effectiveness and superiority across these tests. Conducting the IPC=50 experiment on CIFAR-100 poses significant hardware challenges due to the immense VRAM requirement. Specifically, it would demand around 200 GB of VRAM, which translates to approximately 10 GPUs of the 3090 or A6000 caliber. 
This requirement surpasses our current hardware capabilities. Additionally, it's worth noting that most Dataset Distillation (DD) research in academia, to our knowledge, does not typically conduct experiments with IPC=50 on CIFAR-100 due to similar constraints. --- Rebuttal Comment 1.1: Title: Results of this manuscript in Tab1 Comment: 'Furthermore, our method consistently outperforms all 12 recent DD methods, emphasizing its efficacy and relevance in the current landscape.' Based on the authors' reply, I double-checked the results in Tab.1. The proposed method does not perform better than TESLA and FRePo-w in some settings. Here are some state-of-the-art methods: HaBa(Neurips2022), IDC(icml2022), DREAM(iccv2023), FTD(cvpr2023). Please compare with these methods, especially on the CIFAR10/100 and TinyImageNet datasets. The sota results in CIFAR10 ipc1,10, and 50 should be around 50.5, 69, 74.5; CIFAR100 ipc1, 10, 50 should be around 29, 45 and 52; TinyImageNet should be around 10, 24, 29. For the hardware issue about IPC50 on cifar100, please refer to the GitHub of DC. The author has shown how to implement it for such large categories, I remember. 'It's essential to highlight that while DC, DM, and DSA were indeed state-of-the-art two years ago, the landscape has evolved. Recent works like MTT (CVPR2023) and BPTT (NeurIPS2022) now represent the new state-of-the-art in this domain.' MTT is proposed in CVPR2022, not 2023. --- Reply to Comment 1.1.1: Title: Comparison to other SoTAs is coming soon. Comment: Thanks for **acknowledging** our theoretical and empirical **Response to Weakness 1** of _**why the method works is not sufficient**_. Upon your recommendation, we revisited the literature and the recent advances like HaBa, IDC, DREAM, and FTD. We have chosen **DREAM (ICCV2023)** as it is well-suited to efficient training, and the rebuttal time is limited. 
Importantly, its codebase supports gradient matching and feature matching-based dataset distillation frameworks, which can supplement the **Response to Weakness 2: considering the gradient matching or feature matching-based dataset distillation frameworks in our evaluations**. We aim to conduct a comprehensive comparison against these methods, especially on the CIFAR10/100 and TinyImageNet datasets as you mentioned. It is true that replicating and comparing with each of the state-of-the-art methods is time-intensive, especially given the limited time during rebuttal. Nevertheless, we will try our best to fill the table with more evaluations. Yes, MTT is proposed in CVPR2022. We have confirmed that we correctly cite MTT in the main paper.
Summary: The authors tackle the dataset distillation task — synthesizing a smaller dataset using which one can train models towards comparable test performance to models trained on the full dataset. Unlike the current methods that optimize through heuristic matching between the real and synthetic datasets, the proposed method performs data distribution mutual information maximization. Interestingly, when it comes down to the implementation, the method looks very much like a regular SimCLR-like contrastive learning formulation. However, the authors provide theoretical groundings on how it is related to the mutual information maximization problem. Strengths: 1. It is a fairly simple idea and I am surprised that no one has been doing it — verified by a quick literature search. I suppose one reason is that prior researchers who thought of using mutual information encountered the difficulty of attaining an estimate of the data distribution, which the authors worked around (line 135-163). 2. Illustration in Figure 2 is well-made and very informative. 3. The authors provided theoretical grounding for using the “mundane” contrastive learning formulation (Figure 2) by showing its optimization target is closely related to the proposed “Accessible Mutual Information”. In some sense this is an interesting explanation of why SimCLR-like contrastive learning (single instance multi-view, positive/negative samples) works so well. 4. Significant improvement over the baseline (see Figure 3). Weaknesses: 1. At the moment, I am unaware of any work showing the theoretical link between mutual information and SimCLR-like contrastive learning. If there exists prior work on this topic, the novelty of this work could be largely mitigated. 2. It might have been better to explore other more sophisticated datasets and architectures beyond the three-layer ConvNet, though not absolutely necessary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
The authors seem to define dataset distillation in slightly different manners in the abstract compared to in the Preliminaries section. In the former case, they said “dataset distillation aims to synthesize a small dataset whose test performance is comparable to a full dataset using the same model” whereas in the latter case they said “the goal of dataset distillation is to synthesize a small training set such that models trained on this synthetic dataset can have comparable performance as models (with the same architecture) trained on the large real set”. The nuance makes the two definitions a bit different — the former is helpful for testing acceleration while the latter is helpful for training acceleration. I would recommend the authors to clarify which definition they decide to go with (or maybe both) and ensure consistency. 2. Is there any reason BPTT+MIM4DD is shown but BPTT is not included as a standalone baseline in Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Nothing to be noted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer sR5S, Thank you very much for the constructive comments and support. ## Response to Weakness 1: Novelty discussion in the context of contrastive learning. Actually, we have discussed dataset distillation and contrastive learning in more detail in the Appendix (due to the space limit, we only put a shortened version of the related work in the main paper): The fundamental idea of all contrastive learning methods is to draw the representations of positive pairs closer and push those of negative pairs farther apart within a contrastive space. Several self-supervised learning methods are rooted in well-established ideas of MI maximization, such as Deep InfoMax [A9], Contrastive Predictive Coding [A15], MemoryBank [A21], Augmented Multiscale DIM [A1], MoCo [A8] and SimSiam [A5]. These are based on NCE [A7] and InfoNCE [A9], which can be seen as a lower bound on MI [A16]. Meanwhile, Tian et al. [A18] and Chen et al. [A4] extend the contrastive concept into the realm of Knowledge Distillation (KD), pulling and pushing the representations of teacher and student. The formulation of our method for DD, MIM4DD, also absorbs the core idea (i.e., constructing informative positive and negative pairs for a contrastive loss) of the existing contrastive learning methods, especially the contrastive KD methods CRD and WCoRD. However, our approach has several differences from those methods: - (i) our targeted MI and formulated numerical problem are totally different; - (ii) our method can naturally avoid the cost of MemoryBank for the exponential number of negative pairs in CRD and WCoRD, thanks to the small size of the synthetic dataset in our task. Given that the size of the synthetic dataset $M$ typically ranges from $0.1-1$% of the size of the real dataset $N$, the product $M\cdot N$ is significantly smaller than $N\cdot N$ (i.e., $M\cdot N \ll N\cdot N$). ## Response to Weakness 2: Experiments on ImageNet subset. 
Thank you for pointing out the potential benefits of exploring more complex datasets and architectures. We acknowledge the importance of demonstrating the versatility of our method, and in fact, we have taken steps in that direction: - **Expanded Dataset:** We applied our method to ImageNet subsets, focusing on a limited number of classes, using the MTT codebase as a foundation. - **Increased Resolution & Depth:** Given the intricacies of high-resolution images, we tested our method on 128×128 subsets of ImageNet. Accommodating this higher resolution necessitated an architectural adaptation, leading us to employ a depth-5 ConvNet for these experiments. - **Specific Subsets:** ImageNette (assorted objects) and ImageWoof (dog breeds) are existing subsets designed to be easy and hard to learn respectively. - **Results:** The outcomes of these experiments are showcased in Table 1 of the supplementary one-page PDF. Encouragingly, our method continued to demonstrate its effectiveness and superiority across these tests. ## Response to Question 1: Clarification for Definition of Dataset Distillation. Question 1: In the former case, they said “dataset distillation aims to synthesize a small dataset whose test performance is comparable to a full dataset using the same model” whereas in the latter case they said “the goal of dataset distillation is to synthesize a small training set such that models trained on this synthetic dataset can have comparable performance as models (with the same architecture) trained on the large real set”. The nuance makes the two definitions a bit different — the former is helpful for testing acceleration while the latter is helpful for training acceleration. The definition of dataset distillation is the latter. Dataset distillation is helpful for training acceleration. 
For the former, we intend to say "dataset distillation aims to synthesize a small dataset where the test performance of models trained on the small dataset is comparable to a full dataset using the same model", which has the same meaning as the latter. We recognize the potential for confusion and will ensure that the definitions are consistent and clear throughout our paper in future iterations. ## Response to Question 2: Detailed explanation about Table 1. Thank you for raising this point about the inclusion of BPTT in Table 1. To clarify, BPTT is indeed present as a standalone baseline. You can find its performance metrics on the third-last line, labeled as BPTT [11]. The subsequent line shows the performance when combined with our method, BPTT+MIM4DD. Finally, the last line depicts the performance improvement attained. We appreciate your attention to detail and will ensure such listings are more prominently highlighted in the future. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I would like to thank the authors for addressing the comments. I do not have additional concerns. Besides, I must show appreciation to the authors for being so patient and polite when they replied to the apparently stupid question (Question 2). I really don't understand why I even had that confusion in the first place. The rating (7) still reflects my assessment of this submission and I decide to keep it as is. Best of luck. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer sR5S, We sincerely appreciate your response and are pleased that you found our response beneficial. We're extremely happy that your score (7 accept) will be maintained. Thank you once more for your valuable inputs in enhancing our submission. Best regards, Authors
Summary: The paper proposes a new method named MIM4DD that tries to maximize the mutual information shared between synthetic images and real images during the process of dataset distillation. The paper derives a lower bound of MI and formulates it as a learning objective for optimization. The proposed new loss can be combined with a wide variety of DD methods to boost performance. ========== I have read the author's responses and they have addressed my concerns by running more experiments and showing more proof. ========== Strengths: - The paper proposes a new way of boosting the performance of DD methods by measuring the mutual information between synthetic datasets and real datasets - This paper studies the mutual information and formulates it mathematically, which is largely ignored by a lot of previous methods. - Through approximation, the paper derives a lower bound of an accessible MI loss and proposes to tackle it with contrastive learning. - The paper is well organized and easy to follow - The comparison is thorough, such as Table 1. Weaknesses: - The foundation of MIM4DD seems incorrect: MI at the targeted data level is equivalent to MI at the feature-map level. From equation 5, F and G may not be invertible, which depends heavily on the activation function. Therefore theorem 1 doesn't apply. - The evaluation results are mostly within the variance of the baseline methods such as MTT; there is no strong evidence that the proposed method works. - Strong limitations are introduced, such as 50K negative pairs, which could make the method infeasible to apply to larger datasets. - Writing errors such as "to encounter this obstacle" Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - It's hard to interpret the performance boost in Table 1, what are the hyperparameters such as $\lambda$ and $\beta$ used in MTT + MIM4DD and BPTT+MIM4DD? 
- Please also see my comments in weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - The derivation seems incorrect - The method introduces strong scalability limitations such as the number of negative pairs and the per-layer matching loss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Weakness 1 and Limitation 1: Overall invertibility of entire neural networks is well-supported. Thank you for raising this concern. It's essential to clarify that while individual modules of neural networks might not be inherently invertible, the overall invertibility of entire neural networks is well-supported in the literature. Here's a brief overview: - Dosovitskiy and Brox [1] demonstrated the possibility of inverting the hidden activations of feedforward CNNs back to the input domain using upsampling deconvolutional architectures. - Zhang et al. [2] provided evidence that commonly used CNN architectures like VGGNet and ResNet are almost fully invertible, especially when leveraging pooling switches. - Gilbert et al. [3] presented both theoretical and empirical evidence for the invertibility of entire neural networks. Their theoretical explanations were rooted in compressive sensing, and they corroborated these findings with practical analyses on several learned networks. In the realm of mutual information in neural networks, this property of overall invertibility is not an oversight but a foundational premise. As a prime example, the information bottleneck theory [4,5,6] emphasizes encoding input data into a compressed representation that maximizes target prediction. The theory is contingent upon minimizing the mutual information between input variables and their latent representations, while simultaneously maximizing the mutual information between the output and these latent representations. Several subsequent studies have employed this principle, implicitly assuming the invertibility of entire neural networks, to shed light on the intricacies of neural network operations. In summary, the assumption of invertibility at the level of entire neural networks is not only well-founded but also pervasive in the literature exploring mutual information dynamics within these networks. 
[1] Inverting visual representations with convolutional networks. In CVPR, 2016. [2] Augmenting neural networks with reconstructive decoding pathways for large-scale image classification. In ICML 2016 [3] Towards Understanding the Invertibility of Convolutional Neural Networks. [4] Opening the black box of deep neural networks via information [5] MI Neural Estimation, ICML, 2018 [6] Deep VIB, ICLR, 2017 ## Response to Weakness 2: Results discussion. It's essential to frame our results within the broader context of advancements in dataset distillation (DD): **Comparative Performance:** Our method consistently surpasses 12 recent DD methods from esteemed conferences like NeurIPS, ICLR, ICML, and CVPR. We emphasize that many of these prior studies have only achieved approximately 1-2% accuracy improvements over their predecessors. **Relative Improvement:** Absolute improvement might sometimes be deceptive. For instance, if there's just a 3% accuracy gap between the real and the distilled dataset, expecting a 10% or even 5% absolute improvement is impractical. Notably, our method MIM4DD demonstrates a 13% average relative improvement over the second-best method (BPTT, NeurIPS’2022). Importantly, MTT and BPTT are currently the strongest baselines. **Peer Validation:** Other reviewers have also acknowledged our method's superiority. For instance: - Reviewer sR5S observed, "Significant improvement over the baseline (see Figure 3)" - Reviewer pJjQ highlighted, "improves the performance of some methods as an add-on module". - Reviewer 7GDB appreciated that "Remarkable performance improvements have been achieved when plugging the proposed loss into state-of-the-art distillation methods". We believe it's crucial to view performance advancements not just in isolation, but in relation to the current boundaries of the field. ## Response to Weakness 3 and Limitation 2: Scalability Discussion. Thank you for pointing out concerns related to scalability. 
We'd like to clarify a few things: **Contrastive Learning Scalability:** As elaborated in the Appendix (A. L71-74), our MIM4DD method leverages fundamental principles from contrastive learning, particularly the ideas central to contrastive KD methods like CRD [A18] and WCoRD [A4]. These established methods offer a framework for managing a vast number of negative samples through a solution called MemoryBank [A21]. This system allows for computationally efficient processing of even millions of negative samples. **Distinctiveness of Our Approach:** Despite the shared inspiration, MIM4DD holds significant differences from the aforementioned methods: our mutual information target and the resulting numerical formulation diverge substantially; our method can further decrease the cost of MemoryBank for the exponential number of negative pairs in CRD and WCoRD, thanks to the small size of the synthetic dataset in our task. Given that the size of the synthetic dataset $M$ typically ranges from $0.1-1$% of the size of the real dataset $N$, the product $M\cdot N$ is significantly smaller than $N\cdot N$ (i.e., $M\cdot N \ll N\cdot N$). **Efficiency in Practice:** In actual implementations, the overhead introduced by MIM4DD is minimal. Our evaluations show that the additional training time required due to MIM4DD is a mere 3%. In essence, while we acknowledge the scalability challenge, our method effectively leverages established techniques and unique task characteristics to remain feasible for large datasets. ## Response to Question 1: $\lambda$ and $\beta$ in Table 1. We have detailed the selection process for hyperparameters $\lambda$ and $\beta$ in Sec. 3.3 (L280-295). For clarity, based on our empirical evaluations, we set $\lambda = 1$ as illustrated in Figure 3 and $\beta = 2$, which can be referenced in the corresponding Table 2. Both of these values were consistently used across all experiments presented in Table 1. 
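For concreteness, here is a minimal, hypothetical sketch of how such hyperparameters could enter the objective: an InfoNCE-style loss over synthetic-real feature pairs (positives share a class label), and a total loss where $\lambda$ scales the per-layer NCE terms, each divided by $\beta^{K-1-k}$ as discussed in this thread. Function names, tensor shapes, the temperature, and the cosine-similarity logits are our assumptions rather than the paper's exact implementation:

```python
import numpy as np

def nce_loss(f_syn, f_real, y_syn, y_real, tau=0.1):
    """InfoNCE-style loss between synthetic (M, d) and real (N, d) features
    at one layer; pairs sharing a class label are treated as positives, and
    the softmax normalizes over all M*N synthetic-real pairs."""
    f_syn = f_syn / np.linalg.norm(f_syn, axis=1, keepdims=True)
    f_real = f_real / np.linalg.norm(f_real, axis=1, keepdims=True)
    logits = f_syn @ f_real.T / tau                       # (M, N)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (y_syn[:, None] == y_real[None, :])
    return -(log_prob[pos]).mean()

def total_loss(loss_dd, nce_per_layer, lam=1.0, beta=2.0):
    """Base DD matching loss plus layer-wise NCE terms, each divided by
    beta**(K-1-k) so deeper layers (larger k) contribute more."""
    K = len(nce_per_layer)
    return loss_dd + lam * sum(l / beta ** (K - 1 - k)
                               for k, l in enumerate(nce_per_layer))
```

With $\beta = 2$, the deepest layer keeps full weight while earlier layers are down-weighted geometrically; $\lambda = 1$ then balances the NCE terms against the base distillation loss.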
--- Rebuttal Comment 1.1: Title: BPTT + MIM4DD is lower than BPTT? Comment: Hi, thanks to the authors for the response. I have a few other questions: 1. BPTT + MIM4DD performs worse than BPTT? The results of BPTT [11] for MNIST IPC 1, 10 and 50 are 98.7, 99.3 and 99.4. The results reported in this paper for BPTT+MIM4DD are 95.8, 98.9 and 99.2, which are lower than the original method's. Similar downgraded performance is also seen on other datasets, such as CIFAR-100: 34.0 and 42.9 for BPTT versus 25.0/38.5 for BPTT+MIM4DD. What's the cause of the absolute 9% performance drop? Does it mean that MIM4DD can actually hurt BPTT's performance? There is also a huge 15% performance drop on CIFAR-10 IPC 1 and 10. 2. Can the authors upload the results for TinyImageNet? Just IPC 1 and 10 are fine if the authors have trouble getting results for CIFAR-100 IPC 50. It should be quick to run. (BPTT has TinyImageNet IPC 1, and MTT has IPC 1, 10 and 50.) It will be great if you can provide the results for IPC 1 for BPTT and MTT and IPC 10 for MTT. 3. What is the scale of the $L_{NCE}^k$ loss and the $L_{DD}$ loss? Are they of the same magnitude? 4. What is the motivation for dividing the $L_{NCE}^k$ loss by $\beta^{K-1-k}$? 5. When you tried to get the results for reviewer 7GDB, did you also apply the division factor mentioned in point 4 above? 6. Conflict (typo) in the newly uploaded pdf (Figure 1): the figure description says the right one is without MIM4DD and the left one is trained with MIM4DD, but the subtitle says the other way around. --- Reply to Comment 1.1.1: Title: Second-round Responses (I) Comment: Thanks for acknowledging our theoretical and empirical **Response to Weakness 1 and Limitation 1: Overall invertibility of entire neural networks is well-supported**. For the new questions, we respond with the following answers (**NQ** stands for new question): ## NQ.1. BPTT + MIM4DD is lower than BPTT? Thank you for highlighting this discrepancy.
It's essential to note that the results for BPTT we report are based on our strict reproduction using BPTT's official codebase. Despite our meticulous adherence to the methodology, we were unable to replicate the exact results claimed by BPTT. Moreover, based on our examination of all 17 papers citing BPTT, none of them refers to BPTT's reported results, which further suggests that other researchers might also be facing challenges in reproducing those numbers. Thus, in our paper, we decided to use the results we obtained from our reproduction, since they remain competitive and within the state-of-the-art range. We appreciate your understanding and will clarify this in our revision. Additionally, in light of Reviewer pJjQ's comments, we have further compared our method within the DREAM (ICCV 2023) framework, achieving new state-of-the-art results. DREAM is a stronger baseline than BPTT. **More details can be found in our response to Reviewer pJjQ.** ### Using DREAM (ICCV 2023) [R.1] as a baseline framework, we reached a new SOTA! **Why DREAM was chosen for additional experimentation:** - Despite not being a required comparison according to NeurIPS policy, DREAM represents the cutting edge in dataset distillation, and we aim to remain at the forefront of this research area. - Given the limited rebuttal timeframe, DREAM's efficiency and clear codebase provided an ideal setting for our experiments. - DREAM's codebase is compatible with gradient- and feature-matching-based dataset distillation frameworks. Incorporating these provides a comprehensive response to concerns raised about these aspects in our previous evaluations. Building upon DREAM's framework, we integrated our MIM4DD module. The cluster-wise DD approach of DREAM was retained, with our contrastive aligning module enhancing DREAM's match loss component.
All experimental settings strictly adhered to DREAM's parameters, ensuring that only our module contributed to any observed variations. **Experimental Results:** Adding our method MIM4DD to DREAM.

| Method | CIFAR10 IPC-1 | CIFAR10 IPC-10 | CIFAR10 IPC-50 | CIFAR100 IPC-1 | CIFAR100 IPC-10 |
|-------|-------|-------|-------|-------|-------|
| DREAM [R.1] | 51.1±0.3 | 69.4±0.4 | 74.8±0.1 | 29.5±0.3 | 46.8±0.7 |
| DREAM + MIM4DD | 51.9±0.3 | 70.8±0.1 | 74.7±0.2 | 31.1±0.4 | 47.4±0.3 |

Top-1 accuracy of test models trained on distilled synthetic images on **TinyImageNet**.

| IPC | Ratio % | DM [39] | MTT [5] | DREAM [R.1] | DREAM + MIM4DD | Whole |
|-------|-------|-------|-------|-------|-------|-------|
| 1 | 0.017 | 3.9±0.2 | 8.8±0.3 | 10.0±0.4 | 11.2±0.2 | 37.6±0.4 |
| 10 | 0.17 | 12.9±0.4 | 23.2±0.2 | 23.9±0.4 | 24.8±0.3 | 37.6±0.4 |

These results underline that MIM4DD, when integrated into DREAM, further enhances performance. While hyper-parameters weren't exhaustively fine-tuned, the results reflect MIM4DD's versatility across different dataset distillation frameworks. In conclusion, the enhancement of DREAM's results with our MIM4DD module attests to its efficacy and adaptability. We appreciate the reviewer's feedback, which provided an avenue for us to further highlight the method's robustness and relevance in contemporary DD research. ## NQ.2. Results on TinyImageNet Please refer to NQ.1. We used the new SOTA codebase to run the experiments on TinyImageNet. **Reference** [R.1] DREAM: Efficient Dataset Distillation by Representative Matching, ICCV 2023
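As a rough illustration of the plug-in pattern described above, the auxiliary contrastive terms can be added on top of a base match loss with the layer weighting $1/\beta^{K-1-k}$ raised in the reviewer's questions, using $\lambda=1$, $\beta=2$ as reported. All names and numbers below are hypothetical; this is a sketch of the idea, not the authors' implementation:

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.07):
    """Toy InfoNCE on precomputed similarities (illustrative only)."""
    logits = [sim_pos / tau] + [s / tau for s in sim_negs]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)

def total_loss(match_loss, nce_losses, lam=1.0, beta=2.0):
    """Base DD match loss plus layer-weighted contrastive terms:
    layer k (k = 0..K-1) is down-weighted by beta**(K-1-k), so the
    deepest layer contributes with full weight."""
    K = len(nce_losses)
    return match_loss + lam * sum(l / beta ** (K - 1 - k)
                                  for k, l in enumerate(nce_losses))

# Example: two layers; the shallower term is halved, the deeper kept whole.
loss = total_loss(1.0, [0.4, 0.8])
```

Under this weighting, shallower layers are geometrically discounted while the deepest layer ($k=K-1$) keeps full weight, which is one plausible reading of the division factor asked about in question 4.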
Rebuttal 1: Rebuttal: Here is the one-page PDF containing the experiment figure and table. Pdf: /pdf/b2aed65f42c1867ab2d575b46657df837095d4ac.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Variational Perspective on High-Resolution ODEs
Accept (poster)
Summary: This paper introduces forced Lagrangians in order to better understand discrete schemes for accelerated convex numerical optimization from a continuous-time ODE perspective. Strengths: Two choices of F enable re-deriving continuous-time ODEs which were introduced recently in the literature (Section 2). Likewise, the accelerated discrete schemes derived subsequently (Sections 3, 4) recover known schemes, or slightly modified forms with slightly superior convergence rates. Empirically, it is demonstrated that even for non-convex network training, applying stochastic versions of accelerated schemes may perform well. This paper significantly contributes to our understanding of what unconstrained convex optimization with optimal convergence rates really means. Weaknesses: The authors use their jargon right from the beginning, which makes the paper hard to read for readers who do not work on similar topics. For example, what "LR-ODE", "HR-ODE" and the "rate matching technique" mean should be briefly explained to a broader educated readership in the introduction. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: none Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: see "weaknesses" on how to improve the presentation Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and comments. We will update the introduction of our camera-ready version with a "Background" section where we describe terms like "LR-ODE", "HR-ODE" and the "rate matching technique" in order to reach a wider audience, as thoughtfully suggested by the reviewer.
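For reference, the standard forms these terms refer to, assuming the usual NAG equations from Su et al. (2016) and Shi et al. (2021) rather than anything reproduced from this paper, are:

```latex
% Low-resolution ODE (Su-Boyd-Candes): the step size s has vanished.
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0
% High-resolution ODE (Shi-Du-Jordan-Su): O(\sqrt{s}) correction terms survive.
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t)
  + \sqrt{s}\,\nabla^{2} f\big(X(t)\big)\dot{X}(t)
  + \Big(1 + \frac{3\sqrt{s}}{2t}\Big)\nabla f\big(X(t)\big) = 0
```

The extra $\sqrt{s}$-dependent terms are what let the high-resolution model track the discrete NAG iterates as the step size changes.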
Summary: This work combines variational perspective approach with high-resolution ODE functions to investigate the Nesterov accelerated gradient descent algorithm (NAG). With variational perspective, the authors reconstruct various high-resolution ODEs derived in previous research using alternative methods. Moreover, through this approach, they propose a special representation of NAG that exhibits an improved convergence rate in terms of gradient norm minimization. The authors also discuss some new properties of rate-matching technique. Finally, the authors analyze the stochastic setting both theoretically and empirically. Strengths: 1. The idea of combining variational perspective and high-resolution ODEs by including external forces is very interesting. 2. The authors show several important theoretical results in their manuscript. By carefully checking their proofs of theorems (except for Section 5 because of time), I think overall these results are correct. 3. The numerical results indicate the potential of having better optimization algorithms based on the theoretical results in this manuscript. 4. Overall, the manuscript is well written. Overall, I have a positive impression of this work, however, I also admit that I am not an expert in this specific field, which may affect my confidence in assessing its accuracy and significance. Weaknesses: 1. It would be better to have a more detailed introduction to low-resolution ODEs and high-resolution ODEs. Without reading some previous work, it is difficult to understand the differences between low-resolution ODEs and high-resolution ODEs. Are high-resolution ODEs the ODE functions that contain the learning rate $s$? Why do people need to care about high-resolution ODEs? 2. I feel that the proof of Proposition 4.1 is more of an intuition rather than a rigorous demonstration. The treatment of the condition $s\to 0$ seems imprecise, as sometimes the authors will directly consider this condition as $s=0$ (e.g. 
line 60), while at other times the authors maintain $s$ to be a non-zero value (e.g. line 59). The use of the word “approximately” in Prop 4.1 is also vague. 3. Some typos/unclear parts in the proofs that the authors may need to double-check. (a) line 84, $\frac{\partial L}{\partial X}(X_t,...)$ should be $\frac{\partial L}{\partial X_t}(X_t,...)$. (b) equation (7), left hand side, $\bigtriangledown f$ should be $\bigtriangledown f(X_t)$. (c) Theorem 2.1, $\dot{\gamma}=e^{\alpha t}$ should be $\dot{\gamma}=e^{\alpha_t}$. (d) line 138, ODE (14) should be ODE (12). (e) equation (25), $(3/t+\sqrt{s}\bigtriangledown f(X_t))$ should be $(3/t+\sqrt{s}\bigtriangledown^2 f(X_t))$. (f) line 196, $\sigma$ is not introduced; is it the variance of the noise? (g) equation (38) (appendix), second line, $\sqrt{s}e^{-\alpha_t}\ddot{\beta_t}$ should be $\sqrt{s}e^{-2\alpha_t}\ddot{\beta_t}$. (h) line 392, (7) should be (11). (i) line 406, first equality, $+\frac{1}{2}||v_k-x^*||^2$ should be $-\frac{1}{2}||v_k-x^*||^2$. (j) line 408, the term $+\frac{s^2(k+2)}{4}||\bigtriangledown f(x_{k+1})||^2$ in the second inequality is left out; therefore the authors need to double-check whether the results still hold after considering this term. (k) line 411, the first equality is redundant. (l) line 413, "not ethat" typo. (m) line 421, the term $\frac{ks}{2}(\bigtriangledown f(x_k-\bigtriangledown f(x_k))$ misses a ")". (n) equation (62), $(3/t+\sqrt{s}\bigtriangledown f(X_t))$ should be $(3/t+\sqrt{s}\bigtriangledown^2 f(X_t))$. 4. Figure 1 is hard to read (font size too small). 5. It would be better to describe what "NAG" means (Nesterov accelerated gradient?) the first time this abbreviation is used. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. It is unclear to me why equation (21) and equation (27) (by replacing one term) are equal to NAG (line 76); why is that? 2. In line 211, the authors say that practically $k_0$ is lower than the term $(\cdot)^{1/\alpha}$.
However, one of the conditions in Theorem 5.1 and Theorem 5.2 is that $k_0\geq (\cdot)^{1/\alpha}$, does that mean this condition will not be satisfied in practice? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors don't specifically discuss the limitations of this work. The authors may consider adding a paragraph in their manuscript to discuss the limitations of their work based on the reviewer's feedback. I don't think there will be a significant negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and valuable comments. We are pleased that you recognized the strength of our paper in terms of novelty, significance, soundness, and presentation. Below, you can find our responses addressing the concerns you raised. **Reviewer**: “It would be better to have a more detailed introduction to low-resolution ODEs and high-resolution ODEs. Without reading some previous work, it is difficult to understand the differences between low-resolution ODEs and high-resolution ODEs. Are high-resolution ODEs the ODE functions that contain the learning rates? Why do people need to care about high-resolution ODEs?” **Authors**: We will update the introduction of our camera-ready version with a “Background” section where we describe terms like “LR-ODE”, “HR-ODE” and the “rate matching technique” and their significance to our analysis in order to reach a wider audience, as thoughtfully suggested by the reviewer. The HR-ODEs contain the step size (learning rate) $s$, allowing them to effectively incorporate Nesterov's Accelerated Gradient (NAG) method as the step size changes. In contrast, this is not the case for LR-ODEs, which are independent of the step size (for an illustrative description, refer to [3, Figure 2]). The importance of HR-ODEs lies in their direct connection to accelerated algorithms. These ODEs can be discretized using common discretization techniques like the semi-implicit Euler discretizer and recover well-known methods like the NAG algorithm. This is not the case for the LR-ODEs in [1], where more complicated and less intuitive discretizers (like the rate-matching technique in [2]) are needed. **Reviewer**: “I feel that the proof of Proposition 4.1 is more of an intuition rather than a rigorous demonstration. The treatment of the condition $s\rightarrow 0$ seems imprecise, as sometimes the authors will directly consider this condition as $s=0$ (e.g. 
line 60), while at other times the authors maintain s to be a non-zero value (e.g. line 59). The use of the word “approximately” in Prop 4.1 is also vague.” **Authors**: We use the term 'approximation' because we employ approximations as in equations (59) and (60). The reason for utilizing different forms of approximations is as follows: Given that $s$ converges to zero more rapidly than $\sqrt{s}$, we neglect $s$. This same 'approximation' strategy is used in [3, equations (2.2),(2.3)]. We will provide a clear explanation of the concept of 'approximation' in this specific context in our camera-ready version. **Reviewer**: “Some typos/ unclear parts in the proofs that the authors may need to double-check.” **Authors**: Thank you for your careful reading and detailed comments. We will fix these typos and mistakes in our camera-ready version. Particularly, (f): Yes, $\sigma^2$ is the noise variance. (j): Thanks for noticing this; it is easy to fix this problem. We simply forgot to update the coefficient of $||\nabla f(x_{k+1})||^2$ to $\frac{s(k+2)}{4}\left(\frac{1}{L}-s\right)$. This means that the term you pointed out should be combined with the second $||\nabla f(x_{k+1})||^2$. Therefore, the last 2 lines of (48) should be $$=-\frac{s(k+2)k}{8}\left(\frac{1}{L}-s\right)||\nabla f(x_{k+1})-\nabla f(x_k)||^2-\frac{s(k+2)}{4}\left(\frac{1}{L}-s\right)||\nabla f(x_{k+1})||^2-\frac{s^2(k+2)k}{8}||\nabla f(x_{k})||^2$$ $$\leq -\frac{s^2(k+2)k}{8}||\nabla f(x_{k})||^2,$$ where the inequality holds because the first two terms are nonpositive whenever $s\leq 1/L$. **Reviewer**: “Figure 1 is hard to read. (font size too small)” **Authors**: We will make the font size larger for the camera-ready version. **Reviewer**: “It would be better to describe what "NAG" means (Nesterov accelerated gradient?) the first time this abbreviation is used.” **Authors**: We will clarify this in our camera-ready version. 
**Question 1**: “It is unclear to me that equation (21) and equation (27) (by replacing one term) is equal to NAG (line 76), why is that?” **Authors**: This can be seen when one writes down the one-line representation of the update (21) or (27) with one term replacement. The pathway is as follows: consider (21) and write $v_k$ as a function of $x_k,x_{k+1}$ through the first line of the update (21). Then, replace it in the second line and rearrange the terms to get $$x_{k+2}=x_{k+1}+\frac{k}{k+3}(x_{k+1}-x_k)-\frac{sk}{k+3}\left( \nabla f(x_{k+1})-\nabla f(x_k)\right)-s\nabla f(x_{k+1}), $$ which is the one-line representation of the NAG method. For (27), after replacing that one term, the same approach for the sequence $x_k$ (eliminating $y_k$'s and $v_k$'s) leads to the one-line representation of the NAG method. **Question 2**: “In line 211, the authors say that practically $k_0$ is lower than the term $(⋅)^{1/α}$. However, one of the conditions in Theorem 5.1 and Theorem 5.2 is that $k_0 \geq (⋅)^{1/α}$, does that mean this condition will not be satisfied in practice?” **Authors**: The condition on $k_0$ is the minimum number of iterations needed for our theoretical guarantees to hold. In line 211 we mean that in practice the method achieves the error bounds in Theorems 5.1 and 5.2 even for a smaller number of iterations than the theoretical bound $(⋅)^{1/α}$. In a sense, the method works better than the theory suggests. Thank you for your careful reading. We will clarify this statement to avoid confusion. **Limitations**: “The authors may consider adding a paragraph in their manuscript to discuss the limitations of their work based on the reviewer's feedback.” **Authors**: We will add this paragraph, thank you for your suggestions. **References** [1] W. Su, S. Boyd, E. J. Candes, "A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights" (2016) [2] A. Wibisono, A. C. Wilson, and M. I. 
Jordan, "A variational perspective on accelerated methods in optimization" (2016) [3] B. Shi, S. S. Du, M. I. Jordan, and W. J. Su. "Understanding the acceleration phenomenon via high-resolution differential equations" (2021). --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for their response. I have one follow-up question. I agree that equation (21) can be transformed into the form of the one-line representation above. However, according to line 76, the one-line representation of NAG should be $$x_{k+2}=x_{k+1}+\frac{k+1}{k+4}(x_{k+1}-x_{k})-\frac{s(k+1)}{k+4}(\bigtriangledown f(x_{k+1})-\bigtriangledown f(x_{k}))-s\bigtriangledown f(x_{k+1}).$$ Do these differences matter? --- Reply to Comment 1.1.1: Title: Response 1 Comment: Thank you for your comments. The relation of the NAG (the one-line update) holds for any 3-point sequence; it does not matter if the sequence is $x_k,x_{k+1},x_{k+2}$ or $x_{k-1},x_{k},x_{k+1}$. This is just a matter of notation/convention; in fact, we can define a new sequence $z_k:=x_{k+1}$ and get exactly the same one-line update as the NAG.
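The index-shift argument in the reply above can be checked numerically. The sketch below is a toy verification on $f(x)=x^2/2$ (not code from the paper): it runs the one-line recursion with coefficients $k/(k+3)$ and the reviewer's variant with coefficients $(k+1)/(k+4)$, and confirms the second sequence is just the first shifted by one index:

```python
# Toy check of the index-shift argument; hypothetical re-implementation
# of the one-line NAG recursions for f(x) = x^2 / 2.
def g(x):
    return x  # gradient of f(x) = x^2 / 2

s, n = 0.01, 50
x = [1.0, 1.0]  # x_0, x_1
for k in range(n):
    # authors' form: coefficients k / (k + 3)
    x.append(x[k + 1] + k / (k + 3) * (x[k + 1] - x[k])
             - s * k / (k + 3) * (g(x[k + 1]) - g(x[k]))
             - s * g(x[k + 1]))

# reviewer's form: coefficients (k + 1) / (k + 4), started one index later
y = [x[1], x[2]]
for k in range(n - 1):
    y.append(y[k + 1] + (k + 1) / (k + 4) * (y[k + 1] - y[k])
             - s * (k + 1) / (k + 4) * (g(y[k + 1]) - g(y[k]))
             - s * g(y[k + 1]))

# the two sequences coincide under the shift z_k := x_{k+1}
assert all(abs(y[k] - x[k + 1]) < 1e-12 for k in range(len(y)))
```

Both loops apply the same coefficient sequence to the same three-point window, just with the index advanced by one, which is exactly the $z_k:=x_{k+1}$ relabeling in the reply.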
Summary: They generalize the Lagrangian formulation of known first-order optimization methods [Wibisono et al., 2016, Wilson et al., 2021] by introducing the notion of external forces, and show theorems on convergence for convex/strongly convex functions. In addition, they show the variational analysis leads to a special representation of NAG, which enjoys superior convergence rates compared to [Shi et al., 2019]. Based on the special representation of NAG, they generalize it to a stochastic variant in Section 5, called NNAG, and show its convergence theorem. They check the practical behavior of the proposed NNAG by using - binary classification, - classification on CIFAR10. Strengths: originality - Introducing the external force term into the Lagrangian formulation and showing convergence theorems with it. quality/clarity - They present their idea and mathematical statements clearly, and provide their proofs. significance - Their formulation can also recover known dynamics. In this sense, their proposal can be regarded as a unification of various optimization dynamics. Weaknesses: - The first part of the paper sounds natural, but to me, there seems to be no theoretical reason to introduce i.i.d. noise to the gradients in Section 5. - In my opinion, they should conduct more numerical experiments. For example, in the classification experiments on CIFAR10, NNAG and SVRG+NNAG are competitive with SGD. This result itself is good, but it means there is no big incentive to use the proposed algorithm over SGD. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Is there any natural/theoretical motivation to introduce i.i.d. noise to the gradients in Section 5? - Is there any strong incentive to use NNAG or its variants compared to SGD in practice? 
- Besides that, the authors wrote `As the figure depicts, the SVRG+NNAG performs faster than the other methods in terms of minimizing the training error.`, but minimizing training error by itself suggests the possibility of overfitting in the machine learning context. If not, please correct me. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and comments. We are pleased that you recognized the strength of our paper in terms of originality, quality/clarity, and significance. Below, you will find our responses addressing the concerns you raised. **Reviewer**: “The first part of the paper sounds natural, but to me, there seems no theoretical reason to introduce i.i.d. noise to the gradients in section 5. Is there any natural/theoretical motivation to introduce i.i.d. noise to the gradients in section 5?” **Authors**: The motivation for studying the model with i.i.d. noise in Section 5 is twofold: From a practical standpoint, this choice is motivated by the fact that many algorithms employed in machine learning (for example for training neural networks) are characterized by noisy gradients [1,2,3]. Thus, adding noise to the gradients takes our theoretical results from the earlier sections one step closer to the real-world practical scenarios. In addition, from a theoretical standpoint, our interest lay in probing the novel representation (21). For instance, [1] extends the continuous time analysis of deterministic NAG from [4] to a stochastic setting. Similarly, we extended our analysis in Section 5 to investigate the impacts of the new representation (21) in the stochastic setting. **Reviewer**: “In my opinion, they should conduct more numerical experiments. For example, the experiments in classification on CIFAR10, NNAG and SVRG+NNAG are competitive to SGD. This result, itself is good, but it means there is no big incentive to use the proposed algorithm, but SGD. Is there any strong incentive to use NNAG or its variants compared to SGD in practice?” **Authors**: As an incentive, we can highlight that SVRG+NNAG outperforms SGD in terms of convergence rate in our experiments. 
However, we wish to emphasize that our primary contribution lies in a new comprehension of acceleration, and SVRG+NNAG simply serves as a proof-of-concept resulting from this insight. The main contribution of our paper is a novel understanding of the acceleration phenomenon through an innovative extension on the continuous time analysis of the Nesterov’s accelerated gradient (NAG) method that leverages the forced Euler-Lagrange equation that we present in Section 2. Although NAG itself is mathematically well-founded, the particular mechanisms behind its effectiveness are not immediately obvious and often considered ‘mysterious’; please see lines 28-50 in our paper for a short survey on different attempts to ‘demystify’ this phenomenon. In this context, our novel understanding of acceleration through the forced Euler-Lagrange equation is interesting in its own right. Nevertheless, beyond its theoretical significance, we believe that this novel perspective also holds great potential for deriving new results and practical algorithms. We mention some immediate implications in Sections 3, 4, and 5, with SVRG+NNAG being just one of these examples. Our goal in Section 5 is to demonstrate the potential of exploring different representations (like (21)) in practice. This idea is further emphasized in the future directions, including the combination of NNAG with ADAM or RMSprop. Finally, we are open to the idea of including more numerical experiments if the reviewer can provide specific recommendations regarding the suggested experiments. **Reviewer**: “Besides it, the authors wrote "...", but minimizing training error itself sounds possibility of overfitting in machine learning context. If not, please correct me.” **Authors**: In the context of neural networks (or non-convex optimization in general), there is always the possibility of overfitting as well as converging to a poor local minimum that does not generalize well. 
This is why we include both "validation accuracy" and "training error" plots in Figure 2. These plots help us ensure that these issues do not arise in our experiments. It is important to note that the faster convergence rates of NNAG+SVRG do not pose problems of overfitting. If needed, one can terminate the algorithm earlier to achieve top performance more quickly. This observation is evident in Figure 2. For instance, NNAG+SVRG achieves the top "validation accuracy" after around 20 epochs (with a validation accuracy of approximately 0.6 and a training error of approximately 0.66), after which a slow overfitting phase begins. Similarly, for SGD and SVRG, their peak "validation accuracy" results are achieved after approximately 50 epochs (with a validation accuracy of around 0.6 and a training error of about 0.66), followed by an overfitting trend. Finally, NNAG achieves comparable results after roughly 100 epochs. We will include this discussion in our camera-ready version. Please also note that any technical analysis of the generalization error and/or overfitting is outside the scope of our paper. **Before we conclude**, we would like to kindly express our surprise regarding your relatively low score for our paper, especially because your review acknowledges the strengths of our paper in terms of originality, quality/clarity, and significance. Based on your feedback and statements, it appears (in our opinion) that the positive aspects of our work outweigh the negative ones. We have noticed that your concerns are focused on the stochastic extension in Section 5. In light of the comprehensive responses we have provided above for your concerns, we hope that you would consider reevaluating your score for our submission. **References** [1] M. Laborde and A. Oberman. “A Lyapunov analysis for accelerated gradient methods: from deterministic to stochastic case” (2020) [2] A. Defazio, F. Bach, S. 
Lacoste-Julien, "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives" (2014) [3] J. Wu et al. “On the Noisy Gradient Descent that Generalizes as SGD” (2019) --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. Their response to my questions: - motivation to introduce i.i.d. noise - The authors answered that it is a one step closer to the real-world practical scenarios like SGD in training of neural network. This makes sense from practical perspective. - incentive to use NNAG - The authors answered that "SVRG+NNAG outperforms SGD in terms of convergence rate in our experiments". In addition, I understand that the contribution of this paper is more theoretical, a new interpretation of the NAG method by the forced Euler-Lagrange equation. - on overfitting problem - The author explained why "faster convergence rates of NNAG+SVRG do not pose problems of overfitting" in the rebuttal, and I agree with the author's point. In summary, I can say that the author's explanation has, for the most part, addressed my concerns. So I would like to raise my score to weak accept.
Summary: This paper makes four contributions concerning High-Resolution ODEs and first-order optimization algorithms. The first part uses forced Euler-Lagrange equations to generalize the analysis of Low-Resolution ODEs to High-Resolution ODEs. The second part is a refined result for the bound estimation of the gradient norm. The third part is an interpretation of Nesterov’s acceleration via rate-matching discretization. In the last part, the authors propose a stochastic version of Nesterov’s accelerated gradient method, named NNAG in this paper, and compare it to several known stochastic methods on binary classification and CNN training. Strengths: This paper proposes a new idea of external force and uses forced Euler-Lagrange equations to generalize the analysis of Low-Resolution ODEs to High-Resolution ODEs. This leads to a new formulation of NAG. Combined with rate-matching discretization, the authors discover a new connection between NAG and the continuous ODE, i.e., it is understood as a perturbation of the Low-Resolution ODE. The new formulation of NAG further inspires the framework of Noisy NAG. The idea of forced Euler-Lagrange equations may bring new insight to fast algorithm design. Weaknesses: Although this paper provides a new reformulation of NAG, as the authors note, the refined convergence rate of the gradient norm for NAG has already appeared in [S. Chen, B. Shi, and Y.-x. Yuan. Gradient norm minimization of Nesterov acceleration: o(1/k^3). arXiv preprint arXiv:2209.08862, 2022]. The Lyapunov function is essentially the same using the implicit-velocity form and the explicit-velocity form in [B. Shi, S. S. Du, M. I. Jordan, and W. J. Su. Understanding the acceleration phenomenon via high-resolution differential equations. ArXiv, abs/1810.08907, 2021]. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The Lagrange function is kinetic energy minus potential energy (though multiplied by some decreasing coefficient). 
The Lagrange function and the external force are considered separately in this paper. Is it possible to give a physical explanation of the acceleration via the forced Euler-Lagrange equation, and to use different energy functions and external forces to inspire better algorithm design? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations and have indicated problems for further study. This work appears to have little negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and comments. In what follows, we address and respond to the points raised by the reviewer. **Reviewer**: "Although this paper provides a new reformulation of NAG, as the author suggested, the refined convergence rate of gradient norm for NAG has already appeared in [Chen et al., 2022]." **Authors**: [Chen et al., 2022] appeared while we were already working on our paper. Although they achieve essentially the same convergence rate, the analysis techniques are significantly different. In particular, they follow an implicit-velocity perspective leading to a different Lyapunov analysis, whereas we introduced the forced Euler-Lagrange perspective. Despite the end results being similar, we believe that our analysis is interesting in its own right. In particular, the use of external forces can inspire new methods or new convergence results for existing methods. **Reviewer**: "The Lyapunov function is essentially the same using the implicit-velocity form and the explicit-velocity form in [Shi et al., 2021]." **Authors**: Although our Lyapunov function is essentially the same as in [1] and [2], we would like to highlight that our new representation (please see (21)) enhances the understanding behind the selection of this Lyapunov function. This is due to the fact that the second component of this Lyapunov function, represented as $$\left( \frac{1}{2}||x_{k+1}-x^*+\frac{k}{2}(x_{k+1}-x_k)+\frac{ks}{2}\nabla f(x_k)||^2 \right)$$ equates to $$\frac{1}{2}||v_k-x^*||^2$$ where $v_k$ is defined in (21) (it is easy to see this simply by rewriting $v_k$ as a function of $x_k$ and $x_{k+1}$ using the first line of (21)). This intuitive understanding seems to be missing in the prior work. In addition, please note that one of the main goals of [1] is to simplify the analysis in [2].
Our work significantly advances this aim: upon comparing the proofs of Theorem 3.1 in [1], Theorem 6 in [2], and Theorem 3.1 in our work (see Appendix A.4), it is apparent that our new representation substantially simplifies the algebraic aspects of the proof. We will further clarify these connections and make comparisons with the existing approaches in the camera-ready version. **Reviewer**: "Is it possible to give a physical explanation of the acceleration via the forced Euler-Lagrange equation, and to use different energy functions and external forces to inspire better algorithm design?" **Authors**: From a physical perspective, the forces are non-conservative (line 87 in our paper), meaning that they are dissipative in nature, similar to friction or air resistance. This characterization provides a more comprehensive and accurate formulation compared to that in [3], which depicts a particle losing potential energy, gaining kinetic energy, and therefore accelerating. In this respect, a proper choice of the force leads to a more accurate physical model, and thus to better algorithm design; it can even yield other interesting findings. For example, in the strongly convex regime, choosing an appropriate force to derive the HR-ODE of the TM method could unveil new convergence rates for both the HR-ODE and the algorithm (line 287 in our paper). As it stands, the HR-ODE in [4] for the TM method has a proven convergence rate that is slower than the algorithm itself. We will provide a detailed discussion of this in the camera-ready version. **References** [1] S. Chen, B. Shi, and Y. Yuan. "Gradient norm minimization of nesterov acceleration: $o(1/k^3)$" (2022) [2] B. Shi, S. S. Du, M. I. Jordan, and W. J. Su. "Understanding the acceleration phenomenon via high-resolution differential equations" (2021) [3] A. Wibisono, A. C. Wilson, and M. I. Jordan, "A variational perspective on accelerated methods in optimization" (2016) [4] B. Sun, J. George, and S. S. Kia.
"High-resolution modeling of the fastest first-order optimization method for strongly convex functions" (2020) --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their reply. I agree that the forced Euler-Lagrange equation is another way to help simplify the proof in [B. Shi, S. S. Du, M. I. Jordan, and W. J. Su. "Understanding the acceleration phenomenon via high-resolution differential equations" (2021)]. Although the level of simplification compared to [S. Chen, B. Shi, and Y.-x. Yuan. "Gradient norm minimization of nesterov acceleration: o(1/k^3)" (2022)] is still not very clear to me, the authors have addressed my question. On the other hand, the physical explanation of the non-potential external force and its potential usage seem interesting to me. In summary, I have decided to raise my score by one based on the responses from the authors.
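For readability, the grouping behind the authors' point about the second Lyapunov component can be written out explicitly. This assumes, as the rebuttal indicates, that the $v_k$ defined in the paper's (21) can be rewritten in the form below; the identity is then a matter of regrouping terms.

```latex
% With v_k := x_{k+1} + \frac{k}{2}(x_{k+1}-x_k) + \frac{ks}{2}\nabla f(x_k),
% the second component of the Lyapunov function is exactly half the squared
% distance from v_k to the minimizer x^*:
\frac{1}{2}\Big\| x_{k+1} - x^{*} + \frac{k}{2}(x_{k+1}-x_k)
  + \frac{ks}{2}\nabla f(x_k) \Big\|^{2}
  \;=\; \frac{1}{2}\big\| v_k - x^{*} \big\|^{2}
```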
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Improved Algorithms for Stochastic Linear Bandits Using Tail Bounds for Martingale Mixtures
Accept (oral)
Summary: This paper studies the stochastic linear bandits as introduced in Abbasi-Yadkori et al., 2011. Here, at each time step $t$, an action set $\mathcal{A}_t$ is given. The learner then selects an action $a_t \in \mathcal{A}_t$, which maps to a feature vector $\phi(a_t)$, and subsequently receives a reward $\phi(a_t)^{\mathsf{T}}\pmb{\theta}^*+\epsilon_t$. The objective is to maximize the cumulative rewards over a designated time horizon $T$. The main contributions of this paper can be summarized as: 1. The authors propose a general approach based on the notion of Martingale Mixtures to create confidence sets. These are subsequently utilized in the LinUCB meta-algorithm to derive the bandit algorithms. It is further demonstrated that such algorithms can be efficiently computed via convex optimization. 2. The paper shows that the bandit algorithm derived from a suitable selection of mixture distributions $P_t$ attains the same worst-case regret as found in Abbasi-Yadkori et al., 2011. 3. Evidence is presented to show that the algorithm derived in this paper outperforms the approach proposed in Abbasi-Yadkori et al., 2011 when applied to several real-world datasets. Strengths: While I'm not thoroughly acquainted with the most recent literature on stochastic linear bandits, this paper appears to present compelling results. The idea of using mixture martingales to derive linear bandit algorithms is innovative and provides a general methodology for generating new algorithms. The authors supplement their theoretical contributions with empirical evidence showing that their algorithm outperforms those presented in previous literature, further strengthening their findings. Weaknesses: The downside of the paper lies in its inability to improve upon the theoretical worst-case bound as shown in Abbasi-Yadkori et al., 2011.
The strength of this paper would be significantly enhanced if the authors could demonstrate that employing the methodology from their current work could theoretically offer improved bounds compared to previous results (or new results in novel settings). Typos: - Line 216, $\sum_{t=1}^T$ is missing - Line 696, AUCB appeared twice Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm curious, how would the present work compare to the general approach as in "The Statistical Complexity of Interactive Decision Making" by D. J. Foster, S. M. Kakade, J. Qian and A. Rakhlin? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No issue with negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
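The generic LinUCB/OFUL template that the review summarizes can be sketched in a few lines: at each round, form a ridge estimate of $\theta^*$, compute an ellipsoidal upper confidence bound $\phi^{\top}\hat{\theta} + \beta\,\|\phi\|_{V^{-1}}$ for each available action, and play the optimistic one. The confidence radius `beta`, the ridge parameter `alpha`, and the synthetic action sets below are illustrative placeholders, not the paper's martingale-mixture construction.

```python
# Minimal sketch of a LinUCB/OFUL-style loop with an ellipsoidal
# confidence set (illustrative radius, not the paper's tighter UCBs).
import numpy as np

rng = np.random.default_rng(0)
d, T, n_actions = 3, 200, 10
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)   # hidden reward vector, ||theta*|| = 1

alpha = 1.0                    # ridge regularization (placeholder choice)
V = alpha * np.eye(d)          # regularized Gram matrix of played features
b = np.zeros(d)                # running sum of phi_t * r_t
regret = 0.0

for t in range(T):
    features = rng.normal(size=(n_actions, d))   # fresh action set A_t
    theta_hat = np.linalg.solve(V, b)            # ridge estimate of theta*
    V_inv = np.linalg.inv(V)
    beta = 1.0 + np.sqrt(np.log(t + 2))          # illustrative confidence radius
    # UCB(a) = phi(a)^T theta_hat + beta * ||phi(a)||_{V^{-1}}
    widths = np.sqrt(np.einsum("ij,jk,ik->i", features, V_inv, features))
    a = int(np.argmax(features @ theta_hat + beta * widths))
    r = features[a] @ theta_star + 0.1 * rng.normal()   # noisy linear reward
    V += np.outer(features[a], features[a])
    b += features[a] * r
    regret += np.max(features @ theta_star) - features[a] @ theta_star

print("average regret:", regret / T)
```

The paper's contribution, as described in the review, is a tighter way to choose the confidence set (and hence the effective `beta`) via tail bounds for martingale mixtures; the loop structure stays the same.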
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and questions. We respond to the reviewer's points: * *The downside of the paper lies in its inability to improve upon the theoretical worst-case bound as shown in Abbasi-Yadkori et al., 2011. The strength of this paper would be significantly enhanced if the authors could demonstrate that employing the methodology from their current work could theoretically offer improved bounds compared to previous results (or new results in novel settings).* While our worst-case regret bound in Thm. 7.6 only matches the equivalent regret bound for the OFUL algorithm by Abbasi-Yadkori et al., we can prove that our UCBs are tighter than the OFUL UCBs. In App. C.2, we show that for any value of the OFUL regularization parameter (similar to our $\alpha$ parameter), there are valid (and simple) choices of $\alpha$, $\mu_t$ and $T_t$ (which are not necessarily the optimal choices) such that our analytic UCBs (and therefore also our numerical UCBs) are always strictly tighter than the OFUL UCBs. We believe that there is potential to obtain worst-case regret bounds with improved dependence on $T$ using our methodology. In App. E.2, we investigate "more adaptive" choices for the mixture distributions, which depend on previously observed rewards. We observe that the radius quantity grows at a slower rate (in $T$) when using these more adaptive mixture distributions. This means that the data-dependent regret bound (in Thm. 7.5) also grows at a slower rate. In future work, we would like to search for improved data-independent bounds on the radius (which would give improved worst-case regret bounds). * *I'm curious, how would the present work compare to the general approach as in "The Statistical Complexity of Interactive Decision Making" by D. J. Foster, S. M. Kakade, J. Qian and A.
Rakhlin?* The present work gives a new and improved way to *construct confidence sets* for bandits, which are turned into improved bandit algorithms with guarantees *via the UCB/LinUCB meta-algorithm* (Sec. 4). The cited work by Foster et al. turns *any online predictor* (with guarantees) into a bandit/RL algorithm (with guarantees) *via the E2D meta-algorithm*. While Foster et al. aim to establish a complexity measure for general interactive decision making, we focus on providing as tight as possible confidence statements for a given bandit task. There are many further differences between both works in the scope and in the techniques. Thank you for pointing out the typos. We hope that this addresses the reviewer's points. We are open to discussion. --- Rebuttal Comment 1.1: Comment: I appreciate the authors addressing my concerns. Based on my understanding, while your meta-algorithm may have tighter UCBs than OFUL through appropriate parameter selection, this doesn't necessarily demonstrate that the current approach achieves tighter (asymptotic) worst-case regrets than OFUL (e.g., removing the $\ln T$ factor from the regret). However, I concur that the "adaptive" strategy outlined in Appendix E appears to be a promising route for attaining a tight dependency on $T$. I agree that the current work is significant enough to justify acceptance, and I'm happy to recommend it for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for responding to our rebuttal. Your response is a good summary of the theoretical results in the paper. Thank you for raising your score and recommending acceptance.
Summary: A very well written paper introducing a slightly different way of constructing confidence sets for $\theta^\star$ in the adaptive regression setting. The two main differences are that the paper bounds the norm of the observation noises $\epsilon_t$ directly, rather than the projection of the noises that features in the standard bound. This is done by showing that the method of mixtures can be used with a suitably adapted sequence of mixing measures, and choosing that mixture appropriately. Strengths: Paper is _very_ well written. Method introduced is neat and the perspective taken in deriving it will be of interest to other researchers in adaptive regression/design. Weaknesses: I don't believe the paper has any significant weaknesses. One could argue that it has, perhaps, limited scope: but bandits and adaptive regression/design are very popular topics nowadays, and this is an interesting read for many people interested in those. One issue I feel strongly about, but that can be easily addressed: you advertise that your confidence intervals are robust to misspecification (abstract, line 12). I was very disappointed to see, when I got to section 8.1 and particularly lines 282-289, that you mean misspecification of a prior in a Bayesian setting. The general setting of your work is frequentist, and in the frequentist setting, misspecification has a well understood meaning which, of course, does not coincide with that which you show. It is unclear that your method is any better in this respect than the standard concentration inequality used in OFUL (Yasin's original work); and indeed, that's a generous interpretation: it is generally accepted that bounds derived under frequentist assumptions work well in a misspecified Bayesian setting. Please remove this claim from your abstract. Another issue is that your plots are unreadable when printed in grayscale (all lines look the same).
I'm reviewing a grayscale printed version of this paper, so cannot assess empirical performance from plots. Fortunately for you, I also happen to care little for empirical results. I have some minor feedback, solely for the purpose of improving the manuscript: -High level: please reiterate in the introduction, e.g. on line 44, that your UCBs are tighter in an empirical sense; that you have not (to my understanding) shown them to be tighter in a theoretical sense. Your results are neat: by risking the perception that you might be overclaiming/misleading you'd be doing yourself a disservice. -High level: your method is highly related to the rather excellent paper -Line 65: the result you cite Chowdhury & Gopalan 2017 for is implied directly by theorem 4.1 in Yasin Abbasi-Yadkori's PhD thesis (2012); that the result of Chowdhury & Gopalan was novel is an error in the literature that ought not be propagated. -Lines 97-98, you state that $\mathcal{H}_t$-measurability equates to 'can be calculated using the data available just after reward $r_t$ is revealed'. From the rest of the paper, I know that you know that this isn't true. It may seem like a nice simplifying explanation, but it's misleading, and often inexperienced authors make a mess of things because they take that to be the definition. Indeed, in your setting, the event $\{ \theta^\star \in \Theta_t\}$ is $\mathcal{H}_t$ measurable (assuming $\Theta_t$ is a closed set, see next comment), since $\theta^\star$ is a constant and $\Theta_t$ is a measurable random set. But $\theta^\star$ is explicitly unknown at the end of the $t$th iteration, and so the indicator of $\theta^\star \in \Theta_t$ _cannot_ be computed with the information available at that point. Please remove that statement, and if you feel the reader might need a primer on measurability, please include a reference to any standard measure/probability textbook. -Lines 95 to 100: you define what are effectively random sets with a certain property.
One has to be careful around the definitions of random sets to ensure they behave in a way that one would expect. For example, we'd usually like that for a random set $A'$ subset of $A$, for any $a \in A$, the event $\{a \in A'\}$ is measurable. Your sets are closed and you work on a Polish space, so this is true; but I would point out that this is so (and indeed, you might run into trouble if the confidence sets were open). See Molchanov, Ilya: Theory of Random Sets. Springer London. 2017, 2nd edition. Proposition 1.1.2; that should be all you need. -Eq (5), I would point out that this is just the 2-norm of $\epsilon_t$; this makes it much clearer, for example, where your naive bound of line 158 comes from. -On your assumptions 7.1-7.4: It seems to me that 7.2+7.3 together imply a bound of the form asked for in 7.4; is there a good reason you have a separate assumption 7.4? PS: I have not read the appendix. I am confident from the sketches in the main text that the result claimed goes through. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I included some minor questions in the weaknesses section. I have no major questions for the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Some aspects of the writing could be thought to overclaim, specifically when it comes to robustness to misspecification. I feel strongly that this ought to be addressed. But this is also easily fixable; I hope the authors do so. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their careful and helpful comments! We address the reviewer's points under "Weaknesses": * *One issue I feel strongly about, but that can be easily addressed: you advertise that your confidence intervals are robust to misspecification (abstract, line 12). ...* We understand the reviewer's point, thank you for pointing this out. We will remove the claim about misspecification from the abstract, and in the main text of the paper we will clarify this term by writing "Bayesian prior misspecification". We hope that this makes clear that we make no claim about the frequentist notion of misspecification. * *Another issue is that your plots are unreadable when printed in grayscale (all lines look the same). ...* We will use a more grayscale-friendly color scheme for the revised version of the paper. * *High level: please reiterate in the introduction, e.g. on line 44, that your UCBs are tighter in an empirical sense; that you have not (to my understanding) shown them to be tighter in a theoretical sense. ...* Our UCBs are tighter in a theoretical sense. In App. C.2, we show that for any value of the OFUL regularization parameter (similar to our $\alpha$ parameter), there are valid (and simple) choices of $\alpha$, $\mu_t$ and $T_t$ (which are not necessarily the optimal choices) such that our analytic UCBs (and therefore also our numerical UCBs) are always strictly tighter than the OFUL UCBs. We will add a pointer to App. C.2 in the paragraph on line 44. * *High level: your method is highly related to the rather excellent paper* We would be curious to know which excellent paper the reviewer is referring to here, especially if we haven't cited it in our paper yet. * *Lines 97-98, ...* We will replace our previous statement about $\mathcal{H}_t$-measurability with: "each $\Theta_t$ can be calculated using the data $a_1, r_1, \dots, a_t, r_t$." 
* *Lines 95 to 100: ...* After this paragraph we will add the sentence: "We remark that the confidence sets $\Theta_t$ in this paper are random closed sets in the sense of [Molchanov, Def. 1.1.1], which implies that the event $\theta\in\Theta_t$ is actually measurable for any $\theta\in\mathbb{R}^d$." * *On your assumptions 7.1-7.4: It seems to me that 7.2+7.3 together imply a bound of the form asked for in 7.4; is there a good reason you have a separate assumption 7.4?* We agree that 7.2 + 7.3 imply assumption 7.4 with $C = LB$. Our reasons for stating a separate assumption are: (a) this is in line with the conventions of other linear bandit analyses (e.g. [Lattimore and Szepesvári, Section 19.3]); (b) this leaves open the possibility that a better (than $LB$) value for $C$ is known. The reviewer's other minor points under "Weaknesses" which we did not address above, we will directly fix in the paper as suggested. [Lattimore and Szepesvári] Lattimore, T. and Szepesvári, C., *Bandit algorithms.* Cambridge University Press. 2020. --- Rebuttal Comment 1.1: Comment: Oh, I seem to have failed to paste in the paper I had in mind, and now I have no idea what it was. My apologies. I've increased my score to 8, I believe this paper should absolutely be accepted. --- Reply to Comment 1.1.1: Comment: Thank you for responding to our rebuttal, raising your score and recommending acceptance.
Summary: The paper considers the problem of stochastic linear contextual bandits, and proposes an improvement of the classic LinUCB / OFUL algorithmic template via a more sophisticated construction of the confidence sets for the hidden reward vector. The improvement comes from replacing the confidence ellipsoid used since the classic work of Abbasi-Yadkori et al. (based on the method of mixtures) with a tighter confidence set based on what the authors call "adaptive martingale mixtures". The authors eventually derive a confidence ellipsoid resembling the ones used by Russo and Van Roy, which enjoys the useful property of having a potentially data-dependent radius, and can also be turned into a confidence sequence very easily via an application of Ville's inequality. Using two different methods (an exact convex solver and an approximation of the optimal width), the authors then turn these confidence sets for theta into confidence bounds for the rewards of each action, and use the resulting bounds in a UCB scheme. The algorithms are then shown to outperform standard LinUCB in some simple experiments. Strengths: The paper is superbly written and presents a very interesting technique, improving one of the main building blocks of UCB algorithms that have been used for over a decade without any substantial changes. The proposed techniques make use of techniques that have recently gained popularity for mean estimation and proving PAC-Bayesian generalization bounds. The derivations up until the end of Section 5 are very neat and satisfying, and I believe that everyone interested in confidence ellipsoids and linear bandits should find them interesting. Weaknesses: The concrete approaches proposed in Section 6 are also nice but leave something to be desired. Perhaps it is my fault, but I have missed how one should choose the predictions mu_t and T_t. 
From what I understand, both approaches in Sections 6.1 and 6.2 should give valid results irrespective of the choice of these parameters, but I still wonder how the choice will impact the quality of the guarantees. The authors only suggest that choosing mu and T that "are good predictors of the (stochastic) reward r" will yield good results, but do not elaborate further. For instance, is setting mu_t as the least squares estimator and T_t as the Gramian a good idea? It feels somewhat unsatisfying to introduce all the possibilities for adaptivity and then simply set a constant lambda_t and go with the standard choice of mu and T... Also, the confidence set proposed in Section 6.2 doesn't seem all that different from the standard tail bound popularized by Abbasi-Yadkori et al. Accordingly, the regret bounds of Theorem 7.5 and 7.6 also take the same form as previous bounds, which is not unexpected given how the theorems are stated for rather generic choices of mu and T. I can see that the new bounds *could* be tighter, but at the moment the theory does not reflect this, which is somewhat disappointing. One additional thing that I would like to comment on is the computational complexity of the resulting method. I understand that the UCBs obtained from the method are always convex in the feature representation of the actions (given how they are a maximum of linear functions). Thus, calculating the action with maximal UCB is a convex maximization problem, which is NP-hard in general. This is generally true for all UCB-like methods I can think of, so perhaps it is a bit odd to avoid discussing this question altogether in the paper and suggest that the gradient-based optimization scheme for finding the optimistic actions is a theoretically well-justified idea. (It is not, but I understand that it often still works in practice.)
I think it would be nice to add a comment on this in order to not mislead more casual readers who may not be familiar with this computational difficulty. Despite all my criticism above, I am happy to support acceptance of this paper to the NeurIPS program. I am looking forward to future literature addressing the current limitations of the approach proposed in this otherwise very nice paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
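As background for the reviewer's convexity remark: for the classical ellipsoidal confidence set, the per-action UCB is a maximum of linear functions of the features, and it admits the textbook closed form below. This is the standard OFUL-style computation (via Cauchy-Schwarz in the $V_t$ inner product), not the paper's tighter martingale-mixture sets; the symbols $\hat{\theta}_t$, $V_t$, $\beta_t$ are the usual ridge estimate, Gram matrix, and confidence radius.

```latex
% Maximizing a linear function over an ellipsoid:
\max_{\theta\,:\,\|\theta-\hat{\theta}_t\|_{V_t}\le\beta_t} \phi(a)^{\top}\theta
  \;=\; \phi(a)^{\top}\hat{\theta}_t \;+\; \beta_t\,\|\phi(a)\|_{V_t^{-1}}
```

The UCB is therefore convex in $\phi(a)$, so maximizing it over a large or continuous action set is a convex *maximization* problem, which is the source of the NP-hardness the reviewer points out.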
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We are very happy to receive such an enthusiastic review! We now address the reviewer's points under "Weaknesses": * *how one should choose the predictions mu_t and T_t ... I still wonder how the choice will impact the quality of the guarantees ... For instance, is setting mu_t as the least squares estimator and T_t as the Gramian a good idea? It feels somewhat unsatisfying to introduce all the possibilities for adaptivity and then simply set a constant lambda_t and go with the standard choice of mu and T ...* The effects of $\mu_t$ and $T_t$ on the tightness of the UCBs and the regret guarantees are determined by their effects on the squared radius $R_{\mathrm{MM}, t}^2$, which is defined in Eq. (5). Based on Eq. (5), we can treat the mean vector $\mu_t$ as a prediction of the reward vector $r_t$. The covariance matrix $T_t$ can be thought of as the uncertainty associated with this prediction. If the distance between $\mu_t$ and $r_t$ is close to 0 (i.e. $\mu_t$ is a good predictor of $r_t$), then the quadratic "prediction error" term in (5) will be close to 0, and we can afford to choose $T_t$ close to zero to minimize the log determinant term. Unfortunately, we cannot simply choose $\mu_t = r_t$, because then the mixture distributions would not satisfy the conditions in lines 812-816 (i.e. each component of $\mu_t$ can only depend on the *preceding* rewards). Hence, we can think of the $k$th component of $\mu_t$ as a prediction/guess/bet for the $k$th reward. We agree that it is exciting to consider more adaptive choices of $\lambda_t$, $\mu_t$ and $T_t$. In App. E.2, we investigate a generic method for setting $\mu_t$ and $T_t$ based on previously observed actions and rewards; this results in somewhat lower regret in an experiment (Fig. 6 in App. E.2) and possibly better regret bounds. In App. B.1, we derive the radius $R_{\mathrm{MM}, t}$ for more general choices of $\lambda_t$.
However, we don't analyze these choices of $\lambda_t$, $\mu_t$ and $T_t$ in the main paper because: (a) these choices make the resulting bandit algorithms very difficult to analyze theoretically; (b) we already struggled to fit everything into the 9-page limit. We view the analysis of more adaptive versions of our method as an exciting challenge to address in future work. * *Also, the confidence set proposed in Section 6.2 doesn't seem all that different from the standard tail bound popularized by Abbasi-Yadkori et al. Accordingly, the regret bounds of Theorem 7.5 and 7.6 also take the same form as previous bounds, which is not unexpected given how the theorems are stated for rather generic choices of mu and T. I can see that the new bounds could be tighter, but at the moment the theory does not reflect this, which is somewhat disappointing.* In App. C.2, we compare our analytic UCBs (from Thm. 6.1) to the OFUL UCBs. We prove that for any value of the OFUL regularization parameter (similar to our $\alpha$ parameter), there are valid (and simple) choices of $\alpha$, $\mu_t$ and $T_t$ (which are not necessarily the optimal choices) such that our analytic UCBs (and therefore also our convex-program UCBs from Eq. (6)) are always strictly tighter than the OFUL UCBs. Furthermore, there is hope that the more adaptive choices for $\mu_t$ and $T_t$ described in App. E.2 could lead to regret bounds with an improved growth rate in $T$. See Fig. 6 in App. E.2 and the discussion below it for more. * *One additional thing that I would like to comment on is the computational complexity of the resulting method. ... This is generally true for all UCB-like methods I can think of, so perhaps it is a bit odd to avoid discussing this question altogether ... I think it would be nice to add a comment on this in order to not mislead more casual readers who may not be familiar with this computational difficulty.* We fully agree with the reviewer on this point.
We will add a comment on this in the revised paper. --- Rebuttal Comment 1.1: Title: thank you Comment: Thank you for the response! All comments make perfect sense. I am keeping my score and will continue to support acceptance of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for responding to our rebuttal and recommending acceptance.
Summary: This paper studies the stochastic linear bandits problem and proposes an improved algorithm with sub-linear regret guarantees. The improvement is achieved using a novel tail bound for adaptive martingale mixtures to construct tighter upper confidence bounds, which leads to a smaller regret than existing algorithms for linear bandits. The authors also verify the performance of the proposed algorithm via experiments on hyperparameter tuning tasks. Strengths: #### **The following are the strengths of the paper:** 1. The performance of any upper confidence bound (UCB) based bandit algorithm depends on the tightness of the confidence bounds. This paper proposes a novel way (using the tail bound for adaptive martingale mixtures) to improve the upper confidence bound, leading to an improved UCB-based bandit algorithm with a smaller regret. 2. The authors propose two novel methods for computing the confidence bounds: Convex Martingale Mixture UCB (CMM-UCB) and Analytic Martingale Mixture UCB (AMM-UCB). CMM-UCB uses a convex solver for the UCB maximization (differentiable convex optimization), whereas AMM-UCB uses weak Lagrangian duality to obtain an analytic UCB (gradients can be computed in closed form or via standard automatic differentiation procedures). 3. When the mixture distribution is a Gaussian distribution, the authors showed sub-linear data-dependent and data-independent cumulative regret bounds for CMM-UCB and AMM-UCB. The authors also empirically validated the performance gain of the proposed methods over existing linear bandit algorithms. Weaknesses: #### **The following are the weaknesses of the paper:** 1. Assumption that the mixture distributions are Gaussian: The regret bounds stated in the paper hold only when the mixture distribution is Gaussian. It is unclear from the paper how practical this assumption is and what the consequences are (especially in analysis) if this assumption does not hold. 2.
Linearity assumption: Assuming a linear relationship between the reward and the action's features (in a high-dimensional space, using a known feature map) restricts the applications of the proposed methods. Even though the authors claim their tail bounds can be used to derive confidence sequences for non-linear reward functions, it is unclear what the challenges are in extending their work to non-linear reward functions (or kernelized bandits). 3. Unexplained notations: Many notations are not properly defined in the paper. For example: \ i. Line 145: What is the connection of $Z_t(f_t)$ with existing regret analyses (e.g., OFUL)? \ ii. Line 149: How are the parameters ($\boldsymbol{\mu}_t, \boldsymbol{T}_t$) of the Gaussian distribution computed? \ iii. In Theorem 7.5: what is the regret upper bound in terms of $T$? \ iv. Line 234: How to set the value of $c$? \ v: Line 248: What is $\sigma_0$ and how to set its value? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please address the weaknesses raised in ***Weaknesses**. Minor comment: 1. Line 303: FTS: Freq-TS I can change my score based on the authors' responses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I have raised a few limitations of the paper in my response to the ***Weaknesses**. Since the paper is a theoretical contribution to linear bandits literature, I do not find any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their kind and constructive comments. We now address the reviewer's points under "Weaknesses": 1. The reason for choosing Gaussian mixture distributions is that the expected value just above Eq. (5) can be calculated analytically when $P_t$ is a Gaussian, which is convenient for running the algorithm and for proving regret bounds. Restricting the mixture distributions to be Gaussian does not impose any assumptions on the ground truth reward function or the bandit problem. The mixture distributions can simply be thought of as hyperparameters of our bandit algorithm. 2. Our bandit algorithm applies to kernel bandits with minor modifications. The main challenge is in obtaining data-independent regret bounds analogous to the one in Thm. 7.6, since the feature dimension is $d=\infty$ for interesting kernels. In more detail, our confidence sequences in Corollary 5.2 can be immediately extended to non-linear reward functions $f^*$, simply by replacing $\phi(a_t)^\top\theta^*$ with $f^*(a_t)$ in the definition of $Z_t(f_t)$. The result is a confidence set, as in Corollary 5.2, with a squared error constraint for non-linear functions $f$ and a suitable boundedness constraint $\Vert f\Vert\leq B$. For kernel bandits, the corresponding UCB is still the solution of a convex program, so we can run our CMM-UCB and AMM-UCB algorithms in kernel bandit problems. While the data-dependent regret bound in Thm. 7.5 remains basically unaltered, the (derivation of the) data-independent regret bound in Thm. 7.6 must be modified. The main challenge is that the regret bound must now depend on quantities like the effective dimension or the maximum information gain of the kernel instead of the dimension $d$ of the feature vectors (which is $d=\infty$ for interesting kernels). We will explain this in more detail in the revised version of the paper. 3. i. 
The random variables $Z_t(f_t)$ in our submission play a similar role to the random variables $Z_t$ in [Russo and Van Roy, App. B.1]. ii. A standard choice is $\mu_t\equiv 0$ and $T_t = \Phi_t\Phi_t^\top$, which can be motivated by choosing $\theta\sim{\mathcal N}(0,I)$ and then considering the distribution of the function values $\Phi_t\theta$. In general, $\boldsymbol{\mu}_t$ and $\boldsymbol{T}_t$ can be freely chosen as long as the sequence of mixture distributions $(\mathcal{N}(\boldsymbol{\mu}_t, \boldsymbol{T}_t) |t \in \mathbb{N})$ satisfies the requirements for being a sequence of adaptive mixture distributions; see lines 126-130 for general mixture distributions and lines 812-816 for Gaussian mixture distributions. A particular adaptive choice of $\mu_t$ and $T_t$ is examined in App. E.2 (i.e., such that the entries of $\mu_t$ and $T_t$ depend on the previously observed rewards, namely they are predictors of the reward at the newly selected action), and is shown to yield good results. iii. The growth rate in $T$ of the regret bound in Thm. 7.5 is determined by the growth rate of the radius and the sum of norms. If we upper bound each of these terms by quantities with explicit dependence on $T$, then we arrive at the data-independent bound in Thm. 7.6. We can therefore say that the dependence on $T$ of the regret bound in Thm. 7.5 is no worse than that of the regret bound in Thm. 7.6 (i.e., no worse than $\mathcal{O}(\sqrt{T}\ln(T))$). iv. There are at least two good choices of $c$. In all of our experiments, we used $c = 1$, which is a simple choice that appears to work well. Alternatively, one can choose $c = B$. With this choice, the data-independent regret bound in Thm. 7.6 has an improved dependence on the norm bound $B$ (roughly $\mathcal{O}(\sqrt{B})$ instead of $\mathcal{O}(B)$). v. 
One can think of the real number $\sigma_0$ as a guess for the distance between the observed reward vector $\boldsymbol{r}_t$ and the predictions $\Phi_t\boldsymbol{\theta}_0$. Consider Eq. (5) with $\boldsymbol{\mu}_t = \Phi_t\boldsymbol{\theta}_0$ and $\boldsymbol{T}_t = \sigma_0^2\Phi_t\Phi_t^{\top}$. If the distance between $\Phi_t\boldsymbol{\theta}_0$ and $\boldsymbol{r}_t$ is close to 0, then we should choose $\sigma_0$ to be close to 0, since both the quadratic and log determinant terms in Eq. (5) will then be close to 0. Alternatively, if the distance between $\Phi_t\boldsymbol{\theta}_0$ and $\boldsymbol{r}_t$ is large, we should choose a larger $\sigma_0$ so that the quadratic term in (5) is not too large. We call $\sigma_0$ a guess because it has to be chosen *before* observing $\Phi_t\boldsymbol{\theta}_0$ and $\boldsymbol{r}_t$. Thank you for pointing out the typo on line 303. We hope that the reviewer can now recommend acceptance of the paper. [Russo and Van Roy] Russo, D. and Van Roy, B., Eluder dimension and the sample complexity of optimistic exploration. *Advances in Neural Information Processing Systems*, 26. 2013. --- Rebuttal 2: Comment: Thank you for the clarifications. As the authors have clearly addressed my concerns, I have increased my score. --- Rebuttal Comment 2.1: Comment: Thank you for responding to our rebuttal and raising your score.
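For intuition, the $\sigma_0$ guidance above can be sketched numerically. The two terms below (a quadratic form in the residual $\boldsymbol{r}_t - \Phi_t\boldsymbol{\theta}_0$ and a log-determinant) are a generic martingale-mixture-style stand-in, since the paper's Eq. (5) is not reproduced in this thread; the exact formula here is an assumption for illustration only.

```python
import numpy as np

# Illustrative stand-in for the two terms described in the rebuttal: a
# quadratic term in (r_t - Phi_t theta_0) and a log-determinant term, with
# mu_t = Phi_t theta_0 and T_t = sigma_0^2 Phi_t Phi_t^T. The exact
# expression in the paper's Eq. (5) may differ.
rng = np.random.default_rng(0)
n, d, sigma = 10, 3, 0.1                 # rounds, feature dim, noise scale
Phi = rng.normal(size=(n, d))            # feature matrix Phi_t
theta_star = rng.normal(size=d)
r = Phi @ theta_star + sigma * rng.normal(size=n)

def bound_terms(theta0, sigma0):
    resid = r - Phi @ theta0             # r_t - Phi_t theta_0
    T = sigma0**2 * (Phi @ Phi.T)        # mixture covariance T_t
    quad = resid @ np.linalg.solve(T + sigma**2 * np.eye(n), resid)
    logdet = np.linalg.slogdet(np.eye(n) + T / sigma**2)[1]
    return quad + logdet

# Accurate guess theta_0 = theta*: the residual is just noise, so a small
# sigma_0 keeps the log-determinant (and hence the total) small.
good_small = bound_terms(theta_star, 0.1)
good_large = bound_terms(theta_star, 2.0)
print(good_small, good_large)
```

With the accurate guess $\boldsymbol{\theta}_0 = \boldsymbol{\theta}^*$, the small-$\sigma_0$ value comes out smaller, matching the rebuttal's advice to shrink $\sigma_0$ when the predictions are expected to be close to the rewards.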
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets
Accept (poster)
Summary: This paper investigates offline RL in the imbalanced-dataset setting. The proposed method, which optimizes a parameterized density-ratio weighting (DW) model for each transition, anchors the policy close to the good parts of trajectories in the dataset. The proposed method is shown to outperform state-of-the-art methods on 72 imbalanced datasets with varying imbalance and initial state distributions. Strengths: 1. Overall good writing quality and clarity. 2. The proposed approach is well-motivated by adopting existing approaches. 3. Sufficient technical details for reproducing the experiments. Weaknesses: 1. Limited novelty: fixed conservativeness being defective for imbalanced (heterogeneous) datasets and reweighting or filtering samples/trajectories are not new in offline RL [1, 2, 3]. Moreover, this paper does not connect with the most similar prior work [1] or explain why DW is better than AW, though it claims that AW is empirically prone to overfitting. 2. Lack of details for the dataset setup. 3. Lack of comparison: the imbalanced dataset is also called the noisy dataset [4]. If the authors follow the dataset setup from AW [1], the same as the noisy data setup [4], it is reasonable to include more comparisons with SQL/EQL, which enforce a Q-function weighting on the transitions, and show the edge of DW. 4. Experiments between AW and DW: In Figure 2, AW-M is better than DW-Uniform in three of four tasks. However, the authors claim that they are better than AW. More explanation is needed about why DW is better than AW. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 5. It is reasonable to set $\gamma=1$ to avoid the estimation of $\rho_0$. Have the authors analyzed the role of the discount factor for offline RL [5]? It will be appreciated if any theoretical analysis or/and experiments are provided. 6. The AW-L benchmark is lacking in Figure 2(b). 
The medium temperature is extremely beneficial in IQL (see Figure 2(b)), so why not present the performance of AW-L? 7. In Figure 3, it is counterintuitive that DW is not influenced by diverse initial states and varying trajectories, because DW does not consider the initial state distribution. Could the authors explain this? 8. Thank the authors for providing technical details including their hyperparameter search range for $\lambda_f$ and $\lambda_k$. However, I am a bit concerned about the necessity of the penalties, i.e., Bellman flow and KL-divergence. Could the authors please provide more experiments showing their necessity, and guidance for hyperparameter search on new tasks? [1] Hong Z W, Agrawal P, des Combes R T, et al. Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting[C]//The Eleventh International Conference on Learning Representations. 2023. [2] Brandfonbrener D, Whitney W F, Ranganath R, et al. Quantile filtered imitation learning[J]. arXiv preprint arXiv:2112.00950, 2021. [3] Wu Y, Zhai S, Srivastava N, et al. Uncertainty weighted actor-critic for offline reinforcement learning[J]. arXiv preprint arXiv:2105.08140, 2021. [4] Xu H, Jiang L, Li J, et al. Offline RL with no OOD actions: In-sample learning via implicit value regularization[J]. arXiv preprint arXiv:2303.15810, 2023. [5] Hu H, Yang Y, Zhao Q, et al. On the role of discount factor in offline reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2022: 9072-9098. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See questions and weaknesses. --- Raised the score to 6 after reviewing the response from the authors. --- **Minors**: 1. 
Cited twice: "The classic results in [34] indicate that a policy’s expected return can be expressed in terms of its stationary state-action distribution [34]." 2. The IQL training objective for policy improvement (line 234) is wrong. 3. What is DW-uniform? Does it only use DW? I suggest changing the name of DW-uniform because *uniform* will cause the misunderstanding that DW-uniform is a combined sampling method of DW and Uniform. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the quality and clarity of our paper, and the breadth of our experimental evaluation. We respond to the main questions in the following (W=weakness, Q=questions). > W1: Novelty We're the first to use density ratio optimization to address fixed conservativeness in offline RL (such as CQL and IQL), differing from prior work [1, 2, 3]. [1] requires trajectories to start from similar initial states; otherwise, excessive weights can be assigned to trajectories starting from lucky initial states with higher returns, and data in other good trajectories with unlucky initial states will be underutilized. DW doesn't require trajectories to start from similar initial states, outperforming AW in datasets with diverse initial states in Fig. 3. [2] filters out low-value data, whereas we maximize the expected rewards of the reweighted data distribution. Since their code is not released, we compared DW with its closest counterpart [4], showing that our DW outperforms [4] in Fig. 7 in the attached PDF. [3] weights samples by the variance of Q-values, whereas DW weights data by their rewards (with some constraints). In Fig. 7, UWAC underperforms the other methods, suggesting it may not be an effective approach to address the issues in imbalanced datasets. > W1: Why DW is better than AW > We want to clarify that we already cited and compared with AW [1], showing that DW-AW outperforms AW on various imbalanced datasets (Figs 2 and 3). **DW is better than AW as it can up-weight valuable data in both low- and high-return trajectories**, while AW only does this for high-return ones. This is crucial as valuable state-action pairs might be hidden in low-return trajectories. In Fig. 8 (PDF), AW and PF fail to find the optimal state-action distribution due to missing optimal trajectories. In contrast, DW learns the optimal distribution, highlighting the benefit of DW over trajectory reweighting. 
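To make the density-ratio idea concrete, here is a toy tabular sketch of reward-maximizing reweighting with Bellman-flow and KL penalties, in the spirit of the objective described in this thread. The MDP, the penalty forms, and the coefficients `lam_K`, `lam_F` are all illustrative assumptions, not the paper's exact Eqs. (6)/(10).

```python
import numpy as np

# Toy 2-state, 2-action MDP; the dataset distribution d is skewed toward
# low-reward transitions (an "imbalanced" dataset).
nS, nA = 2, 2
P = np.array([[[0.9, 0.1], [0.1, 0.9]],     # P[s, a, s'] transition kernel
              [[0.8, 0.2], [0.2, 0.8]]])
r = np.array([[0.0, 1.0], [0.2, 0.8]])      # rewards r(s, a)
d = np.array([[0.6, 0.1], [0.2, 0.1]])      # empirical d(s, a), mostly low-reward

lam_K, lam_F = 0.2, 1.0                     # KL / Bellman-flow penalty weights (assumed)
theta = np.zeros((nS, nA))                  # log-weights; w = exp(theta) keeps w > 0

def loss(theta):
    w = np.exp(theta)
    w = w / np.sum(d * w)                   # normalize so d_w = d * w is a distribution
    dw = d * w
    ret = np.sum(dw * r)                    # expected reward under reweighted data
    kl = np.sum(dw * np.log(np.maximum(w, 1e-12)))
    # undiscounted Bellman flow: state marginals must be self-consistent
    inflow = np.einsum('sap,sa->p', P, dw)
    flow = np.sum((dw.sum(axis=1) - inflow) ** 2)
    return -ret + lam_K * kl + lam_F * flow

def grad(theta, eps=1e-5):                  # finite-difference gradient (sketch)
    base = loss(theta)
    g = np.zeros_like(theta)
    for i in np.ndindex(*theta.shape):
        t = theta.copy(); t[i] += eps
        g[i] = (loss(t) - base) / eps
    return g

for _ in range(500):                        # plain gradient descent
    theta -= 0.5 * grad(theta)

w = np.exp(theta); w /= np.sum(d * w)
reward_uniform = float(np.sum(d * r))
reward_dw = float(np.sum(d * w * r))
print("expected reward, uniform vs DW:", reward_uniform, reward_dw)
```

Because the weights are per state-action pair, the optimizer can up-weight high-reward transitions regardless of which trajectory they came from, which is the core of the argument against trajectory-level reweighting above.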
> W2 & 3 and Q2: Lack of details for the dataset setup / Lack of comparison / Lack of AW-L > The dataset setup is in Appendix A.3.3. We’ve added AW-L, SQL, and EQL in Fig. 7. AW-L is close to AW-M in small datasets. Both DW-AW and DW-Uniform outperform SQL and EQL in imbalanced datasets. > W4: Why DW is better than AW in Figure 2 > DW is better than AW since combining DW and AW (i.e., DW-AW) achieves a higher average return or matches the baselines in Figures 2(a) and 2(b). Please let us know if this explanation answers the question. > Q1: Role of the discount factor for offline RL [5] We keep the original discount factors for training offline RL algorithms (CQL, IQL), only setting $\gamma=1$ for DW. The effects of discount factors in offline RL should directly transfer to the combination of DW with offline RL algorithms. > Q3: Why DW isn’t influenced by diverse initial states The reviewer might believe DW works only with a single initial state because Eq. 10 omits the initial state distribution. However, we'd like to kindly clarify that DW's formulation isn't restricted to one initial state but is agnostic to initial states, because it optimizes undiscounted returns instead of discounted ones, as detailed in Sec. 4.1. > Q4: Necessity of Bellman flow and KL penalties. - **The Bellman flow penalty is required** to ensure the learned weights are valid in the MDP (Sec. 4.1). Otherwise, maximizing the objective (Eq. 6) assigns all the weight to the data with the highest reward. - **The KL penalty is required,** according to the theoretical analysis shown in Sec. 5.1 of [6]. Otherwise, high weights can be assigned to actions that lead to next states that are absent in the dataset. > Q4: Guidance for hyperparameter search - We suggest starting with a low **flow penalty $\lambda_F$** since it is lower bounded by zero, and increasing it often leads to performance gain. 
The following table shows the average return shown in Table 2 in the Appendix, where K and F denote $\lambda_K$ and $\lambda_F$, respectively. Increasing F improves the performance in IQL. We found that CQL is less sensitive to F. | K,F | 0.2,0.1 | 0.2,1.0 | 0.2,5.0 | 1.0,0.1 | 1.0,1.0 | 1.0,5.0 | | --- | --- | --- | --- | --- | --- | --- | | CQL | 48.1 | 41.7 | 47.5 | 10.1 | 7.5 | 11.5 | | IQL | 32.0 | 56.0 | 52.4 | 47.2 | 58.5 | 61.5 | - Regarding the **KL penalty weight $\lambda_K$**, we discuss two kinds of datasets: 1. **Low-return dominant:** To ease offline RL's conservatism on data from low-performing policies, we suggest starting with a low $\lambda_K$, since a high $\lambda_K$ limits deviation from the dataset. The above table displays average performance in low-return dominant datasets, showing that a higher $\lambda_K$ can lead to drops in CQL. 2. **High-return dominant:** As the data here is nearly optimal, we suggest beginning with a high $\lambda_K$. The following table shows the performance on the **`halfcheetah-expert-v2`** dataset, showing that a high $\lambda_K$ leads to better performance. | K,F | 0.2,5.0 | 1.0,5.0 | | --- | --- | --- | | CQL | 59.2 | 81.9 | | IQL | 95.2 | 95.1 | While optimal coefficients vary by dataset, we've shown that the same hyperparameters can enhance performance in imbalanced datasets while not harming overall performance in high-return dominant datasets from the original D4RL. > Minors Thank you for your feedback. We'll revise the manuscript accordingly. "DW-Uniform" refers to DW trained with uniform sampling, and "DW-AW" to DW trained with AW sampling, based on the optimization of Eq. 13. [1] Hong et al. “Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting” [2] Brandfonbrener et al. Quantile filtered imitation learning [3] Wu et al. Uncertainty weighted actor-critic for offline reinforcement learning [4] Chen et al. Bail: Best-action imitation learning for batch deep reinforcement learning [5] Hu et al. 
On the role of discount factor in offline reinforcement learning [6] Zhan et al. Offline reinforcement learning with realizability and single-policy concentrability --- Rebuttal Comment 1.1: Title: Response to the Authors' Rebuttal by Reviewer XiFQ Comment: I have carefully reviewed the comments from other reviewers, and the authors have addressed most of my concerns. The additional experiments provided sufficiently demonstrate the advantages of density ratio optimization. However, I still have one minor concern. Regarding my previous concerns about Q1 and Q3, the authors chose to set $\gamma=1$ to bypass the dependence on the initial state distribution, i.e., $\rho_0$. Nonetheless, I believe that setting $\gamma=1$, which is agnostic to the initial state distribution, comes with some sacrifice. Could the authors discuss the limitations associated with this choice? Overall, I appreciate the detailed response and have decided to raise my rating to 6. --- Reply to Comment 1.1.1: Comment: We are glad that our response addresses the reviewer's concerns. **Regarding the choice to set $\gamma=1$:** This choice might not be aligned with the task objective when short-term rewards are preferable over long-term ones. For instance, in stock trading scenarios, one may prefer short-term revenues (rewards) over long-term ones, since one may not be able to afford large losses before obtaining a huge revenue in the far future. We thank the reviewer again for the appreciation of our response and hope our follow-up response addresses the question.
Summary: The authors propose a new method for weighting samples from a dataset in order to mimic sampling from a dataset collected by a better policy than the behavioral one. This method can be integrated into any algorithm and combined with previous approaches. Strengths: The method seems not that hard to implement, shouldn't introduce notable computational overhead, and can be applied with an arbitrary algorithm. On the imbalanced locomotion datasets generated by the authors, the proposed approach helps to achieve much better results than other approaches. Benchmarking over a high number of datasets is another strength. Weaknesses: While the approach helps to improve performance when D4RL locomotion datasets are mixed with the random datasets, it seems like there is no advantage when applied to the original mixed datasets (medium-replay, medium-expert, full-replay), or the performance even drops on those if we look at Tables 4 and 5. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the averaged scores across domains in Tables 4 and 5? As mentioned in "Weaknesses", it seems like the proposed method does not benefit a lot (if it benefits at all) when applied to the original D4RL datasets. Could you please add those numbers to see what the situation is on average? I mean average scores over the locomotion, antmaze, kitchen, adroit, and generated datasets, as well as averaged scores over all of the domains. How does the method affect the algorithms' training time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Besides the uncertainty about whether the approach helps when the dataset is not that heavily imbalanced, there is another limitation in the design. 
If I understand the approach correctly, it can't be applied to datasets which require trajectory stitching to complete the task. An example of such a task is AntMaze, and the authors' approach mostly decreases performance when applied to those datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our extensive evaluation and the simplicity of implementation. In the following, we answer each of the reviewer’s questions. > Could you please add those numbers to see what the situation is on average? > **Answer:** **Advantage in the original mixed datasets:** DW performs similarly to AW because the original mixed datasets (i.e., medium-expert, medium-replay, full-replay) in D4RL already have a large enough proportion of good data. The mixed datasets used in our experiment (Figure 1) only have an average normalized trajectory return of 0.08, but the original mixed datasets in D4RL have 0.5, which is far higher than ours. As the average return of a dataset indicates the performance of the behavior policy, this means staying close to the behavior policy won’t hurt the performance much, which explains why reweighting doesn’t lead to much performance gain. **Average scores across domains:** Table 7 in the attached PDF shows the average scores for the original D4RL MuJoCo, antmaze, kitchen, and adroit datasets, as well as our imbalanced MuJoCo datasets. We also include the average scores across all domains. The numbers inside the parentheses denote the relative score compared with Uniform. **DW matches the baselines in the original D4RL datasets (D4RL Adroit, Antmaze, Kitchen, and Gym-MuJoCo).** - **For CQL**, both DW+AW and DW+Uniform match Uniform’s and AW’s performance, with average relative performance of +3.45 (DW+AW) and +1.47 (DW+Uniform). This shows that in terms of average performance, DW+AW and DW+Uniform even slightly improve over Uniform in the original D4RL datasets. - **For IQL**, DW+AW and DW+Uniform also perform close to the baselines, except for DW+AW in Antmaze and Kitchen. The performance drop of DW+AW is likely because AW is worse than Uniform by 16.9 and 5.9 points in these two domains. > How does the method affect the algorithms' training time? 
> **Answer:** We provide the running time of each method below. DW only adds about 10 minutes to the training time, not causing excessive overhead. | | Uniform | AW | PF | DW-AW | DW-Uniform | | --- | --- | --- | --- | --- | --- | | CQL | 58mins | 60mins | 60mins | 70mins | 67mins | | IQL | 40mins | 43mins | 43mins | 60mins | 57mins | > If I understand the approach correctly, it can't be applied to datasets which require trajectory stitching to complete the task. > **Answer:** We would like to clarify that DW can be applied to tasks requiring trajectory stitching and illustrate why in the following. **DW doesn’t hurt in AntMaze.** The following table presents the average return of CQL and IQL with varying sampling/reweighting methods on AntMaze datasets. DW+Uniform matches the performance of Uniform, showing that it doesn’t hurt the performance in tasks requiring trajectory stitching. The reason why applying DW doesn’t improve over Uniform is likely that offline RL algorithms already do trajectory stitching well, so there is not much room for improving the sampling distribution. This is likely because the state-action distribution in AntMaze is not skewed toward low-performing trajectories; as such, regularized offline RL algorithms won’t suffer in such cases. | | Uniform | AW | PF | DW+AW (ours) | DW+Uniform (ours) | | --- | --- | --- | --- | --- | --- | | CQL | 15.2 | 23.4 | 17.7 | 22.1 | 19.1 | | IQL | 63.8 | 47.0 | 0.0 | 45.5 | 63.2 | **A four-room experiment shows that DW can stitch trajectories.** To assess DW's trajectory stitching ability, we conducted an experiment in a didactic four-room environment [2]. - **Experiment setup:** See Figure 8 in the attached PDF for an illustration of the environment. The agent starts from an orange initial state and traverses non-red cells, gaining a +1 reward at the green goal and zero elsewhere. To test trajectory stitching, a suboptimal dataset with 1000 trajectories was generated, in which no single trajectory is optimal. 
Due to the absence of optimal trajectories, up-weighting high-return trajectories (i.e., AW and PF) won’t produce a state-action distribution matching the optimal policy. Thus, if a method can generate a state-action distribution matching that of the optimal policy, it indicates that the method is able to stitch trajectories, because it can identify optimal state-action pairs leading to the goal even though those pairs are not observed in the same trajectory in the dataset. - **Results:** Figures 8(a), 8(b), and 8(e) display the state-action distributions of the behavior policy, the optimal policy, and DW; the number above each plot is the expected return under the state-action distribution. We see that both AW (Figure 8(c)) and PF (Figure 8(d)) fail to match the state-action distribution of the optimal policy in Figure 8(b), hence leading to suboptimal performance. In contrast, DW successfully approximates the optimal policy's state-action distribution, confirming that DW can identify optimal state-action pairs observed in different suboptimal trajectories and stitch them into optimal trajectories. **DW improves BC in Ant U-maze.** To see if the learned importance weights mirror the data distribution of a better policy than the behavior policy, we evaluate DW on top of BC. This way, we can exclude the trajectory stitching performed by offline RL algorithms like CQL and IQL, since BC cannot do trajectory stitching. To improve beyond the behavior policy, one needs to change the data distribution to make it reflect a better policy. Our results show that DW achieves better performance than uniform sampling (65 vs. 45), indicating that DW is applicable in tasks requiring trajectory stitching. [1] Hong Z W, Agrawal P, des Combes R T, et al. Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting, ICLR'23 [2] Lee, Jongmin, et al. "Optidice: Offline policy optimization via stationary distribution correction estimation." 
ICML'21 --- Rebuttal Comment 1.1: Comment: Thank you very much for answering my questions and conducting additional experiments. I'm increasing the rating in my review. --- Reply to Comment 1.1.1: Comment: We are glad that our answers address the reviewer's questions. We will include these new experimental results in the updated manuscript.
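The stitching argument above can be reduced to a toy example: two suboptimal trajectories whose good halves jointly cover the optimal behavior. Trajectory-level weighting cannot separate good from bad transitions within a trajectory, while per-transition weighting can. The reward-softmax rule below is a simple stand-in for illustration, not the paper's actual DW objective.

```python
import numpy as np

# Two suboptimal trajectories: each contains one good half and one bad half,
# so neither trajectory is optimal on its own, but their good transitions
# together cover the optimal behavior (the "stitching" situation).
traj1 = np.array([1.0, 1.0, 0.0, 0.0])   # good first half, per-transition rewards
traj2 = np.array([0.0, 0.0, 1.0, 1.0])   # good second half
rewards = np.concatenate([traj1, traj2])
returns = np.array([traj1.sum(), traj2.sum()])  # both equal 2.0

# Trajectory-level weighting (AW-style): equal returns -> equal weights,
# so good and bad transitions are sampled at the same rate.
traj_w = np.exp(returns) / np.exp(returns).sum()
aw_sample_reward = float(traj_w[0] * traj1.mean() + traj_w[1] * traj2.mean())

# Transition-level weighting (DW-style stand-in): up-weights the good
# transitions inside *both* trajectories.
tw = np.exp(rewards / 0.5)
tw /= tw.sum()
dw_sample_reward = float(tw @ rewards)

print(aw_sample_reward, dw_sample_reward)
```

Trajectory weighting leaves the average sampled reward at 0.5 regardless of temperature, whereas per-transition weighting concentrates sampling on the good halves of both trajectories.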
Summary: This paper proposes a method to improve offline reinforcement learning (RL) performance on imbalanced datasets, where most of the data comes from low-performing policies and only a few from high-performing ones. The method, called density-ratio weighting (DW), optimizes the importance sampling weights to emulate sampling data from a data distribution generated by a nearly optimal policy. The paper shows that DW can enhance the performance of state-of-the-art offline RL algorithms on 72 imbalanced datasets with varying types of imbalance. Strengths: - It proposes a novel method, DW, that optimizes the importance sampling weights to emulate sampling data from a data distribution generated by a nearly optimal policy, rather than from the original dataset. - It demonstrates overall performance gains over SoTA offline RL algorithms and other baselines on imbalanced datasets with varying types and degrees of imbalance. - Its writing style is clear and easy to follow. Weaknesses: - The theoretical analysis is relatively weak. - There is a slight lack of discussion of the type of offline RL algorithms that filter out low-performing trajectories. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Though obtaining a performance bound is quite difficult since describing the imbalanced dataset is hard, is it possible to prove that the optimization with the proposed loss function will converge? For example, I guess one could derive an iteration equation for $w$ as a Bellman equation and prove the operator is a contraction mapping. - In Section 6 related work, the paper discusses offline imitation learning approaches. It claims that these approaches "assume prior knowledge of which data points are generated by experts". However, there are some offline imitation or RL methods which only use the reward signals in the dataset and filter out low-quality data to improve the final performance, such as BAIL [1] and COIL [2]. 
I am curious about what the advantages of DW are compared to them, especially BAIL, which does not need trajectory information. Also, it would be better if you could try BAIL on your imbalanced datasets and show the results. [1] Chen, Xinyue, et al. "Bail: Best-action imitation learning for batch deep reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 18353-18363. [2] Liu, Minghuan, et al. "Curriculum offline imitating learning." Advances in Neural Information Processing Systems 34 (2021): 6266-6277. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the novelty of our work, our extensive evaluation, and the clarity of our writing. We address the reviewer’s questions in the following. > Though obtaining a performance bound is quite difficult as describing the imbalanced dataset is hard, is it possible to prove the optimization with the proposed loss function will converge? For example, I guess, derive an iteration equation of $w$ as the Bellman equation, and prove the operator is a contraction mapping. > **Answer:** Note that since we don’t use fixed-point iteration (i.e., we do not use a target network for $w$) like value iteration to learn $w$, the update operator doesn’t have to be a contraction mapping for convergence. Regarding convergence, the optimization objective of $w$ is a convex optimization problem over $w(s,a)$. We use gradient descent to optimize $w$. According to the standard results in Boyd et al. [1], gradient descent converges as long as the optimization domain of $w$ is bounded, the gradient of the loss, $\nabla_w L(w)$, is bounded, and the learning rates are square-summable but not summable ($\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$). All these conditions are met in our case, so the optimization of $w$ will converge too. [1] Boyd, Stephen P., and Lieven Vandenberghe. *Convex optimization*. Cambridge University Press, 2004. > In Section 6 related work, the paper discusses offline imitation learning approaches. It claims that these approaches "assume prior knowledge of which data points are generated by experts". However, there are some offline imitation or RL methods which only use the reward signals in the dataset and filter out low-quality data to improve the final performance, such as BAIL [1] and COIL [2]. I am curious about what the advantages of DW are compared to them, especially for BAIL which does not need trajectory information. Also, it will be better if you can try BAIL on your imbalanced dataset and show the results. 
> **Answer:** Thank you for your suggestion! We’ve added BAIL as a baseline and presented the results in Figure 7 in the attached PDF. The results show that DW performs better than BAIL. We are currently running COIL as well and will update the manuscript accordingly when we have the results. Here is the BAIL codebase used in our experiment: https://github.com/lanyavik/BAIL. --- Rebuttal Comment 1.1: Comment: I am very glad to see your new experiment results. They have well proven the effectiveness of your algorithm against other baselines. For the convergence part, I recommend including or mentioning this property in the main paper or the appendix for clarity. Now I am also willing to raise my rating. Thanks for your kind reply. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the appreciation of our new results and for raising the rating. We will include these new baselines and the analysis of convergence in the updated manuscript.
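As a minimal illustration of the step-size conditions cited from Boyd et al. in the rebuttal above, the sketch below runs gradient descent on a one-dimensional convex surrogate loss (not the paper's objective) with diminishing step sizes $\alpha_t = 1/(t+1)$, which are square-summable but not summable.

```python
# Gradient descent on a convex surrogate f(w) = (w - w_star)^2 with
# diminishing step sizes alpha_t = 1/(t+1): square-summable but not
# summable, the standard condition for (sub)gradient descent to converge
# on a convex problem with bounded gradients. Illustrative only.
w, w_star = 0.0, 3.0
for t in range(5000):
    alpha = 1.0 / (t + 1)
    grad = 2.0 * (w - w_star)            # gradient of (w - w_star)^2
    w -= alpha * grad
print(w)                                  # settles at the minimizer w_star
```

With a constant step size larger than 1 the iterate would diverge on this loss (the error is multiplied by $|1 - 2\alpha| > 1$), which is why the decaying schedule matters.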
Summary: The authors consider an offline RL problem where the offline dataset has a small number of high-reward episodes and a larger pool of low-reward episodes. They argue this setting is fairly common in reality, since generating high-reward episodes is often higher effort. Considering offline RL objective functions that are structured like "maximize reward while staying close to the behavior policy", the argument is that staying close to the behavior policy causes optimization to be too conservative / biased too much towards the large pool of low-reward episodes. Borrowing the DiCE techniques from off-policy evaluation, the proposed method is to learn importance weights to bias the "stay close to data" objective towards the high-reward (expert) trajectories over the support of the original offline dataset. The learning of these weights follows the standard DiCE-style techniques, although in practice the importance weights are given a KL constraint to stay close to the original dataset to avoid collapsing the distribution to a few rare examples, and the importance weights are also applied to the reward-maximizing terms in offline RL (even though theoretically this should not be required). This is then compared with two other approaches that bias the offline distribution towards higher return episodes - sampling based on the advantage of the offline episode, or sampling the top K% of the offline data. The best results are found by combining advantage weighting with density ratio weighting. Strengths: I have some objections about claiming that offline RL methods have not considered the downsides of constraining to poor return trajectories. (More on this later). But in general, I agree that many datasets often have a smaller number of good examples, importance weights to adjust the conservatism penalty make sense, and using methods from the OPE literature is a reasonable way to learn those importance weights. 
The evaluation is also extensive enough for me to trust the results. Weaknesses: There have been methods that attempt to constrain the policy to the support of the offline dataset, rather than the behavior of the offline dataset. BEAR and BRAC come to mind due to using Kernel MMD to measure policy divergence (and they ought to be cited). In practice these methods have underperformed CQL and TD3+BC so I think it is okay to benchmark against just CQL, but it is still a notable omission. I also suspect that the KL penalty is one of the more important hyperparams, would need to be tuned separately per dataset (as the acceptable level of deviation will change depending on dataset composition), and there is not much guidance on how to set this parameter. This weakness is common to many offline RL methods though (i.e. the weight of the CQL penalty also needs to be tuned). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could the authors comment on how the importance weights $w(s,a)$ evolve over time? To me the most dangerous outcome is that $w(s,a)$ learning becomes unstable, or training overfits too heavily due to upsampling a smaller section of the data. Is there any way to compare how the weighting of $w(s,a)$ compares to methods like top K%, in how much they consider different examples in the data? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Seems fine. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and for appreciating our extensive evaluation. The following addresses the reviewer’s questions. > There have been methods that attempt to constrain the policy to the support of the offline dataset, rather than the behavior of the offline dataset. BEAR and BRAC come to mind due to using Kernel MMD to measure policy divergence (and they ought to be cited). In practice these methods have underperformed CQL and TD3+BC so I think it is okay to benchmark against just CQL, but it is still a notable omission. > **Answer:** Thank you for your suggestion! We will cite BEAR and BRAC in the updated manuscript. Meanwhile, we added BEAR as a baseline in Figure 8 in the attached PDF. We ran BEAR based on this public codebase (https://github.com/takuseno/d3rlpy). The results show that both DW-Uniform and DW-AW with CQL and IQL outperform BEAR by a large margin on imbalanced datasets. This indicates that a support constraint implemented with kernel MMD may not impose the constraint effectively, hence leading to worse performance than CQL with uniform sampling. > Could the authors comment on how the importance weights $w(s,a)$ evolve over time? To me the most dangerous outcome is that $w(s,a)$ learning becomes unstable, or training overfits too heavily due to upsampling a smaller section of the data. Is there any way to compare how the weighting of $w(s,a)$ compares to methods like top K%, in how much they consider different examples in the data? > **Answer:** - **Does $w(s,a)$ learning become unstable?** We observed that the importance weights $w(s,a)$ converge in the early stage of training (~10,000 gradient steps, which is 1% of training steps). Figure 9 presents the data’s weights at different training epochs, indicating that the weights converge after ten epochs. This means that the weights learned by DW are stable during training rather than oscillating over time. 
- **Does training overfit a small section of the dataset?** Since DW outperforms uniform sampling by a large margin, overfitting may not be a severe issue. Even on the small dataset (Figure 2b) that is prone to overfitting issues, DW outperforms uniform sampling, which suggests that overfitting is less of a concern. **DW can assign finer-grained weights per transition than AW and top K% (i.e., PF in our paper).** We visualize the weights generated under dataset `walker2d-random-medium-small-v2`, where DW outperforms the baselines. See Figure 9 for the plot of the evolution of weights over time and the weights of AW and PF. The color at each index denotes the weight of a transition at a particular dataset index (horizontal axis). The left partition (i.e., left of the red line) stores the data from the medium policy, and the rest is data from the random policy. - **For AW**, we see that only one block of transitions receives weights. This means that all the weights concentrate on the same trajectory. As such, AW is prone to overfitting a few trajectories in the dataset. - **For top K%**, each trajectory from the medium policy is weighted uniformly with the same weight. This may not suffer from overfitting since it is close to uniform sampling, but it may not improve much over uniform sampling. - **Our DW**, in contrast, can assign finer-grained weights per transition, avoiding bias toward a single trajectory and improving the data distribution. This may explain why DW outperforms the baselines in this case. Moreover, we see that the weights of DW converge after epoch 10 (each epoch consists of 1000 gradient steps), which means that the weight evolution is stable. > I also suspect that the KL penalty is one of the more important hyperparams, would need to be tuned separately per dataset (as the acceptable level of deviation will change depending on dataset composition), and there is not much guidance on how to set this parameter. 
This weakness is common to many offline RL methods though (i.e. the weight of the CQL penalty also needs to be tuned). > **Answer:** - **Per-dataset KL penalty tuning:** We found that per-dataset tuning is not required. DW outperforms the baselines on imbalanced datasets using the same KL penalty weight across all the datasets. This means that, beyond the hyperparameters of the base offline RL algorithm, we did not observe DW needing any additional tuning. - **Guidance on setting the KL penalty weight $\lambda_K$:** We suggest using a high KL penalty ($\lambda_K$) for datasets with limited coverage of the state-action space, like the expert or medium datasets in D4RL. This prevents undesired weight assignments on state-action pairs that might lead to out-of-distribution next states. If such data get high weights, training can suffer from error accumulation and poor policy performance in offline RL [1]. However, setting a high $\lambda_K$ does not hinder policy improvement by DW, as datasets with limited coverage also offer fewer opportunities for the theoretically-possible maximal policy improvement, due to limited room for stitching trajectories covering different regions of the state space. [1] Liu, Yao, et al. "Provably good batch off-policy reinforcement learning without great exploration." *Advances in Neural Information Processing Systems* 33 (2020): 1264-1274. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I will leave my score unchanged.
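As a toy illustration of the trajectory-level baselines compared in the rebuttal above, the sketch below computes AW-style (exponential-in-return) and top-K% (PF) weights over a handful of trajectory returns; the returns, temperature, and K are made-up values, and DW's learned per-transition weights are not reproduced since they require solving the full DiCE-style optimization:

```python
import numpy as np

# Toy trajectory returns: three low-return and two high-return trajectories.
returns = np.array([1.0, 1.2, 0.9, 9.5, 10.0])

# AW-style weighting: exponential in (advantage-like) return; the
# temperature of 1.0 is an illustrative choice.
aw = np.exp(returns / 1.0)
aw /= aw.sum()

# Top-K% filtering (PF): uniform weight on the best 40% of trajectories.
k = max(1, int(0.4 * len(returns)))
pf = np.zeros_like(returns)
pf[np.argsort(returns)[-k:]] = 1.0 / k

# AW concentrates most of its mass on the single best trajectory, while PF
# weights every kept trajectory identically -- matching the behavior
# described for the walker2d weight visualization above.
print(aw.round(3), pf)
```

Running this shows AW's distribution peaking sharply at the highest-return trajectory, whereas PF stays uniform over the retained slice.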
Rebuttal 1: Rebuttal: We thank all the reviewers for their suggestions and want to highlight **five new baselines (Figure 7)** and **two new analyses (Figures 8 and 9)** in the attached PDF file. The following is a summary of our new experiments.  1. We added comparisons to five new prior works, BEAR [1] (Reviewer WKvr), BAIL [2] (Reviewer yys4), SQL and EQL [3], and UWAC [4] (Reviewer XiFQ) in Figure 7 in the attached PDF. The results showed that **our DW-Uniform and DW-AW with CQL and IQL outperform all the newly added baselines on 72 imbalanced datasets.** 2. **Figure 8 (Reviewers `qcFR` and `XiFQ`):** We presented a new analysis of why our DW performs better than AW and PF (i.e., top-K% filtering) by showing that DW can stitch trajectories in a didactic four-room environment, whereas AW and PF struggle to do so. 3. **Figure 9 (Reviewer `WKvr`):** We presented the evolution of the distribution of the learned importance weights over training epochs, showing that the training of DW is stable. We list the public baseline implementations used in our experiments in the following: - BEAR: [github.com/takuseno/d3rlpy](github.com/takuseno/d3rlpy) - BAIL: [github.com/lanyavik/BAIL](github.com/lanyavik/BAIL) - SQL/EQL: [github.com/ryanxhr/IVR](github.com/ryanxhr/IVR) - UWAC: [github.com/apple/ml-uwac](github.com/apple/ml-uwac) [1] Kumar, Aviral, et al. "Stabilizing off-policy q-learning via bootstrapping error reduction." *Advances in Neural Information Processing Systems* 32 (2019). [2] Chen, Xinyue, et al. "Bail: Best-action imitation learning for batch deep reinforcement learning." *Advances in Neural Information Processing Systems* 33 (2020): 18353-18363. [3] Xu, Haoran, et al. "Offline RL with no OOD actions: In-sample learning via implicit value regularization." arXiv preprint arXiv:2303.15810, 2023. [4] Wu, Yue, et al. "Uncertainty weighted actor-critic for offline reinforcement learning." arXiv preprint arXiv:2105.08140, 2021. 
Pdf: /pdf/540891541d4694d1e080600f54903d95dc39a814.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Accept (poster)
Summary: This paper studies the fair PCA problem. The authors provide a novel formulation of fair PCA based on the "Null It Out" approach and propose the corresponding criterion called PAFO-learnability. The authors also present a streaming algorithm for fair PCA, which has low memory complexity. Experimental results verify the scalability of the proposed method. Strengths: 1. The paper is well-written and mostly clear. 2. The studied problem is interesting and important. Streaming algorithms are very useful in the limited-memory setting. Weaknesses: 1. Assumption 6.1 is a bit strong to me. The paper could show that when D_s is a common distribution, such as a sub-Gaussian distribution, the generated data satisfies Assumption 6.1. 2. I'm worried about the tightness of the theorems in Section 6. The theoretical results show that Alg 1 and Alg 2 may require very large block sizes, which reduces the contribution of saving memory. 3. The experiments part could present some quantitative results rather than just showing the images. The experiments should verify that the proposed FNPM algorithm satisfies PAFO-learnability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why does this paper propose a new formulation of fair PCA rather than using an existing one? What's the advantage of the "Null It Out" formulation? 2. Do existing fair PCA algorithms satisfy PAFO-learnability? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable review and questions. Here, we respond to each point raised by the reviewer: > **W1. Strong assumption** Please refer to our general response. > **W2. Tightness of the theorems & large block size** We understand your concern about the tightness of the theorems, particularly as they assert that one needs a large block size. However, we want to emphasize that **the memory complexity is independent of the size of the blocks.** Indeed, our algorithm only requires O(d max(m, k)) *storage*, as all other computations are handled in a running-average-type manner, with the primary computational cost being matrix-vector multiplications. It is important to note that the dependencies on epsilon, delta, and the singular value gap in our results align with those found in previous works on the (vanilla, non-fair) noisy power method [8,9], which also relied on Bernstein concentrations. In practice, using large block sizes helps mitigate large variances and is a common theoretical and empirical approach for the noisy power method. > **W3. Experiments verifying that the FNPM algorithm is PAFO-learnable** Thank you for suggesting experiments to verify the PAFO-learnability of our algorithm. We have done two sets of additional experiments that address this point. **Due to space constraints, we report only partial results, but we emphasize that we will report the full results in our revised manuscript.** First, we conducted experiments on UCI datasets (COMPAS, German Credit, & Adult Income) to compare our algorithm's *quantitative* performance against previous approaches. We evaluated our approach based on metrics such as explained variance, distributional fairness (measured by MMD distance), downstream task accuracy, and downstream task fairness (measured by demographic parity, or DP). The experimental protocols were adopted from prior fair PCA literature [2,3] to ensure a fair comparison. 
The results demonstrate that our alternative formulation of fair PCA and its streaming variant exhibit comparable performance in both runtime and overall performance. Second, we conducted synthetic experiments varying the block sizes (and thus the overall sample complexity). With a fixed confidence level ($1-\delta = 0.9$), we reported the maximum possible resulting $\varepsilon_1$ (for optimality) and $\varepsilon_2$ (for fairness constraint). To disentangle the effect of these two error terms, we fix one of either $\boldsymbol{V}$ or $\boldsymbol{N}$ and train the other one with our NPM-based algorithm. As confirmed by Figure 1 in the additional supplementary pdf file, blocks of sizes $\approx O(\varepsilon_1^{-2} + \varepsilon_2^{-2})$ are sufficient to achieve PAFO-learnability with errors $\varepsilon_1$ and $\varepsilon_2$ with probability $0.9$. Our obtained sample complexity dependencies seem to match the experiments. > **Q1. Regarding the significance of our new formulation** Please refer to our global response. > **Q2. Do existing fair PCA algorithms satisfy PAFO-Learnability?** We appreciate your comment and bringing up this interesting and important point. We agree that investigating whether existing algorithms satisfy PAFO-learnability is an interesting future topic, but it is a nontrivial task. However, we want to reiterate that none of the existing fair PCA algorithms satisfy PAFO-learnability for *streaming* PCA (Definition 5.1), where the crucial part is whether the memory limitation can be satisfied; this is discussed in Section 5.2. Based on our understanding, we suspect that the fair PCA algorithms proposed in [2,3] may be PAFO-learnable in the *non-streaming* setting (with $\Omega(d^2)$ memory). 
The reason for excluding [1] (SDP-based approach) is that, empirically, the algorithm proposed by [1] consistently yielded significantly sub-optimal performance regarding explained variance and downstream task accuracy compared to other fair PCA algorithms [2,3]. For further details and discussions, please refer to the experimental sections of [2,3]. In conclusion, we believe that our current approach does indeed address the memory limitation, even with a large block size (which is unavoidable for noisy power method-type algorithms), and that our theoretical guarantees will hold even with a relaxed assumption. We are open to addressing any further questions or concerns the reviewer may have. We hope that our response, including the additional experiments verifying PAFO-learnability, has properly addressed the reviewer’s concerns, and we hope that the reviewer would reconsider the score. Thank you again for your insightful reviews and comments. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the rebuttal, which addresses part of my concerns. However, I still have some additional questions: 1. It is not clear to me whether Alg. 1 is online or offline. 2. The iteration number U in Alg. 1 does not seem to appear in the theorems in Section 6. How does it affect the performance of the algorithm? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their attention to our work and the insightful questions they have raised. Below, we provide responses to each of the questions. > **Question 1: It is not clear whether Alg. 1 is online or offline.** Our Alg. 1 is an online algorithm that takes in data points one by one and performs learning in $O(dm)$ space complexity to output $N$, an approximate unfair subspace (see Section 3.2). Although we have shortened our pseudocode due to space constraints, the main estimators used in Alg. 1 (see Eqn. 
(5) in our draft) are updated in an online manner using only vector-vector additions and vector-matrix multiplications. We included the complete pseudocode of Alg.1 in Appendix B, where the reviewer can find a more clarified version of our Alg. 1. In the offline setting, as we can fully compute $\mathbf{Q}$ and $\mathbf{f}$ from the given offline data, the unfair subspace $\mathbf{N}$ can be computed via SVD of $[\mathbf{Q} | \mathbf{f}]$, as discussed in Section 3.2. Depending on the problem setting, one can still transform an offline setting into an online setting by going through the data points one by one. This would help alleviate memory limitations or other issues, as we’ve done in our CelebA dataset experiment. > **Question 2: Effect of the iteration number $U$ on the theorems in Section 6 and the performance of Alg. 1** Thank you for pointing this out. Our final sample complexities (Theorem 6.3) are derived by multiplying the iteration number by the batch size for each phase, which is why there isn’t an explicit mention of the iteration number $U$ in the theorem statements; for the proof we have chosen a suitable $U$, and it is included in our Theorem 6.3. Precisely, Theorem 6.1 and 6.2 characterize the sufficient block size for ensuring small noise terms in the noisy power iterations, and Lemma 6.1 (which is taken from [8]) universally characterizes the iteration number that ensures a small final error, given that the iteration errors are small. Let us further elaborate on the effect of the block size $b$ and the iteration number $U$ on the convergence rate of the noisy power method. 
With a closer look at the convergence result by Hardt & Price [8], especially their Lemma 2.2 and Theorem 2.3 (Lemma 2.3 and Theorem 2.4 of their arXiv version, resp.), the distance between the noisy power method iterates and the ground truth decays roughly as $\varepsilon + C^U$, where $\varepsilon$ scales inversely with the square root of the block size $\sqrt{b}$, $U$ is the iteration number, and $C$ is a problem-dependent quantity that depends on the singular value gap and $\varepsilon$. Thus, for a fixed block size $b$, our choice of $U$ is a minimal choice (and thus “tight”) such that the second term becomes negligible compared to the first term, resulting in the error $\varepsilon \sim \frac{1}{\sqrt{b}}$. In other words, even if $U$ is increased far beyond our choice, the final error will still be $\varepsilon$, i.e., a much higher number of iterations does not lead to an error less than $\varepsilon$. Again, we emphasize that our choice of $U$ is *sufficient* to ensure the final error is small. In practice, a smaller number of iterations may be sufficient for good performance. Indeed, for our CelebA dataset experiments and additional UCI/synthetic dataset experiments, we have observed that a moderate number of iterations (10~20) is enough. For completeness, lastly, we provide here the precise form of the iteration number in Alg. 1 (for the notation, please refer to Assumption 6.2): $$ U = O\left(\frac{\nu_m}{\nu_m-\nu_{m+1}}\log\frac{d}{\epsilon\delta}\right) = O\left(\frac{K_{m,\nu}}{\Delta_{m,\nu}}\log\frac{d}{\epsilon\delta}\right) $$ We hope these responses resolve the reviewer’s concerns, and we are happy to answer any more questions or concerns that the reviewer may have.
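The noisy power method dynamics described in the reply above (geometric decay down to a noise floor set by the per-iteration noise and the singular-value gap) can be sketched in a few lines of numpy; the matrix sizes, spectrum, noise scale, and iteration count below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, U = 50, 3, 50

# Synthetic covariance with eigenvalues (10, 9, 8, 1, ..., 1): a clear gap
# between the top-k eigenvalues and the rest.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
A = Q @ np.diag(np.concatenate([[10.0, 9.0, 8.0], np.ones(d - k)])) @ Q.T
truth = Q[:, :k]                            # ground-truth top-k eigenspace

V, _ = np.linalg.qr(rng.normal(size=(d, k)))
for _ in range(U):
    noise = 1e-3 * rng.normal(size=(d, k))  # per-iteration noise term
    V, _ = np.linalg.qr(A @ V + noise)      # noisy power step + orthonormalize

# Sine of the largest principal angle between span(V) and the truth:
# it decays geometrically until it hits a floor of roughly noise / gap.
err = np.linalg.norm(V - truth @ (truth.T @ V), 2)
print(err)
```

Increasing `U` beyond this point leaves the error at the noise floor, matching the point made above that more iterations do not push the error below $\varepsilon$.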
Summary: This paper proposes a new approach for Fair PCA algorithms that is scalable and fair at the same time. The main contributions of the paper are as follows: - A new formulation of fair PCA based on the "Null It Out" approach. The goal is to maximize explained variance while nullifying the subspace spanned by the mean difference and leading eigenvectors of the covariance difference between groups. This formulation leads to a closed-form solution and avoids infeasibility issues in previous covariance matching approaches. Their approach removes the unfair subspaces using a noisy power method. - A new notion of learnability for fair PCA called Probably Approximately Fair and Optimal (PAFO)-learnability. This provides a statistical framework to analyze some of the fair PCA algorithms. - A new setting called fair streaming PCA, which addresses practical memory limitations. The authors propose an algorithm called Fair Noisy Power Method (FNPM) which only requires O(dk) memory, where d is data dimension and k is the target PCA dimension. - The empirical study of this method using a vision task is intuitive. Strengths: I believe the strengths of this paper can be summarized in three main key points: - Theoretical rigor. The paper provides the first statistical framework for analyzing fair PCA in terms of PAFO-learnability. This gives theoretical guarantees on the solution quality of algorithms like FNPM. Previous works mainly lacked such a framework. - They propose FNPM for this problem, which is quite simple to implement, building on standard tools like cumulative averaging and the noisy power method. This makes it easy to apply in practice. - Their approach is scalable and intuitive. They have validated that with a vision task to demonstrate the scalability. Also, the formulation based on nullifying the "unfair" subspace gives flexibility in how much fairness to impose by choosing m. This can be tuned based on the use case. 
The analysis provides insights into how properties like the singular value gaps and mean difference norm affect the sample complexity and solution quality. Weaknesses: There are some main concerns I have regarding this paper: - The results are limited to binary sensitive attributes and two groups. Extending the approach to handle more complex, multi-group scenarios with sensitive feature interactions would strengthen the paper. Some of the main previous approaches can easily handle this problem as well [A,B]. - Certain assumptions made, like those in Assumptions 6.1 and 6.2, are quite strong and restrictive. Relaxing these assumptions, or testing how sensitive the approach is to their violation, would improve the robustness. - The experiments focus on how much FNPM removes features visually related to sensitive attributes. Evaluating the fairness of solutions in a more quantitative, metric-based fashion would provide a more objective assessment. This quantity is defined differently in various approaches. - Lack of comparison with the other branch of fair PCA methods. Although the goal of equalizing losses is different from what is presented here, it would be beneficial to better understand how the different methods compare. - Comparing the effects of this fair PCA on downstream tasks like classification would be beneficial to better understand the effects of a fair PCA approach. [A] Morgenstern, Jamie, et al. "Fair dimensionality reduction and iterative rounding for sdps." arXiv 2019 (2019). [B] Kamani, Mohammad Mahdi, et al. "Efficient fair principal component analysis." Machine Learning (2022): 1-32. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the previous part Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some of the limitations I discussed in the weakness section are not clearly discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review, for recognizing the significance of our contributions, and for providing valuable feedback. Let us address the raised points below: > **W1. Dealing with multiple sensitive groups** We thank the reviewer for bringing up the issue of focusing on two sensitive groups in our paper. While the simplicity of the exposition led us to consider two groups initially, we acknowledge the importance of extending our work to handle multiple sensitive groups. We agree with the reviewer that this direction would significantly enhance the strength and impact of our paper. We first emphasize that the definition of fair PCA considered in references [A, B] mentioned by the reviewer is entirely different from ours: their definition equalizes the reconstruction losses across groups, while ours is in terms of fair representation learning. Thus, their approaches are not readily applicable to our definition of fair PCA as a straightforward extension to multiple groups. Among the works that tackle fair PCA from the fair representation learning perspective, [1,3] explicitly consider dealing with multiple groups in fair PCA (fair representation). However, the approach of [1] is not scalable to higher dimensions (as also discussed in [2,3]), and [3]’s approach only deals with mean-matching, is not memory-efficient, and has no theoretical guarantees. For completeness, we outline how our formulation can be extended to handle multiple sensitive groups. In the streaming setting, we propose to sample the sensitive attribute from a multinomial distribution over the $G$ sensitive groups, where each group corresponds to a separate data distribution. The fair PCA formulation would then involve nullifying the projected mean differences and the top-m eigenvectors of the projected covariance differences for all possible pairwise group comparisons. 
To address theoretical analyses, we assume that $G = O(1)$ (or even up to $G = o(\sqrt{d/m})$, as further explained later). For the mean difference, we construct a $d \times (G-1)$ matrix whose $j$-th column is the (estimated) mean difference between group $j$ and $j+1$. For the covariance difference, we construct $\binom{G}{2}$ number of $d \times m$ matrices, each corresponding to top-m eigenvectors of the (estimated) covariance difference between two groups for all possible pairwise comparisons. These matrices are to be nullified, forming a $d \times O(mG^2)$ matrix $\boldsymbol{N}$ used in Phase 2. Estimating $\boldsymbol{N}$ in Phase 1 can be achieved in $o(d^2)$ space, even when $G= o(\sqrt{d/m})$ as mentioned at the beginning. With minor adjustments, we believe our algorithm can effectively satisfy PAFO-learnability for multiple groups with similar assumptions. > **W2. Strong Assumptions** Please refer to the general response 2. > **W3-5. Lack of quantitative evaluation, downstream tasks, and comparison with the other branch of fair PCA** We appreciate the reviewer's suggestion and acknowledge the importance of quantitative evaluation in providing an objective assessment. As showcased in **Table 1** of our supplementary attachment, we have already conducted experiments on UCI datasets (German Credit, Adult Income, COMPAS) to compare the performance of our algorithm against previous approaches. We evaluated our approach based on metrics such as explained variance, PCA fairness (measured by MMD distance), downstream task accuracy, and downstream task fairness (measured by demographic parity). The experimental protocols were adopted from prior fair PCA literature [2,3] to ensure a fair comparison. Additionally, we performed additional experiments comparing our approach with the algorithm presented in reference [11], which focuses on equalizing reconstruction loss across groups. 
The results (shown in Table 1) demonstrate that our alternative formulation of fair PCA and its streaming variant exhibit comparable performance. Moreover, as already reported in [2,3], in general, the fair PCA algorithms that focus on fair representation learning, including ours, outperform [11] in both PCA fairness (in the context of fair representation) and downstream task fairness. For further information, we refer the reviewer to Appendix A of [2], where the conceptual difference between the two different fair PCA formulations is well explained. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the responses. I will keep my score. --- Reply to Comment 1.1.1: Comment: We are also grateful for your thorough examination of our response. If you have any remaining questions or additional remarks, please let us know without hesitation.
Summary: This research paper focuses on the fair principal component analysis (PCA) problem using streaming data while requiring low memory. The authors introduce a new formulation for fair PCA, which involves optimizing the vanilla PCA objective with a linear "fair" constraint. In the oracle setting, where the true parameter is known, the problem has a closed-form solution. The authors define the concept of "PAFO-learnable" to quantify the sample complexity of learning a semi-orthogonal matrix V, which is an approximately optimal solution to the oracle problem. They present streaming algorithms and prove that such an algorithm scheme has finite sample complexity according to the proposed PAFO-learnable notion. The main idea is to utilize the noisy power method framework to estimate the unfair subspace (the linear constraint) and subsequently employ this estimation in the NPM (Noisy Power Method) to estimate fair PCA. The effectiveness of the proposed method is evaluated using real-world data. Strengths: Overall, the contribution of the paper is well-motivated and aligns with the ongoing development of methods for problems involving fair constraints. The paper is well-written, and the investigation is quite extensive. However, I must admit that I am not familiar with recent developments in fairness in machine learning and cannot provide an assessment of the significance of the new formulation Equation (1) and the concept of learnability (Definition 4.2), although they appear reasonable and interesting. Weaknesses: I have a few minor comments: * L68: It might be more appropriate to use the term "semi-orthogonal matrix" instead of "orthogonal" to distinguish between O(d) and St(d,k). * L69, L116, L171: QR decomposition typically yields two outputs, namely the (semi-)orthogonal part and the upper triangular part. * L164: Regarding F_d, it differs from the one defined in Definition 4.2, where learnability is defined for a different quantity denoted as F_d. 
* L262: Once again, F_{d,m,k} is inconsistent with the definition provided in Definition 4.2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing a detailed and insightful review of our paper. We are glad that the reviewer recognized the significance of our work and appreciated our contributions. We provide our answers below. > **S1. Recent developments in fairness in ML** The field of fairness has seen considerable progress in recent years in proposing sensible new definitions of fairness and new algorithms under fairness constraints. However, scalability remains a crucial challenge in the latter part (so-called algorithmic fairness), particularly for fair PCA. Indeed, only recently has the issue of scalability begun to be studied, focusing on fair clustering [14,15,16]. One of our primary focuses was addressing the memory limitation associated with fair PCA by proposing its streaming variant. Especially as PCA is one of the standard tools used for high-dimensional data analysis (see our Introduction section) and previous approaches to fair PCA [1,2,3] are still not so scalable (see our Experiments section), we believe that making fair PCA further scalable is timely and important. > **S2. Regarding the significance of our new formulation Equation (1) and the concept of learnability (Definition 4.2)** Again, we appreciate the reviewer's keen interest in our paper. We would like to take a moment to delve deeper into the significance of our work. Our alternative formulation of fair PCA offers two crucial advantages: **feasibility** and **scalability**. To begin, our formulation of fair PCA as a constrained optimization problem is always feasible, so there is no need for further relaxation; this is sometimes not the case for the previous formulations under certain group-wise distributions [1,2,3]. This, in turn, has paved the way for the rigorous establishment of the novel concept of statistical sample complexity in terms of PAFO-learnability. Indeed, this achievement stands as a first in the fair PCA literature.
Moreover, our formulation is scalable, particularly in the context of streaming scenarios, allowing us to develop the FNPM algorithm for fair streaming PCA. For a more detailed elaboration on this, we kindly direct the reviewer to our general response. > **W1. Minor comments on the writing** We genuinely appreciate the reviewer's comments on the writing and assure you that we will incorporate all the suggested fixes in our revised manuscript. To be more specific, we will reflect the following points: * L68: We will clarify by using the term ‘semi-orthogonal’ rectangular matrices, as per the reviewer’s suggestion. * QR decomposition (L69, L116, L171): We would like to remark that the symbol “$QR(\cdot)$” itself is quite widely used in (streaming) PCA literature, which is generally used as an orthogonal projection operator onto the Stiefel manifold (for the case of vectors (i.e., $k=1$), $QR(v)$ is defined to be the same as the normalization $v/\|v\|$, an orthogonal projection onto the unit sphere). Nevertheless, we will elaborate more clearly on this in our revised manuscript. * L164 & L262: We greatly appreciate the reviewer’s comment on our notation. We correct the notation such that for fixed integers $d, k, m$, we define PAFO-learnability for a collection $\mathcal{F}_d \subset \mathcal{P}_d \times \mathcal{P}_d \times (0,1)$. We will make this clear in our upcoming manuscript.
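As an aside on the $QR(\cdot)$ operator discussed in the rebuttal above, a minimal sketch (ours, not the authors' code) of its use as an orthonormalization onto the Stiefel manifold:

```python
# Minimal sketch (ours, not the authors' code) of the QR(.) operator as used
# in the streaming PCA literature: keep only the orthonormal factor, i.e.,
# an orthogonal projection onto the Stiefel manifold St(d, k).
import numpy as np

def QR(V):
    """Return the semi-orthogonal factor of V; for a single column (k = 1),
    this reduces to the normalization v / ||v||, up to sign."""
    Q, _ = np.linalg.qr(V)
    return Q

v = np.array([[3.0], [4.0]])
q = QR(v)                                           # column proportional to [0.6, 0.8]
M = np.random.default_rng(0).standard_normal((5, 2))
Q = QR(M)                                           # Q^T Q = I_2: a point on St(5, 2)
```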
Summary: The paper defines a new notion for fair PCA. It is assumed that the data comes from a mixture of (two) distributions, and the goal is to find a subspace such that the solution subspace is perpendicular to 1) the difference vector of the group means, and 2) the top-m eigenvectors of the difference of the group covariances. This problem can be solved simply by computing PCA on a projected space. This paper looks for an algorithm that 1) with probability 1-delta, reports an approximate solution, and 2) works in the streaming setting, where one receives samples from the mixture of distributions and uses O(kd) space. The algorithm uses the samples to estimate the parameters of the two distributions, thereby approximating the orthogonality constraint, and then runs standard SVD algorithms. Strengths: - The paper defines a new notion for fair PCA. Weaknesses: - The paper does not discuss why this particular notion captures fairness. - The amount of technical novelty in the paper is limited. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
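To make the oracle computation summarized in this review concrete, here is a hedged sketch (our own simplification; the function name, the tie-breaking by absolute eigenvalue, and the equal group weighting are our assumptions, not the paper's exact procedure): project out the unfair directions, then run vanilla PCA on the projected covariance.

```python
# Hedged sketch (ours) of the oracle "null it out" fair PCA summarized above.
# Assumptions: known group means/covariances, top-m eigenvectors of the
# covariance difference chosen by absolute eigenvalue, equal group weights.
import numpy as np

def oracle_fair_pca(mu0, mu1, Sigma0, Sigma1, k, m):
    d = mu0.shape[0]
    # Unfair subspace: mean-difference vector plus top-m eigenvectors of the
    # covariance difference.
    diff = (mu0 - mu1).reshape(d, 1)
    evals, evecs = np.linalg.eigh(Sigma0 - Sigma1)
    top_m = evecs[:, np.argsort(-np.abs(evals))[:m]]
    U, _ = np.linalg.qr(np.hstack([diff, top_m]))
    P = np.eye(d) - U @ U.T        # projector onto the fair (orthogonal) complement
    # Vanilla PCA on the projected overall covariance.
    _, V = np.linalg.eigh(P @ (0.5 * (Sigma0 + Sigma1)) @ P)
    return V[:, -k:]               # top-k fair principal components

mu0, mu1 = np.array([1.0, 0.0, 0.0]), np.zeros(3)
Sigma0, Sigma1 = np.diag([1.0, 2.0, 1.0]), np.eye(3)
# Unfair directions here are e_1 (mean gap) and e_2 (covariance gap), so the
# returned component is aligned with the third coordinate axis, up to sign.
V = oracle_fair_pca(mu0, mu1, Sigma0, Sigma1, k=1, m=1)
```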
Rebuttal 1: Rebuttal: Thank you for your review and comments. We assure the reviewer in advance that all the answers and discussions provided here will be incorporated into our revised manuscript. Below, we respond to each point raised in your review: > **W1. Why does this particular notion capture fairness?** To the best of our understanding, your question can be divided into two parts: 1) ‘Why does our “Null it out” formulation of fair PCA capture fairness?’ 2) ‘Why is our PAFO learnability an appropriate statistical framework for fair PCA?’ First, we emphasize that our intuitive notion of fairness in PCA, where the projected distributions are approximately matched, has already been well studied [1,2,3]. Including our paper, this line of work builds upon the foundation of fair representation learning [4], a seminal work in fairness that recently received the test-of-time award at ICML 2023. The main idea is to learn a low-dimensional representation that retains as much information as possible about the high-dimensional data while ensuring fairness, such that any vanilla downstream task learner can achieve fairness without explicit regularization. This matches the intuition that if the appropriate (conditional) distributions match, any vanilla supervised learner would be fair. Second, we elaborate on why our proposed PAFO learnability aligns well with fair PCA. We emphasize that no existing literature on fair PCA for fair representation learning has provided statistical frameworks to analyze the problem setting. Our new formulation defines optimality and fairness criteria. Optimality refers to how suboptimal our solution's explained variance is compared to the optimal fair solution, which is always well-defined, a key characteristic of our new formulation. Fairness is defined based on how much of the solution remains in the "unfair" directions. > **W2-1.
Amount of technical novelty** We believe that our paper contributes significant technical novelty in making and analyzing scalable (memory-efficient) fair algorithms. While these points are summarized in our contributions (pg. 2) and throughout the paper, we reiterate them for clarity: > **W2-2. The novelty of our proposed new streaming setting.** The problem of fair PCA for fair representation learning has been well-studied [1,2,3]. However, previous approaches cannot handle memory limitations, where the algorithm is restricted to using only $o(d^2)$ (or $O(dk)$) space. To handle this, we introduce the new fair streaming PCA setting, where data points arrive sequentially from an "unfair" distribution, and the algorithm must learn under memory constraints. Even without fairness, such a streaming setting has received considerable interest from the stat/ML community due to its potential in processing data streams and dealing with memory limitation, especially in streaming PCA [5,6,7,8,9,10]. **We thoroughly discussed in Section 5.2 and Appendix C why existing approaches/formulations of fair PCA [1,2,3] can*not* be trivially extended to this streaming setting.** > **W2-3. The novelty of our statistical framework.** None of the previous works on fair PCA [1,2,3] had a statistical framework in which the performance of their algorithms could be rigorously shown. By statistical framework, we mean a learnability framework (similar to PAC-learning) in which the number of samples sufficient to solve the problem of fair PCA can be formalized. Indeed, due to several approximations that [1,2,3] had done to make their algorithm and/or optimization problem feasible, it is hard to see exactly which part of the approximation causes the bottleneck in the sample complexity. **As discussed in Section 3**, by considering a new “Null It Out” formulation of fair PCA, we could overcome the infeasibility issues and allow us to develop the PAFO-learnability framework. > **W2-4. 
Technical difficulties in the analysis.** None of the previous works on fair PCA [1,2,3] provide statistical guarantees for their algorithms. While our algorithm is based on the well-known noisy power method [8,9], **the analysis is significantly more challenging because there are two sources of randomness: group membership described by Bernoulli random variables and sampling from a group-wise distribution.** We had to modify the given random variables to apply the existing Bernstein concentration results during Phase 1's convergence proof. > **W2-5. Relevance of our setting to real-life: Experimental results.** In Section 7, we experimentally demonstrate the significance of memory limitations when performing a fair PCA on real-world datasets (full resolution, full colored CelebA dataset) using previously proposed algorithms; none of them could run on this dataset with our local machine. By transforming the problem into a streaming setting and applying our algorithm, we show that such memory limitation can be circumvented, making fair PCA *scalable*. Additional quantitative results on UCI datasets, parts of which we report in Table 1 of our attached supplementary pdf, demonstrated that our new formulation achieves similar performance as previous algorithms, showing that our formulation and algorithm are a strict improvement over the previous ones. Furthermore, the simplicity of our algorithm enhances its applicability to real-world datasets; this is in contrast to some of the previous fair PCA algorithms [1,2] as they require external libraries such as SDP solver [1] or manifold optimization package [2]. In conclusion, we believe that our paper offers substantial technical novelty in both algorithm design and theoretical analysis, as well as a new problem setting of fair streaming PCA. We are open to addressing any further questions or concerns the reviewer may have. 
We hope that our response has properly addressed the reviewer’s concerns and that the reviewer would reconsider the score. Thank you again for your helpful reviews and comments. --- Rebuttal Comment 1.1: Comment: Thank you for providing a response. I am retaining my score.
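As context for the streaming discussion in the rebuttal above, a minimal block-streaming power-method sketch (our simplification of the general idea behind NPM-style streaming PCA; this is plain streaming PCA, without the fairness constraint or the two-phase structure of the authors' FNPM):

```python
# Minimal block-streaming power-method sketch (ours, not the authors' FNPM):
# each incoming block contributes a covariance-times-V product, so the full
# d x d covariance matrix is never formed or stored.
import numpy as np

def streaming_power_method(sample_block, d, k, num_blocks, seed=0):
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(num_blocks):
        X = sample_block()             # (B, d) block from the data stream
        Y = X.T @ (X @ V) / len(X)     # (X^T X / B) V without forming X^T X
        V, _ = np.linalg.qr(Y)         # re-orthonormalize: the QR(.) step
    return V

# Toy stream with covariance diag(10, 5, 1, 1, 1); the top-2 principal
# subspace is spanned by the first two coordinate axes.
rng = np.random.default_rng(1)
scales = np.sqrt(np.array([10.0, 5.0, 1.0, 1.0, 1.0]))
V = streaming_power_method(lambda: rng.standard_normal((500, 5)) * scales,
                           d=5, k=2, num_blocks=50)
```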
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for providing detailed and insightful reviews/comments/questions about our paper. We assure the reviewers in advance that all the answers and discussions provided here will be incorporated into our revised manuscript. We are encouraged to see that the reviewers recognize the relevance of our newly proposed problem setting in fairness (SSmx, VcC6, 6AMc, GuCQ), *scalable yet simple* algorithm for our setting (6AMc), theoretical rigor (6AMc), clarity of our exposition (VcC6, 6AMc, GuCQ), and extensive investigation into the effectiveness of our algorithm (VcC6, 6AMc). We first provide our responses to three commonly raised questions: ### **Regarding the significance of our new formulation (VcC6, GuCQ)** We start by elaborating on the justification of our new formulation of fair PCA (Eqn. (1)). Intuitively, the PCA projection that nullifies the mean difference and top eigenvectors of the covariance difference would result in an orthogonal representation from which any linear (or stronger) adversaries have difficulty in distinguishing between the sensitive groups. We acknowledge that this notion of fair PCA has been previously discussed in [1,3], as have recent advances in guarding protected attributes of word embeddings via the “Null It Out” approach [12,13]. Despite the existence of similar prior formulations, we clarify why we needed to propose another new formulation. Our new formulation, compared to the prior ones, yields two main benefits. One is that our formulation is always feasible, unlike previous formulations [1,2,3]; this allows us to define the notion of learnability for fair PCA rigorously. Another is that it makes the problem amenable to a memory-efficient streaming algorithm for fair PCA. On a more technical side, our formulation is quite similar to that of [3], but the key difference lies in the order in which the nullification is applied.
[3] applies mean difference nullification first, then it applies the covariance nullification on the subspace resulting after the mean difference nullification. On the other hand, we apply both simultaneously. In Section 5.2 and Appendix C.2, we provide extensive discussions on why previous approaches to fair PCA [1,2,3] CANNOT be easily adapted to our memory-limited & streaming setting. Especially for the rigorous definition of *learnability* in the context of fair PCA, we start by remarking that *no prior work has approached fair PCA from this perspective*, emphasizing the importance of establishing a solid statistical foundation. We firmly believe that defining learnability in this context will shed light on the number of samples required to achieve a desired output, providing valuable insights for researchers and practitioners in the field, as the PAC-learning framework has done since its introduction. ### **Strong assumptions (6AMc, GuCQ)** While our assumptions may seem stringent, they are essential to highlight that no previous works on fair PCA (for fair representation learning) [1,2,3] have provided statistical guarantees. Consequently, these assumptions were indispensable in achieving rigorous results for our streaming fair PCA formulation. Assumption 6.1 enables us to utilize the simpler version of vector/matrix Bernstein concentrations, which require bounded random variables. This was also the case in various streaming PCA literature [5,6,7] for similar reasons, where using Bernstein concentrations was necessary. Indeed, we are sure that our sample complexity results will be retained with relaxed (but similar) assumptions by using appropriate variants of the concentration inequalities (e.g., [10]). Regarding Assumption 6.2, it is crucial to assume a singular value gap for the convergence of the noisy power method. Without this assumption, the eigenvectors lose uniqueness, hindering the overall convergence. 
### **Experiments (6AMc, GuCQ)** We have also attached a supplementary pdf containing additional experimental results that support our rebuttal. Figure 1 shows synthetic results that verify our sample complexity result w.r.t. block sizes. Also, Table 2 showcases the quantitative results of comparing several fair PCA methods on the UCI Adult Income dataset, showing the efficacy of our proposed FNPM. Due to space constraints, we report only partial results, but we will report the full results in the upcoming revised manuscript. Lastly, we put all the relevant references for our rebuttal below. --- [1] Olfat & Aswani, “Convex Formulations for Fair Principal Component Analysis.” AAAI 2019. [2] Lee et al., “Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel Manifold.” AAAI 2022. [3] Kleindessner et al., “Efficient fair PCA for fair representation learning.” AISTATS 2023. [4] Zemel et al., “Learning Fair Representations.” ICML 2013. [5] Bienstock et al., “Robust Streaming PCA.” NeurIPS 2022. [6] Jain et al., “Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja’s Algorithm.” COLT 2016. [7] Huang et al., “Streaming k-PCA: Efficient guarantees for Oja’s algorithm, beyond rank-one updates.” COLT 2021. [8] Hardt & Price, “The Noisy Power Method: A Meta Algorithm with Applications.” NIPS 2014. [9] ​​Balcan et al., “An Improved Gap-Dependency Analysis of the Noisy Power Method.” COLT 2016. [10] C. Jin et al., “A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm.” arXiv 2019. [11] Samadi et al., “The Price of Fair PCA: One Extra Dimension.” NeurIPS 2018. [12] Ravfogel et al., “Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection.” ACL 2020. [13] Ravfogel et al., “Linear Adversarial Concept Erasure.” ICML 2022. [14] Backurs​​ et al., “Scalable Fair Clustering.” ICML 2019. [15] Ziko et al., “Variational Fair Clustering.” AAAI 2021. 
[16] ​​Wang et al., “Scalable Spectral Clustering with Group Fairness Constraints.” AISTATS 2023. Pdf: /pdf/17a8ace54ceb30767960ef5892950b281966823f.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Bounce: Reliable High-Dimensional Bayesian Optimization for Combinatorial and Mixed Spaces
Accept (poster)
Summary: Most real-world optimization problems contain a mix of continuous, categorical, binary, and ordinal variables and might be high-dimensional. Although there are some methods that have investigated this problem, they are not always reliable with regard to finding a satisfactory optimum. In order to tackle this problem, the authors propose Bounce (Bayesian Optimization Using iNcreasingly high-dimensional Combinatorial and continuous Embeddings). This work seems to be a natural extension of BAxUS, which used a similar embedding strategy as for Bounce but only for continuous variables. It uses a novel trust region management system to grow or shrink the trust regions. The proposed method is tested on a representative range of test problems and compared with state-of-the-art benchmarks and shows convincing results. Strengths: - Most real-world optimization problems have mixed spaces and are high-dimensional for which vanilla Bayesian optimization methods are not suited, this work thus addresses an important problem. - The proposed algorithm is tested on a broad range of problems and compared with state-of-the-art benchmarks and has a strong performance. - The authors provide proof that the algorithm converges in the limit of infinite iterations, making it a reliable method. - The paper is well-written and well-structured. Weaknesses: - In general, the paper is very complete and well-written and contains a very complete related work section. However, the paper does expect the reader to have a significant amount of knowledge of previous work. Especially about BAxUS and papers using TR management strategies. Perhaps the authors could provide a little bit more information and illustrations in this work to make it more easily accessible. For instance, provide an illustration of the binning procedures and/or about the TR management. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Although I do understand that this is a typical question in BO paper reviews, for most problems, benchmarks are performed for 200 iterations. Would you say that this is enough for the dimensionalities of the benchmark problems? For instance, for the 125-D MaxSAT, you would say you probably need a lot more data points for the surrogate model to model the objective function effectively. What are your thoughts on this and do you think some of the benchmarks can be better than Bounce at a higher budget? - It is mentioned in Appendix C that the original implementations for COMBO, BODi and Casmopolitan are used. And that you use the same setting as in those works, what kind of settings are these exactly? Do these approaches use the same kernels for the GP models for example? Could this influence the performance on the benchmarks? - Have the authors contacted Oh et al. regarding the bug in COMBO? If it is possible, it would of course be ideal if results using a bug-less version can be provided. - I really like the results shown in Figure 5 regarding the efficacy of batch acquisition. In general, I like that the # batch evaluations are used on the x-axis as this directly shows what is good to use for users who are optimizing parallel processes. However, I'm also interested in how these plots look as a function of function evaluations. My intuition would say that b=1 is always better, as in this way the model is as informed as it can for the next iteration. Just out of interest, could you elaborate on this here or could you add a plot to the appendix that shows these results as a function of iterations? - As a suggestion to your citations regarding chemical engineering and materials discovery. There also is a range of works regarding the optimization of lab equipment for sample analysis. 
See for instance [Hagan et al.](https://pubs.acs.org/doi/full/10.1021/ac049146x?casa_token=dnIitFW7lO4AAAAA%3Ado8InveDdcPw3TwtOVSQuRvM5NQhSFEo3M1jmpdpEHvRbzA0f1jYTJ2_bloYb7he8-Ofb8u96oMXvCvO), [Boelrijk et al.](https://scholar.google.nl/citations?view_op=view_citation&hl=nl&user=1z-BBwkAAAAJ&citation_for_view=1z-BBwkAAAAJ:9yKSN-GCB0IC). These problems can typically contain mixed spaces and many variables. Some small textual remarks: - Could it be that d_0 and d_{init} are interchangeably used? For instance lines 2 and 3 of Algorithm 1. - Line 90, 'sequencies' should be 'sequences' or was this the originally proposed name by the BODi paper? - Lines 277-278, as well should be as well as? - Typo in Ass. 5. in Appendix Section a. One should be once? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors describe the societal impact of their work. The authors do not describe the limitations of their work. Perhaps the authors could dedicate some sentences to this, for example, would the algorithm handle a noisy setting? Or does this violate the binning procedure to some extent? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
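Regarding the reviewer's request for an illustration of the binning procedure: as a rough, hedged sketch (ours, restricted to continuous variables; the actual Bounce embedding also handles categorical, binary, and ordinal variables and grows the target dimensionality over time), a BAxUS-style signed-bin embedding looks like:

```python
# Rough sketch (ours) of a BAxUS-style signed-bin embedding for continuous
# variables only; not the authors' implementation.
import numpy as np

def random_bin_embedding(input_dim, target_dim, rng):
    """Assign each input dimension to one target dimension ("bin") with a
    random sign; returns the (target_dim x input_dim) embedding matrix S."""
    bins = rng.integers(0, target_dim, size=input_dim)
    signs = rng.choice([-1.0, 1.0], size=input_dim)
    S = np.zeros((target_dim, input_dim))
    S[bins, np.arange(input_dim)] = signs
    return S

def project_up(S, z):
    """Map a point z in the low-dimensional search space to the input space."""
    return S.T @ z

rng = np.random.default_rng(0)
S = random_bin_embedding(input_dim=10, target_dim=3, rng=rng)
x = project_up(S, np.ones(3))   # every input coordinate receives +1 or -1
```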
Rebuttal 1: Rebuttal: We appreciate the reviewer's remarks and will make sure they are appropriately addressed. > Perhaps the authors could provide a little bit more information and illustrations in this work to make it more easily accessible. For instance, provide an illustration of the binning procedures and/or about the TR management. Thank you for the suggestion! We will add a figure that explains the binning procedure and embedding to the camera-ready to facilitate the understanding of this crucial part of the algorithm. > Although I do understand that this is a typical question in BO paper reviews, for most problems, benchmarks are performed for 200 iterations. Would you say that this is enough for the dimensionalities of the benchmark problems? When running our experiments, we saw Bounce converging on most benchmarks after 200 function evaluations. A lower evaluation budget is common for discrete BO papers due to the increased cost of optimizing the acquisition function. For instance, the authors of BODi [1] worked with the same evaluation budget. However, we agree that running a higher evaluation budget would be interesting for the Labs and MaxSAT125 benchmarks where Bounce has not yet converged. Due to the high cost of these experiments, we will add these analyses to the camera-ready version. > It is mentioned in Appendix C that the original implementations for COMBO, BODi and Casmopolitan are used. [...] Do these approaches use the same kernels for the GP models for example? Could this influence the performance on the benchmarks? We agree that the choice of the kernel is a key aspect of these methods and is instrumental to their performance. Thus we use the exact settings and implementations of the respective authors. The core contributions of BODi and COMBO are their kernel constructions based on dictionary and diffusion kernels, respectively. We do not think replacing the kernel for these methods is reasonable. 
Note that Bounce uses the CoCaBO [2] kernel construction, also used in CASMOPOLITAN [3]. We will clarify this in the appendix. > Have the authors contacted Oh et al. regarding the bug in COMBO? If it is possible, it would of course be ideal if results using a bug-less version can be provided. We recently contacted the COMBO authors and are currently waiting for a response. We fixed the bug in COMBO independently, and the PDF uploaded with this rebuttal contains an updated figure for the PestControl benchmark, showing the original version of COMBO and a fixed version (marked with “(fixed)”). We observe that fixing this bug improves COMBO’s performance on this benchmark considerably so that COMBO outperforms BODi on the modified benchmark. As expected, fixing the bug makes COMBO agnostic towards the modification of the benchmark. Note that Bounce still outperforms all other algorithms. PestControl is the only benchmark in our experiments where COMBO is only affected by this bug (see Appendix D.2). We will update our discussion to reflect these new findings. > I really like the results shown in Figure 5 regarding the efficacy of batch acquisition. [...] However, I'm also interested in how these plots look as a function of function evaluations. [...] Just out of interest, could you elaborate on this here or could you add a plot to the appendix that shows these results as a function of iterations? A figure with the number of function evaluations on the x-axis for the batched version of Bounce is an interesting addition. As expected, large batches perform ‘worse’ when plotting against the number of function evaluations. There is almost no difference between small batches, highlighting the efficacy of our batching strategy. We added this figure to the PDF in the global response and will add it to the appendix in the paper. > As a suggestion to your citations regarding chemical engineering and materials discovery. [...]See for instance Hagan et al., Boelrijk et al. 
We agree on the relevance of this application area and will gladly add Hagan et al. and Boelrijk et al. as references for the practical applications. > Could it be that $d_0$ and $d_{\textrm{init}}$ are interchangeably used? For instance lines 2 and 3 of Algorithm 1. $d_0$ and $d_{\textrm{init}}$ are the same variable. We will revise this and keep only one of them. > Line 90, 'sequencies' should be 'sequences' or was this the originally proposed name by the BODi paper? The term ‘sequencies’ was introduced in the BODi paper [1] and describes the number of changes from 0 to 1 and vice versa in a bit vector — This is equivalent to ‘frequency’ over a continuous domain. We will clarify this. > The authors describe the societal impact of their work. The authors do not describe the limitations of their work. Perhaps the authors could dedicate some sentences to this, for example, would the algorithm handle a noisy setting? Or does this violate the binning procedure to some extent? While we discuss limitations in the discussion, we will expand this section, as suggested. We did not consider noisy function evaluations. In future work, we would like to explore the performance of a variant of Bounce on noisy problems where we replace EI or qEI with noisy EI or noisy qEI [4]. With these changes, we expect Bounce to also perform well on noisy problems. [1] Bayesian Optimization over High-Dimensional Combinatorial Spaces via Dictionary-based Embeddings, AISTATS, 2023 \ [2] Bayesian Optimisation over Multiple Continuous and Categorical Inputs, ICML, 2020 \ [3] Think Global and Act Local: Bayesian Optimisation for Categorical and Mixed Search Spaces, ICML, 2021 \ [4] BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization, NeurIPS, 2020 --- Rebuttal Comment 1.1: Comment: I have carefully read all reviewer comments and their respective rebuttals and I'd like to thank the authors for their hard work and effort. 
I am satisfied with the author's rebuttal and will keep my score as it is.
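For reference, the 'sequency' notion clarified in the rebuttal above (the number of 0-to-1 and 1-to-0 changes in a bit vector, the discrete analogue of frequency) is simple to state in code (our illustration):

```python
def sequency(bits):
    """Number of changes from 0 to 1 and vice versa in a bit vector; the
    discrete analogue of 'frequency' over a continuous domain."""
    return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))

sequency([0, 1, 1, 0, 1])  # -> 3: the vector changes value three times
```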
Summary: The paper proposes a new BO method, namely Bounce, to tackle the problem of BO with combinatorial and mixed spaces. The key idea is based on the trust region approach (as with TuRBO) and an adaptive search space (as with BAxUS), but applied to combinatorial and mixed variables. The proposed method also supports batch evaluation. Experiments are conducted on various synthetic and real-world problems to evaluate the efficacy of the proposed method. Strengths: + The paper’s writing is generally clear and easy to understand + The paper tackles an interesting problem, which is to solve BO problems with categorical and mixed variables + The methodology developed in the paper seems to be sound to me. + The experiments are conducted with various benchmark optimization problems (synthetic and real-world) Weaknesses: + The proposed method seems to be largely an extension of existing methods, TuRBO and BAxUS – normally this is alright if the results are impressive, but there are some issues with the experiments which I describe in more detail in the bullets below. + There is not much insight on why the proposed method, Bounce, works. There is a theoretical analysis in the appendix (Theorem 1) that shows the consistency of Bounce. However, I think this theorem doesn’t have much meaning. It assumes that the search domain is finite and the objective function is noiseless, and it states that Bounce will find a global optimum with probability 1 when the number of samples N goes to infinity. But for a finite domain and noiseless observations, wouldn't any algorithm be able to find the global optimum? + One of the contributions of the paper is to conduct an in-depth analysis of two state-of-the-art algorithms for combinatorial BO, COMBO [45] and BODi [17]; however, I don’t really find this to be interesting. Unless I missed it, it seems to be discussed in Section 4.6? But this is largely empirical and offers little insight.
+ The experiments seem to lack some well-known baselines for categorical and mixed search spaces, for example, SMAC, TPE, and some new BO methods like the work “Bayesian Optimization over Hybrid Spaces” by Deshwal et al (ICML 2021), and the work “Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization” by Daulton et al (NeurIPS 2022). + Also, in the experiments, the number of iterations is quite small for very high-dimensional problems. The number of iterations evaluated in all problems is just 200, which is too small. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address my comments in the previous section (section Weakness) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I don’t find any dedicated section describing the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for the reviewer's valuable input and will ensure their remarks are duly considered. > There is not much insight on why the proposed method, Bounce, works. There is a theoretical analysis in the appendix (Theorem 1) that shows the consistency of Bounce. However, I think this theorem doesn’t have much meaning. It assumes that the search domain is finite and the objective function is noiseless, and it states that Bounce will find a global optimum with probability 1 as the number of samples N goes to infinity. But for a finite domain and noiseless observations, wouldn’t any algorithm be able to find the global optimum? It is a crucial property of an algorithm to be consistent. We politely disagree with the assessment that any algorithm can find the optimum with an unlimited evaluation budget and a finite domain. In particular, most algorithms in the subspace BO literature (REMBO [1], HeSBO [2], ALEBO [3], …) lack this property because they make a random bet on the embedding, and they are unable to recover from choosing a wrong embedding. See, for example, the discussion in [3]. While we agree that regret bounds would be an interesting addition, no regret bounds have been proven for this line of work. Nevertheless, our design choices are well motivated, and the fact that Bounce first optimizes over a low-dimensional subspace and eventually reverts to optimization in the input space is crucial to its performance. > One of the contributions of the paper is to conduct an in-depth analysis of two state-of-the-art algorithms for combinatorial BO, COMBO [45] and BODi [17]; however, I don’t really find this to be interesting. Unless I missed it, it seems to be discussed in Section 4.6? But this is largely empirical and offers little insight. Regarding the deeper analysis of BODi and COMBO, we would like to refer the reviewer to Appendix D, which discusses the causes of the performance degradation of those algorithms in detail.
In the camera-ready version of the manuscript, we will refer to this appendix more prominently in the main text; we had to move this detailed analysis to the appendix due to space constraints. > The experiments seem to lack some well-known baselines for categorical and mixed search spaces, for example, SMAC, TPE, and some newer BO methods such as “Bayesian Optimization over Hybrid Spaces” by Deshwal et al. (ICML 2021) and “Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization” by Daulton et al. (NeurIPS 2022). We evaluated RDUCB [4], a recent additive method proposed at this year’s ICML, as an additional algorithm. We further added SMAC [5] to the comparison. Similarly to the CASMOPOLITAN [6] paper, we see SMAC performing poorly. We did not compare against probabilistic reparametrization (PR) [7] as we see PR falling into a different category, i.e., a meta-algorithm to optimize the acquisition function. However, we see the potential for improvement by combining Bounce with PR and would like to explore this in the future. > Also, in the experiments, the number of iterations is quite small for such very high-dimensional problems. The number of iterations evaluated in all problems is just 200, which is too small. We use the same evaluation budget as BODi [8], which also tackles high-dimensional problems, and Bounce converges after 200 iterations on most benchmarks. We want to emphasize that sample efficiency and the ability to find good solutions quickly are important properties of Bounce. To study the behavior for larger sample budgets, we increased the number of function evaluations to 500 for the Labs and ClusterExpansion benchmarks in the figures we submit with the rebuttal. We observe that Bounce continues to outperform the other algorithms and do not find a qualitative difference. > I don’t find any dedicated section describing the limitations of the work.
For the camera-ready version, we will discuss limitations more prominently in the discussion section. [1] Bayesian optimization in a billion dimensions via random embeddings, JAIR, 2016 \ [2] A Framework for Bayesian Optimization in Embedded Subspaces, ICML, 2019 \ [3] Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization, NeurIPS, 2020 \ [4] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation? ICML, 2023 \ [5] Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011 \ [6] Think Global and Act Local: Bayesian Optimisation for Categorical and Mixed Search Spaces, ICML, 2021 \ [7] Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization, NeurIPS, 2022 \ [8] Bayesian Optimization over High-Dimensional Combinatorial Spaces via Dictionary-based Embeddings, AISTATS, 2023 --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for your response. The response has addressed many of my concerns, so I decided to increase the score to 6. The reason I don't increase it more is that I still have concerns regarding the very small number of iterations - normally I would expect more for high-dimensional problems.
Summary: The paper proposes a new Bayesian optimization algorithm for combinatorial and mixed search spaces containing input variables of different types (continuous, binary, categorical, ordinal), which promises to be relevant for problems as varied as materials discovery, hardware design, neural architecture search, and portfolio optimization. Bounce relies on a Gaussian process (GP) model of the objective function, which is based on lower-dimensional subspaces of the original search space, generated by partitioning input variables into “bins”. Notably, the bins only contain input variables of the same type (e.g. continuous) and all variables in a bin are forced to take the same value during the optimization of the acquisition function, in effect operating in a lower-dimensional subspace. During the course of an optimization run, Bounce splits up bins into smaller ones, enabling the algorithm to propose candidates with increasingly finer structure. Bounce also leverages existing techniques for high-dimensional BO, including the trust-region approach. Strengths: - Strong empirical results, comparing against published baselines, in addition to ablations on the batch optimization performance. - Performance is more robust to shifts of the solution than previously published methods. - Sheds light on a non-trivial structural assumption of prior methods. - Generally a well written paper. Weaknesses: - No theoretical analysis. - Without a “modified” or “shifted” label for the left subplots of Figures 1, 2, 3, 4, the presentation is confusing and can easily be misread to suggest that the left subplots report results on the problems as defined in the literature, but they do not. Please add a “modified” or “shifted” label for the left subplots. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In the first 60 or so iterations, Bounce is outperformed by BODi on the non-modified 125d MaxSAT problem, as well as the non-modified PestControl problem. 
Since sample-efficiency is a chief concern for BO methods, is there a way to leverage BODi’s inductive biases for Bounce? On a high level, something like this might be possible since both methods leverage embeddings of the input space. - Is there something more you can say about any theoretical aspects of the method? Does a global convergence guarantee hold? What can you say about the convergence rate, either local or global? How does the choice of embedding influence the convergence? - Are there other embedding approaches that could be combined with Bounce? Suggestions: - The descriptions of the categorical and ordinal embeddings are a bit repetitive and verbose, especially since they formally use the same approach. Can you unify the presentation, possibly including a single math-mode version of the embedding formula? That would aid the readability of the section. - Line 191: “The embedding of ordinal variables follows [that of the] categorical variables” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
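The bin construction summarized in this review (partition the input variables into bins, force every variable in a bin to share one value, and split bins to refine the subspace) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; `expand` and the bin layout are our own names and assumptions:

```python
import numpy as np

def expand(low_dim_point, bins):
    """Map a point in the low-dimensional 'bin' space back to the
    original space: every variable in a bin receives the bin's value."""
    n_vars = sum(len(b) for b in bins)
    x = np.empty(n_vars)
    for value, members in zip(low_dim_point, bins):
        for j in members:
            x[j] = value
    return x

# Three bins over six variables -> a 3-dimensional subspace of a 6D problem.
bins = [[0, 3], [1, 4], [2, 5]]
z = np.array([0.2, -1.0, 0.7])
x = expand(z, bins)
# Splitting a bin (here [0, 3] into [0] and [3]) grows the subspace from 3D to 4D:
finer_bins = [[0], [3], [1, 4], [2, 5]]
```

Splitting bins during the run is what lets the algorithm propose candidates with increasingly finer structure, as the review describes; the acquisition function only ever sees the low-dimensional point.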
Rebuttal 1: Rebuttal: We appreciate the reviewer's insights and will now discuss their remarks. > Without a “modified” or “shifted” label for the left subplots of Figures 1, 2, 3, 4, the presentation is confusing and can easily be misread to suggest that the left subplots report results on the problems as defined in the literature, but they do not. Please add a “modified” or “shifted” label for the left subplots. We agree with the assessment that the figure captions can be misleading when scanning the paper. We will add a label as suggested. We submitted a PDF file with the rebuttal that shows the updated figures. > In the first 60 or so iterations, Bounce is outperformed by BODi on the non-modified 125d MaxSAT problem, as well as the non-modified PestControl problem. Since sample-efficiency is a chief concern for BO methods, is there a way to leverage BODi’s inductive biases for Bounce? On a high level, something like this might be possible since both methods leverage embeddings of the input space. Regarding the question about the inductive bias, we would like to stress that BODi is biased towards problems where the optimum lies at the origin or, for categorical problems, where the optimum is realized by setting all categories to the same ‘value’. In Appendix D, in the supplementary material submitted in April 2023, we discuss that most benchmarks in the BODi paper have this property due to their synthetic nature. We do not believe that this bias is reasonable for most practical applications. However, suppose the user has a prior belief that the optimal solution is located at or close to the origin or realized by setting all categorical variables to the same ‘value’. In that case, the user can similarly bias Bounce. We discuss this in Appendix B.3 in the supplementary material and show that this ‘low-sequency version’ of Bounce outperforms or is on par with BODi even on the version of the benchmark where the optimum is at the origin.
> Is there something more you can say about any theoretical aspects of the method? Does a global convergence guarantee hold? What can you say about the convergence rate, either local or global? How does the choice of embedding influence the convergence? We refer to Appendix A in the supplementary material, where we prove that Bounce is consistent, i.e., converges to the global optimum in the limit. We did not prove regret bounds and would like to point out that no regret bounds are known for this line of work. > Are there other embedding approaches that could be combined with Bounce? For the general case, we are not aware of any other embedding that can be combined with Bounce. The Bounce embedding construction that conveys observations from a lower-dimensional to a higher-dimensional subspace is a key contribution of the paper. However, for the special case of purely continuous problems, Bounce could also use the HeSBO [1] embedding since it also uses a many-to-one mapping of input dimensions to target dimensions. Note that [2] showed that the HeSBO embedding has a lower worst-case probability of containing the optimum, so we opted for the BAxUS [2] embedding construction. > The descriptions of the categorical and ordinal embeddings are a bit repetitive and verbose, especially since they formally use the same approach. Can you unify the presentation, possibly including a single math-mode version of the embedding formula? That would aid the readability of the section. Thank you for the suggestion! We agree and will revise the section accordingly for the camera-ready version. [1] A Framework for Bayesian Optimization in Embedded Subspaces, ICML, 2019 \ [2] Increasing the Scope as You Learn: Adaptive Bayesian Optimization in Nested Subspaces, NeurIPS, 2022
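The many-to-one, random-sign mapping mentioned for HeSBO in this reply is essentially a count-sketch. A minimal sketch under our own naming (`count_sketch_embedding` and `up` are illustrative, not from either paper):

```python
import numpy as np

def count_sketch_embedding(D, d, rng):
    """HeSBO-style random embedding: each of the D input dimensions is
    assigned to one of d target dimensions with a random sign."""
    h = rng.integers(0, d, size=D)       # target dimension per input dim
    s = rng.choice([-1.0, 1.0], size=D)  # random sign per input dim
    def up(y):                           # map y in R^d up to x in R^D
        return s * y[h]
    return up

rng = np.random.default_rng(0)
up = count_sketch_embedding(10, 3, rng)
x = up(np.array([0.5, -0.2, 0.9]))  # a 3D point realized in 10D
```

Because several input dimensions share one target dimension (up to sign), optimizing over the d-dimensional point explores a structured subspace of the D-dimensional problem; per the reply above, [2] argues the BAxUS construction has a better worst-case probability of containing the optimum.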
Summary: The paper considers the problem of optimizing black-box functions defined over high-dimensional combinatorial and mixed continuous-combinatorial spaces. The key idea is to use the Bayesian optimization framework specialized to increasingly large nested embeddings of input dimensions in order to tackle the high-dimensionality challenge. The paper proposes separate embeddings for different classes of input variables, i.e., continuous, binary, categorical, and ordinal. A trust-region-based approach is employed to enable parallel candidate evaluations in the proposed approach. Experiments are performed on multiple benchmarks to demonstrate the efficacy of the approach. Strengths: - The problem space considered in the paper is quite relevant and arises in multiple real-world applications. The paper is written really well, describing the scope of the problem and associated applications. - The proposed idea (although building on BAxUS and TuRBO) is principled and explained well. The method clearly outperforms all state-of-the-art baselines on several benchmarks, demonstrating its efficacy. - The related work discussion and the amount of effort to compare with all the baselines in a proper way is commendable and deserves credit. Overall, I think the paper will be a useful and practical contribution to this problem space of high-dimensional combinatorial spaces. Weaknesses: Please see my question below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The intuition or principle behind the embedding specific to categorical variables is not entirely clear to me. While for continuous variables, the embedding comes from the idea of count-sketches, it is not immediately clear what is captured by assigning categorical variables of even different cardinalities to the same bin. For example, if there is an outlier variable with a very large cardinality or if the range of cardinalities is quite large, would the embedding be very sensitive to the outlier?
Please expand the description of the categorical embedding if possible. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're grateful for the reviewer's input and will address their comments. We expanded the empirical section further and added RDUCB [1] and SMAC [2]. > The intuition or principle behind the embedding specific to categorical variables is not entirely clear to me. While for continuous variables, the embedding comes from the idea of count-sketches, it is not immediately clear what is captured by assigning categorical variables of even different cardinalities to the same bin. For example, if there is an outlier variable with a very large cardinality or if the range of cardinalities is quite large, would the embedding be very sensitive to the outlier? Please expand the description of the categorical embedding if possible. In the camera-ready version, we will add a visualization of the proposed binning procedure. We agree with the reviewer that binning variables of very different cardinalities can lead to undesired effects. Consider the case of ordinal variables. Representing a variable of low cardinality with a high number of ‘levels’ may cause the surrogate model to exhibit variability where there is none in the unknown black-box function: many levels correspond to the same function value since they are mapped to the same x-value of the low-cardinality variable. However, there is a sudden change between two levels if the next level corresponds to another x-value of the low-cardinality variable. Therefore, the surrogate model has wide flat regions with jumps between two levels. We have ideas to tackle this problem but left them for future work. We will discuss this limitation in the paper. [1] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation?, ICML, 2023 \ [2] Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011 --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your time in responding to my queries.
I am happy with the response and would like to keep my score of acceptance.
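The "wide flat regions with jumps" effect the rebuttal above describes, where many surrogate levels collapse onto the few underlying values of a low-cardinality ordinal variable, is easy to see numerically. This is our own toy example with made-up function values, not code from the paper:

```python
import numpy as np

# 8 surrogate 'levels' represent an ordinal variable with only 3
# underlying values: runs of levels map to the same value, so the
# observed function is flat within a run and jumps between runs.
levels = np.arange(8)
underlying = np.floor(levels * 3 / 8).astype(int)  # 8 levels -> 3 values
f = np.array([0.0, 5.0, -2.0])  # black-box value per underlying value
observed = f[underlying]
# observed: [0, 0, 0, 5, 5, 5, -2, -2] -- flat runs with sudden jumps
```

A stationary GP surrogate fit over `levels` would see spurious piecewise-constant structure here, which is the variability-where-there-is-none issue the authors note.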
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback. We are pleased that the reviewers appreciated the problem's relevance, as acknowledged by reviewers ACyR, nHAf, and HPPy, along with their positive assessment of Bounce's performance across various benchmarks, as highlighted by reviewers ACyR, nHAf, HPPy, and FW4t. We are happy to read that the reviewers, including reviewers nHAf, HPPy, and FW4t, found the paper to be well-written. Several reviewers asked for a deeper analysis of the causes of the performance degradation we observed for BODi [1] and COMBO [2]. We want to point the reviewers to the in-depth analysis in Appendix D of the supplementary material that provides such an analysis — please notice that this year’s author guidelines do not allow the supplementary material to be submitted together with the main text. Some reviewers asked for a comparison with additional methods. We submit a PDF file with the rebuttal that shows the updated figures. We added an evaluation of RDUCB [3], a recent algorithm presented at ICML 2023 that belongs to the class of additive methods. We also added a comparison with the well-known SMAC [4]. Bounce outperforms both the recent RDUCB algorithm and SMAC on all benchmarks. Furthermore, we increased the number of function evaluations to 500 for the 50D-Labs and 125D-ClusterExpansion benchmarks. Due to the high computational cost, we could only run ten repetitions for BODi but will increase the number of repetitions for the camera-ready version. We don’t expect the results to change significantly since BODi has shown low variability in our experiments. Furthermore, not all 500 iterations have finished yet for COMBO due to its large running time. The plots show the mean averaged over fifty runs. We observe that Bounce continues to outperform the other algorithms. Based on the comment by reviewer FW4t, we fixed COMBO’s bug that affected its performance on the PestControl benchmark. Fig. 
1 (c) in the PDF shows both the performance of COMBO with (marked “COMBO” in the legend) and without the bug (marked “COMBO (fixed)” in the legend). Fixing the bug improves COMBO’s performance on this benchmark and, as expected, makes COMBO agnostic towards the modification of the benchmark. Note that Bounce still outperforms COMBO and all other algorithms. We note that we contacted the authors of BODi, and they acknowledged the sensitivity of BODi toward the location of the optimal point. We further contacted the authors of COMBO and are currently awaiting a response. [1] Bayesian Optimization over High-Dimensional Combinatorial Spaces via Dictionary-based Embeddings, AISTATS, 2023 \ [2] Combinatorial Bayesian Optimization using the Graph Cartesian Product, NeurIPS, 2019 \ [3] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation?, ICML, 2023 \ [4] Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011 Pdf: /pdf/812d8bebc94c46868887d28dc9278a8cde13904f.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes an algorithm called Bounce, using nested embeddings for mixed and combinatorial search spaces. Bounce partitions input variables into ‘bins’ and sets all variables within the same bin to a single value to reduce the dimension. During the optimization, Bounce splits up bins into smaller bins for more refined optimization. Strengths: 1/ Combinatorial and mixed optimization is significant and has a wide range of applications in the real world. 2/ The algorithm is reasonable. Bounce designs a reasonable mapping for binary, categorical, and ordinal variables, while a similar algorithm, BAxUS, which also uses nested random subspaces, has shown good performance on continuous optimization. Weaknesses: I don't have any major complaints. The proposed algorithm is a natural extension of BAxUS to categorical and mixed spaces. The experiments show that Bounce is reliable and can achieve good performance. Here are some minor comments: 1/ One minor weakness is the high-dimensional continuous spaces part in Section 2. A recent survey paper [1] categorizes these works into several categories, i.e., low-dimensional embeddings, decomposition, and variable selection. This paper mainly discusses the work on low-dimensional embeddings; I think discussing the recent works on decomposition [3-4] and variable selection [5-6] would further strengthen this paper. [1] A Survey on High-dimensional Gaussian Process Modeling with Application to Bayesian Optimization. ACM TELO, 2022. [2] High-Dimensional Bayesian Optimization via Tree-Structured Additive Models. AAAI, 2021. [3] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation? ICML, 2023. [4] Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization. NeurIPS, 2022. [5] Fast and Scalable Spike and Slab Variable Selection in High-Dimensional Gaussian Processes. AISTATS, 2022.
2/ It would be beneficial to see a discussion (or even an experimental comparison) with a work on a similar topic [6]. [6] Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am puzzled about why the performance of BODi and COMBO can be sensitive to the location of the optima, which is equivalent to shuffling the labels of the categories of each variable. BODi uses the Hamming distance and COMBO uses the combinatorial graph to model the relation of different labels for each variable. Different labels of the same variable are treated symmetrically, so shuffling the labels should not impact the performance. Can you provide more explanation? I think the variable selection methods [4-5] may also be applicable to mixed-space tasks. I would like to see some discussion. I'm glad to increase my score if you address the issues raised. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
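The reviewer's premise, that the Hamming distance itself is invariant to a consistent relabeling of each variable's categories, can be checked directly. This snippet is ours, not the paper's code, and it only shows that any label-shuffle sensitivity must enter through other components of the compared methods:

```python
import numpy as np

def hamming(a, b):
    """Number of variables on which two categorical points disagree."""
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
n_vars, n_cats = 20, 5
a = rng.integers(0, n_cats, size=n_vars)
b = rng.integers(0, n_cats, size=n_vars)

# Shuffle the category labels of every variable, applying the same
# permutation to both points:
perms = [rng.permutation(n_cats) for _ in range(n_vars)]
a_shuf = np.array([perms[i][a[i]] for i in range(n_vars)])
b_shuf = np.array([perms[i][b[i]] for i in range(n_vars)])
assert hamming(a, b) == hamming(a_shuf, b_shuf)  # distance is unchanged
```

So a pure Hamming kernel is indeed label-shuffle invariant; the authors' rebuttal points to Appendix D of the paper for where the observed sensitivity of BODi and COMBO actually originates.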
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable remarks and are happy to address them. > 1/ One minor weakness is the high-dimensional continuous spaces part in Section 2. A recent survey paper [1] categorizes these works into several categories, i.e., low-dimensional embeddings, decomposition, and variable selection. This paper mainly discusses the work on low-dimensional embeddings; I think discussing the recent works on decomposition [3-4] and variable selection [5-6] would further strengthen this paper. We agree that comparing against decomposition-based techniques is a valuable addition to the paper. We, therefore, compare against the recent RDUCB [1] algorithm that relies on additive decompositions. Furthermore, we compare against SMAC [2], which uses a random-forest surrogate model. We will further discuss the suggested papers in the related work section. The appendix submitted in April as supplementary material contains an extended related work section that discusses Monte Carlo Tree Search (MCTS) based techniques [3, 4]. We plan to integrate this section into the main text if there is sufficient space and will expand the discussion of the recent work on decomposition and variable selection as suggested by the reviewer. > I am puzzled about why the performance of BODi and COMBO can be sensitive to the location of the optima, which is equivalent to shuffling the labels of the categories of each variable. BODi uses the Hamming distance and COMBO uses the combinatorial graph to model the relation of different labels for each variable. Different labels of the same variable are treated symmetrically, so shuffling the labels should not impact the performance. Can you provide more explanation? We hope Appendix D in the supplementary material answers the open questions about BODi and COMBO’s behavior. In the camera-ready, we will refer to this appendix more prominently in the main text since it contains important insights into these methods.
Due to space constraints, we cannot move it to the main text. [1] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation? ICML, 2023 \ [2] Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011 \ [3] Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search, NeurIPS, 2020 \ [4] Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization, NeurIPS, 2022 --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will increase my score to 7. However, there is a question that may have been missed: "It would be beneficial to see a discussion (or even an experimental comparison) with a work on a similar topic [6]." [6] Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces. NeurIPS, 2022. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We will add the suggested paper to the discussion and investigate whether an experimental comparison is feasible for the camera-ready version. We added RDUCB [1] and SMAC [2] to the comparison and were not able to run more algorithms in the time frame of the rebuttal. [1] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation? ICML, 2023 \ [2] Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011
Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry
Accept (poster)
Summary: This paper seeks to present a biologically plausible learning approach for supervised learning in deep neural networks. Unlike backpropagation, the approach does not require symmetric weights in the forward and backward directions. The approach relies on an information-theoretic objective which seeks to maximize mutual information between layers in the forward and backward directions. The approach is demonstrated on simple data sets (e.g., MNIST) as well as on 3-compartment models of pyramidal neurons. Strengths: The manuscript addresses an important problem--essentially, how can supervised learning be implemented in biological neural networks. It proposes a solution to the well-known weight symmetry/transport problem. And it seeks to do so in a principled way using information-theoretic notions. Weaknesses: Unfortunately, the paper is poorly written, with heavy notation and equations which often obscure the approach rather than clarifying it. There is no clear expression for the learning algorithm. It is hard to see that the learning algorithm is local both in space and time, which is a major requirement for a biologically plausible network. For the experiments, the authors report the test accuracy. However, other metrics would also be interesting, for instance, the degree of symmetry between the final weights in both directions. It seems that the approach still requires propagating error information over long distances (across many layers), which may also be problematic from a biological point of view. Supervised learning is not particularly biologically plausible. This point should be addressed, at a minimum by using self-supervised learning in combination with the proposed approach. The authors should mention more clearly that the weight transport problem is completely solved by random backpropagation or feedback alignment. Thus the advantages of their approach, if any, should be contrasted with feedback alignment.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: It is not clear what you do at the top layer with the targets and the error. Are you clamping the output layer to the targets and then applying your backward pass? The term "disputed" in the abstract is too weak. I think there is a strong general consensus that plain backpropagation is not biologically plausible. line 42: "maximize the linear dependence"--why linear, since the networks are typically non-linear? Furthermore, it is easy to maximize linear dependence by making the signals identical (i.e., identical activities in two neighboring layers of the same size), which is NOT interesting. line 59: "predictors" of what? Figure 1 is not easy to understand at this stage of the paper. In fact, the use of compartmental models at the beginning of the paper is confusing. Maybe these should be moved entirely toward the end of the paper or to an appendix to improve readability. To the best of our knowledge, the result by Liao et al. is not reliable--it has not been confirmed systematically. The term polytopic is not very common and should be defined. Is this assumption needed in the case of artificial neural networks? line 148: why do you mention that the objective is stochastic, and stochastic in which sense? Also, to be clear, you should specify whether you are trying to maximize or minimize the objective. Is there some Gaussian assumption behind equation 2 (if so, it should be stated explicitly)? Also, any connection to or divergence from better-known concepts, such as mutual information, should be clarified. Equation 2 and what follows are particularly unclear and hard to follow. 153: "problem" Which problem exactly? page 6: the learning rule is not clear. Can you write it in the standard form $\Delta w_{ij} = \text{learning rate} \times \text{update rule}$? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See some of the remarks above. There is no discussion of the limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
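The review above asks what the information-theoretic objective actually computes. As a purely illustrative sketch, not the paper's definition of correlative mutual information (which is given in its Appendix A), a second-order, log-determinant-based dependence measure of that general flavor could look as follows; `corr_dependence`, the regularizer `eps`, and the sample sizes are our own assumptions:

```python
import numpy as np

def corr_dependence(x, y, eps=1e-3):
    """Second-order dependence between samples x (n, dx) and y (n, dy):
    how much the best linear predictor of x from y shrinks the
    log-determinant of x's correlation matrix. Illustrative only."""
    n = len(x)
    Rx, Ry, Rxy = x.T @ x / n, y.T @ y / n, x.T @ y / n
    # Error correlation matrix of the linear MMSE predictor of x from y:
    err = Rx - Rxy @ np.linalg.solve(Ry + eps * np.eye(Ry.shape[0]), Rxy.T)
    _, ld_x = np.linalg.slogdet(Rx + eps * np.eye(Rx.shape[0]))
    _, ld_e = np.linalg.slogdet(err + eps * np.eye(Rx.shape[0]))
    return 0.5 * (ld_x - ld_e)  # large when y linearly predicts x well

rng = np.random.default_rng(0)
x = rng.standard_normal((5000, 3))
z = rng.standard_normal((5000, 3))  # independent of x
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
# Linearly dependent signals score high, independent ones near zero:
assert corr_dependence(x, x @ A.T) > corr_dependence(x, z)
```

Maximizing a quantity of this kind between adjacent layers, in both directions, is the flavor of objective the review discusses; as the authors' rebuttal below notes, only second-order statistics enter, so no Gaussian assumption is required.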
Rebuttal 1: Rebuttal: We thank you for your detailed review and useful feedback. We had to truncate some of our responses to adhere to the length constraints. We are very eager to provide more details for any questions you might have during the discussion period. >..poorly written..heavy notations.. We acknowledge your feedback concerning the accessibility. In the revision, we refined the presentation in Section 2 by relocating certain equations to the appendix and providing more comprehensive explanations for enhanced clarity. For a detailed outline of these changes, please refer to items 1 to 3 in our global rebuttal response (GRR). >..no clear expression for learning..hard to see the algorithm is local.. In the initial submission, we presented the learning rule (see Eqs. 26-27) with limited details due to space constraints. To rectify this, we have enriched our revision with an appendix section, "Learning Dynamics", with succinct pseudocode for our algorithm and additional elaboration, ensuring the locality constraints are clearly met. Refer to Alg. 1 in our global rebuttal PDF for more details. >..other metrics would be interesting, e.g., the degree of symmetry.. We indeed evaluated the angle between forward and backward weights using the cosine angle metric (Eq. A.7), detailed in our appendix. Refer to Fig. 2 (App. B.3) for the angle's evolution in our 2-layer experiments, starting at $90^\circ$ and converging to $\sim 70^\circ$. Figs. 10 and 15 illustrate the angle patterns in the 3-layer case and in sparse networks, highlighting the asymmetry inherent to our framework. >..prop. error information over long distances.. The CorInfoMax networks solely include feedback projections from the next layer in the hierarchy, aligning with known biological connectivity patterns. They lack a separate error propagation mechanism or direct long-range top-down projections. >..Supervised learning is not particularly bio-plausible..
We recognize the essential role of unsupervised/self-supervised learning in natural learning processes. Our main contribution lies in the dynamics and structure of info. propagation, which can be extended to unsupervised objectives. This point will be clarified in the revision as per your valuable suggestion. >..weight transport problem is completely solved by random backprop.. We agree that random BP is one plausible solution. We revised Sec. 1.1.2 to include "For example, the feedback alignment approach, which fixes randomly initialized feedback weights and adapts feedforward weights, was offered as a plausible solution [17]". CorInfoMax is offered as an information-theory-based principled alternative hypothesis where networks of segregated neurons with recurrent and asymmetric feedback connections governed by local learning rules naturally emerge. Such a normative framework is useful in obtaining potential insights such as the role of lateral connections in embedding space expansion and avoiding degeneracy, feedback and feedforward connections for prediction to reduce redundancy, and activation functions/interneurons to shape feature space and compress. We will elaborate more on these in the revision by making use of the extra page. >Questions >..not clear..at the top layer We utilize weak, not full, clamping based on the objective in (9a). As shown in Eq. (21) for the output layer, the error between the network output and labels influences network dynamics via the gain $\beta$. >"disputed" is too weak We agree. We will consider replacing the word "disputed" in the revised article. >"maximize linear dependence" why linear..identical activities.. This is an important point to be clarified: typical network models use linear segments between layers (modelling synaptic integration), followed by nonlinear activations. In our framework, these linear segments in both directions emerge via correlation maximization.
Additionally, set membership constraints, like the $\ell_1$-norm ball for sparsity, bring in nonlinearity. Thus, **linear mappings followed by nonlinear activations** emerge from **CMI maximization under domain set constraints for layers**. >"predictors" of what? To clarify, in the revised article we write "predictors of layer activation signals". >Fig. 1 is not easy to understand at this stage.. We value your suggestion. Our aim is to maintain Fig. 1 early on to offer a preview and motivate the discussion, if feasible. >..result by Liao.. We will look into it more closely, thanks. >The term polytopic is not common.. Polytopes, as compact intersections of half-spaces, allow flexible characterization of bounded layer activations. The choice of a polytope influences feature combinations like sparsity, antisparsity, and nonnegativity. Imposing these constraints leads to piecewise-linear activations like ReLU and clipping functions, and introduces interneurons for sparsity. We will be happy to provide a brief summary as an appendix in the revision. >..stochastic in which sense?..specify..trying to maximize or minimize.. First, we define the obj. function in ensemble average form using expectations. Later, we provide its sample average based form. We modified line 148 to specify the objective is "to be maximized". >Is there some Gaussian assumption.. We make no Gaussian assumption. The correlative mutual information (CMI) is a second-order statistics-based measure, independent of the probability density functions, and it assesses the correlation level between its arguments. Please refer to Appendix A for an introduction to correlative entropy and CMI, which we expand in the revision. >..particularly unclear and hard to follow.. We will modify Section 2 for better presentation and accessibility in the revised article. >Which problem?
In the revision, we clarified the term "problem" to mean the optimization for finding the optimal linear regularized MMSE predictor, $\mathbf{W}\_{ff,\*}^{(k)}$, given in Eq. (3). >..no..limitations.. We include a limitations section following your suggestion. Please see GRR item 5. --- Rebuttal 2: Comment: I agree that the authors have answered a fair number of comments--hard to tell something more precise without seeing the revised version. I am happy to move my score up by 1.
Summary: The authors present a novel strategy for learning in neural networks. In particular, the authors derive update rules for neurons/synapses which maximise the correlative information between layer activations. This strategy avoids the weight transport problem, and naturally gives rise to a biology-emulating architecture of multi-compartment pyramidal neurons with lateral inhibition. Strengths: - the authors present what seems a mathematically sound and creative strategy for credit assignment. Without extensive knowledge in this area, the derivation of update rules seems original and of good quality - The resulting likeness to a multi-compartment model with lateral inhibition is interesting - the text is generally well written (though the presentation itself is dense, see below) Weaknesses: - in general I found the paper very dense - I personally think 27 equations is too many for a main text. I appreciate that the main contribution of this paper is analytical, but I think the authors would do well to sacrifice some of the less key equations (move to SM) to make space for additional interpretation/experiments - As stated above, I would have liked to have seen more interpretation and experiments with respect to the model. For example, what predictions does the model make in terms of the balance of bottom-up/top-down signals? How does this change over learning? How does it compare to biology? Same for interneurons - The actual performance of the model does not seem too impressive, at least compared to standard backprop (e.g. on the CIFAR-10 dataset). Moreover, given that a key property of the model is to avoid the weight symmetry issue, I would think it sensible to compare the model to backprop with random feedback weights (feedback alignment). - I think the authors could make it more explicit what are the differences between their model and the model in Golkar et al. 2022.
In particular, explicitly highlighting the similar and new terms when presenting the mathematical formulation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The segue at line 34 was unclear to me - I found section 2.1 confusing to read because the equation which derives the activity r^k of a given layer is not expressed. Is this intentional to keep it general? It's confusing to understand whether the activities between the layers have a relationship at all at this point. - line 140: the inequality used to express the hypercube is not well defined for vectors - The CMI metric (equation 2) is a complicated equation with determinants and auto/cross-correlation matrices. I would have liked an intuitive (perhaps geometric) description of this measure - line 150: R_{r^k r^{k+1}} hasn't appeared yet but is already being described - in section 2.3.1 the variable s is introduced without any explanation as to what it represents. Is it time? i.e. u[t, s] is the t dataset example at time s? - line 231: I didn't understand the decomposition of M. Where does D come from and why does it mirror autapses? Is it the identity matrix in the expression of M? Where does the negative O part (interneuron) come from? - equations 26,27: sorry if this is a naive question, but why can't the weights just be updated directly using equations 23,25? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I would recommend a limitations section (or at least more discussion). For example, I would be interested to know if the sensitivity of the model to hyperparameter choices is high, or whether there is a strict need for the feedback matrix at the last layer to be the identity.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly value your comprehensive review and constructive suggestions. While length constraints have necessitated brevity in our response, we anticipate the opportunity to discuss further details and address any outstanding queries during the discussion phase. >..paper..dense..less key eq.s (move to SM).. Thanks. We followed your advice. Please see items 1 to 3 in the global rebuttal response (GRR). >... more interpretation and experiments... what predictions does the model make..the balance of bottom-up/top-down signals? As noted in GRR-item 4, we've included new experiments. As for the interpretations you've quoted, we believe our info. theory-based framework can provide insightful contributions and we are currently focused on development. Although the balance of bottom-up and top-down signals presents an interesting research direction, we haven't researched this area yet. However, our principled approach offers potential insights such as the role of lateral connections in embedding space expansion and avoiding degeneracy, feedback and feedforward connections for prediction to reduce redundancy, and activation functions/interneurons to shape feature space and compress. We will elaborate more on these in the revision by making use of the extra page. >..performance..not..too impressive..compared to standard BP..compare..to BP with random feedback.. Following your suggestion, we performed additional experiments for comparisons with BP and feedback alignment for the fully connected architecture. The updated Table 1 is available in the rebuttal PDF, which shows that CorInfoMax performs on par with these benchmarks. >.. more explicit..differences between Golkar et al.
..highlighting...the mathematical formulation We can provide the following comparison between CorInfoMax and the constrained predictive coding (C-PC) framework in Golkar et al.: * C-PC enhances existing maximum likelihood (ML) based predictive coding frameworks by integrating secondary forward prediction terms into the ML formulation. The minimization of the negative log-likelihood can be expressed as: \begin{align} \min\_{\mathbf{Z},\mathbf{W}\_a,\mathbf{W}\_b} \hat{L}=\frac{1}{2}\sum_{l=1}^{n-1}\left[\frac{ \|\mathbf{Z}^{(l)}-\mathbf{W}^{(l-1)}_b\mathbf{Z}^{(l-1)}\|_F^2}{2{\sigma^{(l)}}^2}+\frac{ \|\mathbf{Z}^{(l+1)}-\mathbf{W}^{(l)}_a\mathbf{Z}^{(l)}\|_F^2}{2{\sigma^{(l+1)}}^2}\right]. \end{align} Note that both terms in the summation are two separate forward prediction error terms. * In CorInfoMax, we propose maximization of correlative mutual information between sequential branches, where both forward and backward prediction matrices emerge from two alternative but equivalent forms of the CMI. * The C-PC approach makes use of a whitening constraint on layer activations, which is utilized to convert the forward prediction matrix $\mathbf{W}_a$ to a feedback matrix $\mathbf{W}_a^T$. * In the CorInfoMax framework, there is no whitening but a set membership constraint on layer activation vectors. * In C-PC, lateral weights are based on Lagrangians of the covariance constraints. * In CorInfoMax, lateral weights are the inverse of the activation correlation matrix to maximize the correlative entropy of activations. * The updates for feedforward and feedback matrices are different for the two approaches (forward and backward prediction errors are used in CorInfoMax). >Questions: >segue at line 34..unclear.. The first paragraph addresses two main critiques concerning bio. plausibility: weight transport and simple neuron models. The subsequent paragraph delves into the weight symmetry issue, while the third one explores neuron models.
We will ensure a more seamless transition between these. >sec. 2.1 confusing..the eq. which derives the activity $\mathbf{r}^k$..is not expressed. Is this intentional? Indeed, this is intentional. Equations reflecting activity, network structure, dynamics, and learning updates are not predetermined; rather, they emerge from correlative information maximization with activation domain constraints. >line 140: the ineq..is not well defined Apologies for the notational ambiguity. The ineq. $\mathbf{0} \leq \mathbf{r} \leq \mathbf{1}$ denotes elementwise comparison. We can change $\leq$ to $\preccurlyeq$. >The CMI metric (2) is a complicated..an intuitive (perhaps geometric) descrip. of this measure Thanks for the suggestion. We included the following in Sec. 2.2: *"If we interpret the maximization of CMI in (2): the first term on the right side of (2) encourages the spread of $\mathbf{r}^{(k+1)}$ in its presumed domain $\mathcal{P}^{(k+1)}$, while the second term incites the minimization of redundancy in $\mathbf{r}^{(k+1)}$ beyond its component predictable from $\mathbf{r}^{(k)}$."* >line 150: $\mathbf{R}_{\mathbf{r}^k \mathbf{r}^{k+1}}$ hasn't appeared yet Thanks. Removed. See GRR item 2.i. >in sec. 2.3.1..s is introduced without any explanation.. Section 2.1 defines $t$ as the discrete data index, while $s$ is the continuous time index for the network dynamics. We clarify this in the new appendix for network dynamics. >... the decomposition of M. Where does D come from ... We clarify it in the revision. Briefly, we define $\mathbf{D}^{(k)}$ as $\mathbf{D}^{(k)} = \text{diag}(\mathbf{M}^{(k)})$, i.e., a diagonal matrix containing the diagonal elements of $\mathbf{M}^{(k)}$. Then we define $\mathbf{O}^{(k)} = \mathbf{D}^{(k)} - \mathbf{M}^{(k)}$. Therefore, we can express $\mathbf{M}^{(k)}$ as $\mathbf{M}^{(k)} = \mathbf{D}^{(k)} - \mathbf{O}^{(k)}$. >..(26-27):..why can't the weights just be updated..using..23,25?
The Equilibrium Propagation updates in 26,27 target minimization of the MSE error, whereas 23,25 are gradients of the CMI based energy function determining the system dynamics. Using 23,25 directly still provides some accuracy, but significantly below that of the EP updates. >Limitations: >I would recommend a limitations section ... Thanks, we include a limitations section following your suggestions. See GRR item 5. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response and explanations. Regarding the additional experiments, may I ask why feedback alignment with cross entropy loss is not included in the experiments presented in the new Tables 1/2 (whilst cross entropy loss is used with standard BP)? --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment Comment: Thank you again for your comments and questions. We considered MSE loss appropriate for the feedback-alignment experiments, especially since we employ MSE loss within the CorInfoMax framework. However, we will be happy to incorporate feedback-alignment experiments utilizing cross-entropy loss. We have initiated these experiments and will share the results once they are available.
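As background for the feedback-alignment baseline discussed in this thread, here is a minimal numpy sketch of the idea: the backward pass routes the MSE output error through a fixed random matrix instead of the transposed forward weights. All sizes and data here are hypothetical toy choices, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out, T = 8, 16, 2, 200

# Toy regression data from a random linear teacher
X = rng.standard_normal((T, n_in))
Y = X @ rng.standard_normal((n_in, n_out))

W1 = 0.1 * rng.standard_normal((n_in, n_h))
W2 = 0.1 * rng.standard_normal((n_h, n_out))
B = rng.standard_normal((n_out, n_h))       # fixed random feedback, NOT W2.T

def loss():
    return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)

loss0 = loss()
lr = 0.02
for _ in range(500):
    H = np.tanh(X @ W1)
    e = H @ W2 - Y                          # output error (MSE loss)
    dH = (e @ B) * (1.0 - H ** 2)           # error routed through fixed B
    W2 -= lr * H.T @ e / T                  # true gradient step for W2
    W1 -= lr * X.T @ dH / T                 # feedback-alignment step for W1

print(loss0, loss())                        # training loss decreases
```

The point of the sketch is only the structural one made in the rebuttal: learning proceeds even though the feedback pathway `B` is never tied to `W2`.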
Summary: The authors introduce a biologically plausible training paradigm for a deep neural network that sidesteps the weight transport problem while achieving competitive results. Their approach is normative, in that both the network's architecture as well as its learning rules can be derived from an information maximization approach. The asymmetry between forward and backward weights is achieved by leveraging two different formulations of the inter-layer correlative information. Strengths: *Originality:* The work provides a novel approach for deriving biologically plausible strategies for learning in deep neural networks. *Quality:* The paper contains a significant amount of work to support its findings. Importantly, both theory and computation are used in tandem. *Clarity:* The presentation is mostly clear, but some significant explanations are missing or too sparse. See below. I also want to praise the authors for including the code that they used with the submission (something that I believe should be true for all papers, but sadly is not). *Significance:* The work is significant for neuroscience because the learning algorithms used by the brain are not yet understood. Having a good grasp over the range of possible mechanisms that biology could have used to train natural neural networks is essential to allow experimentalists to probe what choice(s) is (are) actually used. The work is also of potential significance for machine learning, since the algorithms used by the brain might provide advantages over the gradient descent with backpropagation methods used to train artificial neural networks. Weaknesses: 1. The correlative mutual information metric requires a bit more discussion. The regularization coefficients $\epsilon_k$ appear in eq. (2) but are not discussed at all until much later, and even in the derivation in Appendix A, the need for this regularization is not explained. 
On first guess, the need for $\epsilon_k \ne 0$ is due to having a low rank covariance matrix $\mathbf R_{\mathbf r^{(k+1)}}$. However, this seems inconsistent with the importance of these coefficients in the network dynamics and learning rules. This requires a more detailed discussion in the main text, and especially in the Appendix. (If space is an issue, I suggest removing most of lines 170-174, which are almost identical to 149-153; it can simply be stated that the sample covariance matrices from eq. (6) need to be used instead of their exact counterparts to get online training rules.) 2. Related to the regularization coefficients, I am a bit perplexed by eqns. (10), (11). The Taylor expansion in these equations is performed around the identity, but that makes the expansion parameter be $1 / \epsilon$. Since $\epsilon$ is small, $1 / \epsilon$ should be big, making it hard to justify ignoring subsequent terms in the Taylor expansion. This needs to be explained in detail. 3. The jump to the dynamics equations (15)–(17) is too abrupt. Either an explanation should be provided or a reference to a relevant Appendix section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Questions * for me, phrasing the method in terms of using two *equivalent* forms of the correlative mutual information is misleading: if they are equivalent, how could they lead to the desired asymmetry between feed-forward and feed-back weights? I may be wrong, but I believe that the resolution lies in the Taylor expansion around line 178 -- it's not the differing exact expressions that lead to the asymmetry, but the different *approximations* * the definition of $\mathbf M$ on line 212 is a bit confusing. Should the factor of 2 apply to both terms? I would have thought that the leak term in the expression for $\mathbf M$ is supposed to counteract the leak term in eq. 
(15) as long as $\mathbf u$ is inside the feasible domain * also regarding $\mathbf M$: since $\epsilon$ is small, my understanding is that $\mathbf M$ is also small; in this case, I would suggest leaving $\epsilon$ out of the definition for $\mathbf M$, to make the scale of each term more apparent; e.g., in eq. (18), I would imagine $\mathbf W_{fb}$ has the leading contribution to $\mathbf v_A$, while the $\mathbf M$ term is a sub-leading correction * were hyperparameters optimized for each task for all of the algorithms in Table 1? Minor comments: * in eq. (12), the notation $\hat J_k$ is used but it had never been introduced before. Please define $\hat J_k$ first before using * the update rule for the lateral coefficients (lines 280-281) should be written in terms of the biologically motivated parameters ($\mathbf M$, or even better, $\mathbf D$ and $\mathbf O$) instead of $\mathbf B$. * in the numerical experiments section, please include at least basic details about the networks that are used (e.g., number of layers) * below eq. (A.12), the clipping operation $\sigma_+$ is invoked out of nowhere. I think the point is to show how this operation can be justified as a way of enforcing the KKT conditions; this should be made clearer * the plots in Appendix Figures 7–8, 12–14 are very sparse and it's not clear what information we are to glean from them; I suggest combining some of these, e.g., compare the accuracy attained by different network architectures and / or activation functions * in the studies from Appendix F, the range over which the hyperparameters are varied seems too small because the variation in accuracy is very low, barely beyond the variability over different runs; I suggest using larger variations to show the trend Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed review and constructive comments. Due to the strict length constraint, some details we prepared had to be removed. We will be happy to provide more details and answer your potential questions during the discussion period. >Strengths: We sincerely value your recognition of our work's novelty and significance. > Weaknesses: >..Discussion on CMI metric, and regularization parameter $\epsilon_k$.. Thanks a lot for this constructive suggestion. Please see items 2 - 3 in our global rebuttal response. >..Taylor exp. in (10-11) is performed around the identity.. Our revised article includes an appendix section detailing the linearization based on the truncated Taylor series. Briefly, the linearization around $\mathbf{A}$ with perturbation $\mathbf{\Delta}$ can be expressed as $\log\det(\mathbf{A}+\mathbf{\Delta})\approx \log\det(\mathbf{A})+\text{Tr}(\mathbf{A}^{-1}\mathbf{\Delta}).$ For the correlative entropy of the prediction error, $\log\det(\epsilon \mathbf{I}+\mathbf{R}\_\mathbf{e})$, we assume $\epsilon \mathbf{I} \succ \mathbf{R}\_\mathbf{e}$ and linearize around $\mathbf{A}=\epsilon \mathbf{I}$ with perturbation $\mathbf{\Delta}=\mathbf{R}\_\mathbf{e}$, yielding $\log\det(\epsilon \mathbf{I}+\mathbf{R}\_\mathbf{e})\approx \log\det(\epsilon \mathbf{I})+\epsilon^{-1}\text{Tr}({\mathbf{R}\_\mathbf{e}})$. In summary, the perturbation term is not $\epsilon \mathbf{I}$ but ${\mathbf{R}\_\mathbf{e}}$. The assumption follows from the discussion on the impact of $\epsilon$, where maximizing the CMI is achieved by pushing the eigenvalues of ${\mathbf{R}_\mathbf{e}}$ below $\epsilon$, for *reasonable* values of $\epsilon$. For our nominal choice of $\epsilon=0.15$ in our experiments, this is indeed the case. >..jump to the dynamics equations (15-17) is too abrupt.. Thanks again for this suggestion. We include a new appendix section on the more detailed derivation of (15-17).
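The truncated-expansion step in this rebuttal is easy to sanity-check numerically. In the sketch below, the matrix size and the PSD test matrix are our own hypothetical choices, constructed so that the eigenvalues of $\mathbf{R}_\mathbf{e}$ stay well below $\epsilon=0.15$, i.e. the regime $\epsilon \mathbf{I} \succ \mathbf{R}_\mathbf{e}$ assumed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 5, 0.15

# Hypothetical small PSD "error correlation" matrix R_e with eigenvalues << eps
A = rng.standard_normal((n, n))
Re = 0.01 * (A @ A.T) / n

# Exact log det(eps*I + R_e) via slogdet (returns (sign, log|det|))
exact = np.linalg.slogdet(eps * np.eye(n) + Re)[1]

# First-order Taylor expansion around eps*I:
# log det(eps*I + R_e) ~= log det(eps*I) + Tr((eps*I)^{-1} R_e)
approx = n * np.log(eps) + np.trace(Re) / eps

print(exact, approx)   # the two values agree closely in this regime
```

The leftover error is the second-order term, roughly $\tfrac{1}{2}\operatorname{Tr}((\mathbf{R}_\mathbf{e}/\epsilon)^2)$, which is small exactly when the eigenvalue condition above holds.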
>Questions: >phrasing the method in terms of using two equivalent forms of the CMI is misleading: .. how could they lead to the desired asymmetry between feed-forward and feed-back weights? ..it's not the differing exact expressions that lead to the asymmetry, but the different approximations? Briefly, the asymmetry is **not** due to the Taylor series based approximation: Equations (7) and (8) indeed represent two equivalent alternatives for Correlative Mutual Information (CMI), but their individual components vary. Specifically, the first terms of (7) and (8) represent the Correlative Entropy (CE) of layer $k+1$ and layer $k$ activations, while the second terms are the CEs of the forward and backward prediction errors. These are not necessarily equivalent, leading to inherently unequal forward and backward error entropies and corresponding weight matrices. The Taylor series based approximation is used only for the linearization of the forward and backward prediction entropies. As discussed in Appendix B.2, we can write the forward and backward predictor weights as $\mathbf{W}\_{ff,\*}^{(k)}=\mathbf{R}\_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}(\mathbf{R}\_{\mathbf{r}^{(k)}}+\epsilon_k \mathbf{I})^{-1}$ and $\mathbf{W}\_{fb,\*}^{(k)}=\mathbf{R}\_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}(\mathbf{R}\_{\mathbf{r}^{(k+1)}}+\epsilon_k \mathbf{I})^{-1}$, which involve the two different inverse factors $(\mathbf{R}\_{\mathbf{r}^{(k)}}+\epsilon_k \mathbf{I})^{-1}$ and $(\mathbf{R}\_{\mathbf{r}^{(k+1)}}+\epsilon_k \mathbf{I})^{-1}$. Consequently, the condition $\mathbf{W}\_{ff}^{(k)}={\mathbf{W}\_{fb}^{(k)}}^T$ does not generally hold true. Symmetry might be anticipated in very specific scenarios, such as diagonal autocorrelation matrices. This analysis only considers the mutual information maximization component of the objective, yet it offers insight into the expected asymmetry of the forward and backward weights. >..the definition of $\mathbf{M}$.. the factor of 2 apply to both terms? Thanks for pointing out the typo.
We fixed it as $\mathbf{M}^{(k)}[t] = \epsilon_k(2 \gamma\mathbf{B}^{(k)}[t] + g_{\text{lk}} \mathbf{I})$. >... $\mathbf{M}$ is also small ... I would suggest leaving $\mathbf{\epsilon}$ out of the definition... Thanks. The goal was to obtain a compact representation. We will evaluate your suggestion for the revision. >were hyperparameters optimized for each task for all of the algorithms in Table 1 Indeed, we put considerable effort into optimizing the hyperparameters for most of the tasks and all the algorithms in Table 1, using grid search. Our shared code explicitly includes the grid search parameters associated with each algorithm. Additionally, our Python notebooks under "AnalyzeSimulations" present the train and test results in a comprehensive table, enabling easy comparison between various settings. >Minor Comments: >in eq. (12), ..$\mathbf{\hat{J}_k}$.. had never been introduced.. Thanks. In the revision, we explicitly include ${\hat{J}}\_k(\mathbf{r}^{(k)}) = \overset{\rightarrow}{\hat{I}^{(\epsilon\_{k-1})}}(\mathbf{r}^{(k - 1)},\mathbf{r}^{(k)})[t]+\overset{\leftarrow}{\hat{I}^{(\epsilon\_k)}}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]$ for $k=1, \ldots, P-1$ and $\hat{J}\_P(\mathbf{r}^{(P)})[t]=\overset{\rightarrow}{\hat{I}^{(\epsilon\_{P-1})}}(\mathbf{r}^{(P - 1)},\mathbf{r}^{(P)})[t]-\frac{\beta}{2}\|\mathbf{r}^{(P)}[t]-\mathbf{y}\_T[t]\|_2^2$. >the update rule for the lateral coefficients ... in terms of ... ($\mathbf{M}$, or ..., $\mathbf{D}$ and $\mathbf{O}$) We'll revise the update equations as suggested (see Alg. 1 in global PDF). >... experiments section...include...details about the networks.. Thanks. We provide them in the appendix but we will include them in the main text of the revision. >below (A.12)...$\sigma_+$ is invoked out of nowhere .. We reworded this sentence. >... Figures 7–8, 12–14 are very sparse...suggest combining... Following your advice, we merged 7-9 and 12-14.
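The generic asymmetry of the forward and backward predictor weights discussed earlier in this rebuttal ($\mathbf{W}_{ff}^{(k)} \neq {\mathbf{W}_{fb}^{(k)}}^T$) can be illustrated with a small numpy check; the correlated "activations" below are hypothetical stand-ins for $\mathbf{r}^{(k)}$ and $\mathbf{r}^{(k+1)}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, T, eps = 4, 6, 10_000, 0.15

# Hypothetical correlated layer activations: r^(k) (R1) and r^(k+1) (R2)
R1 = rng.standard_normal((n1, T))
R2 = rng.standard_normal((n2, n1)) @ R1 + 0.5 * rng.standard_normal((n2, T))

Rr1 = R1 @ R1.T / T                     # R_{r^(k)}
Rr2 = R2 @ R2.T / T                     # R_{r^(k+1)}
R12 = R1 @ R2.T / T                     # R_{r^(k) r^(k+1)}

# W_ff = R_{r^(k+1) r^(k)} (R_{r^(k)} + eps I)^{-1}
W_ff = R12.T @ np.linalg.inv(Rr1 + eps * np.eye(n1))
# W_fb = R_{r^(k) r^(k+1)} (R_{r^(k+1)} + eps I)^{-1}
W_fb = R12 @ np.linalg.inv(Rr2 + eps * np.eye(n2))

# The two optimal predictors involve different inverse factors,
# so they are not transposes of each other in general:
print(np.abs(W_ff - W_fb.T).max())      # clearly nonzero
```

The gap comes from the two different regularized inverses, exactly as argued above; only in special cases (e.g. diagonal autocorrelation matrices) would it shrink toward symmetry.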
>Appendix F...hyperparameters...suggest using larger variations to show the trend... The revised manuscript will include an extended ablation study section in the appendix. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations. It seems like the answer regarding the $\epsilon$ terms is that they are actually relatively *large*, not small. I suppose this is a valid choice, although I'd be curious to know whether the quadratic term in the Taylor expansion is actually negligible compared to the linear one that you keep. Regardless of this, such a large value for $\epsilon$ goes well beyond regularization in the sense of avoiding divergences; it appears to act more like a prior, pushing your system in a direction that presumably is desirable for some reason. What is that direction and why do you want to bias the system like this? In other words, by using a large $\epsilon$, you are not quite optimizing correlative mutual information anymore, and in that case, you should explain what you are optimizing instead, and why. --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment - Part 1 Comment: Thank you for your comments and questions, which enhance the understanding of our article. Before directly addressing your query, we will first delve deeper into the discussion on the $\epsilon$ parameter, and then offer our responses: Upon examining the expression for correlative mutual information (CMI), $$\overset{\rightarrow}{{I}^{(\epsilon_k)}}(\mathbf{r}^{(k)}, \mathbf{r}^{(k+1)}) = \frac{1}{2} \log \det \left(\mathbf{R}\_{\mathbf{r}^{(k+1)}} + \epsilon_k \mathbf{I}\right)- \frac{1}{2} \log \det \left(\mathbf{R}\_{\overset{\rightarrow}{\mathbf{e}^{(k+1)}\_\*}} + \epsilon_k \mathbf{I}\right), \quad (2)$$ $\epsilon_k$ appears to function as a correction factor to compensate for rank-deficient correlation matrices of degenerate random vectors.
From this perspective, this adjustment serves two primary purposes: * To establish a finite lower bound for the entropy, and * To circumvent numerical optimization issues, given that the derivative of the $\log\det$ function is the inverse of its argument. In fact, robust matrix factorization methods that rely on determinant-maximization use this perturbation for the aforementioned reasons [a]. Additionally, a recent study links $\epsilon_k$ with a parameter of the **Inverse Wishart prior** distribution on the covariance matrix of the row vectors of one of the factors [b]. Moving beyond these interpretations, we first observe that (2) defines **a family** of correlative mutual information definitions. **For each $\epsilon_k$ choice, we have a valid alternative correlative mutual information definition [35]. Hence, we do not necessarily interpret them as approximations of the $\epsilon_k=0$ case; instead, they are alternative CMI measures.** To see the impact of the $\epsilon_k$ choice: * We note that the prediction error covariance matrix in (2) $$\mathbf{R}\_{\mathbf{e}^{(k+1)}\_\*}=\mathbf{R}\_\mathbf{r^{(k+1)}} - \mathbf{R}\_{\mathbf{r}^{(k)} \mathbf{r}^{(k+1)}}^T(\mathbf{R}\_\mathbf{r^{(k)}} + \epsilon_k \mathbf{I})^{-1} \mathbf{R}\_{\mathbf{r^{(k)}}\mathbf{r}^{(k+1)}} \hspace{0.2in}(A)$$ corresponds to the error correlation matrix for the best linear regularized minimum mean square estimator of $\mathbf{r}^{(k+1)}$ from $\mathbf{r}^{(k)}$. This estimator is obtained as the solution of the optimization problem \begin{eqnarray} \underset{\mathbf{W}\_{\mathbf{r}^{(k+1)}|\mathbf{r}^{(k)}}}{\text{minimize }} {E(\|{\mathbf{r}^{(k+1)}}-\mathbf{W}\_{\mathbf{r}^{(k+1)}|\mathbf{r}^{(k)}}\mathbf{r}^{(k)}\|\_2^2)+\epsilon_k\|\mathbf{W}\_{\mathbf{r}^{(k+1)}|\mathbf{r}^{(k)}}\|\_F^2}. \end{eqnarray} In this context, the $\epsilon_k$ parameter acts as a regularizing coefficient for the linear estimation problem integral to measuring linear dependence between the two arguments of the CMI.
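As an aside, the regularized estimation problem above has the familiar ridge-regression closed form $\mathbf{W}_* = \mathbf{R}_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}(\mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon_k\mathbf{I})^{-1}$; a quick numpy sanity check (dimensions and data are hypothetical) that this solution zeroes the objective's gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, T, eps = 4, 3, 10_000, 0.15

# Hypothetical stand-ins: X ~ r^(k) activations, Y ~ r^(k+1) activations
X = rng.standard_normal((n_x, T))
Y = rng.standard_normal((n_y, n_x)) @ X + 0.1 * rng.standard_normal((n_y, T))

Rx = X @ X.T / T                                   # sample autocorrelation
Ryx = Y @ X.T / T                                  # sample cross-correlation

# Closed-form regularized MMSE predictor: W* = R_yx (R_x + eps I)^{-1}
W = Ryx @ np.linalg.inv(Rx + eps * np.eye(n_x))

# Gradient of E||y - Wx||^2 + eps ||W||_F^2 w.r.t. W should vanish at W*:
# dJ/dW = -2 (R_yx - W R_x) + 2 eps W
grad = -2 * (Ryx - W @ Rx) + 2 * eps * W
print(np.abs(grad).max())                          # ~0 up to floating point
```

Since the same sample correlation matrices appear in both the closed form and the gradient, the cancellation is exact up to round-off, which is why $\epsilon_k$ can be read directly as a ridge coefficient here.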
Maximizing the CMI given by equation (2) can be accomplished by increasing the correlative entropy of $\mathbf{r}^{(k+1)}$ while decreasing the correlative entropy of the estimation error $\mathbf{e}^{(k+1)}\_\*$. Based on Eq. (A) above, we have $\mathbf{R}\_{\mathbf{r}^{(k+1)}}\succeq \mathbf{R}\_{\mathbf{e}^{(k+1)}\_\*}$, and we can write the CMI in (2) as $$\overset{\rightarrow}{{I}^{(\epsilon_k)}}(\mathbf{r}^{(k)}, \mathbf{r}^{(k+1)})=\frac{1}{2}\sum_{l=1}^{N_{k+1}}(\log(\lambda_l(\mathbf{R}\_{\mathbf{r}^{(k+1)}})+\epsilon_k)-\log(\lambda_l(\mathbf{R}\_{\mathbf{e}^{(k+1)}\_\*})+\epsilon_k))$$ where $\lambda_l(\mathbf{R})$ denotes the $l$-th eigenvalue of the matrix $\mathbf{R}$. We anticipate that the choice of $\epsilon_k$ will primarily influence the correlative entropy of $\mathbf{e}^{(k+1)}\_\*$. Indeed, since $\epsilon_k$ is added to all the eigenvalues of $\mathbf{R}\_{\mathbf{e}^{(k+1)}\_\*}$, reducing its eigenvalues below $\epsilon_k$ would yield only an incremental increase in the mutual information. As such, a smaller $\epsilon$ value will place greater emphasis on reducing the estimation error $\mathbf{e}^{(k+1)}\_\*$. Consequently, $\epsilon^{-1}$ can be viewed as an indicator of the sensitivity of the CMI to the levels of the estimation error $\mathbf{e}^{(k+1)}\_\*$ (hence acting as a conductance for basal/apical-soma connections), determining how far we need to push down the estimation error values to increase the CMI. In brief, the choice of $\epsilon_k$ refers to the choice of a CMI from a family of CMIs. Furthermore, the choice of $\epsilon_k$ determines the relative contributions of the first and second terms in (2), where a smaller $\epsilon_k$ (larger $\epsilon_k^{-1}$) gives more emphasis to the prediction error entropy. [a] Xiao Fu et al. Robust volume minimization-based matrix factorization for remote sensing and document clustering. IEEE TSP, Aug 2016. [b] G. Tatli et al.
A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization. In ICASSP, June 2023.
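The eigenvalue form of the CMI above is easy to verify numerically. The sketch below (our own illustration under illustrative values, not the authors' code) also demonstrates the sensitivity claim: shrinking the error eigenvalues that already sit below $\epsilon_k$ barely changes the CMI when $\epsilon_k$ is large, while the same shrinkage pays off substantially when $\epsilon_k$ is small:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4   # layer width (illustrative)

def cmi(R2, Re, eps):
    """0.5*(logdet(R2 + eps*I) - logdet(Re + eps*I)), the definition in (2)."""
    s1 = np.linalg.slogdet(R2 + eps * np.eye(n))[1]
    s2 = np.linalg.slogdet(Re + eps * np.eye(n))[1]
    return 0.5 * (s1 - s2)

def cmi_eig(R2, Re, eps):
    """Equivalent eigenvalue form: 0.5 * sum_l of log(lambda_l + eps) differences."""
    return 0.5 * (np.sum(np.log(np.linalg.eigvalsh(R2) + eps))
                  - np.sum(np.log(np.linalg.eigvalsh(Re) + eps)))

M = rng.standard_normal((n, n))
R2 = M @ M.T + np.eye(n)   # layer correlation matrix (positive definite)
Re = 0.01 * np.eye(n)      # small prediction-error correlation matrix

assert np.isclose(cmi(R2, Re, 0.1), cmi_eig(R2, Re, 0.1))

# Reward for shrinking the error correlation from 0.01*I down to 0.001*I:
gain_large_eps = cmi(R2, 0.001 * np.eye(n), 1.0) - cmi(R2, Re, 1.0)
gain_small_eps = cmi(R2, 0.001 * np.eye(n), 1e-3) - cmi(R2, Re, 1e-3)
assert gain_small_eps > gain_large_eps   # small eps emphasizes error reduction
```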
Summary: The paper proposes Correlative Information Maximization as an underlying objective for biologically plausible learning. The objective produces a multi-compartmental neuron model, and can operate with feedback connections that are plastic, but not tied to the feedforward ones. Strengths: The (approximation to the) CorInfoMax objective produces a tractable model of a neuron with several compartments. This resonates with previous ideas of credit assignment with apical dendrites, and (I guess) generates experimentally testable predictions due to the specific interactions between compartments and weights. The approach to weight symmetry is interesting and might implicitly lead to weight symmetry (although see weaknesses). Overall, this is a novel idea, even though it is very related to previous works that use apical dendrites/predictive coding/etc. as a mechanism for credit assignment. Weaknesses: The experiments in Tab. 1 have multiple problems. There's no comparison to backprop and no explanation of the used architectures in the main text. Presumably the architectures were pretty small, given the poor CIFAR10 performance. Relatedly, all experiments show feedback alignment-level performance (i.e., good on MNIST, OK on CIFAR10 for a small network that reaches about 50% accuracy). Thus, we can’t draw any conclusions about the effectiveness of this approach without considering at least larger networks and maybe harder datasets (as feedback alignment doesn't scale beyond those cases). The minimum aim would be to train a standard ResNet18 on CIFAR10 with backprop (should be around 90% accuracy), and compare it to all algorithms in Tab. 1. The authors also missed an important previous work -- Deep Learning without Weight Transport by Akrout et al. (2019). That paper proposes a simple mechanism for the weight transport problem that is a bit different from the one here, but it is still worth discussing in the context of backprop approximations/alternatives.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Main question: are the feedback weights different from the feedforward at the end of training? I think it is an interesting question in itself, since "no" would mean your algorithm approximates backprop (similar to predictive coding approaches?). If they are different, I wonder if that could hurt performance on harder tasks and if that could be fixed somehow (akin to weight symmetry in Akrout et al.?) Is CorInfoMax here similar to the Information Bottleneck ideas? The abstract says > The backpropagation algorithm... it remains an open question whether the brain employs supervised learning mechanisms akin to it Backprop doesn’t imply supervised learning (see VAEs, self-supervised methods and so on). The overall claim is fine, but it shouldn't be about supervised learning. Eq. 27: should the second $e^{k+1}$ be $e^k$? **Overall**, it is an interesting contribution but the experiments are very small-scale and the proposed approach is not compared to the main competitor, backpropagation. ---------- **Post-rebuttal**: feedback alignment-level performance is a limitation of this work, but the principled approach to derive multi-compartment models and additional evaluations done during the rebuttal justify an increase of the score from 5 to 6. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations and potential impacts have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and insightful feedback. Due to strict length constraints, we had to omit some details in our responses. We eagerly look forward to providing further clarifications and answering any additional questions during the discussion period. > Strengths: Thank you for your positive comments. >..might implicitly lead to weight symmetry.. Our theoretical discussion in Appendix B.2, supported by experimental angle measurements between feedback and transpose feedback weights in Appendices B.3, E.4.3, and E.5.4, indicates that CorInfoMax networks do not lead to implicit weight symmetry. > Weaknesses: >..no comparison to backprop.. In the revised article, we've updated Table 1 with backpropagation and feedback alignment results, available in the rebuttal PDF. >.. no explanation of used architectures.. The revision incorporates architecture information into the main text. Our initial submission's appendix detailed hyperparameters and network structures for each experiment. We used 2- or 3-layer fully connected networks, with hidden sizes of $500$ for MNIST and $1000$ for CIFAR10 in 2-layer networks, and $500$-$500$ for MNIST and $1000$-$500$ for CIFAR10 in 3-layer networks. >...to train a standard ResNet18 ... Our key objective is to propose a normative learning rule leading to segregated pyramidal neuron models, addressing the weight transport problem. We focused on fully connected neural networks to establish our theoretical foundations. In our new experiments, we've achieved similar performance to biologically plausible benchmarks and BP-trained fully connected networks. We deliberately excluded CNNs, including ResNets, due to their weight-sharing feature, which doesn't align with locally connected neuronal models in the brain. >... also missed an important previous work -- Deep Learning without Weight Transport by Akrout... Thanks a lot for pointing out this relevant reference.
We inserted the following change to Section 1.1.2: *"For example, the feedback alignment approach, which fixes randomly initialized feedback weights and adapts feedforward weights, was offered as a plausible solution [17]. Later, Akrout et al. [18] proposed its extension by updating the feedback weights towards the transpose of the feedforward weights."* >Questions: >..are the feedback weights different from the feedforward at the end of training?.. If they are different, I wonder if that could hurt performance.. We appreciate the opportunity to clarify our model's distinctiveness. As discussed in Appendices B.2, B.3, E.4.3, and E.5.4, our model's feedback weights are not transposes of the feedforward weights, unlike conventional backpropagation networks. As discussed in Appendix B.2 of our initial submission, we can write the forward and backward predictor weights as $$\mathbf{W}\_{ff,\*}^{(k)}=\mathbf{R}\_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}(\mathbf{R}\_{\mathbf{r}^{(k)}}+\epsilon_k \mathbf{I})^{-1},$$ $$\mathbf{W}\_{fb,\*}^{(k)}=\mathbf{R}\_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}(\mathbf{R}\_{\mathbf{r}^{(k+1)}}+\epsilon_k \mathbf{I})^{-1}.$$ Inspecting these expressions, we see that they involve not only $\mathbf{R}\_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}$ and $\mathbf{R}\_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}$, which are transposes of each other, but also the inverse autocorrelation matrices $(\mathbf{R}\_{\mathbf{r}^{(k)}}+\epsilon_k \mathbf{I})^{-1}$ and $(\mathbf{R}\_{\mathbf{r}^{(k+1)}}+\epsilon_k \mathbf{I})^{-1}$. Consequently, the condition $\mathbf{W}\_{ff}^{(k)}={\mathbf{W}\_{fb}^{(k)}}^T$ does not generally hold true. Symmetry might be anticipated for diagonal autocorrelation matrices. In the standard feedforward-network backpropagation setting, the output of the feedback network does not directly influence the feedforward network's output; instead, it generates credit signals for updating the feedforward weights.
For output mean square error minimization, the feedback weights should be the transposes of the feedforward weights. However, CorInfoMax networks operate differently. Being recurrent networks with feedback, these networks' dynamics, and thus their intermediate and output signals, are directly influenced by the feedback weights. Our use of equilibrium propagation-based learning ensures that the weights adapt to minimize the mean square error loss function, guided by the CorInfoMax objective. Consequently, there is no requirement for the feedback weights to mirror the feedforward weights. > Is CorInfoMax here similar to the Information Bottleneck ideas? While both CorInfoMax and the Information Bottleneck principle derive from information theory, their relationship isn't straightforward. Traditionally, the Information Bottleneck method aims to maximize the Shannon Mutual Information (SMI) between a hidden vector and the output label, whilst simultaneously minimizing its SMI with the input. This dual goal ensures the relevance of the hidden layer to the output while promoting compression. On the other hand, the CorInfoMax framework is designed to maximize the correlative information flow across the input, hidden layers, and output in a bidirectional fashion. CorInfoMax achieves potential compression by adopting specific domain sets, such as polytopes, for the hidden and output layers. Consequently, this leads to piecewise linear activation functions and lateral inhibition neurons at these layers. > ..Backprop doesn’t imply supervised learning... We agree. We focused on supervised learning, primarily due to its notational convenience and alignment with the traditional form of backpropagation. However, it's crucial to note that our framework could feasibly be extended to other unsupervised and self-supervised paradigms. > ... should the second $\mathbf{e}^{k+1}$ be $\mathbf{e}^k$? Thanks. We corrected this in the revision. > ...
the proposed approach is not compared to the main competitor, backpropagation. We included new experiments with BP. Please see the rebuttal PDF for the updated Table 1. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the response! Overall, I appreciate the clarifications and additional evaluations. However, the additional experiments confirm my concern in the original review -- the method scales similarly to vanilla feedback alignment (FA). This is evident from the similar performance, and the large performance gap between CorInfoMax/FA and backprop in Tab. 2 of the rebuttal PDF. While a performance gap is not a bad thing per se, I suspect the method will fail at hard tasks just like FA. And I also suspect it could be fixed by explicitly aligning the feedback weights with the feedforward ones. Since fixing the weight transport problem was one of the main goals of the paper, I think presenting a more capable method than FA is crucial. > We deliberately excluded CNNs, including ResNets, due to their weight-sharing feature that doesn't align with locally connected neuronal models in the brain. Many papers on biologically plausible deep learning use CNNs though, so including those results might help to show that the algorithm is capable of learning hard tasks (without implying that this is how the visual stream works). I'm open to a discussion, but currently I remain skeptical of the method's value for the field. --- Reply to Comment 1.1.1: Title: Response to Reviewer MCJo's Response Comment: Thank you once again for your thoughtful feedback and the time you’ve invested in evaluating our work. We genuinely value your concerns regarding the performance on more challenging tasks. We acknowledge that achieving high performance on hard machine learning tasks remains a collective challenge for all biologically plausible models. However, this may not be the most relevant, or only, evaluation metric from the perspective of explaining how brains work.
In fact, matching biological reality and interpretability via the use of normative principles are important criteria (see, e.g., [a,b,c,d,e,f] below). Indeed, the primary contribution of our article lies in its principled approach: we introduce a normative framework grounded in information theory. Biologically plausible networks comprised of multi-compartment neurons with both recurrent and asymmetric feedforward/feedback connections naturally emerge as solutions of the optimization settings put forward through this framework. At the same time, our approach provides principled interpretations of lateral, feedback, and feedforward connections, as well as activation functions. We can use an information-theoretic lens to interpret the role of these network components in terms of maintaining bidirectional information flow, avoiding embedding space degeneracy (through lateral connections and interneurons) while achieving compression by eliminating redundancy (through feedforward/feedback connections) and domain constraints (activation functions/interneurons). At the same time, the resulting networks achieve similar performance to existing biologically plausible benchmarks without weight reuse. We are confident that the foundational nature of this framework, when extended and combined with additional biological and normative constraints, has the potential to address more practical concerns in the field. [a] Golkar S, Lipshutz D, Bahroun Y, Sengupta A, Chklovskii D. A simple normative network approximates local non-Hebbian learning in the cortex. Advances in Neural Information Processing Systems. 2020;33:7283-95. [b] Meulemans A, Zucchet N, Kobayashi S, Von Oswald J, Sacramento J. The least-control principle for local learning at equilibrium. Advances in Neural Information Processing Systems. 2022 Dec 6;35:33603-17. [c] Alonso N, Millidge B, Krichmar J, Neftci EO. A theoretical framework for inference learning. Advances in Neural Information Processing Systems.
2022 Dec 6;35:37335-48. [d] Song, Yuhang, et al. “Can the brain do backpropagation?---exact implementation of backpropagation in predictive coding networks.” Advances in neural information processing systems 33 (2020): 22566-22579. [e] Bredenberg C., Savin C. Desiderata for normative models of synaptic plasticity, arXiv:2308.04988, 2023. [f] Lipshutz D, Bahroun Y, Golkar S, Sengupta AM, Chklovskii DB. A normative framework for deriving neural networks with multi-compartmental neurons and non-Hebbian plasticity. PRX Life 2023.
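The earlier point in this thread, that the forward and backward predictor weights share the cross-correlation factor but differ through the inverse autocorrelation matrices (so they are generally not transposes of each other), can be checked numerically. This is our own sketch with hypothetical dimensions, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, eps = 5, 50000, 0.1   # layer width, samples, regularizer (illustrative)

# Two correlated layers with generic (non-diagonal) autocorrelations.
r1 = rng.standard_normal((n, T))
mix = rng.standard_normal((n, n))
r2 = mix @ r1 + 0.5 * rng.standard_normal((n, T))

R1, R2 = r1 @ r1.T / T, r2 @ r2.T / T
R12 = r1 @ r2.T / T   # R_{r^(k) r^(k+1)}; its transpose is R_{r^(k+1) r^(k)}

W_ff = R12.T @ np.linalg.inv(R1 + eps * np.eye(n))  # forward predictor weights
W_fb = R12 @ np.linalg.inv(R2 + eps * np.eye(n))    # backward predictor weights

def angle_deg(A, B):
    """Angle in degrees between two matrices viewed as flattened vectors."""
    a, b = A.ravel(), B.ravel()
    return np.degrees(np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))

# The inverse autocorrelations differ, so the feedback weights are far from
# the transpose of the feedforward ones in the generic case.
assert angle_deg(W_fb, W_ff.T) > 1.0
```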
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their comprehensive evaluations and insightful feedback. This response covers the main points and shared concerns. **Owing to the strict length constraint**, we've made every effort to respond to individual comments and questions here and within each reviewer's rebuttal. **We will be more than happy to provide more details during the discussion phase.** Our article offers a method grounded in information theory for the development of biologically plausible neural networks. Enhancing the clarity and accessibility of the theoretical content has been identified as an area for improvement. In addition, there have been requests for clearer derivations, explanations, and additional experiments. The revision we prepared addresses these aspects, as highlighted below: 1. Revise Section 2.2 for better readability by *moving some details to the appendix and eliminating some unnecessary expressions*: a) Move the linear approximation of correlative entropy (based on Taylor series) in (10)-(11) to a new appendix section, b) transfer the CorInfoMax objective function gradient derivation details in (12)-(14) to a new appendix, c) add a new appendix section for the derivation of the network dynamics equations (15)-(17), d) eliminate the cross-correlation matrix and the detailed definition of the error correlation matrix after (2). 2. Revise Section 2.2 for *better explanations and clarifications*, e.g., i.
Replace the description after (2), which contains the definition of correlative mutual information, for clarification: "$$\overset{\rightarrow}{{I}^{(\epsilon_k)}}(\mathbf{r}^{(k)}, \mathbf{r}^{(k+1)}) = \frac{1}{2} \log \det \left(\mathbf{R}\_{\mathbf{r}^{(k+1)}} + \epsilon_k \mathbf{I}\right) - \frac{1}{2} \log \det \left(\mathbf{R}\_{\overset{\rightarrow}{\mathbf{e}^{(k+1)}\_\*}} + \epsilon_k \mathbf{I}\right) \quad (2)$$ is the correlative mutual information between layers $\mathbf{r}^{(k)}$ and $\mathbf{r}^{(k+1)}$, $\mathbf{R}\_{\mathbf{r}^{(k+1)}}=E(\mathbf{r}^{(k+1)}{\mathbf{r}^{(k+1)}}^T)$ is the autocorrelation matrix corresponding to the layer $\mathbf{r}^{(k+1)}$ activations, and $\mathbf{R}\_{\overset{\rightarrow}{\mathbf{e}^{(k+1)}\_{\*}}}$ corresponds to the error autocorrelation matrix for the best linear regularized minimum MSE predictor of $\mathbf{r}^{(k+1)}$ from $\mathbf{r}^{(k)}$. Therefore, the mutual information objective in (2) refers to the regularized **forward** prediction problem represented by the optimization ...* " ii. Provide an intuitive description for the maximization of CMI in (2): *"If we interpret the maximization of CMI in (2): the first term on the right side of (2), i.e., the correlative entropy of the $(k+1)^{\text{th}}$ layer's activation vector, encourages the spread of $\mathbf{r}^{(k+1)}$ in its presumed domain $\mathcal{P}^{(k+1)}$, while the second term, i.e., the correlative entropy of the forward prediction error, incites the minimization of redundancy in $\mathbf{r}^{(k+1)}$ beyond its component predictable from $\mathbf{r}^{(k)}$."* 3. Provide **a new appendix section on the role of the $\epsilon$ parameter**.
To summarize: a) $\epsilon$ sets a finite lower bound for the correlative entropy, b) $\epsilon$ addresses numerical optimization issues since the derivative of the $\log\det$ function is the inverse of its argument, c) $\epsilon$ acts as a regularizer for the forward and backward prediction problems (see (3) and (5) in the main article), d) $\epsilon^{-1}$ can be viewed as an indicator of the sensitivity of the CMI to the prediction error levels. This last property can be viewed from two perspectives: - Inspecting the CMI expression (2) above: $\epsilon_k$ is added to the eigenvalues of the correlation matrices, and by definition $\mathbf{R}\_{\mathbf{r}^{(k+1)}} \succeq \mathbf{R}\_{\overset{\rightarrow}{\mathbf{e}^{(k+1)}\_\*}}$. With $\epsilon_k$ chosen below the eigenvalues of $\mathbf{R}\_{\mathbf{r}^{(k+1)}}$, we can assume $\mathbf{R}\_{\mathbf{r}^{(k+1)}}+\epsilon_k \mathbf{I}\approx \mathbf{R}\_{\mathbf{r}^{(k+1)}}$. Thus, the choice of $\epsilon_k$ essentially determines how much we can reduce the correlative prediction error entropy in (2) to maximize the CMI, since reducing the eigenvalues of the prediction error correlation matrix below $\epsilon_k$ would not significantly decrease the prediction error entropy. As a result, a smaller $\epsilon_k$ implies more emphasis on decreasing the prediction error entropy. This is in accordance with how $\epsilon^{-1}$ acts as a conductance parameter channeling prediction errors to output computation. To underline this connection, we will add an additional explanation to the discussion at the end of Section 2.3.1: "The inverse of the regularization coefficient $\epsilon_k$ is related to the conductance between the soma and dendritic compartments. This is compliant with the interpretation of $\epsilon^{-1}$ in Appendix A.2 as the sensitivity parameter that determines the contribution of the prediction errors to the CMI."
- Alternatively, consider the approximation of (2) with linearized prediction error entropy, where $\epsilon_k^{-1}$ appears as the scale of the prediction-error-matrix-dependent term: $$\overset{\rightarrow}{{I}^{(\epsilon_k)}}(\mathbf{r}^{(k)}, \mathbf{r}^{(k+1)}) \approx \frac{1}{2} \log \det \left(\mathbf{R}\_{\mathbf{r}^{(k+1)}}\right)- \frac{\epsilon_k^{-1}}{2} \text{Tr} \left(\mathbf{R}\_{\overset{\rightarrow}{\mathbf{e}^{(k+1)}\_\*}} \right)+\text{const}$$ 4. Provide **additional numerical experiments** involving comparison with standard backpropagation and feedback alignment algorithms. The updated Table 1 with these experiments is provided in the PDF attachment, which confirms that CorInfoMax performs on par with the available benchmarks. 5. Provide a section on the **limitations** of the proposed framework, including hyperparameter sensitivity, contrastive optimization, and the training time of our method. Pdf: /pdf/f0d07fb671c3f22ffed6f6ef2b7dad1ab6a70bb9.pdf
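The linearized approximation quoted above can be sanity-checked numerically: when the error eigenvalues sit well below $\epsilon_k$ and the layer eigenvalues well above it, $\log\det(\mathbf{R}+\epsilon_k\mathbf{I}) \approx N\log\epsilon_k + \epsilon_k^{-1}\operatorname{Tr}(\mathbf{R})$, which makes the constant explicit. A small NumPy sketch with our own illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps = 4, 0.1   # layer width and regularizer (illustrative)

M = rng.standard_normal((n, n))
R2 = M @ M.T + 10.0 * np.eye(n)          # layer eigenvalues >> eps
E = 0.001 * rng.standard_normal((n, n))
Re = E @ E.T + 1e-4 * np.eye(n)          # error eigenvalues << eps

I_eps = eps * np.eye(n)
exact = 0.5 * np.linalg.slogdet(R2 + I_eps)[1] \
      - 0.5 * np.linalg.slogdet(Re + I_eps)[1]

# Linearization: logdet(Re + eps*I) ~ n*log(eps) + Tr(Re)/eps, and
# logdet(R2 + eps*I) ~ logdet(R2) when eps is small next to R2's eigenvalues.
approx = 0.5 * np.linalg.slogdet(R2)[1] - 0.5 * np.trace(Re) / eps \
       - 0.5 * n * np.log(eps)

assert abs(exact - approx) < 0.01 * abs(exact)   # agreement within 1%
```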
NeurIPS_2023_submissions_huggingface
2023
NAP: Neural 3D Articulated Object Prior
Accept (poster)
Summary: This paper introduces a diffusion-based generative approach targeting daily articulated objects as a novel target category. The model outputs part shapes and joint configurations based on the proposed graph representation. The paper also proposes a novel transformer network to accommodate the graph structure in the generative process. The proposed method is evaluated on the Part-Mobility dataset using the newly proposed evaluation metric for the task, and it demonstrates superior performance over self-made baselines of non-diffusion-based and diffusion-based generative model architectures. The paper also demonstrates several applications of conditional generation on seen and unseen synthetic datasets. Strengths: * The paper tackles the previously unaddressed setting of generating daily articulated objects and the proposed approach seems promising. * The approach demonstrates superior generative performance over the baselines both qualitatively and quantitatively. * The paper demonstrates novel conditional generative applications and demonstrates generalizability to unseen datasets. Weaknesses: # Major Although I found this work exciting and potentially interesting to the related audience, I feel the current writing quality is not sufficient for a conference paper. * Inconsistent/confusing/unexplained notations * L111,116: The bold font of T_{gi} is inconsistent. * L118,216: The italic font of SE is inconsistent. * L113,116, and supp. L9: The implicit shape latent code s_i is written as f_i or f, which is used as a node feature in L194. * L109,129: The transformation from local to global coordinates is implicitly expressed as the switched subscripts _{ig} and _{gi}, but it’s hard to understand at first glance. * L109,217: "i-th" and "ith" are inconsistent. * L215: The definition of T_{part} is missing. * Missing reference to supp.: The implementation details of the graph layer are missing in the supp., although they are referred to in L199.
How global pooling is applied is unclear. # Minor The training details of the pre-trained shape prior network are missing in the supp. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: # Questions * How is either prismatic or revolute enforced? In L125, how is a node’s joint type decided as prismatic, revolute, or hybrid using r(i,j)? Is there some threshold? * Why not use an indicator variable to determine either prismatic or revolute? * In S.1.1, how do you reflect node existence o in the MST over chirality? Do you set the chirality of the edges incident to that node to zero? # Suggestions I suggest adding the following visualizations for a better understanding of the paper: * Qualitative visualization of the ablation * Visualization of failure cases * Schematic visualization of the variables described in Sec. 3.1. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Limitations are explained in the conclusion of the main paper. However, visualizations of some of the limitations and failure cases are missing, such as physically implausible generation. Adding those visualizations would help further understand the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments, and we hope that they help to address your concerns: - **Writing and schematic visualization**: We really appreciate these careful checks and suggestions and we take all of them seriously. Please see [G1] in the global response as well. As suggested, we append a draft of the schematic visualization in the attached Fig.R8. In L215, $T_{part}$ represents the part pose in the global object frame. We will correct all the typos, inconsistent notations, and confusing subscripts as mentioned, and complete the missing details in our revision. - **Graph layer and shape prior training details**: We apologize that some graph layer details are missing in Suppl. Sec.S.1: The global pooling in the graph layer is similar to a point-net – doing a max pooling over all node features and concatenating this globally pooled feature back to each node again. Regarding the shape prior training, besides L8 in Suppl., here are more details: the optimizer is a standard Adam with lr=0.0001, and the learning rate has a step decay at [100000, 150000, 200000] iterations with factor 0.3. The batch size is 32 and we use the model checkpoint at epoch 737. We will add all these details to the supplementary, and the code that contains all implementation details will also be released once the paper is published. - **Joint type**: Leveraging the screw representation [56], as in L126, if a joint is revolute, its ground-truth prismatic working range will be [0,0], and vice versa. This is softly enforced by the loss that learns to predict [0,0] through denoising. To decide the joint type for visualization purposes (in Fig.4, left top corner), yes, there is a threshold to test whether the working range is large enough so the corresponding mode really exists. We use th=0.003 in the paper visualization for both prismatic and revolute modes.
“Why not use an indicator for these?” We agree that this is a valid design, but it would potentially add two additional variables that need to be predicted, so we directly decide the joint type from the joint range prediction, fully exploiting the simplicity of the screw representation. - **Existence of nodes:** Sorry for this confusion. There are two “indicators” in our representation: the variable o serves as the node existence indicator, while c indicates edge existence. During the final output extraction (as described in L163 and Suppl. L39), we first identify node existence by thresholding the node indicator o at 0.5. All existing nodes at this point form a complete graph (a sub-graph of the original padded one). We then utilize the negative chirality absolute value −∣c∣ of each edge as the weight to perform the MST on this complete sub-graph, determining which edges exist. - **Visualization of ablation and failure cases:** We append the visualization of the ablation in the attached Fig.R6 and of failure cases in Fig.R7. Note that in the Fig.R7 highlighted area, the drawer is not physically plausible, which means it cannot be simulated with self-collision in a physical simulator; we hope future work will tackle this problem. We’ll append these figures to our revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for their effort to address my questions. I have no further questions or comments at this time. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for your feedback and we really appreciate your time! --- Rebuttal Comment 1.2: Comment: Once again I thank the authors for the detailed responses. After I carefully read all the reviewers' comments and the corresponding authors’ responses, I would like to raise my score to accept. The reasons are as follows. The authors addressed my suggestion in the attached figures.
The visualization of the failure cases and the schematic visualization of the variables are satisfactory and would help readers better understand this paper. In writing, considering the responses both to my review and to reviewer ncuk, the authors added a detailed response to the listed writing errors/questions + the draft. I would expect the authors to carefully revise the manuscript before submitting the camera-ready if accepted. Therefore, I believe the technical merits of this paper outweigh the remaining concern about the limited size of the dataset.
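The output-extraction procedure described in this thread (threshold the node indicator o at 0.5, then run an MST over the surviving complete sub-graph with edge weight −|c|) can be sketched as follows. This is our own minimal illustration with hypothetical shapes and toy values, not the authors' released code:

```python
import numpy as np

def extract_structure(o, c, th=0.5):
    """Threshold the node-existence indicator o at th, then run Kruskal's MST
    over the surviving complete sub-graph with weight -|c[i, j]| (so edges with
    the largest chirality magnitude are kept first).

    o: (K,) node indicators; c: (K, K) symmetric chirality matrix.
    Returns (kept node ids, list of tree edges).
    """
    nodes = [i for i in range(len(o)) if o[i] > th]
    # Candidate edges among kept nodes, sorted by |c| descending (= -|c| ascending).
    edges = sorted(((i, j) for ai, i in enumerate(nodes) for j in nodes[ai + 1:]),
                   key=lambda e: -abs(c[e]))
    parent = {i: i for i in nodes}   # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # adding (i, j) does not create a cycle
            parent[ri] = rj
            tree.append((i, j))
    return nodes, tree

o = np.array([0.9, 0.8, 0.1, 0.7])           # node 2 is dropped by the 0.5 threshold
c = np.zeros((4, 4))
c[0, 1] = c[1, 0] = 0.9
c[1, 3] = c[3, 1] = -0.8                     # sign is chirality, magnitude is confidence
c[0, 3] = c[3, 0] = 0.1
nodes, tree = extract_structure(o, c)
assert nodes == [0, 1, 3]
assert sorted(tree) == [(0, 1), (1, 3)]      # the weakest edge (0, 3) is discarded
```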
Summary: The paper presents the task of generating articulated objects, encompassing the generation of both structurally and geometrically plausible objects. To address this task, the authors propose a novel articulation "complete-graph" parameterization. This parameterization encodes the geometry and part poses within the nodes, while representing the joint constraints through the edges. To implement the generation process, the authors employ a diffusion model with their designed graph fusion module. The paper also introduces a novel distance metric for evaluating the distribution of the generated articulated objects. Additionally, the paper delves into various applications leveraging conditioned generation. Strengths: * The proposed task to generate articulated objects is interesting and the motivation to learn the distribution of the articulated objects is intuitive. * The “complete graph” representation for the articulated objects is effective in decomposing the geometry and structure into the nodes and edges of the graph. Through setting a max number of nodes K, the parameterization can successfully encode most of the articulated objects into a consistent complete graph representation for the following generation process. * The graph layer to fuse and update the information in the nodes and edges is helpful based on the results of the ablation studies. Weaknesses: * For a generation task, it’s hard to evaluate the novelty of the generated objects with the existing evaluation metrics mentioned in the paper or with the user study comparing the results from different baselines. The dataset used in this paper, PartNet-Mobility, only contains 2346 objects; after filtering objects with more than 8 parts, there are about 2000 objects. Based on the split mentioned in the paper, there will only be about 1400 articulated objects in the train set used to train the diffusion model. It’s very easy for the diffusion model to overfit on such a small set of articulated objects.
A potential way to evaluate the novelty and uniqueness of the generated models is to fetch the most similar objects in the train set for the generated samples and qualitatively inspect the differences. The current evaluation and qualitative results cannot show whether the generated models are copies of models in the train set. * In this paper, the task actually simplifies the general articulated object generation task, as it only generates articulated objects whose parts are all in the rest state. The authors also assume the initial state of the articulated models in the PartNet-Mobility dataset is the closed state, which doesn’t hold true for all models in the dataset. For example, some cabinets in the PartNet-Mobility dataset have some doors open, while some are closed in the initial state. * The generation task for articulated objects is claimed to focus on both geometry and motion structure. However, for the part geometry, the authors choose to retrieve the most similar parts in the latent space for some visualizations and potentially for the evaluation. In this way, the generative task actually focuses on generating combinations of parts instead of really generating the articulated models. * The direction of the edge is explained in the parameterization; however, it is not used when constructing the minimum spanning tree. It’s unclear how the edge chirality (+1, -1) is used. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * For the evaluation, does the part geometry use the retrieved parts or directly the results from the occupancy network? Because from the rendered results used for evaluation in the supplemental material, it seems that the part geometry is more likely the retrieval result. * For the conditioned generation, it’s easier to compare the results with some existing work. For example, for part2motion, is it possible to quantitatively evaluate the generated results?
How many generated models will cover the GT motion? Based on the results of Part2Motion, it seems the handle hints in the parts are not fully understood by the network. Are there results showing that NAP can really understand the joint constraints based on the part geometry? * For the generated structures, are there statistics on the number of nodes and the number of edges for the generated set versus the train/test sets? * For the parameterization of the articulated objects, the order of the nodes also seems important to keep somewhat consistent. Are there any operations imposed on the node order when parameterizing the articulated objects (e.g., ordering parts from top to bottom)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors mention the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
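The nearest-neighbor novelty check proposed in this review reduces to a simple lookup once a pairwise object distance is available. The sketch below assumes a precomputed distance matrix under some object-level metric (the paper's Instantiation Distance would be one choice); the function name and toy numbers are illustrative, not from the paper.

```python
import numpy as np

def nearest_train_neighbors(dist):
    """For each generated sample (row), return the index and distance of
    its closest training sample (column). `dist` is a hypothetical
    (num_generated, num_train) matrix of pairwise object distances."""
    nn_idx = dist.argmin(axis=1)
    nn_dist = dist[np.arange(dist.shape[0]), nn_idx]
    return nn_idx, nn_dist

# Toy example: 3 generated samples vs. 4 training samples.
d = np.array([[0.9, 0.2, 0.7, 0.8],
              [0.1, 0.5, 0.6, 0.4],
              [0.3, 0.3, 0.05, 0.9]])
idx, nd = nearest_train_neighbors(d)
# A near-zero nearest-neighbor distance flags a likely near-copy of a
# training object; larger distances suggest genuine novelty.
```

The same matrix transposed answers the converse query the review also raises: the most similar generated object for each training object.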
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments, and we hope they help address your concerns: - **Overfitting?**: Please see response G2 in the global block. - **Rest state**: We appreciate the reviewer’s careful observation of PartNet-Mobility. We will explain this more clearly in Sec 3.1: (1.) NAP does not strictly restrict what counts as “rest”: NAP chooses to define the parts and joints in a global object coordinate system for simplicity, and any well-defined articulated object can be instantiated and represented in such an object frame. The object articulation pose does not have to be what humans consider “rest”. (2.) A relatively “canonical” rest state is a reasonable assumption for current datasets, and NAP can take advantage of it: since articulated object annotation is expensive, most objects are presented in some consistent state when the dataset is labeled. For example, PartNet-Mobility is mostly derived from PartNet, which in turn comes from ShapeNet. Whether a door is closed or open, there is relative global consistency in the dataset that forms roughly canonical states (even if half of the doors are open and half are closed, there is a two-mode canonical state), from which NAP can learn a more canonical and stable prior. - **Retrieval or reconstruction?** We will emphasize this more clearly in our revision. (1.) We evaluate with both the implicit part reconstruction and the retrieval in Tab.1 and Tab.2 (left is reconstruction and right is retrieval). Fig.4 (the third column of each object visualization group) in our main paper also shows both cases. (2.) Our primary emphasis is on the generation of structured objects. Like many scene synthesis methods [19-30], we retrieve the nearest part shape from the training set to enhance quality. 
We anticipate that future works can further refine the quality of individual parts using advanced shape-generation techniques. - **Chirality:** Sorry for this confusion; we will provide a clearer explanation in our revision: (1.) In L132, we highlight the simplicity of global Plücker coordinates (as opposed to defining them in local parent or child coordinates), i.e., $p_{(i,j)} = -p_{(j,i)}$ and $r_{(i,j)}=r_{(j,i)}$, which motivates us to compactly represent only $K(K-1)/2$ edges as in L133. But note that the nodes have no specific order (they are randomly permuted), so the edge between nodes i and j among these $K(K-1)/2$ edges may sometimes have direction $(i,j)$ and sometimes $(j,i)$. For a more elegant representation, we explicitly model the negative sign of $p_{(i,j)} = -p_{(j,i)}$ as a chirality in $\{+1,-1\}$ and let the representation always carry one consistent Plücker coordinate $p$ for the edge between nodes i and j, ignoring the edge direction. (2.) In practice, after applying the MST based on the negative chirality magnitude $-|c_{(i,j)}|$, we determine the direction of each remaining edge: if $c_{(i,j)}>0$ we do nothing; if $c_{(i,j)}<0$ we multiply the predicted Plücker coordinates by $-1$, which is equivalent to declaring that this predicted joint parameter actually has direction $(j,i)$ instead of $(i,j)$. - **Part2Motion**: Our primary focus in this first step towards deep articulated object generation is unconditional synthesis, and the applications further demonstrate that our prior is useful and flexible. (1.) As requested, in Part2Motion, we count how frequently the ground truth joint axes are covered by the generated objects (within 5 degrees and 0.05 distance, with 20 generations per GT object); the results are reported in Fig.R2. On the testing set, we observe an average 38.64% probability that the ground truth joint is covered by the conditionally generated joints. 
As these application tasks are quite new and their evaluation and task definitions are non-trivial, we leave them for future work in the area to explore further. (2.) Indeed, we found that the small handle hints are not fully parsed in Fig.5 of our main paper: the generated motion includes joints near the handle side. This is a limitation of our current method. One potential reason is that the simple pretrained shape encoder-decoder (PointNet+OccNet) is not detailed enough, and the learned part shape latent space is not sensitive enough to these small geometric details. We hope this will be improved by future approaches, and we will append this to our limitations, thanks! - **Statistics on Nodes and Edges**: These are usually reported when studying the generation of general graphs [83]. We report the statistics of the number of nodes and the node degrees (reflecting some structure) in the attached Fig.R3. Note that since the generation always produces a tree, there is no need to count the number of edges. We observe that NAP generates distributions close to those of the training and test sets. - **Order of nodes**: We do not impose any specific spatial order on the nodes in the parameterization. However, to make the network more expressive, especially during the early denoising stages, we do add a positional encoding of the order (the #th position in the list) to the network (inspired by [22]), and during training, we randomly permute the node order in the graph. --- Rebuttal Comment 1.1: Comment: Thanks for the responses from the authors. Although I still have concerns about the limited data size for articulated models and the usefulness of such a generation task, I think this paper can motivate more papers to follow up in this direction. The learned distribution can definitely support a variety of downstream tasks. I hope the authors can address the limitations of the current work more clearly in the final version. I will raise my score. --- Reply to Comment 1.1.1: Title: Thank you!! 
Comment: Thanks for your feedback, and we really appreciate your time! We will definitely improve the presentation and address the current limitations more clearly in our final version, thanks!
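The joint-coverage protocol described in the rebuttal above (a ground-truth axis counts as covered if some generated joint lies within 5 degrees in direction and 0.05 in distance) could be checked roughly as follows. The exact distance definition is not stated in the thread, so the minimum line-to-line distance is used here as one plausible reading; all names and numbers below are illustrative, not the authors' evaluation code.

```python
import numpy as np

def line_distance(p1, d1, p2, d2, eps=1e-9):
    """Minimum distance between two 3D lines, each given as a point and a
    unit direction; falls back to point-to-line distance when the lines
    are (near) parallel."""
    n = np.cross(d1, d2)
    n_norm = np.linalg.norm(n)
    if n_norm < eps:  # parallel lines
        v = p2 - p1
        return np.linalg.norm(v - np.dot(v, d1) * d1)
    return abs(np.dot(p2 - p1, n)) / n_norm

def covers(gt, generated, ang_deg=5.0, dist_thr=0.05):
    """gt and each generated joint: (point_on_axis, unit_direction).
    Directions are compared up to sign, since an axis line has no
    preferred orientation."""
    p_gt, d_gt = gt
    for p, d in generated:
        cos = abs(np.clip(np.dot(d_gt, d), -1.0, 1.0))
        if (np.degrees(np.arccos(cos)) <= ang_deg
                and line_distance(p_gt, d_gt, p, d) <= dist_thr):
            return True
    return False

gt = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
near = (np.array([0.02, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))  # covered
far = (np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))    # wrong axis
```

The reported coverage statistic would then be the fraction of ground-truth joints for which `covers` returns True over the 20 conditional generations.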
Summary: This paper proposes Neural 3D Articulation Prior (NAP) to synthesize 3D articulated object models. The key contributions include (a) an articulation tree parameterization for the diffusion denoising probabilistic model and (b) a new distance function for evaluation. The paper also shows quantitative and qualitative improvements over prior methods. Strengths: 1. This paper introduces a new problem of articulated object synthesis. While there are some prior works on (mostly) category-specific articulated object modeling, this paper focuses on generative modeling across different categories. 2. There are a few key innovative components in the proposed method, such as the articulation tree parameterization and the graph-attention denoising network. Experiments (Tab. 2) have also shown the effectiveness of these new components. 3. This paper has also proposed new metrics (distances) to evaluate this new task, which may benefit future works through fair and efficient comparison. 4. Implementation details are well-documented. Weaknesses: 1. A neural implicit surface is used to fit each part. From figures such as Fig. 4, the geometry produced by the proposed method does not look very high-quality. However, the geometry of single parts seems simpler than the geometry of an entire shape, and prior work (e.g., [100, 110]) has shown much better geometric results even for entire objects. Could the authors explain why the synthesized neural part surfaces look worse than the synthesized neural single-object surfaces from prior work? 2. The entire pipeline and the key components all seem very valuable to the community. To help us better understand what is easier for the network to learn, could the authors share some examples of failure cases or more challenging types or categories? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: One suggestion is to improve Fig. 3. Currently, it seems too packed and the text is too small to read. 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations and broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments, and we hope they help address your concerns: - **Part shape quality**: We appreciate this insightful observation. Here are some potential explanations: (1.) **Focus on articulated object generation**: Many scene synthesis methods, similar to ours, don't primarily focus on individual object shapes but rather retrieve them from a database. As our main emphasis is on generating structured objects rather than single-part shapes, we model the part shape in the most naive way, with an auto-encoder that has no regularization on the latent space. Here we run an additional experiment: when learning the pre-trained part shape module, we add regularization to the latent space (a VAE loss) and reduce the latent dimension from 128 to 64, aiming to cultivate a more meaningful part shape latent space. After learning NAP on this updated part shape prior, we observe that the part quality improves slightly, and the quantitative results evaluated with the predicted part SDF geometry improve as shown in the table: | | MMD ↓ | COV ↑ | 1-NNA ↓ | |----------------------------|------------|------------|------------| | Paper | 0.0268 | 0.4944 | 0.5690 | | With VAE regularized prior | **0.0229** | **0.5167** | **0.5490** | We believe that leveraging advanced part representations, like tri-planes or local feature grids, can further enhance reconstruction quality. We will highlight this in our limitations section. (2.) **Thin structures**: Unlike shapes in ShapeNet, the rigid parts of an object often exhibit very thin structures. In PartNet(-Mobility), many parts are non-watertight and have large areas of single-sided surface: for example, a door may be just one plane with zero thickness. To learn an SDF, we apply heavy pre-processing to these ill-defined thin structures. We know that vanilla DeepSDF or OccNet struggles with such thin structures (e.g. 
the airplane wings), which might be the biggest reason for the quality drop. Potentially, more expressive local methods or unsigned distance functions may help to solve these problems. (3.) **Part shape variance**: We are modeling rigid parts, whose shape variance may not be as small as we first thought. For example, the network has to learn very simple shapes (a door, a round button, or a drawer that may appear repeatedly in the dataset) and simultaneously learn relatively complex shapes (a whole chair seat back plus arms and bottom, or the whole body plus base of a piece of storage furniture) that are relatively rare in the dataset. Worse, since the rigid parts in our dataset have no semantic labels or notion of categories, it is not easy to balance the part samples between these simple and hard shapes during training. As a result, the network may be biased towards small, simple, repeating parts and have difficulty capturing more complex ones. This issue may be resolved by a smarter training strategy or network design for the part shape prior. (4.) **Prediction**: since the shape latent code is predicted by the denoising process, we cannot guarantee perfect accuracy in the latent space. Small changes in the latent space may lead to implausible changes in the output mesh, suggesting room for improvement in this task. - **Failure cases and more challenging types or categories**: We've included visualizations of failure cases in the attached Fig.R7. Fig.R5 showcases examples that currently violate our assumptions, which might be addressed with dummy nodes. Fig.R4 highlights the long-tail distribution of articulated objects in our dataset, with items like remotes and keyboards being particularly challenging. We'll incorporate these insights into our revision. - **Figure suggestion**: Thanks for this suggestion. Due to page constraints, the figure was downsized. We'll restructure the figure layout in our revision for better clarity. 
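For reference, the MMD, COV, and 1-NNA numbers in the rebuttal table follow the standard generative-evaluation recipe popularized for point-cloud generation; given pairwise distance matrices they reduce to a few lines. This is a generic sketch under the usual definitions, not the authors' evaluation code, and it assumes symmetric distance matrices.

```python
import numpy as np

def mmd_cov(d_gr):
    """d_gr: (num_gen, num_ref) distances. MMD averages, over reference
    shapes, the distance to the closest generated shape; COV is the
    fraction of reference shapes that are the nearest neighbor of at
    least one generated shape."""
    mmd = d_gr.min(axis=0).mean()
    cov = np.unique(d_gr.argmin(axis=1)).size / d_gr.shape[1]
    return mmd, cov

def one_nna(d_gg, d_gr, d_rr):
    """1-NN two-sample accuracy over the union of both sets; ~0.5 means
    a 1-NN classifier cannot tell generated from reference shapes."""
    g, r = d_gg.shape[0], d_rr.shape[0]
    d_gg = d_gg + np.diag([np.inf] * g)  # exclude self-matches
    d_rr = d_rr + np.diag([np.inf] * r)
    correct = (d_gg.min(axis=1) < d_gr.min(axis=1)).sum()   # gen side
    correct += (d_rr.min(axis=1) < d_gr.min(axis=0)).sum()  # ref side
    return correct / (g + r)

# Toy case with two well-separated sets: the 1-NN classifier separates
# them perfectly, so 1-NNA is 1.0 (the worst possible score).
d_gr = np.array([[1.0, 1.0], [1.0, 1.0]])
d_gg = np.array([[0.0, 0.1], [0.1, 0.0]])
d_rr = np.array([[0.0, 0.1], [0.1, 0.0]])
```

Under these conventions lower MMD and 1-NNA (towards 0.5) and higher COV are better, matching the arrows in the table.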
--- Rebuttal Comment 1.1: Comment: The authors have answered my questions. After reading all reviewers' comments and the responses from the authors, I am leaning towards keeping my original rating. --- Reply to Comment 1.1.1: Title: Thank you!! Comment: Thanks for your feedback and we really appreciate your time!
Summary: This paper proposes a diffusion-model-based 3D generative model for articulated objects. The major contributions are 1) the tree representation of articulated objects, 2) the corresponding graph-based diffusion model, and 3) a distance metric for evaluation. The proposed framework works well on PartNet-Mobility objects, beating baselines based on other generative models, and enables a series of conditional generation applications. Strengths: - The motivation is clear. To catch up with the latest trends and use diffusion models, the tree/graph parameterization is developed and implemented in a neat way. - The proposed solution works well on the tested dataset and outperforms other generative baselines, including a latent diffusion model. - The downstream applications are interesting and demonstrate one important advantage of this framework: conditional generation is easy. - The graph attention denoiser is reasonable and serves well in the entire framework. Usually graph-based networks don't generalize very well. - The proposed distance metric makes a lot of sense. Weaknesses: ### Major: (my current rating is based on my concerns as listed below) - Running the diffusion process on a complete graph is computationally heavy and slow. I'm not sure how well this method scales. The illustrated graphs are all relatively simple, with few nodes and edges. Please see the section below for my detailed question regarding this point. - The writing is unclear, and I quickly got lost, especially in the approach section. Some of the issues can still be figured out from context, but others really hurt the readability (e.g., I had a hard time understanding the edges section). Please see my detailed comments in the Questions section. I tried to list as many as I could but probably still missed some. - In many places, this paper claims to be "the first 3D deep generative model to synthesize 3D articulated objects" or to be "introducing the articulated object synthesis problem". 
This is a bit over-claimed, since 3D articulated objects form a very broad domain, which also includes animals, humans, and many other things. There is a large literature on 3D human/animal generation in the computer vision field. Meanwhile, the motion patterns and the graph scale studied in this work probably only apply to robots and simple objects. Therefore, it would be better to tone this down a bit. - Is PartNet-Mobility the only dataset that can be used? The objects in this dataset are relatively simple in geometry and motion. Is it possible to generalize to more diverse and realistic datasets containing more articulated objects like animals/humans or more challenging objects? ### Misc: - In the official Formatting Instructions, Line 101-102 state that "The table number and title always appear before the table". The submitted paper doesn't follow this. - Approaches [3.1]: Why use Plücker coordinates to represent joints? Is this the only option? If not, what is the advantage of using it over the alternatives? The motivation is a bit unclear. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: There are some unclear passages that could be clarified: - L109: is the initial pose $T_{gi}$ a node property? This differs from the statement in L106 that "Every joint (edge) has an initial pose". - L112: why is a 3D bbox $b_i$ in $\mathbb{R}^3$? Three numbers represent a point in 3D space. - L115: $o_i$ is a **per-part** binary indicator of part existence. If there are K parts, shouldn't $o_i$ have dimension {0,1}$^K$? - L116: $f_i$ is a new notation not introduced before. I assume this is a typo and it should be $s_i$? - L121: the domain of $l$ is $\mathbb{S}^2$: what is $\mathbb{S}$? - L124: "... to avoid local coordinate changes caused by parent-child order flips" --> when do parent-child order flips happen? How bad is this problem in practice? - L125: for the two ranges $r$ with domain $\mathbb{R}^{2\times2}$: can the lower range take negative values? 
- L128: the joint state $(\theta, d)$ is not defined. What are the meanings of these two variables? - L128: the relative transformation $T(\theta, d)$ is between two parts, say i and j. Then does the joint state $(\theta, d)$ belong to i or j? - L129: the global joint axis $(l_g, m_g)$ is undefined before use. How are these global axes defined? - L129: $R_{ig}$ and $t_{ig}$ are undefined before use. The authors might want to state in L109 that $T_{gi} = [R_{gi};t_{gi}]$. In addition, the subscripts are not consistent. - L136: based on the description in L121, $l\in\mathbb{S}^2$ and $m\in\mathbb{R}^3$. Then shouldn't $p_{i,j}$ have dimension 5, not 6? - L136: $c$ is a discrete value and hence not in $\mathbb{R}$. - L146: I find this domain definition strange. v and e are different variables, and you cannot simply add their dimensions to get the graph dimension. - L155/Eq.4: $\theta$ is already defined in Eq.1. Please change the variable name. - Occupancy Networks predict occupancy scores, not SDF values. In many places in this work, SDF is mentioned as the OccNet output. Just to clarify, which one is used? Other more conceptual questions: - Following my point regarding memory consumption in the weaknesses section, what are the average graph size and the corresponding memory usage and training time for the current framework? What is the biggest graph size the current solution can afford on a standard GPU at inference time, and what is the corresponding memory usage? The illustrated graphs in this work are all relatively simple, with few nodes and edges (L240: maximum 8 rigid parts). I saw that the first point of the limitations section mentions this, but please be more detailed. - It's unclear to me why doing graph diffusion on a complete graph is better than doing it in latent space. What advantages do we gain from this graph diffusion process? On the other hand, the disadvantages are obvious: high memory consumption and a tricky graph-network design. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discussed the limitations in L311-318. I agree with these points in general but I do think one important point is missing. The approach is based on two assumptions: Tree assumption and Screw joints. The limitation of these two assumptions should be mentioned. For what kind of objects will they break? I agree that the potential negative societal impact isn't an concern for this work as mentioned in L325. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments, and we hope they help address your concerns: - **Graph size and resources**: (1.) **Dataset property**: PartNet-Mobility (SAPIEN) stands as one of the largest and most widely used and compared datasets for everyday articulated object modeling. We provide statistics on the number of rigid parts in this dataset in the attached Fig.R4: most objects have fewer than 8 rigid parts, and the distribution has a long tail. Some extreme examples are the remote controller and the keyboard. We hope future work can address this challenge, as noted in L313. (2.) **Larger K and resource consumption**: As detailed in Suppl. Tab.S2, scaling NAP’s maximum number of nodes from 8 to 15 and 20 still yields effective results (note that 20 nodes cover almost all objects in the dataset). Because the parameterization is compact (unlike image generation), NAP can be trained extremely fast (L26 in Supp): with a maximum number of nodes K=8, a training batch size of 64, and 10 diffusion steps supervised per forward pass, training consumes only 8,465 MB of VRAM and can be done in 9 hours on one RTX 3090 GPU. For inference: with K=20, generating 10 objects in parallel on an RTX 3080 Laptop GPU takes only 20 seconds and 2,545 MB of VRAM. - **Articulated objects claim**: Yes, there is a huge literature on humans and animals, but this paper aims to focus specifically on synthesizing everyday articulated objects. We can change our title from “Neural 3D Articulation Prior” to “Neural 3D Articulated Object Prior” and explicitly state the scope precisely in the paper. While humans and animals usually have template parametric models, it is very interesting to study how to generate these deformable objects. We will update our related work section with more literature about humans and animals in our revision. - **Why is latent diffusion worse?** 
As noted in L259, latent diffusion first generates a latent code of the object. However, since the decoder is learned, a slight error in the generated latent code can lead to a severely wrong decoded structure in 3D space. In contrast, NAP directly diffuses the whole graph and its attributes in 3D space, so errors are better controlled by the loss. Another advantage is that all the convenience and flexibility of conditional generation come from this explicit graph-space diffusion. - **Assumptions as limitations**: Thanks, we will append a new item to the limitations. The attached Fig.R5 shows two examples violating the assumptions: the left object has a chain in the middle, which can be converted to a tree with one dummy node plus motion synchronization; the Airbus handle on the right violates the screw assumption, since the joint has two revolute DoFs, which can also be resolved by adding a dummy node for the second DoF. We leave the study of these more complex kinematics to future work. - **Why Plücker?** Plücker coordinates are not the only way to represent joints but an elegant and simple one. As studied in [56], this representation unifies revolute and prismatic joints and avoids the redundancy caused by defining additional joint coordinate systems (e.g., moving the joint frame along the joint axis still represents the same axis). - **Clarification, writing, and format**: Thanks for all these careful notes and checks! We take them all seriously. Please see also the global response [G1]: - *L109:* Yes, $T_{gi}$ is a node property, which is computed from the joint initial state ($\theta=0, d=0$). - *L112:* The $\mathbb R^3$ denotes the concatenation of three real numbers, representing the side lengths of the bbox. - *L115:* The subscript $i$ in $o_i$ refers to one node, which either exists or does not; if we concatenate all nodes together, then $o$ is indeed in {0,1}$^K$. 
- *L116:* Yes, sorry for this typo; $f_i$ should be $s_i$. - *L121:* $\mathbb S^2$ denotes the unit sphere, meaning $l$ is a unit-length vector. - *L124:* L124 motivates defining the joint parameters in the global frame. For the same joint, its parameters written in the child and parent frames are quite different (since the axis line is located differently). If defined locally in either the child or parent frame, then when the parent and child node order flips, the edge has to change significantly, which may be harmful to the later learning process. - *L125:* Yes, it can be negative. - *L128-1:* $\theta$ is the revolute joint angle and $d$ is the prismatic displacement. - *L128-2:* Sorry for this confusion; we realized that the text definition of $T(\theta, d)$ is not precise. Say i is the parent part and j is the child part. $T(\theta, d)$ is the motion of the child part from its initial rest state caused by the joint state $(\theta, d)$, expressed in the parent frame. So $(\theta, d)$ can be regarded as defined with reference to the parent i. This is illustrated in the attached Fig.R8 (right), and we will append a detailed text explanation to our revision. - *L129-1:* Here the subscript indicates which coordinate frame the quantity is written in. $(l_g, m_g)$ is the Plücker coordinate of the joint written in the global object frame. - *L129-2:* Yes, $T_{gi}=[R_{gi};t_{gi}]$; we’ll clarify our subscript convention as in L110 in our revision. - *L136-1:* We represent the unit vector $l$ with 3 numbers, so $p_{i,j}$ has a dimension of 6. - *L136-2:* We will update this. But note that during prediction, the network still predicts a continuous scalar. - *L146:* To better link to the diffusion formula (everything is one $x$ in the later equations), we can change the $+$ to, for example, $\oplus$. - *L155/Eq.4:* We will change the notation. - *Occupancy or SDF?:* The SDF is used for prediction and supervision, but the network architecture is an OccNet encoder-decoder. 
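The Plücker clarifications above (a 6-dimensional $p=(l,m)$, the sign relation $p_{(i,j)} = -p_{(j,i)}$, and the absence of redundancy when sliding a joint frame along its axis) can be verified numerically. This sketch assumes the common moment convention $m = x \times l$ for any point $x$ on the axis, which is consistent with the sign relation quoted in the rebuttal; it is an illustration, not the paper's code.

```python
import numpy as np

def plucker(x, l):
    """Plücker coordinates (l, m) of the line through point x with unit
    direction l, using the moment convention m = x × l."""
    l = l / np.linalg.norm(l)
    return np.concatenate([l, np.cross(x, l)])

x = np.array([1.0, 2.0, 0.0])
l = np.array([0.0, 0.0, 1.0])
p = plucker(x, l)
# Sliding the point along the axis leaves all six numbers unchanged,
# so there is no redundant joint-frame placement along the axis.
p_slid = plucker(x + 3.0 * l, l)
# Reversing the direction negates all six numbers: exactly the
# chirality relation p_(i,j) = -p_(j,i).
p_rev = plucker(x, -l)
```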
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: First of all, thanks for the detailed rebuttal, not only answering my questions but also addressing the doubts from the other reviewers. There are quite a lot of reviews for this work, and many of the questions are properly discussed. I have read the entire reviewing thread, and I appreciate all the effort from the authors. I agree with the others that this paper studies an interesting problem by introducing LDM into articulated object generation. This is why I'm totally fine if this work goes into the final proceedings. At this point, I choose to keep my original rating, since my concerns regarding the writing and claims remain without seeing the improved version. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for your feedback, and we really appreciate your engagement in the discussion! NeurIPS guidelines do not allow us to post a complete improved revision of the paper. However, we share in the following a revised draft of two paragraphs from Sec.3.1 (parameterization, L106-L143). We hope this helps to address your concerns. If you have any questions or suggestions, please let us know, thank you!
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive comments and are glad that they all agree on our contributions/novelty: 1. **Contribution and novelty of studying a new problem** 2. **Contribution of new evaluation metrics for this novel task** 3. **Technical contribution of the first deep-learning framework for this problem** 4. **Performance and effectiveness of our approach, including novel applications** We believe that, despite some imperfections in the details, our work has the potential to inspire future works and pave the way for new research areas. We will answer the review comments and questions and hope this helps address some concerns. We first respond to some common questions in this block and then provide individual responses to each reviewer. To streamline the discussion, we reference this global response in the individual sections when necessary. An accompanying PDF with figures for this rebuttal is also attached. **Global response to common questions:** - **[G1]** **Paper writing**: We are thankful for the careful and constructive feedback from all the reviewers, including typos, inconsistencies, missing details, figure labels, and table layout. Indeed, our presentation, especially in Sec.3.1, is not easy to follow due to the compression of the original manuscript to fit the page limits. We will revise the paragraphs, fix all the issues the reviewers bring up, and check all the text carefully. The discussions in this response will also be incorporated into our revised paper. As suggested by WYGQ, a draft schematic visualization of the variables described in Sec. 3.1 is provided in Fig.R8 in the attached PDF. - **[G2]** **Overfitting** (PAAU, HNRT): Regarding this problem, we would like to argue the following points: 1. We retrieve the nearest training object (w.r.t. our ID metric) and visualize some samples in the attached Fig.R1. 
As it shows, both geometric and structural differences can exist in the (generated object, nearest neighbor) pairs. 2. We mainly demonstrate our effectiveness in modeling articulated-object distributions, because this is the most important feature of generative models. The baselines, by contrast, cannot efficiently fit the distribution with similar or even larger network capacities. 3. Overfitting is also an active research problem for diffusion models in general [1]. Even the well-known image diffusion models show overfitting behavior, yet they are widely used for their effectiveness in many applications; this also applies to our method. For example, in Sec.4.4 we show that the learned articulation prior can be used for articulating static models (Fig.6) or for part-to-articulated-object completion with multiple proposals using conditioned generation. 4. One bottleneck here, compared to the widely available image diffusion models, as pointed out by HNRT, is the size of the dataset for training. Potentially, collecting larger datasets would be very beneficial to the community. On the other hand, new techniques that learn articulation priors from images or videos may also help to address this issue. 5. For evaluation, we use different training and testing dataset splits, ensuring the evaluations of the learned distribution are valid. [1] Carlini, Nicholas, et al. "Extracting training data from diffusion models." arXiv preprint arXiv:2301.13188 (2023). Pdf: /pdf/f56599388ef108fe85cf024e01ad0879018ee79d.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents the first generative model over articulated shapes. The paper presents several contributions: (1) a parameterization of articulated shapes that is easy to use with neural networks, (2) a design of denoising diffusion architectures that are structured and can effectively denoise shapes in the introduced parameterization, and (3) extensive evaluations using novel and meaningful metrics. Strengths: The paper presents the first such generative model of articulated shapes and demonstrates excellent results. The extensive baseline comparisons support the claims. While latent diffusion comes close in performance, I believe the structured models introduced in this paper would become even more relevant with more complex shapes. The ideas presented are original and very well explained. I appreciated the supplemental video visualizations. Weaknesses: - The model is trained on a dataset of around 2k shapes. I wonder if there is significant overfitting. Visualizing the nearest training shape for the generated samples would be helpful. - The forward process adds noise to discrete indicator variables (both for nodes and edges). How is the binning done during sampling, i.e., from a floating-point value, how is the indicator value computed? - Adding noise to the introduced parameterization leads to intermediate noisy states that do not correspond to any valid 3D shape. Is that true? If not, it would be great to add a visualization of the diffusion process. - The baseline details are not all obvious. The technical exposition is at a high level. Without publicly available code, these evaluations would not be reproducible. - How is the random sampling over joint poses done for implementing the ID metric? Is the distribution over poses known for each shape? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Listed in weaknesses. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments and we hope that they could help to address your concerns: - **Overfitting**: Please see [G2] in our global response block. - **Indicator variables**: We apologize for any confusion caused. As detailed in Suppl. L39, we initially determine node existence by thresholding the node indicator o at 0.5. For all identified existing nodes (which form a sub-complete graph), we utilize the edge indicator/chirality c value, specifically -|c|, to construct a Minimum Spanning Tree (MST). The binarization process occurs within the MST during the selection of tree edges. We will clarify this procedure in our revised manuscript. - **Diffusion process**: Yes, when adding noise, for example, to the Plucker coordinates, the intermediate steps may not correspond to valid joint parameters anymore. While we currently treat the parameterization as if it exists in continuous Euclidean space and employ standard Euclidean diffusion (which has proven effective), we concur that leveraging advanced manifold diffusion techniques (like Grassmannian for Plucker) or discrete diffusion methods for indicators could enhance the generation process. This point is highlighted in our limitations section (L316). We also visualize the intermediate steps by projecting the noisy parameterization back to the nearest valid parameterization; the bottom three small figures in Fig. 2 and the small animation in our video starting at 2:51 show these visualizations. We will enhance the clarity of these visualizations in our revised version. - **Baselines**: We provide more details of the baselines in Suppl. L53 Sec.S1.2, and we will release the code once the paper is published. We hope these baselines will also help this new area. 
- **ID Pose sampling**: As outlined in L222, each joint possesses working ranges (predicted for generated objects and sourced from ground truth for references) for both prismatic and revolute modes. We uniformly sample within these ranges to obtain the sampled joint states, implying that the distribution is simply a uniform distribution, parameterized by the range's endpoints. We will make this clearer in our revision. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It answers all my questions. The nearest neighbors seem very close to the generated samples, implying that the model does not generalize / interpolate much. This could be due to the scale of the training data, as the authors mention. I believe the method and the results still provide useful insights. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for your feedback and we really appreciate your time! Another potentially exciting direction we have recently been thinking about is to learn such articulated object priors from large-scale image or video datasets, whereby we may combine the knowledge in accurate but expensive smaller URDF datasets (like PartNetMobility) with large-scale but unlabelled video/image datasets.
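The thresholding-plus-MST binarization described in the rebuttal above (node indicator thresholded at 0.5, then a Minimum Spanning Tree built over edge weights -|c|) is concrete enough to sketch. The following is an illustrative reconstruction, not the authors' code; the function name and data layout are invented for the example:

```python
def extract_tree(node_ind, edge_c):
    """Binarize continuous diffusion outputs into an articulation tree.

    node_ind: per-node existence scores o (floats).
    edge_c:   dict mapping node pairs (i, j) to the predicted edge
              indicator/chirality value c (a float).
    Returns the existing nodes and the MST edges over them, using
    -|c| as the edge weight (edges with large |c| are kept first).
    """
    # Step 1: threshold the node indicator o at 0.5 to decide existence.
    nodes = [i for i, o in enumerate(node_ind) if o > 0.5]
    parent = {i: i for i in nodes}

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Step 2: Kruskal's MST on the complete graph over existing nodes,
    # sorting edges by -|c| ascending (equivalently, |c| descending).
    edges = sorted((-abs(c), i, j) for (i, j), c in edge_c.items()
                   if i in parent and j in parent)
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # binarize: keep the edge only if it is a tree edge
            parent[ri] = rj
            tree.append((i, j))
    return nodes, tree
```

For instance, with three confident node indicators and one low one, only the three parts survive the threshold, and the MST keeps the two joints with the largest |c| among them while discarding the rest of the complete graph.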
Summary: This paper introduces the novel task of articulated 3D object generation. The method devises a parameterization of an articulated 3D object by representing it as a complete graph with nodes corresponding to parts and edges corresponding to joints. The method then trains a diffusion model on this parameter space. Once parameters are generated with the diffusion model, they can be converted back to an articulated 3D object. This paper also proposes a novel metric for evaluation that considers both the geometry and motion of the object. The paper compares to adapted baselines both quantitatively (using the new metric) and qualitatively. Results suggest that this approach outperforms the adapted baselines. Strengths: 1) Novel task. NAP solves the task of articulated 3D object generation. The task is useful for creating data that incorporates motion and this work could inspire future research in this new area. 2) This paper devises a parameterization of 3D articulated objects that can be integrated with a diffusion model. 3) The paper takes a thorough approach to evaluation creating a novel distance metric that considers both geometry and motion. Results outperform adapted baselines both quantitatively and qualitatively. Weaknesses: 1) This paper could benefit from more clarity on distinctions between contributions from this work versus existing techniques. Specifically, consider adding more detail differentiating the parameterization contributions of this paper that are distinct from prior work as it seems that the parameterization is closely based on URDF [1] for converting the articulated object into the graph representation that can then be used with diffusion. Additionally, further distinction between general graph-attention denoising networks and the contribution of this work’s architecture would be helpful. 2) This method relies heavily on existing techniques such as diffusion models and graph representations of articulated objects. 
Significant contribution seems to come from the novelty of the task itself and the way in which the paper combines these existing techniques to solve it. Minor comments: Figure 4 showing the qualitative comparisons is slightly confusing. More labels on the image itself would be helpful for clarity. References: [1] Morgan Quigley, Brian Gerkey, and William D Smart. Programming Robots with ROS: a practical introduction to the Robot Operating System. O’Reilly Media, Inc., 2015. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please further clarify the contributions regarding representation parameterization and graph-attention architecture as compared to existing methods. My understanding for the parameterization is that the novelty comes from choosing what features need to be contained in nodes (indicator, pose, bbox, shape code) and edges (chirality [indicator], Plucker, joint limits). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! Here are our responses to your questions and comments and we hope that they could help to address your concerns: - We agree that the introduction of a new task and the establishment of a benchmark metric are pivotal contributions of this paper. We also make the first attempt to provide a deep-learning solution for the articulated object synthesis problem: - **Parameterization/Representation contribution**: The URDF format, widely adopted for modeling robots and articulated objects, served as an inspiration for our representation. However, it's crucial to understand that our representation is tailored for neural network compatibility – designed to be fed into and extracted from a learned model, ensuring it can seamlessly represent a diverse range of different structured articulated objects in a dataset. While URDF is intuitive, our representation has several differences: - (1.) We define the joint and part relationships in a more unified and elegant way: in URDF, one must specify the joint type as revolute or prismatic and define the parent-to-joint and optionally the joint-to-child transformation between two parts, which leads to redundancy (e.g. translating joint coordinate frame along the joint axis won’t change the joint). Instead, we model the joint in the global object coordinate system with screw representations and use per part pose in the global coordinate frame to define the initial relative transformations between the parent and child, which unifies the revolute and prismatic joint and is more stable and cleaner for the network to learn. - (2.) We pad the tree to a complete graph, adding indicators to the complete graph, and use MST postprocessing to extract back the tree from the network output. With all these careful designs of representation that do not come with URDF, we enable the processing of deep networks on this highly irregular collection of different articulated objects. 
Existing methods of modeling articulated objects either model a fixed/simple structure of articulation, a fixed number of parts (usually two), or implicitly plug the joint states into the latent code instead of explicitly modeling the structure. Our representation, inspired by URDF and with the above-mentioned differences to URDF, is designed for neural processing and is an explicit and holistic description of various structured objects. In short, the representation contribution is a complete description of various articulated objects across the dataset that is oriented to the neural network. - **Denoising network distinction**: Indeed, there are many graph-based networks for denoising and some diffusion models for generating large and general graphs. We leverage the general attention mechanism on graphs and tailor it to our unique task. In our problem, we pay more attention to the node and edge attributes and the key observation is that the information exchange between edges and nodes is important (joints and parts talk). Thus we utilize a graph attention network as in Sec.3.3 that explicitly fuses information of both nodes and edges. We also add a PointNet-like global pooling over all node features to gain more global information. While we don't highlight this network as our primary contribution, its reasonable design is rooted in our deep understanding of the task at hand. In summary, our work is not just a combination of existing methods: besides the significant contribution of our novel task, we also contribute a novel representation that is tailored to neural network diffusion and design a denoising network with our insight into this task. We hope our first attempt toward this brand-new direction may inspire future works to propose better solutions. - **Figure suggestion**: We appreciate the constructive feedback. To enhance clarity, we will incorporate labels into the sub-images in our revised version. 
--- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It addresses my concerns. I still think that this paper solves an interesting new problem and could inspire future work on this topic. After reading the other reviews and the rebuttal, I am convinced of the contributions of this paper and will raise my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for your feedback and we really appreciate your time!
Scalarization for Multi-Task and Multi-Domain Learning at Scale
Accept (poster)
Summary: The paper studies scalarization for multi-task and multi-domain learning, which is a method to combine the losses of different tasks/domains. The authors conduct substantial experiments to draw insights into the effect of scalarization weights on multi-task/domain learning. They also propose an efficient method to search for a good set of weights. Strengths: The authors make a valuable attempt to understand the scalarization of multi-task/domain learning. It is an important problem in the literature as despite the existence of various automatic weight selection methods, it is unclear under what circumstances these methods will outperform a static scalarization. The insights are critical and novel, especially the one about gradient conflict and scalarization vs. dynamic weight updates, which is different from common belief. Weaknesses: The writing can be improved. The goal of this work is ambitious because the authors attempt to study the effect of scalarization from a wide range of aspects. However, such ambition also makes the paragraphs very condensed and jumpy. In many cases, the authors bring up several insights in one paragraph with little logical flow (e.g. sec 4.1). It would be better to use LaTeX \paragraph{} headings when much parallel information needs to be conveyed. Some conceptual analysis is lacking. Since this paper is purely empirical, the generalizability of the insights is questionable if proper conceptual analysis is missing. For instance, in fig. 3, based on the two plots, it is difficult to reach any conclusions about the influence of conflicting gradients on performance because the two experiments themselves have conflicting results. For this part, an important question that remains to be answered is how conflicting gradients affect training and why in some cases they have less effect. There are a few “jump to conclusions” situations especially in sec. 4.1 and 4.2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
In section 4.1, to obtain the optimal scalarization weight, do you perform grid search? I wonder how you are able to select a single “optimal” weight since firstly, it is impossible to search for the weight exhaustively, and secondly, it is likely that several sets of weights have indistinguishable results. Could you clarify this? 2. In fig. 4(b), it seems that training on a single task is always better than training the two tasks jointly regardless of model selection and weight tuning. Any insights on that? 3. For sec. 5, if possible, it would be interesting to see how much of the gap PBT closes compared to some oracle (more exhaustive grid search), even on datasets with few tasks/domains. It is unclear how good a performance a static weighting strategy can achieve because the only static baseline is uniform. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Improved writing:** We will carefully revise the manuscript to ensure that the logical progression of ideas is clear and coherent, and emphasize the most important insights. **Generalizing the conclusions of Figure 3:** We expanded the results of Figure 3, as detailed in the attached PDF to the global response (Figure 1). This extended analysis includes more tasks and varying learning rates to convey our insight more clearly: the MDL/MTL models with the highest generalization performance are not necessarily the ones with the least amount of gradient conflicts, and vice versa. We also observe that, while the global amount of gradient conflicts tends to increase with a higher number of tasks, the overall trend of each curve is rather consistent across model sizes. **Question 1 (optimal scalarization weight in analysis)** Indeed, for Section 4.1, our approach essentially involves a grid search: For each setting, we perform a sweep for the task weight over $p_{task_1} \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$ and then report the relative improvement of the MTL/MDL model over the corresponding single task baseline for the best ratio (where the best ratio is defined as the one that yields maximum accuracy averaged across the two tasks/domains): This can essentially be interpreted as having an oracle to select the task weight. In Section 2.1 of the supplemental material we also report some plots illustrating the results for all the ratios $p_{task_1}$ we sweep over. **Question 2 (Figure 4b):** Indeed, on the Taskonomy dataset we sometimes observe that the STL baseline still outperforms the MTL models. 
While this could simply be a symptom of the specific architecture/hyperparameters we considered, we also hypothesize that this might show the limits of forcing all tasks to share a common encoder: In fact, the Taskonomy dataset was initially introduced as a benchmark to separate groups of tasks that can benefit from training together from ones that should be kept fully separate to avoid interference [f, g]. Building on this orthogonal line of work, we posit that some task interferences may only be resolved through explicit architectural modifications. For example, allocating dedicated encoders for specific tasks might mitigate the observed performance discrepancy. **Question 3 (PBT vs static grid search oracle)** We address this question from both theoretical and practical perspectives: * **In theory**: While PBT itself does not provide any guarantee, its more recent variant PB2 [e] (which uses Bayesian Optimization to guide the exploration) provides a theoretical regret bound as a function of the population size $N$. * **In practice**: In Table 2 of the attached global response PDF, we performed an experiment comparing grid search (with uniformly distributed grid points) to PBT in a small setting with only three tasks/attributes of CelebA (`Five_o_Clock_Shadow`, `Arched_Eyebrows` and `Attractive`). Due to time constraints, we were only able to run experiments with up to 5 (uniformly distributed) grid points per task weight for the grid search, resulting in 125 models to run. In Table 2a, we observe that grid search can outperform the dynamic search of PBT, but with a much higher computational budget. We further illustrate the search space covered by PBT during its dynamic search in Figure 3b: each point corresponds to a configuration of the three scalarization weights encountered during the dynamic PBT search. 
This visualization contrasts PBT's exploration pattern with the regular uniform grid used in a classic grid search, providing insights into how the two methods differ in navigating the search space. **References** * [e] Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits, Parker-Holder et al * [f] Which Tasks Should Be Learned Together in Multi-task Learning? , Standley et al * [g] Disentangling Task Transfer Learning, Zamir et al --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the detailed feedback, which addresses most of my concern. The newly included results on PBT v.s. grid search in the global response PDF provides extra insight for the problem. Hence, I will raise my score to 6.
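The weight-selection oracle the authors describe above (sweep the first task's weight p over {0.1, ..., 0.9}, set the second weight to 1 - p, and keep the ratio with the best accuracy averaged over the two tasks) amounts to a small grid search over the scalarized objective. A minimal sketch, where `train_and_eval` is a hypothetical stand-in for the actual training pipeline, not from the paper:

```python
def scalarized_loss(task_losses, weights):
    """Unitary scalarization: a convex combination of per-task losses."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights are constrained to sum to 1"
    return sum(w * l for w, l in zip(weights, task_losses))

def grid_search_two_tasks(train_and_eval, grid=None):
    """Oracle weight selection for a two-task model.

    train_and_eval(weights) -> (acc_task1, acc_task2) is assumed to
    train an MTL/MDL model with the given scalarization weights and
    return its per-task validation accuracies.
    """
    grid = grid or [i / 10 for i in range(1, 10)]  # p in {0.1, ..., 0.9}
    best_p, best_avg = None, float("-inf")
    for p in grid:
        a1, a2 = train_and_eval((p, 1.0 - p))
        avg = (a1 + a2) / 2  # selection criterion: mean accuracy over tasks
        if avg > best_avg:
            best_p, best_avg = p, avg
    return best_p, best_avg
```

With 5 grid points per weight and three tasks, the same idea costs 5^3 = 125 training runs, which is the budget quoted in the rebuttal and explains why grid search scales poorly with the number of tasks.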
Summary: This paper seeks to better understand the complexities of unitary scalarization for multitask and multi-domain learning. The paper explores the impact of model size, degree of gradient conflict and variations in scalarization weights in order to derive a set of guiding principles for MTL/MDL. The authors also propose to leverage existing population based HP optimization procedures to efficiently search for the best set of scalarization weights. Strengths: 1. The paper presents an expansive set of experiments to understand the impact of model size, multi-task setting (MTL vs. MDL) and task affinity on the performance of the unitary scalarization approach. 2. The paper is actionable -- it proposes to leverage pre-existing population based HP optimization approaches to search for the best scalarization weights. 3. The paper is well written and the experimental methodology is described in sufficient (reproducible) detail. Weaknesses: My main issue with the paper is that I am hesitant about specific parts of the experimental methodology. Primarily, in section 4, the experimental procedure is described as tuning HPs for the single task and then using the best single task HPs for all follow-up multitask experiments. This creates an unfair comparison since it assumes that the HP setting that is best for the single task is also optimal for the MT setting, thus bringing the robustness of the results to question. (This is especially considering that, forcing the scalarization weights to sum to 1, means that the effective per-task learning rate is always smaller than for the single task setting) Also, there exist confounding variables for the experiments on **model capacity and gradient conflict** that are not addressed. 1. Are the ResNet models used for Figure 3 pre-trained or trained from scratch? 2. How were the learning rates for this experiment chosen? 
In general, I suspect that there is also a dependence of the degree of conflict (after a reasonable number of epochs like 1 in your case) on the learning rate used. It would be important to see if the degree of conflict (at a fixed model size) varies substantially with learning rate and whether this variance is smaller or larger than the variance that comes from changing model size at fixed LR. ----- Update ------ Updated score after rebuttal. Thanks for the responses @ Authors I am willing to raise my score if these concerns are addressed. Missing relevant citation 1. Exploration of model capacity and scalarization weight https://arxiv.org/pdf/2302.09650.pdf Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Questions 1. Is the sweep to find the best single dataset HP performed for each model size (for Figure 2) or is it performed for 1 model size and then used across all sizes? 2. For Figure 3, how do the hyper-parameters like learning rate differ across model sizes? Do you use a fixed learning rate across all model sizes? Or use a pre-determined best learning rate for each model size? 3. Are the ResNets in Figure 3 pre-trained ResNets? 1. It would be interesting to see if the effect still holds for ResNets trained from scratch vs. pre-trained on say ImageNet Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Authors have addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
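For concreteness, the "degree of gradient conflict" discussed in this review is commonly quantified as the fraction of per-task gradient pairs whose cosine similarity is negative (the convention popularized by PCGrad); the paper may instantiate the measurement differently, so treat this as an illustrative convention rather than the authors' exact definition:

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def conflict_fraction(task_grads):
    """Fraction of task-gradient pairs pointing in opposing directions,
    i.e. pairs with negative cosine similarity."""
    n, conflicts, pairs = len(task_grads), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if cosine(task_grads[i], task_grads[j]) < 0:
                conflicts += 1
    return conflicts / pairs
```

Under this convention the reviewer's question becomes concrete: compute `conflict_fraction` per (model size, learning rate) pair over training and compare how much the number moves along each axis.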
Rebuttal 1: Rebuttal: **Gradient conflict and impact of learning rate:** Following reviewer ytBJ's suggestion, we added gradient conflict measurement experiments with varying learning rates, as illustrated in Figure 1 of the attached PDF (global response). The results show that the variance of gradient conflict measurements tends to be higher across learning rates than across the different model sizes we considered. Notably, we observe that when the learning rate is excessively high, leading to divergence in training loss, there are distinct peaks in gradient conflicts. Nevertheless, when the loss is well-behaved (does not diverge), the general trend of gradient conflict measurements across training epochs remains similar across learning rates. **Learning rate choice:** * **For the different model sizes in the analysis:** We do conduct a sweep across different learning rates for each single task architecture (e.g. `[3e-1, 3e-2, 3e-3, 3e-4]` for DomainNet). However, in practice, we found that the optimal single-task learning rate remained consistent across different model sizes. While this might vary with a more fine-grained learning rate grid search or different families of architectures, for practical purposes our analysis led us to use a single learning rate for one model family across different depths/widths. In practice, the only parameters we adapt for different model sizes are the batch size and the number of gradient accumulation steps. 
* **PBT experiments:** For the experiments of section 5 in the paper (PBT and MTO comparison), we train all models for different learning rates and report results for the best performing one (specifically the sweep is done over `[5e-5, 5e-4, 5e-3]` for CelebA and `[3e-3, 3e-2]` for DomainNet). * **Normalizing scalarization weight to sum to 1:** We acknowledge that removing the constraint of normalizing scalarization weights to sum to 1 could be akin to further tuning the learning rate for each MTL model, potentially leading to improvements over the single task baseline. Since our primary goal was to compare the performance of the different MTL/MDL models against each other (mainly using single-task performance to compute relative improvement), we chose to keep the normalization constraint to keep the search space at a reasonable size and make the analysis more scalable. **Pretrained vs from scratch models:** In all experiments, we train the models from scratch. It is very likely that different pretraining strategies would impact task interference and MTL/MDL performance, but we didn't investigate this angle in this work. **Missing citation:** Thank you for the suggestion. We will add the citation on designing scaling laws for multi-lingual language models in the related work section. --- Rebuttal Comment 1.1: Title: Question about PDF uploaded Comment: Hi Authors, Thank you very much for your responses. I have a quick question about Fig 1(a) in the PDF that you uploaded. Since the model sizes are not marked on the line, it is really hard to see what is going on here. 
I guess the question I was trying to have answered with the learning rate vrs capacity question is this : at any fixed point in training (say 50%), if we consider two learning rates $l_1$, $l_2$ that are sufficiently different but not divergent (w.r.t the model), and we consider $\mathrm{model}_1$, $\mathrm{model}_2$ where size(model1) < size(model2), could $$\text{gradconflict}(\mathrm{model}_1, l_1) < \text{gradconflict}(\mathrm{model}_2, l_2)$$ but $$\text{gradconflict}(\mathrm{model}_2, l_2) < \text{gradconflict}(\mathrm{model}_1, l_2)$$ This would mean that the conclusion that larger models have higher conflict would be invalid except when conditioned on a specific choice of learning rate. --- Reply to Comment 1.1.1: Title: Impact of the learning rate on the relative model sizes' ranking wrt. gradient conflicts Comment: Hello reviewer ytBJ, thanks for your response and for clarifying the question. Please find our answers below **1.** We did not claim that larger models always imply more gradient conflicts, sorry if the text was misleading in that regard. Rather our main observation regarding model capacity was that changing model capacity does not significantly impact the magnitude of gradient conflicts, and yet it does have a visible impact on the MTL/MDL performance (e.g. line 216 and Figure 3a). **2.** Following your question, we looked further into whether the relative ordering of model sizes based on gradient conflicts changes with learning rate. Our methodology was as follows: * We take the data from Figure 1a and rank each model size in terms of gradient conflict, for each learning rate and time step (in ascending order, rank of 1 = lowest gradient conflict). 
* Across time steps, we compute the most common rank, as well as how many times the rank at any time step matches the most common one (we call this ratio `consistency`) for each model size and learning rate * We report these values (most common rank and consistency across time steps) for different model sizes pairs and learning rate **summary:** Our main observation is that generally, larger models do exhibit more gradient conflicts (higher global rank), but the consistency of this behavior indeed is impacted by the learning rate: the relative ranking fluctuates more at lower learning rates (lower consistency) ### DomainNet - 6 tasks - ResNets | | r26 | r50 | consistency | |:---------|------:|------:|:--------------| | lr=0.003 | 1 | 2 | 56.7% | | lr=0.03 | 1 | 2 | 73.3% | | lr=0.3 | 1 | 2 | 93.3% | ### CelebA - 40 tasks - ViT-S/4 - fixed depth | | w=0.5, d=3 | w=1, d=3 | consistency | |:----------|-------------:|-----------:|:--------------| | lr=0.0005 | 1 | 2 | 52.0% | | lr=0.005 | 1 | 2 | 74.0% | | | w=0.5, d=9 | w=1, d=9 | consistency | |:----------|-------------:|-----------:|:--------------| | lr=0.0005 | 1 | 2 | 66.0% | | lr=0.005 | 1 | 2 | 70.0% | ### CelebA - 40 tasks - ViT-S/4 - fixed width | | w=1, d=3 | w=1, d=9 | consistency | |:----------|-----------:|-----------:|:--------------| | lr=0.0005 | 1 | 2 | 60.0% | | lr=0.005 | 1 | 2 | 58.0% | | | w=0.5, d=3 | w=0.5, d=9 | consistency | |:----------|-------------:|-------------:|:--------------| | lr=0.0005 | 2 | 1 | 52.0% | | lr=0.005 | 1 | 2 | 70.0% |
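The ranking methodology the authors describe above can be sketched directly: at each time step, rank the model sizes by their gradient-conflict measurement (rank 1 = least conflict), then report each model's most common rank and the fraction of time steps at which it holds that rank. An illustrative sketch with an invented data layout, not the authors' analysis code:

```python
from collections import Counter

def rank_consistency(conflicts):
    """conflicts[t][m]: gradient-conflict measurement of model m at time
    step t, all at one fixed learning rate. Returns, per model, its most
    common rank (1 = lowest conflict) and the 'consistency' ratio: how
    often its per-step rank matches that most common rank."""
    n_models = len(conflicts[0])
    ranks_per_model = [[] for _ in range(n_models)]
    for row in conflicts:
        order = sorted(range(n_models), key=lambda m: row[m])  # ascending conflict
        for rank, m in enumerate(order, start=1):
            ranks_per_model[m].append(rank)
    out = []
    for ranks in ranks_per_model:
        mode, count = Counter(ranks).most_common(1)[0]
        out.append((mode, count / len(ranks)))
    return out
```

In the toy test below, the smaller model has the lowest conflict at two of three time steps, so its most common rank is 1 with consistency 2/3, mirroring the kind of numbers shown in the tables above.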
Summary: This work is interested in analyzing the extent to which scalarization is an effective strategy against negative transfer in multi-task and multi-domain learning. Scalarization focuses on selecting an adequate weights for a convex combination of task losses, rather than employing expensive or complex conflict mitigation strategies. Although the search space for scalarization weights grows exponentially with the number of tasks, during training minimizing a weighted sum of task losses is fast and simple compared to many complex optimization methods and has recently been shown to be just as good. This work therefore attempts to better understand the dynamics of scalarization in multi-task models by studying multi-task generalization under scalarization alone. They find a few findings which appear consistent across the settings they consider: the first is that scalarization is more effective as model capacity grows; the second is that uniform scalarization is rarely optimal, implying that for each MTL setting the scalarization weights must be tuned; the final observation is that while gradient conflict can predict e.g. task affinity, it does not predict generalization because model capacity does not observably affect gradient conflict. Using these observations, the authors posit that a strong scalarization approach to MTL can be extremely effective, but the search for the optimal scalars makes it more expensive than other proposed methods. To this end, the authors leverage population based training to efficiently explore the parameter space of task weights. They find that models trained with scalarization weights from PBT outperform the uniform scalarization baseline, as well as several other sota optimization methods. Strengths: - The paper is well written. It conveys its core ideas and motivation clearly. 
- The in-depth, rigorous exploration into multi-task learning dynamics is important as recent work has shown that most prior optimization work is not actually beneficial for standard MTL problems. - The observation that uniform weighting is rarely optimal is useful, even if not surprising. - The findings w.r.t. task conflict and generalization are very interesting, and helpful to consider in further development of MTL methods. - In total, the analysis of section 4 could be helpful for the design of future methods which aim to target scalarization, and they serve to motivate the proposed method in section 5. - The proposed method is clearly useful empirically, and demonstrates the effectiveness of scalarization vs. other, much more complex, optimization methods. Weaknesses: - All 3 key conclusions come from experiments which study only 2 tasks at once. While it is not unreasonable to extrapolate some conclusions from this setting, some conclusions could be at least verified for larger task settings, even up to 3 or 4 simultaneous tasks just to ensure the trends still hold. For example, does model capacity really not affect gradient conflict levels when considering all 40 tasks of CelebA? - The models all use the optimal single-task parameters but this might be unfair to the MTL models, e.g. [1] suggested that the learning rate should scale with the number of tasks if all else is fixed. - The final results should probably use random scalarization [2] as an additional baseline. I find this to be especially true given that the uniform models almost uniformly outperform the other optimization methods, so it is not extremely surprising that additional tuning of the task weights will result in the best performance on the tables. - The comparison to previously considered SOTA methods only goes up to 7 or 8 tasks, whereas many of the methods were tested on e.g. up to all 40 tasks on CelebA. It’s not clear if PBT can efficiently scale up to 40 tasks. 
To that end, a comparison of the overall compute used by the tested methods would be really helpful. [1] the importance of temperature in multi-task optimization, Mueller et al., 2022 [2] reasonable effectiveness of random weighting: a litmus test..., Lin et al., 2021 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Do you know how PBT scalarization compares to random scalarization? - How does the entire procedure of PBT compare to other optimization methods w.r.t. total compute time or flops? - Finally, I’m particularly interested in whether or not the trends w.r.t. model capacity and gradient conflict hold as the number of tasks increases. Do you happen to know if they do? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
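For concreteness, the weighted-sum (scalarization) objective discussed in this review can be sketched on a toy two-task problem; the quadratic losses, weights, and learning rate below are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np

# Two toy quadratic "task losses" sharing one parameter vector theta:
#   L1(theta) = ||theta - t1||^2,  L2(theta) = ||theta - t2||^2
t1, t2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def task_grads(theta):
    return 2 * (theta - t1), 2 * (theta - t2)

def scalarized_sgd(w, lr=0.1, steps=200):
    """Minimize w[0]*L1 + w[1]*L2 by plain gradient descent on the
    weighted sum -- no per-task gradient surgery of any kind."""
    theta = np.zeros(2)
    for _ in range(steps):
        g1, g2 = task_grads(theta)
        theta -= lr * (w[0] * g1 + w[1] * g2)  # weighted sum of task gradients
    return theta

theta_star = scalarized_sgd(w=(0.3, 0.7))
# For this convex toy problem, the minimizer is the weighted average
# of the two task optima: 0.3 * t1 + 0.7 * t2 = [0.3, 0.7].
```

Because the combination is convex, the choice of weights directly determines the trade-off point reached, which is why tuning them matters.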
Rebuttal 1: Rebuttal: **1. Gradient conflicts:** In response to reviewer wCAe's inquiry about gradient conflicts, we have expanded our analysis to include experiments that encompass all 40 tasks/attributes of CelebA and all 6 domains of DomainNet, when training a uniform MTL model. These additional experiments are illustrated in Figure 1 of the attached PDF. Generally, we observe that while the proportion of gradient conflicts may increase with the number of tasks, the overarching trend remains consistent: Low gradient conflict does not necessarily correlate with optimal MTL/MDL performance. This observation holds true across different model sizes and aligns with the conclusion C2 of the submission. **2. Population Based Training (PBT)**: **a. Computational cost:** A nice feature of Population-based Training search is that the computational cost can be easily decorrelated from the number of tasks. In fact, the main computational cost comes from the number of models in the population ($N$): While increasing $N$ does increase the search space coverage, it is also possible to control the exploration/exploitation trade-off through other parameters; namely the number of epochs before models in the population pause and are compared against one another ($E$) and the proportion of the population killed in each exploration step ($Q$); both $E$ and $Q$ have an impact on the computational cost which is often negligible in practice as it only incurs a few additional checkpointing/writing operations. In other words, even for a large number of tasks, we can use a low value of $N$ which introduces a natural trade-off between the compute budget allocated to the scalarization weight search and search space coverage. To better highlight this trade-off, we report additional results of PBT search while varying the hyperparameters $N$ and $E$ in the global response (Table 1a); As suggested by reviewer wCAe, we scale the experiments to cover all 40 tasks on CelebA. 
In that setting, while increasing population size generally improves the search result, we observe that *(i)* we can get good performance even when the population size is significantly smaller than the number of tasks and *(ii)* in the regime of a large population, the cost of going from e.g. N=24 to N=40 is not worth the gain in accuracy. **b. Comparison to multi-task optimization methods (MTO):** In the attached PDF (Table 1b), we summarize the computation costs of PBT and some MTO methods. The key difference between gradient-based MTO and PBT lies in the memory bottleneck: PBT requires training $N$ models, but independently, hence the algorithm does not require additional storage compared to training a single model. In contrast, gradient-based MTO methods only train a single model, but require storing each per-task gradient to compare them against each other in every training iteration. Nevertheless, the efficiency comparison between MTO methods and PBT search is highly dependent on available resources. For scenarios with ample RAM/GPU memory and a reasonable number of tasks, MTO methods may still be computationally viable. Conversely, PBT's natural parallelism is favorable in scenarios with many memory-constrained devices. **3. STL vs MTL learning rate** **a. PBT experiments**: For the experiments of section 5 in the paper (PBT and MTO comparison), we trained all models with different learning rates and report results for the best-performing one (specifically `[5e-5, 5e-4, 5e-3]` for CelebA and `[3e-3, 3e-2]` for DomainNet). **b. Analysis**: Indeed, further tuning the learning rate for MTL models could lead to better performance with respect to the STL baseline; in our setting, this would be roughly equivalent to dropping the constraint that the scalarization weights have to sum to 1 (line 115), but it would also greatly increase the search space / number of experiments. 
In our analysis, we mainly use the single-task model as a reference baseline, with the primary goal of comparing the relative improvement of the different MTL/MDL models against each other (for different task ratios and model sizes). This is why we chose to select the learning rate so as to favor the STL baseline rather than a specific MTL setting (e.g. the uniform weights). Nevertheless, after reviewing the suggested reference [1], it would indeed be interesting to investigate how their conclusions on the optimal learning rate/task temperature evolve across different model sizes. **Random scalarization baseline**. We added results for the suggested random scalarization baseline (RLW) of **[2]** in Table 2a of the attached global response PDF for the CelebA setting with 40 tasks. Following the reference, in every training iteration we sample task weights from N(0, 1) and normalize them via a softmax function. In that setting, we observe that RLW yields a slightly stronger baseline than uniform weighting on average, and that the configuration found by PBT outperforms both when searching with a population size larger than $N=6$. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: Although the review did not engage, I'll carefully read and consider it during the decision period. AC --- Rebuttal Comment 1.2: Comment: Thank you for your response. You have addressed my two key concerns (limited analysis to 2 tasks and comparing to RLW), and so I will raise my score to a 6!
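The RLW baseline added in this rebuttal can be summarized in a few lines; the sketch below follows the procedure described (weights sampled from N(0, 1) and softmax-normalized every training iteration), with all other details being illustrative assumptions:

```python
import numpy as np

def rlw_weights(n_tasks, rng):
    """Random Loss Weighting: draw fresh scalarization weights each
    training iteration from N(0, 1), then softmax-normalize them."""
    z = rng.standard_normal(n_tasks)
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
w = rlw_weights(40, rng)             # e.g. one weight per CelebA attribute
loss = float(w @ np.ones(40))        # weighted sum of (dummy) unit task losses
```

With all dummy task losses equal to 1, the weighted sum is exactly 1, since the sampled weights always lie on the simplex.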
Summary: This paper analyzes the multi-task learning (MTL) and multi-domain learning (MDL) settings. The paper makes several observations: 1. MDL/MTL improvements are more significant with bigger network capacity, 2. Gradient conflicts are not necessarily well correlated with MDL/MTL performance, 3. Tuning scalarization weights is important for MDL/MTL performance, 4. Population-based training (PBT) can be an efficient way to tune the scalarization weights when there are many tasks. The experimental results support each of these claims, and lastly this paper shows that PBT can even be competitive with memory-expensive gradient-based methods, such as PCGrad. Strengths: - Paper was easy to read. Weaknesses: [major comments] - Overall, almost all observations found by this paper seem quite trivial to me. Currently, I don't think this paper provides very useful insights or something that has not been investigated before. - For instance, on page 2, (C1) is already quite trivial - we already know that larger network capacity can mitigate negative interference because larger capacity means it can accommodate more diverse information. See [1] for the reference. - (C3) is also trivial - we already know that tuning scalarization weights is important and that they should be tuned differently for each task/domain/architecture, and so on. - The conclusion of (C2) is misleading, in my opinion. The authors observed that MTL/MDL performance improves with bigger network capacity while the degree of gradient conflict remains the same, and they conclude that gradient conflict does not correlate well with actual MTL performance in practice (L219-220). This conclusion sounds weird because the network architecture is different. What if the network architecture remains the same and we resolve the gradient conflict, which is the usual assumption of other papers, such as PCGrad? - (C4) is simply an application of an existing technique (PBT) to scalarization weights, which is not very surprising. 
And the authors did not provide baselines other than Uniform, although there should be many existing methods that allow one to carefully tune the scalarization weights. - The same goes for the conclusions in section 4.1. All of (C1), (C2), and (C3) sound obvious to me. - (C1): we already know that larger network capacity can mitigate the negative interference effect, as mentioned above. - (C2): Of course the best-performing scalarization weights would not be p1=p2=0.5. - (C3): Of course MTL/MDL has a regularization effect, so the training loss converges more slowly but the test accuracy is higher. [minor comments] - In (1), the notation $\frac{\nabla}{\nabla}$ looks very weird. It should be either $\frac{\partial}{\partial}$ or simply $\nabla_\theta \mathcal{L}$. - In (2), the definition of $f$ is missing. What is it? (I assume it's $\nabla_{\theta_i} \mathcal{L}_t(x_t,y_t)$?) - Missing baseline - Sequential Reptile [2], which can resolve the gradient conflict issue without heavy memory overhead. [references] [1] Wang et al., On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment, 2020 [2] Lee et al., Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning, 2022 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the comments above. Overall, I don't think this submission is above the acceptance bar. Most of the observations seem obvious and not very informative. In order for such an analysis-style paper to be accepted, the analysis should 1. be better organized with clear insight, and 2. provide novel and useful insights that have not been found by other researchers. ------------------------------------------------------------------ [After rebuttal] I read the authors' rebuttal and the other reviewers' comments. Unfortunately, I'm still not convinced by the rebuttal, thus I maintain my current score. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: The authors have properly addressed the limitations of this paper in Sec 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Intuitiveness is not a weakness:** While some of the observations in our paper may align with intuitive understanding, intuition does not always translate into empirical evidence. Our work aims to provide a rigorous and systematic analysis of multi-task learning (MTL) and multi-domain learning (MDL), and we believe that our findings contribute novel insights to the field. In particular, insights such as C1 and C3 are rarely taken into account or emphasized in previous multi-task optimization works, hence labeling them as "obvious" and "trivial" does not reflect current literature. Nevertheless, if reviewer iXKX would like to suggest additional references we should examine, we would be happy to further address those during the discussion period. **(C1) [Model size]** We are not aware of a previous reference analyzing and generalizing the link between model size and MTL/MDL performance. In fact, many MTO works (e.g. [a, b] for recent references) are still evaluated on very specific architecture/dataset combinations, omitting the effect of model size that may affect model comparison. We have reviewed the suggested reference [1], and it focuses primarily on the impact of dataset size on negative interference in multi-lingual models. While this is an important aspect, it does not directly address the link between model size and MTL/MDL performance that our paper explores. **(C2) [Gradient conflict]** Our conclusion (C2) does not undermine the effectiveness of existing gradient conflict resolution methods in multi-task learning (MTL) or multi-domain learning (MDL). Indeed, as highlighted in reference [c], multi-task optimization methods such as IMTL or PCGrad can have a beneficial regularization effect. However, our analysis shows that minimizing gradient conflict does not necessarily lead to optimal MTL/MDL performance. 
We believe this perspective adds valuable insight as it challenges the common assumption that reducing gradient conflicts to zero should be the de facto way to solve task interference in MTL/MDL (additional figures and results can be found in the PDF attached to the global response). **(C3) [Tuning scalarization weights]** We do not think it is obvious that uniform weights p1=p2=0.5 are never the best-performing solution: Uniform weighting can be a reasonable assumption as MTL models are usually evaluated by taking the uniform average of their respective task metrics, in particular when the training losses and test metrics coincide (e.g., in the Taskonomy example we consider in the main paper). In fact, the use of uniform weighting for task losses remains a prevalent scalarization method in both vanilla MTL and gradient-based multi-task optimization methods. Moreover, the claim of (C3) is not solely about the benefits of tuning scalarization weights, a concept that has indeed been explored, for example, in [d]. Our work also investigates how and whether the optimal scalarization weights evolve across different model capacities (lines 56-60 and section 4.3). This exploration adds a new dimension to the understanding of scalarization in MTL/MDL and provides insights that extend beyond the existing literature. **(C4) [Application of PBT]** To the best of our knowledge, there does not exist a well-established efficient method to tune scalarization weights in large-scale settings: Classical search algorithms such as grid search or Bayesian optimization do not scale well to a larger number of tasks/tunable parameters. While PBT itself is not a novel search algorithm, showing that it can be successfully applied to the context of scalarization is a novel contribution of our work; we additionally show that the learned schedule of dynamic task weights through PBT can compete with the automatic task weighting of state-of-the-art MTO methods. 
In conclusion, we believe that (C4) offers valuable insights and an efficient practical solution for tuning scalarization weights. **(Minor comments)** * $f$ refers to an arbitrary function of $(x, y)$ to illustrate the link between the reweighting and resampling formalisms; but indeed, in the context of equation (1), it can be replaced with $\nabla_{\theta_i} \mathcal{L}_t(x_t, y_t)$. * The suggested reference **[2]** tackles the problem of transfer learning/catastrophic forgetting in multilingual learning, which differs from our setting (multi-task learning from scratch). It also does not incur memory overhead compared to other MTO methods, but it trades this off for additional computation (each parameter update necessitates K gradient steps in Equation (7) of **[2]**). **References** * [a] RotoGrad: Gradient Homogenization in Multitask Learning, Javaloy et al. * [b] Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout, Chen et al. * [c] In Defense of the Unitary Scalarization for Deep Multi-Task Learning, Kurin et al. * [d] Do Current Multi-Task Optimization Methods in Deep Learning Even Help?, Xin et al. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Although the review did not engage, I'll carefully read and consider it during the decision period. AC
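To make the PBT-for-scalarization idea defended in (C4) concrete, here is a toy sketch with the population size $N$, the kill fraction $Q$, and a number of exploit/explore rounds (standing in for the $E$-epoch sync periods) as knobs. The evaluation function is a stand-in for training each population member and measuring a validation metric; nothing below reflects the paper's actual implementation:

```python
import random

def pbt_search(eval_fn, n_tasks, N=8, Q=0.25, rounds=10, seed=0):
    """Toy PBT over scalarization weights. eval_fn(weights) -> score
    (higher is better). Each round: rank the population, replace the
    worst Q fraction with perturbed clones of survivors."""
    rng = random.Random(seed)

    def random_weights():
        w = [rng.random() for _ in range(n_tasks)]
        s = sum(w)
        return [x / s for x in w]          # keep weights on the simplex

    def perturb(w):
        w = [max(1e-6, x * rng.uniform(0.8, 1.25)) for x in w]
        s = sum(w)
        return [x / s for x in w]

    pop = [random_weights() for _ in range(N)]
    for _ in range(rounds):
        scored = sorted(pop, key=eval_fn, reverse=True)
        k = max(1, int(Q * N))             # exploit: drop bottom-Q fraction
        survivors, losers = scored[:-k], scored[-k:]
        # explore: each dropped member clones a survivor and perturbs it
        pop = survivors + [perturb(rng.choice(survivors)) for _ in losers]
    return max(pop, key=eval_fn)

# Stand-in objective whose optimum is the weight vector [0.2, 0.8]:
best = pbt_search(lambda w: -((w[0] - 0.2) ** 2 + (w[1] - 0.8) ** 2),
                  n_tasks=2)
```

Note the compute knobs are decoupled from the number of tasks: the main cost scales with `N * rounds` model trainings, not with `n_tasks`.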
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful and detailed feedback. We have addressed each reviewer's concerns separately through comments, and we would like to use this global response to highlight the additional experiments we mention in these responses, which are illustrated in the attached PDF. Finally, we would be happy to address any further questions during the discussion period. **Gradient conflict (Figure 1):** To reinforce the conclusion *(C2)* (Figure 3 of the main submission), we follow the reviewers' suggestion to investigate how gradient conflict measurements evolve with respect to the number of tasks (reviewer wCAe) and the learning rate (reviewer ytBJ). In these experiments, we follow the same protocol as in Section 4.2, by measuring the percentage of gradient conflict pairs in each training epoch. We treat each attribute in CelebA as a separate task (40 tasks total). We vary the learning rate (in `[5e-4, 5e-3, 5e-2]`) and model size (depth in `[3, 9]`; width in `[0.5, 1]`). We believe these additional experiments reinforce our observations from Section 4 demonstrating that while high gradient conflict is indeed indicative of bad MTL/MDL performance, low gradient conflict does not correlate well with best MTL/MDL performance. **Population-based Training:** * **a) Computational cost (Table 1):** To address reviewer wCAe's question, we scale the PBT search to all the 40 tasks of CelebA and report the results in Table 1a. We report results across different population sizes (N), illustrating that the population size (hence the computational cost) does not have to scale linearly with the number of tasks. In addition, in Table 1b, we present a brief overview of the theoretical computational/memory cost of PBT compared to uniform MTL and state-of-the-art MTO methods. 
* **b) Comparison to grid search (Table 2):** To address reviewer kG9J's question, we report results for classic grid search on a small-scale MTL setting consisting of three attributes/tasks of CelebA. We also illustrate how the search space covered by PBT differs from the classic uniformly distributed grid search space. Pdf: /pdf/01d5cedd50f06b4b9ede8d0e6f250b4c5fcc2c71.pdf
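The gradient-conflict statistic used in these experiments — the percentage of task pairs whose gradients conflict in a given epoch — can be computed as sketched below, taking a conflict to be a negative dot product between flattened per-task gradients; the toy gradients are illustrative only:

```python
import numpy as np

def conflict_fraction(task_grads):
    """Fraction of task pairs with conflicting (negative dot product)
    gradients. task_grads: array of shape (n_tasks, n_params) holding
    the flattened per-task gradients at some training step."""
    g = np.asarray(task_grads, dtype=float)
    n = len(g)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    conflicts = sum(1 for i, j in pairs if g[i] @ g[j] < 0)
    return conflicts / len(pairs)

# Three toy gradients: g0 and g1 agree, g2 conflicts with both.
g = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [-1.0, 0.2]])
frac = conflict_fraction(g)   # 2 conflicting pairs out of 3 -> 2/3
```

Averaging this quantity over the steps of an epoch gives the per-epoch curve described in the protocol above.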
NeurIPS_2023_submissions_huggingface
2023
Context Shift Reduction for Offline Meta-Reinforcement Learning
Accept (poster)
Summary: This paper proposes CSRO, an offline Meta-RL algorithm that deals with the distributional shift problem in offline meta-RL with online adaptation. CSRO addresses this problem by constraining the task encoding to contain information only about the transition and reward functions, not about the state-action distribution. CSRO also proposes to use random exploration at the start of online adaptation to further address the distribution mismatch problem. Experimental results show improved performance on MuJoCo task sets. Strengths: 1. The distributional shift problem is a fundamental problem in offline meta-RL with online adaptation. 2. The information-theoretic regularization on task embeddings is novel and interesting. 3. Presentation is clear, and the paper is easy to follow. Weaknesses: 1. The evaluation tasks are a bit too simple. I expect the authors to evaluate on more complex task distributions like Meta-World ML1, which is more challenging and convincing. 2. I am concerned about the efficiency of the proposed exploration method. Random exploration can be very ineffective and may struggle on hard tasks like Meta-World or sparse reward tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors evaluate CSRO on Meta-World ML1? These tasks are more challenging and convincing. 2. There is a recent work that also addresses the problem of offline meta-RL with online adaptation [1]. Although this work is contemporary to CSRO and I do not require the authors to compare these two algorithms empirically, I expect the authors to discuss the pros and cons of CSRO compared to [1]. 3. Is random exploration a reasonable choice? Will it fail on more complex or sparse-reward tasks? [1] Offline Meta Reinforcement Learning with In-Distribution Online Adaptation. https://openreview.net/forum?id=dkYfm01yQp Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I encourage the authors to add some discussion of CSRO's limitations in the paper. I think the random exploration is one important limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed review. We are glad to discuss your concerns one by one. > **Q1**: Can the authors evaluate CSRO on Meta-World ML1? These tasks are more challenging and convincing. **A1**: We conducted experiments on Meta-World ML1, and we can see that CSRO achieves higher performance than FOCAL, which shows that our method is effective. | Env | Reach-v2 | | ----- | -------- | | CSRO | 0.19 | | FOCAL | 0.10 | > **Q2**: There is a recent work that also addresses the problem of offline meta-RL with online adaptation [1]. Although this work is contemporary to CSRO and I do not require the authors to compare these two algorithms empirically, I expect the authors to discuss the pros and cons of CSRO compared to [1]. **A2**: We reproduced the performance of IDAQ on the three environments and found that CSRO and IDAQ are comparable. | Env | Point-Robot | Half-Cheetah-vel | Walker-Rand-Params | | ---- | ----------- | ---------------- | ------------------ | | CSRO | -6.4 | -48.4 | 344.2 | | IDAQ | -5.2 | -60.9 | 297.0 | Because IDAQ's context collection tends to choose contexts with higher rewards, its performance on expert datasets should be better. However, due to its approximate greedy iteration, if the number of adaptation steps is reduced, the performance drops. On Half-Cheetah-Vel, we found that when IDAQ's adaptation budget is reduced from 1000 steps to 600 steps, its performance drops from -60.9 to -90.1, while CSRO achieves -48.4 with 600 steps. > **Q3**: Is random exploration a reasonable choice? Will it fail on more complex or sparse-reward tasks? **A3**: As described in Figure 1 and Section 4.3, when the common exploration method collects context, the collected context is influenced by the initially sampled $z_{0}$. Subsequent posteriors $q_{\phi}(z|c)$ are also affected by $z_{0}$, leading to incorrect task inference. 
Therefore, we propose to eliminate this kind of erroneous prior: first, a small amount of random data is used for initial context collection, and then the meta-policy $\pi_{\theta}(a|s,z)$ continues to explore and collect context. Since it is infeasible to train an exploration strategy using only offline data, as is done in online meta-RL, our approach is reasonable. The CORRO and OffPearl methods we compared against were also not tested in sparse environments, and our method may not work in sparse environments. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I thank the authors for their efforts and their response has largely addressed my concerns. I extremely appreciate the authors for testing IDAQ within such limited time. I have a further question: For Q3, a benefit of posterior sampling is that it has some potential ability to deal with reward sparsity, as it iteratively updates its belief and explores the environment. As shown in Figure 4 in [1], in sparse reward environments, PEARL's posterior sampling will actively explore possible goals and update its belief. E.g., if it explores a potential goal region and finds that it is not the real goal, it will update its task belief to exclude that explored goal. This mechanism makes PEARL's (also IDAQ's and OffPEARL's) exploration possibly more efficient than CSRO's. This might be a limitation of CSRO, as CSRO discards posterior sampling to some extent (to fix context distribution shift) and may harm exploration efficiency as well as performance on sparse reward tasks. I would like to raise my score if the authors add a discussion on this limitation (e.g., conduct experiments in simple sparse reward environments like Figure 4 in [1]) and add this discussion in the final version of the paper. [1] Rakelly, Kate, et al. "Efficient off-policy meta-reinforcement learning via probabilistic context variables." International Conference on Machine Learning. PMLR, 2019. 
--- Reply to Comment 1.1.1: Comment: Thank you very much for your response. We will further discuss your question. Firstly, we would like to clarify a point: As mentioned in lines 219-226 of the paper, CSRO also incorporates posterior sampling. Both CSRO and PEARL follow a similar process of initially collecting data from the environment, iteratively updating the posterior, and then utilizing the updated posterior for environment exploration. The distinction is: PEARL initially employs $z\sim p(z)$ to explore, while CSRO initially employs random exploration to collect data. We evaluated the performance of CSRO, FOCAL, and OffPEARL in the sparse-point-robot environment, as depicted in the following table: | CSRO | FOCAL | OffPEARL | | ---- | ----- | -------- | | 0.76 | 0.78 | 0.61 | We can observe that all three methods exhibit poor performance, significantly below the performance of the expert policy at 10.6. For the sparse-point-robot environment, although PEARL can effectively explore the testing environment during online RL, OffPEARL cannot recognize the testing environment during offline RL. In online RL, the policy for collecting context during the testing phase remains consistent with that used during the training phase. In the sparse-point-robot environment, whether through prior or posterior exploration, PEARL's agent moves in some directions, potentially without receiving rewards, which makes it unable to directly infer the specific environment. However, it can eliminate certain environments from consideration, so the context contains some useful environmental information. Furthermore, because PEARL encounters similar exploratory behaviors during the training phase, it can effectively use contextual environmental information. This allows it to continuously refine its beliefs throughout the entire iteration process. 
In offline RL, the training phase's context solely originates from the behavior policy $\mu$, while this is not the case during the testing phase. In the sparse-point-robot environment, even though PEARL's exploration process collects context with partial environmental information, the collected contexts exhibit significant distributional shifts and remain previously unseen. This hinders PEARL's ability to effectively utilize environmental information and update its belief accurately, resulting in poor performance. Our method CSRO is primarily suited for dense environments, with the aim of reducing context shift between the training and testing phases. We do not incorporate additional design for sparse environments. In sparse-point-robot environments, even though CSRO can collect contexts with minor distributional shifts, it encounters difficulty in capturing meaningful environmental information. As a result, its performance is also poor. In the sparse environments of offline RL, addressing this issue necessitates the simultaneous collection of contexts with minimal distributional shifts that also contain pertinent environmental information. We will subsequently include a discussion in the paper about the limitations of our method in the sparse environments of offline RL.
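As a rough illustration of the non-prior context collection strategy discussed in this thread — a short random-action warmup followed by rollouts of the meta-policy conditioned on the updated posterior — here is a toy sketch; the environment, encoder, and policy are minimal stand-ins, not CSRO's actual components:

```python
import random

class ToyEnv:
    """Two-step toy environment, for illustration only."""
    actions = (0, 1)
    def __init__(self, goal=1):
        self.goal, self.t = goal, 0
    def reset(self):
        self.t = 0
        return 0
    def step(self, a):
        self.t += 1
        reward = 1.0 if a == self.goal else 0.0
        return self.t, reward, self.t >= 2   # (next_state, reward, done)

def collect_context(env, meta_policy, encoder, n_warmup=1, n_adapt=4):
    """Phase 1: warmup episodes with uniformly random actions (no prior
    z is sampled). Phase 2: episodes acting with the meta-policy
    conditioned on the embedding inferred from the context so far."""
    context = []
    def rollout(act):
        s, done = env.reset(), False
        while not done:
            a = act(s)
            s2, r, done = env.step(a)
            context.append((s, a, r, s2))
            s = s2
    for _ in range(n_warmup):
        rollout(lambda s: random.choice(env.actions))
    for _ in range(n_adapt):
        z = encoder(context)                 # update posterior q(z | c)
        rollout(lambda s, z=z: meta_policy(s, z))
    return context

env = ToyEnv()
encoder = lambda c: sum(t[2] for t in c) / len(c)   # stand-in "embedding"
meta_policy = lambda s, z: 1                        # stand-in policy
ctx = collect_context(env, meta_policy, encoder)
```

The key point the sketch mirrors is that the warmup transitions do not depend on any sampled prior $z_0$, so the first posterior update is not biased by it.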
Summary: This paper studies the context shift problem of task representation learning in offline meta-reinforcement learning (OMRL). The proposed method, CSRO, optimizes a combination of FOCAL's objective and an adversarial objective, to maximize task information and minimize behavior policy information in task representations. Experiments in various MuJoCo tasks show that CSRO outperforms baseline methods and learns good representations. Strengths: 1. The problem of context shift in OMRL is significant. The proposed adversarial method for minimizing the information of behavior policies is novel and makes sense. 2. In experiments, the baselines selected are representative. The test performance of the method is great. Weaknesses: Some claims in the paper are not very accurate and can be improved: Line 175: Equation 5 is not equal to the mutual information. It should be explained as an approximation. Equations 6~8: The meaning of the expectation over i and j should be explained. Line 225: In context collection, taking random actions can also cause context shift relative to the training distribution. Also, the context collection strategy does not appear to be an original contribution, since CORRO (section 5.6) also uses a random exploration policy to collect context. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Since all the baselines are reimplemented with BRAC, why are the results for CORRO and BOReL presented with horizontal lines rather than training curves in Figure 3? 2. Is there any difference between FOCAL and the ablation CSRO w/o minMI & Np? 3. The performance of the ablation methods is close to CSRO in most experiments according to Figure 4. Does this mean the datasets in Half-Cheetah, Humanoid and Hopper cannot reflect the context shift problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not discussed in this paper. I hope the authors address the above issues to improve the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad to answer your questions and would appreciate any further response. > **Q1**: Line 175: Equation 5 is not equal to the mutual information. It should be explained as an approximation. **A1**: Thank you for your suggestion. This is indeed an approximation; we will modify the text to make it clear. > **Q2**: Equation 6~8: The meaning of expectation over i and j should be explained. **A2**: Here, $z_{i}$ represents the task embedding obtained by passing $(s_{i}, a_{i}, r_{i}, s_{i}')$ through the context encoder. $E_{j}[\log p(z_{j}|(s_{i},a_{i}))]$ means fixing $i$ and calculating the mean over all $z_{j}$. $E_{i}[\log p(z_{i}|(s_{i},a_{i}))\cdots]$ means calculating the mean over each corresponding pair of $z_{i}$ and $(s_{i}, a_{i})$. We will modify the text to make this clearer. > **Q3**: Line 225: In context collection, taking random actions can also cause context shift to the training distribution. Also, the context collection strategy does not appear to be an original contribution, since CORRO (section 5.6) also uses a random exploration policy to collect context. **A3**: There is also some context shift under the random strategy, and in Appendix F we show that this also brings a slight performance gap. However, it produces a smaller distribution shift than the common exploration strategy. The purpose of CORRO is to learn a more robust meta-policy and then test performance under contexts collected by different policies, including a random policy. We do it differently: we use warmup data collection with a non-prior random strategy and continuously update the posterior distribution over the context to continue collecting, finding that this can alleviate the context shift. > **Q4**: Since all the baselines are reimplemented with BRAC, why are the results for CORRO and BOReL presented with horizontal lines rather than training curves in Figure 3? **A4**: This is because CORRO and BOReL train the encoder first, and then train the policy. 
It is therefore more appropriate to draw a horizontal line for them. The remaining methods train the encoder and policy simultaneously, so a curve is more appropriate there. This makes the comparison fairer and easier to read. > **Q5**: Is there any difference between FOCAL and the ablation CSRO w/o minMI & Np? **A5**: FOCAL and CSRO w/o minMI & Np are the same. > **Q6**: Performance of ablation methods is close to CSRO in most experiments according to Figure 4. Does this mean datasets in Half-Cheetah, Humanoid and Hopper cannot reflect the context shift problem? **A6**: In fact, the gap is quite large: Figure 4 shows that CSRO improves significantly over CSRO w/o minMI. As for the performance being close to CSRO w/o Np, this is because the minMI component has already largely solved the problem. Appendix F gives the offline results; the performance after using minMI is close to the best offline performance, so Np has little room left for improvement. --- Rebuttal Comment 1.1: Comment: Thanks! Your reply addresses most of my concerns. I will keep my initial score.
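To make the expectation terms in A2 above concrete, here is a hedged numpy sketch of a sample-based mutual-information upper bound in the spirit of the CLUB estimator (not necessarily the paper's exact Equation 5); the matrix `logp` is a hypothetical stand-in for $\log p(z_j|(s_i,a_i))$ evaluated on a batch:

```python
import numpy as np

def club_mi_upper_bound(logp: np.ndarray) -> float:
    """CLUB-style sample-based MI upper bound.

    logp[i, j] is assumed to hold log p(z_j | (s_i, a_i)) for a batch of
    task embeddings z and state-action pairs (s, a).
    """
    positive = np.mean(np.diag(logp))  # E_i[log p(z_i | (s_i, a_i))], matched pairs
    negative = np.mean(logp)           # E_i E_j[log p(z_j | (s_i, a_i))], fix i, average over j
    return float(positive - negative)

# Toy example: each embedding is much more likely under its own (s, a) pair,
# so the estimated MI between z and (s, a) is positive.
logp = np.full((3, 3), -5.0)
np.fill_diagonal(logp, -1.0)
print(club_mi_upper_bound(logp))
```

Minimizing such an estimate with respect to the encoder is one standard way to push the task embedding toward independence from the behavior policy.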
Summary: The paper presents a new method called Context-Shift Robust Offline Meta-Reinforcement Learning (CSRO) to tackle the issue of context shift in offline meta-reinforcement learning. The main contributions of the paper lie in introducing max-min mutual information representation learning during meta-training to lessen the impact of behavior policy, and employing a non-prior context collection strategy during meta-testing to alleviate the impact of the exploration policy. The experimental results demonstrate that CSRO surpasses prior methods in effectively addressing context shift and enhancing performance in demanding domains with reward or dynamic function variations. Strengths: * The paper is well-written and well-organized. * This paper addresses an important issue in offline meta RL, specifically the context shift problem that arises due to disparities between training and testing contexts. * The proposed method introduces a mutual information objective to reduce the reliance of the behavior policy on task representations utilizing FOCAL, and incorporates context-independent random exploration during the initial meta-testing stage. * Empirical evidence substantiates that the proposed method consistently outperforms other baselines in online test experiments. Additionally, the thorough ablation study validates the effectiveness of the individual components. Weaknesses: * Regarding the mutual information objective, an additional insight is that $(s,a)$ can be shared across different tasks, while the reward function $r$ plays a vital role in task inference. Equation (8) in the paper focuses the predictions more on the reward rather than solely on $(s,a)$. This insight holds particular significance for point and ant-goal tasks where state-action sharing is more prominent. However, for two Rand-Param tasks, the task can be inferred from the $(s,a)$ pairs, resulting in similar performance between CSRO and CSRO w/o minMI. 
If this insight holds true, I suggest the authors discuss it in the main paper. * Another concern is that the non-prior context exploration method directly employs random action exploration, which can be inefficient. Are there other more efficient non-prior context exploration methods that could be utilized for CSRO? The prior exploration method in off-policy meta RL [1] could provide inspiration in this regard. * To enhance comprehension, it would be beneficial to include a figure in Figure 1 that demonstrates the performance drop of prior works, such as FOCAL, when online exploration is employed to acquire context, as compared to offline context. * In Figure 5, it appears that CSRO, FOCAL, and CORRO exhibit similar performance. Could you clarify the metric used to compare these three methods? Additionally, to provide a comprehensive understanding, it would be beneficial to include more task representation visualization results in the Appendix, beyond just the HalfCheetah-Vel task. * Furthermore, it would be advantageous to include a comparison with the recent context correction in offline meta RL [2]. [1] Zhang J, Wang J, Hu H, et al. Metacure: Meta reinforcement learning with empowerment-driven exploration[C]//International Conference on Machine Learning. PMLR, 2021: 12600-12610. [2] Wang J, Zhang J, Jiang H, et al. Offline Meta Reinforcement Learning with In-Distribution Online Adaptation[J]. arXiv preprint arXiv:2305.19529, 2023. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * The authors could consider incorporating my insight on the mutual information objective into the main paper for a clearer explanation. * Are there alternative methods for non-prior context exploration that are more efficient and suitable for CSRO? * It would be helpful to include a figure in Figure 1 that illustrates the performance drop of prior works like FOCAL when using online exploration to acquire context. 
* In Figure 5, where CSRO, FOCAL, and CORRO appear to perform similarly, what metric was used to compare these three methods? * In addition to the HalfCheetah-Vel task, it would be beneficial to include more task representation visualization results in the Appendix. * It would be advantageous to include a comparison with the recent context correction work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This paper lacks a discussion on its limitations. One limitation I identified is the inefficient random exploration strategy used during the meta-testing stage with non-prior context. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your advice on further improving this paper. We would like to discuss your points one by one. > **Q1**: The authors could consider incorporating my insight on the mutual information objective into the main paper for a clearer explanation. **A1**: Thank you very much for your suggestion. We will revise the mutual information section so that it is explained more clearly. Ideally, the agent should infer the task mainly from the reward function in reward-varying environments and from the dynamics function in dynamics-varying environments. However, the behavior policy and the task are highly correlated: in both reward-varying and dynamics-varying environments the task can be inferred from $(s,a)$ alone, while at test time the policy and the task are unrelated, which causes the context shift problem; this is why the mutual information treatment is needed. Regarding the Rand-Param tasks you mentioned, because we use medium-quality data, there is some interference with task inference. In addition, if the agent samples the same $(s,a)$ pairs as in the training environment when exploring the test environment, this also causes interference, so minimizing the mutual information is still necessary. In the ablation experiments in Figure 4, minMI improves performance in both environments. > **Q2**: Are there alternative methods for non-prior context exploration that are more efficient and suitable for CSRO? **A2**: We added the MetaCURE method to CSRO and used the offline dataset to train the exploration policy. The experimental results are as follows:

| Env | Point-Robot | Half-Cheetah-Vel |
| ------------- | ----------- | ---------------- |
| CSRO | -6.4 | -48.4 |
| CSRO+MetaCURE | -13.2 | -87.9 |

We can see that the MetaCURE variant does not work well.
This is because in the offline setting there is no way to interact with the environment: an exploration policy trained on the offline datasets is limited to the vicinity of those datasets, and the conservatism of offline RL conflicts with exploration, so solving this problem by training an exploration policy is difficult in the offline setting. We will discuss in the paper why the exploration-policy training used in online meta-RL is difficult to transfer to offline meta-RL, and cite this article. > **Q3**: It would be helpful to include a figure in Figure 1 that illustrates the performance drop of prior works like FOCAL when using online exploration to acquire context. **A3**: Thank you for your suggestion. The two test results of FOCAL and OffPearl are given below, and the performance degradation is evident. We will add this to the paper to aid comprehension.

| Env | Point-Robot (offline) | Point-Robot (online) | Half-Cheetah-Vel (offline) | Half-Cheetah-Vel (online) |
| -------- | ------ | ------ | ------- | ------- |
| FOCAL | -4.4 | -14.9 | -45.7 | -69.5 |
| OffPearl | -5.1 | -17.8 | -123.0 | -162.8 |

> **Q4**: In Figure 5, where CSRO, FOCAL, and CORRO appear to perform similarly, what metric was used to compare these three methods? **A4**: The criterion we use is whether the task embeddings of the same task cluster together and whether different tasks can be distinguished. In Figure 5, although FOCAL and CORRO cluster the same tasks together, they separate different tasks less well than CSRO does. Similar colors in Figure 5 represent similar tasks, and we can see that similar tasks of FOCAL and CORRO run into each other. > **Q5**: In addition to the HalfCheetah-Vel task, it would be beneficial to include more task representation visualization results in the Appendix. **A5**: Thank you for your suggestion.
We added the t-SNE visualization of CSRO, FOCAL, and CORRO on Point-Robot to the PDF of the global response. We can see that FOCAL is worse, and for CORRO, although similar tasks are closer, many points of the same task are far apart. We will add visualizations of more environments to the appendix. > **Q6**: It would be advantageous to include a comparison with the recent context correction work. **A6**: We reproduced the results of IDAQ on our offline datasets; CSRO and IDAQ are comparable. We will add this experiment and cite the article.

| Env | Point-Robot | Half-Cheetah-vel | Walker-Rand-Params |
| ---- | ----------- | ---------------- | ------------------ |
| CSRO | -6.4 | -48.4 | 344.2 |
| IDAQ | -5.2 | -60.9 | 297.0 |

--- Rebuttal Comment 1.1: Comment: While most of my concerns have been addressed, I still have reservations regarding the inefficient random exploration strategy employed during the meta-testing stage. Although the authors have acknowledged the challenges of training exploration strategies in offline meta RL compared to online meta RL, this concern remains. Therefore, I would keep my current score.
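The non-prior context collection strategy discussed in this thread (a short random warm-up, then acting on a continuously refreshed task posterior) can be sketched as a toy loop. All names here (`env_step`, `encoder`, `policy`) are illustrative stand-ins, not the paper's implementation:

```python
import random

def collect_context(env_step, policy, encoder, warmup_steps, total_steps):
    """Non-prior context collection (hedged sketch): take random actions for a
    warm-up phase, then act with the meta-policy conditioned on the task
    embedding z, which is refreshed from the growing context after every step."""
    context, z = [], None
    for t in range(total_steps):
        if t < warmup_steps:
            action = random.uniform(-1.0, 1.0)  # non-prior random exploration
        else:
            action = policy(z)                  # condition on the current posterior
        context.append(env_step(action))        # append transition (s, a, r, s')
        z = encoder(context)                    # update task embedding from context
    return context, z

# Toy stand-ins: the reward reveals the goal; the "encoder" averages rewards.
random.seed(0)
env_step = lambda a: (0.0, a, -abs(a - 0.5), 0.0)
encoder = lambda ctx: sum(tr[2] for tr in ctx) / len(ctx)
policy = lambda z: 0.5  # pretend the inferred z pins down the goal action
context, z = collect_context(env_step, policy, encoder, warmup_steps=5, total_steps=30)
print(len(context))  # 30
```

The key point the sketch illustrates is that the context used for task inference is never conditioned on a learned exploration policy's prior, only on random actions and the evolving posterior itself.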
Summary: This manuscript proposes Context Shift Reduction (CSRO) for the offline meta reinforcement learning problem. It aims at solving the context shift problem with only offline datasets, and demonstrates superior empirical performance. Strengths: The paper is easy to understand and the experiments look reasonable. Weaknesses: One major weakness the reviewer identifies is the limited discussion of the disadvantages of prior methods and/or the major novelties of the proposed method. For example, there are several potential improvements the authors can take: 1. Provide theoretical justifications for the proposed method, such as under what conditions the method can outperform other methods 2. Provide intuition on why prior methods do not solve the context shift problem well enough 3. What are the main benefits/novelties of the CSRO method in solving the context shift problem Overall the reviewer thinks the empirical results look solid and promising, and the reviewer would be willing to adjust the rating if the authors can adjust the writing to better present the proposed method. Minor presentation issues: 1. There is a missing space in line 78, between “[6,28]” and “methods”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why is BRAC chosen as the offline backbone algorithm instead of CQL [1] or IQL [2]? 2. If the reviewer understands correctly, in line 125 the context is defined as a subset from the offline dataset $\{(s_j,a_j,r_j,s_j’)\}_{j=1}^n$. Is there any specific reason the context needs to be defined in this way rather than in a general context space? [1] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020): 1179-1191. [2] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. "Offline reinforcement learning with implicit q-learning." arXiv preprint arXiv:2110.06169 (2021). Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing your comprehensive review. We greatly appreciate your insights and are glad to address each of your concerns in detail. > **Q1**: Provide theoretical justifications for the proposed method, such as under what conditions the method can outperform other methods. **A1**: Denote by $c=\{(s,a,r,s')\}$ an experience collected by the exploration policy $\pi_{e}$ in a test task $M_{i}=(S,A,P_{i},\rho, R_{i})\sim p(M)$. The expected return on test task $M_{i}$ of a learned meta-policy $\pi_{\theta}(a|s,z)$ is $J_{M_{i}}(\pi_{\theta},\pi_{e})=E_{s_{0}\sim\rho(s_{0}),\, z\sim q_{\phi}(\cdot|c),\, a_{t}\sim\pi_{\theta}(\cdot|s_{t},z),\, r_{t}\sim R_{i}(\cdot|s_{t},a_{t}),\, s_{t}'\sim P_{i}(\cdot|s_{t},a_{t})}[\sum_{t=0}^{H-1}r_{t}]$. The meta-policy attains the highest expected return when the exploration policy equals the behavior policy, $\pi_{e}=\mu_{i}$. We would like $J_{M_{i}}(\pi_{\theta},\pi_{e})=J_{M_{i}}(\pi_{\theta},\mu_{i})$ to hold for any exploration policy $\pi_{e}$, which is the case if and only if $q_{\phi}(z|c)$ is the same for every $\pi_{e}$. Since $c$ is generated by $M_{i}$ and $\pi_{e}$, $q_{\phi}(z|c)$ can be written as $q_{\phi}(z|M_{i},\pi_{e})$; requiring it to be the same for every $\pi_{e}$ means that $z$ and $\pi_{e}$ are independent, i.e. the mutual information $I(z;\pi_{e})=0$. We therefore minimize the mutual information between the policy and $z$ to alleviate the context shift problem. The upper bound we minimize on this mutual information was proved in the paper "Efficient off-policy meta-reinforcement learning via probabilistic context variables". Other methods require additional environment-related information beyond the offline datasets and still do not solve the problem well.
We focus on the setting that uses only offline datasets and alleviate the problem by training a more fundamental encoder together with a more suitable context collection strategy, so our solution improves on the methods mentioned above. > **Q2**: Provide intuition on why prior methods do not solve the context shift problem well enough. **A2**: We briefly described in the introduction why prior methods do not solve this problem. Specifically, FOCAL does not consider the problem and directly evaluates with a pre-collected context from the test environment. BOReL attributes the problem to MDP ambiguity and addresses it by assuming the reward function of every task is known and using those reward functions to relabel data across tasks. SMAC attributes the problem to the mismatch between the context-collection policy at training time and at test time, and addresses it by breaking the fully offline setting and performing some online training. The latter two methods require information beyond the offline datasets alone, and their solutions are not ideal; we aim to solve the problem with offline datasets only. We will revise the article with a more detailed description. > **Q3**: What are the main benefits/novelties of the CSRO method in solving the context shift problem? **A3**: Unlike prior methods that use information beyond the offline datasets, our proposed method uses only offline datasets. We identify the cause of the problem as the leakage of policy information into the task embeddings: maximizing the mutual information between the environment and the task embedding is not enough, and the mutual information between the policy and the task embedding must also be minimized. We further find empirically that a non-prior context collection strategy consisting of a small number of random exploration steps alleviates the problem. > **Q4**: There is a missing space in line 78, between "[6,28]" and "methods".
**A4**: Thanks for your advice. We will fix it. > **Q5**: Why is BRAC chosen as the offline backbone algorithm instead of CQL or IQL? **A5**: Because the baseline method FOCAL uses BRAC; for a fair comparison, we also use BRAC. > **Q6**: If the reviewer understands correctly, in line 125 the context is defined as a subset from the offline dataset $\{(s_{j},a_{j},r_{j},s_{j}')\}^{n}_{j=1}$. Is there any specific reason the context needs to be defined in this way rather than in a general context space? **A6**: Your understanding is correct: the context is defined as a subset of the offline datasets. This is because we only have the collected offline datasets with which to train the encoder, so the context must be drawn from them. Prior work defines it the same way, e.g. "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning". --- Rebuttal Comment 1.1: Title: Response to the Rebuttals Comment: Dear Authors, Thank you very much for your responses. The reviewer has no other questions, and since all of my questions/concerns are adequately addressed, I have adjusted my rating accordingly. Best, Reviewer 4mWP
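The definition in A6 can be made concrete with a tiny sketch (hedged; the variable names and toy transitions below are our own illustration, not the paper's code): the context for a task is simply a random size-$k$ subset of that task's offline dataset of transitions.

```python
import random

def sample_context(offline_dataset, k, rng=None):
    """Context c = a size-k subset of the task's offline dataset {(s, a, r, s')}."""
    rng = rng or random.Random(0)
    return rng.sample(offline_dataset, k)

# Toy offline dataset of transitions (s, a, r, s') for one task.
dataset = [(float(s), 0.1 * s, -float(s), float(s + 1)) for s in range(100)]
context = sample_context(dataset, k=8)
print(len(context))  # 8
```

Because every context element is itself an offline transition, the encoder can be trained without ever querying the environment.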
Rebuttal 1: Rebuttal: Dear reviewer, the attached PDF contains our supplementary experimental figures. Pdf: /pdf/66b4902b6221ef7016f5415533080d08cc58cd5e.pdf
NeurIPS_2023_submissions_huggingface
2023
BayesTune: Bayesian Sparse Deep Model Fine-tuning
Accept (poster)
Summary: The authors introduce BayesTune, a method for choosing which parameters to fine-tune in a pre-trained model. Their formulation is based on Bayesian inference, where they use a Laplace prior over the parameters/network weights. The prior has two variables: the mean, which is the value of the pre-trained parameter, and the scale, which specifies how important it is to fine-tune it. These hyperparameters are also controlled by a hyperprior that is fixed across all experiments. To infer the posterior distribution of the weights and scales, they adopt the Langevin dynamics method. After obtaining the scale values, they compute a cut-off value that determines the parameters to be updated. Experiments performed on Computer Vision and NLP tasks demonstrate that the method is competitive and outperforms common techniques. Strengths: * The problem of efficient fine-tuning is very important nowadays given the availability of large pre-trained models. * The method is intuitive and interpretable, as it is based on the Bayesian interpretation of the network weights. * There is a large set of experiments demonstrating the utility of the method. Weaknesses: * This model is close to related work such as SP-regularization [1], which uses a regularization term to encourage the updated weights to stay close to the original pre-trained weights. Using an L1-SP regularizer might have a similar effect as the one introduced in this paper. I think it is important that the authors mention this and compare BayesTune against it. * The efficiency of the method is not clear to me. For instance, in an attention layer, updating only some parameters (sparse updates) still demands computation of the matrix multiplications for the query, key, and value matrices. The same logic applies when backpropagating. Thus, choosing only some parameters to update might only save memory, not time. If my reasoning is correct, then the authors should specify this in the paper.
Also, some measures of time and memory consumption might be useful. * There is a discrepancy between Algorithm 1 and the method described in the paragraph starting at line 206. Thus, I think Algorithm 1 is incomplete. * Their method seems competitive for NLP but not so for Computer Vision, based on Table 2. [1] Xuhong, L. I., Yves Grandvalet, and Franck Davoine. "Explicit inductive bias for transfer learning with convolutional networks." International Conference on Machine Learning. PMLR, 2018. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Did you optimize the hyperparameters of the other methods? * Why table 2 does not have standard deviations? * Could you elaborate further on how you choose the cut-off point? * How did you choose the hyperprior values ($\alpha, \beta$)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: No limitation was mentioned by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**1. This model is close to related work such as SP-regularization [1] which uses a regularization term to encourage the updated weights to stay close to the original pre-trained weights. Using an L1-SP regularizer might have a similar effect as the one introduced in this paper. I think it is important the authors mention this and compare BayesTune against it.** Thanks for the citation and good question. We will discuss this paper in the revision. We agree that SP [1] provides an alternative regularisation-based approach to fine-tuning (but without the principled Bayesian modeling solution), and that if SP [1] is extended to an L1 regulariser it might provide an alternative approach to sparse fine-tuning. Thus we have conducted some extra comparison experiments. Note that despite using an L1 regulariser, SP does not necessarily lead to exactly sparse solutions. Therefore we offer a two-stage extension of L1-SP: in the first stage, we run the L1-SP training, and in the second stage the weights to be updated are selected based on their relative L1 distances from the pretrained weights (i.e., taking the $p$% of weights with the largest relative changes from the pretrained weights). The results on NLP tasks are as follows:

Avg 10 runs | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC | **AVG**
:---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---:
L1-SP (stg 1) | 50.50 | 88.07 | 84.54 | 50.00 | 62.44 | 60.40 | 52.88 | 64.12
L1-SP (stg-2, $p$%) | 54.59 | 88.11 | 89.25 | 68.85 | 81.55 | 70.75 | 55.38 | 72.64
Our SGLD ($p$%) | 60.85 | 90.40 | 90.61 | 77.87 | 91.25 | 75.00 | 60.87 | 78.12

For the L1 penalty balancing hyperparameter, we chose the optimal value by grid search over {$10^{-3},10^{-4},10^{-5},10^{-6}$}.
We can see that L1-SP considerably lags behind our SGLD, which we mainly attribute to L1 regularisation's failure to capture uncertainty, making it potentially sensitive to noise in the data (for a similar reason as the MAP estimate). Moreover, only penalising the parameters' deviation from the pre-trained weights, as in Stage 1, significantly underperforms the sparse cut-off strategy in Stage 2, signifying that the sparse update is critical. >**2. The efficiency of the method is not clear to me. For instance, in an attention layer, updating only some parameters (sparse updates) still demands computation of the matrix multiplications for query, key, and value matrices. The same logic applies when backpropagating. Thus, choosing only some parameters to update might only save memory demands, but not time. If my reasoning is correct, then the authors should specify this in the paper. Also, some measures of time and memory consumption might be useful.** Thank you for the insightful comment. Yes, we agree on the overall backpropagation timing overhead in attention layers. We emphasise that our selective sparse fine-tuning competitors are also edge-wise (e.g., SAM, DiffPrune, MagPrune), so our method is the same as theirs in this regard. We will clarify this in our revised paper. If saving latency/computation is important, our algorithm can easily be modified to provide this by sharing $\lambda$ across blocks or layers. E.g., a layer-wise sparse $\lambda$ could avoid the attention computation in a given layer. We actually explored this in our preliminary studies, but ultimately did not go down this path in the paper. Obtaining improved computational efficiency in this way would typically lead to slightly worse accuracy than the current edge-wise sparsity assumption. But we emphasise that this would be the same for all competitors, if they were correspondingly modified for layer/block-wise sparsity. >**3.
There is a discrepancy between Algorithm 1 and the method described in the paragraph starting at line 206. Thus, I think Algorithm 1 is incomplete.** We left those modifications out of Alg. 1 to keep it concise, but we will add them. >**4. The method seems competitive for NLP but not so for Computer Vision, based on Table 2.** Yes. As we commented in the text (Lines 308-310), the main benefit/message is avoiding the complex heuristic (evolutionary) search used by the state-of-the-art competitor NOAH. >**5. Did you optimize the hyperparameters of the other methods?** We did not tune them ourselves, as we excerpted the results from the respective previous papers; each competing method reports its best result after its own hyperparameter tuning. >**6. Why does Table 2 not have standard deviations?** Because of the high cost of running the VTAB tasks, it is difficult to do many runs. We instead follow the official train/val/test split. Reporting point estimates is also common practice for this benchmark. >**7. Could you elaborate further on how you choose the cut-off point?** For NLP, we use a strict user-specified cut-off point $p=0.005$ to be fair to the other competing methods. For VTAB, the cut-off point differs across tasks, and the optimal one is chosen based on validation-set performance. >**8. How did you choose the hyperprior values ($\alpha$, $\beta$)?** It is just a heuristic choice. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks to the authors for the detailed rebuttal. I am considering updating my score from 4 to 5. Most of my doubts are cleared; however, there are still two points I would like to know: * How did you choose the hyperprior values and the cut-off points? How sensitive are these hyperparameters across tasks? Some empirical measures regarding this would be beneficial. * How much is the average execution time for fine-tuning a network per task? Again, some empirical values are important.
--- Reply to Comment 1.1.1: Comment: Thank you for the follow-up comments and questions. Our responses are as follows: > How did you choose the hyperprior values and the cut-off points? How sensitive are these hyperparameters across tasks? Some empirical measures regarding this would be beneficial. **Hyperprior values:** We heuristically chose the hyperprior values $\alpha=0.01,\beta=100$ based on the mean/variance/skewness/kurtosis of the Gamma distribution, so that the prior on $\lambda$ decreases sharply away from 0. As stated in our paper (footnote 1, p.3), we also tested models with a further level of hierarchy by placing priors on $\alpha$ and $\beta$; however, there was no significant advantage over the manually chosen values. **Cut-off points:** As stated in Line 305 (p.9), the cut-off points for the VTAB vision tasks were chosen by grid search ($p \in \{0.05, 0.1, 0.2, \ldots, 1.0\}$) on the validation set. For the sensitivity of performance to the cut-off value $p$, see Figure 3 (p.9). As shown, test accuracies differ significantly for large changes of $p$, but the sensitivity is rather minor near the optimal values. > How much is the average execution time for fine-tuning a network per task? Again, some empirical values are important. We have the running-time records for VTAB, where we ran our model on a single Tesla-V100 GPU. The task-wise averaged per-epoch running times are as follows. Other competing methods (e.g., LoRA) have running times similar to our fine-tuning times in the Stage-2 column.
| (seconds) | Stage-1 | Stage-2 |
| :---: | :---: | :---: |
| cifar100 | 7.6 | 6.5 |
| caltech101 | 7.3 | 7.5 |
| dtd | 7.4 | 7.9 |
| flower102 | 8.0 | 8.1 |
| pets | 7.9 | 7.6 |
| svhn | 7.0 | 6.9 |
| sun397 | 7.5 | 7.5 |
| camelyon | 7.8 | 7.6 |
| eurosat | 7.1 | 7.5 |
| resisc45 | 7.5 | 7.5 |
| retinopathy | 8.1 | 7.8 |
| clevr-count | 7.0 | 7.6 |
| clevr-dist | 7.6 | 7.9 |
| dmlab | 8.1 | 7.8 |
| kitti | 8.0 | 8.4 |
| dsprite-loc | 7.9 | 7.8 |
| dsprite-ori | 7.5 | 7.7 |
| snorb-azim | 7.6 | 7.7 |
| snorb-ele | 7.9 | 7.4 |
| AVERAGE | 7.6 | 7.6 |
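The Stage-1-to-Stage-2 cut-off step discussed in this thread can be sketched in a few lines (illustrative numpy; the function name and toy scale values are our own, not the released implementation): after Stage 1, keep the top-$p$ fraction of parameters ranked by posterior-mean Laplace scale and fine-tune only those, freezing the rest.

```python
import numpy as np

def select_sparse_update_mask(lambda_mean, p):
    """Boolean mask over parameters: True for the top-p fraction ranked by
    posterior-mean Laplace scale (the parameters fine-tuned in Stage 2)."""
    lam = np.asarray(lambda_mean, dtype=float)
    k = max(1, int(round(p * lam.size)))        # number of parameters to keep
    threshold = np.sort(lam.ravel())[-k]        # k-th largest scale value
    return lam >= threshold

# Toy Stage-1 scale estimates for 10 parameters; p=0.2 keeps the top 2.
lam = np.array([0.01, 0.90, 0.02, 0.50, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08])
mask = select_sparse_update_mask(lam, p=0.2)
print(np.flatnonzero(mask))  # indices of the parameters selected for fine-tuning
```

In a training loop, the mask would be applied to gradients (or optimizer state) so that only the selected entries are updated, which is what yields the memory savings discussed above.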
Summary: The paper proposes an approach for selecting a subset of weights in a foundation model to fine-tune on a downstream task. The method consists of a two-stage pipeline, where in the first stage a Laplace prior is placed on each weight with a Gamma hyper-prior on the scale. Samples are obtained via SGLD and only the weights with a mean posterior scale above some threshold are then trained via SGD in the second stage. The method is evaluated on GLUE and SuperGLUE tasks with RoBERTa and on VTAB-1k image prediction tasks with a vision transformer and compares overall favorably to a range of baselines from the literature. There are quite a few design choices constituting the proposed method and, unfortunately, none of their added complexity is justified via ablation studies. Further, the paper in my view overstates how principled it is quite significantly, so that at this point I would lean towards rejection. Strengths: * The approach is new as far as I am aware. * The technical description of the method is clear. * Performance seems to be good and approaches for better fine-tuning are of high interest to the community. Weaknesses: * The core problem in my view is that the method consists of quite a few moving parts, but these aren’t justified via ablations. It is not clear at all where the performance improvements come from and whether all parts of the method are needed. E.g. it might be the case that the two-stage procedure with magnitude-based pruning would be enough (at least my understanding of MagPruning based on the description in the paper is that the smallest pre-trained values are pruned). Similarly I wonder if sampling in Stage 1 is necessary or if MAP estimates would be good enough for the scale parameters. * I don’t really see what makes the proposed method particularly principled as claimed at various points in the paper. 
I don’t think there is a probabilistic justification for the two-stage procedure and fudging the dataset size and noise scale for SGLD is just a hack. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * How do the ablation baselines I mention in weakness 1 perform compared to the method as described in the paper? * Is adapting the noise scale and dataset size really needed for stage 1? I’m aware of the cold posterior effect when sampling for a Bayesian model average, however here we don’t seem to need the predictions but the actual parameter samples? How much does tuning these improve performance over the principled choice? **Typos/minor**: * l155: “algorithmm” -> “algorithm”, “pseudocodes” -> “pseudocode” * The abstract is extremely long without being particularly descriptive (e.g. the two-stage nature of the method isn’t even mentioned explicitly) and reads more like a small introduction. I’d suggest making it significantly more concise. * For me there is way too much going on in Tab 1. I’d suggest at least dropping the bold-facing and rank indicators for the standard deviations, and making the rank indicators gray rather than red (unless you consider them primary information, in which case they should be bigger. But at the moment they are secondary in terms of size and position, so overall the table is unnecessarily difficult to process). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions. Our responses are as follows.
>**1. Regarding ablation studies:** In response to the reviewer's request, we have run the ablation studies: 1) Mag-Pruning with the two-stage procedure, and 2) MAP instead of SGLD. Please see the table below:

Avg 10 runs | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC | **AVG**
:---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---:
2-stg MagPrune | 53.51 | 88.33 | 88.96 | 70.83 | 80.36 | 67.60 | 58.07 | 72.52
MAP | 58.55 | 90.13 | 90.54 | 76.72 | 86.71 | 71.09 | 60.34 | 76.30
SGLD (Ours) | 60.85 | 90.40 | 90.61 | 77.87 | 91.25 | 75.00 | 60.87 | 78.12

The results show that two-stage magnitude pruning is not sufficient, and that posterior mean estimation via SGLD provides an empirical benefit over MAP optimisation. We will add this ablation study in the revised version. We are grateful to the reviewer for helping improve the paper.
>**2. Doubts about principled method:** The reviewer raised three components of the algorithm which s/he felt undermined the claims of a principled solution. We explain these as follows:
* **Two-stage procedure**: Sparse Bayesian learners express a prior that prefers unmodified weights, but to enforce this as a hard constraint that can practically be used for memory saving, it is standard practice to threshold after posterior probability inference (e.g. in the seminal Bayesian Compression for Deep Learning, NeurIPS'17; and the recent "Masked Bayesian Neural Networks" mentioned by reviewer **F3ir**). Since there is solid precedent for this standard step to bridge Bayesian models with practical implementation, we do not see it as compromising the principle of our method.
* **Dataset size inflation**: In Bayesian deep learning, which depends on an explicit measure of dataset size, there has been discussion about how to quantify dataset size when using data augmentation.
While this is not yet a completely solved question, several prior Bayesian deep learning studies have made similar suggestions on inflating the original training data size to account for augmentation (references below). Thus we believe this is entirely reasonable, and disagree that it is a hack:
- Disentangling the roles of curation, data-augmentation and the prior in the cold posterior effect, L. Noci et al., NeurIPS 2021.
- Practical deep learning with Bayesian principles, K. Osawa et al., NeurIPS 2019.
- What are Bayesian neural network posteriors really like?, P. Izmailov et al., ICML 2021.
* **Noise scale**: Purely finding the posterior mean (i.e., SGLD without noise discount) risks performing poorly if the posterior is truly multi-modal, because it may converge to a low-probability parameter. Also, purely searching for the posterior mode (i.e., MAP instead of SGLD) may be sensitive to data noise, because no stochasticity is properly taken into account. So, to combine principle and practice, it is reasonable to prefer a discounted-noise procedure that balances between identifying a particular mode and obtaining a mean estimate in the vicinity of that mode.
--- Rebuttal Comment 1.1: Comment: Thank you for the ablation results, I have decided to raise my score.
--- Reply to Comment 1.1.1: Title: Thank you very much! Comment: Thank you very much!
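The discounted-noise SGLD step discussed in this rebuttal can be written out as a toy scalar update (a hedged sketch; the function and argument names are illustrative and the paper's exact Eq. 5 may differ in parameterization):

```python
import math
import random

def sgld_step(lam, minibatch_grad, prior_grad, step_size,
              n_data, noise_discount=1.0):
    """One SGLD update on a scalar scale parameter lambda.

    minibatch_grad: per-example average log-likelihood gradient
    prior_grad:     gradient of the log hyper-prior (Gamma here)
    n_data:         effective dataset size; inflating it (as discussed
                    above for data augmentation) upweights the likelihood
    noise_discount: 1.0 recovers standard SGLD; values < 1 shrink the
                    injected Gaussian noise, biasing the chain toward a
                    posterior mode (MAP-like behavior at 0.0).
    """
    drift = 0.5 * step_size * (n_data * minibatch_grad + prior_grad)
    noise = noise_discount * math.sqrt(step_size) * random.gauss(0.0, 1.0)
    return lam + drift + noise
```

With `noise_discount=0.0` the update degenerates into a plain (preconditioner-free) gradient ascent on the log posterior, which matches the rebuttal's description of how the MAP baseline drops the noise term.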
Summary: A principled approach for selecting a subset of parameters to fine-tune in large foundation models is proposed. The authors rely on Bayesian inference to identify this subset. They begin by placing a Laplace prior over the model weights and a gamma hyperprior over the weight scale. Next, they employ an MCMC method to obtain a posterior distribution. Finally, they rank the weights based on the posterior scale values and proceed to fine-tune the parameters with the highest inferred scale values. Across standard NLP and vision adaptation tasks, they demonstrate strong empirical performance compared to previous state-of-the-art approaches. Strengths: - I find the proposed methodology technically sound, novel, and appealing. Though it should be kept in mind that I am not particularly familiar with the related work on the fine-tuning of large models. Similarly, I like methods that are Bayesian, so this might add to my potentially biased evaluation here. - I like the simplicity of the proposed method. It does not happen often that I understand a method upon the first pass through the paper, but I think that was the case when reading this manuscript. - The empirical results are strong and I want to commend the authors on the extensive evaluation (i.e., they do not stop at the language modality, but additionally consider a vision domain as well). Weaknesses: The presentation could be somewhat improved to further strengthen the manuscript. Some concrete suggestions: - I find it a bit strange to start introducing the notation already in the Introduction section. Hence, I would change your current section 1.1 into a separate section 2. - Similarly, I find it a bit weird to list out the related work as bullet points the way you do in Section 3. I find it more natural to use separate paragraphs for that (see Section 8 in [1] for an example of that).
This way you can also directly talk about how each related approach connects to your proposed model, instead of doing that in a separate paragraph as is currently done (lines 192-198). - Line 227: I assume you will include a GitHub link here, not put the code in the Supplementary material (whatever that means). - Grammar and style could be improved at some points to improve readability. This is easily done these days via tools like Grammarly, ChatGPT... Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Although the choice of Laplace prior is adequately discussed and justified in Section 2, the explanation of the mean-field assumption in equation (1) could benefit from further elaboration. While I understand that this simplification is made for computational tractability, it would be valuable to provide additional justification for this decision. Additionally, I am curious whether the authors anticipate any additional performance improvements by attempting to model correlations between different model parameters. Similarly, I am interested in their choice of the approximate inference scheme (SGLD). Did the authors consider any other approaches (e.g., variational inference)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are currently not discussed. Perhaps the authors could use the space gained from restructuring the related work section (see above) to add a paragraph or two on the limitations of their approach. [1] Daxberger, E., Nalisnick, E., Allingham, J.U., Antorán, J. and Hernández-Lobato, J.M., 2021, July. Bayesian deep learning via subnetwork inference. In International Conference on Machine Learning (pp. 2510-2521). PMLR.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**1. (The reviewer provided various detailed comments on paper layout.) Line 227: I assume you will include a GitHub link here, not put the code in the Supplementary material (whatever that means). Grammar and style could be improved.** Thank you very much for the careful reading and all the detailed suggestions that can improve the paper. We will refine the paper as the reviewer suggested. Regarding the GitHub link: Because of blind reviewing, we were not able to include the GitHub link at this point, but we will do so after the review phase. >**2. Although the choice of Laplace prior is adequately discussed and justified in Section 2, the explanation of the mean-field assumption in equation (1) could benefit from further elaboration. While I understand that this simplification is made for computational tractability, it would be valuable to provide additional justification for this decision. Any additional performance improvements by attempting to model correlations between different model parameters. About their choice of the approximate inference scheme (SGLD). Did the authors consider any other approaches (e.g., variational inference)?** The main reason for the mean-field assumption (we guess that the reviewer meant *the prior being factorised over individual parameters*) is simplicity. There might be benefits to correlation modeling/block sparsity (which is easy to implement in our framework by using a common $\lambda$ shared across a block), but we did not do this initially as it raises the question of how to choose the block structure. Therefore we leave this to future work for now. Variational inference can be used in principle, but as it incurs additional complexity (both memory and FLOPS) for handling doubled posterior parameters (means and scales), we opted for SGLD.
It may also suffer from lower accuracy due to needing an additional assumption on the posterior form (e.g., Gaussian), which may introduce an additional source of approximation inaccuracy. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal, I acknowledge I have read it together with the other reviews. > guess that the reviewer meant the prior being factorised over individual parameters Indeed, that's what I meant. > Variational inference can be used in principle, but as it incurs additional complexity (both memory and FLOPS) for handling doubled posterior parameters (means and scales), we opted for SGLD. Hm, interesting, I thought VI was meant to be a cheaper alternative to MCMC approaches.
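The block-sparsity variant mentioned in this exchange (a common $\lambda$ shared across each parameter block) is a one-function change in code terms. A minimal sketch, with assumed names (not the authors' implementation):

```python
def expand_block_scales(block_lambdas, block_sizes):
    """Share one Laplace scale across each parameter block, the
    common-lambda variant the authors mention for block sparsity.
    Returns one scale per individual parameter, in block order."""
    scales = []
    for lam, size in zip(block_lambdas, block_sizes):
        scales.extend([lam] * size)
    return scales
```

Selection would then proceed as in the fully factorised case, but whole blocks are kept or frozen together because their parameters share a single posterior scale.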
Summary: The paper proposes a new Bayes-based framework for sparse fine-tuning. The proposed method applies a Laplace prior (centered at the pre-trained weights) and a Gamma hyper-prior over the scale parameter \lambda of the Laplace prior. The method then performs posterior inference over \lambda to determine whether or not a parameter is "useful": A large value of \lambda indicates a flat prior, i.e. an informative parameter, and vice versa. The paper then adopts SGLD to perform inference over \lambda and then uses human assistance to determine the cut point for \lambda given a budget. The method is evaluated on language and vision tasks and compared against a wide range of parameter-efficient fine-tuning methods. The proposed method overall demonstrates superior performance and yields a sparser model than baseline methods. ## Post rebuttal update: I raised my score from 4 to 5 after seeing the new results provided by the authors. Strengths: - The proposed method is technically sound and well presented. - The proposed method is evaluated on a wide range of tasks. - The authors consider a thorough set of baseline methods. Weaknesses: The biggest issue of the proposed method, in my opinion, is the novelty. The use of a hierarchical prior, which induces sparsity, is an idea widely known since the last century, from Radford Neal and David MacKay's early works to recent works such as [1, 2, 3, 4]. Although these works do not consider the fine-tuning setting, they can technically be applied to the fine-tuning of large pre-trained models by simply letting the prior be centered at the pre-trained weights rather than zero. In addition, the configuration of SGLD is not presented very clearly. I would suggest the authors use formulas to describe the modifications to SGLD described from line 212 to line 223, e.g. a modified version of Eq.5 where a few additional hyper-parameters are added to the likelihood and the injected noise term.
*Minor: Some figures are not in vector format and the fonts are too small. *Minor: It would be interesting to see the performance of the proposed model on *Large* language models. [1] Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference [2] Dropout as a structured shrinkage prior. [3] Posterior concentration for sparse deep learning. [4] Bayesian compression for deep learning Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What optimizers are used for BayesTune? Is it standard SGD throughout the entire algorithm, or is it a mix of SGD and Adam? - Does the baseline approach use the same optimizer, or do they use different optimizers? The choice of optimizer can potentially affect the final performance. - What happens if you increase the rank of LoRA so that LoRA has the same number of parameters as BayesTune? - What's the advantage of posterior inference over MAP estimation in this setting, i.e. removing the last term in Eq.5? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The proposed method introduces extra hyper-parameters for SGLD (besides step-size scheduling): effective data size and noise discount factor. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
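The fine-tuning twist this review highlights, centering the sparsity prior at the pre-trained weights rather than at zero, amounts to a one-line change in the Laplace log-prior. A hedged sketch (symbols and function name are illustrative, not from the paper):

```python
import math

def laplace_log_prior(w, w_pretrained, lam):
    """Per-parameter log density of Laplace(w_pretrained, lam) at w.
    Setting w_pretrained = 0 recovers the classic sparse-training prior;
    centering at the pre-trained value instead penalizes *deviation*
    from the foundation model, so a small lambda pins w near w_pretrained
    while a large lambda (flat prior) lets it move freely."""
    return -abs(w - w_pretrained) / lam - math.log(2.0 * lam)
```

This makes concrete why a large posterior scale flags a parameter as "useful" to fine-tune: only a flat prior tolerates the deviation the data demands.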
Rebuttal 1: Rebuttal: >**1. The biggest issue is the novelty. The use of a hierarchical prior, which induces sparsity, is an idea widely known since the last century, from Radford Neal and David MacKay's early works to recent works [1,2,3,4]. Although these works do not consider the fine-tuning setting, they can technically be applied to the fine-tuning of large pre-trained models ...** Hierarchical Bayes (HB) is indeed well known, and there are some works that adopt HB for sparse deep learning, as the reviewer listed. But as far as we know, our approach has three main differences from these previous works: - i) Prior works are all about sparse *training*, instead of sparse *fine-tuning*. Thus they focus on zeroing out many parameters, instead of retaining pre-trained weights. The reviewer said "These works can technically be applied to fine-tuning …", but to the best of our knowledge, no one has done this before. - ii) Importantly, the neural networks used in those previous studies are of rather small/toy scale (mostly focusing on MLPs and LeNet-sized architectures, up to RN18 at largest), while our method obtains state-of-the-art results on large-scale foundation models (ViT, RoBERTa). For example, the largest model considered by the cited papers, RN18 in [1], is ~11M parameters vs RoBERTa's ~123M parameters, a 10X scale difference. The reason is that the Bayesian learning methods the reviewer referred to used inefficient MCMC approaches like Metropolis-Hastings, or methods that entail extra memory cost like variational inference (VI). All of this impedes applicability to big networks. We have just run some experiments that compare the computational resources required by VI and SGLD: on ViT networks, training time increases by 1.7 times if we replace SGLD by VI; the GPU memory footprint increases by 2.1 times. - iii) As far as we know, no one has used SGLD in sparse deep learning at these large network scales.
Although [1] (Masked BNN) used MCMC, it adopted Metropolis-Hastings, which is less sample efficient due to the rejection probability. >**2. I would suggest the author use formula to describe the modifications to SGLD described from line 212 to line 223.** As the reviewer suggested, and also for the purpose of better clarification, we will re-write Eq.5 to reflect the modifications described in Lines 212-223. >**3. What optimizers are used for BayesTune? Does the baseline approach use the same optimizer?** The Adam optimizer is used for all competing methods in Table 1, following the common practice in [Lee19, Jiant20, Xu21]. For the vision tasks (VTAB), all methods use the AdamW optimizer following standard practice in the VTAB benchmark suite. - [Lee19] Mixout: Effective regularization to finetune large-scale pretrained language models, C. Lee et al., ICLR 2019. - [Jiant20] Jiant 2.0: A software toolkit for research on general-purpose text understanding models, J. Phang et al., http://jiant.info/, 2020. - [Xu21] Raise a child in large language model: Towards effective and generalizable finetuning, R. Xu et al., EMNLP 2021. >**4. What happens if LoRA has the same number of parameters as BayesTune?** For NLP tasks, we have the same sparsity level ($p=0.005$) for all competing methods, so it is already a completely controlled comparison. For vision (VTAB) tasks, we think that the reported LoRA with dim=8 was the optimal hyperparameter choice. However, as the reviewer suggested, we can match the sparsity levels of LoRA and our BayesTune for a more controlled comparison. Please recall that LoRA-dim8 amounts to updating 0.29M parameters while BayesTune updated 0.38M parameters. Instead of increasing the dimension of LoRA, which might require corresponding re-tuning of other LoRA learner hyperparameters, thus potentially being unfair to LoRA, we instead decrease the sparsity level of our BayesTune so that we have the same 0.29M parameters updated.
The results are:

Model (# params updated) | Average Rank | # times Rank=1
:------: | :---: | :----:
LoRA (0.29M) | 2.68 | 4
BayesTune (0.38M) | 2.37 | 7
BayesTune (0.29M) | 2.58 | 6

So, even when the number of parameters is made equal, BayesTune outperforms LoRA. (Note here that we used the latest version of Table 3 in the Appendix, instead of Table 2.) Please also note that the other existing competitors in the VTAB comparison used to compute the ranks above are not parameter-count controlled like this, as parameter-count controlling was not standard practice in prior work. >**5. What's the advantage of posterior inference over MAP estimation in this setting, i.e. remove the last term in Eq.5?** MAP vs. SGLD: MAP aims to find a mode of the posterior distribution, which might be more sensitive to data noise than the mean of the posterior. The following is an empirical comparison between MAP and SGLD on NLP tasks, showing that this distinction does lead to empirical benefit:

Avg 10 runs | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC | **AVG**
:--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--:
MAP | 58.55 | 90.13 | 90.54 | 76.72 | 86.71 | 71.09 | 60.34 | 76.30
SGLD | 60.85 | 90.40 | 90.61 | 77.87 | 91.25 | 75.00 | 60.87 | 78.12

Another benefit of SGLD is that we can also exploit the variance of the $\lambda$ posterior in parameter selection (e.g., for two parameters with similar posterior mean $\lambda$ values, we prefer to select the one with smaller posterior variance). In future algorithms, we can also exploit this idea of variance-based weight pruning (but we have not done this yet). >**6. (Minor) Some figures are not in vector format and the fonts are too small. It would be interesting to see the performance of the proposed model on Large language models.** We apologize for this. After the review/rebuttal phase, we will replace them with vector-format figures. We plan to apply the method to LLMs even larger than RoBERTa, e.g., LLaMA, in our future work.
--- Rebuttal Comment 1.1: Title: Post rebuttal comment Comment: I would like to thank the authors for the detailed response; however, I cannot agree with the authors' argument regarding MCMC and hierarchical Bayes methods. - The authors should consider adding more discussion of the many prior works applying hierarchical Bayes priors in Bayesian deep learning. At this point, I did not see any citations for this literature. - It is NOT accurate to say that using the Metropolis-Hastings (MH) adjustment for SGLD is **inefficient**. If I understand correctly, the authors of [1] use MH to ensure the unbiasedness of the posterior approximation, which is not discussed in this submission. In my opinion, it is OK and common practice to drop the MH adjustment in Bayesian deep learning, but it is not reasonable to consider this an advantage. In fact, I believe the SGLD used in the submission is just the most standard SGLD without any modifications; with that said, I do agree that **it is a novel application of SGLD**, but the authors do not tailor the inference algorithm to the foundation-model fine-tuning setting. - I apologize for the confusion in my question regarding the choice of optimizer; the major question I have is: when implementing SGLD, do the authors strictly follow Eq.5 or use "Adam + gradient noise", a commonly seen but incorrect implementation of SGLD? In fact, if the authors need preconditioning and adaptive step sizes together with SGLD, they should consider the version of SGLD provided by the paper "Bayesian Neural Network Priors Revisited". - I would like to thank the authors for the additional experiments on LoRA rank; this resolves my concern. - MAP vs. SGLD: I would like to thank the authors for providing the additional experiments. It would be good if the authors can provide more comparison of the $\lambda$ values acquired and details on how the MAP is acquired in later revisions.
I decided to raise my score; however, I still believe the paper's details on the Bayes part should be discussed more clearly, as SGLD is an algorithm that involves many details, e.g. step-size scheduling, temperature setting, model/parameter ensembling, etc. Presenting the details (and potentially the sensitivity to those hyper-parameters) more clearly would allow better reproducibility. --- Reply to Comment 1.1.1: Title: Thank you for the post rebuttal comments! Comment: We thank the reviewer again for the valuable post-rebuttal comments. Our responses to the follow-up questions and comments are as follows: > **1. The author should consider adding more discussions on the many prior arts for hierarchical Bayes prior's application in Bayesian deep learning. At this point, I did not see any citations for these literatures.** We promise that we will add those papers on hierarchical Bayesian methods with applications to deep learning. We will extensively investigate the most relevant and recent prior works, and will include the following, as suggested by the reviewer: - [1] "Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference", Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim, ICML 2023 - [2] "Dropout as a structured shrinkage prior", Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth, ICML 2019 - [3] "Posterior concentration for sparse deep learning", Nicholas Polson, Veronika Rockova, NeurIPS 2018 - [4] "Bayesian compression for deep learning", Christos Louizos, Karen Ullrich, Max Welling, NeurIPS 2017 > **2. It is NOT accurate to say using Metropolis Hastings (MH) adjustment for SGLD is inefficient. If I understand correctly, the author of [1] uses MH to ensure the unbiasedness of posterior approximation, ...
I do agree that it is a novel application of SGLD but the authors do not tailor the inference algorithm to the foundation-model fine-tuning setting.** We agree with the reviewer's concerns and we will remove the parts where we said that MH is computationally inefficient. Our statement in the response was an over-exaggeration. Yes, we used basic settings for SGLD. Investigating further optimisation of SGLD, as the reviewer suggested, would be valuable and promising, and we will pursue this in future work. Thank you for your insightful comments! > **3. I apologize for the confusion in my question regarding the choice of optimizer, the major question I have is: When implementing SGLD, does the author strictly follow Eq.5 or uses "Adam + gradient noise" ...** We used the Adam optimiser for updating the model parameters, thus we believe that there was some sort of (internal) gradient adaptation and momentum effect under the hood. To be honest, we were not aware of the implementation detail that the reviewer mentioned, although we found some previously proposed strategies that consider adaptive drift and momentum in SGLD (e.g., https://arxiv.org/pdf/2009.09535.pdf). In this regard, we thank the reviewer for the in-depth comments. Even though we doubt that our SGLD update scheme with Adam leads to a significantly different solution compared to the original SGLD formulation, we will consider (and cite) the references the reviewer pointed out in our revised paper. > **4. MAP v.s. SGLD: I would like to thank the author for providing the additional experiments.
It would be good if the author can provide more comparison on the value of $\lambda$ acquired and details on how the MAP is acquired in later revisions.** *1) The learned $\lambda$ values: comparison between MAP and SGLD*: In response to the reviewer's request, we have prepared figures comparing the sparsity patterns of the learned $\lambda$s for our SGLD and the MAP solution on the NLP benchmarks (similar to Figure 6 and the rest in our submitted appendix). Due to the difficulty of sharing figures at this discussion stage, we only visualise a small snapshot as a table at the bottom of this thread. It is for the COPA task and shows module-wise sparsity patterns of the learned SGLD and MAP solutions. Visually, we find that they exhibit quite different sparsity patterns, indicating that the impact of the noise/drift term in our SGLD is significant. We will add the full figures for SGLD and MAP in our revised paper. *2) How the MAP is acquired*: For the MAP we dropped the last noise term in Eq.(5). The remaining steps are the same as in SGLD.
| Module# | Module name | SGLD (% updated) | MAP (% updated) |
| :--- | :--- | :---: | :---: |
| 150 | encoder.layer.9.attention.self.query.bias | 0.91 | 0.00 |
| 45 | encoder.layer.2.attention.output.LayerNorm.weight | 0.65 | 3.26 |
| 182 | encoder.layer.11.attention.self.query.bias | 0.78 | 0.00 |
| 93 | encoder.layer.5.attention.output.LayerNorm.weight | 0.52 | 3.12 |
| 178 | encoder.layer.10.output.dense.bias | 1.17 | 0.39 |
| 13 | encoder.layer.0.attention.output.LayerNorm.weight | 0.65 | 3.12 |
| 166 | encoder.layer.10.attention.self.query.bias | 0.91 | 0.13 |
| 4 | embeddings.LayerNorm.bias | 0.39 | 2.86 |
| 134 | encoder.layer.8.attention.self.query.bias | 1.04 | 0.26 |
| 99 | encoder.layer.5.output.LayerNorm.weight | 0.39 | 2.34 |
| 181 | encoder.layer.11.attention.self.query.weight | 0.73 | 0.05 |
| 115 | encoder.layer.6.output.LayerNorm.weight | 0.39 | 2.34 |

We will keep the reviewer's points in mind when we prepare a revised version. Thank you very much.
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose an automated sparse fine-tuning method for foundation models that bypasses the need for human intuition-based heuristics. The neurons to update are revealed during the posterior inference of the sparse scale parameters of a Laplace prior, by thresholding the scale parameters. The method is experimentally validated on both vision and NLP tasks. Strengths: The proposed method is principled, and relies on a hierarchical Bayesian model. Posterior approximation is done with Langevin MCMC, which does not introduce a significant computational overhead. The experimental results show that the proposed method convincingly improves upon already existing heuristics. Weaknesses: I wonder what's the rationale behind using the elbow rule (figure 1) to select the proportion p of parameters to update. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Why was the Gamma distribution chosen as a hyperprior? - If the step after SGLD is to evaluate the mean of the scale parameters, aren't there cheaper methods that evaluate the mean without stochastically approximating the posterior? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**1. I wonder what's the rationale behind using the elbow rule (figure 1) to select the proportion** $p$ **of parameters to update.** This is just an illustration of one potential heuristic method to select the sparsity level $p$. Other criteria could also be used. >**2. Why was the Gamma distribution chosen as a hyperprior? If the step after SGLD is to evaluate the mean of the scale parameters, aren't there cheaper methods that evaluate the mean without stochastically approximating the posterior?** We adopt the Gamma because with $\alpha<1$ it has its mode at $0$ and is decreasing, thus allowing us to express our a priori preference for small $\lambda$. But any other distribution with this property could also be used as a hyperprior. *Regarding the SGLD posterior mean*: Yes, one can use other methods to evaluate the posterior mean or any surrogate for it; e.g., SWAG or EMA could potentially be used to estimate the posterior mean. However, SGLD is as efficient as these alternative methods, and benefits from the fact that SGLD's stochastic dynamics are (in theory) guaranteed to recover the exact posterior mean. --- Rebuttal Comment 1.1: Comment: Thank you for your answers! --- Reply to Comment 1.1.1: Title: Thank you very much! Comment: Thank you very much!
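The $\alpha < 1$ rationale given in this rebuttal is easy to check numerically: up to a constant, the Gamma log-density is $(\alpha-1)\log\lambda - \beta\lambda$, which is strictly decreasing in $\lambda$ whenever $\alpha < 1$, so smaller scales are preferred a priori. A quick sketch (the specific $\alpha, \beta$ values are illustrative only):

```python
import math

def gamma_log_density(lam, alpha, beta):
    """Unnormalized Gamma(alpha, beta) log-density over a scale lambda > 0."""
    return (alpha - 1.0) * math.log(lam) - beta * lam

# With alpha < 1 the density is monotonically decreasing: the hyper-prior
# prefers small lambda, i.e. weights that stay at their pre-trained values.
alpha, beta = 0.5, 1.0
vals = [gamma_log_density(l, alpha, beta) for l in (0.01, 0.1, 1.0, 10.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

Any other density with this monotone-decreasing shape would express the same preference, as the authors note.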
GLOBER: Coherent Non-autoregressive Video Generation via GLOBal Guided Video DecodER
Accept (poster)
Summary: This paper proposes a new video generation framework based on extracting global features of the video and using a conditional diffusion model to predict frame features, leading to the frames. The paper argues that the proposed method outperforms prior video generation methods on various benchmarks, including UCF-101, Taichi-HD, and SkyTimelapse. Strengths: - Compared with prior video generation methods, the proposed method considers a non-autoregressive approach for generating the video, which can improve efficiency at inference time. - The proposed method shows better performance compared with prior works. Weaknesses: - The overall framework is quite complex, includes many notations, and is a bit difficult to follow. For instance, why is $I_j$ put into the video decoder model as well as $I_i$ in Figure 2? Moreover, are the KL-VAE, the video encoder/decoder, and the discriminator jointly trained or not? What is the intuition behind making the video decoder network a conditional diffusion model instead of simple 2D CNNs? Why do we need to consider "keyframes" for extracting global features from a given video? Is the DiT for modeling global features trained in a post-hoc manner after the training of the entire framework? - The paper misses an efficiency comparison with recent latent video diffusion models that improve training and inference efficiency, e.g., LVDM [He et al., 2023] and PVDM. Compared with these frameworks, what are the advantages and disadvantages of the method? - Typo: Specificcally -> Specifically in L197.
--- [He et al., 2023] Latent Video Diffusion Models for High-Fidelity Long Video Generation [Yu et al., 2023] Video Probabilistic Diffusion Models in Projected Latent Space, CVPR 2023 Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - I guess the proposed method may show worse performance if the target video length becomes large, because the quality of the global features might be limited and the decoder that synthesizes a frame in a frame-index-conditioned manner has limited capacity. What is the (empirical) maximum length for high-quality modeling with this framework? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The paper adequately addresses the limitations in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### R4.1 To facilitate following our framework. [Author] To facilitate following, we will release our code and checkpoints as stated in the footnote of the paper.

#### R4.1.1 Why is Ij put into the video decoder as well as Ii in Figure 2? [Author] In Figure 2, we depict the training procedure of our framework. As specified in L164-166 and L186-188, the video discriminator takes paired frames <xi,xj> as inputs during training. Thus Ij should be fed into the video decoder as well as Ii to obtain the corresponding frames xi and xj.

#### R4.1.2 Are the KL-VAE, video encoder/decoder, and discriminator jointly trained or not? [Author] They are not all jointly trained. As specified in L102-104, the KL-VAE is pretrained and fixed, while the video encoder/decoder and the video discriminator are trained jointly.

#### R4.1.3 What is the intuition behind making the video decoder a conditional diffusion model instead of simple 2D CNNs? [Author] Our video auto-decoder uses far fewer latents to encode an input video than previous methods, as presented in Table Q1.3 (in **[common question 1]**). We therefore rely on the powerful generative capability of the conditional diffusion model to reconstruct the local characteristics of video frames under the guidance of the video global features. Moreover, training cost can be reduced by initializing the conditional diffusion model with the parameters of a successful image diffusion model, as specified in L49-50.

#### R4.1.4 Necessity of "keyframes" for extracting global features from a given video. [Author] It is time- and computation-consuming to process all video frames when extracting global features, especially for long videos such as 128-frame ones. As depicted in Fig. R4.1.4 **[in the PDF file]**, as the number of input video frames increases, the maximum training batch size decreases dramatically and the required training time grows sharply. 
Thus it is necessary to use keyframes rather than all video frames to obtain better training efficiency.

#### R4.1.5 Is DiT trained in a post-hoc manner? [Author] Yes, as specified in L191-194, DiT is trained separately to fit the distribution of global features for video generation.

### R4.2 Comparison with LVDM and PVDM. We find that PVDM has not provided generation code and model checkpoints, and LVDM only released checkpoints and scripts for short-video generation. Thus, we follow the experimental setups in PVDM to compare the efficiency of PVDM, LVDM, and our GLOBER. The results are reported in Table Q1.2 (in [common question 1]). We discuss the advantages and disadvantages of our method as follows:

**Advantages:**
- GLOBER can take advantage of the powerful generative capability of pretrained image diffusion models (e.g. Stable Diffusion) to synthesize reconstructed video frames, thus requiring a much smaller dimension of latent features to represent videos, as demonstrated in Table Q1.3 (in [common question 1]).
- GLOBER is more flexible than PVDM and LVDM when decoding video frames from video latents. The video decoder in GLOBER can decode arbitrary video frames without length or interval limitations by taking the normalized indexes of target video frames as inputs.
- GLOBER is more efficient than PVDM and LVDM when training for video generation and when synthesizing long videos, as demonstrated in Table Q1.2 (in [common question 1]).
- As reported in Table R4.2.1, GLOBER obtains better performance than PVDM and LVDM on UCF-101 for 16-frame video generation and on SkyTimelapse for 128-frame video generation.

Table R4.2.1 Quantitative comparison for video generation at 256^2 resolution. N/M-s for PVDM denotes N DDIM steps for generating the initial video clip and M DDIM steps for synthesizing each following video clip. N/M-s for GLOBER denotes N DDIM steps for generating the global features and M DDIM steps for decoding the video frames. 
| Method | FVD16 (UCF-101) | Total Sampling Steps | FVD128 (SkyTimelapse) | Total Sampling Steps |
|:-:|:-:|:-:|:-:|:-:|
| StyleGAN-V | 1431.0 | - | 197.0 | - |
| LVDM | 372 | - | 185.0 | - |
| PVDM-S; 100/20-s | 457.4 | 100 | 159.9 | 240 |
| PVDM-L; 200/200-s | 398.9 | 200 | 137.2 | 1600 |
| PVDM-L; 400/400-s | 343.6 | 400 | 125.2 | 3200 |
| GLOBER (ours); 50/50-s | 252.7 | 100 | 125.5 | 100 |
| GLOBER (ours); 100/100-s | **248.9** | 200 | **122.4** | 200 |

**Disadvantages of GLOBER:**
- PVDM and LVDM pose stronger constraints on the correlations of adjacent video frames than GLOBER, and thus obtain better performance than GLOBER on short-video generation on simple-domain datasets like SkyTimelapse and on long-video generation on multi-motion datasets like UCF-101.

### R4.3 Spelling error. [Author] Thanks, we will fix the spelling error in the revision.

### R4.4 Explore the capacity of global features and the video decoder. [Author] To explore the empirical maximum length of our method, we first calculate the distribution of video lengths in the SkyTimelapse and TaiChiHD datasets. As depicted in Fig. R4.4.1(a) **[in the PDF file]**, TaiChiHD contains many more long videos than SkyTimelapse, so we conduct analysis experiments on the TaiChiHD dataset. We train our video auto-encoder to model L-frame videos with (EXP1) L=16, FPS=16, 1s; (EXP2) L=64, FPS=16, 4s; (EXP3) L=128, FPS=16, 8s; (EXP4) L=256, FPS=16, 16s; (EXP5) L=1024, FPS=32, 32s. In particular, EXPi loads the last checkpoint of EXPi-1 and is trained for 1500 epochs (15h). As depicted in Fig. R4.4.1(b), the performance of the different models is comparable when the video length is no more than 512, while a significant performance drop can be seen when the video length reaches 1024, due to the increase in video information. Moreover, we find that frame interpolation (FI) can help our model obtain longer videos with comparable FVD scores. 
In conclusion, our global features can capture a video of up to 16 seconds well, and our video decoder can decode videos at an FPS of at least 32.

--- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the response. It helps me a lot to understand the details of the method. However, at the current status, it is difficult for me to recommend acceptance. Specifically, I still have doubts about the capability of your video autoencoder to encode possibly long videos through relatively low-dimensional latents (4,096). The response states that the proposed method can compress a (128, 256, 256, 3) video to 4,096, but I really don't think this can achieve high-quality reconstructions, especially on complex datasets (e.g., UCF-101, Kinetics, and so on). In the authors' response to my review, the authors do not provide any reconstruction/generation results on such long videos with complex datasets. Without this result, it is hard for me to believe whether this method indeed scales up well to large-scale and complex datasets. In addition, the authors state "PVDM and LVDM obtain better performance than GLOBER on short-video generation on simple domain datasets like SkyTimelapse and long-video generation on multi-motion datasets like UCF-101."; I think short-video generation on simple-domain datasets and long-video generation on multi-motion datasets have no similarity, and thus the analysis provided in the response is not that insightful. Considering all of these aspects, I will retain my score.

--- Reply to Comment 1.1.1: Title: Response to Reviewer P49b Comment: Thank you for your response! 
In response to your two questions, we have the following explanations:

### R4.5: Generation results on long videos with complex datasets
**We have provided the qualitative and quantitative results for 128-frame video generation on the UCF-101 dataset in Section A.2 (L9-18) of the appendix.** As reported in Table 1 of the appendix (copied below), our GLOBER outperforms the previous methods StyleGAN-V (CVPR2022) and VIDM (AAAI2023) by a large margin. We also visualize the generated long videos on the UCF-101 and SkyTimelapse datasets at the link provided in L21 of the appendix. Moreover, for short video generation, our method obtains performance comparable to current SOTA models on the much more complex WebVid-10M dataset, as reported in Table Q1.4 of common question 1.

Table 1: Quantitative FVD comparison on the SkyTimelapse and UCF-101 datasets for 128-frame long video generation.
| Method | UCF-101 | Sky Timelapse |
|:-|:-:|:-:|
| MoCoGAN [CVPR18] | 3679.0 | 575.9 |
| +StyleGAN2 backbone | 2311.3 | 272.8 |
| MoCoGAN-HD [ICLR21] | 2606.5 | 878.1 |
| DIGAN [ICLR22] | 2293.7 | 196.7 |
| StyleGAN-V [CVPR22] | 1773.4 | 197.0 |
| VIDM [AAAI23] | 1531.9 | 140.9 |
| GLOBER (ours) | **1177.4** | **125.5** |

### R4.6: Explanation of model performance compared to PVDM and LVDM
**Performance on the multi-motion dataset** Since PVDM and LVDM auto-encode a fixed number of video frames while our GLOBER pursues flexible decoding and uses far fewer latent elements, PVDM and LVDM can impose stronger constraints on the consistency of decoded video frames than our GLOBER when the video motion is dramatic, e.g. long videos in a multi-motion dataset. Thus they obtain better performance on 128-frame video generation on UCF-101. 
However, for short video generation on the multi-motion dataset, our GLOBER can obtain comparable video consistency and much better video realism: PVDM and LVDM require far more elements to represent a video clip than GLOBER (Table Q1.3 in common question 1), making it difficult for their video generators to fit the distribution of the video latents, while our GLOBER employs the powerful diffusion model as the video decoder.

**Performance on the simple-domain dataset** The key reason PVDM outperforms our GLOBER for short-video generation on simple-domain datasets like SkyTimelapse is that videos in such datasets contain mostly simple and static scenes (city or nature scenes), so the deterministic video decoder in PVDM may obtain better video reconstruction than our diffusion video decoder. Notably, although LVDM also employs a deterministic video decoder, it performs worse than our GLOBER (95.2 vs. 78.1 FVD), since it requires three times as many latent features as our method, making the latent distribution too difficult to fit well. When the video length increases, the video consistency of our GLOBER remains comparable with theirs because these videos contain only small motions (clouds floating and other variations of the sky), while the video realism of PVDM and LVDM drops due to error accumulation (both employ an auto-regressive generation strategy). Thus our GLOBER obtains a better score than PVDM and LVDM in this case.

--- Reply to Comment 1.1.2: Title: For Reviewer P49b Comment: Dear Reviewer P49b, There is not much time left in the discussion stage; if you still have questions about our work, please let us know and we will reply as soon as possible. Thanks for your effort and time! Best wishes, Author
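To make the "Total Sampling Steps" column in Table R4.2.1 above concrete: PVDM synthesizes a 128-frame video autoregressively as eight 16-frame clips (N steps for the first clip, M for each of the remaining seven), while GLOBER needs one N-step pass for the global features plus one M-step decoding pass. A minimal sketch of that arithmetic (function names are ours, for illustration only):

```python
def pvdm_total_steps(n_first, m_rest, total_frames=128, clip_len=16):
    """Autoregressive sampling: one clip at n_first steps, the remaining clips at m_rest each."""
    n_clips = total_frames // clip_len
    return n_first + (n_clips - 1) * m_rest

def glober_total_steps(n_global, m_decode):
    """Non-autoregressive: one pass for global features, one pass to decode all frames."""
    return n_global + m_decode

# Values matching the 128-frame column of Table R4.2.1:
print(pvdm_total_steps(100, 20))    # PVDM-S 100/20-s   -> 240
print(pvdm_total_steps(200, 200))   # PVDM-L 200/200-s  -> 1600
print(pvdm_total_steps(400, 400))   # PVDM-L 400/400-s  -> 3200
print(glober_total_steps(50, 50))   # GLOBER 50/50-s    -> 100
```

This also shows why GLOBER's step count is independent of video length, whereas PVDM's grows linearly with the number of clips.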
Summary: This work introduces a novel non-autoregressive method, GLOBER, that first generates global features for comprehensive global guidance and then synthesizes video frames based on these global features to produce coherent videos. The authors propose a video auto-encoder to encode videos into global features and a video decoder to decode the global features and synthesize video frames in a non-autoregressive manner. Notably, the video decoder uses normalized frame indexes to perceive temporal information, allowing it to synthesize any video clips with predetermined frame indexes. The authors also introduce a unique adversarial loss to enhance global coherence and local realism of the synthesized video frames. Finally, a diffusion-based video generator is employed to fit the global features produced by the video encoder for video generation. The effectiveness and efficiency of the proposed method are demonstrated through extensive experiments, and it sets new state-of-the-art results on multiple benchmarks. Strengths: 1. The inclusion of Coherence and Realism Adversarial Loss is a novel approach compared to previous diffusion-based architectures. 2. Extensive experiments have been performed on various benchmarks, all demonstrating the significance of GLOBER. Weaknesses: 1. The authors identify VideoFusion as the most closely related work due to its use of non-autoregressive generation. However, there are other public models, such as ModelScope Text-to-Video, that use non-autoregressive generation in the latent space similarly to the authors' work. I suggest that the authors compare their work to these models as well. 2. It appears that GLOBER outperforms VideoFusion in all tasks, which generates videos in the pixel space. This superiority seems to result from the CRA loss proposed by the authors. Therefore, a direct comparison between GLOBER (without CRA loss) and VideoFusion would be intuitive. 
However, inconsistent results in Table 1 and 3 make this comparison unfeasible. Could the authors explain this inconsistency and provide justifications for GLOBER's superiority over VideoFusion, aside from the CRA loss? 3. My interpretation of Equation 9 suggests it's an estimation of the frame feature. However, this estimation might not be accurate because, like in DDPM (or DDIM), one could perform T steps of reverse denoising to generate images. What is the quality difference between these two types of images? I assume that the frame feature generated by Equation 9 will be of lower quality. 4. In Table 2, why does GLOBER use 50+50 diffusion steps? 5. The optimization objective of Equation 2 in the Video Encoding section appears to be derived from the Variational Autoencoder. Could the authors provide a justification for this design? My understanding is that video encoding trains a dataset-dependent distribution to be sampled as z_t. 6. In Line 127, "Gauss distribution" appears to be misspelled. 7. The authors seem to have overlooked specifying the dimension of C'. My score could be revised upward if my concerns are adequately addressed. **Reference:** [1] ModelScope Text-to-Video Technical Report, arXiv. (Model: **https://modelscope.cn/models/damo/text-to-video-synthesis/summary**). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: In this work, the authors mentioned limitations and broader impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### R3.1 More comparison with contemporary methods. We present more efficiency and quality comparisons with contemporary methods such as ModelScope, LVDM, and PVDM in our global rebuttal **[Common Question 1]**; please view it for more details.

### R3.2
#### R3.2.1 Inconsistency of Table 1 and Table 3. [Author] Table 1 reports the generation performance that involves both the video auto-encoder and DiT, while Table 3 measures the reconstruction FVD that only involves the video auto-encoder, as specified in L276-277. We will change the FVD in Table 3 to FVDrec to avoid confusion in the revised paper.
#### R3.2.2 Comparing VideoFusion and GLOBER without CRA would be unfair. [Author] Different from VideoFusion, which naturally incorporates constraints on the local consistency of frame-wise characteristics by decomposing video components, our GLOBER requires the CRA loss to obtain local constraints by penalizing video frames that violate such consistency. Thus it would be unfair to compare our GLOBER without the CRA loss against VideoFusion.

### R3.3 The quality difference between the two types of images is acceptable. [Author] As depicted in Fig. R3.3.1 **[in the PDF file]**, the quality deterioration in estimated video frames is acceptable for most samples. This is reasonable since the diffusion process is guided by the global features, which contain sufficient local and global information.

### R3.4 Why we use 50+50 diffusion steps in GLOBER. [Author] As specified in L220-224, 50+50 diffusion steps are the default setting for all our experiments. This setup follows Stable Diffusion, which uses 50 DDIM steps by default, and achieves good efficiency and generation quality, as reported in our experiments.

### R3.5 The design of Equation 2. [Author] Our target is exactly to train global features to conform to a dataset-dependent distribution similar to z_t. 
In fact, the distribution of z_t is also pushed towards a standard normal distribution during training by employing the KL loss (Equation 2) to avoid high shift and high variance, as specified in [1]. The formulation of the KL loss is derived from the KL divergence between two multivariate Gaussian distributions (one is the distribution output by the video encoder and the other is the standard Gaussian distribution), and the KL penalty is slight, with a small loss weight of 1e-6 as specified in L217, following [1]; thus the final distribution of global features remains dataset-dependent. The effectiveness of the KL loss has been proven in [1]. For better understanding, we conduct an ablation study on the variance prediction and the KL loss. As reported in Table R3.5.1, removing variance prediction brings improvements in FVDrec, but deteriorates FVDgen significantly, since the video decoder is no longer robust to disturbances in the generated features. Adding variance prediction improves the generation performance to some extent. The performance improves further after employing the KL loss, since the KL loss brings a negligible decrease in FVDrec but can effectively help DiT model the distribution of global features using diffusion theory.

Table R3.5.1 Ablation study on variance prediction and the KL loss. All experiments are conducted on the TaiChiHD dataset for 16-frame video generation at 256^2 resolution. Autoencoders are trained for 1500 epochs (15h) and DiTs are trained for 2000 epochs (14h).
| Design of the video auto-encoder | FVDrec | FVDgen |
| :-: | :-: | :-: |
| Deterministic (w/o variance) | 68.9 | 773.4 |
| +variance prediction (w/o KL loss) | 71.5 | 549.5 |
| +variance prediction + KL loss | 75.3 | 332.7 |

[1] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models".

### R3.6 Spelling error. [Author] Thank you for bringing this to our attention. 
We will rectify the spelling error in the revision.

### R3.7 The dimension of C'. [Author] The dimension of C' is 4 for 256x256 resolution and 3 for 128x128 resolution. We will add an explanation in the revision.

--- Rebuttal Comment 1.1: Title: Post-rebuttal discussions Comment: Dear authors of Paper 6343, Many thanks for your detailed reply. It solves most of my concerns. I have one follow-up question about Question 3.5. Does the variance prediction in Table R3.5.1 indicate the sampling process in Eq. 1? By the way, is the auto-encoding stage in [1] trained in an end-to-end manner with the denoising UNet using the KL loss, like what GLOBER does? I look forward to your reply. Kind regards, Reviewer JPZt

--- Reply to Comment 1.1.1: Title: Response to Reviewer JPZt Comment: Thank you for your response! For the first question, **yes**, the variance prediction in Table R3.5.1 is the sampling process in Eq. 1. For the second question, **no**, the auto-encoding stage in [1] utilizes simple CNNs as its encoder and decoder. The overall structure of the auto-encoder in [1] (i.e. KL-VAE) is similar to the traditional VQ-VAE [2] or VQ-GAN [3], except that KL-VAE represents images with continuous latent features while VQ-VAE and VQ-GAN represent images with discrete tokens through vector quantization, which is definitely different from our model. [2] Zero-Shot Text-to-Image Generation. [3] Taming Transformers for High-Resolution Image Synthesis.
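As an illustration of the lightly weighted KL penalty discussed in R3.5 (our sketch, not the authors' code): for a diagonal Gaussian N(mu, sigma^2) predicted by an encoder, the closed-form KL divergence to the standard normal N(0, I) is 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2), and it enters the total loss with a small weight (1e-6 per L217 of the paper), so the latent distribution is only mildly regularized:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - 1.0 - logvar)

# A zero-mean, unit-variance prediction incurs no penalty:
print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # -> 0.0

# A shifted prediction is penalized, but the tiny weight keeps the
# regularization slight, so the latent distribution stays dataset-dependent:
kl_weight = 1e-6
penalty = kl_weight * kl_to_standard_normal(np.ones(4), np.zeros(4))
```

`kl_to_standard_normal(np.ones(4), np.zeros(4))` evaluates to 2.0, so the weighted penalty above is only 2e-6.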
Summary: In this paper, the author studies the text-to-video task and proposes a method called GLOBER. The proposed method first generates a global guidance feature, then the video frames are generated through a diffusion model that takes the frame index as a condition. An adversarial loss is also proposed to improve global coherence and local realism. Strengths: 1. The overall presentation of the proposed method is clear and easy to follow. 2. The author conducts experiments on three widely used datasets (e.g., Sky Time-lapse, TaiChi-HD and UCF-101). Weaknesses: 1. As the author claims their method is capable of generating videos from text. It would be great if the author could compare their methods with SOTA open-sourced methods (DAMO-text2video, VideoCrafters, CogVideo and VideoFactory) on WebVid-10M. 2. The author should also consider comparing their methods with PVDM. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### R2.1 Experiments on WebVid-10M. [Author] We present a quality comparison with ModelScope (DAMO-text2video) [1], VideoCrafters [2], LVDM [3], and VideoFactory [4] on WebVid-10M in our global rebuttal **[Common Question 1]**; please view it for more details. We find that the open-sourced code of CogVideo does not support parallel generation given different input descriptions and requires ~10 minutes to synthesize a video, which is time-consuming; thus we only add a comparison with CogVideo on the UCF-101 dataset. [1] VideoFusion: Decomposed diffusion models for high-quality video generation. 2023. CVPR. [2] Inference with code from https://github.com/VideoCrafter/VideoCrafter [3] Latent video diffusion models for high-fidelity long video generation. 2022. ARXIV. [4] VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation. 2023. ARXIV.

### R2.2 Comparison with PVDM. Thanks for bringing this to our attention. We discuss the differences between PVDM and our GLOBER, and then analyze the advantages and disadvantages of our GLOBER compared to PVDM as follows (all reported results of GLOBER are evaluated using previous checkpoints); we will add this discussion to the paper later.

#### Differences
- Motivation and auto-encoder: To address the computation- and memory-inefficiency of video generation, PVDM utilizes 3D-to-2D projection to encode a video into three 2D latent features, and reconstructs videos in a deterministic manner, where the video decoder directly decodes the local characteristics of reconstructed frames from the latent features. Different from PVDM, our GLOBER focuses on providing global guidance for long-video generation and encodes a video into a single latent feature. GLOBER thus utilizes the powerful generative capability of a pretrained image diffusion model to synthesize the local characteristics of reconstructed images, which significantly reduces the total dimension of the flattened latent features. 
- Video generator and generation strategy: The video generator of PVDM has to fit three 2D latents simultaneously for each video, and models long video generation in an autoregressive manner (i.e. generating the following 16 frames given the previous 16 generated frames). In contrast, the video generator of GLOBER only needs to model one 2D latent per video and employs a non-autoregressive generation strategy to synthesize long videos (i.e. generating 128 frames in one sampling).

#### Advantages of GLOBER compared to PVDM:
- GLOBER can take advantage of the powerful generative capability of pretrained image diffusion models (e.g. Stable Diffusion) to synthesize the local characteristics of reconstructed video frames under the guidance of global features, thus requiring a much smaller dimension of latent features to represent videos, as demonstrated in Table R2.2.1.

Table R2.2.1 Comparison of the dimension of the flattened latent features for representing a T-frame video at HxW resolution. LVDM encodes videos using a 3D CNN with spatial and temporal downsampling rates of 8 and 4 respectively, obtaining latents of shape (T//4, H//8, W//8, 3). PVDM utilizes 3D-to-2D projection to encode a video into three 2D latents of shapes (T, H//d, 4), (T, W//d, 4) and (H//d, W//d, 4) respectively, with d=8 by default. GLOBER extracts video global information and encodes an input video into a latent feature of shape (H//16, W//16, 16).
| (T, H, W, C) | LVDM | PVDM | GLOBER (ours) |
| :---: | :---: | :---: | :---: |
| (16, 256, 256, 3) | 12288 | 8192 | **4096** |
| (128, 256, 256, 3) | 98304 | 65536 | **4096** |

- GLOBER is more flexible than PVDM when decoding video frames from video latents. In PVDM, the video decoder reconstructs video frames with fixed length (i.e. 16 frames) and fixed interval (i.e. predefined FPS). 
However, the video decoder in GLOBER can decode arbitrary video frames without any length or interval limitation by taking the normalized indexes of the target video frames as inputs.
- GLOBER is more efficient than PVDM when training for video generation and when synthesizing long videos, as demonstrated in Table Q1.2 in **[Common Question 1]**.
- As reported in Table R2.2.2, GLOBER obtains better performance than PVDM on UCF-101 for 16-frame video generation and on SkyTimelapse for 128-frame video generation. This is reasonable: by initializing from a pretrained image diffusion model, the video decoder in GLOBER becomes much more powerful than that in PVDM, enhancing the generation of complex videos like UCF-101. For the simple-domain dataset SkyTimelapse, the global guidance provided by our GLOBER improves the global coherence of synthesized long videos, thus obtaining a better FVD.

Table R2.2.2 Quantitative comparison for video generation at 256x256 resolution. N/M-s for PVDM denotes N DDIM steps for generating the initial video clip and M DDIM steps for synthesizing each following video clip. N/M-s for GLOBER denotes N DDIM steps for generating the global features and M DDIM steps for decoding the video frames. 
| Method | FVD16 (UCF-101) | Total Sampling Steps | FVD128 (SkyTimelapse) | Total Sampling Steps |
| :---: | :---: | :---: | :---: | :---: |
| StyleGAN-V | 1431.0 | - | 197.0 | - |
| PVDM-S; 100/20-s | 457.4 | 100 | 159.9 | 240 |
| PVDM-L; 200/200-s | 398.9 | 200 | 137.2 | 1600 |
| PVDM-L; 400/400-s | 343.6 | 400 | 125.2 | 3200 |
| GLOBER (ours); 50/50-s | 252.7 | 100 | 125.5 | 100 |
| GLOBER (ours); 100/100-s | **248.9** | 200 | **122.4** | 200 |

#### Disadvantages of GLOBER compared to PVDM:
- PVDM poses stronger constraints on the correlations of adjacent video frames than GLOBER, thus obtaining better performance than GLOBER in short-video generation on simple-domain datasets like SkyTimelapse and in long-video generation on multi-motion datasets like UCF-101.

--- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I think most of my concerns have been well addressed. I will raise my score.

--- Reply to Comment 1.1.1: Title: Response to Reviewer xuBL Comment: Thank you for your recognition of our work! We will incorporate the above discussions when we revise our paper.
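The flattened-latent counts in Table R2.2.1 above follow directly from the shapes stated in its caption. A minimal sketch of that arithmetic (our code, not the authors'; the PVDM count assumes one set of three 2D latents per 16-frame clip, matching its autoregressive decoding of long videos):

```python
def lvdm_latent(T, H, W):
    # 3D CNN: temporal downsampling /4, spatial /8, 3 channels -> (T//4, H//8, W//8, 3)
    return (T // 4) * (H // 8) * (W // 8) * 3

def pvdm_latent(T, H, W, d=8, clip_len=16):
    # Three 2D latents per 16-frame clip: (clip, H//d, 4), (clip, W//d, 4), (H//d, W//d, 4)
    per_clip = clip_len * (H // d) * 4 + clip_len * (W // d) * 4 + (H // d) * (W // d) * 4
    return (T // clip_len) * per_clip

def glober_latent(T, H, W):
    # One time-independent global feature: (H//16, W//16, 16), regardless of T
    return (H // 16) * (W // 16) * 16

for T in (16, 128):
    print(T, lvdm_latent(T, 256, 256), pvdm_latent(T, 256, 256), glober_latent(T, 256, 256))
# 16  12288  8192 4096
# 128 98304 65536 4096
```

The constant 4096-element latent for GLOBER, independent of T, is what the flexibility and efficiency claims above rest on.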
Summary: The study introduces a unique non-autoregressive approach called GLOBER. This method initially generates global features, offering thorough global guidance, and then synthesizes video frames from these global features to produce cohesive videos. Furthermore, the study proposes a coherence and realism adversarial loss to improve the quality of the videos. Strengths: The suggested non-autoregressive technique is simple and effective. The empirical tests and ablation studies conducted are adequate. Weaknesses: For Table 2, the absence of some contemporary methods such as ModelScope implies that the assertion regarding inference time and GPU memory may not be as robust as claimed. What strategies are in place to ensure that the distribution of global features produced by DiT during inference aligns with the features acquired by the video encoder during the training phase? I'm intrigued to find out whether this non-autoregressive approach is effective with lengthy videos, for example, those consisting of 128 or 256 frames. If it's not, the benefits of this method could be significantly reduced. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### R1.1 More efficiency comparison with contemporary methods. [Author] We present more efficiency and quality comparisons with contemporary methods such as ModelScope, LVDM, and PVDM in our global rebuttal **[Common Question 1]**; please view it for more details.

### R1.2 Align the distribution of global features. [Author] We first train the video auto-encoder together with the discriminator until convergence, and then train DiT with the parameters of the video auto-encoder fixed to ensure the alignment. As specified in L53-54 and L191-194, DiT is an independent generative model and is optimized after we finish training the video auto-encoder. The training setups for the video auto-encoder and DiT are also presented separately in Section 4.1.

### R1.3 The effectiveness of GLOBER at synthesizing long videos. [Author] Given that 128 is a commonly used length when testing long video generation, we have presented quantitative and qualitative results of 128-frame video generation on the UCF-101 and Sky Time-lapse datasets at 256x256 resolution in the appendix, which demonstrate the effectiveness of our GLOBER for long video generation. Samples are also visualized at the link in Section A.3 of the appendix.

--- Rebuttal Comment 1.1: Comment: Thanks for your responses. My concerns have been well addressed. But as a video generation paper, it is necessary to provide the video results, in addition to quantitative results. Could you please provide an anonymous link containing video results? As indicated in PC emails, such a link is allowed and encouraged for video generation papers.

--- Reply to Comment 1.1.1: Title: Response to Reviewer 4poX Comment: We have provided an anonymous link in Section A.3 (L20-21) of the appendix: https://anonymouss765.github.io/GLOBER. Please refer to this link for both short and long video samples! Thank you for your response!
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for your valuable reviews and suggestions. We have carefully replied to each question, and we welcome further discussion!

### [Common Question 1] More comparison with contemporary methods.

### Efficiency comparison: [Author] We add comparisons with contemporary methods such as ModelScope and LVDM in Table Q1.1, where all settings are the same as in Table 2. We also compare with the recent method PVDM, as reported in Table Q1.2, using a 3090 GPU due to the lack of 3090 Ti GPUs. Our method obtains outstanding efficiency for the following two reasons:
- 1) We adopt a non-autoregressive strategy to synthesize all video frames with only one sampling, while most methods employ either an autoregression or an interpolation strategy, which requires multiple samplings to create a long video.
- 2) We use a latent vector (i.e. global feature) with the fewest elements to represent an input video, as demonstrated in Table Q1.3.
Notably, many methods (including VIDM, VDM, and VideoFusion) model video generation at the frame level, thus being time-consuming.

Table Q1.1 Comparison of sampling time/memory of different methods for generating multiple video frames at 256x256 resolution, with batch size 1, default diffusion steps, and comparable GPU memory on a V100 GPU. F denotes the number of video frames.
| | VIDM | VDM | LVDM | Modelscope | VideoFusion | TATS | GLOBER (ours) |
| :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| DDIM Steps | 100 | 100 | 50 | 50 | 50 | N/A | 50+50 |
| F=16 | 192s/20G | 125s/11G | 75s/9G | 31s/6G | 22s/7G | **6s**/16G | **6s**/7G |
| F=32 | 375s/20G | 234s/11G | 141s/13G | 48s/8G | 39s/9G | 26s/16G | **11s**/11G |
| F=64 | 771s/20G | 329s/11G | 288s/20G | 82s/12G | 76s/13G | 65s/16G | **21s**/19G |

Table Q1.2 Maximum training batch size and required inference time (in seconds) for different methods to synthesize a 256x256 resolution video. 
Results with * are taken from PVDM and measured with a single NVIDIA 3090 Ti 24GB GPU. The rest are evaluated by us on a single NVIDIA 3090 24GB GPU due to the lack of a 3090 Ti. LVDM has not released models and scripts for 128-frame video generation. Limited by memory, our GLOBER decodes a 128-frame video by decoding every 32 video frames in parallel, without overlap. DiT is the video generator of GLOBER. | | Train Batch Size | Inference Time(16-frame) | Inference Time(128-frame) | | :-------: | :-------: | :-------: | :-------: | | TATS* | - | 84.8 | 434 | | VideoGPT* | - | 139 | N/A | | VDM* | - | 113 | N/A | | LVDM | - | 98 | N/A| |PVDM-L*| 2 | 20.4 | 166 | |GLOBER(ours) | 4 | 21.4 | 145.7 | |GLOBER(DiT only) | **8** | **3.57** | **3.57** | Table Q1.3 Model design of different methods and comparison of the number of elements used to represent a T-frame video with HxW resolution. LVDM encodes videos using a 3D CNN with spatial and temporal downsampling rates of 8 and 4 respectively, obtaining latent features of shape (T//4, H//8, W//8, 3). PVDM utilizes a 3D-to-2D projection to encode a video into three 2D latent features of shape (T, H//d, 4), (T, W//d, 4) and (H//d, W//d, 4) respectively, with d=8 by default. TATS uses a 3D VQGAN to encode videos with spatial and temporal downsampling rates of 8 and 4 respectively, obtaining a discrete video latent feature of shape (T//4, H//8, W//8). Modelscope adopts a KL-VAE to encode videos frame by frame, obtaining a video latent feature of shape (T, H//16, W//16, 4). GLOBER encodes the global video information into a latent feature of shape (H//16, W//16, 16), which is time-independent and can be flexibly decoded into video frames at arbitrary FPS. 
| | VIDM | VDM | LVDM | PVDM | VideoFusion | TATS | Modelscope | GLOBER (ours) | | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | Non-autoregressive | &#10008; | &#10008; | &#10008; | &#10008; | &#10004; | &#10008; | &#10004; | &#10004; | | Video Encoding | &#10008; | &#10008; | &#10004; | &#10004; | &#10008; | &#10004; | &#10004; | &#10004; | | (16, 256, 256, 3) | - | - | 12288 | 8192 | - | **4096** | 16384 | **4096** | | (128, 256, 256, 3) | - | - | 98304 | 65536 | - | 32768 | 131072 | **4096** | ### Quality Comparison We add comparisons with contemporary methods on UCF-101 in Table Q1.4 and train our method on the WebVid-10M dataset for 5 epochs to compare with other SOTA methods, as reported in Table Q1.5. Our GLOBER significantly outperforms other methods on the UCF-101 dataset and obtains comparable performance on the WebVid-10M dataset. This can be attributed to two main reasons: - 1. By initializing our video decoder with pretrained Stable Diffusion, we inherit its powerful ability to synthesize high-quality video frames. - 2. The reduction in the number of elements in the video latents makes it easier for our video generator to fit the distribution of video latents for the video generation task. Table Q1.4 Quantitative comparison for video generation with the resolution of 256x256 on UCF-101. | Method | Zero-shot | FVD | | :---: | :---: | :---: | | CogVideo | &#10004; | 701.6 | | MagicVideo| &#10004; | 699.0 | | LVDM | &#10004; | 641.8 | | ModelScope | &#10004; | 639.9 | | Video LDM | &#10004; | 550.6 | | VideoCrafters | &#10004; | 516.2 | | VideoFactory | &#10004; | 410.0 | | VideoGPT | &#10008; | 2880.6 | | MoCoGAN | &#10008; | 2886.8 | | StyleGAN-V | &#10008; | 1431.0 | | CogVideo | &#10008; | 626 | | LVDM | &#10008; | 372 | | PVDM | &#10008; | 343.6 | | GLOBER (ours) | &#10008; | **252.7** | Table Q1.5 Quantitative comparison for video generation with the resolution of 256x256 on WebVid-10M. 
| Method | FVD | CLIPSIM | | :---: | :---: | :---: | | VideoCrafters | 759.30 | 0.2981 | | LVDM | 455.53 | 0.2751 | | ModelScope | 414.11 | 0.3000 | | VideoFactory | 292.35 | **0.3070** | | GLOBER (ours) | **234.84** | 0.2816 | Pdf: /pdf/a12633d168f60897b5d3752845b4b8695704afc0.pdf
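As a sanity check on Table Q1.3, the per-method element counts for a 16-frame 256x256 video can be reproduced directly from the latent shapes quoted in the rebuttal (a small sketch; d=8 is PVDM's stated default; the 128-frame counts for chunk-based methods additionally depend on how many decoding passes are used, so only the 16-frame row is reproduced here):

```python
# Element counts for representing a 16-frame 256x256x3 video, following the
# latent shapes described in Table Q1.3.
T, H, W = 16, 256, 256
d = 8  # PVDM's default projection resolution divisor

lvdm       = (T // 4) * (H // 8) * (W // 8) * 3              # (T//4, H//8, W//8, 3)
pvdm       = (T * (H // d) * 4) + (T * (W // d) * 4) \
             + ((H // d) * (W // d) * 4)                     # three 2D latents
tats       = (T // 4) * (H // 8) * (W // 8)                  # discrete 3D VQGAN latent
modelscope = T * (H // 16) * (W // 16) * 4                   # per-frame KL-VAE latent
glober     = (H // 16) * (W // 16) * 16                      # time-independent global latent

print(lvdm, pvdm, tats, modelscope, glober)  # → 12288 8192 4096 16384 4096
```

These values match the (16, 256, 256, 3) row of the comparison table in the global rebuttal.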
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Neural Injective Functions for Multisets, Measures and Graphs via a Finite Witness Theorem
Accept (spotlight)
Summary: This paper studies moment-injective functions defined by neural networks. A moment-injective function f can be used to define an injective multiset function $g(\{\{x_1, x_2, ..., x_n\}\}) = \sum_{i = 1}^n f(x_i)$. The study of injectivity of multiset functions is motivated by recent developments in graph neural networks and message-passing neural networks. Prior work shows that when $x_i \in \mathbb{R}^d$, a moment-injective function f should be an embedding of dimension at least nd; moreover, one can construct a moment-injective polynomial embedding of dimension 2nd+1. This paper extends prior work by showing that depth-1 neural networks of width at least 2nd+1 with an analytic non-polynomial activation define a moment-injective embedding for almost all choices of weights. Moreover, the paper shows that the result can be generalized to neural networks of higher depth (from a theoretical perspective the depth-1 case is the most interesting, as it involves the fewest parameters). Strengths: The paper provides a theoretical explanation of why neural networks are successful in providing injective multiset functions, which have gained popularity since the seminal Deep Sets paper. While neural networks have long been known to be universal approximators for $f: \mathbb{R}^k \to \mathbb{R}^n$, much less is known about multiset functions represented by neural networks. The main result proven in this paper (Theorem 3.3) shows that depth-1 neural networks can be used to define injective multiset functions; moreover, they achieve the currently best-known embedding dimension. The paper also shows that one cannot use piecewise-linear activations to construct injective multiset functions and should use analytic non-polynomial activations instead (for example, sigmoid). The paper is well-structured and provides a clear comparison to prior work. The proofs are rigorous and well-written and, as far as I can tell, correct. 
Weaknesses: I think the paper may benefit from a slightly more detailed discussion of applications of injective multiset functions. While injectivity is a natural property that proves to be useful in various setups, I believe it would be nice to include a slightly more detailed discussion of the applications in the introduction. Some example applications are presented in Section 6. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) The paper shows that MLPs define injective multiset functions if the activation is analytic non-polynomial, and fail to define injective multiset functions if the activation is piecewise linear. What happens if the activation is polynomial? Is it easy to see that such activations fail similarly to piecewise-linear activations? 2) I think it is worth replacing "up to a multiplicative factor of 2" everywhere in the paper with "up to a multiplicative factor of 2+o(1)" to be mathematically precise. If one is pedantic, 2nd+1 is not within a factor of 2 from nd. I believe the paper contains several typos: L47: a Euclidean -> an Euclidean L129: this paper -> that paper (?) L456 quality -> qualify. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her helpful comments. Below are our responses. **Response to Weaknesses** We thank the reviewer for this comment. Should the paper be accepted, we intend to add to the introduction a broader explanation of the importance of injective multiset functions for practical applications. **Response to Questions** 1. A possible direction to show how polynomial activations fail would be as follows: If $\sigma : \mathbb{R} \to \mathbb{R}$ is a polynomial of degree $r$, then $\text{span} \\{ \sigma(ax+b) \\, \mid \\, a \in \mathbb{R}^d, b \in \mathbb{R} \\}$ is a subset of the span of all polynomials of degree up to $r$ over $\mathbb{R}^d$. Denote the latter by $\mathcal{P}\_{r}$. Using the fact that $\mathcal{P}\_{r}$ is not dense in $\bigcup\_{t=0}\^{\infty} \mathcal{P}\_{t}$, show that for a large enough $n$ there exist $n$ points $x_1,\ldots,x_n \in \mathbb{R}^d$ and a polynomial $q \in \mathcal{P}\_{r'}$ for some $r'>r$ such that: (a) $\sum_{i=1}^n q(x_i) p(x_i) = 0$ for any $p \in \mathcal{P}\_{r}$; (b) not all of $q(x_1),\ldots,q(x_n)$ are zero. Let $\mu = \sum_{i=1}^n w_i \delta_{x_i}$ be the discrete signed measure with weights $w_i = q(x_i)$. Then (a) implies that for any embedding $\hat{f}$ comprised of moments of $\sigma(ax+b)$, $\hat{f}(\mu) = 0$, although by (b), $\mu$ is not the zero measure, hence injectivity is violated. 2. We thank the reviewer for this comment. We will replace this statement with a more accurate one. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my questions and keep my positive score unchanged.
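The degree argument in the response above can be checked numerically: with a polynomial activation of degree $r$, the moment $\sum_i \sigma(a x_i + b)$ depends only on the first $r$ power sums of the multiset, so any two multisets sharing those power sums collide, whereas an analytic activation such as tanh separates them. A minimal sketch (the multisets and the width are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = np.array([1.0, 5.0, 6.0])  # power sums: sum = 12, sum of squares = 62
X2 = np.array([2.0, 3.0, 7.0])  # distinct multiset with the same first two power sums

def moment(X, sigma, A, b):
    # hat f(X) = sum_i sigma(A_j x_i + b_j), one output coordinate per (A_j, b_j)
    return sigma(np.outer(A, X) + b[:, None]).sum(axis=1)

A = rng.normal(size=8)
b = rng.normal(size=8)
diff_quad = moment(X1, np.square, A, b) - moment(X2, np.square, A, b)
diff_tanh = moment(X1, np.tanh, A, b) - moment(X2, np.tanh, A, b)

print(np.abs(diff_quad).max())  # ~0: a degree-2 activation cannot separate them
print(np.abs(diff_tanh).max())  # clearly nonzero: an analytic activation does
```

For the quadratic activation, $\sum_i (a x_i + b)^2 = a^2 \sum_i x_i^2 + 2ab \sum_i x_i + n b^2$ depends only on the first two power sums, which the two multisets share, so the collision holds for every choice of weights.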
Summary: In this paper, the authors study the injectivity of functions of multisets, which has garnered a lot of attention recently in the machine learning community following work on point clouds and graphs. Unlike earlier work, which proves results for generic continuous functions or polynomials and then may resort to the universality of MLPs but with unbounded width, here the authors directly focus on MLPs with finite width and prove injectivity with near-optimal width, for almost all parameters. This holds for analytic non-linearities; on the contrary, negative results are given for piecewise-linear functions like ReLU. Their result relies mostly on a new "finite witness" theorem, which extends previous results known for semi-algebraic sets and functions to sub-analytic sets and functions. Several corollaries are presented, as well as some illustrative numerical experiments. Strengths: This is honestly a fantastic paper, I very much enjoyed the read, it is clear, pedagogical, and refreshingly honest. The results presented are of great interest to the community. Although it is an extension of a known result on semi-algebraic sets, as acknowledged by the authors, the extension seems anything but trivial. The large part dedicated to negative results and/or limitations is very much appreciated. Weaknesses: Minor weaknesses; maybe a tad more explanation of the difference between the semi-algebraic and sub-analytic proofs (I understand both rest on the same underlying proof technique initiated with works on phase retrieval, but with different tools; how would you describe the main difference, if possible?), and a bit more outlook: do you have any insight into what would be a desirable property to show for ReLU networks, since they are definitely not injective but are used in practice? 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some questions/minor typos: - l45: nessecary - equation (***): missing indices $i$ on the $x$'s - same equations: might be clearer if the $a$ and $b$ were indexed by $i$ and the $x$'s by $j$ (or vice versa) - thm 3.4: co-existence of $D$ and $D_\theta$ - eq (6) and lines below: in your definition of Wasserstein, do you mean "uniform" weights instead of "unit"? $S_1$ and $S_2$ may have different cardinality, but the associated measure must be normalized to compute (the classical) Wasserstein - l 288: seleceted - l 298: Lipshcitz - the experiment on bi-Lipschitz is not really illustrative of the theory, since functions are not bi-Lipschitz. It is not uninteresting in describing another phenomenon (the "almost" bi-Lipschitzness on finite data, etc), but I wouldn't say that it "corroborates Thm 5.1" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Authors have focused extensively on negative results and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the supportive and helpful comments. Below are our responses to the questions and weaknesses. **Response to Weaknesses** 1. We thank the reviewer for the suggestion. We intend to add to the text an explanation of the main challenges and ideas in this generalization. In a nutshell, the proof of the finite witness theorem in [Dym and Gortler] relies on several nice properties of semialgebraic sets: this family is closed under linear projections, finite unions, finite intersections, and complements. Moreover, such sets are always a finite union of smooth manifolds. Using these properties, [Dym and Gortler] prove a finite witness theorem for semialgebraic sets and the corresponding functions, called semialgebraic functions. These functions include polynomials, but do not include other analytic functions of interest. The generalization of the finite witness theorem to the analytic setting relies on the mathematical study of o-minimal systems, which searches for larger families of sets that enjoy the same properties as semialgebraic sets. There are several known o-minimal systems, such as globally subanalytic sets, which do have the same nice properties. For these sets, the generalization of the finite witness theorem is rather straightforward. However, the corresponding collection of globally subanalytic functions still does not include all analytic functions. A crucial observation in the proof is that we can make it carry through also when considering countable unions of globally subanalytic sets. Consequently, the corresponding functions we can work with include all analytic functions. 2. Although any finite-size ReLU-activated network is not moment injective, it can be shown that moments of shallow ReLU networks are universal approximators of continuous injective functions on multisets; this is due to the ReLU function being discriminatory, as commented below the statement of Corollary 6.1. 
Thus, taking a high enough embedding dimension, it is possible to construct invariant embeddings using ReLU activations that are practically injective. This comes with a caveat, though, as we noted in our response to Reviewer ezv6: each such embedding has a nonzero-measure subset of the input domain on which it is provably not injective. Should the paper be accepted, we intend to comment on this in the camera-ready version. **Response to Questions** We thank the reviewer for the corrections. 1. In Eq. (6) and below, the term "uniform" is indeed more suitable, as the proof works with a total mass of 1. 2. We rephrased the text describing the bi-Lipschitzness experiment (Line 296) and removed the statement that the results corroborate Thm 5.1. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the rebuttal. I keep my good score as is.
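The normalization point above (uniform rather than unit weights in Eq. (6)) can be illustrated with a minimal 1D computation. This sketch uses the $W_1$ distance for simplicity, since in one dimension it reduces to the area between the two normalized CDFs; the example multisets are illustrative:

```python
import numpy as np

def w1_uniform(S1, S2):
    # 1D Wasserstein-1 between the empirical measures of two multisets, each
    # carrying uniform weights 1/|S| (total mass 1, so different cardinalities
    # are comparable); computed as the integral of |CDF1 - CDF2|.
    S1, S2 = np.sort(S1), np.sort(S2)
    grid = np.unique(np.concatenate([S1, S2]))
    cdf1 = np.searchsorted(S1, grid, side="right") / len(S1)
    cdf2 = np.searchsorted(S2, grid, side="right") / len(S2)
    return float(np.sum(np.abs(cdf1 - cdf2)[:-1] * np.diff(grid)))

# multisets of different cardinality: well-defined only after normalizing to mass 1
print(w1_uniform(np.array([0.0, 1.0]), np.array([0.0, 0.5, 1.0])))  # → 1/6
```

With unit (unnormalized) weights, the two measures would have total masses 2 and 3 and the classical optimal-transport problem between them would be infeasible, which is exactly why the normalization matters.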
Summary: Injective functions on multisets are commonly employed in the literature for universality results on (multi)set architectures and graph neural network separation results. Usually, the assumption is that MLPs can implement moment injective functions. The paper studies whether MLP architectures are moment-injective in the space of multisets and the conditions that are required to ensure that. The paper argues that for a (shallow) MLP to be moment injective, its activations have to be analytic and discriminatory. The paper considers a more generalized treatment that views multisets from the perspective of signed discrete measures. The goal of the proof for moment injectivity is to show that a shallow MLP separates all pairs of distinct measures in the space of measure parameters. There are two key conditions for the main theorem : - the activation of the MLP needs to be analytic and discriminatory, which are classic conditions for MLP results - the finite witness theorem, which is used to reduce an infinite number of equalities down to a finite one. The finite witness theorem (and its generalized version) can be leveraged to show the moment-injectivity of various functions. The paper then examines cases (alphabets) where moment injectivity doesn't hold for piece-wise linear activations. The stability properties (stability is studied through the lens of bi-lipschitzness) of injective multiset functions induced by moment injective MLPs are then studied and it is shown that as long as any moment injective function is differentiable at some point, then the induced multiset function won't be stable (bi-lipschitz). Finally, the paper provides a couple of applications of the technical results to function approximation and graph separation and provides some experimental evidence to back some of the technical claims up. Strengths: - The paper contains several interesting mathematical results which could be useful to the broader graph/set NN community. 
- The paper is well written and generally provides enough context and guidance for the results to be understandable. - The theoretical results are focused on architectures that are used in practice. - The experiments provide additional context and intuition for the applicability of the results. - The paper is upfront about its scope, limitations, and practical applicability. Weaknesses: - The initial motivation of the paper is the moment injectivity of 'practical' MLPs. While several results are ultimately established, we can see from the experiment on graph separation (and it is clearly stated in the conclusion) that even non-analytic functions just barely fail in a few cases to match 1-WL. This seems to suggest that the practical relevance of the results is a bit questionable, or maybe there are other factors that could be mitigating the non-analyticity of the activations. - A few extra explanatory lines could be included in the section of the finite witness theorem (or perhaps in the introduction), that provide context about its exact role in the proof. Currently, the theorem shows up a bit abruptly when theorem 3.3 is presented. OK, it can be inferred from the proof of 3.3 that infinitely many equalities are reduced down to a finite number, but I believe the readability of that section could improve if some of the context that is provided in the supplementary material was moved in the main text to make sure that the role of the theorem is very clearly explained and emphasized. Specifically, when explaining proof 3.3, around line 154 or perhaps right below the statement of 3.4, in subsection 3.1. Those could be good places to provide a few more sentences that restate the purpose of the theorem and explain the context around it (papers by Balan et al., and Dym and Gortler). 
I am skeptical about the significance of the results when it comes to practical considerations but the paper is well written and provides mathematical insights that can be relevant to modern deep learning architectures so I lean towards accepting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: We see in the second experiment a somewhat significant difference between leaky ReLU and ReLU. Could that be a matter of randomness from different initialization, or could the properties of the function that could lead to worse outcomes when it comes to matching the 1-WL? I don't see any obvious reason why the leaky activation would be much different. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her constructive review. Below we address the weaknesses and questions raised by the reviewer. **Response to Weaknesses** 1. While indeed in our experiments ReLU-based embeddings performed similarly to analytic activations when the embedding dimension was high enough, this can be contrasted with the following fact: for any moment function $\hat{f}$ defined on $\mathcal{S}_{\leq n} (\mathbb{R}^d)$ based on ReLU-activated networks, there exist some $\delta > 0$ and a neighbourhood of radius $\delta$ on which $\hat{f}$ is not injective. In contrast, for any $\delta>0$, any injective analytic embedding $\hat{f}$ is guaranteed to be bi-Lipschitz on all pairs of point-sets whose distance is at least $\delta$. Namely, assuming that the domain is compact, for any $\delta>0$ there exist constants $c(\delta),C(\delta) > 0$ such that if $W_2(X_1, X_2) \geq \delta$, then $c(\delta) W_2( X_1, X_2 ) \leq \lVert \hat{f}(X_1) - \hat{f}(X_2) \rVert \leq C(\delta) W_2( X_1, X_2 )$. The reason why we did not encounter pathological $\delta$-neighborhoods for ReLU in our experiments with high $m$ is that we drew the input clouds randomly rather than explicitly looking for adversarial examples. We intend to clarify this in the camera-ready version if our paper is accepted. 2. We revised Section 3 to clarify the role of the finite witness theorem in the proof of moment injectivity. In the revised version, the theorem is gently introduced to the reader before proving Theorem 3.3, and only then is it applied to prove the result. Our new proof of Theorem 3.3 is, in our opinion, clearer, and places more emphasis on the essential role of the finite witness theorem. **Response to Questions** We conjecture that the differences in favor of Leaky ReLU over ReLU (Figure 1) result from the latter having a region where it is identically zero. 
An interesting perspective on this is to compare a ReLU, a leaky ReLU and a nonpolynomial analytic activation, by regarding their restrictions to the region x < 0 as polynomials of degree 0, 1 and $\infty$ respectively. In light of Theorem 3.3, it seems plausible that as the degree of the polynomial increases, the likelihood of getting the same output $\hat{f}(X_1)=\hat{f}(X_2)$ for a distinct pair of inputs $X_1,X_2$ should decrease. --- Rebuttal Comment 1.1: Title: Update Comment: Thanks for the rebuttal and the interesting remarks! As far as I am concerned the paper is solid and should be accepted so I maintain my score.
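The pathological $\delta$-neighbourhood point discussed above can be made concrete: inside a region where no ReLU unit crosses its kink, the network is affine, so the sum embedding cannot distinguish $\{\{a-d, a+d\}\}$ from $\{\{a, a\}\}$. A minimal sketch with an arbitrary random network (the dimensions and widths here are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(32, 4))   # one hidden ReLU layer: 32 units, inputs in R^4
b = rng.normal(size=32)

def embed(X):
    # hat f(X) = sum_i relu(W x_i + b); rows of X are the multiset elements
    return np.maximum(W @ X.T + b[:, None], 0.0).sum(axis=1)

a = rng.normal(size=4)
pre = W @ a + b
# choose d small enough that no unit changes sign on the segment [a-d, a+d];
# every relu is then affine there, and the two sums below agree exactly
margin = np.abs(pre).min() / np.linalg.norm(W, axis=1).max()
u = rng.normal(size=4)
d = 0.1 * margin * u / np.linalg.norm(u)

X1 = np.stack([a - d, a + d])  # two distinct 2-point multisets ...
X2 = np.stack([a, a])          # ... with identical ReLU moment embeddings
print(np.abs(embed(X1) - embed(X2)).max())  # ~0 up to floating-point rounding
```

Per unit, if the preactivation stays positive on the segment, then $\mathrm{relu}$ contributes $(w\cdot a+b-w\cdot d)+(w\cdot a+b+w\cdot d)=2(w\cdot a+b)$ for both multisets; if it stays negative, both contribute 0. So the collision is exact, not approximate.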
null
null
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models
Accept (poster)
Summary: This paper aims to achieve both parameter and computation efficiency for the transfer learning of pre-trained VLMs. The authors combine the previous ladder-side tuning adapter with the proposed dynamic architecture skipping technique to achieve this goal. Strengths: It is a good idea to achieve both parameter and computation efficiency for the transfer learning of VLMs. Weaknesses: Although it is a good idea to achieve both parameter and computation efficiency for the transfer learning of VLMs, the authors do not clearly show the pros and cons of the proposed method. 1. The training cost and the benefit of DAS are not clear. In Table 1, the authors do not show the running time in practice. FLOPs are not equal to inference speed, especially since the reduction in FLOPs is marginal. Please also provide the training cost of DAS. 2. Why are experiments only conducted on classification tasks? The authors claim to propose a new problem, PCETL, but lack experiments on an important VL task: image captioning. Is the parameter pruning of VLMs not suitable for image captioning? Does the layer dropping break the text generation ability of the pre-trained VLM? 3. Why are retrieval results only provided on Flickr30K? It is small and easy, and a large model may be unnecessary for it. Please provide retrieval results on MSCOCO. 4. How does the proposed DAS compare to the pruning method in EfficientVLM [1] or other layer pruning methods from the pruning literature? Is DAS necessary? A random baseline in Figure 3 is not enough. [1] https://arxiv.org/pdf/2210.07795.pdf Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Why is the method called "Dynamic Architecture Skipping"? Where does the "dynamic" come from? From my point of view, the skipped modules are fixed after training and will not change with the input during inference. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: As discussed in the Weaknesses, the authors do not fully address the limitations of the proposed methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer #fJhT We highly appreciate your time and effort in reviewing this paper, and your valuable feedback has been instrumental in improving it. Below, we respond to your key concerns point by point. **Comment 1:** Although it is a good idea to achieve both parameter and computation efficiency for the transfer learning of VLMs, the authors do not clearly show the pros and cons of the proposed method. The training cost and the benefit of DAS are not clear. In Table 1, the authors do not show the running time in practice. FLOPs are not equal to inference speed, especially since the reduction in FLOPs is marginal. Please also provide the training cost of DAS. **Response:** Thanks for this constructive comment. Following your suggestion, we report these results in the following table. Table A: Comparison of DAS and PETL methods on efficiency for METER. |Method|VQA Test-Dev|Training Memory (GB)|Training Time|Inference Memory (GB)|Inference Speed (Sample/s)| |-|-|-|-|-|-| |Full Tuning|77.43|\>40G|N/A|6.8|133.27 | |LoRA|74.00|21.5|27h|6.8|133.27 (+0.00%)| |Adapter | 74.70 | 22.9 | 28h | 7.2 | 130.93 (-1.75%) | |Scaled PA | 75.11 | 23.1 | 30h | 7.1 | 126.50 (-5.08%) | |DAS4-Global | 75.09 | 21.7 (search) / 18.1 (training) | 10h (search) + 20h (training) | 6.5 | 146.34 (+9.81%) | |DAS4-Fusion | 74.80 | 21.7 (search) / 20.6 (training) | 10h (search) + 18h (training) | 6.5 | 158.79 (+19.14%) | It can first be seen that our training expenditure is comparable to that of most PETL methods. Our memory overhead is similar to LoRA's during layer search, and it is slightly reduced during training. Meanwhile, the search process is quick, and the training hours are also fewer than those of the PETL methods, since DAS has fewer adapters to train. Overall, the training expenditure is not high. The inference efficiency is also notable: the memory saving is about 4.4%, while inference is sped up by up to 19.14%. 
Similar improvements can also be seen on LLaMA-7B, where the performance is even better. **Comment 2:** Why are experiments only conducted on classification tasks? The authors claim to propose a new problem, PCETL, but lack experiments on an important VL task: image captioning. Is the parameter pruning of VLMs not suitable for image captioning? Does the layer dropping break the text generation ability of the pre-trained VLM? **Response:** Thanks for this constructive comment. Following your suggestion, we apply our DAS to LLaMA and report its results on ScienceQA [a], a generative QA benchmark. Here, we follow the settings of LaVIN [c], which can be regarded as our baseline. Table B: Comparison of DAS and PETL methods on ScienceQA for LLaMA. | Method | Updated Parameters | FLOPs | Modality Natural | Modality Social | Modality Language | Context Text | Context Image | Context No | Grade G1-6 | Grade G7-12 | Avg | |-|-|-|-|-|-|-|-|-|-|-|-| | LLaVA-13B | 13B | - | 90.36 | 95.95 | 88.00 | 89.49 | 88.00 | 90.66 | 90.93 | 90.90 | 90.92 | | LaVIN-7B | 3.8M | 833 | 89.25 | 94.94 | 85.24 | 88.51 | 87.46 | 88.08 | 90.16 | 88.07 | 89.41 | | DAS4-7B | 44.26M | 729 (-18.61%) | 90.54 | 94.26 | 86.82 | 89.74 | 87.65 | 89.76 | 90.97 | 89.26 | 90.36 | | DAS6-7B | 44.26M | 678 (-24.85%) | 89.96 | 94.71 | 87.18 | 89.00 | 87.7 | 89.97 | 90.75 | 89.32 | 90.24 | It can be seen that our DAS not only greatly reduces the FLOPs but even improves performance while skipping 6 layers of LLaMA, which is indeed significant. These results also validate the generalization of DAS to text generation tasks. **Comment 3:** Why are retrieval results only provided on Flickr30K? It is small and easy, and a large model may be unnecessary for it. Please provide retrieval results on MSCOCO. **Response:** Thanks for this suggestion. Following it, we report the retrieval results on MSCOCO in the following table, where the target of PCETL is still met on this benchmark. 
Table C: Comparison of DAS and fine-tuning methods on COCO retrieval for METER. |Method|Updated Parameters | Additional FLOPs | COCO IR@1 | COCO IR@5 | COCO IR@10 | COCO TR@1 | COCO TR@5 | COCO TR@10 | |-|-|-|-|-|-|-|-|-| | Full Tuning | 323.31M | 0.0 | 54.85 | 81.41 | 89.31 | 72.96 | 92.02 | 96.26 | | DAS4-Fusion | 5.34M | -9.54% | 54.22| 79.36 | 87.67 | 71.56 | 91.17 | 94.79 | | DAS4-Global | 6.23M | -8.68% | 54.60| 80.36 | 88.42 | 72.06 | 91.42 | 95.42 | **Comment 4**: How does the proposed DAS compare to the pruning method in EfficientVLM [1] or other layer pruning methods from the pruning literature? Is DAS necessary? A random baseline in Figure 3 is not enough. **Response:** Thanks for recommending this excellent work. However, pruning methods like EfficientVLM are not applicable to PCETL. On one hand, most pruning methods require another round of full tuning on the downstream tasks, which is against the target of PCETL. On the other hand, existing PETL methods like Adapter cannot be combined with pruning methods, since pruning methods often skip/prune parameter-wise components. We will cite and discuss EfficientVLM in our new version. **Comment 5:** Why is the method called "Dynamic Architecture Skipping"? Where does the "dynamic" come from? From my point of view, the skipped modules are fixed after training and will not change with the input during inference. **Response:** Thanks for this question. Large pre-trained models like LLaMA are often transferred to various downstream tasks for practical use. In this case, we think that our DAS can provide an optimal inference path for each task in its real-world applications. **Reference** [1] EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning. [a] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering. [b] LLaMA: Open and Efficient Foundation Language Models. 
[c] Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks for your response. - **Comment 1** Does the inference speed mean the model can process 100+ samples in one second on VQA Test-Dev? - **Comment 2** It is surprising to see that the method can still achieve good performance after dropping 6 layers. However, there are still a few questions. - Why is the number of updated parameters larger than LaVIN-7B's? What is the performance of a PEFT baseline with roughly the same number of updated parameters? - The memory and time should also be reported for LLaMA. - So, METER + DAS will not work for image captioning? - **Comment 3** The performance of DAS on COCO is not as good as on Flickr30K. So, there is a relationship between model size, the difficulty of the task, and the pruning choice. - **Comment 4** We can consider the work as pruning + Adapter. Not many parameters have been pruned (compared to previous pruning methods), so the performance can be recovered by tuning the adapter. A big problem here is that the paper does not thoroughly discuss and compare previous pruning methods. Why do we need Dynamic Architecture Skipping? Why can we not use similar pruning methods like [1,2,3]? I do not see any discussion of previous pruning methods in the paper. Personally, I think this is not respectful of the previous pruning works. - **Comment 5** The authors are talking about the generalization of the proposed method. What does "dynamic" mean? Why use the word dynamic? [1] https://aclanthology.org/2021.emnlp-main.829.pdf [2] https://arxiv.org/pdf/2111.15127.pdf [3] https://proceedings.mlr.press/v202/shi23e/shi23e.pdf --- Reply to Comment 1.1.1: Comment: # Comment to Reviewer #fJhT Many thanks for your reply. We hope that our following responses can further address your concerns. **Response to Comment 1:** Thanks for this comment. 
It is tested with a batch size of 32 on one A100. For online inference, the speed is about 4.96 samples per second (4.96 (DAS) vs. 4.16 (Full Tuning)). For clarity, we will report the online inference speed in our final version. **Response to Comment 2.1:** Thanks for your insightful question. The main reason is that DAS not only serves for feature adaptation, i.e., PETL, but also connects the skipped layers. Since LLaMA is a giant model with much larger feature dimensions, its skipped layers require a larger Adapter to connect. In contrast, the PETL method LaVIN still relies on the low-rank property [a], so increasing the number of updated parameters is in fact counterproductive; see the table below.

| Method | Updated Parameters | FLOPs | Modality Natural | Modality Social | Modality Language | Context Text | Context Image | Context No | Grade G1-6 | Grade G7-12 | Avg |
|-|-|-|-|-|-|-|-|-|-|-|-|
| LaVIN-7B | 3.8M | 833 | 89.25 | 94.94 | 85.24 | 88.51 | 87.46 | 88.08 | 90.16 | 88.07 | 89.41 |
| LaVIN-7B | 44.26M | 838 | 84.37 | 74.35 | 86.27 | 82.70 | 73.48 | 89.55 | 83.33 | 81.74 | 82.76 |
| DAS6-7B | 44.26M | 678 (-24.85%) | 89.96 | 94.71 | 87.18 | 89.00 | 87.7 | 89.97 | 90.75 | 89.32 | 90.24 |

In fact, the number of updated parameters is still small for LLaMA, accounting for only about 0.63%. **Response to Comment 2.2:** Thanks for your suggestion; the expenditure of DAS on LLaMA is given below.

| Method | Avg | Training Memory (GB) | Training Time | Inference Memory (GB) | Inference Speed (sample/s, batch size=64) | FLOPs (G) |
|-|-|-|-|-|-|-|
| LaVIN-7B | 89.41 | 35 | 6h | 40.1 | 3.51 | 833 |
| DAS4-7B | 90.36 | 36 (search) / 33 (training) | 1h (search) + 4h (training) | 36.5 | 3.88 (+10.54%) | 729 (-18.61%) |
| DAS6-7B | 90.24 | 36 (search) / 32 (training) | 1h (search) + 4h (training) | 34.7 | 4.09 (+16.52%) | 678 (-24.85%) |

**Response to Comment 2.3:** Due to the time limit, we only report the new results of LLaMA on ScienceQA, which is also a generative task and can better validate our generalization.
Following your suggestion, we report the results of BLIP+DAS for image captioning in the following table, since METER cannot be directly applied to this task. Note that these experiments were conducted directly, without careful tuning.

| Method | Update Parameter | FLOPs (G) | Bleu@4 | CIDEr |
|-|-|-|-|-|
| Full Tuning | 223.97M | 100.00% | 39.4 | 131.4 |
| DAS4 | 5.37M | 67.49% | 37.8 (95.93%) | 124.5 (94.74%) |

It can be seen that DAS reaches about 96% of the BLEU performance of full tuning, while saving about 97.6% of the updated parameters and up to 33.17% of FLOPs. These results are consistent with the target of PCETL, and we believe they could be further improved with more experimental trials. **Response to Comment 3:** In fact, the performance on COCO is slightly better than that on Flickr30k. For instance, the performance of DAS4 is about 99.1% of full tuning on COCO, while it is about 97.34% on Flickr30k. **Response to Comment 4:** Thanks for your detailed comment. The main difference between DAS and the works you mention [1,2,3] is that DAS needs not only to reduce the redundant computation but also to consider parameter efficiency, which are the twin targets of PCETL. We agree that existing pruning methods can greatly reduce the parameter size of the target model, but most of them [1,2,3] require another round of full tuning on the downstream tasks, which is against the target of PCETL. In contrast, DAS can effectively skip the redundant layers and connect the remaining ones with adapters, thereby achieving both of the above goals at the same time. In this case, we believe the contributions of existing pruning methods and our DAS are orthogonal. More importantly, the other contribution of this paper is the proposal of a new transfer learning task for large-scale pre-trained models, i.e., Parameter and Computation Efficient Transfer Learning (PCETL), which is of great significance to the community and highly recognized by other reviewers.
Following your suggestion, we will add more discussion of existing pruning methods to our final version. **Response to Comment 5:** As discussed in the previous comment, large pre-trained models like LLaMA are often transferred to various downstream tasks. During their practical use, we can add a task notification and apply DAS to dynamically change the routing path of the model for the inputs of different tasks. In this sense, we term the method ``dynamic''. **Reference** [a] Edward J. Hu, Yelong Shen, Phillip Wallis, *et al*; LoRA: Low-Rank Adaptation of Large Language Models. [1] Francois Lagunas, Ella Charlaix, Victor Sanh, *et al*; Block Pruning For Faster Transformers. [2] Hao Yu, Jianxin Wu; A Unified Pruning Framework for Vision Transformers. [3] Dachuan Shi, Chaofan Tao, Ying Jin, *et al*; UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
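For intuition only, the skip-and-bridge idea discussed in this thread — dropping a run of redundant transformer layers and connecting the remaining ones with a lightweight adapter — can be sketched in toy form. The function names, the stand-in affine "layers", and the scalar adapter below are illustrative assumptions, not the actual DAS implementation:

```python
# Toy sketch (illustrative only): skip a run of "transformer layers" and
# bridge the gap with a single lightweight adapter. Real layers and adapters
# are neural modules; affine maps stand in for them here.

def make_layer(scale):
    # Stand-in for a heavy transformer layer: an affine map x -> scale*x + 1.
    return lambda xs: [scale * v + 1.0 for v in xs]

def make_adapter(scale):
    # Stand-in for a lightweight adapter bridging a run of skipped layers.
    return lambda xs: [scale * v for v in xs]

def forward(x, layers, skipped, adapter):
    """Run `layers` in order; each contiguous run of skipped indices is
    replaced by one application of the shared adapter."""
    in_skip_run = False
    for i, layer in enumerate(layers):
        if i in skipped:
            if not in_skip_run:   # apply the adapter once per skipped run
                x = adapter(x)
                in_skip_run = True
            continue
        in_skip_run = False
        x = layer(x)
    return x

layers = [make_layer(2.0) for _ in range(4)]
full = forward([1.0], layers, skipped=set(), adapter=None)
pruned = forward([1.0], layers, skipped={1, 2}, adapter=make_adapter(1.5))
```

The point of the sketch is only structural: the pruned path executes two fewer "heavy" layers plus one cheap adapter, which is where the FLOPs savings reported in the tables above would come from.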
Summary: The paper proposes a dynamic architecture skipping (DAS) method for the parameter and computation efficient transfer learning (PCETL) problem. DAS explores the optimal short-cut pathway in VLP models. Extensive experiments show the effectiveness of DAS both in reducing computation and parameters. Strengths: The paper introduces a novel and intriguing approach by considering network jumps as k-armed bandit sampling. It highlights the significance of reducing computational complexity in Visual Language Pretraining (VLP) models, providing strong motivation for the proposed methodology. The paper is well-written and effectively communicates its ideas. Experiments are solid, with necessary analysis and ablations for different parts of the method. Weaknesses: In addition to the number of parameters and FLOPs, it would be better to add the training time and GPU memory cost as metrics. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A. Already written in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 5vAx We highly appreciate your time and effort in reviewing this paper. Your comments and feedback are instrumental to the improvement of our work. **Comment 1:** In addition to the number of parameters and FLOPs, it would be better to add the training time and GPU memory cost as metrics. **Response:** Thanks for this comment. Following your suggestion, we report the training time and GPU memory cost in the following table.

| Method | VQA Test-Dev | Training Memory (GB) | Training Time |
|-|-|-|-|
| Full Tuning | 77.43 | \>40 | N/A |
| LoRA | 74.00 | 21.5 | 27h |
| Adapter | 74.70 | 22.9 | 28h |
| Scaled PA | 75.11 | 23.1 | 30h |
| DAS4-Global | 75.09 | 21.7 (search) / 18.1 (training) | 10h (search) + 20h (training) |
| DAS4-Fusion | 74.80 | 21.7 (search) / 20.6 (training) | 10h (search) + 18h (training) |

It can be seen that our memory overhead is similar to that of LoRA during layer search, and it is slightly lower during training. Meanwhile, the search process is quick, and the training hours are also fewer than those of the PETL methods, since DAS has fewer adapters to train. Overall, the training expenditure is not significantly more expensive than that of the PETL methods. --- Rebuttal Comment 1.1: Comment: Thank you for your response. From the perspective of training memory and time, it seems that DAS does not have a great advantage. However, the method is still worthy of recognition in terms of novelty. --- Reply to Comment 1.1.1: Comment: Many thanks for your reply; your valuable feedback has been instrumental in improving this paper.
Summary: The paper presents dynamic architecture skipping (DAS), which can drop some transformer layers during inference. The routing of DAS is learned by reinforcement learning. To make the training parameter-efficient, every layer has an adapter for training. After training, DAS drops several redundant layers and replaces them with adapters to reduce the inference FLOPs. Strengths: 1. Most parameter-efficient training methods add extra cost at inference, and it is interesting to explore how to reduce this cost. 2. The design of the proposed approach, which replaces some transformer layers of the backbone model with adapters, is reasonable. Weaknesses: 1. Several baselines [2] and related works [1, 2] are missing. I think the paper would be stronger if it compared to [2]. 2. I am not sure what makes the approach unique to the VL domain. If the approach is general, I would expect it to be applied to other models and tasks too (LLM or ViT). [1] Fan, Angela, Edouard Grave and Armand Joulin. “Reducing Transformer Depth on Demand with Structured Dropout.” [2] Din, Alexander Yom, Taelin Karidi, Leshem Choshen and Mor Geva. “Jump to Conclusions: Short-Cutting Transformers With Linear Transformations." Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The training pipeline is more complex than the other approaches, since training involves search, redundancy observation, and final fine-tuning. I wonder how much cost (e.g. training time and memory) is needed to train the method compared to other approaches? 2. Are the results from one run or multiple runs? If one run, I would suggest using the average of multiple runs to justify the robustness of the approach. 3. Given the GFLOPs saving, how much can the approach improve the inference speed and inference memory? --- **Post-rebuttal** Thank you for the authors' response. I have read it and it addressed my questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitation in the paper is sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer #pBnL We highly appreciate your time and effort in reviewing this paper, as well as your encouraging and constructive comments on our work. Below, we respond to your key concerns point by point. **Comment 1:** Several baselines [2] and related works [1, 2] are missing. I think the paper would be stronger if it compared to [2]. **Response:** Thanks for this suggestion. We will cite and discuss these excellent works. Following your suggestion, we supplement a comparison with J2C [2] in the following table; like our DAS, J2C is also combined with adapters.

Table A: The comparison between DAS and the suggested baseline for METER.

| Method | Updated Parameter | VQA test-dev | VQA Additional FLOPs | NLVR2 test-P | NLVR2 Additional FLOPs |
|-|-|-|-|-|-|
| Full Tuning | 323.31M | 77.43 | 0% | 83.05 | 0.00 |
| Classifier Only | - | 69.93 | 0% | 73.23 | 0.00 |
| J2C-2 | 4.18M | 67.34 | -11.52% | 68.89 | -8.83% |
| J2C-4 | 3.58M | 69.26 | -18.60% | 69.08 | -15.95% |
| DAS4-Fusion | 5.34M | 74.80 | -11.97% | 80.11 | -9.70% |
| DAS6-Fusion | 5.34M | 75.67 | -18.86% | 79.30 | -17.72% |

It can be seen that, in the absence of layer redundancy evaluation, the layer skipping of J2C is not satisfactory, and its performance greatly lags behind our DAS. **Comment 2:** I am not sure what makes the approach unique to the VL domain. If the approach is general, I would expect it to be applied to other models and tasks too (LLM or ViT). **Response:** Thanks for this constructive suggestion. We have applied our DAS to LLaMA-7B [a] on ScienceQA [b] following the settings of LaVIN [c]. The results are given in Table B.

Table B: Comparison of DAS and PETL methods on ScienceQA for LLaMA.
| Method | Update Params | FLOPs (G) | Modality Natural | Modality Social | Modality Language | Context Text | Context Image | Context No | Grade G1-6 | Grade G7-12 | Avg |
|-|-|-|-|-|-|-|-|-|-|-|-|
| LLaVA-13B | 13B | - | 90.36 | 95.95 | 88.00 | 89.49 | 88.00 | 90.66 | 90.93 | 90.90 | 90.92 |
| LaVIN-7B | 3.8M | 833 | 89.25 | 94.94 | 85.24 | 88.51 | 87.46 | 88.08 | 90.16 | 88.07 | 89.41 |
| DAS4-7B | 44.26M | 729 (-18.61%) | 90.54 | 94.26 | 86.82 | 89.74 | 87.65 | 89.76 | 90.97 | 89.26 | 90.36 |
| DAS6-7B | 44.26M | 678 (-24.85%) | 89.96 | 94.71 | 87.18 | 89.00 | 87.7 | 89.97 | 90.75 | 89.32 | 90.24 |

While saving up to 25% of FLOPs, our DAS can even achieve better results than LaVIN on LLaMA-7B. Notably, its best performance is very close to that of LLaVA [d] with LLaMA-13B, at a much lower cost. These results confirm that our DAS achieves the target of PCETL. **Comment 3:** The training pipeline is more complex than the other approaches, since training involves search, redundancy observation, and final fine-tuning. I wonder how much cost (e.g. training time and memory) is needed to train the method compared to other approaches? **Response:** Thanks for this comment. Following your suggestion, we report the training costs of DAS and the compared methods in the following table.

Table C: Comparison of DAS and PETL methods on efficiency for METER.

| Method | VQA test-dev | Training Memory (GB) | Training Time |
|-|-|-|-|
| Full Tuning | 77.43 | \>40 | N/A |
| LoRA | 74.00 | 21.5 | 27h |
| Adapter | 74.70 | 22.9 | 28h |
| Scaled PA | 75.11 | 23.1 | 30h |
| DAS4-Global | 75.09 | 21.7 (search) / 20.6 (training) | 10h (search) + 20h (training) |

It can be seen that our memory overhead is similar to that of LoRA during layer search, and it is slightly lower during training. Meanwhile, the search process is quick, and the training hours are also fewer than those of the PETL methods, since DAS has fewer adapters to train. Overall, the training expenditure is not significantly more expensive than that of the PETL methods. **Comment 4:** Are the results from one run or multiple runs?
If from one run, I would suggest using the average of multiple runs to justify the robustness of the approach. **Response:** Thanks for this suggestion. In fact, the search result of our DAS is very stable. On METER, we tested DAS with three random seeds, and the skipped layers were the same. For this reason, we only report the result of one training run, as is common for PETL methods. Following your suggestion, we will provide multiple-run results in our new version. **Comment 5:** Given the GFLOPs saving, how much can the approach improve the inference speed and inference memory? **Response:** Thanks for this question. The detailed improvements are given in the following tables. For METER, the memory saving is about 4.4%, while the inference speed improves by up to 19.14%. Similar improvements can also be seen on LLaMA-7B, where the performance is even better.

Table D: Comparison of DAS and PETL methods on inference efficiency for METER.

| Method | VQA Test-Dev | Inference Memory (GB) | Inference Speed (Sample/s) | FLOPs |
|-|-|-|-|-|
| Full Tuning | 77.43 | 6.8 | 133.27 | 93.2 |
| LoRA | 74.00 | 6.8 | 133.27 (+0.00%) | 93.2 (-0.0%) |
| Adapter | 74.70 | 7.2 | 130.93 (-1.75%) | 94.9 (+1.82%) |
| Scaled PA | 75.11 | 7.1 | 126.50 (-5.08%) | 94.3 (+1.18%) |
| DAS4-Global | 75.09 | 6.5 | 146.34 (+9.81%) | 88.7 (-4.82%) |
| DAS4-Fusion | 74.80 | 6.4 | 158.79 (+19.14%) | 82.1 (-11.9%) |

Table E: Comparison of DAS and PETL methods on inference efficiency for LLaMA-7B.

| Method | Avg | Inference Memory (GB) | Inference Speed (sample/s) | FLOPs (G) |
|-|-|-|-|-|
| LaVIN-7B | 89.41 | 40.1 | 3.51 | 833 |
| DAS4-7B | 90.36 | 36.5 | 3.88 (+10.54%) | 729 (-18.61%) |
| DAS6-7B | 90.24 | 34.7 | 4.09 (+16.52%) | 678 (-24.85%) |

**Reference** [1] Reducing Transformer Depth on Demand with Structured Dropout. [2] Jump to Conclusions: Short-Cutting Transformers With Linear Transformations. [a] LLaMA: Open and Efficient Foundation Language Models. [b] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering.
[c] Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models. [d] Visual Instruction Tuning.
Summary: This work focuses on the problem of transfer learning in the context of vision-language pre-trained (VLP) models. Existing works that adapt VLP models primarily address the issue of parameter efficient transfer learning (PETL). However, these methods do not effectively reduce the computation complexity of VLP models. Therefore, this work takes PETL a step further and introduces a new setting called parameter and computation efficient transfer learning (PCETL). The goal of PCETL is to reduce the computation complexity of pre-trained models, enabling faster inference, while only tuning a fraction of parameters. To achieve PCETL, a method called dynamic architecture skipping (DAS) is proposed. DAS assists in finding the optimal subnetwork routing of VLP models for downstream tasks. Although the concept of dynamic architecture skipping has been extensively explored in transfer learning for image spaces, it has not yet been investigated in the era of large-scale pre-trained models. The proposed approach demonstrates promising results on two VLP models and three vision-language benchmarks. Strengths: 1. The paper is well-written and easy to follow. 2. The concept of dynamic skipping, as yet unexplored in the era of large-scale VLP models, holds considerable promise. Its introduction to VLP models could significantly reduce computational and time complexity. 3. As large-scale pre-trained models continue to be released daily, methods for efficiently transferring learned features become increasingly significant. 4. The experimental results compellingly illustrate the potential of the proposed methodology. Weaknesses: 1. The concept of dynamic architecture skipping is not new and is already well-established in the case of transfer learning for standard imagery (see references below). It would be beneficial if the authors could provide reasons why existing approaches are unsuitable for the current problem and setting. 2.
The proposed approach is not end-to-end differentiable and models the problem as a k-armed bandit. A more comprehensive end-to-end approach would be ideal (refer to references 2 and 4). 3. The paper lacks a comparison with existing dynamic architecture skipping methods, which is a crucial element for an encompassing evaluation. 4. The authors acknowledge that the number of layers to be skipped must be manually determined. Some of the transfer learning methodologies listed below are capable of handling this automatically, borrowing ideas from there can benefit this work. **References** 1. [BlockDrop: Dynamic Inference Paths in Residual Networks](https://arxiv.org/abs/1711.08393) 2. [SpotTune: Transfer Learning through Adaptive Fine-tuning](https://arxiv.org/abs/1811.08737) 3. [Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?](https://arxiv.org/abs/2106.02890) 4. [$\Delta$-Networks for Efficient Model Patching](https://arxiv.org/abs/2303.14772) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Looks like random skipping is a very strong baseline for this work (Line 256-257). The performance difference between random skipping and the proposed approach appears marginal, yet considering that random skipping is less costly, are these gains still substantial? 2. Could you please explain the rationale for modeling this as a k-armed bandit problem? Additionally, what would be the primary challenges in solving this problem in an end-to-end differentiable manner? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations Section discusses the limitations of this work clearly Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer #LdaL We highly appreciate your time and effort in reviewing this paper, as well as your encouraging and constructive comments on our work. Below, we respond to your key concerns point by point. **Comment 1:** The concept of dynamic architecture skipping is not new and is already well-established in the case of transfer learning for standard imagery (see references below). **Response:** Thanks for this suggestion. In this paper, we propose a new task called Parameter and Computation Efficient Transfer Learning (PCETL) for large-scale pre-trained models, which requires not only reducing the computation redundancy but also avoiding expensive full fine-tuning on various downstream tasks. This is a new, as yet unexplored field, and we find it difficult for existing solutions to directly achieve the above goals simultaneously. Thus, we propose a novel approach called Dynamic Architecture Skipping based on k-armed bandit theory. As for the methods you mention, most have obvious shortcomings with respect to PCETL. BlockDrop is an example-dependent method that uses a policy network to predict the routing path for each input. Its combination with PETL methods like Adapter is intractable, since the problem definition as well as the training scheme would have to be largely changed. SpotTune is an approach for choosing which layers to fine-tune rather than which to skip. Δ-Networks is unable to reduce the computation, since its weighted connections only serve adaptation. A potential solution is MRM, which learns gating functions to decide layering options. In practice, however, its computation reduction is highly non-deterministic, and in some cases it does not skip any redundant layers; see Table A. **Comment 2:** A more comprehensive end-to-end approach would be ideal (refer to references 2 and 4). **Response:** Thanks for this comment. Following your suggestion, we compare our DAS with MRM [3] and Δ-Networks [4] in the following table.

Table A: The comparison between DAS and alternative methods.

| Method | Updated Parameter | Training Memory (GB) | Training Time | VQA test-dev | VQA Additional FLOPs | NLVR2 test-P | NLVR2 Additional FLOPs |
|-|-|-|-|-|-|-|-|
| Full Tuning | 323.31M | \>40 | N/A | 77.43 | 0.0 | 83.05 | 0.00 |
| Classifier Only | - | 21.4 | 27h | 69.93 | 0.0 | 73.23 | 0.00 |
| MRM (1e-5) | 6.23M | 26.3 (search) / 24.0 (training) | 8h (search) + 22h (training) | 75.28 | +1.80% | 81.39 | -4.16% |
| Δ-Networks | 36 | 21.5 | 27h | 67.34 | +0.00% | 71.49 | +0.00% |
| DAS2-Global | 6.23M | 21.6 (search) / 20.5 (training) | 11h (search) + 22h (training) | 75.24 | -4.25% | 81.37 | -4.16% |
| DAS4-Global | 6.23M | 21.7 (search) / 20.6 (training) | 10h (search) + 20h (training) | 75.09 | -4.84% | 80.69 | -6.95% |

We can first observe that the adaptation performance of Δ-Networks is much inferior to that of our DAS, and it also cannot reduce the computation, as mentioned above. MRM can be extended to a PETL method when combined with Adapters. However, its computational efficiency is unstable and difficult to directly constrain on different VL tasks; simply put, MRM may well choose not to skip layers during training. More importantly, its GPU memory overhead is much more expensive than that of our method. Overall, our DAS remains the best choice considering efficiency and effectiveness. **Comment 3:** The paper lacks a comparison with existing dynamic architecture skipping methods. **Response:** Following your suggestion, we supplement the comparison with MRM in Table A, from which we can see that our DAS offers the best trade-off between efficiency and performance. **Comment 4:** The authors acknowledge that the number of layers to be skipped must be manually determined. Some of the transfer learning methodologies listed below are capable of handling this automatically; borrowing ideas from them can benefit this work. **Response:** Thanks for this insightful comment. In fact, the manual setup of DAS is an advantage over the automatic solutions.
To explain: unlike Neural Architecture Search (NAS), the pre-trained VL models have a fixed structure, which greatly limits the choice of network skipping. In this case, gradient-based methods can only rely on the training loss to automatically select which layers should be skipped, and this process is unstable and uncontrollable. As shown in Table A, MRM keeps all Transformer layers of METER on VQA2.0, whereas our method can remove up to four layers with similar performance. Overall, our DAS better meets the PCETL requirement in practice. **Comment 5:** Looks like random skipping is a very strong baseline for this work (Line 256-257)? **Response:** In fact, the number described in Lines 256-257 refers to the performance deviation of random skipping. The actual performance gains of our method are obvious, e.g. +5.6% when skipping 8 layers, as shown in Fig. 3. **Comment 6:** Could you please explain the rationale for modeling this as a k-armed bandit problem? Additionally, what would be the primary challenges in solving this problem in an end-to-end differentiable manner? **Response:** Thanks for this question. As discussed above, in PCETL the structures of VLP models are fixed, so we cannot change the network depth to reduce the computation as in NAS. We therefore model network skipping as a k-armed bandit problem, *i.e.*, which k layers can be skipped, and evaluate the policy via numerous single-shot samplings. This formulation lets us directly specify the computation budget for PCETL. In contrast, gradient-based or differentiable methods like MRM are uncontrollable with respect to layer skipping, as shown in Table A, since their search results are driven by the training loss rather than a pre-defined target. **References** [1] BlockDrop: Dynamic Inference Paths in Residual Networks. [2] SpotTune: Transfer Learning through Adaptive Fine-tuning. [3] Can Subnetwork Structure be the Key to Out-of-Distribution Generalization?
[4] Δ-Networks for Efficient Model Patching. --- Rebuttal Comment 1.1: Comment: I have carefully reviewed the initial submission and the authors' response. I appreciate the effort that has been invested in addressing the concerns raised, and I would like to thank the authors for the new results. The responses provide answers to my questions, and in light of this, I have updated my rating accordingly.
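To build intuition for the k-armed bandit view of layer skipping discussed in this thread — sample which k layers to skip, observe a reward, and keep running redundancy estimates per layer — here is a toy sketch. The reward function, the made-up `true_redundancy` values, the incremental-mean update, and all constants are hypothetical simplifications for illustration, not the actual DAS search procedure:

```python
import random

# Toy sketch (illustrative only): layer skipping as a k-armed bandit.
# Each "arm" is a layer; a pull skips it and observes a noisy reward that is
# higher when the skipped layers were redundant.

random.seed(0)
NUM_LAYERS, K, TRIALS = 6, 2, 2000

# Pretend ground truth: higher value = layer is more redundant (safer to skip).
true_redundancy = [0.1, 0.9, 0.2, 0.8, 0.1, 0.3]

estimates = [0.0] * NUM_LAYERS   # running reward estimate per layer
counts = [0] * NUM_LAYERS        # how often each layer was sampled

for _ in range(TRIALS):
    skipped = random.sample(range(NUM_LAYERS), K)   # single-shot sampling
    # Noisy reward: how little performance is lost by skipping these layers.
    reward = sum(true_redundancy[i] for i in skipped) / K + random.gauss(0, 0.05)
    for i in skipped:                                # credit each sampled arm
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]

# Finally, skip the k layers with the highest estimated redundancy.
to_skip = sorted(range(NUM_LAYERS), key=lambda i: -estimates[i])[:K]
```

Note how the budget k is fixed up front — the property the authors argue distinguishes the bandit formulation from gradient-based gating, whose skipping amount is determined implicitly by the training loss.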
Rebuttal 1: Rebuttal: We highly appreciate the AC for pushing forward NeurIPS 2023, and also thank all reviewers for their valuable and encouraging comments on this paper, such as \`\`*Its introduction to VLP models could significantly reduce computational and time complexity.*\'\' by Reviewer LdaL, \`\`*The experimental results compellingly illustrate the potential of the proposed methodology.*\'\' by Reviewer LdaL, \`\`*The design of the proposed approach is reasonable.*\'\' by Reviewer pBnL, \`\`*introduces a novel and intriguing approach*\'\' by Reviewer 5vAx, \`\`*It highlights the significance of reducing computational complexity in Visual Language Pretraining (VLP) models*\'\' by Reviewer 5vAx, *et al*. During the rebuttal phase, our main responses include: 1. The details of training costs, including GPU memory and training time, which are similar to those of most PETL methods. 2. The actual inference speed-up, which can reach +19% in practice. 3. The comparison with alternative skipping methods, where our merits are still evident. 4. The application to LLaMA on ScienceQA, where the target of PCETL is still achieved by our method. Meanwhile, the key concerns of all reviews are responded to point by point in each rebuttal. Here, we would like to emphasize our key contributions again: 1. We raise a new problem called Parameter and Computation Efficient Transfer Learning (PCETL) for VLP models. 2. We propose a novel Dynamic Architecture Skipping (DAS) method for PCETL, which can greatly reduce the computation redundancy on downstream tasks. Lastly, the new results from the rebuttal will be added to our final version, and our source code will be publicly released after acceptance. Best, The authors.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Multi-task Graph Neural Architecture Search with Task-aware Collaboration and Curriculum
Accept (poster)
Summary: This paper puts forth an intriguing and innovative research problem: how to effectively search for graph neural architectures within multi-task domains. In addressing this question, the authors introduce MTGC, a methodology involving three primary stages. Firstly, forward propagation is carried out through the structurally diverse supernet in combination with the soft task-collaborative module. Secondly, both the architecture parameters and the soft task-collaborative module are updated. Finally, model weights are modified through task-wise curriculum learning. The aforementioned steps seem practical and are validated by their effectiveness in the experimental contexts. Strengths: (1) The work presented in this paper is well articulated and comprehensible apart from a few minor presentation issues. (2) The paper adds a novel angle to the existing discourse by applying GNAS to a multi-task setting, which is a noteworthy approach. (3) The research problem has been meticulously defined and the related challenges have been effectively pinpointed. (4) The proposed methods come across as plausible. I like the manner in which the graph structure has been disentangled. It is generalizable. (5) The experiment stage of the research is replete with ample benchmarks, which efficiently validate the effectiveness of the proposed method. Weaknesses: (1) The left part (three graphs) in Figure 2 is confusing. Are there three different input graphs? or just three disentangled graphs from the same input graph? It should be clarified. (2) What kind of knowledge should different GNN architectures share in the multi-task setting? Can you present more discussions about this? or raise some examples? There are some related works that should be cited. [1] Factorizable graph convolutional networks. [2] Automatic relation-aware graph network proliferation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As stated in weakness. 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitation part is included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviewing efforts and constructive comments. We address your comments point by point. *Q1: The left part (three graphs) in Figure 2 is confusing. Are there three different input graphs? or just three disentangled graphs from the same input graph? It should be clarified.* Response 1: We appreciate the reviewer's feedback regarding Figure 2. There are indeed three different input graphs in the left part of Figure 2, and each of these graphs represents the input of a task. We will provide additional clarification in the revised version to avoid any ambiguity. *Q2: What kind of knowledge should different GNN architectures share in the multi-task setting? Can you present more discussions about this? or raise some examples?* Response 2: We appreciate the reviewer's valuable question about the knowledge shared in our multi-task architecture. If the labels of two tasks are highly correlated, the tasks will share more information. For example, if the hidden feature output by a chunk in a layer is useful for the inference of two different tasks, the supervised signals of both tasks will further optimize the learning of that chunk's parameters. Although the shared knowledge takes the form of parameters, which cannot be intuitively interpreted, the strength of the sharing can be estimated by $p_{ij}$ in our framework, as shown in Figure 4. *Q3: There are some related works that should be cited.* Response 3: Thank you for your suggestion; we will ensure that the revised manuscript includes references to these papers.
Summary: This paper proposes a method called MTGC3 for searching GNNs in multi-task learning. Firstly, it highlights the importance of designing different GNNs for different tasks. Then, it introduces the Structurally Diverse Supernet and Soft Task-Collaborative modules, which enable the generation of task-specific architectures that can operate separately or collaboratively. The task-wise training strategy is employed to address the task-imbalance problem. Strengths: The paper presents an interesting and novel approach that jointly searches for different architectures for different tasks, considering task-specific and shared information. Weaknesses: Method design: The relationship with GNNs is not clear, and further comparisons with general multi-task+NAS methods should be discussed. While this paper seems to be the first to propose a method for searching multi-task GNNs, the designed method (i.e., separated trunk, soft collaboration module, and task-wise learning) appears to have weak connections with GNNs. Why can't existing NAS+MTL methods be applied? From this perspective, the first contribution of this paper appears to be weakened. In terms of experiments, the evaluations of the proposed method could be further improved: 1. Performance comparisons with general multi-task methods should be included, such as in Line 316. 2. In Table 2, the results of the ablation study are remarkably similar to each other, indicating that they represent the basic functionality of the designed modules in this paper. 3. Evaluation of the Cross-mix head. This module is proposed to allocate sufficient hidden units for each task. When evaluating this module, it seems necessary to remove the masked tensors and utilize only a few units for each task instead of showing the α_{ik} values? In summary, the key contribution of this paper is not adequately justified. The paper overlooks MTL+GNN baselines, which are essential for demonstrating the effectiveness of the proposed method. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please check the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please check the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviewing efforts and constructive comments. We address your comments point by point. *Q1: The relationship with GNNs.* Response 1: Thank you for the comment. Here we would like to further clarify that we designed our method based on the specific requirements of graph multi-task learning scenarios. (1) Our approach considers different graph structures for different tasks, which is a critical consideration in graph multi-task settings. Our structurally diverse supernet enables different tasks to learn with different graph structures, which is not considered in existing general multi-task NAS methods that mainly focus on searching for sharing parts. (2) Our approach contains practical considerations for graph scenarios, such as backbone search and handling a large number of tasks. The semantic information of the multiple tasks in the graph domain is diverse and complex, leading to significant performance differences across GNNs. Figure 1 in our paper demonstrates these disparities. Existing general multi-task NAS methods [1][2][3][4][5] mainly focus on searching for sharing parts while manually fixing the backbone. In contrast, our method combines backbone searching and shared-parameter searching to effectively address the unique challenges of graph multi-task problems. We also incorporate a cross-mixed head design that significantly reduces the number of parameters required for scenarios with an extremely large number of tasks, which is a practical consideration that is not addressed in existing general multi-task NAS methods. In the revised version, we will include a discussion that addresses the unique challenges of graph multi-task NAS, which motivates the need for a specialized approach. We also acknowledge that our method can be extended to other domains by considering more domain-specific priors.
By providing this analysis, we aim to strengthen the justification for our proposed approach and clarify the distinct contributions of our work. *Q2: Performance comparisons with general multi-task methods.* Response 2: We appreciate the reviewer's comment. We have compared our method with several representative general multi-task methods, namely MTL-NAS [1], Sparse Sharing [2], Raychaudhuri et al. [3], AdaShare [4] and AutoMTL [5]. The results are shown in Appendix D due to the space limit. | Method | Tox21 | ToxCast | Sider | | --- | --- | --- | --- | | MTL-NAS | $74.77_{0.24}$ | $63.14_{0.52}$ | $55.31_{0.64}$ | | AdaShare | $67.34_{1.08}$ | $62.91_{0.41}$ | $60.41_{0.46}$ | | AutoMTL | $73.02_{0.90}$ | $62.69_{0.39}$ | $53.94_{1.87}$ | | Sparse Sharing | $75.17_{1.26}$ | $64.10_{0.70}$ | $57.65_{1.15}$ | | Raychaudhuri et al. [3] | $75.86_{0.55}$ | $62.85_{0.24}$ | $55.90_{1.25}$ | | MTGC3 | $78.01_{0.68}$ | $66.74_{0.57}$ | $62.26_{1.42}$ | The results demonstrate the effectiveness of our method on multi-task graph learning. We will revise this section in the revision. [1] MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning. CVPR 2020. [2] Learning Sparse Sharing Architectures for Multiple Tasks. AAAI 2020. [3] Controllable Dynamic Multi-Task Architectures. CVPR 2022. [4] AdaShare: Learning What to Share for Efficient Deep Multi-Task Learning. NeurIPS 2020. [5] AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning. NeurIPS 2022. *Q3: The results of the ablation study are remarkably similar to each other, indicating that they represent the basic functionality of the designed modules in this paper.* Response 3: We appreciate the reviewer's comments. To provide a clearer illustration of the contributions of the modules, we conducted another experiment with the variant model MTGC^3-NoAll (NoStru+FullCollab+MLPHead+NoCL).
This variant model only keeps the separate chunks with different architectures in our proposed design. The results are as follows: | Method | MGL-WS | Tox21 | ToxCast | | --- | --- | --- | --- | | DARTS | $64.16_{0.29}$ | $76.96_{0.57}$ | $65.23_{0.60}$ | | MTGC^3-NoAll | $66.68_{0.32}$ | $77.39_{0.20}$ | $65.62_{0.61}$ | | MTGC^3-NoStru | $67.17_{0.60}$ | $77.42_{1.00}$ | $66.30_{0.30}$ | | MTGC^3-FullCollab | $67.21_{0.55}$ | $77.83_{0.71}$ | $66.01_{0.50}$ | | MTGC^3-MLPHead | $64.47_{1.16}$ | $77.62_{0.57}$ | $64.61_{0.53}$ | | MTGC^3-NoCL | $67.15_{0.52}$ | $77.68_{0.48}$ | $66.00_{1.07}$ | | MTGC3 | $67.39_{0.42}$ | $77.99_{0.42}$ | $66.36_{0.26}$ | Compared with DARTS, this variant achieves a stable improvement in performance, indicating that the proposed key idea (using different architectures in different chunks for different tasks) contributes substantially. All other designs in our paper build on this key idea, and the performance can still be improved by the designed modules. Although the improvements they bring may not be as significant as the main contributing factor, they are still valuable for the overall framework. We also find that in some cases MTGC^3-MLPHead performs even worse than MTGC^3-NoAll; this may be because our designed modules learn more poorly when receiving more mixed gradients through the MLP classification head. We will add these experiments and discussions in the revised version. *Q4: Evaluation of the Cross-mix head.* Response 4: We appreciate the reviewer's suggestion regarding the evaluation of the Cross-mix head module and the allocation of hidden units for each task. In our implementation, we use 128 dimensions for ToxCast and $p=1/16$ in the Bernoulli distribution. Therefore, 8 units are assigned to each task in expectation. Following your suggestion, we conducted experiments with only a few units for each task, pre-assigning only 2 units per task.
The evaluation metric on ToxCast drops to $61.70_{0.58}$, while that of our method is $66.36_{0.26}$, indicating that using only very few units per task does not work well in this case. Our cross-mixed head offers a reasonable solution to this situation.
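For illustration, the unit allocation described in Response 4 (a 128-dimensional head shared via Bernoulli masks with $p = 1/16$, i.e. 8 units per task in expectation) could look like the following sketch; the function name and the task count of 100 are hypothetical, not taken from the paper's code.

```python
import numpy as np

def sample_task_masks(num_tasks, hidden_dim=128, p=1/16, seed=0):
    """Sample one Bernoulli mask per task over the shared hidden units.

    With p = 1/16 and hidden_dim = 128, each task keeps
    hidden_dim * p = 8 units in expectation; tasks share a unit
    wherever their masks overlap, instead of each task owning a
    private slice of the head.
    """
    rng = np.random.default_rng(seed)
    return rng.random((num_tasks, hidden_dim)) < p  # boolean (tasks, units)

# Illustrative task count; the rebuttal's ToxCast setting uses 128 dims, p = 1/16.
masks = sample_task_masks(num_tasks=100)
mean_active = masks.sum(axis=1).mean()  # close to the expected 8 units/task
```

Because the masks are sampled rather than pre-assigned, the parameter count of the head stays fixed at 128 units no matter how many tasks are added, which is the scaling property the rebuttal emphasizes.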
Summary: Existing GraphNAS algorithms search for well-performing architectures for a single task; this paper instead searches architectures for multiple graph tasks at the same time to share common knowledge. Specifically, it uses a structurally diverse supernet and a soft task-collaborative module to search for the optimal architectures and the collaborative pattern of different tasks. The paper further proposes to leverage curriculum learning to balance gradient scales of different tasks during searching. Empirical results show that this method can improve GraphNAS in multi-task scenarios. Strengths: 1. Different from existing GraphNAS papers, this paper introduces a complete framework to search for multiple architectures for multiple tasks at the same time. 2. This paper introduces curriculum learning in the multi-task setting to balance task gradients in the searching phase. 3. The experiments on different datasets and the ablation study are sufficient to show how the designed method works. Weaknesses: 1. Some designs of this paper are not well supported. At the beginning of Section 3 the authors present Assumption 1, which is the basis of the structurally diverse supernet. The authors also verified their assumption through experiments that different tasks require different architectures. However, the graph structure differences for different tasks are proposed in the supernet and are not experimentally verified. 2. In Section 4.3, the absolute value of p is shown in Figure 4. Do positive and negative values have the same meaning in terms of transferred knowledge? Please give more explanation of the meaning. 3. In the algorithm proposed by the authors there are many hyperparameters that need to be optimized, such as the learning rates of different parts. This may lead to difficulties in tuning when switching between different datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviewing efforts and constructive comments. We address your concerns point by point. *Q1: The graph structure differences for different tasks are proposed in the supernet and are not experimentally verified.* Response 1: Thank you for the comment. Following your suggestion, we validate the necessity of graph structure differences for different tasks as we did for architecture differences. To maintain consistency with the process in our model, we first randomly sampled 12 pairs of different $S_u, S_v$. We trained all these tasks separately with the best architecture searched by the DARTS algorithm and these pairs of $S_u, S_v$. These $S_u, S_v$ are fixed during training, and we use them to generate edge weights as in Equation (7). We rank these pairs according to their performance and calculate the Kendall rank correlation between the rankings of different tasks. We find the Kendall correlation values are very low: in Tox21, 73.6% of the values are less than 0.2 and 47.2% are less than 0; in ToxCast, 78.6% of the values are less than 0.2 and 47.0% are less than 0. The results indicate that different graph structures also behave differently on different tasks, illustrating the necessity of graph structure differences. We will add the experiment and results in the revised version. *Q2: In Section 4.3, the absolute value of p is shown in Figure 4. Do positive and negative values have the same meaning of transferred knowledge? Please give more explanation of the meaning.* Response 2: We appreciate the reviewer's comment regarding the interpretation of positive and negative values in Figure 4. We can consider $p_{ij}$ as a part inside the operation $o_{ijk}$. Since the parameters in $o_{ijk}$ can be negative, $p_{ij}$ can also be negative.
A positive or negative $p_{ij}$ indicates that the embedding is positively or negatively correlated with the downstream parameters. While positive and negative values of $p_{ij}$ represent different directions, they both indicate the presence of transferred knowledge. The absolute value is used to emphasize the strength or magnitude of the transferred information, regardless of its direction; the sign of $p_{ij}$ only represents that direction. We will provide a detailed explanation in the revised version to clarify the meaning of the absolute value of $p_{ij}$. *Q3: Difficulties in tuning when switching between different datasets.* Response 3: Thank you for the comment. We have provided the typical hyper-parameter settings, including the learning rates of different parts, in Appendix C. We also explored the sensitivity of these hyper-parameters. The results are shown below. | $\eta_S$ | 0.0008 | 0.001 | 0.0012 | | --- | --- | --- | --- | | MGL-WS | $67.01_{0.66}$ | $67.39_{0.42}$ | $67.13_{0.48}$ | | $\eta_w$ | 0.004 | 0.005 | 0.006 | | --- | --- | --- | --- | | MGL-WS | $66.74_{0.52}$ | $67.39_{0.42}$ | $67.40_{0.63}$ | | $\eta_\alpha$ | 0.01 | 0.012 | 0.014 | | --- | --- | --- | --- | | MGL-WS | $67.54_{0.52}$ | $67.39_{0.42}$ | $66.83_{0.47}$ | The results demonstrate that our method is not very sensitive to these hyper-parameters, indicating that they can be easily tuned. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their comprehensive response. The additional experiments provided by the authors on the need for graph structure differences for different tasks and hyperparameters resolve my concerns. I think this is overall a good work and will be happy to increase my score to 7. --- Reply to Comment 1.1.1: Title: Thanks for the follow-up Comment: We thank the reviewer for the detailed check and response to our rebuttal content, and we believe this fruitful rebuttal further improves our paper.
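The Kendall rank correlation check used in Response 1 can be made concrete with a small tie-free sketch; this is an illustrative re-implementation, not the authors' code, and the two example rankings below are hypothetical.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two tie-free rankings.

    Counts concordant minus discordant pairs over all pairs of items;
    +1 means identical orderings, -1 a fully reversed ordering, and
    values near 0 mean the two rankings are essentially unrelated.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical performance rankings of 12 (S_u, S_v) pairs under two tasks;
# a tau below ~0.2 (as reported above) means a structure that ranks highly
# for one task carries little information about its rank for the other.
task_a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
task_b = [3, 1, 7, 2, 12, 5, 9, 4, 11, 8, 6, 10]
tau = kendall_tau(task_a, task_b)
```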
Summary: This paper proposes a graph multi-task neural architecture search technique, which is a new scene in graph NAS. This paper takes some reasonable measures to handle the problem, including structurally diverse supernet, soft task-collaborative module, and task-wise curriculum training. Its performance is worthy of recognition. Strengths: - The proposed supernet and collaborative module of this article are well-motivated. - This article introduces task-wise curriculum learning, which make sense in the multi-task NAS problem. - The experimental results in this paper are very good, which show the mechanism of the method clearly. - This article performs a full ablation analysis. Weaknesses: - Multi-task NAS is not a new technique in other NAS area. Prior arts have had some exploration. It is better to provide more comparison with those methods, especially the issue of how to exchange information between different tasks. - In the article, the definition of search space is vague. Please give more details of it. - Figure 2 is pleasing but hard to understand. It proves to be rather challenging to comprehend the inner workings of the framework and what the input-output formats are for each module. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviewing efforts and constructive comments. We address your comments point by point. *Q1: Multi-task NAS is not a new technique in other NAS area. Prior arts have had some exploration. It is better to provide more comparison with those methods, especially the issue of how to exchange information between different tasks.* Response 1: Thank you for the comment. Multi-task NAS has indeed been explored in areas like CV and NLP. However, we designed our method based on the specific requirements of graph multi-task learning scenarios. (1) Our approach considers different graph structures for different tasks, which is a critical consideration in graph multi-task settings. Our structurally diverse supernet enables different tasks to learn with different graph structures, which is not considered in existing general multi-task NAS methods that mainly focus on searching for sharing parts. (2) Our approach contains practical considerations for graph scenarios, such as backbone search and handling a large number of tasks. The semantic information of the multiple tasks in the graph domain is diverse and complex, leading to significant performance differences across GNNs. Figure 1 in our paper demonstrates these disparities. Existing general multi-task NAS methods [1][2][3][4][5] mainly focus on searching for sharing parts while manually fixing the backbone. In contrast, our method combines backbone searching and shared-parameter searching to effectively address the unique challenges of graph multi-task problems. We also incorporate a cross-mixed head design that significantly reduces the number of parameters required for scenarios with an extremely large number of tasks, which is a practical consideration that is not addressed in existing general multi-task NAS methods. (3) Our design of information exchange between different tasks is new.
Our proposed soft task-collaborative module captures the complex relationships between tasks and is specifically designed for the supernet, allowing for simultaneous optimization. Furthermore, our task-wise curriculum training strategy is tailored to our layer-wise disentangled network, and our re-weighting technique rebalances the partial derivatives from different tasks within our framework. Extensive experiments further support the effectiveness and superiority of our method. We have also compared our method with several representative general multi-task methods, namely MTL-NAS [1], Sparse Sharing [2], Raychaudhuri et al. [3], AdaShare [4] and AutoMTL [5]. The results are shown in Appendix D due to the space limit and demonstrate the effectiveness of our method on multi-task graph learning. In the revised version, we will include a discussion that addresses the unique challenges of graph multi-task NAS, which motivates the need for a specialized approach. We also acknowledge that our method can be extended to other domains by considering more domain-specific priors. By providing this analysis, we aim to strengthen the justification for our proposed approach and clarify the distinct contributions of our work. [1] MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning. CVPR 2020. [2] Learning Sparse Sharing Architectures for Multiple Tasks. AAAI 2020. [3] Controllable Dynamic Multi-Task Architectures. CVPR 2022. [4] AdaShare: Learning What to Share for Efficient Deep Multi-Task Learning. NeurIPS 2020. [5] AutoMTL: A Programming Framework for Automating Efficient Multi-Task Learning. NeurIPS 2022. *Q2: In the article, the definition of search space is vague. Please give more details of it.* Response 2: Thank you for the comment. We introduce the candidate operators in Section 2.2; our search space includes GCN, GAT, GIN, SAGE, k-GNN, ARMA, and MLP.
The entire GNN backbone is a layer-by-layer architecture without sophisticated connections. *Q3: Figure 2 is pleasing but hard to understand. It proves to be rather challenging to comprehend the inner workings of the framework and what the input-output formats are for each module.* Response 3: We appreciate the reviewer's feedback regarding Figure 2. The structurally diverse supernet is the backbone of the framework, the input is the graphs of different tasks, and the output is the prediction for the target tasks. The soft-collaborative module is a part that contains learnable parameters in the supernet. The task-wise curriculum learning strategy is the optimization method that controls the calculation of gradients. We will ensure that the revised manuscript includes an improved Figure 2 with accompanying descriptions for better clarity and understanding. --- Rebuttal Comment 1.1: Title: Raise my score Comment: Thank you for your thorough rebuttal and addressing my concerns. I have carefully considered your responses and revised my assessment of the paper. The paper initially lacked sufficient comparison with prior arts in the multi-task NAS field. However, the authors have addressed this concern by providing comparisons with existing general multi-task NAS methods in Appendix D and clarifying the differences between the approach of this work and the general Multi-task NAS approach, including aspects of model design and applicable scenarios. I think highlighting this part of the main paper could have emphasized the value of this paper even more. Other concerns are also addressed in the rebuttal. Overall, I appreciate the authors' efforts in addressing my comments and improving the paper. I'd like to raise my score to show my support. --- Reply to Comment 1.1.1: Title: Thanks for the follow-up Comment: Thank you for the suggestions and response to our work and the rebuttal content. We believe with the rebuttal content, our paper will be made more clear.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a multi-task graph NAS approach by learning the relationships between tasks. It uses the structurally diverse supernet to learn multiple architectures and structures together, the soft task-collaborative module to learn task relationships to exchange information, and task-wise curriculum training to balance task difficulties. Empirical results on OGB datasets are strong. Strengths: 1. This paper is well-organized and easy to follow. 2. This paper introduces the multi-task graph NAS problem, which is important in graph learning, yet unexplored in previous works. 3. The motivations of the proposed three designs are good. They are reasonable for solving the problem. 4. The results on OGB datasets are good compared with SOTA baselines. Weaknesses: 1. After calculating the edge weights, how do you perform the architectures using these edge weights in the continuous space? The implementation of this part is missing in the paper. 2. Lack of detailed motivation for some parts of the methods. In Equation 9, there are other functions that can also be chosen to represent the relationship between tasks such as sigmoid, why tanh function is used here? 3. For the soft task-collaborative module, the parameters $\theta$ are learned in a continuous space. How do you keep them in the final architecture? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Check the weaknesses parts. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your reviewing efforts and constructive comments. We address your comments point by point. *Q1: After calculating the edge weights, how do you perform the architectures using these edge weights in the continuous space? The implementation of this part is missing in the paper.* Response 1: We appreciate the reviewer's comments. The edge weights represent the importance of different edges. Once the edge weights are calculated, we multiply each weight onto the message passed along that edge. If we denote $w_{ij}$ as the edge weight between nodes $i$ and $j$, then the message passing process shown in Equation (1) becomes: $$\mathbf{m}_i^{(l)} = \text{Agg}(w_{ij}\mathbf{h}_j^{(l)} \mid j\in \mathcal{N}_i)$$ In this way, we can use edge weights with all GNN operations. We will add a detailed explanation of how to use the edge weights in the revised version. *Q2: Lack of detailed motivation for some parts of the methods. In Equation 9, there are other functions that can also be chosen to represent the relationship between tasks such as sigmoid, why $\tanh$ function is used here?* Response 2: We appreciate the reviewer's feedback. In Equation 9, the choice of the $\tanh$ function is motivated by its desirable properties for capturing task relationships within our framework: (1) Symmetry around the origin: the $\tanh$ function is symmetric around the origin, which allows it to model both positive and negative relationships between tasks. This is particularly important as tasks can exhibit different types of relationships, including positive correlations, negative correlations, or no correlation at all. (2) Bounded output range: the $\tanh$ function outputs values between -1 and 1, which provides a bounded range for representing the strength of task relationships.
This range can be interpreted as the degree of collaboration or interdependence between tasks, with values closer to 0 indicating weaker relationships and values closer to -1 or 1 indicating stronger relationships. In summary, the $\tanh$ function has properties that exactly match the demands of task-relationship representation in our framework. *Q3: For the soft task-collaborative module, the parameters $\theta$ are learned in a continuous space. How do you keep them in the final architecture?* Response 3: We appreciate the reviewer's question. In our soft task-collaborative module, the parameters $\theta$ are indeed learned in a continuous space. After the whole training procedure, we keep the continuous values of $\theta$ in the final architecture, and we evaluate the performance directly with the parameters in the supernet. --- Rebuttal Comment 1.1: Title: Thanks for your reply. Comment: My concerns have been resolved, thanks. It is an interesting paper, and I would like to increase my score. --- Reply to Comment 1.1.1: Title: Thanks for the follow-up Comment: Thank you very much for your valuable comments and the careful check of our rebuttal. We believe this discussion will greatly contribute to our paper.
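As a concrete reading of the edge-weight equation in Response 1 above, the sketch below multiplies each learned edge weight $w_{ij}$ onto the message from node $j$ before a sum aggregation; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def weighted_aggregate(h, edges, w):
    """One edge-weighted message-passing step: m_i = sum_j w_ij * h_j.

    h:     (num_nodes, dim) node features
    edges: directed edges (i, j); the message flows from j to i
    w:     mapping (i, j) -> learned edge weight w_ij
    """
    m = np.zeros_like(h)
    for i, j in edges:
        m[i] += w[(i, j)] * h[j]
    return m

h = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 2)]
w = {(0, 1): 0.5, (0, 2): 1.0, (1, 2): 0.0}
m = weighted_aggregate(h, edges, w)
# Node 0 aggregates 0.5*h_1 + 1.0*h_2 = [3.0, 2.0]; the zero weight
# on edge (1, 2) suppresses that message entirely.
```

Because the weights act multiplicatively on messages, the same mechanism drops into any aggregation-based GNN operator, which is the point made in the response.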
Information Geometry of the Retinal Representation Manifold
Accept (poster)
Summary: This paper uses a fitted surrogate neural network model to approximate the Fisher information metric induced on image space by the retinal ganglion cell population code. They use this analysis to argue that noise correlations in the retina are information-limiting. Strengths: 1. The question of whether noise correlations in early sensory areas are information-limiting is a classic and broadly interesting issue in theoretical and systems neuroscience. This paper offers a novel approach to the problem of Fisher information estimation. 2. I think the observation that the most discriminable stimulus depends on the base point (Lines 196-208) is an intriguing finding, though perhaps not so surprising. Making a convincing link between this variation and neural adaptation would be an interesting topic for future study (I fully acknowledge that a detailed characterization of this phenomenon is likely beyond the scope of the present manuscript). Weaknesses: 1. I have several concerns regarding the robustness of the paper's conclusions to the architecture and goodness of fit of the surrogate model. One key strength of some past work on information limiting correlations---I have in mind Rumyantsev et al, cited as [8] in the submitted manuscript---is the demonstration that the quantities of interest can be accurately resolved given a number of measurements comparable to the number of experimental recordings. It is not clear to me whether the same should be true here. Can the authors provide evidence that their approach gives accurate estimates of the directions of maximal variation, and that the surrogate model is not overfit? 2. In a similar vein, the authors argue reasonably convincingly that it is reasonable to neglect stimulus-dependence of the noise covariance (i.e., $d\Sigma/dx \simeq 0$) for a ReLU network model, but do not give direct evidence that this is a reasonable assumption for the retinal population code. 3. 
In several places, data supporting the authors' analysis decisions are not shown, and their choices are not always clearly described. For example, in the truncated approximation (8) for the Fisher information matrix, can you specify (at least in the SI) the precise criterion used to select the "hundreds of the most stochastic modes" included (Lines 170-171)? **A pedantic concern:** The paper and supplementary material contain several small violations of the anonymity requirements. My score for the paper does not take this concern into account. - Lines 257-259: "Acknowledgements: This work was supported by grants from the NEI, R01EY022933, R01EY025087 and P30EY026877 (SAB)." - In the supplementary ZIP, under code, the LICENSE file contains the line "Copyright (c) 2023 Baccus Lab." Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 33: In addition to Wang & Ponce 2021, earlier work by Shao et al., "The Riemannian geometry of deep generative models" (2018) should be cited. - Lines 73-81: It could be useful to mention that (8) has been termed the "linear Fisher information" in past works, and to cite Beck et al., "Insights from a Simple Expression for Linear Fisher Information in a Recurrently Connected Population of Spiking Neurons" (NECO 2011) and Kanitscheider et al., "Measuring Fisher Information Accurately in Correlated Neural Populations" (PCBI 2015). - Line 84: Why is [1] cited rather than a general text on Riemannian manifolds (or, indeed, [14])? - Line 86: Using the acronym "MDS" for "most discriminable stimulus" conflicts with the standard use of "MDS" to mean "multidimensional scaling." Please consider using an alternative acronym, e.g., "MDI" for "most discriminable input." - Lines 152-153: Though the binomial noise model yields the best fit, could it still be worthwhile to reproduce Figure 5 for the alternative noise models, as a sort of robustness check? 
- Line 157: Please provide more detailed information on compute resources than "on NVIDIA GPUs." - Lines 185-186: "Statistics for preparations with longer test sequences tend to be more reliable and consequently our model performs better on these." Data to support this claim is not shown, correct? It would be useful to show more clearly the effects of this heteroskedasticity. - Figure 4: Please state in the caption whether these measures are computed on held-out stimuli. - Lines 206-208: Could you make this apparent link to ideas of hierarchical predictive coding more precise? - Figure 6a-b: It would be useful to remind the reader of the definitions of stochasticity, sensitivity, and discriminability in the caption. - Figure 6a: In the three sub-panels of Panel (a), there are so many dots overlaid on top of each other that it is hard to tell how single neurons are distributed within the blobs of data. Showing 2D histograms might be more informative. It would also be useful to plot stochasticity, sensitivity, and discriminability against one another rather than only against firing rate (depending on the result, this could be deferred to the supplement). - Figure 6b: The linear fit to the discriminability-mean firing rate relationship is not very convincing because of the substantial spread in the data. - Lines 244-247: I agree that it would be interesting to compute geodesics, but I'm less optimistic that this could be done in practice due to the numerical challenges associated with solving the geodesic equation in high dimensions. One previous attempt by Hénaff and Simoncelli ("Geodesics of learned representations," ICLR 2016) used an optimization-based approach, with somewhat mixed success. Can you elaborate on why you think this is a realistic possibility? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors provide some discussion of the limitations of their work, but I think a more comprehensive assessment of the possible failings of each step of their approach would enhance the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful review and helpful suggestions.

Weaknesses:
1. We are analytically computing the Fisher information given an excellent model of the retina. Rumyantsev et al., in contrast, had to estimate Fisher information directly from data without a model, and therefore had to worry about whether they had enough trials and neurons. We do not have this worry because our model accurately captures two essential properties under natural scenes: the sensitivity, via an accurate model of the neural code, and the noise correlations, via an accurate fit. Given these two, everything afterwards in computing Fisher information is analytic.
2. $\frac{d\Sigma}{dx}$ is expected to be zero at any point in stimulus space that yields mean hidden neuron activities where every ReLU neuron's activation is a reasonable distance away from a zero-crossing, measured in units of the standard deviation of noise injected into the ReLU. Therefore, for the model, this statement is true over most of stimulus space, except for those stimuli which place the mean response of one or more ReLUs close to their zero crossing. In essence, whenever every ReLU's mean response lies more than its own input noise's standard deviation away from its zero crossing, small changes in stimulus do not change the linear response of the overall model, and therefore do not change the noise correlations $\Sigma$. Given that our model is an excellent model of the retina itself, this statement should be a good approximation for the retina. One could test this by presenting two nearby stimuli to the retina, directly measuring the noise correlations, and showing that they do not change much, but this new biological experiment is beyond the scope of this paper. We hope our direct prediction of measured noise correlations suffices.
3.
We usually consider the top 500 most stochastic modes (top 500 principal components), which explain 85% to 90% of the total variance depending on the stimulus. Additionally, as shown in Fig. 5b, the MDS mostly correlates with the top 30 most stochastic modes. Therefore, we claim that our summation converges for the purpose of computing the MDS. We can state this selection criterion more explicitly in the final version.

Questions:
1. Thank you for adding the new citation; we will cite it in the final version.
2. We will mention the terminology and add the citations in the final version.
3. We cited reference [1] as the source of the insight that a Riemannian metric can transform between stimulus space and representation space, whereas a general text typically doesn't concern neural networks or sensory systems.
4. Thanks for bringing this up; we will use MDI instead in the final paper.
5. Fig. 5 is about the geometry of noise correlations. That is to say, the result in Fig. 5 is a product of Gaussian noise added before the final layer. Noise added to the final layer, whether binomial or any other noise model, is independent noise and will not affect the conclusion in Fig. 5. To confirm this, we have plotted Fig. 5 even without the final independent noise, and the conclusion holds. We can clarify the role of the final independent noise.
6. We will provide the detailed GPU information in the final paper.
7. In the caption of Fig. 3, we stated that "the circle radius is proportional to the square root of the total length of the test set." And one can indeed see from Fig. 3 that larger circles are matched better.
8. The stimuli belong to the test set.
9. The correlation between the MDR and mean response provides a mechanism for a higher brain region to estimate the most informative response changes of retinal ganglion cells in the immediate future based on the current response.
Using Bayesian inference, the higher brain could, upon detecting the response $R$, potentially increase its sensitivity to the most informative $\Delta R$ in order to extract the most salient and informative features of the stimulus, analogous to attentional cueing. In addition, by subtracting out the mean response to emphasize the informative $\Delta R$, the higher brain could use predictive coding, which yields a more efficient representation by encoding the prediction error. An optimal readout strategy may combine both elements, known as Bayesian predictive coding (Aitchison et al. 2017).
10. Yes, we can remind readers that the definitions are in the theory section.
11. We agree that the suggested plots directly comparing different quantities could be put in the supplementary material, since they are less relevant to our main conclusions but some readers might be interested.
12. Yes; in fact we should have set the y-axis to start from 0, since we are looking at the fluctuation relative to the absolute value. In that case we believe the plot will look much more convincing. (See Fig. A4 in the attached pdf.)
13. We also believe that it would be impractical to rigorously solve the high-dimensional differential equation to find the geodesic for our model, but approximation methods could help answer the question. A first step could be computing the geodesic distances of a finite set of paths. With these data it becomes possible to refine the path search space if paths with small geodesic distances share common features (dimensionality reduction methods could be used here). Applying some constraints to the path in the pixel space with the Euclidean metric could accelerate the search, as would reducing the full model to an approximately equivalent small model.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my comments and those of the other reviewers.
I think this is an interesting contribution, and I will raise my score.
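The mode-truncation criterion discussed in the exchange above (keep the most stochastic principal components of the noise covariance until a target fraction of variance is explained, then sum the truncated Fisher information over those modes) can be sketched on hypothetical data. The covariance, the derivative vector $J$, and the 85% threshold below are illustrative stand-ins, not the authors' code or fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise covariance for a population of 60 "neurons"
A = rng.standard_normal((60, 60))
Sigma = A @ A.T / 60 + 0.1 * np.eye(60)

# Eigenvectors are the stochastic modes; sort most stochastic first
evals, evecs = np.linalg.eigh(Sigma)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Keep the top modes until at least 85% of total variance is explained
cum = np.cumsum(evals) / evals.sum()
n_keep = int(np.searchsorted(cum, 0.85)) + 1

# Truncated vs. full linear Fisher information for a response derivative J
J = rng.standard_normal(60)
proj = evecs.T @ J                       # projection of J onto every mode
I_full = np.sum(proj**2 / evals)         # equals J^T Sigma^{-1} J
I_trunc = np.sum(proj[:n_keep] ** 2 / evals[:n_keep])
```

Since every term in the sum is nonnegative, the truncated value lower-bounds the full linear Fisher information, and the explained-variance threshold makes the "how many modes" choice explicit.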
Summary: This paper uses a CNN-based model fit to recorded responses from salamander retinal ganglion cells to explore the Fisher information matrix $I$ of neural firing. They report that the top eigenvectors of $I$ vary markedly with the stimulus being shown and that the most discriminative response modes often align with the top eigenvectors of $I$. They further argue, on the basis of their analysis, that noise correlations in the retina are likely to be information-limiting because signal and noise are propagated via feedforward mechanisms through the same channels. This is an interesting paper that employs information geometry methods to provide some insights into retinal coding. However, the results are somewhat preliminary, and it is unclear how closely these insights are tied to a particular model architecture. The presentation of some aspects of model training is also somewhat confusing.

Strengths:
- Good fits to experimental data.
- Attempts to link models to known physiology. Well-grounded in current neuroscientific theories and questions.
- While Fisher information has been widely used to investigate neural population coding, the use of a model fit to natural image data to estimate Fisher information over a wider range of stimuli is innovative.

Weaknesses:
- Several modeling choices seem somewhat _ad hoc_, and it's unclear how much the results depend on them. For instance, the number of hidden layers, the types of nonlinearities, and the structure of the injected noise might possibly play a role, but it's unclear. The authors would likely argue that their good fits to summary statistics of data obviate these questions, since the role of the model is simply to interpolate between measured data for purposes of the manifold analysis, but that remains to be shown.
- Section 3.2 details a somewhat surprising hodgepodge of optimization strategies for different components of the model, suggesting the results depend fairly sensitively on how training is done.
- Results are intriguing but seem somewhat preliminary. For instance, the analysis of MDS and MDR in Figure 4 is interesting, but I'm missing the bigger picture about what to take away from this.

Technical Quality: 3 good
Clarity: 2 fair

Questions for Authors:
- Is there a reason the authors have chosen the non-standard binomial parameterization in Equation 7? Of course it is binomial with sufficient statistic $n$ and natural parameter $k\log \frac{r}{1-r}$, which would normally be written $\log \frac{p}{1-p}$, but calling $k$ a "variability parameter" seems somewhat misleading, as this is still a binomial distribution with a single parameter. It is stated that $N$ is cell-dependent. What about $r$ and $k$?
- I found the exposition of the one-hot layer in Section 3.1 very difficult to follow. I understand that the goal is to mimic retinal organization, but I'm still unclear how this matches up with the math in Eqs. 5 and 6. For instance, I don't know what `input` means in (5). Are $i$ and $j$ indices in the tensor of the last layer, e.g., `W[C(k), i, j]`? And $k$ indexes neurons? Are distinct $(i, j)$ supposed to correspond to distinct RF locations? Similarly, in (6), what do $(p, q)$ correspond to conceptually? It took me a while staring at the formula to understand why a 1-hot $\mathbf{W}$ is optimal for a given $p$ and $q$, and then the sum just takes care of all possible assignments? And I'm still not clear on how this one-hot assignment for each $k$ prevents output units with different $k$ from using the same $(C, i, j)$. I think a conceptual figure would be a real help here.
- Minor: Line 166: This $w$ is not the same as the one in (6), correct?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair

Limitations:
- As stated in the paper, the calculations are predicated on the assumption that the noise correlations $\boldsymbol{\Sigma}$ are locally independent of the stimulus. This may be optimistic in some cases.
- As noted above, the results are conditioned on a particular choice of model architecture, which may or may not affect the results.
- Since the model is based on fits to a limited number of ganglion cells in salamander, it may or may not generalize well to the entire retinal population or other species.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
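The reviewer's point that the parameterization in Eq. 7 is still an ordinary binomial can be checked numerically. In this sketch (with illustrative values of $N$, $r$, $k$, not the paper's fitted ones), the exponential-family form with natural parameter $k\log\frac{r}{1-r}$ collapses to $\mathrm{Binomial}(N, p)$ with effective $p = r^k/(r^k + (1-r)^k)$, and reduces to $\mathrm{Binomial}(N, r)$ at $k=1$:

```python
import numpy as np
from math import comb

def gen_binom_pmf(n, N, r, k):
    """Exponential-family binomial: P(n) proportional to C(N, n) * exp(n * k * logit(r))."""
    theta = k * np.log(r / (1.0 - r))
    # Normalizer: sum_n C(N, n) e^{n*theta} = (1 + e^theta)^N
    return comb(N, n) * np.exp(n * theta) / (1.0 + np.exp(theta)) ** N

N, r, k = 5, 0.3, 2.0
pmf = np.array([gen_binom_pmf(n, N, r, k) for n in range(N + 1)])

# Same distribution as a standard binomial with p_eff = r^k / (r^k + (1-r)^k)
p_eff = r**k / (r**k + (1.0 - r) ** k)
pmf_std = np.array(
    [comb(N, n) * p_eff**n * (1.0 - p_eff) ** (N - n) for n in range(N + 1)]
)
```

So, as the reviewer notes, for fixed $N$ the pair $(r, k)$ only enters through the single effective success probability $p_{\mathrm{eff}}$.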
Rebuttal 1: Rebuttal: Thank you very much for your review and helpful critiques.

Weaknesses:
1, 2. Please see the overall author rebuttal as to the choices of model architecture and optimization. Most importantly, we note that our neural architecture builds on previous work that accounted for the mean deterministic, but not the variable stochastic, responses of the salamander retina to natural movies (see our cited [11, 12, 13]). In that work an extensive architecture search was performed, varying the number of layers, the number of channels per layer, etc., to find the simplest model that can capture mean ganglion cell responses. We build on this extensive architecture search by taking the SOTA model and showing, remarkably, that without any further modification, other than the injection of optimized independent noise in each layer, we can accurately capture a great many pairwise noise correlations in the retinal ganglion cell output. This in and of itself is a major contribution to computational neuroscience: nobody has ever accurately captured such retinal noise correlations, for natural scenes, using a feedforward neural circuit model whose internal structure matches that of the biological retina.
3. We agree that many MDSs and MDRs are difficult to interpret visually. Thus, our main conclusions lie in Figs. 5 and 6, particularly as related to the ongoing question about the role of noise correlations described in the overall author rebuttal. In addition, our recent research finds that when constrained to a natural scene manifold, the MDS becomes more visually interpretable.

Questions:
1. The rationale for choosing the non-standard binomial distribution is the refractory period of retinal ganglion cells, which induces a maximal spike count in a time bin. So $N$ is the cell-dependent parameter that we can directly determine from the observed refractory period in our experimental recording.
After fixing $N$, $k$ is the parameter controlling the variability of the distribution (also cell-dependent), and $k$ can only be determined through model optimization. $r$ is not a parameter to be optimized; it instead controls the firing rate, which is updated according to the current CNN output. We can improve our description of the binomial noise in the final paper to make it clearer.
2. The input to the one-hot layer is the output of the last convolutional layer, which is a 3d tensor whose first index is the channel index and whose other two indices $i, j$ refer to the location. $k$ is the index of recorded neurons. So Eq. 5 says that each output unit is a linear combination of units at different locations in a channel, and these combination weights eventually converge to a one-hot vector after model training. In Eq. 6, $p, q$ are also location indices like $i, j$. To understand intuitively why a one-hot $w^k$ can minimize the loss function, the key observation is that if $w^k$ is a one-hot vector, then $|\prod_{ij}(w^k_{ij}+\delta_{pq,ij}-1)|=1$ for the $(p,q)$ such that $w^k_{pq}=1$, whereas if $w^k$ is far from a one-hot vector, the product is an extremely small number for every $(p,q)$. We will describe this more clearly in the final paper.
3. Correct. We will use a different notation to denote the eigenvalue in the final paper.

Limitations:
1. $\frac{d\Sigma}{dx}$ is expected to be zero at any point in stimulus space that yields mean hidden neuron activities where every ReLU neuron's activation is a reasonable distance away from a zero-crossing, measured in units of the standard deviation of noise injected into the ReLU. Therefore, for the model, this statement is true over most of stimulus space, except for those stimuli which place the mean response of one or more ReLUs close to their zero crossing.
In essence, whenever every ReLU's mean response lies more than its own input noise's standard deviation away from its zero crossing, small changes in stimulus do not change the linear response of the overall model, and therefore do not change the noise correlations $\Sigma$. Given that our model is an excellent model of the retina itself, this statement should be a good approximation for the retina. One could test this by presenting two nearby stimuli to the retina, directly measuring the noise correlations, and showing that they do not change much, but this new biological experiment is beyond the scope of this paper. We hope our direct prediction of measured noise correlations suffices.
2. See the overall author rebuttal as to the motivation of the model architecture. In particular, we build on an extensive prior architecture search, and our results are robust and reproducible across 4 different model fits to 4 different biological retinas.
3. It is a well-known anatomical principle that ganglion cells tile the retina, so at each location there is only one cell of a given type. This mosaic organization also occurs in the salamander retina (Fig. 2 in Kastner & Baccus, 2011), implying that single recorded neurons can be generalized by the one-hot layer to the population of the same cell type. We acknowledge that not all cell types may have been recorded. As for generalization to different species, the primary properties at play here (strong nonlinearity under natural scenes, the presence of noise correlations of a similar magnitude, similar nonlinear phenomenology, and a mosaic ganglion cell organization) have all been observed across different species of vertebrate retina.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful responses. Replies to selected points below:

> 1, 2. Please see the overall author rebuttal as to choices of model architecture and optimization.

Thanks for the clarification.
This is helpful, but it should be informative to the authors that this point was made by three of four reviewers. Either additional pointers to the previous work or more discussion of modeling choices would not hurt.

> $\frac{d\Sigma}{dx}$ is expected to be zero at any point in stimulus space that yields mean hidden neuron activities where every ReLU neuron's activation is a reasonable distance away from a zero-crossing, measured in units of the standard deviation of noise injected into the ReLU.

Sorry, just to make sure I understand: this statement is predicated on the _assumption_ that the only factors affecting $\Sigma$ are how many neurons contribute, not where in stimulus space we are, correct? That is, once a neuron is in the linear portion of the ReLU, it is _assumed_ that the covariance is constant? I apologize if I'm missing something obvious here.

---

Reply to Comment 1.1.1: Title: Constancy of noise covariance is not an assumption when all ReLUs are far above or below threshold

Comment: To Reviewer SiWR: sorry we were not clear. The local constancy of the noise covariance with respect to stimulus variation is not an assumption. It can be proven to be true to good approximation when all ReLUs are either far enough above or far enough below threshold (i.e., 0 activation), in units of the input noise standard deviation. Here is the proof. First consider a linear network where the output $r$ is given by $r = W (x + e^{in}) + e^{out}$. Here $x$ is the input stimulus vector, $e^{in}$ is an input noise vector, and $e^{out}$ is an output noise vector. The noise covariance matrix $\Sigma^r$ of $r$ is straightforwardly computed to be $\Sigma^r = W \Sigma^{in} W^T + \Sigma^{out}$, where $\Sigma^{in}$ and $\Sigma^{out}$ are the covariances of $e^{in}$ and $e^{out}$, respectively. This recovers the well-known result that the output noise covariance $\Sigma^r$ of a linear network, conditioned on the input $x$, is completely independent of the input $x$.
Now consider the nonlinear case: $r(x) = f(x + e^{in}) + e^{out}$. If the nonlinear map $f$ is a ReLU network where the input $x$ keeps the mean activity of each ReLU far from threshold (either above or below), in units of noise standard deviation, then over most of the noise distribution of $e^{in}$ and $e^{out}$ we can Taylor expand $f(x + e^{in})$ about $e^{in}=0$, and the first-order linear expansion will be a very good approximation. Then the above result, that the output noise covariance $\Sigma^r$ is independent of the stimulus $x$ for a linear network, applies to good approximation for the ReLU network: i.e., $\frac{d\Sigma^r}{dx} = 0$ for the ReLU network. We hope this explains why $\frac{d\Sigma^r}{dx} = 0$ is not an assumption but can be proven to hold to good approximation over almost all of stimulus space $x$.
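The argument above is easy to verify numerically. The following sketch (illustrative dimensions, noise scales, and a hand-picked bias, not the paper's model) estimates the output covariance of $r = f(x + e^{in}) + e^{out}$ at two different stimuli, first for a linear map and then for a ReLU map whose bias keeps every unit many input-noise standard deviations above threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_cov(f, x, sigma_in=0.1, sigma_out=0.05, trials=200_000):
    """Empirical covariance of r = f(x + e_in) + e_out over noise draws (3 outputs)."""
    e_in = sigma_in * rng.standard_normal((trials, x.size))
    e_out = sigma_out * rng.standard_normal((trials, 3))
    r = f(x[None, :] + e_in) + e_out
    return np.cov(r.T)

W = rng.standard_normal((3, 5))
x1, x2 = rng.standard_normal(5), rng.standard_normal(5)

# Linear network: covariance is W Sigma_in W^T + Sigma_out, independent of x
linear = lambda x: x @ W.T
S_lin_1, S_lin_2 = noise_cov(linear, x1), noise_cov(linear, x2)
S_theory = 0.1**2 * W @ W.T + 0.05**2 * np.eye(3)

# ReLU network with a large bias: every unit stays far above threshold,
# so the map is locally linear and the covariance barely depends on x
relu_net = lambda x: np.maximum(x @ W.T + 15.0, 0.0)
S_relu_1, S_relu_2 = noise_cov(relu_net, x1), noise_cov(relu_net, x2)
```

Up to sampling error, all four estimated covariances match the analytic $W \Sigma^{in} W^T + \Sigma^{out}$; pushing the bias toward zero (units near threshold) is where the approximation breaks down.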
Summary:
- This study uses an information geometry approach to study visual coding in retinal populations.
- Using local-linear analyses (eigenmodes + spectra) at different conditional responses of a CNN + fitted noise model, the authors observe the following.
- The authors analyze the most sensitive coding directions of the population, as well as the co-alignment of the noise, and find that noise in the retina is information-limiting.
- This is an interesting and conceptually clear/simple paper that adds to our understanding of neural coding in the retina.

UPDATE: Sep 1, 2023. I have read the rebuttal; it addressed my questions and I have already adjusted my confidence accordingly.

Strengths:
- I appreciate the clear writing style.
- The model and eigenvector analyses are conceptually simple to follow.
- The finding that noise correlations are information-limiting is interesting.

Weaknesses:
- I am slightly concerned by the lack of effective null models against which to compare here, given the high dimensionality; either a positive or a negative control would help. For example, is it conceivable that a different model, with a different noise eigenbasis, could achieve the same fit performance?

Technical Quality: 3 good
Clarity: 4 excellent

Questions for Authors:
- Similar Jacobian eigendecomposition analyses were conducted in Berardino et al. ("Eigen-distortions of hierarchical representations", Neurips 2017) to analyze model perceptual alignment with human visual perception. (In that paper they assume isotropic noise in the response domain, but in follow-up analyses in their dissertation, I believe they have more complex noise models.) Importantly, they not only consider the top eigenvector, but also the null space (eigenvectors with eigenvalue 0) as a means to probe model perception. I am wondering if such an analysis, systematically examining the null space of the model, would supplement your results with unique or complementary findings.
- Is there a difference in noise geometry for natural vs. synthetic stimuli? Would these be captured by the model? I think a useful tool to quantify this would be recent work by Duong et al. ("Dissimilarity metric spaces for stochastic neural networks"; ICLR 2023), analyzing covariance orientation and scale for different classes of stimuli.
- Could there be a normative explanation for the noise and coding sensitivity co-alignment?
- Fig 5A should probably explicitly cite the Averbeck and Pouget review in the caption, saying that it is modified from there. Unless I am mistaken, it does look very similar.
- Minor: The abstract is a little long.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There was arguably some discussion of limitations but nothing explicit.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
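The Jacobian eigendecomposition analysis the review refers to, and the most discriminable stimulus direction discussed throughout, both reduce to the eigenstructure of the linear Fisher information matrix $I(x) = J^T \Sigma^{-1} J$ with $J = dr/dx$. A minimal sketch on hypothetical matrices (dimensions and values are illustrative, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_pixels = 3, 8
J = rng.standard_normal((n_neurons, n_pixels))   # Jacobian dr/dx
A = rng.standard_normal((n_neurons, n_neurons))
Sigma = A @ A.T + 0.1 * np.eye(n_neurons)        # noise covariance (positive definite)

# Linear Fisher information matrix on stimulus space: I(x) = J^T Sigma^{-1} J
I_mat = J.T @ np.linalg.solve(Sigma, J)

# Top eigenvector = most discriminable stimulus direction;
# eigenvalue-zero directions form the local null space the reviewer asks about
evals, evecs = np.linalg.eigh(I_mat)
mds = evecs[:, -1]
null_dims = int(np.sum(evals < 1e-10 * evals[-1]))
```

With more stimulus dimensions than neurons, $I(x)$ has rank at most `n_neurons`, so the null space here has `n_pixels - n_neurons` dimensions; probing those directions is exactly the complementary analysis suggested above.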
Rebuttal 1: Rebuttal: Thank you very much for your review and helpful suggestions.

Weaknesses: For the main result about the effect of noise correlations in the retina, we had already performed a null/control analysis that could be added to the final paper. As shown in Fig. A3 in the attached pdf, we shuffled trials of the model response to set all noise correlations to zero, thus creating a null model with independent noise, and compared it with the original model. We found that the neural response with independent noise has significantly higher discriminability than the response with correlated noise, which further confirms our finding of how noise correlations reduce information under natural scenes. We also note that our neural architecture builds on previous work that accounted for the mean deterministic, but not the variable stochastic, responses of the salamander retina to natural movies (see our cited [11, 12, 13]). There, alternative architectures were considered, including the Generalized Linear Model (GLM), which was shown to perform poorly in modeling the mean response compared to the 3-layer CNN model (Maheswaranathan, Niru, et al., Neuron 2023). We therefore extend a SOTA architecture's applicability from mean responses to stochastic second-order correlations in responses to natural movies, which in and of itself is a major contribution that has never been achieved before in the retina using a feedforward network whose internal hidden units and computations match those of the biological retina.

Questions:
1. Thank you for alerting us to this very relevant citation. We have also computed the least discriminable stimulus direction; compared to the MDS, which is more localized, these directions look more scattered and are indeed difficult to discriminate for human eyes (Fig. A2 in the pdf). Since the least discriminable directions are less relevant to our main conclusions, they are not presented in the paper, but we could add them in revision.
2.
We believe that there is a large difference in the geometry for natural and synthetic stimuli. We also trained a model using checkerboard white-noise stimuli, and the resulting noise parameters and noise covariance geometry look very different from those for natural scenes, implying that noise in the retina highly depends on the overall stimulus statistics. Since the scope of our paper focuses on the much more ethologically important regime of natural scenes, we didn't present our results for white noise, but we can briefly describe such stimulus dependency.
3. If all Gaussian noise arose at the level of the stimulus, then obviously the noise and the signal directions would align. Deviation between noise and signal directions can only occur when the dominant noise source arises deeper in the network. In our case, a large fraction of the noise often arises earlier, implying that signal and noise will still be more aligned. Although we do not have a normative explanation as to whether the allocation of noise earlier in the network arises due to some advantage related to information, or whether it occurs due to mechanistic constraints (e.g., photoreceptors having noise in the biochemical transduction cascade), we will improve our explanation of such co-alignment in the final paper. And as stated in the discussion section, one interesting future direction is to better understand under what circumstances such alignment would happen.
4. Yes, Fig. 5A is modified from the Averbeck and Pouget review, cited in the text, and we will also add the citation in the caption in the final version of the paper.

---

Rebuttal Comment 1.1: Comment: Thanks for your responses to my comments & Qs, and additional analyses. The model explaining retinal noise is already interesting by itself.
However, I still believe additional work needs to be done (more controls) to dive into _why_ it works, and whether this particular 2-step optimization procedure's fit could be reproduced by a different noise optimization procedure that might lead to a different conclusion. After reading the other reviews, I believe what I'm getting at is related to other reviewers' comments on what feels like many ad hoc choices in this study, and the "hodge podge" of optimizations. That being said, I do like the main message of the paper (noise vs signal corrs of retina), and think it's a noteworthy result about an outstanding question in the field. So I will maintain my score of 6, but am inclined to raise my confidence from 3 -> 4 to reaffirm that this would be of interest to others in the field.

---

Reply to Comment 1.1.1: Comment: The reason for using a two-step optimization procedure is that the second-order statistics are functions of the entire dataset and cannot be decomposed into terms depending on individual stimuli. Thus it is not practical to optimize second-order statistics along with mean firing rates. Fisher information requires an accurate measurement of both sensitivity and stochasticity, two properties that have never been optimized together before for natural visual scenes for any part of the nervous system. A highly accurate model of sensitivity has already been published after an extensive architecture search that included the number of layers (1 - 3), number of channels (1 - 36), filter sizes, and batch normalization. All of these deterministic properties were then completely fixed.
Given the likelihood that optimizing sensitivity and stochasticity together might be intractable, it is amazing that the simplest possible procedure, namely fixing the deterministic sensitivity and just optimizing the noise inputs for each layer (not even each channel separately) via grid search, along with the variability of the final independent spiking noise, yields an accurate model of second-order statistics. It didn't have to be that simple, but it is remarkable that it was. Therefore our procedure for promoting a model from capturing sensitivity to capturing discriminability, far from being ad hoc, is both simple and accurate: the best of both worlds. Given that stimulus discriminability under natural scenes is the real function of the visual system (not just sensitivity), our work is the first to show that the true function of the retina can be modeled. The success of our method inspires confidence in two-step optimization methods like this in future applications. As we stated in the overall rebuttal, given an accurate model of sensitivity and stochasticity, computing Fisher information is analytic. Therefore, although there may be other untested strategies for optimization, it is unlikely that there will be any solution for discriminability that differs from the current model.
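The trial-shuffle control described in the rebuttal above (destroying noise correlations while preserving each neuron's marginal variability) can be sketched as follows; the correlated responses here are synthetic stand-ins generated from a hypothetical mixing matrix, not the model's output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical correlated trial-by-trial responses (trials x neurons)
n_trials, n_neurons = 5000, 4
L = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.8, 0.6, 0.0, 0.0],
              [0.5, 0.5, 0.7, 0.0],
              [0.3, 0.3, 0.3, 0.85]])
resp = rng.standard_normal((n_trials, n_neurons)) @ L.T

# Shuffle trials independently per neuron: each marginal is preserved
# exactly, but pairwise noise correlations are destroyed
shuffled = np.column_stack(
    [rng.permutation(resp[:, i]) for i in range(n_neurons)]
)

corr_orig = np.corrcoef(resp.T)
corr_shuf = np.corrcoef(shuffled.T)
off = ~np.eye(n_neurons, dtype=bool)
```

Any discriminability measure computed on `shuffled` then serves as the independent-noise null against which the correlated model is compared.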
Summary: The authors present a new framework to understand stimulus discriminability for models of the visual pathway. They derive a Riemannian metric on the representation manifold and, from there, different measures such as the most discriminative/sensitive stimulus directions and their counterparts in the response space. The authors apply the method to a CNN model of the retina, fitted on experimental RGC data from tiger salamander. They find that discriminative stimulus directions are often aligned with stochastic modes and that population codes can help in high firing rate regimes.

Strengths: The authors present a framework which is quite general and can be applied to different models. This could even help to design better experiments and advance the geometrical understanding of neural representations.

Weaknesses: While the theoretical framework is quite strong, the model presentation and the conclusions are not clear in some places. In particular, the conclusions drawn are quite strong, and further investigation (for example of (noise) model dependency) should be carried out. More specifically, I have the following remarks:
- Can the authors elaborate on the derivation of Eq. 1 and how this defines a Riemannian metric on $\{ P(y|x)\}$?
- The model could be described more clearly.
- The optimization of the model is not completely clear to me and some clarification is needed:
  - How is the Poisson loss defined?
  - What is the exact (hierarchical) procedure?
  - Maybe short pseudocode would help here.
- The results seem to be highly dependent on the chosen model and its noise assumptions. Firstly, it is not clear if they still hold if the model (or its noise) is changed. Secondly, the influence of the different noise assumptions could be tested. Thirdly, it is not clear how exactly the $g_i$ are chosen (see optimization).
- While the model seems to be spatiotemporal, the authors focus purely on the analysis of the spatial component.
What role does the temporal part play, and can it be analyzed in the same framework?

Technical Quality: 3 good
Clarity: 2 fair

Questions for Authors:
- How does the model compare to a Bayesian network, in which noise would be (a priori) independent (but systematically optimized) for each processing unit?
- How does the model perform on other performance measures? Even simple metrics such as reconstruction error, mean firing rate, etc., could help to judge the results.
- How does the model compare to a simple baseline model (e.g., LNP)?
- Are the MDS robust across different models?
- Is there a way (or is it even meaningful) to compare the MDS to the RFs of individual cells?
- What does the MDR (in Fig 4a) for individual cell-type channels look like? Is it interesting, and can it be linked to RFs?
- How do MDRs compare to other dimensionality reduction methods? For example, does it make sense to do PCA (or fancier methods) on the neural data, push it forward to the stimulus space, and compare to the MDR?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors have addressed some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and helpful suggestions. Weaknesses: 1. The LHS of Eq. 1 is equal to $-2\int P(y|x)\ln[1+\frac{dP(y|x)}{P(y|x)}]dy$. We can add this line to help understand the derivation. The remaining step is just to apply a high-dimensional Taylor expansion to the logarithm function. Note that the first-order term will vanish, so only the second-order term will be left. This is common knowledge in information geometry (see citation [14]). 2. The model architecture is mainly described in the caption of Fig. 2, which is adopted from the architecture in citations [11, 12, 13], and more emphasis in the paper is given to the stochastic part. We can modify this part to make the description clearer and also emphasize more strongly that this work follows from earlier studies. Importantly, the model is a highly successful model that (1) underwent extensive architecture search to fit the mean response of the salamander retina to natural movies; (2) generalizes to correctly predict more than a decade's worth of experiments on artificial stimuli; and (3) has hidden units and computations that match those of the biological retina. We show, remarkably, that this *same* model can also correctly predict stochastic second-order correlations. 3. The Poisson loss is defined as loss(input, target) = input − target · log(input) + log(target!). We used the regular Poisson loss in PyTorch. The detailed procedure of hierarchical clustering is summarized in the supplementary material. We computed the cosine similarity matrix across channels as the affinity matrix and applied the standard procedure with the function implemented in the scikit-learn package. We will define the Poisson loss and improve the description of hierarchical clustering in the final paper. 4. For the first two points, please see the overall rebuttal above.
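The Poisson loss described above can be written out directly; a minimal plain-Python sketch (the function name is ours; it is similar in spirit to PyTorch's `PoissonNLLLoss` with `log_input=False`, except that PyTorch's `full=True` uses a Stirling approximation for the log-factorial term, whereas here it is computed exactly via `lgamma`):

```python
import math

def poisson_nll(rate, target, full=True):
    """Poisson negative log-likelihood for one bin:
    loss = rate - target * log(rate) + log(target!),
    where log(target!) = lgamma(target + 1) is the optional 'full' term
    (constant in the rate, so it does not affect the optimum)."""
    loss = rate - target * math.log(rate)
    if full:
        loss += math.lgamma(target + 1.0)
    return loss
```

As expected for a likelihood-based loss, for a fixed spike count the loss is minimised when the predicted rate equals that count.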
For the third point, we grid-searched $g_i$ to optimize the mean squared errors of noise correlations and stimulus correlations (weighted average), and more details of this can be provided. 5. It is true that the MDS is spatiotemporal. Analyses in Fig. 4b, Fig. 5, and Fig. 6 are conducted with the full spatiotemporal MDS vector. We also computed the temporal components of MDSs but found that the temporal components for different stimuli are very similar, and are also very similar to the temporal components of instantaneous receptive fields. We did not find interesting results in terms of purely temporal components and therefore did not present them in figures given the page limit, but we can briefly describe these temporal components. Questions: 1. A Bayesian network is a probabilistic graphical model, whereas our model is a mechanistically interpretable neural network whose internal units correspond to bipolar and amacrine cells, as shown in citation [12] (Maheswaranathan et al., 2023). Rather than taking a different approach that would be disconnected from the biological implementation, it is important that the full sensitivity and stochasticity can be captured by taking a deterministic model and adding appropriate noise. The linearity and non-linearity in the model correspond to synaptic weighting / current integration and firing / vesicle release, respectively. We can further emphasize the interpretability and mechanistic correspondence of our model in the final paper. 2. We computed the Pearson correlation between model firing rates and recorded firing rates, which ranges from 70% to 80%, the state-of-the-art performance for natural scenes. This is similar to the firing rate performance reported in citations [11, 12] because we are using a similar model architecture, which we can report in the final paper. 3.
The Linear-Nonlinear model and the Generalized Linear Model (GLM) perform poorly under natural scenes compared to a 3-layer CNN model (Maheswaranathan, Niru, et al., Neuron 2023). Although GLMs have been shown to capture noise correlations under white noise stimuli (Pillow et al., 2008), the Linear-Nonlinear-Poisson model by definition cannot capture noise correlations between cells because its noise is independent, which was tested explicitly in Meytlis et al. 2012. Such noise correlations are known to be important for decoding of stimuli under white noise and natural scenes (Ruda, Kiersten, et al. 2020). 4. Our experiments used four animals and we trained one model for each preparation. As stated more fully in the general rebuttal, comparing the results of these models, we believe that our results and conclusions are robust. 5. Since the receptive field is computed by averaging over stimuli, we think it is more meaningful to compare the MDS with instantaneous receptive fields (the gradient of the neural response with respect to the stimulus), which are stimulus-dependent. According to the theorem we proved in the paper, the MDS lies in the subspace spanned by the instantaneous receptive fields of the output neurons. Indeed they often look similar, and sometimes the MDS looks like a superposition of two or three instantaneous receptive fields. 6. One example of the MDR across different cell-type channels as well as the stimulus is shown in Fig. A1 in the attached pdf. One can see that different parts of the stimulus generate the MDR for different cell types, indicating that different cell types signal different regions of the image as conveying the most information. We will add this figure to the paper and briefly discuss the cell-type dependency. 7. The most noisy response directions (MNR) are simply found by PCA, with the result shown in Fig. 5b that MNR and MDR are correlated.
We have not tried more sophisticated dimensionality reduction methods, which would be appropriate in the future to better understand the full structure of the neural manifold, but the relationship between MNR and MDR is highly likely to hold. We can emphasize the equivalence between PCA and the identification of the MNR in the final paper to help understand our method. --- Rebuttal Comment 1.1: Title: Re: Thanks for the clarifications. Comment: I thank the authors for their thorough responses and I appreciate the additional explanations. While the similarities of the CNN to previous work were not obvious to me before (and appropriate explanations & references were missing), the model part becomes much clearer. The authors highlight the mechanistic interpretability of their model, which one could argue about compared to a detailed Hodgkin-Huxley-like model. But while this is not my intention here, a better explanation of the noise-model decisions would be beneficial, and, if possible, of how they can be interpreted mechanistically/biophysically (other than as "synaptic noise" or "refractory period"), as the authors highlight this aspect. --- Reply to Comment 1.1.1: Comment: Thank you for the suggestion; it is a good point to discuss the mechanistic connection of the stochastic part of the model, because it is somewhat different in functional implication from the deterministic aspect discussed in previous publications. The deterministic portion was meant to capture the circuit architecture, thresholds at the presynaptic terminal, and synaptic weighting of the retina in order to ascribe the functional effects of visual computations to individual cell types and synaptic connections, thus avoiding many details present in HH-type models. However, when considering the stochastic model, other mechanisms may become important, and so it is worth drawing these connections.
Gaussian noise added in our model is prior to the threshold, which mechanistically would correspond to voltage-dependent calcium channels in the synaptic terminal. In the first layer, noise will be dominated by the phototransduction cascade (Ala-Laurila & Rieke, 2011) as well as ion channel noise that when combined through addition of membrane currents will tend to a Gaussian distribution in the bipolar cell membrane potential. In the second layer, noise from bipolar cell vesicle release (after the first layer threshold) and ion channel noise in amacrine cells will similarly tend towards Gaussian. The non-monotonic (inverted U-shaped) single cell noise in ganglion cell spiking classically appears in a variance - mean plot for binomial processes such as voltage-dependent ion channel gating, receptor activation or spiking that exceed p > 0.5. We feel it is an interesting conclusion that although there may be a number of biophysical noise sources that differ microscopically from a Gaussian distribution (e.g. Poisson noise in vesicle release), when summed through the circuitry determined by the deterministic model, Gaussian noise becomes the best model, outperforming Poisson noise. Thus the central limit theorem simplifies the model’s optimization.
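The inverted-U variance-mean relationship for binomial processes mentioned above can be checked in a few lines (an illustrative sketch with an arbitrary choice of n = 20 trials; not the authors' analysis code):

```python
# For a binomial process with n trials and success probability p,
# the mean n*p grows monotonically in p, while the variance n*p*(1-p)
# peaks at p = 0.5 and then falls: the non-monotonic (inverted-U)
# variance-mean relationship described for ganglion cell spiking.
n = 20
ps = [i / 100 for i in range(1, 100)]
means = [n * p for p in ps]
variances = [n * p * (1 - p) for p in ps]

# Locate the probability at which the variance peaks.
peak_idx = max(range(len(ps)), key=lambda i: variances[i])
```

The variance peaks at p = 0.5 even though the mean keeps increasing, so processes operating at p > 0.5 sit on the falling branch of the variance-mean curve.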
Rebuttal 1: Rebuttal: Thank you for pointing out the strengths of our work. As stated by two reviewers, a primary contribution of our work is that we have resolved a long-standing debate about the role of noise correlations in the retina under natural visual stimuli using a novel information-geometric framework, one that can also be generalized to other neural systems. More generally, although numerous analyses have been conducted on discriminability for simple stimuli, and deterministic sensitivity under natural scenes has been modeled, our stochastic model is the first able to capture both sensitivity and stochasticity under natural stimuli for any system, thus enabling a broad set of questions about stimulus discriminability that were previously inaccessible. One shared concern of the reviewers is the rationale for our selection of the model, hyperparameters, and optimization method, and whether the scientific results are robust against these selections. Although these are reasonable concerns, some reviewers may not be familiar with the previous work that is the foundation of our current model, which consists of a deterministic part (the CNN and the one-hot layer) and a stochastic part (Gaussian and binomial noise). We will discuss these two parts separately. The CNN architecture and corresponding hyperparameters are adopted from citations [11, 13] and (Maheswaranathan, et al. 2023), including the number of layers, the number of channels, nonlinearities, implementation of convolutional layers, etc. This previous work has not only shown that the current model setting can achieve state-of-the-art performance in fitting the RGC mean firing rates for natural scenes, but also that the model has a clear correspondence with the real retina, given the phenomenology reproduced by the model and the correlation between the activity of model internal units and recorded interneuron responses that the model was never fit to.
Therefore, this CNN is not simply a statistical approach for fitting experimental data, but rather a mechanistically interpretable retinal model that captures the computations, circuits, and representations of the salamander retina. For the stochastic part of the model, which is new to our current work, it is important to note that our optimization did not change the deterministic parameters described above, but only optimized the added noise parameters to capture as fully as possible the properties of second-order retinal stochasticity as a function of stimulus and response. We have attempted a variety of approaches to fitting these second-order statistics, including Poisson noise in intermediate layers, using Gaussian noise in the final layer, neglecting the refractory period, only fitting noise correlations, and so on. We concluded that the current stochastic setting and optimization method fit second-order statistics most accurately and fit different experimental preparations robustly. We understand that our optimization method for different model components may look complicated, but this is because fitting multiple second-order statistics simultaneously across a large natural stimulus set is non-trivial and challenging. To test robustness, we have tested models trained for four experimental preparations with different CNN and noise parameters, different numbers of output channels and cell-to-channel mappings, different types of nonlinearities, different implementations of the convolutional layer, and different noise parameter selection criteria. All results reported in the paper are robust against these variations. Furthermore, the Fisher information matrix only depends on sensitivity and the noise covariance matrix (stochasticity). Sensitivity was computed with the noiseless model, which has been validated extensively by previous work ([11, 13] and Maheswaranathan et al.
2023), whereas the covariance matrix is proportional to the noise correlation matrix, which is captured well by our model as shown in Fig. 3A. Therefore, we claim that our computation of the Fisher information metric is reliable. In the final paper, we will emphasize further the relation between our work and the previous literature, the robustness of our results, and the challenge of fitting experimental data. Pdf: /pdf/52d2f681045dfd5f0fff8ae26b8bf8e814caf2f9.pdf
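The claim that the Fisher information metric depends only on sensitivity and the noise covariance can be illustrated for the Gaussian-noise case; a toy sketch (our own illustration, not the authors' code, with hypothetical helper names), in which the most discriminative stimulus direction (MDS) is the top eigenvector of the metric:

```python
import numpy as np

# For a response model y = f(x) + noise with Gaussian noise covariance
# Sigma, the Fisher information metric is
#   G(x) = J(x)^T Sigma^{-1} J(x),   with J(x) = df/dx (the sensitivity).
def fisher_metric(J, Sigma):
    return J.T @ np.linalg.inv(Sigma) @ J

def most_discriminative_direction(G):
    # Top eigenvector of the symmetric PSD metric (eigh sorts ascending).
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, -1]

# Toy example: two stimulus dimensions, identity sensitivity, but the
# second response channel is 10x less noisy, so the MDS aligns with it.
J = np.eye(2)
Sigma = np.diag([1.0, 0.1])
G = fisher_metric(J, Sigma)
mds = most_discriminative_direction(G)
```

This makes concrete why validating the noiseless sensitivity and the noise correlation matrix separately is enough to trust the resulting metric.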
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Learning Rate Free Sampling in Constrained Domains
Accept (poster)
Summary: In this paper the authors first introduce a unified view of some existing sampling algorithms for constrained spaces by exploiting the notion of "mirrored optimisation" from standard convex optimisation in the setting of optimisation on probability spaces. The problem of sampling is first converted into an optimisation problem through standard arguments of Wasserstein Gradient Flows (WGF) on $\mathbb{R}^d$: 1) consider a minimisation problem $\pi = \mathrm{argmin}_{\mu} \mathcal{F}(\mu)$ for some functional $\mathcal{F}$ and target probability measure $\pi$, 2) find a continuous process which transports from some initial $\eta_0$ to the target $\pi$ via the continuity equation, and 3) discretise the initial distribution $\mu_0$ into some $N$ particles which, when transported along the trajectory $\mu_t$, produce, under suitable assumptions, samples from the target $\pi$, as desired. To generalise this to possibly constrained spaces $\mathcal{X} \subset \mathbb{R}^d$, the authors then introduce *Mirrored* Wasserstein Gradient Flows (MWGF) by using the idea of a "mirror map" from convex optimisation. In particular, they use the mirror map to define a bijective transformation from the space of probability measures on the constrained space $\mathcal{P}_2(\mathcal{X})$ to probability measures on the unconstrained space $\mathcal{P}_2(\mathbb{R}^d)$. To make the above construction practically useful, one needs to also discretise the resulting trajectories wrt. time, which then requires a choice of stepsize. But equipped with the above view, the authors make use of previous work on "coin sampling", an idea which is based on "coin betting" from the convex optimisation literature, to make an explicit choice of stepsize no longer necessary, resulting in a family of particle-based learning rate free sampling methods for constrained domains. Strengths: There are two main points of the paper that are novel: 1.
The framework of "Mirrored Wasserstein Gradient Flows", though not a huge step from the unconstrained "Wasserstein Gradient Flows", generalises a few previously seen mirrored sampling methods and allows the authors to introduce new mirrored versions of other existing methods for unconstrained spaces. 2. The combination of "Mirrored Wasserstein Gradient Flows" and "coin sampling", allowing the authors to obtain learning-rate free particle-based sampling methods. (1) is, as far as I can tell, of greatest novelty. Even though instances of this family have previously been seen in the literature, as pointed out by the authors, putting these methods into a more general framework can lead to both improved theoretical understanding of the algorithms and new practically useful methods, the latter of which the authors nicely demonstrate by the generalisation of existing unconstrained methods to constrained spaces. The authors nicely demonstrate through empirical results that the introduced methods have empirical advantages over existing methods; in particular, the learning-rate free constrained methods perform similarly to their counterparts with finely tuned stepsizes. Given how sensitive some of these algorithms can be to the stepsize, this seems like a useful improvement for practical applications of these methods. The paper is also very nice to read and not too difficult to follow. The authors also seem to have done a good job mentioning existing works. Weaknesses: One aspect that might be worth a little bit of questioning is the novelty of the work. As mentioned, the novelty comes from 1) the introduction of a framework capturing several existing mirrored sampling approaches, and 2) the application of coin sampling to the resulting framework. Having seen previous works on mirrored particle-based sampling methods in addition to the work on coin sampling, neither of these might seem all too surprising.
With that being said, the authors themselves demonstrate the utility of this formulation by the introduction of two new mirrored sampling methods, and the practical utility of (2) is clearly demonstrated in the empirical section; in total, I think this makes this work more than sufficiently novel. Another aspect is some lack of clarity regarding the utility of the mirror map in the introduction of the MWGF in Section 3. Going from the constrained WGF to the unconstrained MWGF really only uses the bijectivity property of the mirror map $\nabla \phi$; that is, we could easily "generalise" this framework further by just replacing $\nabla \phi$ with *any* bijective map $\psi: \mathcal{X} \to \mathbb{R}^d$, which is commonly what is done under naive application of more classical sampling methods such as HMC to constrained problems (with the additional constraint of differentiability so the push-forward is computable). I therefore find it somewhat non-obvious as to why the "mirroring" approach is preferable to what I describe above. It seems to me that previous works on mirrored sampling are generally motivated by improved convergence rates in the constrained setting as in Hsieh (2018) and Zhang (2020), which is not going to be the case for any bijective transformation. But in this work there is no mention of improved convergence rates of the mirrored versions introduced. I can see why the mirroring is important for the coin sampling, as the proof in Appendix C makes use of the fact that the push-forward of a strongly log-concave $\pi$ under the mirror map $\nabla \phi$ is also strongly log-concave, but it's not so clear why this is imposed already in the MWGF construction.
I therefore suspect the authors have taken this less general presentation for the sake of readability, which is completely sensible, but then I'd personally appreciate a minor remark in the main text explaining this and mentioning somewhat clearly why the "restriction" to mirror maps is necessary for the coin sampling. For the empirical results, one minor weakness is that, as far as I can tell, only a single number of particles is used in each experiment, making one wonder if there is a reason for that choice, or if indeed the observed results generalise nicely for different choices of the number of particles. Even though I'd expect the results to generalise, it would be nice to see a few different values for the number of particles for each experiment rather than one. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As far as I can tell, every experiment is performed using a fixed number of particles ($N = 50$ for some, and $N = 100$ for others). Is this a particular choice, and if so, are there reasons for it, other than computational concerns? Though I would suspect the results to generalise, it would be comforting to see results for a few different choices of $N$. - From what I can tell, previous usages of mirror maps in sampling are generally motivated by improving convergence wrt. certain sampler parameters, e.g. stepsize. Do the two algorithms which correspond to mirrored versions of Laplacian-adjusted Wasserstein gradient descent and kernel Stein discrepancy descent also possess similar properties? Or is this not something that has been explored yet? (From a quick glance at the appendix, it seems the results are mainly related to continuous-time convergence.) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are two main limitations with the method in its current form: a) since much of the work is based on coin sampling for unconstrained domains, this work suffers from the same ailments, i.e. theoretical results establishing convergence without somewhat strict and non-standard assumptions (which cannot be easily checked in practice) are not yet addressed, and b) any mirrored sampling algorithm requires the availability of a mirror map, and hence so does this work. These limitations are explicitly mentioned by the authors, which is appreciated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
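For readers unfamiliar with mirror maps: a standard example for the positive orthant is the negative-entropy map, whose gradient is a smooth bijection onto $\mathbb{R}^d$. A minimal sketch (illustrative only, not from the paper):

```python
import numpy as np

# Mirror map phi(x) = sum_i x_i log x_i on the positive orthant (0, inf)^d.
# Its gradient, (nabla phi)(x)_i = log(x_i) + 1, is a bijection from the
# constrained (primal) space onto the unconstrained (dual) space R^d,
# which is what lets a constrained sampling problem be pushed forward
# to an unconstrained one.
def to_dual(x):
    return np.log(x) + 1.0      # nabla phi

def to_primal(y):
    return np.exp(y - 1.0)      # inverse map (gradient of the conjugate)

x = np.array([0.2, 1.5, 3.0])   # a point in the constrained space
y = to_dual(x)                  # unconstrained dual coordinates
```

Any point of $\mathbb{R}^d$ maps back into the constraint set, so a sampler run entirely in dual coordinates automatically respects the constraint.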
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thorough engagement with our work and their constructive feedback. We provide a detailed point by point response to their comments below. --- **Weaknesses** **One aspect that might be worth a little bit of questioning is the novelty of the work...** Many thanks for these considered and thoughtful remarks. We would broadly agree that, given a close familiarity with the constrained sampling literature, as well as the recent work in [1], some of the results in this paper may not be entirely surprising. Nonetheless, as acknowledged by the reviewer, we would highlight several novel contributions. We introduce the MWGF framework, which elegantly captures many existing constrained sampling approaches, providing a unifying framework for their analysis, as well as allowing us to derive new sampling schemes and analyse their properties. This framework also provides the basis for a principled extension of coin sampling ideas to the constrained setting, resulting in mirrored coin sampling. The result is a highly practical and easy-to-implement algorithm, which consistently obtains state-of-the-art performance. Although not based on the MWGF framework, we also introduce another learning-rate free algorithm, Coin MIED, extending ideas in [2]. Once again, this algorithm achieves excellent performance in empirical testing. **Another aspect is some lack of clarity regarding the utility of the mirror map...** Thanks for raising this point. The reviewer is correct that, for many of the results in App. B, it would be possible to replace the mirror map by a more general bijective map, up to some minor modifications. As the reviewer suggests, our choice is largely one made out of convenience, as well as to aid comparison with existing results. In particular, adopting this formulation makes it easier to contextualise the results in App. B. 
For example, some of the results we obtain for mirrored LAWGD hold under identical assumptions to those previously used to analyse MSVGD [4]. This aside, we should note that the use of a mirror map does allow us to strengthen several of the results in App. B. In particular, by combining the arguments in [3, App. C], and the results in App. B, we can obtain results on the existence of "good mirror maps" which guarantee the convergence of, e.g., MSVGD or MLAWGD to the target at a particular rate, in the spirit of the result for the mirrored Langevin algorithm in [3, Theorem 3], and our own result for mirrored coin sampling [Proposition 16, App. C]. As an example, let us show how to extend [5, Theorem 1] (Theorem 1 in App. B.1) in this way. Suppose we replace [5, Assumption 4] by the assumption that $\pi$ is strongly log-concave. Under this assumption, [3, App. C] guarantees the existence of a mirror map such that the dual target is also strongly log-concave. By the Bakry-Emery criterion, it follows that the $T_p$ inequality is satisfied by the dual target with $p=2$. Thus, [5, Assumption 4] is satisfied and so [5, Theorem 1] holds. In other words, assuming $\pi$ is strongly log-concave, there exists a mirror map such that MSVGD converges to the target at the rate in [5, Theorem 1]. While the assumption that $\pi$ is strongly log-concave is strictly stronger than the original assumption [5, Assumption 4], it is much more tangible, as it relates to the target $\pi$, rather than the dual target $\nu$. Using similar arguments, we can also extend the other results given in App. B (e.g., Prop. 5 - 7 in App. B.1, Prop. 10 in App. B.2, Prop. 13 - 14 in App. B.3). Given this, we have now added several additional remarks in App. B, summarising how our results can be extended when restricting to a mirror map. As suggested by the reviewer, we have also added an additional remark in Sec 3.2, clarifying our use of a mirror map.
**For the empirical results, one minor weakness is that...only a single number of particles...** Thanks for this feedback. The results do generalise across different numbers of particles, but we agree it would be useful to include a comparison in the paper. We have now added an appendix comparing results for different values of $N$, across several experiments. We include some illustrative results in the attached PDF. In Figs 1(a)-(d), 2(a)-(d), 3(a)-(d), we plot the energy distance as a function of the LR using several different numbers of particles, for the sparse Dirichlet posterior in [6] (Sec 6.1), the quadratic target in [7] (Sec 6.1), and the two-dimensional post-selection inference target [8] (Sec 6.2). In addition, in Figs 1(e), 2(e), 3(e), we plot the energy distance as a function of the number of particles, and for each of the three experiments above. --- **Questions** **1. As far as I can tell, every experiment is performed...** Please refer to our previous response and the attached PDF. **2. From what I can tell, previous usages of mirror maps in sampling...** Thanks for this question. Other than as outlined in our earlier response, we have not yet explored this issue as it relates to mirrored LAWGD and mirrored KSDD. As the reviewer notes, the theoretical results provided in App. B are generally restricted to the continuous time case. However, it would certainly be interesting to explore the discrete-time properties of these methods further. Given the considerable length of this paper, this is something we feel is best left to future work. Nonetheless, we will add a remark in Sec. 3.2 on this point. --- **References** [1] L. Sharrock et al. Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. ICML 2023. [2] L. Li, et al. Sampling with Mollified Interaction Energy Descent. ICLR 2023. [3] Y-P Hsieh et al. Mirrored Langevin Dynamics. NeurIPS 2018. [4] J. Shi et al. Sampling with Mirrored Stein Operators. ICLR 2022. [5] L. Sun et al.
A Note on the Convergence of Mirrored Stein Variational Gradient Descent under (L0,L1) Smoothness Condition. arXiv, 2022. --- Rebuttal Comment 1.1: Comment: I very much appreciate the authors thorough response, and in particular appreciate the discussion regarding extensions and the additional empirical experiments I requested.
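The coin-betting idea that removes the learning rate can be sketched in one dimension with the Krichevsky-Trofimov (KT) bettor (an illustrative sketch in the style of coin-betting optimisation; this is not the paper's constrained algorithm, and the function name is ours):

```python
# Coin betting (KT bettor) for minimising f(x) = |x - 1| with no learning
# rate: each iterate is a bet of a fraction of the accumulated "wealth",
# where the fraction is the running average of past negated (sub)gradients.
# Gradients are assumed bounded by 1.
def kt_coin_betting(grad, T, eps=1.0):
    wealth, grad_sum = eps, 0.0
    x, total = 0.0, 0.0
    for t in range(1, T + 1):
        total += x                          # accumulate the played iterate
        g = grad(x)                         # subgradient at the current bet
        wealth -= g * x                     # wealth after seeing the outcome
        grad_sum -= g
        x = grad_sum / (t + 1) * wealth     # KT bet for the next round
    return total / T                        # average iterate

subgrad = lambda x: 1.0 if x > 1.0 else (-1.0 if x < 1.0 else 0.0)
x_bar = kt_coin_betting(subgrad, T=5000)
```

The average iterate approaches the minimiser at 1 without any stepsize being specified; the betting fraction $|{\rm grad\_sum}|/(t+1) < 1$ also keeps the wealth positive.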
Summary: This paper studies constrained sampling with algorithms such as Langevin dynamics or Stein variational gradient descent. The authors provide a continuous-time framework which can be specialised to various other algorithms as well. The proposed modification of the classical algorithms does not require a well-specified learning rate; instead, by utilising a betting scheme, the algorithm adjusts its learning rate to maintain good convergence. A versatile set of numerical examples is provided to support the claim that the iterative methods above converge faster, and hence are closer to the true distribution, than the same instances of the algorithm with badly chosen learning rates. Strengths: The authors focus on a problem which is often omitted in the development of approximate inference algorithms, namely the choice of learning rate. They focus on constrained settings where they work with continuous versions of mirrored Langevin dynamics and Stein variational gradient descent, and propose discretized variants of these algorithms. I find these problems of very high practical relevance, having encountered them myself. The authors provide a very compelling experimental section where they show the benefit of their betting scheme in improving the convergence rate and/or converged solution. The benchmark problems are very versatile and compare with the right baselines. Weaknesses: I am not familiar with related work, but it seems a lot of the results are standard from the sampling literature, with the only novelty being the coin betting, which itself is inspired by works in the convex optimization literature. There is no theoretical analysis, and not even a conjecture about whether this works or under what conditions it might work, but this might be a really difficult problem. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Can you give an example for the operator P from 132? - Are there other betting strategies which are worth mentioning?
- Does SVMD fall into your framework too? Afaik SVMD is not a dual-only algorithm like your proposed Alg. 1? If not, why? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations and, if applicable, potential negative societal impact of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
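For context on the SVGD-style particle updates discussed in this thread, here is a minimal unconstrained SVGD sketch in one dimension with a fixed kernel bandwidth (illustrative only; not the paper's mirrored or coin-sampling algorithm):

```python
import numpy as np

def svgd_step(x, score, h=1.0, step=0.1):
    # SVGD update: phi(x_i) = (1/N) sum_j [ k(x_j, x_i) * score(x_j)
    #                                       + d/dx_j k(x_j, x_i) ],
    # i.e. a kernel-weighted drift towards high density plus a repulsive
    # term that keeps the particles spread out.
    diff = x[:, None] - x[None, :]           # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h))           # RBF kernel (symmetric)
    drift = K @ score(x)                     # attraction term
    repulsion = (diff * K).sum(axis=1) / h   # sum_j d/dx_j k(x_j, x_i)
    return x + step * (drift + repulsion) / len(x)

# Toy target: standard normal, score(x) = d/dx log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(3.0, 0.5, size=50)            # particles start far away
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
```

The particles drift from their initialisation at 3 towards the target and spread out to roughly unit scale; the mirrored variants run an update of this type in the dual space.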
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their positive remarks, as well as their insightful feedback. We provide a detailed point by point response to their comments below. --- **Weaknesses** **I am not familiar with related work, but it seems...** Thanks for this remark. Although some of the results in App. B do exist in the sampling literature, we would respectfully disagree that the only novelty in this paper is the introduction of learning-rate free algorithms based on coin betting. In particular, we also introduce a general formulation of constrained sampling as a mirrored optimisation problem, and the notion of a MWGF. This provides a unifying framework for existing approaches, and allows us to obtain new constrained sampling algorithms (e.g., MKSDD, MLAWGD) and analyse their convergence. This framework also provides a principled basis from which we derive mirrored coin sampling. In any case, we would argue that introducing several new learning-rate free constrained sampling algorithms is, in itself, a significant and novel contribution, particularly given their impressive numerical performance compared to existing methods. **There is no theoretical analysis...** The reviewer is correct that a theoretical analysis of coin sampling (under standard conditions) is a very difficult question, even in the unconstrained case. We include several remarks on this in Sec 4.2. It is worth emphasising that we do, in fact, provide a discussion of how to obtain a convergence rate for mirrored coin sampling, under an appropriate extension of the conditions in [1], in App. C. --- **Questions** **1. Can you give an example for operator P...** We could not locate a reference to $P$ in 132. If the question refers to $P_{\mu,k}$ in 133, then this is simply the integral operator $P_{\mu,k}f = \int k(x,\cdot) f(x)\mathrm{d}\mu(x)$. This only differs from $S_{\mu,k}$ in its range. **2. Are there other betting strategies...** Thanks for raising this point. 
While, in this paper, we consider only the KT betting strategy, there are other strategies (e.g., [2]). We now include a remark pointing to this and other relevant references, leaving a more detailed investigation to future work. **3. Does SVMD fall into your framework too?...** Thanks for raising this question. It is currently not clear that SVMD naturally fits into our framework. As outlined in the paper, MSVGD can naturally be viewed as a kernelised version of the WGF of the KL w.r.t. the dual target in the dual space. When only a single particle is used, MSVGD reduces to gradient descent w.r.t. the negative log-density of the dual target [3]. In this sense, MSVGD can be viewed as the SVGD-analogue of the mirrored Langevin dynamics in [4]. In particular, [4] proposes running the unadjusted Langevin algorithm w.r.t. the dual target. Thus, if one removes the noise, the scheme in [4] also reduces to gradient descent w.r.t. the negative log-density of the dual target. On the other hand, we would argue that SVMD is best viewed as a kernelised approximation to the Wasserstein mirror flow w.r.t. the (primal) target [5, App. C]. In particular, with a single particle, SVMD reduces to mirror descent w.r.t. the negative log-density of the (primal) target [3, Sec 4.3]. In this sense, SVMD can be viewed as the SVGD-analogue of the mirror Langevin diffusion in [5]. Indeed, when the noise is removed in [5], this scheme also reduces to mirror descent w.r.t. the negative log-density of the (primal) target. Given this interpretation, a natural question is then whether one can obtain a coin-sampling analogue of SVMD. In principle, one can certainly write down and implement "Coin SVMD". However, given that SVMD does not naturally fit into the MWGF framework, as outlined above, it is unclear that this is a particularly well-justified approach. 
Generally speaking, coin sampling analogues of ParVI algorithms perform well when coin sampling is used in place of a time-discretisation of a standard WGF. In such cases, when using a single particle, existing ParVI algorithms reduce to gradient descent, and coin sampling reduces to the coin betting algorithm [6], in either case w.r.t the negative log-density of the target. As outlined above, this is essentially the framework for MSVGD and Coin MSVGD, although now the updates take place in the dual space w.r.t. the dual target. In particular, if one uses a single particle, MSVGD and Coin MSVGD reduce to gradient descent or coin betting w.r.t. the negative log-density of the dual target. On the other hand, it is not clear that coin sampling can be used in cases where existing algorithms correspond to time-discretisations of `non-standard' WGFs, e.g., the Wasserstein mirror flow [5, App. C]. As noted above, in these cases, when one uses a single particle (SVMD), or removes the noise term (mirror Langevin diffusion), these algorithms reduce to mirror descent w.r.t. the negative log-density of the (primal) target. This hypothesis is partially based on some preliminary numerical experiments. Indeed, after implementing "Coin SVMD", we observed numerical instabilities across various experiments. This is in contrast to Coin MSVGD, which performed consistently across experiments, and was derived in a principled way based on the MWGF framework. While beyond the scope of this work, we feel that a detailed investigation of these issues would be an interesting avenue for future work. --- **References** [1] L. Sharrock et al. Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. ICML 2023. [2] A. Cutkosky et al. Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. COLT 2018. [3] J. Shi et al. Sampling with Mirrored Stein Operators. ICLR 2022. [4] Y-P Hsieh et al. Mirrored Langevin Dynamics. NeurIPS 2018. [5] K. Ahn et al. 
Efficient constrained sampling via the mirror-Langevin algorithm. NeurIPS 2021. [6] F. Orabona et al. Coin Betting and Parameter-Free Online Learning. NeurIPS 2016. --- Rebuttal Comment 1.1: Title: response Comment: Thank you for your response. As you can see from my score, this is not my field, but your paper was understandable, clearly written, and the problem studied was indeed important for which not many off-the-shelf solutions are available. For other aspects I let other reviewers decide.
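To make the coin betting mechanism referenced throughout the rebuttal above concrete, here is a minimal, hypothetical sketch of learning-rate-free minimisation via the Krichevsky-Trofimov (KT) betting strategy, in the spirit of Orabona et al. [6]. The function name `coin_betting_minimize`, the clipping of coins to $[-1, 1]$, and the use of the averaged iterate are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coin_betting_minimize(grad, x0, n_iters=2000):
    """Hypothetical sketch of learning-rate-free minimisation via the
    Krichevsky-Trofimov (KT) coin betting strategy (cf. Orabona et al. [6]).

    The clipped negative gradient plays the role of a coin outcome c_t; the
    iterate is the bettor's current wager around x0, and no learning rate
    appears anywhere. Clipping and iterate averaging are our assumptions."""
    x0 = np.asarray(x0, dtype=float)
    c_sum = np.zeros_like(x0)   # running sum of coin outcomes
    wealth = 1.0                # betting capital; stays positive while |c| <= 1
    iterates = []
    for t in range(1, n_iters + 1):
        x = x0 + c_sum / t * wealth                    # KT bet
        c = np.clip(-np.asarray(grad(x)), -1.0, 1.0)   # observe the coin
        wealth += float(np.dot(c, x - x0))             # win or lose the bet
        c_sum += c
        iterates.append(x)
    return np.mean(iterates, axis=0)                   # averaged iterate
```

On a simple quadratic, the averaged iterate approaches the minimiser without any step size having been tuned, which is the essential appeal of the coin betting viewpoint: with a single particle, this is the kind of scheme the rebuttal describes Coin MSVGD as reducing to (in the dual space, w.r.t. the dual target).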
Summary: The problem of sampling from unnormalised probability distributions is of central importance to computational statistics and machine learning. The well-known SVGD method appears to break down when applied to constrained targets. Other recent methods share the same limitation, namely, a significant dependence on a suitable choice of learning rate. Hence, this paper mainly focuses on constrained sampling algorithms. The authors propose a suite of particle-based algorithms (coin MSVGD, coin MIED), which are entirely learning rate free. Detailed theoretical description and justification are provided in the paper. Several numerical experiments (simplex targets, post-selection inference, and fairness-constrained Bayesian neural networks) are also conducted to evaluate the performance of the proposed methods; superior or competitive results are obtained when compared to other methods. Strengths: 1. The idea of viewing constrained sampling as a mirrored optimisation problem is interesting. 2. A suite of learning rate free methods is provided, which will benefit researchers from the many disciplines that involve sampling from unnormalised probability distributions. 3. The paper is well written and easy to read. Weaknesses: 1. The analysis of the results is not sufficient. 2. On the second numerical experiment, the advantage of this method is marginal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have some minor questions/comments for the authors: 1. In the result of Section 6.1 (Figure 1), although MSVGD and SVMD are sensitive to the learning rate, SVMD seems to converge to a lower energy distance when using a proper learning rate (e.g., 1e-1). I wonder about the reason for this phenomenon; are there any solutions to achieve similar performance for Coin MSVGD? 2. Also in Figure 1, when the learning rate is higher than 1e-3, Projected SVGD also appears to be learning rate free; does Coin MSVGD have any other advantage over it? 3.
The presentation of Figure 3 is not clear and straightforward enough for comparing the results of the four methods. 4. Move some method background and detailed algorithm description to the Appendix and add more result analysis in the main paper. 5. The proposed method is based on [76] and [56]; it would be better to describe the connections and differences more clearly in the paper. 6. First sentence of the second paragraph in the Introduction: 'While such methods have enjoyed great success in sampling from unconstrained distributions, they typically break down when applied to constrained targets', may need a reference. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have highlighted two limitations of their work, and the limitations are still open problems in the research domain. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their constructive feedback. We provide a detailed point by point response to their comments below. --- **Weaknesses** **Analysis of the results is not enough.** Thanks for this feedback. As noted in Sec 4.2 and discussed in [1], establishing the theoretical properties of coin sampling methods under standard conditions is very challenging, and remains unresolved even in the unconstrained case. This being said, we do actually discuss how to obtain a convergence rate for mirrored coin sampling in App. C, based on an extension of [1]. We also provide a detailed analysis of the convergence properties of MWGFs in continuous-time in App. B. **On the second numerical experiment...** We agree that the advantage of our method in this experiment may seem marginal. We would, however, note the following. First, the optimal learning rate for, e.g., MSVGD, is not known a priori, and must therefore be tuned by hand. This comes at the cost of an additional computational expense. Second, if the learning rate for MSVGD is poorly tuned, Coin MSVGD does lead to a clear advantage (Fig. 9 in App. G.3). Third, empirically, Coin MSVGD seems to converge much faster than MSVGD (Fig. 8 in App G.3), even when using a well tuned learning rate. Although one can obtain comparable results by using a larger learning rate for MSVGD, this can lead to non-convergence (Fig. 8 in App. G.3). These points could be emphasised more clearly in the main text, which we have now addressed. --- **Questions** **1. In the result of Sec 6.1...** Thanks for this question. There is, indeed, a good reason why SVMD compares favourably to (Coin) MSVGD in this example, which we explain below. Before we do so, we should note that Fig. 1a and 1b were labelled in reverse in the original submission (this has now been corrected). With this said, let us consider the results for the quadratic target (Fig. 1a). 
In this case, the target is log-concave, while the dual target is not. In such cases, SVMD is expected to outperform (Coin) MSVGD, as it can exploit log-concavity in the primal space. This is further discussed in App. G.1. Currently, it is unclear whether it is possible to obtain a coin sampling algorithm that can obtain comparable results in this case. While, in principle, one can write down a coin sampling analogue of SVMD, it is not evident that this is a principled approach. Generally, coin sampling analogues of ParVI algorithms perform well when the coin sampling updates are used in place of a time-discretisation of a standard WGF. In such cases, when using a single particle, existing algorithms reduce to gradient descent w.r.t. the negative log-density of the target, and coin sampling reduces to coin betting [2]. This is essentially the framework for MSVGD and Coin MSVGD, even though the updates are taking place in the dual space. In particular, with a single particle, MSVGD and Coin MSVGD reduce to gradient descent or coin betting w.r.t. the negative log-density of the dual target. On the other hand, it is not clear coin sampling can be used when existing algorithms correspond to a `non-standard' WGF, e.g., the Wasserstein mirror flow [3], which yields the mirror Langevin diffusion [3] and SVMD [4]. In these cases, if one uses a single particle (SVMD), or removes the noise (mirror Langevin diffusion), these algorithms reduce to a mirror flow (i.e., Riemannian gradient flow) w.r.t. the negative log-density of the (primal) target. While beyond the scope of this paper, we believe further study of this topic would be a very interesting avenue for future work. **2. Also in Fig. 1, when learning rate is...** As noted in Sec. 6, projected SVGD is just SVGD with a Euclidean projection onto $\mathcal{X}$ after each update. 
As such, projected SVGD does always depend on a learning rate, even though in this example the results for projected SVGD with LRs greater than $1\times 10^{-3}$ may appear to be identical. In terms of other advantages of Coin MSVGD over projected SVGD in this example, we would highlight that Coin MSVGD converges to the target distribution, while projected SVGD fails to converge for any value of the learning rate. **3. The presentation of Fig. 3 is not clear...** Thanks for raising this concern. We have now added additional remarks describing this figure in more detail, which should aid comparison between the various methods. **4. Move some method background...** Thanks for this feedback. Given that coin sampling is a rather new approach, we feel it is important to include a detailed description of our methodology and algorithms in the main paper. This being said, we agree that some readers may appreciate seeing some more results on MWGFs in the main text. As such, we have now included in Sec 3.2 a much more detailed description of the convergence results in App. B. If space allows, we will also include a basic dissipation result for the MWGF (4). **5. The proposed method is based on [76] and [56]...** We also agree that a more explicit delineation between our approaches and [56, 76] would be appreciated. We have therefore added additional remarks in Sections 4.2 and 4.3, clarifying the relationships between Coin MSVGD and Coin SVGD, and between Coin MIED and MIED, respectively. We have also added an additional appendix providing a more detailed overview of MIED and its relationship to other ParVI methods. **6. First sentence of second paragraph in Introduction...** We have added references to [4, 5] after this sentence. --- **References** [1] L. Sharrock et al. Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. ICML 2023. [2] F. Orabona et al. Coin Betting and Parameter-Free Online Learning. NeurIPS 2016. [3] K. Ahn et al.
Efficient constrained sampling via the mirror-Langevin algorithm. NeurIPS 2021. [4] J. Shi et al. Sampling with Mirrored Stein Operators. ICLR 2022. [5] Y.P. Hsieh et al. Mirrored Langevin Dynamics. NeurIPS 2018. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed explanation. The authors have addressed all my questions and have supported their responses with additional experiments. Based on the response and comments from other reviewers, I tend to keep my current evaluation.
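As a concrete illustration of the baseline discussed in the rebuttal above (SVGD with a Euclidean projection onto $\mathcal{X}$ after each update), here is a minimal sketch of projected SVGD with an RBF kernel and a box constraint. The function names, the kernel choice, the bandwidth, and the box projection are illustrative assumptions for a toy setting, not the paper's implementation.

```python
import numpy as np

def rbf_kernel_and_grad(X, h=0.5):
    """Pairwise RBF kernel K[i, j] = exp(-||x_i - x_j||^2 / (2 h^2)) and its
    gradient with respect to x_i (the repulsive term of SVGD)."""
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d): x_i - x_j
    K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    gradK = -diff / h**2 * K[:, :, None]            # grad_{x_i} k(x_i, x_j)
    return K, gradK

def projected_svgd(score, X0, lo, hi, step=1e-2, n_iters=500):
    """Projected SVGD: a standard SVGD update followed by a Euclidean
    projection (here, a simple box clip) onto the constraint set.
    A hypothetical toy sketch, not the paper's implementation."""
    X = X0.copy()
    n = X.shape[0]
    for _ in range(n_iters):
        K, gradK = rbf_kernel_and_grad(X)
        phi = (K @ score(X) + gradK.sum(axis=0)) / n  # attractive + repulsive terms
        X = np.clip(X + step * phi, lo, hi)           # update, then project
    return X
```

Note that `step` is exactly the learning rate under discussion: projected SVGD always requires one, whereas the coin-sampling variants dispense with it.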
Summary: The authors derive a general class of solutions to the constrained sampling problem via the MWGF, which incorporates various existing constrained sampling techniques. They do this by viewing sampling in constrained domains as a mirrored optimization problem on the space of probability measures. The paper presents a set of new particle-based sampling algorithms for constrained domains that are entirely learning rate free, building on the recently introduced coin sampling methodology. The efficacy of these algorithms is demonstrated on various numerical examples. Strengths: Despite the availability of various efficient methods for sampling from unconstrained distributions, which have demonstrated success across numerous applications, the effectiveness of analogous techniques for constrained targets largely relies on selecting an appropriate learning rate. The findings presented in this paper demonstrate that the suggested learning-rate free algorithms achieve comparable performance to finely tuned constrained sampling methods, eliminating the need for hyperparameter tuning. Moreover, the paper thoroughly considers related research and emphasizes the originality and advantages of the proposed approach. Weaknesses: The authors do not address the computational complexity of their methods, which would be beneficial for comparing them with other approaches. Additionally, it is somewhat cumbersome to switch back and forth between the main body and the appendix to comprehend the results in Section 6. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Line 74: Is the assumption of "uniquely minimized" generally valid? 2) Line 197: typo, "using" instead of "sing" 3) Line 256: Why did you choose the IMQ kernel in this case? 4) Figure 4: Are the legends in (b) and (c) accurate? Shouldn't the time increase? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations of their approach in a dedicated paragraph (see Section 7). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive comments and constructive feedback. We provide a point-by-point response to their comments below. --- **Weaknesses** **The authors do not address the computational complexity...** Thanks for this comment. We do, in fact, provide a discussion of the computational complexity of our algorithm in App. E (lines 941 - 948). To summarise, the time complexity of both Coin MSVGD and Coin MIED is $O(N^2)$ per iteration, which is identical to MSVGD and MIED. In experiments, we also found that our algorithms took essentially the same time as MSVGD and MIED. However, we acknowledge that this discussion wasn't signposted in the main body of the paper, so it was easy to miss. We will update this in the camera-ready version. **It is somewhat cumbersome to switch back and forth...** Thanks for highlighting this. Using the additional page allowed in the camera-ready version, we plan to move several figures from the appendices to the main body of the paper, which should help to improve readability. Using this additional page, we will also add additional detail wherever we have referred to results given in the appendices, ensuring as far as possible to make the description of any results in the appendices self-contained. --- **Questions** **1. Line 74: Is the assumption of "uniquely minimized" generally valid?** This assumption is indeed valid for the dissimilarity functionals considered in this paper, as well as those considered more widely in the literature. In particular, this is true for $\mathcal{F}(\mu) = \mathrm{KL}(\mu|\pi)$, as well as for other $f$-divergences such as the $\chi^2$-divergence. Under mild assumptions (e.g., use of a characteristic kernel), this is also the case for other dissimilarity functionals whose Wasserstein gradient flows have been studied in the literature, including the MMD [1] and the KSD [2].
We refer to, e.g., Theorem 5 in [3] for (mild) assumptions under which the MMD uniquely vanishes at $\mu=\pi$. **2. Line 197: typo, "using" instead of "sing".** Thanks for spotting this typo; it has now been corrected. **3. Line 256: Why did you choose the IMQ kernel in this case?** We use the IMQ kernel here due to its convergence control properties. In particular, the IMQ kernel is known to metrize weak convergence [4, Theorem 8]. While this was previously mentioned in the appendix on 'Additional Numerical Details' (App. F), we will make sure to also mention this in the main text. **4. Fig. 4: Are the legends in (b) and (c) accurate? Shouldn't the time increase?** The legends in Fig. 4(b) and 4(c) are, indeed, accurate. It is worth noting that $t$ denotes the value of the fairness constraint here, rather than the time. We follow the standard notation here, see, e.g., [5], although we acknowledge that this notation may be slightly confusing. We have now added an additional remark in the caption to clarify the meaning of $t$. --- **References** [1] Michael Arbel et al. Maximum Mean Discrepancy Gradient Flow. NeurIPS 2019. [2] Anna Korba et al. Kernel Stein Discrepancy Descent. ICML 2021. [3] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, Alexander Smola. A Kernel Two-Sample Test. JMLR, 13, 2012. [4] Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. ICML 2017. [5] Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, and Justin Solomon. Sampling with Mollified Interaction Energy Descent. ICLR 2023.
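To accompany the answer on kernel choice above, here is a minimal sketch of the inverse multiquadric (IMQ) kernel in a Gorham-Mackey-style parameterisation; the function name and the defaults `c=1.0`, `beta=-0.5` are illustrative assumptions.

```python
import numpy as np

def imq_kernel(x, y, c=1.0, beta=-0.5):
    """Inverse multiquadric (IMQ) kernel k(x, y) = (c^2 + ||x - y||^2)^beta.

    For beta in (-1, 0) the kernel is positive definite, and with beta = -1/2
    the associated Stein discrepancy is known to control weak convergence
    (Gorham & Mackey [4]). Parameter defaults here are illustrative."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float((c**2 + np.sum((x - y) ** 2)) ** beta)
```

Unlike, e.g., a Gaussian kernel, the IMQ kernel has heavy (polynomially decaying) tails, which is what underlies its convergence-control property.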
Rebuttal 1: Rebuttal: Many thanks to all of the reviewers for their thorough engagement with our work, and for the many constructive comments. We have made several revisions to our original submission in response to the reviewers' feedback, which we feel have further improved our paper. Full details are provided in our individual responses to each of the reviewers below. At the request of reviewer 6R7u, we have also performed some additional numerical experiments, verifying that our results generalise for different numbers of particles. We report the results of these experiments in an additional appendix in the latest revision of our paper. We also provide a sample of these results in the attached PDF. Pdf: /pdf/955e93002701d713bbf61b6b0c247cda5f553681.pdf
NeurIPS_2023_submissions_huggingface
2023
CoDet: Co-occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection
Accept (poster)
Summary: The paper presents CoDet, a novel approach for region-word alignment in vision-language representations for open-vocabulary object detection. Unlike existing methods that rely on pre-trained or self-trained models, CoDet reformulates alignment as a co-occurring object discovery problem. By grouping images that mention the same concept, CoDet establishes correspondences between shared concepts and common objects through co-occurrence, enabling it to leverage region-region correspondences across images for object discovery and open-vocabulary supervision. Experimental results demonstrate that CoDet consistently outperforms state-of-the-art methods in detecting novel objects, while also showcasing scalability with visual representations, indicating its potential for benefiting from advancements in visual foundation models. Strengths: + In terms of originality, the authors propose a novel approach, CoDet, which reformulates region-word alignment as a co-occurring object discovery problem, diverging from the reliance on pre-trained or self-trained vision-language models. This brings a fresh perspective to the field. + The quality of the paper is commendable, as the authors provide a clear description of the proposed approach, detailing how CoDet groups images based on shared concepts, leverages co-occurrence for region-region correspondences, and utilizes open-vocabulary supervision. The experimental results consistently outperform state-of-the-art methods, validating the effectiveness of CoDet in detecting novel objects. + The clarity of the paper is also notable, as the authors provide a concise and coherent presentation of their work, making it easily understandable to readers. + The significance of the paper lies in its potential impact on the vision-language representation field.
By addressing the limitations of existing methods, CoDet opens up new possibilities for reliable region-word alignment and object-level vision-language representations, which can benefit various applications such as open-vocabulary object detection. Weaknesses: - The paper would benefit from a more explicit discussion of its differences and a comparative analysis with related work [a]. By highlighting the distinctions between the proposed CoDet approach and the existing method [a], the authors can provide a clearer understanding of the unique contributions and advantages of their approach. [a] Aligning Bag of Regions for Open-Vocabulary Object Detection. CVPR, 2023. - The absence of VLDet in Tables 2 and 3 raises concerns and it is important for the authors to provide an explanation for its omission. Including VLDet in the comparative analysis is crucial to assess its performance against the proposed CoDet approach and other existing methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses mentioned above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer e66K, Thank you for your appreciation of our approach and constructive comments. We address your comments below. 1. Differences between Aligning Bag of Regions for Open-Vocabulary Object Detection. - Thanks for pointing out this missing related work. We will add a discussion of BARON [1] to the Related Work section of our paper. - BARON and CoDet might look similar as both works take advantage of "co-occurrence". However, the meaning of "co-occurrence" in CoDet is quite different from that in BARON. In CoDet, "co-occurrence" refers to the existence of the same object class across different images, while in BARON, "co-occurrence" refers to the existence of different object classes within the same image. Such a difference is further reflected in the motivation. BARON proposes to align "bag of regions" with "bag of concepts", while individual region-word alignment is not a main focus. In contrast, CoDet is making orthogonal efforts to discover single region-word pairs. - Moreover, BARON is a distillation-based method, which still relies on a teacher VLM and inherits constraints from the image-level pre-trained VLM. CoDet, however, overcomes this dependency by introducing a new region-word alignment mechanism. - CoDet performs on par with BARON on the OV-LVIS benchmark (22.7 vs. 22.7 mAPr), and outperforms BARON in transfer detection from OV-LVIS to COCO and Objects365. The transfer detection results are presented below. Notably, the results of CoDet on Objects365 are different from those reported in the paper because here we use Objects365 v2 val for evaluation instead of Objects365 v1, to remain consistent with BARON. We directly cite the numbers reported in BARON.
| Dataset | | COCO | | | Obj365 | |
|:------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Method | AP | AP$_{50}$ | AP$_{75}$ | AP | AP$_{50}$ | AP$_{75}$ |
| BARON | 36.2 | 55.7 | 39.1 | 13.6 | **21.0** | 14.5 |
| CoDet | **38.5** | **55.8** | **41.5** | **14.5** | 20.6 | **15.7** |

2. The absence of VLDet in Tables 2 and 3. - Thanks for your suggestions. We will add VLDet to Tables 2 & 3 for comparison. - We primarily benchmarked and analyzed our method on OV-LVIS in the paper as we believe OV-LVIS results are more representative than OV-COCO results because: - OV-LVIS has many more novel categories for evaluation (337 in OV-LVIS vs. 17 in OV-COCO) - The OV-LVIS setting uses large-scale web-crawled caption data for training, which better simulates real-world practice (the OV-COCO setting only uses 120K human-annotated caption data for training). - It is worth pointing out that VLDet has superior performance over CoDet on OV-COCO (32.0 vs. 30.6 AP50 on novel classes). We have already presented an analysis of potential causes in our paper. This could be attributed to: - The human-curated bias in COCO Caption data distribution. As analysed in lines 317-323, concepts in COCO Caption images are highly concentrated, which incurs many hard negatives for identifying co-occurring objects. But we believe this would not harm the generality of our method as we show CoDet works well on web-crawled data, which is a more practical setting and can easily be scaled up. - VLDet has more aggressive trade-offs between novel AP and base AP on OV-COCO. CoDet actually has a higher base AP (52.3 vs. 50.6) and overall AP (46.6 vs. 45.8) than VLDet. This trade-off typically happens in methods trained on the detection and caption data simultaneously, e.g., Detic, VLDet, CoDet. - We did not include VLDet for comparison in Table 3 because the original paper did not report their transfer detection results on OV-LVIS to Objects365 and COCO.
Here for comparison, we use the officially released checkpoint of VLDet for evaluation. - It can be seen that CoDet outperforms VLDet on large-scale Objects365 v2 transfer detection, showing its higher capability for detecting a wide range of novel concepts. However, an interesting finding is that VLDet has stronger transfer detection results on COCO, compared with CoDet and BARON. This is not as expected, given that VLDet has a slightly lower mAPr on OV-LVIS than CoDet and BARON. It seems VLDet has unique advantages on common object detection. The deep reason behind it is still unclear and worth further study.

| Dataset | | COCO | | | Obj365 | |
|:------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Method | AP | AP$_{50}$ | AP$_{75}$ | AP | AP$_{50}$ | AP$_{75}$ |
| VLDet | **39.7** | **56.9** | **43.2** | 12.8 | 18.0 | 13.9 |
| CoDet | 38.5 | 55.8 | 41.5 | **14.5** | **20.6** | **15.7** |

&nbsp; &nbsp; References: [1] Aligning Bag of Regions for Open-Vocabulary Object Detection
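The co-occurrence mechanism discussed in this rebuttal can be illustrated with a toy sketch: given region features from a group of images whose captions share a concept, score each region by its cross-image support and average the best-supported regions into a concept prototype. Everything here (the function name, cosine scoring, argmax selection) is a simplified assumption for illustration, not CoDet's actual implementation.

```python
import numpy as np

def discover_co_occurring_prototype(region_feats):
    """Toy sketch of co-occurrence guided object discovery: each element of
    region_feats is an (n_i, d) array of region features from one image, and
    all images in the group share a concept in their captions. A region is
    scored by its best cosine similarity to regions of the other images; the
    prototype for the shared concept averages each image's best-supported
    region. Hypothetical simplification, not CoDet's implementation."""
    picked = []
    for i, F in enumerate(region_feats):
        others = np.concatenate([G for j, G in enumerate(region_feats) if j != i])
        Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
        On = others / np.linalg.norm(others, axis=1, keepdims=True)
        support = (Fn @ On.T).max(axis=1)     # cross-image support per region
        picked.append(F[np.argmax(support)])  # most consistent region wins
    return np.mean(picked, axis=0)            # prototype for the shared concept
```

The intuition this is meant to capture is the one from the paper: distractor regions vary from image to image, while the object named by the shared concept recurs, so cross-image similarity singles it out.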
Summary: This paper proposes CoDet, a novel approach for open-vocabulary object detection that reformulates region-word alignment as a co-occurring object discovery problem. The approach groups images that mention the same concept in their captions, leveraging region-region correspondences to discover the common object and adopt the shared concept as category label for open-vocabulary supervision. Experimental results demonstrate that CoDet consistently outperforms state-of-the-art methods in detecting novel objects. Strengths: 1. Introducing a new method, CoDet, for solving the open-vocabulary object detection problem, which reformulates region-word alignment as a co-occurring object discovery problem, achieving more accurate localization and better generalization ability. 2. Experimental results demonstrate that CoDet outperforms existing state-of-the-art methods in detecting novel objects and exhibits strong scalability, benefiting from advancements in visual foundation models. 3. The authors provided open-source code, which ensures the reproducibility of experiments and promotes further research. 4. The paper is well-organized, with a clear structure and logical flow. Weaknesses: 1. The method is simple, and prototype-based solutions are common in other tasks. I am slightly concerned about the novelty of the approach. 2. There are fairness concerns in this paper. See Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The fairness of the comparison cannot be guaranteed. For LVIS, CenterNet2 is used in this paper. But in previous methods, Faster R-CNN is usually used. 2. Some implementation details are confusing. Why use CenterNet2 [62] with ResNet50 as the backbone for LVIS? But for OV-COCO, Faster R-CNN with a ResNet50-C4 backbone is adopted. Is this fair to compare with previous methods? The same backbone should be used for fair comparison. 3. Some OVD methods are not compared in this paper.
e.g., "Open-vocabulary detr with conditional matching." Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The author have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Z3qw, We really appreciate your comments. We hope our response can address your concerns and clarify our contribution. 1. Novelty concern - We believe method simplicity should not harm paper novelty. In contrast, without redundant design, simple and effective methods usually provide clear and concise insights into the research problem. - Besides, CoDet contributes a new perspective to address the OVD problem. As confirmed by all other reviewers, the idea of reformulating region-word alignment as a co-occurring object discovery problem is unique and interesting, which overcomes the reliance on pre-trained or self-trained vision-language models for alignment. Reviewer 2fPU further confirms this idea has reference value for future work within and beyond OVD research. - While prototypes are commonly used in other tasks like few-shot learning, CoDet is fundamentally different from these methods in how prototypes are constructed and used, and in the underlying insight. For prototype construction, rather than relying on annotated samples, we introduce similarity-based prototype synthesis to automatically discover prototypes for target objects. Besides, CoDet does not directly use prototypes for classification. Instead, prototypes are only intermediate products to learn vision-language alignment. At inference time, CoDet no longer needs prototypes. 2. Fairness concern in comparison - First, we would like to point out that "using CenterNet2 for LVIS experiments and Faster R-CNN for COCO experiments" is not an "innovation" of this paper. As stated in line 234, we follow exactly the same setting as Detic [1], a well-known OVD work (167 citations to date). VLDet [2] also adopted this setting. - Second, we choose to follow the Detic setting because it is a mature setting that has been widely acknowledged by the community.
It can be seen that most, if not all, recent OVD works include Detic results in their comparisons, e.g., PromptDet [3], F-VLM [4], VLDet, CORA [5]. - Third, it is hard to find a universal standard for implementation. Discrepancies in implementation are common among different OVD works. For instance, ViLD [6] uses a 32x training schedule (CoDet only uses 4x), DetPro [7] and BARON [8] use the detection-specialized pretraining SoCo [9] for weight initialization, and OV-DETR [10] uses Deformable-DETR, not to mention that many works do not even follow the strict open-vocabulary setting. 3. "Open-vocabulary detr with conditional matching" is not compared in this paper. - OV-DETR has already been included in the comparison (please see Tables 1 & 2). References: [1] Detecting Twenty-thousand Classes using Image-level Supervision [2] Learning Object-Language Alignments for Open-Vocabulary Object Detection [3] PromptDet: Towards Open-vocabulary Detection using Uncurated Images [4] F-VLM: Open-Vocabulary Object Detection upon Frozen Vision and Language Models [5] CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching [6] Open-vocabulary Object Detection via Vision and Language Knowledge Distillation [7] Learning to Prompt for Open-Vocabulary Object Detection with Vision-Language Model [8] Aligning Bag of Regions for Open-Vocabulary Object Detection [9] Aligning pretraining for detection via object-level contrastive learning [10] Open-vocabulary detr with conditional matching --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. Although I still think the method of this paper is simple, it is a good paper that provides a new perspective. After reading the rebuttal and comments from other reviewers, I upgraded the rating. --- Reply to Comment 1.1.1: Title: Thanks for Your Support of Our Work Comment: Thank you very much for taking the time to reconsider our paper submission.
We appreciate you engaging with us in the rebuttal process, thoughtfully considering our responses, and agreeing to upgrade the rating of our paper. We are grateful for your open-mindedness and willingness to re-evaluate our work.
Summary: The paper proposes a novel perspective on discovering region-word correspondence from image-text pairs, which bypasses the dependence on a pre-aligned vision-language space by reformulating region-word alignment as a co-occurring object discovery problem. An open-vocabulary object detection framework named CoDet is built and achieves state-of-the-art performance across multiple standard benchmarks. The proposed method also demonstrates the effectiveness of visual guidance in region-word alignment. Experimental results validate the effectiveness of the method. Strengths: The paper is well-written and easy to understand. The idea of using co-occurrent regions as visual guidance is interesting. Experimental results are promising. Weaknesses: 1. During prototype construction, how can one ensure that the prototype does not contain noisy information such as hard negative samples? 2. In line 16, the authors claim that the proposed method can benefit from advancements in visual foundation models. However, no further experiments with different visual foundation models are carried out. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer CkPP, Thanks so much for your constructive comments and support for acceptance. We hope our response can address your concerns. 1. How to ensure that the prototype does not contain noisy information such as hard negative samples? - This is a very good question. Since we are aggregating all region proposals of an image into a prototype, we are not able to ensure 100% purity of the prototype. But we found our learning-based prototype synthesis is relatively robust to noisy samples. From our observation, most of the proposals received weights smaller than 0.01 in prototype synthesis, which suggests the algorithm automatically learns from the similarity matrix to suppress noisy information. - As for hard negative samples, text guidance serves as an effective method to filter them (only proposals corresponding to the target concept will get a high similarity score). Figure 5 gives an example of how text guidance filters hard negatives. 2. No further experiments on visual foundation models. - We would like to clarify that the original content in line 16 is "CoDet exhibits its potential to benefit from advancements in visual foundation models", which is based on two facets: - Theoretically, CoDet mostly relies on visual correspondences to identify co-occurring objects, thus robust visual features from foundation models would benefit CoDet in finding more accurate co-occurrences. - Empirically, our experiments show that adopting stronger visual backbones (ResNet50 -> Swin-B, Table 1) leads to a significant performance boost (+6.7 mAP on novel classes). For the aforementioned reasons, we believe using stronger visual foundation models as the backbone would likely continue to bring further performance gains. - We totally agree that adding experiments on visual foundation models is a great idea to improve our work.
Therefore, we are running OV-LVIS experiments using the EVA-02 [1] pre-trained model, which achieves 90.0% top-1 accuracy on ImageNet-1k val. As this experiment takes a long time, we will post the results later during the author-reviewer discussion period. References: [1] EVA-02: A Visual Representation for Neon Genesis --- Rebuttal Comment 1.1: Title: Experiment Results on Visual Foundation Models Comment: Here we report the results of CoDet using EVA02-L as the backbone on OV-LVIS, which further verifies that CoDet can consistently benefit from stronger visual representations. In comparison with F-VLM [1], which uses the pre-trained CLIP visual encoder as backbone, CoDet achieves superior performance with fewer parameters.

| Method | Backbone    | Params.  | mAPr     | mAPc     | mAPf     | mAP      |
|--------|-------------|----------|----------|----------|----------|----------|
| CoDet  | R50         | 25M      | 22.7     | 30.3     | 34.7     | 30.7     |
| CoDet  | Swin-B      | 88M      | 29.4     | 39.5     | 43.0     | 39.2     |
| CoDet  | **EVA02-L** | **304M** | **35.2** | **49.1** | **49.2** | **46.7** |
| F-VLM  | R50x64      | 420M     | 32.8     | --       | --       | 34.9     |

[1] F-VLM: Open-Vocabulary Object Detection upon Frozen Vision and Language Models
Summary: This paper argues that existing OVD work relies heavily on vision-language pre-training, and that current methods cannot provide fine-grained cross-modal alignment information for OVD, resulting in limited performance on open-vocabulary detection tasks. To this end, the authors propose to adopt co-occurrence-guided region-word alignment for the OVD task. This method is novel and interesting, and has good reference value for fields such as OVD and open-world segmentation. Overall, the writing of this article is clear and the content is comprehensive. But I think the authors still need a clearer motivation, formulation, and explanation of how co-occurring proposals are discovered in the image. Strengths: As expressed in the summary, the ideas in this paper are novel and interesting, and the writing is clear and complete. Weaknesses: In this paper, the motivation for and explanation of how to obtain the co-occurring proposal corresponding to the text from the mini-group of images are not clear. For example, - How do the authors avoid the problem of large differences in the visual appearance of target objects in different images under the same concept, mentioned in line 170? - In addition, the description of the change in the shape of the similarity matrix in Figure 2 comes too late (in line 185); it may actually be more appropriate to place it around line 166. This caused some difficulty in understanding. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How do the authors avoid the problem of large differences in the visual appearance of target objects in different images under the same concept, mentioned in line 170? 2. In line 189, why should the co-occurrence region appear in the last dimension of S? 3. In line 177, the text feature should be an embedding of a sentence, so why do the authors say that the relative size of text features in different dimensions indicates their relative importance in concept classification?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors provide an objective discussion of the limitations of the article. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 2fPU, Thank you very much for your constructive comments which helped improve our manuscript, as well as your support for acceptance. We hope our response can address your concerns. 1. Dealing with large intra-class variance of object appearances - We adopt text guidance to suppress the impacts of intra-class visual variations. - First, we observe that intra-class variations are typically reflected in particular visual attributes. For example, some object classes have large variances in color, and some have large variances in texture. Those visual attributes of large intra-class variance are naturally considered uninformative in the classification of that class. - Second, in the shared vision-language space, the class text embeddings are forced to have high similarity scores with visual features from the same class, agnostic to the visual feature variance during contrastive learning. This encourages the text embeddings to assign high weights to dimensions corresponding to visual attributes of low intra-class variance, and assign low weights to those corresponding to attributes of high variance. - We thus leverage the learned class text embeddings for feature selection (see Equation) to emphasize invariant features and suppress features with high variations. For instance, suppose the visual feature of "dog" has high variance at dimension 0 but low variance at dimensions 1 & 2, we would expect the text embedding of "dog" to be something like [0.1, -0.6, 0.7], where 0.1 indicates the visual attribute at dimension 0 is unimportant in the classification of "dog". Based on this observation, we use the absolute value of text embeddings to reweight similarity estimation (Equation 1). Thus, intra-class variations are suppressed by small weights from corresponding text embeddings. 2. Move description of the similarity matrix calculation upward - Thanks for your valuable feedback to enhance the work! 
We agree that it would be more logically coherent to put the introduction of "similarity-based prototype discovery" before "text guidance" as the former constitutes the major part of our method. We will rearrange the layout of the paper as suggested. 3. In line 189, why should the co-occurrence region appear in the last dimension of S? - This is because the last dimension of S essentially includes the similarity scores between the query proposal and region proposals from the support images. Among these scores, the co-occurring region proposals are characterized by having high values. 4. How does the relative magnitude of text features at different dimensions indicate their relative importance in concept classification? - We hope our discussion in question 1 addresses this question as well. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 2fPU Comment: Thank you very much for your detailed explanation of my concerns.
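For illustration, the text-guided similarity and prototype synthesis discussed in this exchange can be sketched as follows. This is a simplified NumPy sketch, not the paper's implementation: the cosine similarity, the softmax aggregation, and all function names are illustrative assumptions standing in for the paper's Equation 1.

```python
import numpy as np

def text_guided_similarity(query_feats, support_feats, text_emb):
    # Reweight each feature dimension by the absolute value of the class
    # text embedding, so dimensions with low intra-class variance
    # (high |weight|) dominate the similarity estimate.
    w = np.abs(text_emb)                                   # (D,)
    q = query_feats * w                                    # (Nq, D)
    s = support_feats * w                                  # (Ns, D)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    return q @ s.T                                         # (Nq, Ns)

def synthesize_prototype(query_feats, support_feats, text_emb, tau=0.1):
    S = text_guided_similarity(query_feats, support_feats, text_emb)
    # A query proposal whose best match among support proposals is strong
    # is likely a co-occurring object; proposals with no close neighbors
    # receive near-zero weight, which suppresses noise.
    scores = S.max(axis=1)                                 # (Nq,)
    alpha = np.exp(scores / tau)
    alpha = alpha / alpha.sum()
    return (alpha[:, None] * query_feats).sum(axis=0)      # (D,)
```

The point about the last dimension of S (question 3 above) is visible here: row i of `S` holds the similarities between query proposal i and every support proposal, so co-occurring regions show up as high values along that last dimension.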
Rebuttal 1: Rebuttal: Dear Reviewers and ACs: Thank you so much for your time and effort in assessing our paper. We hope our rebuttal has addressed your concerns. We are happy to discuss further if there are still other concerns. Thanks for helping improve our paper. Best regards, Paper 2038 Authors
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose CoDet, a novel approach that overcomes the reliance on a pre-aligned vision-language space by reformulating region-word alignment as a co-occurring object discovery problem. Specifically, CoDet groups images that mention the same concept in their captions, which brings a natural correspondence between the shared concept and the common objects within the group through co-occurrence. Experimental results demonstrate that CoDet consistently outperforms state-of-the-art methods in detecting novel objects. Strengths: 1. This paper proposes an interesting idea: reformulating region-word alignment as a co-occurring object discovery problem. 2. Experiments show its effectiveness. Weaknesses: 1. Missing experimentation: a comparison is needed with the fine-grained word-region alignment method proposed in DetCLIPv2 [1], another OVD method that tries to learn from image-text pairs more efficiently with a region-word loss. 2. Some more questions; see the questions. [1] DetCLIPv2: Scalable Open-Vocabulary Object Detection Pre-training via Word-Region Alignment Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The image-text pairs currently used are relatively clean. If noisier data such as YFCC were used, the results might differ: in those images, the content described in many captions does not align with the images. Would the algorithm proposed in the paper have limitations in such cases? Furthermore, how efficient is the proposed algorithm? For instance, when expanded to larger datasets like YFCC26M, how does it affect the increase in training time and the stability of the performance improvement? 2. Suppose there are two nouns for which your model has never learned similar concepts (so text guidance is ineffective), and the objects corresponding to these two nouns always appear simultaneously, such as eyes and eyelashes.
Would it be difficult for this framework to distinguish between them, or are all existing methods fundamentally incapable of solving such issues? 3. Regarding the technical details in lines 160-161, it appears that multiple prototypes and text losses are computed simultaneously in a single forward pass, followed by iterative selection of the query image, correct? 4. How sensitive is this framework to the RPN? For example, in the case of Figure 2, if the RPN proposes a dog's head, is there a possibility that the dog's head might be recognized as a complete dog? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Puz3, Thanks a lot for your insightful reviews and support for our work! We hope our response can address your questions. 1. Need comparison with DetCLIPv2 - Thanks for pointing out this missing **contemporary work**. We will add a discussion of DetCLIPv2 to the Related Work section of our paper. - CoDet and DetCLIPv2 are not readily comparable at this point due to different training and evaluation protocols (DetCLIPv2 follows the setting in GLIP [1], while CoDet follows the setting in ViLD [2]). In detail, for training, DetCLIPv2 uses 5x more data (roughly 15M image-text pairs and 1.44M region-text pairs) than CoDet. For evaluation, DetCLIPv2 uses the full LVIS rather than OV-LVIS, which splits LVIS into seen/unseen categories. - We follow the setting in ViLD due to computational resource constraints. Moreover, as there is no open-source code for DetCLIPv2, reproducing and evaluating DetCLIPv2 using the same data as CoDet is not feasible given the limited rebuttal time. - Finally, from a high-level perspective, our effort in reformulating region-word alignment as co-occurring object discovery is orthogonal to the efforts in DetCLIPv2, which explores aligning regions and words directly. 2. Dealing with noisier image-text pairs (e.g., YFCC) - This is quite an intriguing observation. Typically, poor quality of training image-text pairs degrades model performance. This is **a common issue** for pseudo-label-based OVD methods; e.g., VLDet [3] also has related discussions in Appendix 4, Failure Cases. - CoDet is designed to be relatively robust to such noise. The minimum requirement for discovering co-occurring objects is that the target object exists in the majority of images within a concept group, though not necessarily in all images.
- In our early experiments, we attempted to filter out noisy samples: if an image did not have proposals with sufficiently close neighbors across the majority of images in the mini-group, it was considered not to include the target concept and was removed. This design did not bring notable performance gains yet introduced unnecessary complexity, so we discarded it. But if noisier image-text pairs, e.g., YFCC, are used for training, this in-place filtering method might prove beneficial. We may explore this in the future. 3. Training efficiency. - The training time of CoDet grows linearly with the amount of training data. This means CoDet can be easily scaled to web-scale data. 4. Stability of performance improvement when scaled to larger datasets. - This is a good point worth future study. In our experiments with the OV-COCO setting (small-scale) and the OV-LVIS setting (large-scale), we see a significant performance improvement over baselines in both settings. This is a positive sign that CoDet can scale up to larger datasets. 5. How to distinguish co-existing concepts? - Yes, this can be a systematic failure of existing OVD methods: distinguishing co-existing concepts that have never been seen before. Co-existing concepts will incur high ambiguity in associating concepts with regions. For example, in methods based on region-word similarity matching such as VLDet, the ambiguity lies in associating regions with corresponding text supervision. For our method, the ambiguity lies in associating objects discovered in a group with corresponding co-occurring concepts. - But we think this is not a failure of algorithms but a failure of data -- even humans could hardly distinguish two unseen concepts that "always appear concurrently" without any prior knowledge of them. We believe one solution to this problem is to scale up training data.
Given sufficient data, we can hopefully find samples that break up the concurrency of the two concepts. Another solution is to inject priors for learning more discriminative embeddings of the two concepts. 6. Clarification on technical details in lines 160-161 - The query image is set before computing prototypes and text losses. The full training pipeline can be summarized as follows: - sample images from the same concept group -> extract region proposals for each image -> iteratively set an image as the query image -> compute the similarity matrix between proposals of the query image and the support images -> synthesize a prototype for the query image -> compute the text (classification) loss - Note that although we use "iteratively" to describe this process of query image selection, the prototype syntheses are actually independent of each other. This means in practice, prototype synthesis can be executed simultaneously for efficiency. 7. Is there a possibility that the dog's head might be recognized as a complete dog? - There are two possible scenarios. If the image only contains a part of the dog, e.g., the dog's head, CoDet will recognize the dog's head as a dog, which is expected behavior. If the image contains a complete dog, the proposal of the dog's head will be suppressed by the proposal of the complete dog in NMS, which is often the case. Therefore, it is okay for the RPN to generate some part-level proposals, as long as we can handle them in post-processing, i.e., NMS. Moreover, when conducting visualization, we did not find the misclassification of object parts as objects to be a systematic issue of CoDet. References: [1] Grounded Language-Image Pre-training [2] Open-vocabulary Object Detection via Vision and Language Knowledge Distillation [3] Learning Object-Language Alignments for Open-Vocabulary Object Detection --- Rebuttal Comment 1.1: Title: Response to the author Comment: Thank you very much for your detailed explanation of my concerns. I keep my initial score.
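The pipeline enumerated in this rebuttal can be sketched in a few lines of Python. All names here are hypothetical placeholders for the paper's components, not the authors' actual API; the helpers are passed in as stubs.

```python
def codet_training_step(group_images, extract_proposals,
                        synthesize_prototype, text_loss):
    # One step over a mini-group of images that share a concept.
    # All helper names are hypothetical stand-ins for the paper's modules.
    feats = [extract_proposals(img) for img in group_images]
    losses = []
    for i, query in enumerate(feats):
        # Each image takes a turn as the query; the rest act as support.
        support = [f for j, f in enumerate(feats) if j != i]
        proto = synthesize_prototype(query, support)
        losses.append(text_loss(proto))
    # The prototype syntheses are independent of one another, so in
    # practice they can run in parallel rather than in this loop.
    return sum(losses) / len(losses)
```

The sequential loop makes the "iteratively set an image as the query image" step explicit; as the rebuttal notes, nothing in the loop body depends on a previous iteration.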
Accelerating Large Batch Training via Gradient Signal to Noise Ratio (GSNR)
Reject
Summary: This paper proposes a gradient descent technique for learning deep neural nets with large batch sizes. The authors focus on the setting where the model size is small to medium (10M to 300M parameters) and the dataset size is also small to medium (up to 1M images in vision or ~3B words in NLP), but with as large a batch size as possible. The goal of this work is to accelerate the learning of such models (e.g., ResNet on ImageNet, BERT on the original BERT corpus) by using fewer training steps and less training time. The authors were able to show the proposed method can achieve good results (without a big drop in eval metrics) while using larger batch sizes than prior art. And when the proposed method is compared with prior art at the same batch size, the proposed method seems to outperform prior art as well. Strengths: The paper covers standard benchmarks like ResNet on ImageNet and BERT pretraining, and therefore can be fairly compared against much prior art on the same tasks. Weaknesses: This paper claims "training acceleration" as a key contribution. But throughout the paper, the comparison on speedup is based on the number of steps or number of epochs. It is unclear what the speed advantages are in terms of wall-clock time from using the proposed technique. I also checked the supp. pdf. In other works (such as LARS and LANS, which are cited by this work), the authors usually report actual wall-clock time speedups as they increase the batch size and the compute infra. It was disappointing to not see any mention of that, given the authors are using 768 GPUs (therefore I expect very interesting scaling behaviors). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * how does the proposed method help with large-scale training such as CLIP models or DINOv2 models, where the datasets are much bigger than in traditional settings like ImageNet? * what would be an ideal scenario for using the proposed technique in a computer vision task?
from Table 2, there is a clear trade-off between accuracy and batch size. * in Table 1 and Table 2, how much time does each training take (under different batch sizes)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: the proposed technique is only validated on small-to-medium-size models on small-to-medium-size datasets. It is not applicable (at least no evidence is provided) to large-scale model training (either large in dataset size or large in model size). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for his/her constructive comments. We carefully address the reviewer's questions as follows. **Q1:** *This paper claims "training acceleration" as a key contribution. But throughout the paper, the comparison on speedup is based on the number of steps or number of epochs. It is unclear what the speed advantages are in terms of wall-clock time from using the proposed technique. I also checked the supp. pdf. In other works (such as LARS and LANS, which are cited by this work), the authors usually report actual wall-clock time speedups as they increase the batch size and the compute infra. It was disappointing to not see any mention of that, given the authors are using 768 GPUs (therefore I expect very interesting scaling behaviors)* Sure, please see Table 1 and Table 2 of the response PDF. Both results show that large batch training largely reduces training time. We have added them in the revision. **Q2:** *how does the proposed method help with large-scale training such as CLIP models or DINOv2 models, where the datasets are much bigger than in traditional settings like ImageNet?* In CLIP training, which is ResNet- or Transformer-based, the authors used the Adam optimizer and set the batch size to 32k, which can potentially be enlarged with the VR-Adam/VR-LARS methods we proposed. A larger global batch size means more GPUs can run in parallel to train the model, which ultimately accelerates consuming such bigger training sets. **Q3:** *what would be an ideal scenario for using the proposed technique in a computer vision task? from Table 2, there is a clear trade-off between accuracy and batch size.* The ideal scenario is to select the optimal batch size that satisfies the desired accuracy and computing time. We suggest using VR-LARS in computer vision scenarios when the batch size is $\geq$ 2k, since VR-LARS is consistently better than any other optimizer listed in Table 2 from 2k to 96k.
**Q4:** *in Table 1 and Table 2, how much time does each training take (under different batch sizes)?* Same as Q1. --- Rebuttal 2: Comment: Dear Reviewer cWbq, Thanks for your service as a reviewer for the conference. For this work, you have voted for borderline reject, while most reviewers gave positive ratings. Can you please post your post-rebuttal comments? Based on the authors' rebuttals and other reviewers' comments, would you like to change your original score? If not, it would be great to share your opinions with the authors and the other reviewers so that we (the reviewers and the AC) can reach a consensus. Best, AC --- Rebuttal Comment 2.1: Comment: I acknowledge the rebuttal. I appreciate the PDF response they provided. I'm willing to change my vote to borderline accept; however, I will not change my confidence rating. --- Reply to Comment 2.1.1: Title: Thanks for your comments! Comment: As a reminder, could you please change the score to borderline accept as you said? We appreciate your constructive discussion with us.
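For context, the gradient signal-to-noise ratio in the paper's title measures how consistent per-sample gradients are within a batch. The sketch below is only a rough illustration of scaling an SGD step by GSNR; the paper's actual VRGD/VR-LARS/VR-Adam update rules are not reproduced in this thread, and the bounded `r / (1 + r)` scaling is an assumption made for the sketch.

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    # GSNR per parameter: squared mean gradient over its variance across
    # the samples in a batch. High values mean the batch's gradients
    # agree on a direction; low values mean they are dominated by noise.
    g = np.asarray(per_sample_grads)          # shape (B, D)
    return g.mean(axis=0) ** 2 / (g.var(axis=0) + eps)

def gsnr_scaled_sgd_step(w, per_sample_grads, lr):
    # Illustrative only: shrink the step along noisy directions by a
    # bounded function of GSNR (not the paper's exact update rule).
    r = gsnr(per_sample_grads)
    scale = r / (1.0 + r)                     # elementwise, in [0, 1)
    g_mean = np.asarray(per_sample_grads).mean(axis=0)
    return w - lr * scale * g_mean
```

Under this sketch, directions where the batch's per-sample gradients agree take a near-full SGD step, while pure-noise directions barely move, which is one intuition for why larger batches could tolerate larger learning rates.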
Summary: The authors propose a heuristic training strategy, called the variance-reduced gradient descent technique (VRGD), based on the gradient signal-to-noise ratio, i.e., the ratio between the norm and the variance of the gradient. Compared to vanilla training, VRGD scales the learning rate with the gradient signal-to-noise ratio during each iteration. Then, the authors prove that the proposed method converges within a finite number of training steps, and claim that it gives a better generalization gap compared to vanilla training. Finally, the authors show the effectiveness of their method via BERT training and ImageNet training with up to 96k batch size. Strengths: 1. The paper is clearly written and easy to follow. 2. With the development of hardware, training with large batches will gradually become a basic requirement for training large-scale models on gigantic data, such as GPT. In my opinion, the authors are focusing on a topic very worthy of investigation. I believe that this paper may have potential significance not only in academia but also in industry. However, some modification may be required currently. 3. The authors show some interesting and valuable experimental results, but they are not comprehensive enough. Weaknesses: 1. (Major) The authors repeatedly mention that large batch training can lead to sharp minima in the Abstract and Introduction, which seems to suggest that the proposed method can avoid this problem. However, I have not found discussions or observations showing that the proposed method solves this issue. So can the proposed method escape these bad minima? Considering the many recent works that focus on guiding training to converge to flat minima, i.e., the SAM family and gradient norm regularization, I am quite curious what it would be like to adopt the proposed method in these algorithms for large batch training.
And from the results, I find that using the proposed method alone would not result in better performance than these flat-minima-based methods. I have listed some typical works below. [1] Foret, Pierre, et al. "Sharpness-aware minimization for efficiently improving generalization." ICLR 2021. [2] Kwon, Jungmin, et al. "ASAM: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks." ICML 2021. [3] Zhuang, Juntang, et al. "Surrogate gap minimization improves sharpness-aware training." ICLR 2022. [4] Zhao, Yang, et al. "Penalizing gradient norm for efficiently improving generalization in deep learning." ICML 2022. 2. (Major) Continuing with the previous comment, I find the proposed method rather heuristic, lacking a clear motivation. I could not find a clear rationale and sufficient analysis showing that the proposed method can benefit training. Essentially, the proposed method simply scales the learning rate adaptively based on a specific parameter (Eq. 10). I am not quite convinced that such a learning rate scaling policy will lead to a reasonable performance gain. Meanwhile, the authors argue that the proposed method can lead to a smaller generalization gap. However, the core support for this claim is an empirical observation, which makes the mathematical proof not quite rigorous and the claim much weaker and less helpful. 3. (Major) I would like to discuss the convergence of the proposed method. Firstly, to my understanding, the convergence analysis concerns the extent to which training converges on the given training samples, not the testing set. So, it is not quite appropriate to use the convergence curve on the testing set to demonstrate the conclusion regarding the convergence analysis, i.e., Figure 2. Secondly, from Figure 2, the authors state that the proposed method converges 1.7~4 times faster than the conventional optimizers.
But I could not observe such a big gap between them in Figure 2, so could the authors explain how convergence is measured here? Thirdly, a tighter bound on convergence does not give any information regarding testing performance. A looser bound and slower convergence rate can give better testing performance in many cases, for example SAM. The authors can refer to the paper below. [5] Andriushchenko, Maksym, and Nicolas Flammarion. "Towards understanding sharpness-aware minimization." ICML 2022. Note that I am not saying a faster convergence is harmful. In my opinion, the core meaning of this convergence section is to prove that the proposed method can converge in finite time. And it is not surprising that the proposed method shares the same convergence rate as SGD, i.e., $O(1/\sqrt{T})$, given that this method merely scales the learning rate compared to SGD. In the current version, the convergence section seems to heavily imply that the proposed method outperforms SGD without any promise about testing performance, which I disagree with. 4. (Minor) Line 32. "However, Keskar et al. [2017] theoretically analyze the LB training and finds that it can be easily trapped into sharp local minimum, leading to strong generalization gap". Actually, the cited paper (Keskar et al. 2017) does not provide a theoretical analysis. 5. (Minor) Line 95. To my understanding, the variance of a random vector is a matrix, i.e., the covariance matrix. Why is it a scalar here? 6. (Minor) It is highly encouraged to show results of training vision transformers with the proposed method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I have not found any discussions about the limitations and potential negative societal impact. But in my opinion, this may not be a problem, since the work only focuses on the learning method in deep learning. Still, it is highly encouraged to add corresponding discussions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
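As context for the Eq. 10 discussion in this review (the equation itself is not reproduced in the thread), the kind of per-parameter learning-rate scaling being debated can be illustrated with a minimal numpy sketch. The `gsnr` and `scaled_update` helpers and the squashing form `r / (1 + r)` are hypothetical stand-ins, chosen only to show the "large GSNR → large step, small GSNR → small step" idea described in the rebuttal, not the paper's actual rule:

```python
import numpy as np

def gsnr(per_sample_grads):
    """Per-parameter gradient signal-to-noise ratio: squared mean of
    per-sample gradients divided by their variance."""
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0) + 1e-12  # avoid division by zero
    return mean ** 2 / var

def scaled_update(theta, per_sample_grads, base_lr=0.1):
    """Illustrative element-wise update: parameters with confident
    gradient estimates (high GSNR) take larger steps. The squashing
    r / (1 + r) is a hypothetical choice, not the paper's Eq. 10."""
    g = per_sample_grads.mean(axis=0)
    r = gsnr(per_sample_grads)
    scale = r / (1.0 + r)  # maps GSNR into (0, 1)
    return theta - base_lr * scale * g
```

On a toy example with two parameters, one whose per-sample gradients agree (high GSNR) and one whose gradients are noisy (low GSNR), the confident coordinate moves essentially at the full learning rate while the noisy one barely moves.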
Rebuttal 1: Rebuttal: We thank the reviewer for his/her detailed and constructive comments. We respond to the reviewer's concerns step by step below. **Q1:** *(Major) The authors repeatedly mention that large batch training can lead to sharp minima in Abstract and Introduction, which seems to suggest that the proposed method can avoid such problem...* A sharp minimum may cause a large generalization gap (Foret et al. 2020, Kwon et al. 2021, Zhuang et al. 2022, Zhao et al. 2022, Ahn et al. 2023, Wang et al. 2021, Simsekli et al. 2019). Our current work mainly focuses on the generalization gap. Following their work, we add a schematic (Fig.1 in the response PDF) to help explain how GSNR works to reduce the generalization gap. It shows that larger GSNR helps the weights escape from the large-generalization-gap area, while smaller GSNR attracts the weights to stay in the small-generalization-gap area. In addition, our experiments show that the generalization gap is significantly reduced using our method (Table.3, more than 40%). We would be excited to see other researchers apply our proposed method to the SAM family/gradient norm regularization. In principle, they can be combined to potentially break the current batch size limit. We will try this in future work. The main purpose of our work is to push the batch size limit of large batch training without noticeable accuracy loss. We checked the results of the SAM family/gradient norm but found that the batch sizes in their experiments are no more than 4k, which is smaller than what our proposed method supports. **Reference** Ahn, K., Jadbabaie, A., and Sra, S. (2023). How to escape sharp minima. arXiv preprint arXiv:2305.15659. Wang, X., Oh, S., and Rhee, C. H. (2021). Eliminating sharp minima from SGD with truncated heavy-tailed noise. arXiv preprint arXiv:2102.04297. Simsekli, U., Sagun, L., and Gurbuzbalaban, M. (2019, May). A tail-index analysis of stochastic gradient noise in deep neural networks.
In International Conference on Machine Learning (pp. 5827-5837). PMLR. **Q2:** *(Major) Continuing with the previous comment, I find the proposed method is rather heuristic, lacking a clear motivation. I could not find clear rationale and sufficient analysis that the proposed method can benefit training. Essentially, the proposed method is simply to scale the learning rate adaptively based on a specific parameter (Eq. 10)...* Here is the logic of our motivation: 1. the literature points out that training with LB may lead to a generalization gap; 2. we asked where the generalization gap comes from during neural network training; 3. the literature points out that updating weights with small GSNR leads to a generalization gap (Liu et al. 2020); 4. building on LARS/LAMB, which are based on learning rate scaling policies, we came up with the idea of updating weights with large GSNR using a large learning rate and those with small GSNR using a small learning rate. Many widely used large batch techniques are based on learning rate scaling policies and do achieve performance gains. For example, LARS/LAMB/LANS use large LRs for the normal layers but layer-wisely or block-wisely limit LRs when $||\theta_t||$ is comparable with its update quantity, and they are widely used in the research community and industry. A mathematical analysis of the generalization gap of our proposed method is given in Sec 5.2. Additional experiments further support our derivations and make the results more reliable. Jinlong Liu, Guoqing Jiang, Yunzhi Bai, Ting Chen, and Huayan Wang. Understanding why neural networks generalize well through GSNR of parameters. In 8th International Conference on Learning Representations, ICLR. OpenReview.net, 2020. **Q3:** *(Major) I would like to discuss the convergence of the proposed method. Firstly, to my understanding, the convergence analysis focuses on analyzing to what extent can training converge on the given training samples, not testing set.
So, it is not quite appropriate to use the convergence curve on the testing set to demonstrate the conclusion regarding the convergence analysis. i.e. Figure 2. Secondly, from Figure 2, the authors state that the proposed method...* Firstly, yes, the convergence rate measures how fast an optimizer converges during training, while test accuracy measures generalization. Secondly, we measured the speedup rates by checking the epochs used to reach the same accuracy. For example, Adam reaches 0.48 accuracy in about 85 epochs while VR-Adam reaches the same accuracy in about 20 epochs, which is roughly a 4 times speedup. Thirdly, we wanted to express that our proposed method has a tighter bound in large batch scenarios based on theoretical derivations. Testing performance is further compared in the experiment section. **Q4:** *(Minor) Line 32. "However, Keskar et al. [2017] theoretically analyze the LB training and finds that it can be easily trapped into sharp local minimum, leading to large generalization gap". Actually, the cited paper (Keskar et al. 2017) has not provided theoretical analysis.* Yes, "theoretically" is removed. **Q5:** *(Minor) Line 95. To my understanding, the variance of random vectors is a matrix, i.e. covariance matrix. Why is a scalar here?* $\rho^2(\theta_j)$ is not a scalar. We use $j$ to index the weights. **Q6:** *(Minor) It is highly encouraged to show the results of training vision transformers with the proposed methods.* Thank you for the suggestion, but we have already covered BERT, ImageNet and DLRM, three commonly used scenarios, which is more than the popular LARS (ImageNet only) and LAMB (ImageNet and BERT only). We discuss applying the method to ViT in the discussion section and leave the experiment to future work. --- Rebuttal Comment 1.1: Title: Thanks for the kind response. Comment: Thanks for the kind response and all the authors' effort. I think the proposed method is interesting, and believe it may have potential impact on learning methods.
However, I think the current version is not ready for publication. Based on the authors' response, I have decided to keep my rating. 1. I can understand the authors' intention. The uploaded PDF again conveys the idea that the proposed method would encourage training towards flat minima. However, firstly, I could not find sufficient justification for why the proposed method would avoid sharp minima, especially considering that, unlike SAM, the proposed method does not aim to guide the training towards flat minima in a straightforward manner. Importantly, given that the method targets improved generalization from the perspective of flat-minima training, the authors have not provided any comparison with, or shown compatibility with, the SAM family of learning methods, which is considered the current dominant approach to this problem. Meanwhile, based on the experiments and my own training experience, the proposed method would not give better results than the SAM family (96.52 with SAM using ResNet18 on CIFAR-10 [1]; the best accuracy is 93.79 with the authors' method using ResNet56). And the authors argue in their rebuttal that "We checked the results of SAM family/gradient norm but found that the batch size of their experiments is no more than 4k, which is smaller than our proposed method." However, this does not mean that SAM cannot be applied to large batch training. Several papers have already presented results of SAM training with large batches. For example, [2] provides results of SAM training with up to 32k batch size, contrary to what the authors are arguing. So, in my opinion, a basic comparison between the proposed method and SAM is the minimum for acceptance. [1] Efficient Sharpness-aware Minimization for Improved Training of Neural Networks, ICLR2022 [2] Towards Efficient and Scalable Sharpness-Aware Minimization, CVPR2022. 2. Essentially, the proposed method is simply to scale the learning rate adaptively based on a specific parameter (Eq. 10).
The core support for why such training can give a better generalization gap is an empirical observation, which is not quite rigorous or useful. As far as I know, the mathematical discussion of a generalization bound should only be based on reasonable basic assumptions, like L-smoothness, etc. Meanwhile, the authors have not given a clear response in their rebuttal. 3. Based on the results, I can observe that the proposed method presents a faster convergence rate for the given case, but the metric that the authors use to gauge the convergence rate is questionable. The authors use the epochs at which different optimizers reach the same accuracy in the middle of training (in the authors' rebuttal, optimizers reach 0.48 accuracy) as the speed of convergence. To my understanding, the convergence rate is the speed of reaching the optimal solution. However, the authors' metric firstly uses the middle state of training, not the optimal solution, and secondly has not gauged the "speed", i.e. the variation of values. --- Reply to Comment 1.1.1: Title: We appreciate the reviewer's patience and constructive discussion. Comment: **Q1:** *I can understand the authors' intention. The uploaded pdf again has conveyed the idea that the proposed method would encourage training towards flat minima. However, firstly, I could not find sufficient view regarding why the proposed method would avoid sharp minima, especially considering that unlike SAM, the proposed method does not target to guide the training towards flat minima in a straightforward manner. Importantly, given the method targets to improve generalization from the perspective of flat minima training, the authors have not provided any comparison or show the...* Firstly, we did not show more supporting results because our proposed method is mainly based on the generalization gap; it is not a method that directly escapes sharp minima like SAM. Our proposed method focuses on scaling the batch size by controlling the generalization gap using GSNR.
Therefore, we only stated that it helps reduce the generalization gap, rather than claiming it escapes sharp minima in a direct way. [6] has theoretically shown how GSNR influences the generalization gap. We further derived that our proposed method, based on GSNR, can control the generalization gap in large batch scenarios in Sec.5.2 and verified our derivations with ImageNet experiments in Table.3. Secondly, we apologize for the misleading statement; we did not mean that SAM [1] cannot be used in large batch training. Rather, such a more complex hybrid optimizer is our future work. Thirdly, sure, we compare our proposed method with the SAM family/gradient norm on ImageNet-1k with ResNet50 below (these results are cited from [2,3,4,5]). The results show that our proposed method performs better than SAM/ASAM/ESAM/gradient norm and the same as GSAM. We will add this table in the revision. | Batch Size | 512 | 4k | | --- | --- | --- | | baseline w.o. SAM | 75.8%[2] | 76.0%[3] | | SAM (90 epochs) | - | 76.9%[3] | | SAM (100 epochs) | 76.4%[2] | - | | ASAM (100 epochs) | 76.6%[2] | - | | GSAM (90 epochs) | - | 77.2%[3] | | ESAM (90 epochs) | 77.1%[5] | - | | Gradient Norm (100 epochs) | - | 77.1%[4] | | **Ours** (90 epochs) | - | 77.2% | **Reference** [1] Foret, Pierre, et al. "Sharpness-aware minimization for efficiently improving generalization." ICLR2020. [2] Kwon, Jungmin, et al. "Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks." ICML2021. [3] Zhuang, Juntang, et al. "Surrogate gap minimization improves sharpness-aware training." ICLR2021. [4] Zhao, Yang, et al. "Penalizing gradient norm for efficiently improving generalization in deep learning." ICML 2022. [5] Efficient Sharpness-aware Minimization for Improved Training of Neural Networks, ICLR2022 [6] Understanding why neural networks generalize well through GSNR of parameters, ICLR2020.
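For readers following the SAM-family comparison in this exchange, the basic SAM update [1] first ascends to a worst-case perturbation of the weights within an l2-ball, then descends using the gradient computed there. A minimal numpy sketch on a toy quadratic loss (illustrative only; not the paper's method, and real implementations operate on full networks):

```python
import numpy as np

def loss(theta):
    """Toy quadratic objective with gradient equal to theta."""
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    return theta

def sam_step(theta, lr=0.1, rho=0.05):
    """One SAM update: perturb weights along the normalized gradient
    direction by radius rho, then descend using the gradient at the
    perturbed point."""
    g = grad(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent to worst case
    g_adv = grad(theta + eps)                    # gradient at perturbed point
    return theta - lr * g_adv
```

Note that each SAM step needs two gradient evaluations, which is part of why efficiency-oriented variants (ESAM, and the scalable variants cited above) exist for large batch settings.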
**Q2:** *Essentially, the proposed method is simply to scale the learning rate adaptively based on a specific parameter (Eq. 10). The core that supports why such training can give better generalization gap is based on an empirical observation, which is quite not rigorous and useful. As far as I know, the mathematical discussion of generalization bound should only be based on some reasonable basic assumptions, like L-smoothness, etc. Meanwhile, the authors have not give clear response in their rebuttal.* The assumptions used to derive the generalization gap (Sec 5.2) were Assumption 1 (non-overfitting limit approximation) and $\lambda\rightarrow 0$. Note that Assumption 1 is cited from [1] and can hold using an early stopping strategy. The learning rate assumption can also hold with a commonly used learning rate decay policy. In fact, the learning rate decays to 0 in our ImageNet experiments. We apologize that our previous response was not clear. Our logic is that we first mathematically derive the generalization gap of our proposed method, then we further verify the derivations with experiments. We think such logic is reasonable and helpful. **Reference** [1] Understanding why neural networks generalize well through GSNR of parameters, ICLR2020. **Q3:** *Based on the results, I could observe that the proposed method presents faster convergence rate for the given case, but the metric that the authors use to gauge the convergence rate is questionable. The authors use the epochs that different optimizers reach the same acc in the middle of the training (in the authors' rebuttal, optimizers reach 0.48 acc), as the speed of convergence. To my understanding, convergence rate is the speed that reaches optimal solution. However, the authors' metric firstly uses the middle state of training, not the optimal solution, and secondly, has not gauged the "speed", i.e. variations of values.* Yes, the reviewer is right.
We recomputed the speedup using the optimal position as the reference, and the revised estimate is $1\sim2\times$.
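The speedup measurement discussed in this thread, epochs needed by each optimizer to first reach a common reference accuracy, can be sketched as follows. The function names and the toy curves are hypothetical; the paper's actual curves come from its Figure 2:

```python
def epochs_to_reach(acc_curve, target):
    """First epoch (1-indexed) at which the accuracy curve reaches
    the target accuracy, or None if it never does."""
    for epoch, acc in enumerate(acc_curve, start=1):
        if acc >= target:
            return epoch
    return None

def speedup(baseline_curve, method_curve, target):
    """Ratio of epochs the baseline needs to the epochs the method
    needs to first reach the same reference accuracy."""
    eb = epochs_to_reach(baseline_curve, target)
    em = epochs_to_reach(method_curve, target)
    if eb is None or em is None:
        return None
    return eb / em
```

In the rebuttal's original example (Adam at 0.48 accuracy in ~85 epochs vs VR-Adam in ~20), this yields the ~4x figure; the reviewer's point is that choosing the optimal (final) accuracy as the target, rather than a mid-training accuracy, changes the answer, here to ~1-2x.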
Summary: This paper examines the improvement of training throughput in a large batch parallel training setting. By employing the gradient signal to noise ratio (GSNR) as a measurement of the generalization gap during training, the authors introduce a variance-reduced gradient descent method designed for large batch training scenarios. The authors provide theoretical analysis to substantiate that the proposed variance-reduced gradient descent (VRSGD) method exhibits superior generalization compared to stochastic gradient descent (SGD) and potentially achieves faster convergence. Additionally, experimental evaluations on BERT pretraining, ResNet training, and DLRM training are presented to demonstrate the superiority of the proposed method over alternative approaches for large batch training. Furthermore, orthogonal experiments and analyses pertaining to the behavior of GSNR and sensitivity to hyperparameters are conducted to further support the superiority of the proposed method. Strengths: 1. The research topic of large batch training is of considerable interest. 2. The proposed method is both simple and effective for large-batch training. 3. The theoretical analyses of the convergence rate and generalization are persuasive. 4. The proposed method consistently demonstrates improvements over other baseline approaches. 5. The paper is well-written and easy to understand. Weaknesses: 1. The contribution is limited. It seems that the authors just introduce GSNR into the large batch training, but do not highlight why GSNR is important to large batch training. 2. Although the theoretical analysis and experimental results presented in the study are persuasive, the rationale behind the necessity of the GSNR method for large-batch training remains unclear. It is essential to provide a more comprehensive explanation of why GSNR is specifically relevant and advantageous in the context of large batch training. 
While GSNR can potentially enhance generalization in various settings, a more explicit justification is required to elucidate its particular suitability and effectiveness for large-batch training scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her detailed and constructive comments. We respond to the reviewer's concerns step by step below. **Q:** *The contribution is limited. It seems that the authors just introduce GSNR into the large batch training, but do not highlight why GSNR is important to large batch training. Although the theoretical analysis and experimental results presented in the study are persuasive, the rationale behind the necessity of the GSNR method for large-batch training remains unclear. It is essential to provide a more comprehensive explanation of why GSNR is specifically relevant and advantageous in the context of large batch training. While GSNR can potentially enhance generalization in various settings, a more explicit justification is required to elucidate its particular suitability and effectiveness for large-batch training scenarios.* A more explicit justification for why our proposed method works in large batch scenarios is given below: 1. **Methodological perspective.** Our proposed method provides an element-wise level of learning rate adjustment that is finer-grained than existing methods and becomes more accurate as the batch size gets larger. The linear scaling rule uses the same large LR for all parameters. LARS/LAMB/LANS use large LRs for the normal layers but layer-wisely or block-wisely limit LRs when $||\theta_t||$ is comparable with its update quantity. VRGD, which we propose, **element-wisely** limits the update quantity for those parameters without confident gradient estimation (Fig.1b in the main context; large gradient variance or small GSNR). The GSNR estimate becomes more accurate when the batch size is larger. Therefore, when the batch size gets extremely large, this mechanism to stabilize training may become even more accurate and helpful. 2.
**Convergence rate perspective.** Applying our proposed method to basic optimizers can make the upper bound of the convergence rate much tighter when increasing the batch size. For example, VR-SGD's bound depends on the lower ($r_l$) and upper ($r_u$) bounds of the GSNR. A larger batch size brings smaller gradient variance (eq.43 of Appendix.B) and larger GSNR (both bigger $r_l$ and $r_u$), and may thus result in **a tighter bound with quicker convergence** (*verified by experiments*). 3. **Generalization gap perspective.** Our proposed method reduces more of the generalization gap when the batch size is larger. Based on the derivations in Sec 5.2, VR-SGD has a **much smaller generalization gap** than SGD in LB training (*verified by our ImageNet experiments shown in Table.3 of the main context*). When scaling up the batch size, this mechanism to reduce the generalization gap becomes even more useful. Table.3 of the main context shows that the generalization gap drops 47.1% at 32k, 48.8% at 64k and 68.3% at 96k. 4. **GSNR effectiveness perspective.** The theoretical explanation of the mechanism by which updating weights with smaller GSNR enlarges the generalization gap is comprehensively discussed in a previous study (Liu et al. 2020). We further carried out many ablation studies in Sec.7 and found that the final accuracy drops in large batch training without GSNR, which demonstrates its effectiveness in large batch scenarios. Furthermore, we add Fig.1 in the response PDF. Following previous discussions on the generalization gap (Foret et al. 2020, Kwon et al. 2021, Zhuang et al. 2022, Zhao et al. 2022, Ahn et al. 2023, Wang et al. 2021, Simsekli et al. 2019), we add a schematic to help explain how GSNR works to reduce the generalization gap. It shows that larger GSNR helps the weights escape from the large-generalization-gap area, while smaller GSNR attracts the weights to stay in the small-generalization-gap area. **Reference** Jinlong Liu, Guoqing Jiang, Yunzhi Bai, Ting Chen, and Huayan Wang.
Understanding why neural networks generalize well through GSNR of parameters. In 8th International Conference on Learning Representations, ICLR. OpenReview.net, 2020. Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. (2020). Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412. Kwon, J., Kim, J., Park, H., and Choi, I. K. (2021, July). Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In International Conference on Machine Learning (pp. 5905-5914). PMLR. Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., ... and Liu, T. (2022). Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065 Zhao, Y., Zhang, H., and Hu, X. (2022, June). Penalizing gradient norm for efficiently improving generalization in deep learning. In International Conference on Machine Learning (pp. 26982-26992). PMLR. Ahn, K., Jadbabaie, A., and Sra, S. (2023). How to escape sharp minima. arXiv preprint arXiv:2305.15659. Wang, X., Oh, S., and Rhee, C. H. (2021). Eliminating sharp minima from SGD with truncated heavy-tailed noise. arXiv preprint arXiv:2102.04297. Simsekli, U., Sagun, L., and Gurbuzbalaban, M. (2019, May). A tail-index analysis of stochastic gradient noise in deep neural networks. In International Conference on Machine Learning (pp. 5827-5837). PMLR. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer FtDV Comment: Thank you for the authors' response. Following a thorough review of the rebuttal, some of my initial concerns regarding the significance of the GSNR in the context of large-batch training have been alleviated to a certain extent. It is now evident that GSNR is helpful for generalization and convergence when dealing with larger batch sizes. In light of this clarifying information, I have chosen to retain my original evaluation score.
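The rebuttal's claim that the GSNR estimate becomes more accurate as the batch size grows can be illustrated empirically: the variance of the batch-mean gradient shrinks like 1/batch_size, so per-parameter GSNR estimated from larger batches is less noisy. A small synthetic sketch (the `mean_grad_std` helper and all numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_grad_std(batch_size, trials=2000, true_grad=1.0, noise=2.0):
    """Std of the batch-mean gradient estimate across many trials.
    Per-sample gradients are modeled as true_grad + Gaussian noise;
    the std of the batch mean shrinks as noise / sqrt(batch_size)."""
    samples = true_grad + noise * rng.standard_normal((trials, batch_size))
    return samples.mean(axis=1).std()
```

For example, `mean_grad_std(64)` is close to 2/8 = 0.25 while `mean_grad_std(4096)` is close to 2/64 ≈ 0.031, an 8x reduction in the noise of the gradient-mean (and hence GSNR) estimate.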
Summary: The paper proposes a new method for large batch training. It is based on the insight that the gradient signal-to-noise ratio (GSNR) of each parameter should be reflected in its learning rate, and hence modifies gradient descent to reduce the variance of the gradients. The paper then shows convergence rates of VR-SGD, which are asymptotically the same as SGD's, and states that VR-SGD is particularly suited to large batch training, where the GSNR will be high. The paper also shows that the proposed method has a smaller generalization gap than SGD in the large batch setting. Experiments are then performed to support these claims on a variety of tasks and architectures. Strengths: 1. The method is based on a solid insight. 2. The empirical results indicate decent gains over baselines. Weaknesses: 1. The assumptions of smoothness and bounded gradients seem a bit too strong. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can the assumptions for the theoretical analysis be relaxed? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her detailed and constructive comments. We respond to the reviewer's concerns below. **Q:** *The assumptions of smoothness and bounded gradients seem a bit too strong. Can the assumptions for the theoretical analysis be relaxed?* Yes, the bounded gradients assumption can be relaxed. Moulines et al. 2011 (their Theorem 4) and Nguyen et al. 2018 (their Lemma 2) derived that SGD can still be bounded without the bounded gradients assumption, though they still need the $l$-smoothness assumption. Based on the derivations of Theorem 1 in Johnson and Zhang (2013), we can derive a similar bound for our proposed method without the bounded gradients assumption. Taking $\lambda=\frac{1}{\sqrt{T}}$, we have $$\mathbb{E}||\nabla L(\theta_t)||^2 \leq \left[ \frac{1}{\gamma r_l (\sqrt{T}-2l)} + \frac{2l}{\sqrt{T}-2l} \right]^2 \mathbb{E}[L(\theta_t) - L(\theta_*)]$$ Therefore, when $\lambda$ gradually decreases with $T$, our proposed method still converges at $O(\frac{1}{\sqrt{T}})$ without the bounded gradients assumption. We have added this part in the appendix. Note that most previous optimizers use the same or stronger assumptions than ours. Table.1 in the Appendix shows that our assumptions are weaker than those of LARS/LAMB/DecentLaM and the same as common SGD. **Reference** Moulines, E., and Bach, F. (2011). Non-asymptotic analysis of stochastic approximation algorithms for machine learning. Advances in neural information processing systems, 24. Johnson, R., and Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems, 26. Nguyen, L., Nguyen, P. H., Dijk, M., Richtárik, P., Scheinberg, K., and Takác, M. (2018, July). SGD and Hogwild! convergence without the bounded gradients assumption. In International Conference on Machine Learning (pp. 3750-3758). PMLR.
--- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response, and stand by my review.
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive suggestions. We respond to each reviewer one by one. We have added new figures and tables in the PDF below; please click the button below to download it. Pdf: /pdf/b5c2eb954d713b4831a0006b6afef7c3faa7fe9a.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper focuses on using large-batch training to accelerate the training of neural networks. Specifically, the authors use a variance-reduced gradient descent technique to scale up the batch size and thereby accelerate training. The experimental results illustrate that the proposed method can scale up to larger batch sizes and further accelerate the training of ResNet, BERT and DLRM. Strengths: Strength: 1. This paper focuses on an important problem: accelerating neural network training, especially large-batch training. 2. The proposed method is very easy to follow. 3. The authors provide some results to verify the performance of the proposed method. Weaknesses: Weakness: 1. The authors could provide more visualization and analysis of the proposed method and why it can further scale the batch size. For example, does the proposed method help the model converge to a flat region? 2. The authors should provide more results on the wall time of each experiment and verify whether the proposed method can save training time. 3. I'm not sure whether you should compare your method with more baselines, since LARS and LAMB are not the current SOTA. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her constructive suggestions. We respond to the reviewer's concerns step by step below. **Q1:** *The authors can provide more visualization and analysis about the proposed method and why the proposed method can further scale the batch size. For example, can the proposed method help the model converge to a flat region?* Sure, we add Fig.1 in the response PDF. Following previous discussions on the generalization gap (Foret et al. 2020, Kwon et al. 2021, Zhuang et al. 2022, Zhao et al. 2022, Ahn et al. 2023, Wang et al. 2021, Simsekli et al. 2019), we add a schematic to help explain how GSNR works to reduce the generalization gap. It shows that larger GSNR helps the weights escape from the large-generalization-gap area, while smaller GSNR attracts the weights to stay in the small-generalization-gap area. We give more analysis of the reasons why our proposed method can scale the batch size: 1. **Methodological perspective.** Our proposed method provides an element-wise level of learning rate adjustment that is finer-grained than existing methods and becomes more accurate as the batch size gets larger. The linear scaling rule uses the same large LR for all parameters. LARS/LAMB/LANS use large LRs for the normal layers but layer-wisely or block-wisely limit LRs when $\|\theta_t\|$ is comparable with its update quantity. VRGD, which we propose, **element-wisely** limits the update quantity for those parameters without confident gradient estimation (Fig.1b in the main context; large gradient variance or small GSNR). The GSNR estimate becomes more accurate when the batch size is larger. Therefore, when the batch size gets extremely large, this mechanism to stabilize training may become even more accurate and helpful. 2. **Convergence rate perspective.** Applying our proposed method to basic optimizers can make the upper bound of the convergence rate much tighter when increasing the batch size.
For example, VR-SGD's bound depends on the lower ($r_l$) and upper ($r_u$) bounds of the GSNR. A larger batch size brings smaller gradient variance (eq.43 of Appendix.B) and larger GSNR (both bigger $r_l$ and $r_u$), and may thus result in **a tighter bound with quicker convergence** (*verified by experiments*). 3. **Generalization gap perspective.** Our proposed method reduces more of the generalization gap when the batch size is larger. Based on the derivations in Sec 5.2, VR-SGD has a **much smaller generalization gap** than SGD in LB training (*verified by our ImageNet experiments shown in Table.3 of the main context*). When scaling up the batch size, this mechanism to reduce the generalization gap becomes even more useful. Table.3 of the main context shows that the generalization gap drops 47.1% at 32k, 48.8% at 64k and 68.3% at 96k. 4. **GSNR effectiveness perspective.** The theoretical explanation of the mechanism by which updating weights with smaller GSNR enlarges the generalization gap is comprehensively discussed in a previous study (Liu et al. 2020). We further carried out many ablation studies in Sec.7 and found that the final accuracy drops in large batch training without GSNR, which demonstrates its effectiveness in large batch scenarios. **Reference** Jinlong Liu, Guoqing Jiang, Yunzhi Bai, Ting Chen, and Huayan Wang. Understanding why neural networks generalize well through GSNR of parameters. In 8th International Conference on Learning Representations, ICLR. OpenReview.net, 2020. Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. (2020). Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412. Kwon, J., Kim, J., Park, H., and Choi, I. K. (2021, July). Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In International Conference on Machine Learning (pp. 5905-5914). PMLR. Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., ... and Liu, T. (2022).
Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065. Zhao, Y., Zhang, H., and Hu, X. (2022, June). Penalizing gradient norm for efficiently improving generalization in deep learning. In International Conference on Machine Learning (pp. 26982-26992). PMLR. Ahn, K., Jadbabaie, A., and Sra, S. (2023). How to escape sharp minima. arXiv preprint arXiv:2305.15659. Wang, X., Oh, S., and Rhee, C. H. (2021). Eliminating sharp minima from SGD with truncated heavy-tailed noise. arXiv preprint arXiv:2102.04297. Simsekli, U., Sagun, L., and Gurbuzbalaban, M. (2019, May). A tail-index analysis of stochastic gradient noise in deep neural networks. In International Conference on Machine Learning (pp. 5827-5837). PMLR. **Q2:** *The authors should provide more results about the wall time of each experiment and verify whether the proposed method can save training time.* Sure, please see Table.1 and Table.2 of the response PDF. Both results show that large batch training greatly reduces training time. We have added them to the revision. **Q3:** *I'm not sure whether you should compare your method with more baselines, since LARS and LAMB are not the current SOTA.* Yes: in Table.1 of the main text, the previous SOTA we report for BERT pretraining is Adasum, which performed better than LAMB. As for ImageNet, to the best of our knowledge, LARS is still the SOTA large-batch optimizer. Newer methods such as ConAdv+AA are not included for comparison because they are adversarial-learning approaches rather than large-batch optimizers. --- Rebuttal Comment 1.1: Title: Thanks for your response! Comment: Thanks for your response! 1. I hope to see a clearer visualization of the loss landscape, such as the figure in this paper [1]. 2. Transformer models converge to sharp local minima more easily than CNNs.
If you focus on reducing the generalization gap, perhaps you should provide an analysis of generalization for a Transformer model, such as a vision transformer or some NLP models. 3. I'm happy to see the authors provide the training-time results in the attached PDF. That will make the paper stronger. 4. I still see a performance drop for your proposed method in Table 2 when scaling the batch size to 32k (for example, from 77.23% at 4k to 76.81% at 32k), whereas there is no drop from 4k to 32k for LARS. In my past experience with large batch training, 32k is not the bottleneck for ImageNet training, and a drop usually does not occur when scaling the batch size to 32k; you can also find such results in the original LARS paper. Could you please provide some explanation of this phenomenon? If GSNR works well for large batch training, the performance drop should only occur when the batch size is larger than 32k. [1] Visualizing the Loss Landscape of Neural Nets --- Reply to Comment 1.1.1: Title: We appreciate the reviewer's constructive discussion. Comment: **Q1:** *I hope to see a clearer visualization of the loss landscape, such as the figure in this paper [1].* Yes, we carefully read through paper [1] and found that their Fig.2a,d show the sharp minimum based on $L(\theta(\alpha))$, where $\theta(\alpha)=(1-\alpha)\theta+\alpha\theta'$, $\theta$ denotes the optimal parameters for the small batch and $\theta'$ those for the large batch. By taking $\alpha \in [-0.5, 1.5]$, they obtain the desired 1D loss landscape. We apologize for not including such a loss landscape in the last rebuttal; we will try to add it to the Appendix later, because we cannot upload a new PDF during the discussion period. The reason we showed a schematic figure is that we followed many previous papers, such as [2,3,4,5,6], which also give only schematics of the generalization gap. **Reference** [1] Visualizing the Loss Landscape of Neural Nets, NIPS 2018 [2] Surrogate gap minimization improves sharpness-aware training. ICLR 2022.
[3] Penalizing gradient norm for efficiently improving generalization in deep learning. ICML 2022. [4] Efficient Sharpness-aware Minimization for Improved Training of Neural Networks. ICLR 2022. [5] ASAM: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. ICML 2021. [6] Towards efficient and scalable sharpness-aware minimization. CVPR 2022. **Q2:** *Transformer models converge to sharp local minima more easily than CNNs. If you focus on reducing the generalization gap, maybe you should provide an analysis of generalization for a Transformer model, such as a vision transformer or some NLP models.* Sure. The table below shows the generalization-gap reduction on BERT pretraining, which is transformer-based. The result shows that our proposed method also reduces the generalization gap of transformer-based models, by 65.7% at batch size 64k. We will add this table to the revision.

| | LAMB | VR-LAMB (ours) |
|----------------|------|----------------|
| Train Loss | 1.11 | 1.31 |
| Test Loss | 1.46 | 1.43 |
| Generalization Gap | 0.35 | 0.12 (-65.7%) |

**Q4:** *I still find a performance drop for your proposed method in Table 2 when scaling the batch size to 32k, for example, from 77.23% at 4k to 76.81% at 32k, but there is no drop from 4k to 32k for LARS. In my past experience with large batch training, 32k is not the bottleneck for ImageNet training and a drop usually does not occur when scaling the batch size to 32k. You can also find such results in the original LARS paper. Could you please provide some explanation of this phenomenon? If GSNR works well for large batch training, the performance drop should only occur when the batch size is larger than 32k.* This is because of our hyper-parameter selection strategy, shown below. We did not tune the LR until the batch size reached 64k, which means $LR = 7 \times 2^{2}$ may not be the optimal LR and could be tuned for further improvement.
[2] also used a similar LR selection strategy and stated that "it is possible to achieve better results by further tuning the hyperparameters". The LARS results, however, were cited from [1], whose Table.6 shows that they fine-tuned the LR starting from batch size 32k. More detailed settings are listed in Appendix.D.

| Batch Size | LARS LR | VR-LARS LR (ours) |
|------------|---------|-------------------|
| 2k | - | $7 \times 2^{0}$ |
| 4k | - | $7 \times 2^{0.5}$ |
| 8k | - | $7 \times 2^{1}$ |
| 16k | - | $7 \times 2^{1.5}$ |
| 32k | 35 | $7 \times 2^{2}$ |
| 64k | 41 | 37 |
| 96k | 43 | 38 |

[1] Concurrent adversarial learning for large-batch training. ICLR 2022. [2] Large batch optimization for deep learning: Training BERT in 76 minutes. ICLR 2020.
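As an aside for readers following the GSNR discussion above, the per-parameter quantity is easy to sketch in code. The snippet below is an illustrative reconstruction (the function name `gsnr` and the synthetic data are ours, not from the paper): GSNR is the squared mean of per-sample gradients divided by their variance, computed element-wise, and its estimate stabilizes as the batch grows.

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    """Element-wise gradient signal-to-noise ratio.

    per_sample_grads: (batch, n_params) array, one gradient row per example.
    GSNR = mean(g)^2 / var(g), computed independently for each parameter.
    """
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean ** 2 / (var + eps)

# Synthetic example: three parameters with identical gradient noise but
# decreasing "signal" (expected gradient). The confidently estimated
# parameter gets a large GSNR; the noisy ones are suppressed.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, 0.1, 0.0])
grads = true_grad + rng.normal(scale=0.5, size=(4096, 3))
g = gsnr(grads)
print(g)  # g[0] >> g[1] >> g[2]
```

An optimizer in the spirit of the rebuttal's description would shrink the update for parameters with small GSNR; with a larger batch, the mean and variance estimates tighten, so this suppression becomes more reliable.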
White-Box Transformers via Sparse Rate Reduction
Accept (poster)
Summary: This paper proposes an optimization objective called sparse rate reduction, which builds on the earlier rate reduction objective [49]. By unrolling the iterative optimization of sparse rate reduction into neural layers, a transformer-like architecture can be obtained. The derived white-box transformer-like architecture achieves performance similar to ViT. Strengths: Overall the manuscript is well written and related works are properly cited and discussed; this is very insightful work. The idea is novel. This manuscript provides a significant extension to the previous ReduNet [49]. Rate reduction is extended to sparse rate reduction, from which transformer-like neural architectures can be derived by unrolling the iterative optimization process. The manuscript provides new insights concerning several important aspects of modern neural networks, i.e., the score function is shown to be connected to self-attention and rate reduction under an idealized token distribution. The results are promising, showing that white-box, unrolling-based neural network design might be a viable alternative to current black-box design. Weaknesses: The results on ImageNet are very promising, but it would be more convincing if the proposed white-box architecture could achieve SOTA performance under a fair comparison. Although overall the manuscript is well written, some sentences are too concise and a little confusing. I suggest the authors go over the whole manuscript and improve the text for general readers. 1) L143 and footnote 4: I think the explanation here is too concise, and the footnote confuses me. What are the separation and the mathematical roles? I cannot find related content elsewhere. 2) L185 and footnote 6: can you explain more about the content of footnote 6? What is the rigorous math here? 3) I think the motivation for the sparse coding (the $\ell^0$) term in eq. 1 can be further clarified.
I can understand the motivation for rate reduction [49], but I think it is not clear why sparse coding is introduced here. Further, is the $\ell^0$ norm the best implementation of sparse coding here? I hope the authors can discuss more about the design principles behind optimization metrics or objectives like eq. 1. Following the previous question, I think the optimization metric is closely related to the specific task. In my opinion, rate reduction has a natural connection to the classification problem. Can the authors comment on the connection between the optimization metric and specific tasks? If we consider object detection or more complex problems like image reconstruction, what is the general principle that can guide the design of optimization metrics? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments, and for your compliments on the insightfulness of the contribution, the novelty of the idea, the quality of the exposition, and the strength of the empirical results. Below we attempt to resolve the questions you posed. ## ImageNet results versus SOTA Thank you for the comment on the results. Please note that the main goal of this work is not to push the state-of-the-art, and as such the provided results incorporate minimal engineering compared to ViT or any more recent state-of-the-art model. With a more thorough engineering effort, and using the greater understanding of CRATE compared to ViT, one may potentially push the performance of CRATE-like models beyond the state-of-the-art using the proposed framework. Please see the Public Response for details on additional experiments that push the performance of CRATE. ## Terseness of presentation Thank you for the suggestion. We will use some of the extra space afforded by the camera-ready version to improve the text, especially by incorporating targeted feedback raised by you and other reviewers. ### Footnote 4 There is some more elaboration on this point in Section 2.5, but we recapitulate it here. The mathematical roles that are separated in this dichotomy are (i) the transformation of the data distribution towards the desired structured form (optimizers of the sparse rate reduction) in the forward pass (which we refer to as "optimization" in the footnote), and (ii) the learning of the parameters of these incremental transformations in the backward pass (which we refer to as "learning" in the footnote). Please let us know if this explanation has resolved the confusion; we will make appropriate revisions to the exposition in the paragraph at line 133 in the camera-ready version for improved clarity (merging the footnote into the text).
### Footnote 6 The "rigorous math" referred to in this footnote is the mathematical theory of diffusion models, advanced primarily in [1] and captured by two key concepts: - Given the score function for the noisy data distribution at a range of noise levels, a diffusion process that 'follows the score function' can be used to denoise the input noise distribution towards the data distribution. - In fact, it is not necessary to follow a (noisy) diffusion process to generate the data distribution, as there is a fully deterministic "probability flow" ODE, which also involves the data distribution's score function, that is mathematically equivalent to the score-following diffusion process. This means that even if one does not add noise while following the score function (as in CRATE), a suitable deterministic process can still transform the input distribution into the data distribution. We would be happy to provide more details in the discussion if necessary. ### $\ell^{0}$ norm explanation A brief answer is given in Section 2.1, but we reiterate it here. The rate reduction by itself is invariant to arbitrary rotations of the representations. To make the features more efficiently computable, and to give them human-interpretable structure (e.g., the principal components are just the standard basis vectors), we wish to align the representations with the coordinate axes, so that they become sparse. Thus, we penalize the $\ell^{0}$ norm, which counts the number of nonzero entries, hence the sparsity, of the representation $Z$. There are several "relaxations" of the $\ell^{0}$ norm which are efficiently optimizable; in Section 2.4 we picked arguably the most basic, namely the $\ell^{1}$ norm (cf. LASSO regression, [2]). More choices are possible, and may even yield better performance, but studying this is left to future work.
## Relation between unrolled optimization objective and task; rate reduction and classification Indeed, the rate reduction framework as introduced in [3] was targeted at the specific case of classification – it uses class labels $\Pi$ to determine how to group the samples. However, in CRATE, our design of the network architecture (described in Section 2) is completely independent of any labels for the data samples: we derive the architecture from the goal of learning a representation that optimizes the sparse rate reduction objective, and this objective incorporates learnable "local signal models" where the labels $\Pi$ were used in [3]. In our experiments, we eventually learn the parameters of the CRATE model by training on a supervised classification task, but the white-box construction of the model makes it applicable far beyond the setting of classification. We also note that rate reduction/information-gain-type objectives have recently been used to produce representations that are useful in a variety of tasks, including self-supervised learning [4], image segmentation [5], and generative modeling [6]. We hope that the points raised above resolve the doubts you have about this work and constitute satisfactory responses to your questions. Please let us know if you have further questions or comments. [1]: Song, Yang et al., "Score-Based Generative Modeling through Stochastic Differential Equations," in International Conference on Learning Representations, 2021. [2]: Wainwright, Martin J. High-dimensional statistics: A non-asymptotic viewpoint. Cambridge University Press, 2019. [3]: Yu, Yaodong, et al., "Learning diverse and discriminative representations via the principle of maximal coding rate reduction." Advances in Neural Information Processing Systems 33 (2020). [4]: Ding, Tianjiao, et al., "Unsupervised manifold linearizing and clustering." arXiv preprint arXiv:2301.01805 (2023).
[5]: Yang, Allen Y., et al., "Unsupervised segmentation of natural images via lossy data compression." Computer Vision and Image Understanding 110, no. 2 (2008): 212-225. [6]: Dai, Xili et al., "CTRL: Closed-loop transcription to an LDR via minimaxing rate reduction." Entropy 24, no. 4 (2022): 456. --- Rebuttal Comment 1.1: Comment: I have read the other reviews and author replies. My concerns are well addressed, and thus I have decided to increase my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer tJjQ Comment: Thank you again for thoroughly reviewing our manuscript and response and for raising your score. We are grateful for your valuable feedback on our work, which will no doubt improve it. Please let us know if you have any other questions or comments during the discussion period.
Summary: The authors propose an interpretation of the transformer architecture wherein the component blocks may be interpreted as unrolled optimization steps: * Multi-Head Self-Attention (MHSA) is said to be approximately the same as Multi-Head Subspace Self-Attention, which is an unrolled optimization of the following objective: * $ \sum_{k=1}^K R ( \mathbf{U}_k^* \mathbf{Z})$ (see eq. 8), where $R$ is an estimator for the "lossy coding rate" $R(\mathbf{Z}) = \frac{1}{2} \text{logdet} ( \mathbf{I} + \frac{d}{N \epsilon^2} \mathbf{Z} \mathbf{Z}^* )$ * Multi-Layer Perceptrons are said to be approximately the same as Iterative Shrinkage-Thresholding Algorithms (ISTA), which are an unrolled optimization of the following objective: * $\lambda ||\mathbf{Z}||_1 + ||\mathbf{Z}^{l+1/2} - \mathbf{D} \mathbf{Z}||^2_F$, where $\mathbf{Z}$ are activations, $\mathbf{Z}^{l+1/2} = \mathbf{D} \mathbf{Z}^{l+1}$, and this is justified as being a relaxed LASSO objective that will sparsify the representation of $\mathbf{Z}$ The relationship between the architecture and the objective is quite involved and I cannot summarise it here. As a consequence of constructing the network in this way, the authors are able to track both of these objectives as a way to gain insight into how a network is operating; this is the "white-box" of the title. Specifically, they track the sparse coding rate $R$ mentioned above and the sparsity at each layer in a network. It is then possible to observe the sparse coding rate and sparsity both decrease with the layer index during training. In addition, the network modifications required to match the theory do not appear to reduce performance on ImageNet compared to a similarly sized Vision Transformer. Strengths: Understanding the performance and popularity of self-attention in deep learning is a valuable goal, and finding a theoretical motivation for this specific architecture could be very useful to anyone using or developing transformers.
If this goal is achieved by this paper, then it is a significant work. The strengths of this paper are therefore mainly found in Figures 3 and 4, which show the statistics $R^c$ and sparsity. Figure 3 compares train versus validation, while Figure 4 shows how these statistics change during training. It is interesting to observe these values decreasing as the signal travels through the network, as this approximately matches what is predicted by the theory. These are the original empirical observations of the paper, and they support the paper's claims. The motivation and experimental results are both clearly stated. Section 1 provides a reasoned argument for why learning representations that are interpretable would be valuable in current deep learning practice, and why this is lacking in current architectures, whether diffusion or autoregressive models. The results in Table 1 also address concerns that this method would reduce performance; it looks like performance is mostly maintained, which is promising. Weaknesses: Addressing the following weaknesses would substantially improve the paper * Legibility of the derivation: the derivations in Section 2 are dense and difficult to follow * The sparse rate coding function $R^c$ is key but is only introduced in Section 2.3; introducing it earlier and giving the reader an idea of how it relates to the distribution of $\mathbf{Z}$ would make it much easier to understand. For example, noting its relationship to the entropy of a multivariate normal, how it grows with the variance of $\mathbf{Z}$, etc. * At the end of Section 2.2 the reader has just finished reading a derivation of "Self-Attention via Denoising Tokens Toward Multiple Subspaces" and then immediately afterwards is faced with "Self-Attention via Compressing Token Sets through Optimizing Rate Reduction". I am still confused as to which derivation I'm supposed to pay attention to for understanding the MSSA block.
If both derive the same MSSA block, then include only one and put the other in the Appendix; this will also help free up space for more experimental results * Do not rely on terminology from previous papers that is not common knowledge; for example, if you agree that most readers will not understand the usage of "incoherent" without explanation, do not use it like that. The same goes for "linearized and compact". * On line 123 there's a reference to (8), which appears to be the ImageNet paper from 2008, and I could not find a formal definition of the lossy coding rate in it * The theory demonstrates how each layer could be performing an unrolled optimization step, but it does not explain why this is beneficial to the overall problem of learning a function, such as predicting the class label using the entire network; in fact, the target class labels are not present in the notation. I believe the entire network was optimized according to the cross-entropy loss, so where is that in Section 2? * Additional experimental results would be worthwhile: * Table 1 does not contain the results mentioned on lines 354-355 showing the scaling performance of ViT. Even if these results are from other work, including them with a citation could be valuable in context * It is known that ViT architectures underperform on ImageNet versus larger datasets. It would be valuable to see results training on ImageNet-21k, as in the original ViT paper, but this may be beyond the authors' resource capacity * Computing $R^c$ and sparsity with depth in public pretrained models would also be useful, either to demonstrate that these models fail to minimize these implied objectives or to demonstrate that this is why transformers work * Comparing these statistics to activation norms would be a good comparison Technical Quality: 3 good Clarity: 3 good Questions for Authors: What experiment could you plan that would disprove the theory in Section 2?
What architectural modification would not allow a low sparse coding rate or sparsity but would still allow similar performance when trained? What do these results imply for transformer design? Can you predict anything that is obviously incorrect or valuable just from these results? Can you design a new layer that performs an additional form of unrolled optimization that is also useful? Is there a regime of transformer operation that completely fails where this theory would provide a useful insight? I think this direction of research is valuable, and if I could understand precisely why the white-box observations in Figures 3 and 4 are extremely valuable, I would increase my rating. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation of trying to explain why a given deep learning architecture works is: 1. A method becomes popular because it is surprisingly effective 2. A theory is constructed to explain why it works 3. The theory demands some small change to the network which only slightly reduces performance 4. The theory does not immediately permit an improvement to the method that is valuable I think this is why Figures 3 and 4 are key. If the statistics shown in Figures 3 and 4 were extremely valuable for understanding transformer training dynamics, then it would be obvious that this work is significant. It could be that they are, but I do not see this argument made clearly enough in the paper. Alternatively, maybe there is some architectural or training improvement that this theory implies that would be extremely valuable.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Below, in response to the issues you have raised, we briefly reiterate how we view our work’s core motivations and contributions, then follow with precise responses to specific points raised in your review. Unfortunately due to space constraints we cannot answer all points in full detail, but we are glad to answer any follow-ups in the discussion phase. ## General Comments Your review suggests that our work’s goals and contributions are wrapped up in providing an interpretation of the popular transformer architecture, and as a byproduct obtaining implications for the design of transformer/self-attention architectures. We agree that this is an excellent path to achieving high-impact research in this area. However, we do not believe it is the only way. We would like to reiterate that the goal of our work *is not to directly explain the existing transformer, but to introduce a new white-box alternative whose operational characteristics are both different and transparent in a useful way.* As we describe in Section 2 of the submission, the model we propose is designed around the principle of learning a representation for nonlinear, multimodal data by incrementally transforming it to a standardized form. Conceptually, we derive our architecture to achieve this goal following an unrolled optimization perspective, leading to a derivation where we understand the role played by each parameter (i.e., a “white-box architecture”). Experimentally, we demonstrate that for a supervised classification task, the characteristics of the learned model agree with its white-box design, and performance is not sacrificed much either, in terms of both accuracy and scaling. 
Crucially, experiments on pretrained ViTs do not show the same characteristics as our white-box model, suggesting that our white-box architecture has novel and highly interpretable characteristics, some of which are visualized in the Appendix (see Figures 6 to 11), that will be useful for network design and analysis beyond classification. ## Responses to Individual Points ### Results training on ImageNet-21k Please see the discussion in the general response. ### Computing $R^c$ and sparsity in public pretrained models vs CRATE (the value of Figures 3 and 4) Thank you for your insightful suggestion. We have conducted new experiments to evaluate the $R^c$ and sparsity of token representations from each layer of a pre-trained ViT-Base (downloaded from the `timm` GitHub repo). We summarize the results in Figure 1 of the uploaded rebuttal PDF. We find that without our white-box design, the vanilla ViT does not optimize our proposed sparse rate reduction objective. This contrasts with the results shown in Figures 3 and 4 of the work, wherein we see that the $R^c$ and sparsity values decrease layerwise for CRATE, in accordance with our theory. Since CRATE is an architectural modification of the vanilla ViT, we think this presents a compelling answer to your questions about an architectural modification that achieves similar performance without the same white-box characteristics, as well as a compelling experiment that could falsify our theory in Section 2 (and fails to do so). We will add these results and this comparison to Section 3 of our main body in the camera-ready version. Overall, this shows that the CRATE model, while looking and performing quite similarly to a transformer in experiments, is fully interpretable through the perspective of sparse rate reduction, in a manner that is distinct from the black-box ViT.
This is a significant advance in the development of layer-wise interpretable networks: to our knowledge, this is the first to achieve performance comparable with standard models such as ViT on ImageNet. ### Implications of our results for transformer design Both our theoretical and empirical results suggest that it is possible to design transformer-like architectures from the principle of unrolled optimization. One valuable design choice from our work is that QKV heads in attention may not be necessary, which could help reduce the number of model parameters (indeed, our design, instantiated in CRATE-Base, has about 25% of the parameters of ViT-Base while demonstrating comparable performance) and make the whole network more efficient. Furthermore, both the MSSA block and the ISTA block are simpler and more interpretable than the existing multi-head attention block and MLP block, and we hope such a minimal and functional network architecture can further improve our understanding of transformer-like architectures. ### Unrolled optimization, class labels in the derivation Our CRATE architecture is derived from the goal of transforming the data distribution to a structured form, rather than from a specific task. This is why the conceptual derivation in Section 2 does not include any discussion of labels: although labeled samples may be used to learn the parameters of the CRATE model, so long as the downstream training task requires semantically meaningful representations of the data distribution, the exact training configuration is of secondary importance. Intuitively, once we can identify the low-dimensional representations of the high-dimensional data (i.e., via optimizing the sparse rate reduction), such representations are effective for classification problems.
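For readers who want a concrete handle on the compression term discussed throughout this thread, the lossy coding rate stated in the review above, $R(\mathbf{Z}) = \tfrac{1}{2}\,\mathrm{logdet}(\mathbf{I} + \tfrac{d}{N\epsilon^2}\mathbf{Z}\mathbf{Z}^{*})$, can be evaluated in a few lines. This is an illustrative sketch (the function name and the synthetic tokens are ours, not the evaluation code used in the paper); it shows that token sets lying near a low-dimensional subspace have a smaller coding rate, which is the quantity the layers are designed to drive down.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Lossy coding rate R(Z) = 0.5 * logdet(I + d/(N * eps^2) * Z Z^T).

    Z: d x N matrix holding N token representations in R^d.
    """
    d, N = Z.shape
    # slogdet is numerically stabler than log(det(...)) for larger matrices.
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (N * eps ** 2)) * Z @ Z.T)
    return 0.5 * logdet

rng = np.random.default_rng(0)
Z_iso = rng.normal(size=(16, 100))                            # isotropic tokens
Z_low = rng.normal(size=(16, 3)) @ rng.normal(size=(3, 100))  # rank-3 tokens
print(coding_rate(Z_iso) > coding_rate(Z_low))  # prints True: low-rank tokens are cheaper to code
```

Measuring this quantity at each layer of a network, as in Figures 3 and 4, amounts to applying `coding_rate` to the token matrix produced by that layer.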
We believe that the unrolled optimization perspective is a major potential advantage of our white-box framework: in settings where prior information is available about the data distribution (e.g., in medical imaging applications or other problems with scientific data), our white-box design allows these structures to be incorporated in a transparent way. ### Comments on the presentation Thank you for your helpful comments. We will incorporate your suggestions in the camera-ready version, using the extra space. We hope that the points raised above help clarify the significance of our contributions. --- Rebuttal Comment 1.1: Title: Falsification Results Comment: In reply to this comment: > We find that without our white-box design, the vanilla ViT does not optimize our proposed sparse rate reduction objective. This contrasts with the results shown in Figures 3 and 4 of the work, wherein we see that the $R^c$ and sparsity value decrease layerwise for CRATE, in accordance with our theory. Since CRATE is an architectural modification of the vanilla ViT, we think this presents a compelling answer to your questions about an architectural modification that achieves similar performance without the same white-box characteristics, as well as a compelling experiment that could falsify our theory in Section 2 (and fails to do so). I don't understand the results presented in Figure 1 of your PDF rebuttal: 1. The sparsity of a vanilla ViT should not be 1.0; in that case no activations would ever be sufficiently negative entering the GeLU activation functions, which would mean the model is failing to learn nonlinear functions. Specifically, [other papers][lazy] have recently demonstrated activation sparsity of 6.3% in vision transformers. 2. While the coding rate decreases for CRATE as the layer index increases, I don't see this as a very conclusive result: it only decreases 27% through the entire network, while ViT-B decreases 15% from layers 4 to 8.
I apologise for the confusion, because I did not state what I thought your hypothesis was, which makes it difficult to talk about falsification. The hypothesis I had in mind was, "sparse rate reduction is sufficient for learning useful functions in deep networks". In that case, I don't see this as falsifying anything; if sparse rate reduction is critical for your network to learn, then what you need to do is ablate those capabilities from the model and show that it is no longer able to learn. In fact, if I accept the statement above that the ViT has no capability for sparse rate reduction and is still able to outperform CRATE, then I can only conclude that the sparse rate reduction in CRATE is irrelevant. I would attribute the performance to the architectural similarity to a transformer and the effectiveness of contemporary minibatch SGD on a cross-entropy objective. The derivation is interesting and the results matching contemporary networks are promising, but I don't see what the "white box" buys you. Specifically: 1. It's not significantly correlated with performance; your results comparing to ViT demonstrate this 2. It doesn't provide a significant benefit in architectural design beyond prescribing a block that is similar to a transformer 3. It doesn't change how models are trained in any significant way (unless the paper fails to mention that this model converges in a significantly different way from a transformer) 4. It encourages sparsity, but the computational benefit isn't explored in the paper, nor is it demonstrated that this sparsity is significantly lower than it is now known to be in transformers. For example, ["The Lazy Neuron Phenomenon"][lazy] demonstrates 6.3% nonzero entries in ViT-B16 5.
Interpretability is a fuzzy concept, but I don't see any experiments in the paper aimed at interpreting what the network is doing based on the sparse rate reduction metrics; the experiments simply observe the metrics decrease while the network learns [lazy]: https://arxiv.org/abs/2210.06313 --- Reply to Comment 1.1.1: Title: Discussion with Reviewer YxEu (Part 1) Comment: We are grateful for you engaging with our rebuttal further, and for your critical perspective on the work, which will no doubt improve it. Thank you also for pointing out that **‘[t]he derivation is interesting and the results matching contemporary networks are promising.’** ### Interpreting the requested experiments on Figures 3/4 on public ViTs Our results are an accurate reflection of the experiment that you suggested (i.e., computing $R^c$ and sparsity as a function of depth). The last two columns of the first two rows of Figure 1 in the .pdf rebuttal evaluate the sparsity of the tokens after the second block of each transformer layer, $z_{\ell} = \mathrm{MLP}(\mathrm{LN}(z'_{\ell})) + z'_{\ell}$, as defined in Eq. (3) of the ViT paper [1], which makes this a consistent comparison with how we evaluate the sparsity of the second block (i.e., the ISTA block) of each CRATE layer. We applied the original weights and architecture of the public pre-trained ViT model from the `timm` package. The paper [2] you mentioned, by contrast, evaluated the sparsity of the hidden-layer output of the MLP, which differs from our measurement; moreover, [2] replaced the GeLU activation with a ReLU activation in the MLP layer, and we did not find public checkpoints for the models in [2]. As for the $R^c$ results, although we can agree that a subjective interpretation is possible, we think it is unambiguous that a network derived from unrolled optimization “optimizes” the objective if the objective trends in the appropriate direction on average. 
The result shows that for the compression part of the objective, this is true of CRATE, and not true of ViT; the CRATE-S model’s $R^c$ term is reduced by about 30% over the course of forward propagation, whereas the analogous terms for ViT-S and ViT-B increase by about 50%. If you would like to see additional comparisons here, we are happy to run them and report the results. **We want to state clearly that the results we are reporting are accurate**; we are only providing our interpretations of the results of the experiments you suggested. But let us emphasize that evaluating the $R^c$ and sparsity metrics at the points in the network where we do is most reasonable for CRATE precisely because _we have designed the network to learn a representation that has these characteristics_. Your suggested experiments demonstrate that the token embeddings of the ViT – analogous to what we evaluated in Figures 3 and 4 – do not have these same properties. However, this does _not_ imply that the ViT does not learn low-dimensional or parsimonious (e.g., compressed and sparse) representations of the data. Rather, it implies that the ViT’s learned representations are less accessible, and thus harder to evaluate, due to its parameter-redundant black-box design. This is a key benefit of our derivation, and the simplified white-box architecture of CRATE: the places where the representations are transformed to standard forms (axis-aligned, hence sparse, orthogonal subspaces) are completely exposed to the network architect, removing any ambiguity in measuring these quantities. We believe these insights present an excellent opportunity for follow-up work to better understand the ViT, as well, but this is firmly out of scope of the present submission. [1]: A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” in International Conference on Learning Representations, 2021. [2]: Z. 
Li et al., “The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers,” in The Eleventh International Conference on Learning Representations, 2023. --- Reply to Comment 1.1.2: Title: Discussion with Reviewer YxEu (Part 2) Comment: ### Our work’s hypotheses Thank you for clearly stating your thinking on this point. We think you may be misunderstanding our work’s principal experimental hypothesis: "sparse rate reduction is sufficient for learning useful functions in deep networks" seems to us to be a misinterpretation. Let us also mention in this connection that your assertion > if I accept the statement above that the ViT has no capability for sparse rate reduction and is still able to outperform CRATE then I can only conclude the sparse rate reduction in CRATE is irrelevant. does not seem to be logically sound: **we have not claimed anywhere that using the sparse rate reduction is necessary to construct high performance deep models**. In fact, its role in our derivation is quite the opposite: its use in the design of the architecture promotes the learning of mathematically-interpretable representations of the data in the network. Please see the discussion of “what white-box buys us” for more on this point. To clearly state our central hypothesis, let us reiterate our primary motivations, which were written in Section 2.1. Our goal is to design a network architecture that transforms the data to a mixture of nearly-orthogonal axis-aligned subspaces, the optimizers of the sparse rate reduction [3]. We thus obtain our architecture from unrolled optimization on the sparse rate reduction, then learn its parameters with backpropagation, since the structure of the data distribution is unknown. In particular, since we are learning these parameters, there is no guarantee that the resulting network will optimize a sparse rate reduction objective for the data distribution. 
This leads precisely to **our main hypothesis, that *it is possible to train a transformer-like architecture (i.e., CRATE) to simultaneously achieve high accuracy at scale and optimize the sparse rate reduction***. The results in the left panel of Figure 4 demonstrate that at random initialization, the CRATE-Small model does not optimize the sparse rate reduction for the data distribution – only through learning does the network optimize the sparse rate reduction for the data distribution. Note that in this line of reasoning, the *goal* is to obtain a useful representation of the data. We have argued in the introduction of the submission why this goal is valuable and a central ‘grail’ for learning. **The goal is not only to obtain a network with high performance; the goal is to obtain a white-box network which learns useful representations.** [3]: Y. Yu et al., "Learning diverse and discriminative representations via the principle of maximal coding rate reduction." Advances in Neural Information Processing Systems 33 (2020). --- Reply to Comment 1.1.3: Title: Discussion with Reviewer YxEu (Part 3) Comment: ### What white-box buys us We would like to push back on your characterization of our white-box model. In our work, a “white-box model” can be thought of as a model whose architecture and parameters are derived mathematically from first principles, in a manner where the data distribution plays a central role. In this view, your first assertion is self-evident: the most natural white-box model for representation learning would be sparse coding of the data, possibly in a learnable signal dictionary, which gives a mathematically-interpretable and practically-robust model with performance that unfortunately cannot match that of modern deep learning architectures. Our contribution is to present a white-box derivation of a transformer-like architecture that is simultaneously highly performant. 
We truly believe there is significant novelty in this contribution: despite notable efforts from the theoretical community to suggest possible interpretations for the self-attention operation in transformers (e.g., summarized in [4]), a holistic and practically-verified interpretation for an entire transformer-like block (i.e., both the self-attention operation and MLP) has not been proposed before our work. In response to your second point, we would like to reiterate that as we wrote in the rebuttal, there are in fact **concrete practical implications** of our work for standard transformers: specifically, that **the QKV matrices in self-attention layers of ViT are redundant, and can be combined to save almost a factor of 4 in the overall parameter count**, with only a minor performance hit that can surely be reduced further with additional engineering work. Regarding your remaining points, we already mentioned the interpretability experiments we conducted in the submission in our rebuttal “**General Comments**” (e.g., visualizing the learned dictionaries and subspaces of CRATE in Figures 6 through 11 in the appendix); we think studying the computational benefit of sparsity based on our results is an interesting direction for future work, but firmly out-of-scope for the present work. We appreciate from your response here and to `AAKz` that you harbor some skepticism of research on the “model-centric” understanding of deep networks – your valuation of our work seems to be primarily a function of the extent to which such work directly implies improvements to specific metrics in practice. Consider that, if some of the significant methodological innovations in deep learning from the last five years were subjected to the same standard, they would have been dismissed – for example, diffusion models were not demonstrated to have sample quality anywhere close to the state-of-the-art GANs of the time [5]. 
The important aspect of these works was their conceptual insight that pointed the community towards more principled approaches and led to tremendous performance gains in the long run. We believe that CRATE has similar potential for future developments – it becomes possible to realize other novel improvements not just through empirical design, but also by using the guidance of principles from optimization and compression through the white-box approach. [4]: R. Vidal, “Attention: Self-Expression Is All You Need,” Sep. 29, 2021. Accessed: Apr. 05, 2022. [5]: J. Sohl-Dickstein et al. "Deep unsupervised learning using nonequilibrium thermodynamics." International conference on machine learning. PMLR, 2015.
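To make the layerwise measurements debated in this thread concrete, here is a small NumPy sketch of the two metrics at issue — the lossy coding rate $R(Z) = \frac{1}{2}\log\det(I + \frac{d}{n\epsilon^2} ZZ^\top)$ from the rate reduction literature and the fraction of zero entries. This is not the authors' evaluation code; the "layer outputs" are synthetic stand-ins constructed to be progressively compressed and sparsified, as the theory predicts for CRATE:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Lossy coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T)
    for a d x n matrix of token representations (cf. Yu et al., 2020)."""
    d, n = Z.shape
    # slogdet returns (sign, log|det|); the matrix is positive definite here.
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def sparsity(Z, tol=1e-8):
    """Fraction of (near-)zero entries in the representation."""
    return float(np.mean(np.abs(Z) <= tol))

# Toy stand-ins for layer outputs: shrinking norm (compression) and an
# increasing fraction of zeroed entries (sparsification) with depth l.
rng = np.random.default_rng(0)
Z0 = rng.standard_normal((64, 196))
layers = [Z0 * (0.8 ** l) * (rng.random((64, 196)) > 0.1 * l) for l in range(6)]

rates = [coding_rate(Z) for Z in layers]
sp = [sparsity(Z) for Z in layers]
assert all(r1 >= r2 for r1, r2 in zip(rates, rates[1:]))  # R^c trends down
assert all(s1 <= s2 for s1, s2 in zip(sp, sp[1:]))        # sparsity trends up
```

The point of contention in the thread is then only *where* in a network these quantities should be measured, not how they are computed.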
Summary: This paper proposes to structure a classification pipeline based on Transformer networks using precisely defined mathematical operators that are designed to perform a gradient step to minimize a well-defined objective, e.g. a Lasso objective for sparse representation, or maximizing the auto-correlation between "noisy" and "denoised" tokens. As such, it not only proposes a new architecture with some shared weights, but it also proposes an interpretation for the role of each block in achieving the goal of representation learning. The pipeline is tested on popular image classification benchmarks with end-to-end learning (ImageNet) as well as transfer learning (pre-trained on ImageNet, fine-tuned on CIFAR-10/100, Oxford Flowers and Pets). Strengths: The paper really proposes a new architecture based on an intuition: "the objective of representation learning is to compress and transform the distribution of the data towards a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces". They propose an architecture which resembles the visual transformers with a strong effort of modeling, i.e. trying to assign an objective to the usual transformer blocks (i.e. multi-head attention). The proposed architecture is conceptually significantly simpler than ViT, for example, and the performance drop is arguably very slight (-1% top-1 on ImageNet). A posteriori analysis before and after training (e.g. Figure 4) seems to validate the intuition of the authors for the coding-rate aspect (it seems less clear for sparsity). This modelisation work is tough work, and this paper is a significant contribution. Weaknesses: To me there is a caveat about the sparse coding hypothesis. Line 252: "In our implementation, motivated by Sun et al. [29] and Zarka et al. 
[31], we also add a non-negative constraint to Z^{l+1}". This non-negative constraint is not just a detail: compared to a ReLU, the soft-thresholding reduces and possibly zeros the coding coefficient but it preserves the sign, i.e. the phase. The ReLU collapses the sign, i.e. the phase. This "detail" is also under-estimated in Sun et al. [29] and Zarka et al. [31]. I suggest the authors read the follow-up work by Guth et al. https://arxiv.org/pdf/2110.05283.pdf especially Section 4, "Phase Collapse Versus Amplitude Reduction". This might give further intuitions in this foggy world of modeling neural networks. I'm not claiming that phase collapse is the good interpretation for this block, but for sure SoftShrink vs ReLU, i.e. sparse code vs non-negative sparse code, is probably not a detail. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How much does the performance degrade when replacing ReLU by softshrink? Do you have ideas / remarks / intuitions on the possible importance of this non-negativity constraint? How does it articulate with the mixture-of-Gaussians / low-dimensional subspace hypothesis? Gaussians do not care about the sign, do they? Note that with non-negativity constraints, a subspace becomes a cone. Why would it be important to have a cone rather than a subspace? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Apart from the aspect mentioned below, this is a really strong work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments, and your compliments on the quality of the contribution, the strength of the ideas, and the empirical insights. You bring up a very interesting point: the difference between the sparse coding and non-negative sparse coding formulations may seem important. In the sequel, we will attempt to explain why in fact the choice does not make too much difference conceptually, algorithmically, or empirically. > *How much does the performance degrade when replacing ReLU by softshrink?* Regarding the empirical performance drop when using the regular sparse coding formulation, we report that the CRATE-Base model trains comparably (~67.6% top-1 accuracy on ImageNet-1K, which is a drop of 3.2% compared to the ReLU case, using $\lambda = 10$ and all other hyperparameters the same as in the original CRATE-Base evaluation on ImageNet; cf. Table 1 in the paper and the rebuttal .pdf file). The results are summarized in Table 2 of the .pdf file. We will add this result to our camera-ready version. The message is that the two networks train comparably well, and one can push performance on either one by more dedicated hyperparameter tuning. > *I'm not claiming that phase collapse is the good interpretation for this block, but for sure SoftShrink vs ReLU, i.e. sparse code vs non-negative sparse code, is probably not a detail.* Thank you for bringing the interesting work [1] to our attention. Regarding the specific issue of phase collapse discussed in [1], it is our understanding that the effect of phase collapse analyzed in [1] is to better separate out the means of different classes within a classification task. While this may be a cause of the increase in classification accuracy reported above, we believe that our method will be applicable beyond just classification tasks. 
Indeed, in our framework, we contend that the purpose of the training process is to learn the local signal models at each layer (see e.g., Section 2.5). From this perspective, so long as the downstream training task requires semantically meaningful representations of the data distribution, the exact training configuration is of secondary importance. In particular, we may use self-supervised learning methods to learn the signal models, whence there may not be any well-defined notion of class mean, but such exploration is left to future work. In the camera-ready version, we will expand the discussion to include the work [1] and its discussion of phase collapse along with these clarifications. > *Do you have ideas / remarks / intuitions on the possible importance of this non-negativity constraint? How does it articulate with the mixture of Gaussian / low-dimensional subspace hypotheses? Gaussians do not care about the sign, do they?* > *Note that with non-negativity constraints, a subspace becomes a cone. Why would it be important to have a cone rather than a subspace?* This is a very good point; if we push a set of input representations through a non-negative sparse coding layer, they will always end up as non-negative, and thus cannot have marginal distributions equal to (an approximation of) a mixture of zero-mean Gaussians, but rather some other distribution. If we propagate this non-negative constraint to the sparse rate reduction problem, we obtain a “nonnegative sparse rate reduction” problem: $$\max_{f \in \mathcal{F}} \mathbb{E}\big[\Delta R(Z \mid U_{[K]}) - \lambda \|Z\|_{0} - \chi(Z \geq 0)\big]$$ where $\chi$ denotes the characteristic function of a set and the algebraic definition of $\Delta R(Z \mid U_{[K]})$ (as a linear combination of logdet functions) is given in the paper. 
Our claim that the CRATE model transforms the data to a mixture of incoherent subspaces stems from the analysis of [2] of the minimizers of the rate reduction objective; in this view, understanding the questions you raise about representations in the presence of nonnegative soft thresholding amounts to whether optimal configurations in our nonnegative sparse rate reduction objective can be understood analogously. We sketch an argument below to this effect. Although formal analysis of the optimal points of the sparse rate reduction maximization problem is out of scope of this work, we see that the rate reduction maximization (i.e., $\max_{f \in \mathcal{F}} \mathbb{E}[\Delta R(Z \mid U_{[K]})]$) has optimal points characterized similarly to [2, Theorem A.6], namely that the representation of each distribution in the mixture is supported on a subspace with nearly isotropic covariance on this subspace, and the supporting subspaces are (nearly) orthogonal. Adding the sparsity term for some regularizer $\lambda$ would enforce the axis-alignment of the supporting subspaces; when adding in addition the nonnegativity constraint, following through the proof of [2, Theorem A.6] suggests that the argument goes through with suitable modifications (in particular, considering the conclusions for the covariance rather than $Z Z^T$). This sketch suggests that the statistical and geometric properties of the optimal representation remain the same when adding the non-negative constraint to the sparse rate reduction formulation. Since our CRATE model is derived by unrolling this objective, we believe this justifies the conceptual picture we describe in the submission around the ISTA block, although we use ReLU instead of soft thresholding. In the camera-ready version, we will expand the discussion to clarify these conceptual points. We again thank you for your detailed and interesting point, and hope we have provided satisfactory responses to your questions. 
Please let us know if you have further questions or comments. [1]: Guth, Florentin et al., "Phase collapse in neural networks." arXiv preprint arXiv:2110.05283 (2021). [2]: Yu, Yaodong et al., "Learning diverse and discriminative representations via the principle of maximal coding rate reduction." Advances in Neural Information Processing Systems 33 (2020). --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for this interesting discussion, happy to read that you have already lines of explanation for the non-negativity constraint, look forward reading the camera ready version. --- Reply to Comment 1.1.1: Title: Response to Reviewer vvYz Comment: Thank you again for thoroughly reviewing our manuscript and response. We are grateful for your valuable feedback on our work, which will no doubt improve it. Please let us know if you have any other questions or comments during the discussion period.
Summary: The authors propose a novel theoretical framework which shows that the popular transformer can be motivated by maximizing rate reduction. The key idea of this work follows the previous information-gain framework, mostly ReduNet, but with more careful treatment of connections to transformer architecture design. Via a few approximations, the authors show that maximizing rate reduction with a sparsity constraint indeed derives a transformer-like deep structure. The derived white-box transformer-like architectures are verified on multiple datasets. Strengths: 1) The paper is overall well written and easy to follow. Related Work section includes comprehensive surveys. Formulas are clearly explained. Experiments come with detailed settings. 2) The idea is novel and interesting. Especially, deriving multi-head attention from Maximizing Code Reduction is novel. 3) The proposed white-box architecture is verified on real-world datasets such as ImageNet-1k and compared to ViT models. Weaknesses: 1) Although the paper claims that the proposed white-box model is competitive with ViT models, the numerical results seem not strong. On ImageNet-1k, the proposed model clearly underperforms even with a larger number of parameters. 2) The real power of ViTs is in the high-accuracy regime, where the model size is large. The authors only consider the small-model regime with low to medium accuracy, which is less convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Please consider larger models with ImageNet-1k top-1 accuracy above 80.0% Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and compliments on the idea being “novel and interesting” as well as the exposition being “well written and easy to follow”. As we mentioned in our Public Response, the primary goal of our work is not meant to simply push the state-of-the-art in a particular metric, but rather to demonstrate the promise of the CRATE approach (i.e., white-box deep networks constructed via unrolled optimization). Nevertheless, in addition to achieving this goal, CRATE additionally obtains strong performance and has promising scaling behavior on increasingly larger-scale real world datasets. In particular, per your suggestion (as well as that of Reviewer `YxEu`) to try larger models, we investigated the performance of CRATE on ImageNet-1K when pretrained on ImageNet-21K and fine-tuned on ImageNet-1K. We found that in this setting, CRATE-Base could achieve 80.2% top-1 accuracy, which is comparable to the performance of ViT-Base with around 25% of the parameters -- see our Public Response for more precise details. We will add these experiments into our main tables and explicitly mention that our models perform slightly worse than ViTs in the paper text. Unfortunately, due to time limits, we could not pretrain CRATE-Large on ImageNet-21K; we will add the corresponding empirical results and comparison to ViT-Large in our camera-ready version. We hope that the points raised above resolve the doubts you have about this work. Please let us know if you have further questions or comments.
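The "comparable accuracy with around 25% of the parameters" claim can be checked with a back-of-the-envelope count. The sketch below assumes a ViT-Base-like configuration (d = 768, 12 layers) and a simplified CRATE-style block with one tied projection replacing the separate Q/K/V matrices and one dictionary replacing the two MLP matrices; these structural assumptions are ours, for illustration, and the count ignores embeddings, norms, and heads:

```python
d, layers = 768, 12                       # ViT-Base-like width and depth

# Standard transformer block: Q, K, V, output projections + 2-layer MLP (4x width).
vit_layer = 4 * d * d + 2 * (d * 4 * d)   # = 12 d^2
# Simplified CRATE-style block: tied QKV projection + output + one dictionary.
crate_layer = 2 * d * d + d * d           # = 3 d^2

vit_total = layers * vit_layer
crate_total = layers * crate_layer
print(f"ViT-like:   {vit_total / 1e6:.1f}M params")    # 84.9M, near ViT-Base's ~86M
print(f"CRATE-like: {crate_total / 1e6:.1f}M params")  # 21.2M, near CRATE-Base's ~22.8M
print(f"ratio: {crate_total / vit_total:.2f}")         # 0.25
```

The exact figures in the rebuttal (22.80M vs. ~86M) differ slightly because of embeddings and other components, but the roughly 4x ratio matches the reported "around 25% of the parameters."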
Rebuttal 1: Rebuttal: First, we thank all reviewers for their insightful comments. We are particularly encouraged that reviewers have appreciated: - The novelty and impact of our central ideas (`13ev`: “...deriving multi-head attention from Maximizing Code Reduction is novel”; `tJjQ`: “provid[ing] a significant extension to [prior work]”; `vvYz`: “...this paper is a significant contribution [to modelisation work]”); - The benefits of the conceptual framework we have proposed (`vvYz`: “The proposed architecture is conceptually significantly simpler than ViT… and the performance drop is arguably very slight”; `tJjQ`: “The results are promising… the idea of white-box unrolling-based neural network design might be a possible alternative to current black-box design”); - The quality of the exposition (`YxEu`: “The motivation and experimental results are both clearly stated”; `tJjQ`: “Overall the manuscript is well written and related works are properly cited and discussed, very insightful work”; `13ev`: “The paper is overall well written and easy to follow. Related Work section includes comprehensive surveys. Formulas are clearly explained. Experiments come with detailed settings.”); - The insight presented by our empirical evaluations (`YxEu`: “The results in Table 1… address concerns that this method would reduce performance”; `AAKz`: “Extensive experiments… verify the effectiveness of the proposed method”). In the remainder of this message, we wish to reiterate our key contributions and address certain concerns raised by the reviewers. In particular, we discuss new empirical results undertaken in response to issues raised by reviewers around CRATE’s scaling behavior, where we demonstrate ImageNet-1K accuracy above 80% after pretraining our CRATE-Base model on ImageNet-21K; these results are presented in the attached .pdf and discussed in full detail below. 
## Key Contributions Our central contribution is that we introduce a new transformer-like architecture (named CRATE), where each network layer/operator is constructed _ab initio_ from the principles of data compression and representation learning. This provides a clear and principled mathematical interpretability to transformer-like networks, by revealing the functions of each network layer while removing unnecessary redundancy from previous empirically designed transformers. In addition, we have shown through experiments that this cleaner and simpler architecture is competitive in performance with the base transformer models (such as ViT) in large-scale real-world vision tasks (e.g. classification on ImageNet). Empirical evaluations further confirm that the overall learned deep networks and their layers clearly perform the mathematical functions they were designed for, i.e., reducing the coding rate and sparsifying the learned representation. We believe this work has shown the promise of eventually bridging the gap between theory and practice of (transformer-like) deep networks. ## Comparison with SOTA; New Experimental Results Of course, further improving the performance and demonstrating the potential of such new principled models is important -- we agree with several of the reviewers on this point. In this work, however, our goal is not to push the state-of-the-art per se, but rather to develop a more clear and systematic understanding of the extremely ubiquitous transformer-like deep network architectures, by developing a white-box model in this family of architectures. Several reviewers have, nevertheless, suggested some additional experimental improvements so that we may fairly compare to the ViT within more realistic regimes of data and compute; we wish to broadcast the results here. Most notably, as suggested by Reviewers `13ev` and `YxEu`, we have further scaled up the CRATE models. 
In particular, we pretrained on ImageNet-21K and fine-tuned on ImageNet-1K. As shown in Table 1 of the uploaded pdf file, with the CRATE-Base model (22.80M parameters), we achieve 80.2% top-1 accuracy; this is comparable to ViT-Base (~86M parameters, 83.9%) [1] with around 25% of the parameters. Here, we provide details about the experiments mentioned above. For pre-training on ImageNet-21K, we configure the learning rate to 1e-4, set the weight decay to 0.05, and use a batch size of 4,096. The total number of epochs is 90, with 10 warmup epochs. For fine-tuning on ImageNet-1K, we use the same set of parameters as described in Appendix B.1.2, with the exception of setting the learning rate to 5e-5 and having a total of 50 epochs. Unfortunately, due to time limits, we could not pretrain CRATE-Large on ImageNet-21K; we will add the corresponding empirical results and comparison to ViT-Large in our camera-ready version. We again thank the reviewers for their insight and hope for a continually enlightening discussion period. [1]: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. ICLR 2021. Pdf: /pdf/4a06ae1a1e32aa1596c79f455b7167780fafbee9.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides an interesting claim "the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens". This results in white-box transformer-like deep network architectures which are mathematically fully interpretable. Strengths: - Very interesting claim about transformer. - Extensive experiments to verify the effectiveness of the proposed method. Weaknesses: I tried and failed to find any weaknesses. I really like such work on interpretable neural networks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are particularly grateful for and truly encouraged by your high assessment of our work. Thank you for dedicating your time and expertise to review our paper, and please let us know if you have any additional questions or comments during the discussion period. --- Rebuttal 2: Title: No argument justifying award quality score of 10 Comment: I'm sorry to point this out but in order to justify a score of 10, it is necessary to write more than four sentences. The assertion that there are no weaknesses to the work does not seem reasonable, there are certainly weaknesses as pointed out by other reviewers. I would be very interested to read a convincing argument as to why this paper is award quality and I hope you will be able to revise this review to provide it. If not I hope an unjustified appraisal of award quality does not affect the final decision on this paper's acceptance.
Boosting Semi-Supervised Few-Shot Object Detection with SoftER Teacher
Reject
Summary: This paper studies a new task named semi-supervised few-shot object detection, where both base and novel classes are assumed to be scarce. The authors first find that a vanilla supervised FRCN trained on base classes has low recall on novel classes, and that training with extra unlabeled novel data can effectively improve novel recall. The authors then follow the SSOD framework Soft Teacher to do semi-supervised base training, where only partial base data is available. However, the original Soft Teacher has low recall on small and ambiguous objects, so the authors propose a new proposal learning method to improve it. Finally, the pre-trained model is semi-supervised fine-tuned on a balanced set comprised of both base and novel samples. Experiments show SoftER Teacher achieves good GFSOD performance. Strengths: * The idea is straightforward and has good soundness. * The method has promising results in the GFSOD setting. Weaknesses: 1. The proposed task of semi-supervised few-shot object detection is similar to semi-supervised object detection. Since both base and novel classes are scarce, what is the meaning of splitting classes into base and novel? I can't see any practical significance in this task. 2. From my point of view, the setting proposed in the paper is closer to a semi-supervised than a few-shot object detection problem. In particular, one of the key properties of few-shot learning is that the model does not know the novel classes in training, so it can adapt quickly to new classes with only a few examples in testing, either using meta-learning (e.g., meta-RCNN) or small-#step fine-tuning (e.g., TFA). Another piece of common sense in FSOD is that the novel classes are authentically rare, and we cannot find more images of that class, labeled or unlabeled. Therefore, the proposed approach works best for semi-supervised object detection rather than few-shot object detection, and it is not fair to compare it with FSOD works. 3. 
The proposed method SoftER Teacher is an incremental improvement on the existing SSOD work Soft Teacher. The only improvement seems to be that the authors append a new loss constraining outputs from the teacher and the student to be close for corresponding proposals, but the method appears to relate only to semi-supervised learning; it seems to have nothing to do with few-shot learning. 4. Section 3.1 is named "What Makes for Effective FSOD", but Section 3.1 studies how unlabeled data can improve the novel recall of the FSOD model. The title is not very accurate. 5. In line 252, the authors argue, "we are the first to incorporate external unlabeled data with few-shot fine-tuning"; I don't think that is a contribution or anything good. 6. There are no component analyses or ablation experiments to demonstrate the effectiveness of the proposed method, for example, a performance comparison with/without the proposal learning loss. 7. The performance improvement is minor in the FSOD setting (not GFSOD). The novel performance is actually bad. The superior GFSOD performance may be attributable to SoftER Teacher's strong base performance. 8. The authors do not show any numbers related to training resources (memory and time). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the difference between the proposed semi-supervised few-shot object detection and semi-supervised object detection? What is the practical meaning or application scenario that semi-supervised few-shot object detection covers but semi-supervised object detection does not? 2. Besides the proposal learning loss, are there any more improvements upon the baseline framework Soft Teacher? Where is the ablation that demonstrates its effectiveness? 3. SoftER Teacher adopts FRCN for both teacher and student; how does it perform when FRCN is replaced with other FSOD methods like DeFRCN? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer `F1pt` for your constructive feedback. Please find below our responses, along with additional experiments, to address your questions and concerns. ### 1. What's the difference between the proposed semi-supervised FSOD and SSOD? The current leading approach for FSOD is the two-stage procedure comprising a first stage of representation learning via base pre-training followed by few-shot target adaptation via transfer learning. Our work introduces supplementary unlabeled data in both stages, resulting in a new semi-supervised FSOD setting, to substantially improve base and novel class performance while also mitigating base forgetting. Reviewers `F1pt` and `Qe4L` pose a valid question: can we merge the two stages into one and train under the SSOD setting? Recall that in FSOD benchmarks, the base dataset is assumed to be fully annotated with boxes covering all instances of interest, whereas the novel set is sparsely labeled at $k$-shots, or bounding boxes, per category. So for a novel image containing 3 cats, only one box (1-shot) may be annotated while the other two are ignored. Thus, the benefit of the two-stage approach is that it separates fully annotated base examples from sparsely annotated novel ones, thereby allowing algorithms to optimize on both classes. The reviewer is correct that the first stage is aligned with SSOD, where we train on {1,5,10} percent of base classes with unlabeled data. However, the second stage is needed to adapt the base domain to novel concepts while preserving base performance by freezing the appropriate layers. We conduct an experiment in the table below to show that if we approach the FSOD problem as SSOD, denoted as "One-Stage Semi-Supervised", we would run into the issue of foreground objects being rejected as background (i.e., missing labels), resulting in a drastic reduction in base performance. 
|Training Protocol|10-Shot bAP|30-Shot bAP|10-Shot nAP|30-Shot nAP| |-|-|-|-|-| |One-Stage Semi-Supervised|11.0|16.9|11.6|15.3| |Proposed Two-Stage Few-Shot (Section 3)|37.2|38.6|10.6|12.3| ### 2. The proposed SoftER Teacher is an incremental improvement on the existing Soft Teacher for SSOD. It seems to have nothing to do with few-shot learning. L234-243 in Section 3.2 argue that SoftER Teacher contributes a non-trivial extension to Soft Teacher with our Entropy Regression (ER) module for proposal learning with complex affine transforms, which has not been attempted before. ER addresses a key weakness of Soft Teacher by enhancing proposal recall, which translates to convincingly better FSOD (Figure 5). ***For semi-supervised FSOD (Table 3), SoftER Teacher improves on Soft Teacher by +1.7 base class AR, which yields a gain of up to +1.5 novel class AP. These results support our empirical finding in Section 4.2 demonstrating a potential relationship between SSOD and FSOD.*** ### 3. Section 3.1 is named "What Makes for Effective FSOD", but 3.1 studies how unlabeled data can improve the novel recall of the FSOD model. The title is not very accurate. Section 3.1 presents a new empirical analysis linking the effectiveness of FSOD to unlabeled data by way of proposal recall. Then, we follow up with extensive experiments in Section 4.2 to bolster our analysis and claims. Thus, we believe Section 3.1 is aptly titled within the context of our premise and conclusion. ### 4. There are no ablation experiments to demonstrate the effectiveness of the proposed method. Due to the page limit, we put three detailed ablation studies analyzing the design and benefits of SoftER Teacher in Appendix A of the supplementary material, and refer to them throughout the main paper (L254 and L299). For your convenience, we reproduce Table 4 from Appendix A.1 below. We will include the ablation studies in the camera-ready version using the extra page. 
|Proposal Similarity Measure|Proposal IoU Regression?|AP|AR| |-|-|-|-| |None|No|22.4|30.8| |KL-Divergence|No|22.8|31.5| |Cross-Entropy (Eq. (4))|No|22.7|31.6| |None|Yes|22.3|30.8| |KL-Divergence|Yes|22.9|31.8| |**Cross-Entropy (Eq. (4))**|**Yes**|**23.0**|**32.0**| The above table provides an ablation study on 1% of COCO labels to assess the key elements in SoftER Teacher. ***SoftER Teacher (last row) improves on both precision and recall over Soft Teacher (first row) via our proposed Entropy Regression module (Eqs. (4) and (5)) for proposal learning with complex affine transforms.*** ### 5. The novel performance improvement is minor in the FSOD setting. Please see the above **2. General Response** on novel class performance for a detailed answer to your concern. In short, we argue that SoftER Teacher performs exceedingly well on novel classes with fewer parameters and labels by leveraging supplementary unlabeled data, when compared to the TFA and Retentive R-CNN baselines adopting the same base FRCN architecture. ### 6. The authors do not show any numbers related to the training resources (memory and time). Again, due to the page limit, we put the Implementation Details in Appendix C of the supplementary material, called out in the main paper at L271. In short, we train on 8x A6000 GPUs, each with 48GB of memory. One experiment takes between 12 hours and 10 days to complete. Please refer to Appendix C and the included source code for details on reproducibility. We will add details on training resources in the camera-ready version using the extra page. ### 7. SoftER Teacher adopts FRCN; how about the performance when replacing FRCN with other FSOD methods like DeFRCN? The goal of this work is to explore and analyze the contribution of unlabeled data for semi-supervised FSOD. However, we recognize that SoftER Teacher has room for improvement. 
We observe complementary properties of DeFRCN, DCFS, and Retentive R-CNN which, in principle, could be combined with SoftER Teacher to further advance FSOD without base degradation. Such in-depth investigation is better reserved for future work as it is beyond the scope of this paper. --- Rebuttal Comment 1.1: Comment: The authors seem not to have fully explained my questions, e.g., 1. The proposed task of semi-supervised few-shot object detection is similar to semi-supervised object detection. Since both base and novel classes are scarce, what is the meaning of splitting classes into base and novel? I can't see any practical significance in this task. 2. FSOD assumes novel classes to be authentically rare, which means we can't find samples of novel classes, whether labeled or unlabeled; it seems semi-supervised learning conflicts with the definition of FSOD. Please carefully clarify my questions; I will keep my initial rating for now. --- Reply to Comment 1.1.1: Title: Thank you for replying to our rebuttal. Comment: We thank Reviewer `F1pt` for replying to our rebuttal. We believe we have addressed your questions and concerns in our rebuttal, but the answers may have been buried in our responses. Please find below our additional clarifications to your concerns. **Q1. The proposed task of semi-supervised few-shot object detection is similar to semi-supervised object detection. Since both base and novel classes are scarce, what's the meaning of splitting classes into base and novel? I can't see any practical significance in this task.** We explained in our rebuttal that the base dataset in FSOD benchmarks is assumed to be fully annotated with bounding boxes covering all instances of interest, whereas the novel set is sparsely, and randomly, labeled at $k$-shots, or bounding boxes, per class, with the potential for missing labels since multiple objects of the same class may appear in an image. 
In our approach, we simulate the scarcity of base labels by randomly sampling small fractions at {1,5,10} percent from the full base dataset, but the sampled fractions are still fully annotated with all instances having bounding boxes. Thus, ***we explained in our rebuttal that the benefit of the two-stage approach, and its practical significance, is to separate fully annotated base examples from sparsely annotated novel examples, thereby allowing algorithms to optimize on both base and novel categories.*** To accomplish this goal, we introduce unlabeled data and our SoftER Teacher model in the two-stage procedure (Sections 3.2 and 3.3). For base pre-training, SoftER Teacher vastly expands base AP on both VOC (80.8 to 85.9) and COCO (39.3 to 44.4). For few-shot fine-tuning, SoftER Teacher improves on the strong Retentive R-CNN baseline by up to $+1.6$ novel AP on COCO (Table 1) and $+7.3$ novel AP on VOC (Table 2) while mitigating base forgetting to less than 9%. If we did not split base and novel classes into two stages, but instead trained both under a single-stage SSOD protocol, we would run into the issue of foreground objects being rejected as background, due to missing labels, resulting in a drastic reduction in base AP, as shown in our rebuttal. **Q2. FSOD assumes novel classes to be authentically rare, which means we can't find samples of novel classes, whether labeled or unlabeled; it seems semi-supervised learning conflicts with the definition of FSOD.** **We agree with the reviewer that one property of the novel class can be attributed to its rarity, *but the novel class is not necessarily only rare or long-tailed.*** By definition, the novel class is a new object category that the model has not yet learned. In practical scenarios, when we want to adapt the base detector to expand its base vocabulary to include a novel concept, a natural way is to find images containing such examples and annotate them. 
Or, in our case, don't annotate the additional images at all, but rather use them as unlabeled data. The novel class, depending on object type, can occur at any distribution, from *frequent* to *common* or even *rare*. Thus, we believe our approach of leveraging unlabeled sources such as COCO-20 and COCO-unlabeled2017 is a fair and reasonable comparison to existing FSOD works since we do not assume large quantities of novel classes in the unlabeled set. **Our approach overcomes a fundamental limitation of prior works like LVC and MINI, in which they make an unrealistic assumption that novel classes must necessarily be present in large amounts in the *base training dataset* to achieve robust performance on FSOD benchmarks.** By stark contrast, we do not use the base dataset as unlabeled data. Please see the above [1. General Response on the source of unlabeled data](https://openreview.net/forum?id=THDGuhN7LA&noteId=8h5dO1DL96) for additional experiments to help address your concern. We conducted two experiments on VOC0712 to measure the effectiveness of our approach by using unlabeled data "in the wild" containing many objects outside of the target domain. The first experiment uses the broader COCO-train2017 as unlabeled data, instead of COCO-20, in which the proportion of novel classes is low at roughly 4.6%. In the second experiment, we remove $16496$ images from COCO-train2017 that contain any novel instances. To be clear, our model does not see any instances of the novel classes, in both labeled and unlabeled sets, except during few-shot fine-tuning. ***We observe our SoftER Teacher model to be robust against strong domain mismatch between the COCO and VOC datasets. Our approach does surprisingly well in the general scenarios where the novel classes are rare and completely absent in both labeled and unlabeled sets.*** We acknowledge that this observation is limited to experiments on VOC. 
We believe further experimentation and analysis are needed to determine if the trend holds on the more challenging COCO and LVIS datasets, using open-domain unlabeled sources like Objects365 and OpenImages, which is beyond the scope of this work.
Summary: This paper focuses on Semi-Supervised Few-Shot Object Detection, where both base and novel classes have few labeled training samples, along with abundant unlabeled data. For the model architecture, SoftER Teacher is proposed to train with unlabeled data in a teacher-student framework. Experiments demonstrate strong performance using only 10% of base labels. Strengths: The idea of introducing unlabeled data for few-shot object detection is interesting and has great value for real-world applications. The idea of reducing the number of labeled data for base classes in few-shot object detection is also interesting. Weaknesses: My major concerns are as below: 1. What are the sources of the unlabeled images? Do the unlabeled images contain both base-class and novel-class instances? If so, it is no surprise that adding abundant images could improve the proposal recall and detection results for few-shot novel classes. 2. The training framework in Figure 2 and Section 3.3 are not exactly the same. In Figure 2, the unlabeled images are used for both base-class pre-training and few-shot fine-tuning. But in Section 3.3, it seems that the second stage of few-shot fine-tuning does not use additional unlabeled images. Clarification is needed. If the second stage also uses unlabeled images, can we merge the second stage into the first stage, because both base and novel classes are few-shot? We would not need two training stages in that case, and the problem becomes semi-supervised object detection where each class has very few labeled images. Thus, what is the difference from traditional semi-supervised object detection? 3. This work improves the overall performance on base and novel classes. Although the performance on novel classes improves compared to some baseline models (e.g., Faster R-CNN), it is far worse than the SOTA [1,2]. Does this mean that the additional unlabeled images only work well for base classes, but not for novel classes? 
Using unlabeled images is perhaps the right way to do semi-supervised object detection. But is using unlabeled images the right way to do few-shot object detection? [1] Qiao, Limeng, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, and Chi Zhang. "DeFRCN: Decoupled Faster R-CNN for few-shot object detection." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8681-8690. 2021. [2] Kaul, Prannay, Weidi Xie, and Andrew Zisserman. "Label, verify, correct: A simple few-shot object detection method." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14237-14247. 2022. 4. Tables 1 and 2 are not complete. Table 1 lacks the comparison with Faster R-CNN (Our Impl.) and Soft Teacher (Our Impl.) as in Table 2. Table 2 lacks the comparison with the latest methods for few-shot object detection (e.g., LVC, DeFRCN) as in Table 1. 5. Figures 3 (b) and (c) are very confusing. I can only find one red box. Does this mean that vanilla FRCN-base only has one proposal? This is weird. 6. What is the difference between Soft Teacher and SoftER Teacher? In L201-L207, the authors mention that Soft Teacher has an aggressive threshold of 0.9, which is not good. How did the authors address this problem? I do not find the answer in the main text. L208-L225 seem to be a simple extension of Soft Teacher without big changes. ========================================================================================== After reading the authors' rebuttal and the other reviews, some of my concerns about technical details are clear. But the major concern about the significance of the few-shot base/novel partition remains. I would suggest the authors make the problem setting simpler to achieve broader impact. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the weaknesses above Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see the weaknesses above Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer `Qe4L` for your positive and constructive feedback. Please find below our responses, along with additional experiments, to address your questions and concerns. ### 1. What is the source of unlabeled images? Do the unlabeled images have both base and novel classes? It is no surprise that adding abundant images could improve proposal recall and FSOD results. Please refer to the above **1. General Response** on the source of unlabeled data for a detailed answer to your question. In short, we don't make a strong assumption that the unlabeled images necessarily contain abundant novel instances, unlike the prior work of LVC [CVPR22] and MINI [arXiv22]. Although "no surprise" in hindsight, there was little prior analysis or empirical study on *how or why* unlabeled data could improve proposal recall in the FSOD setting. We contribute an insightful empirical analysis in Sections 3.1 and 4.2 linking the role of unlabeled data to FSOD by way of proposal recall. Moreover, Section 4.2 explains that ***while the strong Soft Teacher [ICCV21] baseline can harness unlabeled data for FSOD, SoftER Teacher demonstrates superior learning by further boosting object recall with our entropy regression module.*** For semi-supervised FSOD (Table 3), SoftER Teacher is the right model for the task, improving on Soft Teacher by +1.7 base class AR, which yields a gain of up to +1.5 novel class AP. We argue that it is the combination of unlabeled data and our algorithmic contribution in SoftER Teacher that works well together to boost proposal recall and FSOD results. ### 2. The training frameworks in Figure 2 and Section 3.3 are not exactly the same. We argue that the schematic in Figure 2 closely follows the text in Section 3.3. L245 explicitly states that we use unlabeled data in the fine-tuning phase, with the rest of the text describing the procedural details. ### 3. 
Can we merge the second stage into the first stage, because both base and novel classes are few-shot, so that the problem becomes SSOD? To properly answer this question and illustrate the impact of our approach, we train SoftER Teacher on few-shot examples of base and novel classes, supplemented with COCO unlabeled2017 images, following the SSOD formulation described in Section 3.2. Denoted in the table below as "One-Stage Semi-Supervised", this setting exhibits a drastic reduction in base performance, at the trade-off of slightly better novel detection, compared to our proposed two-stage approach, and hence is impractical in real-world scenarios since samples at test time may contain instances of both base and novel classes. |Training Protocol|10-Shot bAP|30-Shot bAP|10-Shot nAP|30-Shot nAP| |-|-|-|-|-| |One-Stage Semi-Supervised|11.0|16.9|11.6|15.3| |Proposed Two-Stage Few-Shot (Section 3)|37.2|38.6|10.6|12.3| Recall that few-shot instances are sampled at the bounding box level. So for a novel image containing 3 cats, only one box (1-shot) may be annotated while the other two are ignored. Thus, FSOD is very different from SSOD, where a small fraction of images is fully labeled with boxes covering all instances of interest. As shown in the above results, if we approach the FSOD problem as SSOD, we would run into the issue of foreground objects being rejected as background (i.e., missing labels), along with extreme label scarcity that would be difficult to rectify with only unlabeled data. ### 4. The performance on the novel classes is worse compared to SOTA. Please see the above **2. General Response** on novel performance for a detailed response to your concern. ### 5. Do the additional unlabeled images only work well with base classes, but not for novel classes? Section 3 describes in depth how unlabeled images can contribute significant improvements on both base and novel classes. 
For base pre-training, unlabeled images vastly expand base AP on both VOC (80.8 $\rightarrow$ 85.9) and COCO (39.3 $\rightarrow$ 44.4). For few-shot fine-tuning, Figure 6 in Appendix A.2 shows that unlabeled images contribute up to +3.3 novel class AP, while mitigating base forgetting to less than 9% (Table 5 in Appendix A.3). ### 6. Tables 1 and 2 are not complete. Due to the page limit, we could include only the essential comparisons in Tables 1 and 2. We will include the additional comparisons, per your suggestion, in the camera-ready version with the extra page. ### 7. Figures 3b and 3c are very confusing. I can only find one red box. Does it mean that vanilla FRCN-base only has one proposal? This is weird. Yes, Figures 3b and 3c show that the vanilla FRCN-base fails to capture novel foreground objects in low-label regimes, with only 1 red box appearing in the 10% base label setting. These proposals have confidence scores $> 0.9$. The lack of high-quality proposals produced by FRCN-base further demonstrates the utility of unlabeled data for FSOD and validates the motivation for our SoftER Teacher approach. ### 8. What is the difference between Soft Teacher and SoftER Teacher? It seems to be a simple extension of Soft Teacher without big changes. L234-243 in Section 3.2 argue that our SoftER Teacher contributes a non-trivial extension to Soft Teacher with our Entropy Regression (ER) module for proposal learning with complex affine transformations, which has not been attempted before. Soft Teacher uses an aggressive threshold of 0.9, resulting in overall poor recall. ER addresses this key weakness to enhance proposal recall by allowing SoftER Teacher to tap into abundant region proposals to learn diverse representations across scale, color, and geometric perturbations, the results of which translate to convincingly better FSOD performance (Figure 5 and Table 3). 
We arrived at our SoftER Teacher model based on meticulous research and validated design choices grounded on insightful empirical findings. As such, we believe SoftER Teacher is a good technical contribution to the community as it is well suited for both semi-supervised and few-shot tasks.
Summary: This article does meaningful work: an object detection method that combines few-shot with semi-supervised learning. The authors introduce SoftER Teacher for semi-supervised object detection in few-shot scenarios. SoftER Teacher enhances the quality of region proposals to substantially boost semi-supervised FSOD. Compared with LVC, DeFRCN, and other methods, the performance is improved. Strengths: - This task is very meaningful. To my knowledge, the traditional few-shot object detection task is difficult to apply directly in industry, and a method combined with semi-supervised object detection will be a good solution. (Although the method in this paper is not the first to consider the combination of few-shot and semi-supervised learning.) - It is good to see that the authors provide the source code in the supplementary material, which provides a guarantee for the reproducibility of this article. - The authors present rich experimental results in the article and supplementary material. Weaknesses: - To my knowledge, current mainstream FSOD methods are verified on MS COCO 2014, not MS COCO 2017. "Consistent with the current literature on FSOD" may be ambiguous. - In Table 1, since the authors did not report the results of LVC for 5-Shot, are the experimental results of LVC taken from the original article? Considering that the proposed method uses additional unlabeled data, it seems unfair to compare the results under the LVC experimental setting. - In addition to the comparison with FSOD methods, it would be better to add some comparisons with SSOD methods (under the few-shot setting). - In Table 1, I found that the results for the novel classes seem to be relatively weak; what is the reason? Because the method in this paper utilizes additional data. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: My main concern is the experimental comparison, such as whether the comparison methods have changed the experimental setting. Another concern is the performance on the novel classes. If the authors can convincingly resolve my concerns, I would like to improve my score. I also suggest that the authors refer to a method on arXiv [1], which seems to address the same task as this article (of course, this article does not need to compare with its method at this stage). [1] Cao, Yuhang, et al. "MINI: mining implicit novel instances for few-shot object detection." *arXiv preprint arXiv:2205.03381* (2022). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors describe the limitations of this paper in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer `2E7Y` for your positive and constructive feedback. Please find below our responses, along with additional experiments, to address your questions and concerns. ### 1. The current mainstream FSOD method is verified on COCO 2014, not COCO 2017. The reviewer is correct in that the current established FSOD evaluation protocol is on COCO 2014, not COCO 2017. However, both COCO 2014 and 2017 share the same images. The only difference between the two is the number of validation images (41k images for COCO val2014 and 5k for COCO val2017). The authors of TFA [ICML20] created the original FSOD benchmark by sampling from COCO 2014 a random subset of 5k images for validation and used the rest in the training split. Thus, both train/val splits from COCO 2014 and 2017 should effectively be the same, with minor variance due to the sampling process. Our preliminary experiments on both COCO 2014 (following the TFA splits) and the official COCO 2017 splits verified that the difference is indeed minor up to some statistical noise (see table below). The benefit of using the official COCO 2017 splits is to remove the dependency on the random train/val splits created by the TFA benchmark and to maintain consistency with our proposed semi-supervised FSOD benchmark in Table 3. |Model|Dataset|5-Shot bAP|10-Shot bAP|30-Shot bAP|5-Shot nAP|10-Shot nAP|30-Shot nAP| |-|-|-|-|-|-|-|-| |FRCN|COCO 2014|36.0 $\pm$ 0.3| 36.1 $\pm$ 0.1|37.2 $\pm$ 0.1|3.8 $\pm$ 0.7|6.2 $\pm$ 0.6|9.3 $\pm$ 0.6| |FRCN|COCO 2017|36.0 $\pm$ 0.2|36.0 $\pm$ 0.2|37.0 $\pm$ 0.2|3.7 $\pm$ 0.4|6.1 $\pm$ 0.3|9.6 $\pm$ 0.2| |SoftER Teacher|COCO 2014|41.8 $\pm$ 0.1|41.8 $\pm$ 0.4|42.6 $\pm$ 0.4|7.0 $\pm$ 0.3|9.6 $\pm$ 0.4|12.4 $\pm$ 0.5| |SoftER Teacher|COCO 2017|41.8 $\pm$ 0.2|41.9 $\pm$ 0.2|42.7 $\pm$ 0.1|7.5 $\pm$ 0.4|10.0 $\pm$ 0.4|12.5 $\pm$ 0.5| ### 2. In Table 1, are the experimental results of LVC from the original article? 
It seems unfair to compare the results with the LVC experimental setting with additional unlabeled data. The LVC [CVPR22] method did not report results for the 5-shot setting. All results in Table 1 are reported from the respective original works. To our knowledge, we believe that LVC only performed single sample runs in their few-shot experiments (hence the lack of error bars), instead of following the established protocol of repeated sample runs over multiple random seeds. As such, their results may have been over-estimated due to the high variance of few-shot training samples. The previous works of TFA, Retentive R-CNN, DeFRCN, and DCFS all have reported marked reduction in novel performances with 10 repeated sample runs compared to a single sample run. It is unclear if the strong novel performances of LVC hold in the same repeated setting. We believe our work is a fair head-to-head comparison with LVC because we use the same amount of labeled training data as LVC and other methods. The only difference in our work is the addition of unlabeled data, which is allowed in the comparison because unlabeled images are not an automatic guarantee for improved performance. Thus, the comparison with LVC brings out two advantages attributed to our approach: (a) unlike LVC, we do not assume abundant novel classes exist in the base training set, and we were conscientious to not include the base dataset as a source of "unlabeled" images; and (b) our approach exhibits less than 9% in base forgetting compared to 19% for LVC. It is also unclear if LVC can achieve strong FSOD performance assuming only 10% of base labels (vs. 100%) are available instead. ### 3. In addition to the comparison with the FSOD method, it would be better to add some comparisons with the SSOD method under the few-shot setting. Our work generalizes two SSOD models to the few-shot setting: Soft Teacher [ICCV21] and our proposed SoftER Teacher. 
L308 states that while the strong Soft Teacher baseline can harness unlabeled data for semi-supervised FSOD, SoftER Teacher demonstrates superior learning by further boosting object recall in Soft Teacher. For semi-supervised FSOD (Table 3), SoftER Teacher improves on Soft Teacher by +1.7 base class AR, which yields a gain of up to +1.5 novel class AP. Moreover, Figure 5 presents new empirical insight into why SoftER Teacher is a better few-shot detector by analyzing semi-supervised FSOD as a function of proposal quality. SoftER Teacher produces better proposal recall than Soft Teacher, which translates to convincingly stronger semi-supervised FSOD. Future work will examine whether our empirical finding can be extended to a more general case with other SSOD formulations, including one-stage detectors. ### 4. In Table 1, I found the results of the novel class seem to be relatively weak, what is the reason? Please see the above **2. General Response** on novel class performance for a detailed answer to your question. In short, we argue that SoftER Teacher performs exceedingly well on novel class performance while using fewer parameters and labels than the comparable TFA and Retentive R-CNN baselines, which adopt the same base FRCN architecture. ### 5. I also suggest that the author can also refer to MINI, which seems to have the same task as this article. From our understanding, MINI is similar to LVC in that *both methods mine novel targets as auxiliary samples from the base training set*, thereby making a strong and explicit assumption that novel instances must necessarily be present in the training set. With this assumption, MINI achieves impressive performances on FSOD benchmarks. However, this assumption is unrealistic in real-world few-shot settings because novel objects may not exist in the base dataset in large quantities.
Moreover, it seems that the training protocol is overly complex, introducing four additional hyper-parameters, making MINI impractical in real-world applications. We will include this reference in the camera-ready version, along with a discussion comparing it to our method. --- Rebuttal Comment 1.1: Title: Reply to the author's rebuttal Comment: I am very grateful to the author for carefully answering my questions. I think the author should carefully add the details about the COCO dataset to the text for more convenient reference in subsequent articles. Since the author addressed my concerns, I decided to raise the score to **borderline accept**. --- Reply to Comment 1.1.1: Title: Thank you for your support of our work. Comment: Dear Reviewer `2E7Y`, We would like to sincerely thank you for contributing your time to serve as a reviewer for NeurIPS 2023. We are also grateful to you for providing your constructive feedback, replying to our rebuttal, and raising your score. Per your suggestion, we will include in the revised paper the details about COCO 2014 vs. 2017 to help avoid potential confusion and ambiguities for the readers. May we ask why you think our paper merits a Borderline Accept? According to the description, Borderline Accept means that you still have some concerns about the paper, e.g., limited evaluation. Have we adequately addressed all of your concerns and questions in our rebuttal? If you have additional concerns, we are happy to further discuss and help address them.
Given your strong assessments on the quality of the paper in terms of **3 - Good Soundness**, **3 - Good Presentation**, and **3 - Good Contribution**, along with **your positive comments on the meaningfulness of our work and how our approach is a good solution for realistic few-shot settings in practical applications**, we would be very grateful if you could offer your clear and enthusiastic support for our work beyond the Borderline Accept rating, as a reflection of your overall positive review of our paper.
Summary: The paper approaches the task of Few-shot Object Detection (FSOD) from a semi-supervised perspective, where in addition to base class data it uses unlabeled data during the base-pretraining phase, and then fine-tunes on the combination of base and available novel data using the best design choice of freezing appropriate layers (backbone, FPN and RPN) following the past works. The benefit of the approach comes with a higher bar on fully-supervised base class performance, which is attributed to training on additional unlabeled data. This higher performance then translates to a better overall (base + novel) classes performance in the fine-tuning phase, and establishes the effectiveness of semi-supervised learning on the FSOD task. The authors propose a SoftER Teacher approach which, in addition to the Soft Teacher loss, adds a consistency loss between teacher and student at the proposal level. The authors show that this leads to better proposals (using recall) in low-data regimes. Strengths: - The paper is well-written with extensive experiments and ablations, and the semi-supervised exposition in the FSOD setup is much appreciated with a potential for realistic low-shot setups - The paper is well-positioned with respect to prior related works Weaknesses: - Table 2 on VOC07 and lower performance of Novel classes in low-shot setup - Retentive R-CNN compared to SoftER exhibits a trend that its 1-shot performance is much higher than the proposed SoftER, and this trend gets reversed with more shots (such as 10-shot) - The authors acknowledge the phenomenon in lines 295-296, and in line 297 mention that Retentive R-CNN “generally falls behind on novel class performance”. However, this isn’t true in low-data setup (1-shot).
- This trend, however, doesn’t appear in the COCO dataset where novel class performance in the 1-shot case is low for both Retentive R-CNN and the proposed SoftER approach - My question is: - Do the authors have any intuition for this behaviour? - The authors leverage COCO-20 and COCO unlabeled2017 as the unlabeled data sources for the VOC (Table 2) and COCO (Table 1) experiments. Do the authors think that the domain mismatch between VOC and COCO is reflected in low-shot novel class performance in the case of VOC (Table 2)? - In general, the proposed approach does better with relatively higher-shot regimes, which also appears in the claims made in the paper - such as Fig 1 (30-shot). But as a reader, there seems to be little explanation about low-shot regimes, which is counter-intuitive since the approach uses additional unlabeled data compared to prior approaches, and is expected to perform well especially in low-data regimes - Presence of novel classes in unlabeled data - Lines 277-278 mention the use of COCO-20 for VOC and COCO unlabeled2017 as the sources of unlabeled data for VOC and COCO respectively - This makes the assumption that novel classes in the plots of VOC and COCO experiments are necessarily present in the unlabeled set - In general scenarios, such an assumption may not hold. Do the authors have some intuition about how the proposed approach would work if the percentage of novel classes in the chosen unlabeled set is low / absent? ### Minor concerns - Figure 3a for a higher percentage of base labels - Not a head-to-head comparison between FRCN-Base and FRCN-Base + Unlabeled, since the latter assumes more data - Does the difference narrow with a higher percentage of base labels? - Conflicting claims - Fig 1: exhibiting less than 7% in base degradation - line 54: exhibiting less than 9% in base forgetting ### Justification of the rating My main concern is highlighted above.
In general, explanations towards the above mentioned concern would help the readers Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Put together in the weaknesses section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### We thank Reviewer `4p6Z` for your positive, insightful, and constructive feedback. Please find below our responses, along with additional experiments, to address your questions and concerns. ### 1. Table 2 on VOC07 - low novel class performance in 1-shot case. Table 2 compares the few-shot results of our models based on the *ResNet-50 backbone* to competing methods based on the *ResNet-101 backbone*. ***It is remarkable that SoftER Teacher with ResNet-50 vastly expands the supervised base AP from 80.8 to 85.9, and incurs negligible base forgetting of less than 1.6%, while exceeding MPSR [ECCV20], TFA [ICML20], and Retentive R-CNN [CVPR21] with ResNet-101 by a notable margin on most metrics.*** To our knowledge, we believe that both MPSR and Retentive R-CNN only performed single sample runs in their VOC few-shot experiments (hence the lack of error bars), instead of following the established protocol of repeated sample runs over multiple random seeds. As such, their results may have been over-estimated due to the high variance of few-shot training samples, as originally reported by TFA and exhibited by large 95% confidence intervals in Table 2. For example, ***our {min, max} novel AP from 10 repeated runs for the 1-shot setting is {21.5%, 41.9%}, respectively, the max of which is on par with MPSR and Retentive R-CNN***. In general, the previous works of TFA, DeFRCN, and DCFS all have reported marked reduction in novel performances with repeated sample runs when compared to a single sample run. We believe that if MPSR and Retentive R-CNN were to perform repeated sample runs over 10 random seeds, and if our approach was based on the ResNet-101 backbone, then the observed performance gap for the novel AP in the 1-shot setting would not exist. The reason why we experimented with ResNet-50 is to demonstrate parameter-efficient learning with SoftER Teacher and to see how far unlabeled data takes us with a smaller backbone architecture. ### 2. 
Presence of novel classes in unlabeled data. Thank you for bringing up this insightful question. Please refer to the above **1. General Response** on the source of unlabeled data for additional experiments to address your concern. In short, we observe SoftER Teacher to be robust against domain mismatch between COCO and VOC datasets. Our approach does surprisingly well in two scenarios where the percentage of novel classes in the chosen COCO-train2017 "unlabeled" set is (a) low at roughly 4.6% and (b) completely absent, with the best case scenario having an unlabeled set containing targeted base + novel classes (e.g., COCO-20). We acknowledge that this observation is limited to few-shot experiments on VOC; we believe further exploration, experimentation, and analysis are needed to determine if the trend holds on the more challenging COCO and LVIS datasets using large-scale, open-domain unlabeled datasets like Objects365 and OpenImages, which is beyond the scope of this work. We believe our approach to FSOD overcomes the fundamental limitation of prior works in real-world scenarios by not assuming the presence of large amounts of base and novel instances in either labeled or unlabeled dataset. ### 3. Figure 3a - Not a head-to-head comparison between FRCN-Base and FRCN-Base + Unlabeled, since the latter assumes more data. Does the difference narrow with more percentage of base labels? The comparison in Figure 3a between the supervised FRCN-Base model and semi-supervised FRCN-Base + Unlabeled model is a standard, well-established protocol routinely employed in the semi-supervised learning literature to measure the effectiveness of SSL algorithms. We argue that Figure 3a is a fair comparison since both models use the same amount of labeled training data. The addition of unlabeled images in the semi-supervised pipeline is allowed in the comparison because unlabeled images are not an automatic guarantee for improved performance. 
We present Figure 3a to illustrate the contribution of unlabeled data to boost proposal recall of novel categories for the better discovery of novel classes during few-shot fine-tuning, and to motivate the design and development of SoftER Teacher as a well-suited model to address the unique task of semi-supervised few-shot detection at low-label regimes. The performance gap between FRCN-Base and FRCN-Base + Unlabeled becomes narrow with more percentage of base labels, as shown by additional experiments in the table below, using the standard metric AR@300 for quantifying proposal recall of both base + novel categories. Interestingly, with 100% of base labels, the difference in proposal recall between the two models is immaterial with FRCN-Base + Unlabeled edging out the FRCN-Base model by +0.38 point. This result suggests that the addition of unlabeled data during base pre-training can help boost base representation learning *and also* proposal recall, especially at low-label regimes, the *combination* of which should lead to both better transferability and discovery of novel classes in the subsequent fine-tuning phase. We observe supporting experimental evidence in Table 2 in the main paper and Table 10 in Appendix B.4, where the fully supervised FRCN trails behind SoftER Teacher on both base and novel performances, even though proposal recall between the two is effectively the same. | % Base Labels | 1% | 5% | 10% | 100% | |-|-|-|-|-| | FRCN-Base | 21.91 | 28.94 | 30.97 | 39.92 | | FRCN-Base + Unlabeled | 33.63 | 35.49 | 36.83 | 40.30 | | Difference | +11.72 | +6.55 | +5.86 | +0.38 | ### 4. Conflicting claims. Thank you for catching this typo. We will fix the caption in Figure 1 to say "less than 9% in base forgetting" to match the text throughout the paper.
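As a side note on the repeated-sampling protocol discussed in point 1, here is a minimal sketch of how a mean and normal-approximation 95% confidence interval over 10 random seeds might be computed. The per-seed values below are hypothetical, except that the endpoints match the reported {min, max} novel AP of {21.5, 41.9} for the 1-shot setting:

```python
import math
import statistics

def mean_ci95(values):
    """Mean and normal-approximation 95% CI half-width over repeated seeds."""
    m = statistics.mean(values)
    half = 1.96 * statistics.stdev(values) / math.sqrt(len(values))
    return m, half

# Hypothetical per-seed novel AP values from 10 random few-shot samples;
# only the min (21.5) and max (41.9) match the numbers quoted above.
nap_runs = [21.5, 30.2, 28.7, 35.1, 41.9, 33.0, 27.4, 38.5, 25.9, 31.6]
m, half = mean_ci95(nap_runs)
print(f"novel AP: {m:.1f} +/- {half:.1f} (95% CI, n={len(nap_runs)})")
```

The spread between the extreme seeds illustrates why single-run results can overestimate few-shot performance.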
Rebuttal 1: Rebuttal: ### We sincerely thank all reviewers for their thoughtful and constructive feedback. We would like to address two concerns common in the reviews. ### 1. General Response - What is the source of unlabeled data? [`4p6Z`, `Qe4L`] Per L261, we leverage COCO-20 and COCO-unlabeled2017 as unlabeled data for the VOC and COCO few-shot experiments, respectively. COCO-20 contains images with VOC base and novel instances, along with other objects outside of the VOC domain. COCO-unlabeled2017 has an unknown base-novel class distribution, along with other objects outside of the COCO 80 classes. To our knowledge, ***we are the first to use external supplementary unlabeled images for FSOD***, especially COCO-20 which exhibits strong domain mismatch with VOC. By contrast, the previous works of LVC [CVPR22] and MINI [arXiv22] make an explicit assumption that abundant novel instances must necessarily be present in the ***base training set***, which is unrealistic and a fundamental limitation of LVC and MINI in real-world applications. We do not make a strong assumption that novel classes must exist in large quantities in unlabeled images. In practical scenarios, it is natural to collect unlabeled data having both base and novel instances. For example, when one wants to further detect a novel object that is not in the base categories, a reasonable way is to find additional targeted images containing such objects and annotate them. Or in our case, don't annotate the additional images at all, but rather use them as unlabeled data. Thus, a unique benefit of our FSOD approach is that it reduces the human burden of annotating a large number of images, as shown in Table 3, which is in stark contrast to others requiring an abundance of base labels for robust FSOD. We agree with Reviewers `4p6Z` and `Qe4L` that the choice of unlabeled data can be difficult in general scenarios where strong domain mismatch can occur.
To address this concern, we perform two few-shot experiments on VOC0712 to demonstrate the effectiveness of our approach by leveraging unlabeled data "in the wild" containing many objects outside of the target domain. The first experiment uses the broader COCO-train2017 as unlabeled data, instead of COCO-20, in which the proportion of novel classes is low at roughly 4.6%. In the second experiment, we filter out all images from COCO-train2017 that contain at least one instance of the novel class, thereby removing the assumption that novel instances must be present in the unlabeled set. |Model|Unlabeled|1-Shot bAP|5-Shot bAP|10-Shot bAP|1-Shot nAP|5-Shot nAP|10-Shot nAP| |-|-|-|-|-|-|-|-| |FRCN|None|81.8|82.3|82.2|36.2|53.3|58.7| |SoftER Teacher|COCO-20|84.5|85.2|85.5|38.6|57.8|63.4| |SoftER Teacher|COCO-train2017|83.4|84.4|84.4|38.4|57.4|63.4| |SoftER Teacher|COCO-train2017-no-novel|82.7|83.4|84.0|36.8|56.8|62.8| Recall our empirical analysis in Sections 3.1 and 4.2 connects the role of unlabeled data to FSOD by way of proposal recall. We show that unlabeled data can help boost proposal recall on novel categories, which should lead to better discovery of novel classes during fine-tuning. Intuitively, we expect the best FSOD performance if the unlabeled images contain targeted base + novel classes (COCO-20). This is reflected in the above results. Surprisingly though, we also observe strong robustness of our approach in the general scenarios where the percentage of novel classes in the chosen unlabeled set is low (COCO-train2017) or completely absent (COCO-train2017-no-novel). Future work would explore in depth if the observed trend with VOC also holds for more challenging datasets such as COCO and LVIS, using open-domain unlabeled data sources like Objects365 and OpenImages. ### 2. General Response - On weak novel class performance. [`2E7Y`, `Qe4L`, `F1pt`] The goal of this work is to explore and analyze the contribution of unlabeled data for semi-supervised FSOD. 
As such, we adopt the vanilla FRCN as our base detector, transform it into SoftER Teacher with unsupervised losses, and add unlabeled data. We keep everything else about the base architecture the same, including the backbone, FPN, RPN, and RoI heads, to avoid confounding factors due to model design and training protocol. Thus, if we directly compare our SoftER Teacher to TFA [ICML20] and Retentive R-CNN [CVPR21], which is a reasonable and fair comparison since they all use the same base FRCN model and train on the same amount of labeled images, then ***our approach surpasses both TFA and Retentive R-CNN on most novel class settings in Tables 1 and 2 while being parameter-efficient with a smaller ResNet-50 backbone.*** Most notably, Table 3 shows that SoftER Teacher surpasses the novel class performance of Retentive R-CNN on all shots under consideration while requiring only 10% of base labels, further demonstrating its effectiveness. Therefore, ***we argue that SoftER Teacher can do more on novel class performance with fewer parameters and labels, which should be both a technical contribution and an insightful empirical finding of interest to the community, the basis of which could inspire future research.*** Reviewers `2E7Y`, `Qe4L`, and `F1pt` mentioned that SoftER Teacher exhibits relatively poor novel class performance when compared to recent SOTA methods like LVC, DeFRCN, and DCFS. However, it is important to point out that these methods make unrealistic assumptions about the training dataset and/or heavily modify the underlying base architecture in such a way that promotes strong novel performance. These methods also exhibit significant base forgetting (11%–DCFS, 17%–DeFRCN, and 19%–LVC), which is an undesirable outcome since samples at test time may contain both base and novel objects. Lastly, these methods all achieve SOTA results under the requirement that 100% of abundant base labels must be available.
It is unclear whether these SOTA advances remain competitive with SoftER Teacher on FSOD performance if only 10% of base labels are available instead.
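As an illustration of the COCO-train2017-no-novel construction described above (filtering out every image that contains at least one novel-class instance), here is a minimal sketch operating on a COCO-format annotation dict. The function name and toy data are hypothetical, not from the actual codebase:

```python
def filter_out_novel(coco_dict, novel_cat_ids):
    """Drop every image that has at least one annotation of a novel class.

    coco_dict follows the standard COCO JSON layout with "images" and
    "annotations" lists; returns a new dict containing only the surviving
    images and their remaining annotations.
    """
    novel_cat_ids = set(novel_cat_ids)
    # Images with at least one novel-class annotation are excluded entirely.
    tainted = {a["image_id"] for a in coco_dict["annotations"]
               if a["category_id"] in novel_cat_ids}
    images = [im for im in coco_dict["images"] if im["id"] not in tainted]
    keep_ids = {im["id"] for im in images}
    anns = [a for a in coco_dict["annotations"] if a["image_id"] in keep_ids]
    return {**coco_dict, "images": images, "annotations": anns}

# Toy example: image 1 contains a novel instance (category 7), image 2 does not.
toy = {
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [
        {"image_id": 1, "category_id": 7},
        {"image_id": 1, "category_id": 3},
        {"image_id": 2, "category_id": 3},
    ],
}
filtered = filter_out_novel(toy, novel_cat_ids={7})
assert [im["id"] for im in filtered["images"]] == [2]
```

In practice the same filtering would be applied to the full COCO-train2017 annotation file before treating the remaining images as unlabeled data.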
NeurIPS_2023_submissions_huggingface
2023
Uniform Convergence with Square-Root Lipschitz Loss
Accept (poster)
Summary: This paper develops a uniform convergence result for empirical risk minimization with a loss function whose square root is Lipschitz. It is assumed that the covariate vector $x$ is drawn from a $d$-dimensional multivariate normal and the response $y$ is generated by a structural equation that only depends on $x$ through its $k$-dimensional projection. The result bounds the root of population risk in terms of the root of the empirical risk and another term concerning the Lipschitz constant, a complexity measure and the sample size. The bound holds uniformly (in high probability) over all possible affine coefficients in the empirical risk minimization. This key result is then applied to study "benign overfitting" in several problems, including phase retrieval, ReLU regression and matrix sensing. In addition, it is also shown that Gaussian universality does not hold for the setting considered. Strengths: 1. The paper builds upon and generalizes related results in the literature. 2. It establishes a key generic result and then illustrates its strength through application to several topical problems. 3. The presentation is rigorous and clear. Weaknesses: As this paper falls out of my area of expertise, I am only listing below a few points that might help me (or someone from outside the field) to better understand the paper. 1. The notion of "consistency" or a "consistent loss" is mentioned in the Introduction and throughout the applications. It is worth explaining what that means and how it relates to the usual notion of "consistency" in statistics. 2. Assumption (C) perhaps deserves more explanation: why is it necessary and when is it expected to hold? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Page 4, line 135: I do not see why the distribution of $\langle w, x \rangle$ only depends on $w^{T} \Sigma w$. By $x \sim \mathcal{N}(\mu, \Sigma)$ as in Assumption (A), wouldn't it depend on $w^{T} \mu$ and $w^T \Sigma w$? 2.
Page 2, line 76: missing "be" before "applied". Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I do not foresee any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! > The notion of "consistency" or a "consistent loss" is mentioned in the Introduction and throughout the applications. It is worth explaining what that means and how it relates the usual notion of "consistency" in statistics. We say that an estimator (\hat{w},\hat{b}) valued in a set K is consistent if the test error of the estimator converges to the optimal test error in the class, i.e. L(\hat{w}, \hat{b}) -> \min_{w,b \in K} L(w,b). Here when we take the limit, we are considering an asymptotic regime where the number of samples n goes to infinity but also the dimension and other parameters of the problem may be assumed to scale in a certain way as well. (In classical parametric statistics, consistency was largely studied where the dimension was fixed and number of samples goes to infinity. In high-dimensional statistics like this work, the cited work of Bartlett et al, Thrampoulidis et al, etc. the dimension also goes to infinity with the number of samples and this is crucial to observe modern phenomena like benign overfitting.) For e.g. phase retrieval we show consistency under the same asymptotic conditions on the covariance matrix \Sigma that Bartlett et al. studied for linear regression. A key point with the above definition is that consistency is not just a property of the estimator, but also a property of the loss function f chosen. When we study benign overfitting in linear regression, phase retrieval, etc the estimator we are interested in will interpolate the data, and it is not clear a priori which losses this estimator will be consistent under (if any). The previous works on benign overfitting showed that in linear regression, minimum-norm interpolants will be consistent under the squared loss (but not e.g. the L1 loss if the model is misspecified). 
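The consistency notion used throughout this answer can be restated in display math (our notation, paraphrasing the definition given above rather than copying the paper's equation):

```latex
L(\hat{w}, \hat{b}) \;\longrightarrow\; \min_{(w,b) \in K} L(w, b),
\qquad n \to \infty,\; d = d(n) \to \infty,
```

where the dimension $d$ (and possibly other problem parameters) is allowed to grow with the sample size $n$, in contrast to the fixed-dimension asymptotics of classical parametric statistics.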
A major new contribution of this work is to prove consistency results for phase retrieval, ReLU regression etc and this requires in particular us to identify the consistent loss. > Assumption (C) perhaps deserves more explanation: why is it necessary and when is it expected to hold? Assumption ( C ) is a common assumption in the statistical learning theory literature which goes by a few different names, such as hypercontractivity or ‘norm equivalence’. To understand this assumption, it helps to think of a simple example, so consider for a moment linear regression with the usual squared loss and Gaussian noise & covariates. Then the assumption is saying that $E[(Y - <w, X>)^8] <= \tau E[(Y - <w, X>)^2]^4$ for any predictors w. Since the law of Y - <w, X> is just a Gaussian distribution with a certain variance, this is true simply because for a Gaussian random variable Z, it satisfies E[Z^8] <= 105 E[Z^2]^4. More generally, hypercontractivity will certainly be true for the squared loss if the class of functions is subgaussian in the sense of [Lecue-Mendelson ‘13] (https://arxiv.org/abs/1305.4825), and in fact hypercontractivity is a weaker assumption since it doesn’t require the existence of arbitrarily large moments of the distribution. We want to emphasize that we are not proposing hypercontractivity as a new assumption, we are simply using it as an existing and well-known assumption which makes dealing with the ‘low-dimensional concentration’ part of our analysis clean and straightforward. Because the original work of [Vapnik ‘82] studied the same assumption, we can directly cite his results in our analysis. However, as discussed in Appendix B.3 and [Zhou et al ‘22], it is also possible to apply other results from statistical learning theory to handle the low-dimensional concentration part of the argument and this would yield different versions of the main theorem. 
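The Gaussian moment fact quoted above ($E[Z^8] \le 105\,E[Z^2]^4$, with equality for a centered Gaussian) is easy to verify numerically via the standard even-moment formula $E[Z^{2k}] = (2k-1)!!\,\sigma^{2k}$; a stdlib-only sketch:

```python
import math

def gaussian_even_moment(sigma2, k):
    """E[Z^(2k)] for Z ~ N(0, sigma2), via the double-factorial formula
    E[Z^(2k)] = (2k-1)!! * sigma2^k."""
    return math.prod(range(1, 2 * k, 2)) * sigma2 ** k

# Hypercontractivity for Gaussian residuals holds with tau = 105:
# E[Z^8] = 105 * (E[Z^2])^4, regardless of the variance sigma2.
for sigma2 in (0.5, 1.0, 3.0):
    lhs = gaussian_even_moment(sigma2, 4)             # E[Z^8] = 105 * sigma2^4
    rhs = 105 * gaussian_even_moment(sigma2, 1) ** 4  # 105 * (E[Z^2])^4
    assert math.isclose(lhs, rhs)
```

The check passes for any variance, which is exactly why the assumption holds uniformly over predictors $w$ in the Gaussian linear regression example.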
_Some_ type of concentration/anticoncentration assumption must always be made for any nonasymptotic guarantees on the test error to be possible, simply to avoid degenerate situations. For example, consider two cases: (1) Y = 0 always, or (2) the true Y equals 0 with probability 1 - \xi and equals 1/\xi otherwise. For \xi -> 0 with a finite number of samples, we will not observe a sample where Y is nonzero, so we cannot distinguish situations (1) and (2). However, in situation (1) 0 is a perfect predictor of Y, whereas in situation (2) it suffers a very large squared loss. Standard assumptions like boundedness or hypercontractivity fix the problem because they rule out situation (2). > Page 4, line 135: I do not see why the distribution of $\langle w, x \rangle$ only depends on $w^T \Sigma w$. By $x \sim \mathcal{N}(\mu, \Sigma)$ as in Assumption (A), wouldn't it depend on $w^T \mu$ and $w^T \Sigma w$? You are right, this is a typo and it should also depend on $w^T \mu$ and $w^T \Sigma w$. The point is that it only depends on these O(k) many quantities instead of O(d) many. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for addressing all my concerns!
Summary: The paper introduces sharp uniform convergence guarantees for generalized linear models in Gaussian space for square-root Lipschitz losses. These results extend the scope of previous findings and open up possibilities for applying the derived loss bounds in new contexts. Strengths: 1.The paper is well-written, providing clear explanations of its contributions in relation to previous work. 2.The paper achieves fast convergence rates for a broader class of losses, surpassing previous research in this area. 3.By simplifying the assumptions compared to previous work, the paper is able to derive new bounds applicable to scenarios where solving the Moreau envelope of the loss function is challenging or not possible in closed-form. Weaknesses: While the authors discuss potential extensions of the Gaussian feature assumption, the paper still focuses on the setting of Gaussian data, similar to previous work. Additionally, the proof technique employed in the paper is not entirely novel, as it follows prior works that utilize the Gaussian Minimax Theorem for establishing uniform convergence. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! We would like to discuss two points from your comments: (1) the Gaussianity assumption and (2) the novelty of the proof techniques. *Regarding Gaussianity.* The Gaussianity assumption is indeed a significant restriction, and deviates from much of the classical distribution-free statistical learning literature. But assuming Gaussianity allows obtaining much tighter guarantees, with tight numerical constants (see [Koehler et al ‘21, Zhou et al ‘21]). Gaussianity is also widely assumed and used in analysis of many statistical learning and inference problems, e.g. sparse recovery [e.g. Stojnic ‘13, Chandrasekaran et al ‘12 https://arxiv.org/abs/1012.0621, …], phase retrieval [e.g. Mondelli and Montanari ‘18, Barbier et al ‘19], logistic regression [e.g. Candes-Sur ‘20 https://arxiv.org/abs/1804.09753], many other works such as those using Approximate Message Passing, and even analysis of deep learning [e.g. Soltanolkotabi ‘17 https://arxiv.org/abs/1705.04591] (although most of these were also studied, usually with weaker guarantees, without assuming Gaussianity). Furthermore, much of the statistical physics based analysis relies on “Gaussian universality”, which essentially also relies on the data behaving as if it were Gaussian [e.g. see Hu-Lu ‘23, Bayati et al ‘12 https://arxiv.org/pdf/1207.7321.pdf for some discussion and related rigorous results]. And so, although we agree this is a significant restriction, and it would be very interesting to relax this assumption, we still believe that results relying on Gaussianity are interesting and useful, both in their own right and as a step toward more general analysis. Specifically in the context of this work, by working with Gaussian data we were able to discover several interesting phenomena (e.g.
benign overfitting in ReLU regression) which were not at all obvious beforehand (for example, that the consistent loss for interpolation in ReLU regression is (13), that this loss is sqrt-Lipschitz, and that it satisfies an “optimistic rate” bound). With a view towards understanding all of these new phenomena beyond the Gaussian setting, we have included Section 7, which reveals a situation where a naive generalization of our optimistic rates bound to non-Gaussian data is false, but a correct & more sophisticated generalization (equation (28)) works. *Regarding the novelty of the analysis.* Besides the proof of the main generalization bound (which has some new elements, see reply to reviewer 8w1y), there is a very substantial amount of technical content contained within sections 5-7 which goes well beyond the previous literature. As a reminder, a very impressive theoretical understanding of benign overfitting in linear models (e.g. kernel machines) has emerged over the past few years. But studying (even mildly) nonlinear models has been a significant challenge since they do not seem as amenable to random matrix theory methods. One of the most important (and surprising!) realizations we had in this work was how to analyze benign overfitting in nonlinear models like phase retrieval & ReLU regression. At a technical level, this comes from the discovery of 1. the construction described in equation (8) which gives a low-norm interpolator for these nonlinear models, and 2. the closely related discovery of the correct ‘consistent losses’ for these problems. This was not at all obvious a priori! (E.g. we are not aware of anybody discovering the consistent loss (13) for interpolating ReLU regression before this work, or realizing that the Bartlett et al conditions for linear regression should also be sufficient for ReLU regression.)
The extensions of the theory to matrix sensing, simple neural networks, and the non-Gaussian setting of Section 7 are also serious new contributions to the literature which cannot be obtained from previous work. In summary, the fact that sqrt-Lipschitz losses satisfy a very sharp optimistic rates bound was one key finding, but the fact that combining this bound with the _right_ choice of sqrt-Lipschitz losses lets us understand so many new phenomena is the deeper conceptual message of this work. --- Rebuttal Comment 1.1: Title: Answer to the rebuttal Comment: Thanks to the authors for addressing my concerns!
Summary: This paper investigates optimistic rates under square-root Lipschitz losses and Gaussian data. Applications to phase retrieval, ReLU regression, matrix sensing, and single-index NNs are given. Strengths: The paper extends the analysis of optimistic rates to square-root Lipschitz losses, going beyond the classical smoothness assumption, which is interesting. Weaknesses: The authors claim that the obtained results provide a better understanding of benign overfitting, but the paper needs more explanations and comments on the main results. As an example, the bound of eq. (10) features ten distinct terms. It would be nice to discuss which terms refer to what and how they compare with existing terms. Another aspect I need clarification on is the notion of uniform convergence. In traditional statistical learning, uniform convergence controls the rate of decay of |R(f) - \hat{R}_n(f)| for all f \in F and every distribution in a given class. On the other hand, the rate in Th.1 features (1-\varepsilon) R(f) on the LHS. I am unfamiliar with the benign overfitting literature (where this could be standard). However, I invite the authors to add a discussion on this to broaden the paper's audience. As for the assumptions, I think Gaussian data is quite restrictive. In modern applications, it is not uncommon to have abundant data but poor quality. In these cases, the data distribution features tails fatter than any sub-Gaussian (and even sub-exponential). Assumption (C) is related to the L_4 norm of the loss. It looks formidable to verify in practice. Even for sub-Gaussian losses, the L_4 norm would be proportional to \sqrt{4}*\sqrt{VAR(loss)}, while the condition requires proportionality to E[loss]. Please provide a discussion on these aspects. I may raise my score accordingly. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please provide more discussion on the highlighted weaknesses. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Assuming the data are Gaussian is limiting. Please provide examples when Assumption (C) is satisfied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback and questions! > As an example, the bound of eq. (10) features ten distinct terms... It would be nice to discuss which terms refer to what... The term \rho corresponds to the deviation between the train error and test error of the reference predictor w^# . The terms appearing on the rhs of the inequality for \epsilon are small assuming Bartlett et al’s “benign overfitting conditions” on the covariance matrix \Sigma. Basically, Bartlett et al’s conditions require that \Sigma can be split into a low rank part and an orthogonal component with small trace and large effective rank (R(\Sigma^{\perp})). The parameter k lets us control the split between the low-rank part and the rest of \Sigma. Though the resulting rhs of equation (10) may appear mysterious at first sight, once we combine it into the generalization inequality to get equation (11) there is a lot of cancellation and we can see that the rhs of (10) is exactly the right size to guarantee consistency under benign overfitting conditions. We also see that the dependencies involved with \epsilon were necessary, because if benign overfitting conditions aren’t satisfied we shouldn’t be able to arrive at equation (11) (the conditions of Bartlett et al are close to necessary for benign overfitting, besides being sufficient). > Another aspect I need clarification on is the notion of uniform convergence... the rate in Th.1 feature (1-\varepsilon) R(f) on the LHS ... The (1 - \varepsilon) factor in the bound is common in statistical learning theory — it is an elegant way to state uniform convergence bounds for classes of functions where the risk R(f) can vary over different scales. If we wanted to, we could rearrange the bound to be of the form R(f) - \hat{R}_n(f) <= \varepsilon R(f) + … and if we assume an a priori bound on R(f) (e.g. assume the class of functions is bounded) this bound will be exactly of the form you suggest. 
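For concreteness, the rearrangement described above can be written out (with $B$ standing in for the Rademacher complexity and remaining terms on the right-hand side, a placeholder symbol not used in the paper):

```latex
(1-\varepsilon)\, R(f) \;\le\; \hat{R}_n(f) + B
\quad\Longleftrightarrow\quad
R(f) - \hat{R}_n(f) \;\le\; \varepsilon\, R(f) + B ,
```

so under an a priori bound $R(f) \le M$, the right-hand side becomes the familiar additive form $\varepsilon M + B$.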
Stating the inequality with the \varepsilon R(f) term is better, because as we obtain better upper bounds on R(f) the guarantee from the rhs improves. (This is related to “localization” in statistical learning.) As far as the historical origin, if you look at Theorem 13 in the appendix (which is from Vapnik 1982) you can see that it has a (1 - \varepsilon) factor in the same way on the rhs. Or for another example, the main result in “Extending the scope of the small-ball method” [Mendelson ‘20] also has such a factor, and that paper has a lot of interesting discussion about these types of things. > As for the assumptions, I think Gaussian data is quite restrictive. In modern applications, it is not uncommon to have abundant data but poor quality. In these cases, the data distribution features tails fatter than any sub-Gaussian (and even sub-exponential). We agree that assuming the data is Gaussian is restrictive and relaxing it is a great direction for future work. Assuming Gaussianity is often a very helpful first step for proving more general results in this area — first we figure out what is true in the Gaussian case, and then we can try to extend it to more general distributions. For this same reason, we included the discussion in Section 7 to illustrate a situation where this type of universality fails, which we hope will help guide future research. (See the response to reviewer 949a for more discussion.) In [Zhou et al ‘22], there are related experimental results supporting the belief that sharp generalization theory from the Gaussian case should have more broadly applicable analogues — these experiments include very heavy-tailed distributions like the ones you mention. > Assumption (C) is related to the L_4 norm of the loss... Please provide examples when Assumption (C) is satisfied. Assumption (C) is a common assumption in the statistical learning theory literature which goes by a few different names, such as hypercontractivity or ‘norm equivalence’. 
To understand this assumption, it helps to think of a simple example, so consider for a moment linear regression with the usual squared loss and Gaussian noise & covariates. Then the assumption says that $E[(Y - <w, X>)^8] <= \tau E[(Y - <w, X>)^2]^4$ for any predictor w. Since the law of Y - <w, X> is a Gaussian distribution with a certain variance, this is true simply because a Gaussian random variable Z satisfies E[Z^8] <= 105 E[Z^2]^4. More generally, hypercontractivity will be true for the squared loss if the class of functions is subgaussian in the sense of [Lecue-Mendelson ‘13] (https://arxiv.org/abs/1305.4825). Hypercontractivity is a weaker assumption since it doesn’t require the existence of arbitrarily large moments of the distribution. We want to emphasize that we are not proposing hypercontractivity as a new assumption — it is an existing and well-known assumption which makes dealing with the ‘low-dimensional concentration’ part of our analysis clean and straightforward. Because the original work of [Vapnik ‘82] studied the same assumption, we can directly cite his results. However, as discussed in Appendix B.3 and [Zhou et al ‘22], it is also possible to apply other results from statistical learning theory to handle the low-dimensional concentration part of the argument, and this would yield different versions of the main theorem. _Some_ type of concentration/anticoncentration assumption must always be made for any nonasymptotic guarantees on the test error to be possible. For example, consider two cases: (1) Y = 0 always, or (2) the true Y equals 0 with probability 1 - \xi and equals 1/\xi otherwise. For \xi -> 0 with a finite number of samples, we will not observe a sample where Y is nonzero, so we cannot distinguish situations (1) and (2). However, in situation (1) 0 is a perfect predictor of Y, whereas in situation (2) it suffers a very large squared loss. 
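The Gaussian moment bound in the example above (E[Z^8] <= 105 E[Z^2]^4, which is in fact an equality for centered Gaussians) follows from the closed-form even moments E[Z^{2k}] = (2k-1)!! * sigma^{2k}. The helper below is a small illustrative sketch, not code from the paper:

```python
from math import prod

def gaussian_even_moment(k, sigma=1.0):
    """E[Z^(2k)] for Z ~ N(0, sigma^2), via the double factorial (2k-1)!!."""
    double_factorial = prod(range(2 * k - 1, 0, -2))  # (2k-1)(2k-3)...1
    return double_factorial * sigma ** (2 * k)

# The ratio E[Z^8] / E[Z^2]^4 equals 105 regardless of the variance,
# matching the hypercontractivity constant quoted in the rebuttal.
for sigma in (0.5, 1.0, 3.0):
    ratio = gaussian_even_moment(4, sigma) / gaussian_even_moment(1, sigma) ** 4
    assert abs(ratio - 105) < 1e-9
```

The ratio being scale-free is exactly why the assumption holds with a universal constant (tau = 105) in this Gaussian example.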
Standard assumptions like boundedness or hypercontractivity fix the problem because they rule out situation (2). --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing some of my concerns, especially regarding the hypercontractivity.
Summary: In the paper, the authors extend the theory of optimistic rates, a type of sharp uniform convergence guarantee, from the square loss to square-root-Lipschitz losses, enabling new applications like phase retrieval. The authors show uniform convergence for the multi-index model with Gaussian features when the loss function is non-negative, square-root Lipschitz, and satisfies hypercontractivity. The convergence is controlled by the Rademacher complexity of the hypothesis class and the square-root Lipschitz constant. The authors then show how this result can be applied to applications including phase retrieval, ReLU regression, matrix sensing, and single-index neural networks. Compared to other loss function classes considered in the literature for uniform convergence, the square-root Lipschitz class is more general in that it includes certain nonsmooth non-convex functions, and it provides better intuition for where the square-root comparison between loss functions shows up. The authors also provide a counter-example to argue that Gaussian universality cannot always be taken for granted, and why the result can be over-optimistic for non-Gaussian data. Strengths: 1. This paper goes beyond Lipschitz and smooth loss functions and extends the analysis to square-root-Lipschitz losses. This extension is valuable as it includes the square loss and can address nonsmoothness. Also, it is easier to verify compared to the Moreau envelope condition used in [Zhou et al. 2022]. As the authors have shown in section 5, their results can be applied to show benign overfitting for a wide range of real-world scenarios, yielding consistent results compared to past literature. 2. 
The concept of square-root Lipschitz losses is clean to state, and it indeed provides good intuition for the square-root relationship between losses that showed up in previous optimistic rates analyses ([Zhou et al. 2020]). As a result, this new loss function class does seem to capture some intrinsic principle of the problem of optimistic rates and does contribute intuitions for the community. This expansion of applicable loss functions enhances the paper’s relevance and potential impact. 3. Application in Neural Networks: The paper’s extension of norm-based bounds and optimistic rates to weight-tied neural networks contributes to the field of neural network research. By providing a generalization bound that can be combined with the algorithmic outputs of Bietti et al., the authors offer practical insights for optimizing and understanding the non-convex optimization of these network architectures. Weaknesses: 1. The major concern with the paper is that its technical contribution is limited. It is mostly built upon previous optimistic rate results ([Zhou et al. 2021, Zhou et al. 2022]). Upon reading the appendix and related literature, the reviewer is on the side that the paper serves as a simplification and rewriting of some past results (except for the fact that it generalizes previous results by including an extra (network) parameter $\theta$). Still, the generalization result over general square-root Lipschitz losses is interesting, but note that there might not be enough technical contribution in the paper for the main theorem. The norm bounds in the applications are novel. 2. The paper itself is not presented in a way that highlights its contribution either. The paper has done a good job with the literature review, but it has spent more space introducing the results instead of providing intuitions for why the results hold true. 
This can be understandable as the space for the main submission is rather limited and there are many theorems to be stated in a single paper. Still, the reviewer thinks it could be beneficial to spend a bit more space on proof outlines. 3. There are some other limitations of the work, including the Gaussian feature assumption and the multi-index model for neural networks. The Gaussian feature assumption can serve as a good starting assumption for the theory, and the authors have addressed the limitation of this assumption appropriately. For neural networks, the multi-index model is a limited shallow neural network. In general, the Rademacher complexity for deep neural networks is normally only a loose estimate and thus results in unrealistic generalization bounds. Still, the multi-index model has been studied extensively recently, so the concern about this limitation is just a minor one. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How should I think about the technical contribution of the paper? Specifically, how should I consider the difference between the proof here for square-root Lipschitz functions and the proof for a general Moreau envelope? 2. The generalization result over the single-index model is interesting, where you have shown that $\max_{j \in [N]} |\sum_{i = 1}^j a_i| \|w\|$ is a good complexity measure. Potentially combined with known results on the single-index model, how large should this quantity be? In some way, I think this can be considered future work to achieve end-to-end results for weight-tied neural networks, so it is fine if there is no clear answer here. I am just trying to understand whether the generalization bound achieved here is tight in some sense. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitation of the paper appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! > How should I think about the technical contribution of the paper? Specifically, how should I consider the difference between the proof here for the square-root Lipschitz functions and the proof for a general Moreau envelope? First, let us answer the specific question about the proof of the optimistic rates bound. Comparing the proof of the generalization bound with the Moreau envelope framework, we definitely needed a new analysis to avoid having unnecessary assumptions involving the Moreau envelopes of f appear in Theorem 1. The new proof still uses key ingredients like the GMT, but after applying the GMT we have to be careful. Essentially, instead of trying to solve the auxiliary problem exactly, we show how to bound the auxiliary problem using analytic properties of square-root Lipschitz functions (in particular, allowing us to appeal to the useful calculus lemmas 8 and 9 in the appendix). Next, we would like to discuss the key technical contributions of our paper, which do not end with establishing the optimistic rates bound. As a reminder, a very impressive theoretical understanding of benign overfitting in linear models (e.g. kernel machines) has emerged over the past few years. But studying (even mildly) nonlinear models has been a significant challenge since they do not seem as amenable to random matrix theory methods. One of the most important (and surprising!) realizations we had in this work was how to analyze benign overfitting in nonlinear models like phase retrieval & ReLU regression. At a technical level, this comes from the discovery of 1. the construction described in equation (8) which gives a low-norm interpolator for these nonlinear models, and 2. the closely related discovery of the correct ‘consistent losses’ for these problems. This was not at all obvious a priori! (E.g. 
we are not aware of anybody discovering the consistent loss (13) for interpolating ReLU regression before this work, or realizing that the Bartlett et al conditions for linear regression should also be sufficient for ReLU regression.) The extensions of the theory to matrix sensing, simple neural networks, and the non-Gaussian setting of section 7 are also serious new contributions to the literature which cannot be obtained from previous work. In summary, the fact that sqrt-Lipschitz losses satisfy a very sharp optimistic rates bound was one key finding, but the fact that combining this bound with the _right_ choice of sqrt-Lipschitz losses lets us understand so many new phenomena is the deeper conceptual message of this work. > The generalization result over the single-index model is interesting, where you have shown that $\max_{j \in [N]} |\sum_{i = 1}^j a_i| \|w\|$ is a good complexity measure. Potentially combined with known results on the single-index model, how large should this quantity be? In some way, I think this can be considered future work to achieve end-to-end results for weight-tied neural networks, so it is fine if there is no clear answer here. I am just trying to understand whether the generalization bound achieved here is tight in some sense. End-to-end results are definitely an interesting direction for future work, for example combining our generalization theory techniques with some of the algorithmic ideas in the literature. As far as tightness, if we choose the b_i appropriately then we can ensure that most of the data falls into a single linear region of the network, and $|\sum_{i = 1}^j a_i| \|w\|$ for the corresponding value of j will be the norm of the corresponding linear predictor on this region. So in this case the bound reproduces the existing sharp norm-based generalization bound for linear models from previous work (which in turn was shown to recover benign overfitting in linear models etc.), and in this sense the bound seems pretty sharp. 
--- Rebuttal Comment 1.1: Title: Score Changed Comment: The reviewer thanks the authors for their detailed explanation. I fully agree with the authors that all the applications discussed in the paper are interesting and important. As a result, even though I still consider the technical contribution of the sharp generalization bound with sqrt-Lipschitz losses to be limited (the connection, and the result itself, are for sure very attractive), the paper provides a nice perspective on studying nonlinear models. I am willing to increase my score from 6 to 7.
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, generic uniform convergence guarantees are provided for Gaussian data in terms of the Rademacher complexity of the hypothesis class and the Lipschitz constant of the square root of the scalar loss function. The square-root Lipschitz loss is an important class of loss functions to study because it is a suitable loss for studying interpolation learning in cases like phase retrieval and matrix sensing. The authors obtain an optimistic rate in Theorem 1 of the paper, and the analysis is mostly inspired by the previous work of Zhou et al. 2022. Then the authors discuss multiple use cases and provide generalization error bounds for over-parametrized phase retrieval, matrix sensing, and single-index weight-tied neural networks. Strengths: The paper is technically a good paper. Though I am not an expert in this area, it looks to me like this paper modified the analysis in the previous recent work by Zhou et al. 2022 to obtain new overfitting results. Weaknesses: I have a few questions; some might seem very trivial, and please correct me if I am wrong. 1. I can see that the paper utilizes the Gaussian minimax theorem that is explained in Appendix B.2. However, this still seems like a very restricted setting to me. What happens when one assumes that the data come from a more general distribution? Previous results (though not applicable to square-root Lipschitz losses) have no major distributional assumption. 2. I am also curious to know how realistic the overparametrization setting is in phase retrieval. 3. I am also curious to understand the results on the single-index neural network model, as the paper cited in this work (Bietti et al.) and other related papers in the domain seem to have the stronger guarantee of recovering the direction using spherical SGD or gradient flow. In this paper, the guarantees are for generalization errors. 
Is there a way to compare these two results, as the results in the other papers, where they recover the optimal direction, sound stronger to me? As a minor comment, it would also be great if the authors could discuss and compare the results in sections 5 and 6 with previously known results. A discussion would be very helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback. The Gaussianity assumption is indeed a significant restriction, and deviates from much of the classical distribution-free statistical learning literature. But assuming Gaussianity allows obtaining much tighter guarantees, with tight numerical constants (see [Koehler et al ‘21, Zhou et al ‘21]). Gaussianity is also widely assumed and used in the analysis of many statistical learning and inference problems, e.g. sparse recovery [e.g. Stojnic ‘13, Chandrasekaran et al ‘12 https://arxiv.org/abs/1012.0621, …], phase retrieval [e.g. Mondelli and Montanari ‘18, Barbier et al ‘19], logistic regression [e.g. Candes-Sur ‘20 https://arxiv.org/abs/1804.09753], many other works such as those using Approximate Message Passing, and even analysis of deep learning [e.g. Soltanolkotabi ‘17 https://arxiv.org/abs/1705.04591] (although most of these were also studied, usually with weaker guarantees, without assuming Gaussianity). Furthermore, much of the statistical physics based analysis relies on “Gaussian universality”, which essentially also relies on the data behaving as if it were Gaussian [e.g. see Hu-Lu ‘23, Bayati et al ‘12 https://arxiv.org/pdf/1207.7321.pdf for some discussion and related rigorous results]. And so, although we agree this is a significant restriction, and it would be very interesting to relax this assumption, we still believe that results relying on Gaussianity are interesting and useful, both in their own right and as a step toward more general analysis. Specifically in the context of this work, by working with Gaussian data we were able to discover several interesting phenomena (e.g. benign overfitting in ReLU regression) which were not at all obvious beforehand (for example, that the consistent loss for interpolation in ReLU regression is (13), that this loss is sqrt-Lipschitz, and that it satisfies an “optimistic rate” bound). 
With a view towards understanding all of these new phenomena beyond the Gaussian setting, we have included Section 7, which reveals a situation where a naive generalization of our optimistic rates bound to non-Gaussian data is false, but a correct & more sophisticated generalization (equation (28)) works. > I am also curious to know how realistic the overparameterization setting is in phase retrieval. The main significance of the phase retrieval model is that it is one of the simplest and most canonical nonlinear models, and so it is valuable to rigorously understand benign overfitting there before proceeding to more complex models like deep networks. Previous works have certainly studied the behavior of phase retrieval in high-dimensional limits with many parameters where overfitting occurs (e.g. the cited work of Maillard et al) and understanding the behavior under overparameterization seems like a natural goal, just as it was with linear regression. > I am also curious to understand the results on the single-index neural network model, as the paper cited in this work (Bietti et al.) and other related papers in the domain seem to have the stronger guarantee of recovering the direction using spherical SGD or gradient flow. In this paper, the guarantees are for generalization errors. Is there a way to compare these two results, as the results in the other papers, where they recover the optimal direction, sound stronger to me? Besides the fact that we are both studying weight-tied neural networks, the results seem to be incomparable. Ours is a generalization bound, so it would apply to any algorithm/estimator and we expect it is pretty quantitatively sharp (see also the response to reviewer 8w1y). On the other hand, we didn’t analyze any polynomial time algorithm. 
The previous works you mentioned analyzed a particular algorithm and proved some guarantees for it (their main goal is algorithmic efficiency, so they did not carefully analyze features like constant factors in the number of samples, which we care about in this work). There is probably a lot of interesting future work that could be done combining these types of algorithmic + statistical analyses. > As a minor comment, it would also be great if the authors could discuss and compare the results in sections 5 and 6 with previously known results. A discussion would be very helpful. As far as section 5 is concerned, there was a lot of previous work on matrix sensing, ReLU regression etc., but we are the first to establish these types of results, i.e. to prove (1) sufficient conditions for benign overfitting in these models and (2), as the key ingredient to achieve (1), very sharp generalization and norm bounds in these models. If we only wanted to prove _some_ generalization bound for say phase retrieval, we could apply standard tools from statistical learning theory like symmetrization+contraction, but they are very wasteful since our loss is not Lipschitz, but instead sqrt-Lipschitz. Regarding section 6, we have not seen a generalization bound of quite an analogous form (we are taking advantage of the weight-tied structure here). For non-weight-tied neural networks, it is possible to state generalization bounds in terms of the l1 norm of the weights (which could be bigger than the maximum partial sum which appears in our bound), and this type of analysis of neural networks dates back at least to the [Bartlett ‘96] reference.
REx: Data-Free Residual Quantization Error Expansion
Accept (poster)
Summary: The paper proposed REx, which allows the flexibility to find a PTQ recipe given a speed/accuracy tradeoff by computing the residual expansion of weights and activations in a data-free manner. The method minimizes the error between the FP16 and INT quantizations by computing and adding the quantized residual errors. The authors further propose to reduce the overhead with selective error computations by using the parameter magnitude as a proxy for parameter importance. Additionally, the authors also provide a theoretical error upper bound for a given expansion and show the tightness of the bound. ### Post Rebuttal Update I have increased my score to 5 -- Borderline accept. Strengths: The proposed PTQ method is data-free and the idea is quite interesting. The authors provide theoretical backing for the error upper bound for a given expansion, which is greatly appreciated. The paper compares with a fair number of previous approaches. Weaknesses: Figure 2 is extremely difficult to read due to the color scheme (especially for someone who might be colorblind). I would sincerely request the authors to improve the color scheme. The authors claim that the residual expansions can be made sparse using the norm of the parameters as a selection criterion, which might lead to unstructured sparsity. Especially in Table 1, where the sparsity is 50%, this would lead to high inference latencies unless the target device supports unstructured sparsity (which current GPUs do not). Why would this method be preferred over existing methods? I am not sure I understand the BOPS metric. Hardware targets only support certain bit widths which have a constant cost of operations, i.e., INT4 cores would take the same amount of time even if you use a W3/A3 recipe. I don't think comparing existing methods at equivalent BOPS is fair. For LLMs, the authors claim that the residue with W1/A16 has virtually no cost. 
I am not sure I follow this, even with a 1-bit residue -- assuming someone has optimized the weight loading with cache-line optimizations that have minimal impact on weight loading times, performing a large number of FP16 operations will add significant computational overhead when the models are compute-bound. In Line 54, the authors claim a budget of $\gamma$ for the computational overhead but instead use a budget of $\gamma$ for the weight overhead. I would like to point out that a given weight-overhead budget does not translate to an equal computational overhead [1], which is usually larger. In general, there is no discussion on the impact on inference latencies of using this strategy, which *significantly* diminishes the impact of this work. The writing of the paper can be improved. Some of the errors I noticed were: shall -> should in line 8, in -> of in line 83 [1] - https://dl.acm.org/doi/abs/10.1145/3400302.3415679 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I am not convinced about BOPS and how comparing existing methods at equivalent BOPS is fair. I would like to see some results on latency using this strategy. Latency (and size) of models are the true real-world metrics, and I think BOPS is a poor proxy for latency and limits the applicability of this method. I would like to understand how the authors propose to deal with the unstructured sparsity that the method introduces. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
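The residual-expansion idea summarized in this review (quantize the weights, then quantize the leftover quantization error and add the dequantized terms back) can be sketched as follows. The symmetric uniform quantizer and all names here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def fake_quantize(x, bits=4):
    """Symmetric uniform quantize-dequantize (a generic, assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    if scale == 0:
        return np.zeros_like(x)
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def residual_expansion(w, order=2, bits=4):
    """Sum of `order` quantized terms; each extra term quantizes the error
    left by the previous terms, so the reconstruction error shrinks."""
    approx = np.zeros_like(w, dtype=np.float64)
    residual = w.astype(np.float64).copy()
    for _ in range(order):
        term = fake_quantize(residual, bits)
        approx += term
        residual -= term
    return approx

w = np.random.default_rng(0).standard_normal(512)
errs = [np.linalg.norm(w - residual_expansion(w, order=k)) for k in (1, 2, 3)]
assert errs[0] > errs[1] > errs[2]  # each residual term improves fidelity
```

Each additional term costs extra low-bit weights and operations, which is the speed/accuracy knob the review describes; the sparse variant discussed above would keep residual terms only for the largest-magnitude parameters.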
Rebuttal 1: Rebuttal: **figure 2** We updated Figure 2 to make it more readable (please see the pdf attached to our general comment) **bops evaluation and latency** In order to address your major concern regarding performance, we provide a latency evaluation in the general comment showing the good performance of the proposed REx method on both CPU (with limited bit-width support: int8 only) and GPU (with support for multiple bit-widths: int1, int4 and int8). In our original work, we focused on BOPS due to its simpler reproducibility for future research. However, we acknowledge that the proposed REx method does benefit from these new results. **sparsity: granularity and latency** We think that there was a misunderstanding regarding the sparse expansion and its support. In Table 1 and all other results (aside from LLMs), the introduced sparsity is applied at the neuron level (l154). Thus, the sparsity is structured and is rather straightforward to leverage on hardware devices. This is confirmed by empirical evidence from the latency evaluation (please see Tables 1 and 2 in the general comment). On the other hand, the sparse (unstructured) expansion for LLMs does need support for sparse matrix multiplications. While NVIDIA GPUs only support semi-structured pruning, we do believe that this format can offer greater benefits, as we evaluate it on CPU. We show that the sparse binary expansion only adds a 0.26\% overhead in terms of latency which, in our opinion, further supports our initial claim on the negligibility of this operation. We hope that these clarifications convince you of the interest of the proposed ideas. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification. I read through all the other reviews and the authors' replies -- the comparison with SmoothQuant is also appreciated. The latency numbers look good and are much more convincing than BOPS.
> For LLMs, the authors claim that the residue with W1/A16 has virtually no cost. I am not sure I follow this, even with a 1-bit residue -- assuming someone has optimized the weight loading with cache-line optimizations that have minimal impact on weight loading times, performing a large number of FP16 operations will add significant computational overhead when the models are compute-bound. This still looks misleading and should be modified based on the authors' response describing the specific circumstances in which it is true. > In Line 54, the authors claim a budget of $\gamma$ for the computational overhead but instead use a budget of $\gamma$ for the weight overhead. I would like to point out that a given weight-overhead budget does not translate to an equal computational overhead [1], which is usually larger. Can the authors please clarify this? I would also like the authors to mention that the speedup for the unstructured sparse expansion is dependent on the target hardware. Based on all the additional evidence, I would like to increase the score to 5. --- Reply to Comment 1.1.1: Title: Response to Reviewer m1LK Comment: We would like to thank the reviewer for their constructive feedback and reactivity. We agree with all the remarks that were made and will make the requested changes, should the paper be accepted for publication. In the specific context of REx, the budget $\gamma$ for the weight overhead is very similar to the corresponding computational overhead, because: 1. the proposed method increases the layer widths without increasing the output tensor sizes (due to the reduction from summing the residuals after each layer); 2. the extra computations have an appropriate structure, e.g. a 50\% overhead from REx on a layer with a power-of-2 number of neurons will add a number of computations that is often well supported by the device.
Empirically, we have measured the latency induced by a small budget $\gamma < 75$% (at least $25$% sparsity) and seen that the parallelization capacities of modern hardware lead to an even smaller latency overhead. We will also add this element to the implementation section of the revised manuscript.
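The neuron-level (structured) sparsity discussed in this exchange — whole output neurons kept or dropped, ranked by the norm of their weights — can be sketched as follows. This is an illustrative reconstruction under assumed names (`neuron_level_mask` is hypothetical), not the paper's code.

```python
import numpy as np

def neuron_level_mask(w, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of output neurons (rows of w),
    ranked by the L2 norm of their weights. Whole rows survive or vanish,
    so the resulting sparsity is structured and hardware-friendly."""
    norms = np.linalg.norm(w, axis=1)
    n_keep = int(round(keep_ratio * w.shape[0]))
    kept = np.argsort(norms)[::-1][:n_keep]   # indices of the largest-norm rows
    mask = np.zeros(w.shape[0], dtype=bool)
    mask[kept] = True
    return mask

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64))            # 128 output neurons, 64 inputs
mask = neuron_level_mask(w, keep_ratio=0.5)
sparse_expansion = w * mask[:, None]          # dropped neurons become zero rows
```

Because entire rows are removed, the pruned layer is simply a narrower dense layer, which is why this form of sparsity maps directly onto standard hardware.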
Summary: In this paper, the authors propose a flexible quantization method called REx, which utilizes residual error expansion to further reduce quantization error. Additionally, they suggest applying group-sparsity to reduce the computational cost associated with the residual error expansion. Existing DFQ methods have a limitation in terms of flexibility when it comes to quantizing based on the representation format supported by hardware, considering the trade-off between accuracy and speed-up. However, the REx method overcomes this limitation and provides a better trade-off. Furthermore, by calculating the theoretical upper bound of the quantization error during the residual error expansion process and validating it against the actual quantization error, the REx method shows that it can achieve quantization error lower than or similar to existing methods at fewer BOPs. The proposed REx method can be combined with recently developed quantization methods, leading to quantized models with improved accuracy. Strengths: * It demonstrates that the proposed REx method can find a better trade-off point between compression ratio and accuracy compared to existing methods through residual error expansion and group-sparsity expansion. * Theoretical and experimental evidence is provided for the upper bound of the quantization error that the proposed REx method can achieve, showing a theoretical basis for achieving better quantization error at the same BOPs. * The proposed method can be applied in a composite manner to various previously proposed quantization methods, and experiments on different models demonstrate improvements in accuracy. Weaknesses: * It appears necessary to compare the results in Table 3 with other papers on LLM quantization, such as OPTQ, LLM.int8, and SmoothQuant. * Throughout the paper, different bit-widths for weights and activations were used, which may present challenges in utilizing the formats supported by hardware.
However, there is a lack of discussion regarding this aspect. * There is a need for discussion on the criteria for dividing clusters and the benefits of higher sparsity in achieving better accuracy. * In order to discuss the trade-off between accuracy and speed, there should be a discussion on speed. The paper lacks discussion on this aspect. It would be beneficial to provide unit test results comparing the latency of group-sparsity expansion with INT GEMM + SpMM against conventional INT GEMM, as well as the throughput on the GPU. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. While Figure 2 represents the inference time in terms of BOPs, is there any analysis regarding the actual latency? I am curious about the analysis of the additional computational cost introduced by residual expansion and how much it can be reduced by group-sparsity expansion. 2. In Figures 2 and 3, when W2A8 is not supported by the hardware, additional packing and unpacking operations are required, which may incur overhead. It would be interesting to see how the graphs' trends change when considering this overhead. 3. In Table 5, there is a tendency for higher group sparsity to result in better performance. I am curious about the reasons behind this observation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper provided valuable insights by demonstrating the ability of the proposed method to find a better trade-off between compression ratio and accuracy. However, it is difficult to agree that it truly offers a superior approach in terms of the accuracy vs. speed trade-off.
Since INT GEMM relies on the formats supported by hardware, it is necessary to analyze the speed gain achievable when applying REx + group-sparsity in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **comparison to other methods on LLMs** In the table below, we provide a comparison between REx (with the PowerQuant method) and other post-training quantization methods designed for LLMs. We report performance in either W4/A16 or W8/A8 based on the available data points excerpted from the papers as well as from our own experiments (note that, as arguably the biggest bottleneck when quantizing LLMs is memory bandwidth, W4/A16 should be the favored format). | method | data-usage | average score | | :---: | :---: | :---: | | DFQ (W4/A16) | data-free | 51.47 | | REx + DFQ (W4/A16) | data-free | 53.76 | | REx + PowerQuant (W4/A16) | data-free | 54.81 | | OPTQ (W4/A16) | data-driven | 54.37 | | SmoothQuant (W8/A8) | data-driven | 53.81 | | LLM.int8() | data-driven | 54.11 | Our observations are threefold. First, contrary to OPTQ, we do not use group-wise quantization, which prevents the quantization of the activations to anything lower than 16 bits. Also, contrary to LLM.int8, we do not need mixed precision between float16 and int8 within the same tensor. And contrary to all of these methods, we do not use any data. Second, OPTQ offers the best performance overall, which can be attributed to both the use of group-wise quantization and the weight optimization step. Third, we believe that better performance could be achieved if we were to combine REx and SmoothQuant. Nevertheless, as such, REx+DFQ achieves very close performance to the other methods without suffering from the aforementioned limitations. Furthermore, if we use a recent non-uniform method as the quantization operator rather than DFQ, REx + PowerQuant [4] achieves the highest score. **actual speed** Thank you for your comment. We used the BOPS metric to provide results that are hardware-agnostic and easier to reproduce and compare with other work. However, we do agree that the method would greatly benefit from actual latency measurements.
Please see the general comments (Tables 1 and 2) for the latency results. **hardware bit-width support** As our results in the general comment suggest, when the hardware does not properly support the provided bit-width, packing and unpacking does introduce overhead, which impacts REx performance. In that example, we consider a CPU that only supports int8 and feed it int4 weight values. The resulting REx model is 0.658\% slower than its DFQ counterpart, with 4.65 points higher accuracy. All in all, if the hardware offers limited support, REx will still improve the accuracy of the final model. However, as our results on the GPU show, if the hardware does support more bit-widths (e.g. int1, int4 and int8), then REx significantly improves both accuracy and latency. **higher sparsity in table 5 leading to higher accuracy** There seems to be a misunderstanding here, due to a lack of clarity on our part. The sparsity indicated with respect to the proposed expansion, such as W$4_{+ 25\%}$/A4, means that we keep 25\% of the expansion values. In other words, the results provided in Table 5 are indeed intuitive, i.e. the larger the expansion, the higher the accuracy. This was stated in l154, where $\gamma$ refers to the kept overhead, or in other words the kept values. ## references [4] Yvinec, Edouard, et al. "PowerQuant: Automorphism Search for Non-Uniform Quantization." ICLR, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers and results. I have read the authors' rebuttal as well as other reviews. I would like to keep my rating. --- Reply to Comment 1.1.1: Title: response to reviewer mdRP Comment: We would like to thank you for your comment and we are glad to see that we have addressed your concerns. If you have any further questions, we will do our best to answer them.
Summary: In this work, the authors propose a novel quantization method called REx that leverages residual error expansion. The idea is to repeatedly apply quantization to the residual quantization error, which provides an increasingly better approximation of the original floating-point tensor as the number of steps grows. To combat the computational overhead from additional quantization steps, the authors combine the residual expansion with group sparsity. Two hyper-parameters of the method (the number of steps K and the sparsity rate $\gamma$) are used to control the accuracy vs. speed trade-off. The method is agnostic to the quantization operator itself and can be combined with many existing methods. Strengths: * Experiments on both CNNs (including some of the more difficult-to-quantize models like MobileNets and EfficientNets) and LLMs across various tasks + combination with several quantization operators from prior work. * Theoretical derivations on the upper bound of the quantization error + comparison with the empirical values. * Some attention to efficient hardware implementation (Section 4.1), e.g. instead of using K separate CUDA kernels for the error expansion step, use a single one with concatenated output channels, etc. Weaknesses: * Two hyper-parameters (K and $\gamma$) that have to be set. Even though $\gamma$ can be set to match a certain BOP target (as is done in the paper), the user still has to select a value for K. * I have some concerns related to the data-free nature of the method; please see and elaborate on the question below. * I would like to see error bars / standard deviations in some of the results, e.g. in Table 2. I wonder if the improvement is statistically significant (it is well known that some of the datasets from GLUE are quite small, which might lead to high standard deviations in the results).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The method is advertised as data-free; however, I don't see how that is the case in general. As far as I understood, it is data-free if the quantization operator Q itself is data-free (which is the case for DFQ, for instance). However, in most experiments, the underlying Q is not data-free (e.g., AdaRound, BRECQ, or even naive quantization with static range estimation). Could you elaborate? * Related to the previous comment, in the group sparsity, the importance of a neuron is defined by the norm of its weights (L151: assuming the data-free setting). Assuming we have access to activations and gradients, have you experimented with other importance metrics (e.g., gradient-based, FIT)? * I would like to know a bit more details on the outlier quantization in LLMs. How are outliers detected/defined? Will the suggested approach work with outliers that are both positive and negative? In Table 3, what gives the most improvement -- the method itself or the special treatment of the outliers (it would be nice to have an ablation study and/or some additional data on e.g. quantization error for tensors with outliers with W1 vs. W4)? * (Sorry if I missed it) Do you quantize embeddings, LayerNorm weights/biases, and the LLM head; are there any specific assumptions (e.g., first and/or last layer in 8-bit)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors mentioned one limitation: that the method does not adapt to per-layer importance and runtime cost discrepancies.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your interest in the method and for pointing out several elements that will improve the clarity of the paper, should it be accepted for publication. **set the hyper-parameters** As stated by the reviewer, REx uses two hyper-parameters, $K$ and $\gamma$. In our main results, where we compare REx to other quantization methods, we show that using $K=2$ is almost systematically enough in order to achieve better performance. Aside from LLMs, where we use a special setting (unstructured sparsity, removing 99\% of the parameters in the expansion to only account for outliers in said expansion), we use $50\%$ structured sparsity (easier to leverage in practice). This set of default hyper-parameters already outperforms existing data-free methods. Furthermore, our theory shows that the higher $K$ (and the lower $\gamma$, to keep the global budget), the better the performance, which indicates that further improvements can be expected when tuning these parameters. **error bars and significance** We agree that the GLUE benchmark does have variance in the observed accuracies. To answer your concern, we provide here updated versions of our results (we also include other tables in order to be exhaustive). These changes were added to the revised paper. | table | Uniform | SQuant | SPIQ | REx | | :---: | :---: | :---: | :---: | :---: | | updated table 2 (average scores) | 74.16 $\pm$ 0.08 | 74.68 $\pm$ 0.19 | 74.48 $\pm$ 0.35 | 75.00 $\pm$ 0.16 | Thus, the provided results are significant. On a side note: we did not update Tables 1 and 3, as the quantization process is deterministic and we only have a single pre-trained version of the models. **is the method data-free:** As stated by the reviewer, REx is data-free as long as the quantization operator also is.
Aside from Table 5, where we showcase the ability of the proposed method to also work outside the boundaries of data-free quantization, all of our tests were fully data-free. Thus, we believe that it is appropriate to call our approach data-free, as REx in itself does not use data and as such constitutes a data-free step in the quantization process. **other criteria for sparsity** To answer your concern, we tested other criteria on ResNet-50, such as _gradients_ and _weights $\times$ gradients_. The results are provided below: | sparsity method | weights norm | gradients | weights $\times$ gradients | | :---: | :---: | :---: | :---: | | accuracy (W$4_{+ 25%}$/A4) | 53.11 $\pm$ 0.26 | 52.89 $\pm$ 0.62 | 53.45 $\pm$ 0.37 | These methods provided little to no improvement, which we attribute to the fact that the sparsity goal here is not challenging enough to justify the use of more complex and costly criteria. We however acknowledge that this is an interesting future research direction. **how do we identify outliers** We identify outliers among weight values only (contrary to LLM.int8 [2], for example). The reason behind this choice lies in the fact that we do not want to slice tensors at inference and use fine-grained mixed precision, as this generally leads to significant overhead [3]. As we quantize in 4 bits, we define outliers as any value that is more than 6 standard deviations away from the average weight value (note that this definition is equivalent to the one used in LLM.int8, where the authors define an outlier as any value larger than 6.0, or $6 \times 1.0$, as features are reduced by the layer norms). Furthermore, in binary quantization, values are either $-1$ or $1$, which enables us to quantize both negative and positive outliers.
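The outlier treatment described above (a 6-standard-deviation threshold on weight values, and a sign-based 1-bit term that covers both positive and negative outliers) can be sketched as follows. This is a simplified illustration of the idea only — the range handling differs from the paper's exact expansion, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096)
w[0], w[1] = 50.0, -50.0                      # inject one outlier of each sign

def outlier_mask(w, n_std=6.0):
    """Flag weights more than n_std standard deviations from the mean."""
    return np.abs(w - w.mean()) > n_std * w.std()

mask = outlier_mask(w)

# 4-bit quantization whose range is set by the non-outlier weights (outliers clip)
levels = 2 ** (4 - 1) - 1
scale = np.abs(w[~mask]).max() / levels
q = np.clip(np.round(w / scale), -levels, levels) * scale

# 1-bit (binary) residual term kept only at the outlier positions:
# a sign in {-1, +1} times a shared scale handles both signs of outlier
residual = w - q
binary_term = np.zeros_like(w)
binary_term[mask] = np.sign(residual[mask]) * np.abs(residual[mask]).mean()
```

Adding `binary_term` back recovers the clipped outliers while leaving the dense 4-bit term untouched, which is the intuition behind keeping only a tiny fraction of a binary expansion.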
**LLM specific quantization** To specifically answer your question: we fold the weights and biases from the layer norms, we quantize all the fully-connected layers (including the head) and apply the same quantization to all layers (nothing specific to the first and last layers). We do assume that the softmax and reduction steps are performed in higher precision, as was done e.g. in I-BERT [1]. **We added those elements to the revised manuscript.** ## references [1] Kim, Sehoon, et al. "I-BERT: Integer-only BERT quantization." International Conference on Machine Learning. PMLR, 2021. [2] Dettmers, Tim, et al. "LLM.int8(): 8-bit matrix multiplication for transformers at scale." arXiv preprint arXiv:2208.07339 (2022). [3] Xiao, Guangxuan, et al. "SmoothQuant: Accurate and efficient post-training quantization for large language models." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Comment: Thanks a lot for clarifying my questions and commenting on my concerns. I do appreciate seeing that the results are significant, and also seeing the actual speed improvement from the general response. I read over all other reviews and the corresponding responses. Overall, considering all the strengths and weaknesses pointed out by the various reviewers, I believe the strengths do outweigh the weaknesses and therefore still lean towards accept. Therefore I keep my rating of weak accept.
Summary: This paper presents a low-bitrate post-hoc quantization method with three major techniques: residual expansion, input expansion, and sparse expansion. The techniques are theoretically guaranteed to improve the accuracy of the quantization, and experimental results show that they outperform competing approaches. Strengths: * Provides a theoretical bound for the accuracy-bitrate trade-off * Evaluation that compares the theoretical bound and empirical benchmarks. Weaknesses: Lack of (machine) runtime evaluation. The paper proposes to use quantization for better inference runtime, yet none of the benchmarks demonstrates the improvement. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the (machine) runtime on CPU and/or GPU? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: This paper uses public datasets and benchmarks, so it would not have a potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer again. We agree that the proposed study would benefit from an empirical analysis of the actual latency induced by REx. Hopefully the new results provided in the general comment address your concern. --- Rebuttal Comment 1.1: Comment: It would be helpful if the authors could expand the runtime evaluations to match the benchmark with BOPs comparisons. The authors indicate that the BOPs metric is a good proxy for real-world performance, but this connection is not well-supported. Even if the BOPs metric does provide a good proxy for actual performance, the precise definition of BOPs is lacking in this work. For future researchers to reference, please provide the formulation of BOPs. --- Reply to Comment 1.1.1: Title: Response to reviewer E57B Comment: We thank the reviewer for their comment and consideration of the provided runtimes. To address your questions: we use the generic definition of bitwise operations as in [1,2,3]. This is the standard in quantization, and we will include these references in the manuscript for future researchers. Regarding the connection between runtime and BOPs: the goal of the shared results is to highlight that, in the context of REx and other standard quantization approaches such as DFQ, the results given for equivalent BOPs (Table 1 in the paper) do translate into very close runtime performance, as shown in the common rebuttal. Similarly, our results for LLMs (Table 3 in the paper) at equivalent BOPs also translate into almost identical runtime, as shown in the second table of the rebuttal. We hope that this message clarifies our previous responses. In short, our rebuttal shows that, in the context of the benchmarks conducted in our study at equivalent BOPs, the empirical latency of the proposed REx is indeed almost identical to (or even lower than) that of other quantization schemes, while achieving a higher accuracy. ## References [1] Van Baalen, Mart, et al.
"Bayesian bits: Unifying quantization and pruning." Advances in neural information processing systems 33 (2020): 5741-5752. [2] Cai, Zhaowei, and Nuno Vasconcelos. "Rethinking differentiable search for mixed-precision neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. [3] Nikolić, Miloš, et al. "Bitpruning: Learning bitlengths for aggressive and accurate quantization." arXiv preprint arXiv:2002.03090 (2020).
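For reference, the generic bit-operations count used in mixed-precision works such as the ones cited above is the number of multiply-accumulates of a layer times the weight and activation bit-widths. A minimal sketch (the layer shape below is an arbitrary example, not taken from the paper):

```python
def layer_bops(macs, w_bits, a_bits):
    """Generic bit-operations count: BOPs = #MACs * weight bits * activation bits."""
    return macs * w_bits * a_bits

# example: a 3x3 conv with 64 input/output channels on a 56x56 feature map
macs = 64 * 64 * 3 * 3 * 56 * 56
fp16_bops = layer_bops(macs, 16, 16)
w4a6_bops = layer_bops(macs, 4, 6)

# a REx-style expansion keeping 50% of a second W4 term adds half a term's cost
rex_bops = layer_bops(macs, 4, 6) + 0.5 * layer_bops(macs, 4, 6)
```

Under this definition, a W4/A6 layer costs 24/256 of its FP16 counterpart, which is why equivalent-BOPs comparisons are hardware-agnostic: the count depends only on the arithmetic, not on which bit-widths the device natively supports.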
Rebuttal 1: Rebuttal: # General Comment We thank the reviewers for their interest and critiques. The theoretical guarantees and thorough empirical validation offered by REx were highlighted by the reviewers as a strength of this paper. In the initial submission, we measured the accuracy with respect to the number of BOPs (bitwise operations) for a simple and fair comparison. From our understanding, it appears that a major and common concern among some of the reviewers is that we did not provide comparisons in terms of direct on-device latency. We propose a common response to this matter here, and will address the reviewers' other concerns individually. The choice of the evaluation metric in deep neural network acceleration is not trivial. The BOPs metric bears the advantage of being easily reproducible and comparable as is (hardware and software agnostic). This is the reason why we opted for this metric. However, we see the point raised by the reviewers and agree that the study would greatly benefit from actual runtime measurements. Consequently, we propose the following extra experiments: 1. our main result and comparison is reported in Table 1 below, in which we report the latency of the referenced methods as well as the result from REx. Please bear in mind that in Table 1 the sparsity is structured (which we did not clearly mention in the paper). 2. our second main result is reported in Table 3 with LLMs, in which case the sparsity is unstructured in order to further limit the memory overhead, as it is a major concern with LLM quantization. Thus, in the case of LLMs only, we measure the performance obtained using unstructured expansions. ## Latency with structured sparse expansions 50\% structured sparsity can be leveraged by all hardware devices. Thus, we could conduct our experiments on multiple hardware devices. We considered a CPU (Intel Xeon) and a GPU (A100) for their difference in bit-width support.
The latency is reported as the average over 1000 runs, in milliseconds. We decided to compare to DFQ as it is the method that offers the lowest latency, due to its use of per-tensor uniform quantization. Our results are listed in the table below (for a ResNet-50). | method | CPU | GPU | accuracy | |:---:|:---:|:---:|:---:| | original model | 7.108 | 0.861 | 76.15 | | DFQ W6/A6 | **3.039** | 0.606 | 71.36 | | REx 150% W4/A6 | 3.059 ($+0.658\%$) | **0.564** ($-6.930\%$) | 76.01 | We can observe that, although the BOPs are equivalent, REx offers a $6.930$% lower latency than DFQ on GPU. This result can be explained by the fact that the GPU does provide support for int4 and int8. On the other hand, on the CPU, we can clearly see the lack of support for this bit-width, which leads to the measurement of the expansion overhead only: an overhead of $0.658$%. Still, this overhead is fairly limited due to the good parallelization capabilities of the hardware. If we measured throughput instead, the lack of support for the int4 format would hinder the performance of REx on CPU. Overall, if the hardware supports multiple quantization formats, then REx offers the highest accuracy at the lowest inference latency, which supports our initial claim that REx offers better trade-offs in terms of accuracy *vs.* speed. ## Latency with unstructured sparse expansions Regarding unstructured sparsity efficiency, we only considered the CPU benchmark, as it is the only hardware supporting such an inference format. We measured the latency of a single fully-connected layer from the MLP block of the OPT-13B model on which we conducted our initial experiments.
Similarly to the previous test, we measure 1000 runs using a naive implementation based on scipy and report the average latency in the table below. | method | CPU | |:---:|:---:| | original layer | 0.104799 | | DFQ W4/A16 | 0.055230 | | REx W4/A16 + 1% W1/A16 | 0.055376 ($+0.2643\%$) | Our results highlight the marginal overhead of $0.2643$% introduced by the sparse binary expansion on fully-connected layers. All in all, we believe these results (Tables 1 and 2 in this response for latency, as well as e.g. Table 3 in the paper for accuracy on OPT-13B) show that REx significantly improves the accuracy (and outlier handling in LLMs) of existing quantization methods, at the price of very little latency overhead in the worst-case scenario (when the considered bit-width is not supported), and with a significant speed boost when the hardware support is adequate. This further shows the interest of the proposed method. Pdf: /pdf/65cdd102cbc98ed2f959369a133be13659bc2790.pdf
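The measurement protocol used in these rebuttal experiments (average latency over many repeated runs) can be sketched with a generic timing harness. This is not the authors' benchmark code; the matrix sizes and the `mean_latency_ms` helper are arbitrary assumptions for illustration.

```python
import time
import numpy as np

def mean_latency_ms(fn, runs=1000, warmup=50):
    """Average wall-clock latency of fn over `runs` calls, after a warm-up."""
    for _ in range(warmup):
        fn()  # warm caches and the allocator before timing
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1e3

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 512)).astype(np.float32)
w = rng.standard_normal((512, 512)).astype(np.float32)
latency_ms = mean_latency_ms(lambda: x @ w, runs=100)
```

Averaging over many runs after a warm-up phase smooths out cache, allocator, and scheduler noise, which is why single-run timings are rarely reported in this kind of comparison.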
NeurIPS_2023_submissions_huggingface
2023
GEQ: Gaussian Kernel Inspired Equilibrium Models
Accept (poster)
Summary: This paper makes the observation that conventional DEQ architectures effectively use linear kernels in their feature extraction, and the proposed method in this paper is to use Gaussian kernels instead of linear kernels. The paper then observes that adding Gaussian kernels to DEQ models produces an architecture that is effectively infinitely deep and infinitely wide. The paper then proposes patch splitting, a technique that lets the proposed method enjoy better performance, and shows the effectiveness of the proposed method on CIFAR10, CIFAR100, and saliency map datasets. Strengths: This paper is clearly written. In terms of contribution, this paper makes the observation that the proposed method is effectively infinitely deep and infinitely wide, which I find to be an interesting thread. Finally, this paper contains convergence bounds and stability analysis, a training technique (patch splitting), as well as numerous satisfactory experimental results, and therefore I believe that the proposed method is sufficiently interesting. Weaknesses: I believe this paper would benefit from more discussion about the proposed model. For example, now that we know the proposed model is infinitely deep and infinitely wide, what can we say about its connection to NTK? Can this model further some of the NTK analysis? In addition, can this model inspire potential work on Gaussian processes? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Similar to the weakness section---I am curious whether the proposed model can act as an effective model for large neural nets in general, the same way that the NTK regime approximates large-width networks? Can this model be representative of large language models, given its infinite depth and infinite width capacity? 2. Does the Gaussian kernel in the proposed model have any natural use cases in practice (that linear kernels, polynomial kernels, etc. don't have)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
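To make the terminology in this review concrete: a vanilla DEQ layer solves for a fixed point of a weight-tied update, and the paper's proposal replaces the linear map in that update with a Gaussian-kernel feature extraction. The sketch below shows only the standard building blocks (a fixed-point iteration and an RBF kernel), not GEQ itself; the small weight norm is an assumption made so the iteration is contractive.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=0.5):
    """Pairwise Gaussian (RBF) similarities between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def deq_fixed_point(x, w, iters=200, tol=1e-8):
    """Vanilla DEQ layer: iterate z <- tanh(W z + x) to a fixed point.
    GEQ would replace the linear map W z with a Gaussian-kernel feature
    extraction, which this sketch does not reproduce."""
    z = np.zeros_like(x)
    for _ in range(iters):
        z_new = np.tanh(w @ z + x)
        if np.abs(z_new - z).max() < tol:
            return z_new
        z = z_new
    return z

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = 0.05 * rng.standard_normal((8, 8))  # small norm -> contraction -> convergence
z_star = deq_fixed_point(x, w)
```

The fixed point `z_star` plays the role of the output of an infinitely deep weight-tied network, which is the "infinite depth" half of the review's observation; the kernel view supplies the "infinite width" half.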
Rebuttal 1: Rebuttal: Thanks for your comments. The following are our responses to your concerns.

# 1. About the relationship between GEQ and NTK.

GEQ and NTK share a fundamental likeness: both align with the concept of infinite-width models, and we reference some theoretical results on NTK and Gaussian process models in our analysis. However, our GEQ does not rely on an unrealistic weight distribution. Therefore, the training process for GEQ is much easier, which leads to its superior performance over NTK. Consequently, GEQ's capabilities may have the potential to instigate innovative approaches to NTK model development, making such models more useful and realistic. Anticipating the future trajectory of GEQ and NTK analyses, we posit that the kernel mechanism intrinsic to GEQ could serve as a wellspring for novel architectural designs within the NTK framework. An intriguing direction involves integrating the strengths of both methodologies: harnessing the intra-sample kernel calculation of GEQ, which we think is one of the key reasons that GEQ's performance is better than NTK's, while also leveraging NTK's global kernel function computed over the entire training dataset. This hybrid approach holds promise for delivering robust performance akin to GEQ while simultaneously facilitating the acquisition of dataset-specific properties, such as out-of-distribution (OOD) generalization and fairness considerations. Shifting the focus to the advancement of Gaussian process models, GEQ imparts a valuable insight: equilibrium models could constitute a compelling route for expanding Gaussian process models from their non-parametric state to a parameterized one. By embracing this equilibrium framework, these models stand to achieve heightened performance levels and open avenues for significant enhancements.

# 2. About the relationship between GEQ and large neural networks.
Regarding the connection between our GEQ and large neural networks, it is worth noting that our GEQ possesses attributes akin to those of a weight-tied, large deep neural network (as illustrated in [1]) accompanied by attention modules. This resemblance stems from the utilization of exponential terms. As such, our GEQ holds promise in providing theoretical insights into the realm of deep and expansive neural networks enriched with attention modules. Turning to the potential interplay between our GEQ and current large models, such as Large Language Models or Multi-Modal Models, we think our GEQ shows a closer connection with multi-modal models than with LLMs, because our GEQ-induced attentive module can be viewed as cross-attention rather than a self-attention mechanism. Furthermore, our GEQ model itself can scale up and show better performance than large deep ResNets. For example, on ImageNet:

| Model | Model Size | Test Acc |
| --- | --- | --- |
| ResNet-18 | $13$M | $70.2\%$ |
| ResNet-50 | $26$M | $75.1\%$ |
| GEQ | $16$M | $\mathbf{75.9}\%$ |

The results underscore a notable observation: when our GEQ is scaled up to larger models, it demonstrates superior performance compared to deep ResNets, even while employing fewer parameters, particularly on the larger ImageNet dataset.

# 3. About "Does the Gaussian Kernel in the proposed model have any natural use cases in practice (that linear kernels, polynomial kernels, etc don't have)?"

Equilibrium models endowed with linear kernels can be conceptualized as akin to weight-tied ResNets or analogous models, as exemplified in [1]. Consequently, our GEQ, employing a Gaussian kernel, can be likened to weight-tied ResNets augmented by attention modules, as in SE-Net [2], Graph Attention Networks [3], etc.

[1] Deep Equilibrium Models [2] Squeeze-and-Excitation Networks [3] Graph Attention Networks --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for taking the time to address my questions. 
I have also read carefully over the feedback of other reviewers. I would leave the question of whether this paper should be accepted for the AC to decide.
Summary: The authors introduce a deep equilibrium model (6) which ostensibly solves an optimisation problem (5) involving a Gaussian (squared exponential) kernel. The authors describe some possible theoretical properties of the new model, and benchmark it on some problems against some related optimisation-inspired DEQs and resnets. Strengths: - The idea of deriving DEQ models from optimisation problems is nice, and fits in well with existing literature. Weaknesses: Unfortunately, there appear to be many issues with the mathematical results in this paper. I do not believe they are sound. Some of them could potentially be fixable, but I am not certain. - **Important** The use of the function $f$ is not clear. - Can you elaborate on the definition of $f$? On the second page, you say "$f$ is a positive indicator function that induces to commonly used ReLU activation" What does "induces to" mean? What exactly is your definition of indicator function? One definition is that given a set $A$, $f_A(z) = 1$ if $z \in A$ and zero otherwise. This does not seem to reconcile with your definition, though. Perhaps $f$ takes a value of $1$ if $z$ is greater than 0 (elementwise), and zero otherwise? What happens at the value of zero? What is $f$'s (weak) derivative? How is it related to the ReLU? - After equation (3), you mention the first order stationarity condition. Is $f$ here the indicator function? Isn't this function not everywhere differentiable? Even if the derivative were defined, how do you know that $\nabla G = 0$ implies a minimum? - How is a ReLU activation in (4) obtained by differentiation? I could see that the derivative of $relu(a)^2$ with respect to $a$ would be $relu(a)$, but not sure how to obtain it otherwise? - Proposition 1. What is the meaning of the symbol $\partial f$? How is $(1+\partial f)^{-1}$ related to the ReLU function? - Proposition 1 is not precise. 
The inner product is not defined, which is quite important because $\Phi$ lives in some kind of Hilbert space (informally being referred to as $\mathbb{R}^\infty$). Can you clarify what $\infty$-dimensional space means, and what $\langle \cdot, \cdot \rangle $ means? - Proposition 3. Isn't the left hand side of equation (11) zero? This would be a trivial bound, I would have thought. - (Minor) Proof of proposition 3. Aren't all the first lines in equation (41) equalities, by definition of fixed point? - Proposition 4. Can you elaborate on why it is helpful to exhibit a smaller output similarity for dissimilar samples? One can trivially do this for any kernel, by simply rescaling the kernel by a constant factor. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Minor (*not* a reason to reject): - The section title for section 3 is missing an "I". - The titles and labels in Figure 2 are too small to read. - Why are only Gaussian kernels considered? - Many works have pointed out potential issues with using saliency maps such as GradCAM to try and gain insight into neural network predictions. E.g. see Cynthia Rudin's work on explainability versus interpretability. The saliency map visualisation seems to come out of nowhere (not being mentioned anywhere earlier in the paper). Is there a reason to include it in this paper? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: I do not see any explicit mention of limitations in this work. That being said, I may have missed some, if they are somewhere in the text. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. The following are our responses to your concerns.

# 1. About $f$:

## 1) The definition and differentiability of $f$:

The definition of $f$ is $f(x) = 1$ when $x \geq 0$ and $f(x)=\infty$ when $x<0$. $f$ is sub-differentiable, and $\partial f$ here denotes its subgradients; see "[subgradients](https://web.stanford.edu/class/ee364b/lectures/subgradients_notes.pdf)". In our paper, we set $\partial f=0$ at $x=0$ for convenience, since $0\in \partial f$. We will rewrite this to make it clearer in the future version.

## 2) About the stationary condition:

Although $f$ and $G$ are not differentiable everywhere, the stationarity condition can still be stated with $G$'s subgradient, i.e., $0 \in \partial G$. More details can be found in "[subgradients](https://web.stanford.edu/class/ee364b/lectures/subgradients_notes.pdf)". We use $0=\nabla G$ in our paper for convenience, since we set $\partial f=0$ at $x=0$, and our equilibrium also satisfies the condition $0 \in \partial G$. Therefore, our conclusion is correct even though the formulation is slightly non-rigorous. We will rewrite this to make it clearer in the future version.

## 3) About the derivative of the ReLU function.

We take the following optimization problem as an example. To solve $argmin_{x\geq0} \frac{1}{2}||x-y||^2$, one can first reformulate it as $argmin_{x} \frac{1}{2}||x-y||^2 +f(x)$, with $f$ defined as above, since the two optimization problems have the same optimal point. The optimal solution is then characterized by the subgradient condition $0 \in (1+\partial f)(x) -y$, which gives $x=(1+\partial f)^{-1}(y)$. We can also read off the original problem's optimal solution directly: $x=0$ when $y<0$ (because $f(x)=\infty$ when $x<0$) and $x=y$ when $y\geq 0$. Therefore, $(1+\partial f)^{-1}(y) = \max(y,0)$, which is ReLU; this map is also called the proximal operator of $f$. 
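As a quick numerical sketch (an illustration by the editor, not code from the paper), the proximal-operator identity above can be checked by brute force: minimizing $\frac{1}{2}(x-y)^2$ over $x \geq 0$ coincides with ReLU.

```python
import numpy as np

# Illustrative brute-force check that the proximal operator of the
# nonnegativity indicator f equals ReLU: argmin_{x >= 0} 0.5*(x-y)^2 = max(y, 0).
GRID = np.linspace(0.0, 10.0, 100001)  # candidate x values with x >= 0

def prox_indicator(y):
    # f(x) is +inf for x < 0, so the search is restricted to the grid x >= 0.
    return GRID[np.argmin(0.5 * (GRID - y) ** 2)]

def relu(y):
    return max(y, 0.0)

for y in [-3.0, -0.5, 0.0, 0.7, 4.2]:
    assert abs(prox_indicator(y) - relu(y)) < 1e-3
```

For $y<0$ the constrained minimum sits at the boundary $x=0$; for $y\geq 0$ it is the unconstrained minimum $x=y$, exactly as stated in the rebuttal.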
More details can be found in "[Proximal Operators](https://www.math.cuhk.edu.hk/course_builder/1920/math4230/Note10.pdf)".

# 2. About the inner product and the definition of infinite-dimensional spaces in Proposition 1.

The inner product in our paper is $\langle \mathbf{x},\mathbf{y} \rangle = \mathbf{x}^\top\mathbf{y}$, as in "[Inner Product](https://mathworld.wolfram.com/InnerProduct.html)". The $\infty$-dimensional space means that a vector $\mathbf{v}$ in such a space has infinitely many dimensions, i.e., $\mathbf{v}\in\mathbb{R}^{\infty}$.

# 3. About Proposition 3.

Although 0 is a trivial lower bound for Eqn (11), Proposition 3 is non-trivial. A tighter Lipschitz upper bound is important because it makes the model more stable; for this reason, many works (like [1]) study it. The upper bound lets researchers probe the model's most unstable behavior in the worst case, and a smaller upper bound means the model is stable and can perform well on many perturbed or corrupted inputs. Thereby, Proposition 3's upper bound indicates that the stability of our GEQ is better, which is also confirmed in our experiments (the second experiment in Section 4.3). As for the proof in the Appendix, the inequality in Eqn (41) is a typo, and we will correct it in the future version; it does not influence the final results.

# 4. About Proposition 4.

A small output similarity for dissimilar samples makes classification easier. For example, a dataset is easy to classify when all samples belonging to the same class cluster together (similar samples should have similar representations) while the distances between cluster centers are large (dissimilar samples' representations should be different). Therefore, simple scaling cannot improve classification performance, because it scales all samples' similarities, including those of similar samples, which makes samples hard to classify. Apart from that, simple scaling by a scalar may also make OptEqs' outputs trivial. 
This is because simply scaling by a hyper-parameter $\eta$, making the equilibrium equation $\mathbf{z}^* = \sigma\left(\eta\left(\mathbf{W}^\top\mathbf{W}\mathbf{z}^*+ \mathbf{U}\mathbf{x}+\mathbf{b}\right)\right)$, can be seen as reformulating its hidden optimization problem into the following form: $\min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-\eta \left \langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle -\eta \frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right].$ If $\eta$ becomes too small, the final output will essentially just optimize $\min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2\right]$, which means the equilibrium model's final output $\mathbf{z}$ has no relationship with the original input. Therefore, the model's outputs are useless.

# 5. About "why only consider Gaussian Kernels".

We do not only consider Gaussian kernels. We have tried different non-linear kernels, such as polynomial, sigmoid, and Gaussian, in Appendix A.1. The empirical results show that the Gaussian kernel is the best. Apart from that, Gaussian kernels also let us analyze the infinite-width equilibrium model's properties, while the others do not. Thereby, we mainly explore GEQ's theoretical and empirical advantages. Due to space limits, we cannot list the results here; you can find them in our Appendix or in our answer to Reviewer 8r48's problem 3.

# 6. About GradCAM.

We use GradCAM to make it clearer to readers that GEQs, with their induced attentive modules (as illustrated in Section 3.5), make models focus on more semantically related regions. We choose GradCAM because it is a simple and widely used visualization method. GradCAM is only an auxiliary technique, since we have already shown our advantages with theoretical analysis and empirical comparisons.

# 7. About some typos and small captions.

We will correct them in the future version. 
[1] Estimating Lipschitz constants of monotone deep equilibrium models --- Rebuttal Comment 1.1: Title: Regarding eq 11 Comment: I want to clarify a point regarding eq.11 to avoid a misunderstanding: eq.11 in the paper is written as $\|f_{geq}(x_1) - f_{geq}(x_1)\|\leq L_{geq}\|x_1 - x_2\|$. The left-hand side here is clearly 0 since both terms are the same -- which is what I think the reviewer is referring to -- but I'm pretty sure that this is simply a typo and that the authors meant to write $\|f_{geq}(x_1) - f_{geq}(x_2)\|\leq L_{geq}\|x_1 - x_2\|$. Is this correct? Thanks, AC --- Reply to Comment 1.1.1: Title: Comments to AC: Comment: Thanks for pointing this out. I misunderstood reviewer CdMn's question, but I have now posted a new comment for clarification. I will correct the typos in the following version. --- Rebuttal Comment 1.2: Title: To Reviewer CdMn: Comment: There is a typo in Eqn(11); the correct formulation for Eqn(11) is: $\left\| f_{geq} (\mathbf{x}_1) - f_g (\mathbf{x}_2) \right\|_2 $ $\leq L_{geq} \| \mathbf{x}_1 - \mathbf{x}_2\|_2$ $= \frac{\beta_{\max}\mu^2 + \sqrt{\gamma}B\mu^3}{1-\beta_{\max}\mu^2 - \sqrt{\gamma}B\mu^3}\|\mathbf{x}_1-\mathbf{x}_2\|_2$ Here $f_g$ denotes $f_{geq}$ due to OpenReview's bug for tex. I will correct them in the following version. --- Rebuttal Comment 1.3: Comment: Thanks for getting back to me. Thanks for mentioning that you are using subgradients. As far as I can tell, subgradients are not mentioned at all in the original manuscript, so it would be great if you could update the paper so that it explains that you are using subgradients rather than some other weak notion of a derivative. Thanks for clarifying about the definition of $f$ and ReLU. It is clear now. Thanks to the AC and the authors for clarifying the trivial lower bound. Can you clarify the following point? A local minimum (or maximum) might imply that $0 \in \partial f$. But other points can also have $0 \in \partial f$. 
How do you know this characterises the minima? --- Reply to Comment 1.3.1: Comment: Thank you for your response. In the upcoming version, we will denote $\partial f$ to represent the subgradient. About your concerns on “the equilibrium point is a local minimum or stationary point”, it depends on the convexity of the model’s hidden optimization problem. However, based on our configuration, we can claim that the equilibrium point in our paper is the local minimum. Firstly, we take OptEqs as an example: $\min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-\left \langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right]$, which can be rewritten as $\min_{\mathbf{z}\geq0}\left[ \frac{1}{2}\|\mathbf{z}\|_2^2-\left \langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right]$. Then whether the equilibrium point is a stationary point or a local minimal depends on the convexity of the following part: $\frac{1}{2}\|\mathbf{z}\|_2^2-\left \langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2$. To explore the convexity, we calculate the second-order derivative (Hessian) for the above equation: $\mathbf{I}-\mathbf{W}^\top\mathbf{W}$ Since all equilibrium models will use a normalization layer to ensure $\|\mathbf{W}\|_2<1$ to ensure their solutions are unique (we also state in Proposition 2,3,4), the Hessian is positive definite, which means $\mathbf{I}-\mathbf{W}^\top\mathbf{W}>0$. Thereby, the point $0\in \partial G$ can only be local minima. As for our GEQ, its formulation is a little complicated. For convenience, we will take the 1-dim case as an example where $w$ denotes $\mathbf{W}$, $u$ denotes $\mathbf{U}$, $z,x,b$ denotes $\mathbf{z},\mathbf{x},\mathbf{b}$. 
And the optimization problem becomes: $\min_{z>0}\left[\frac{1}{2}z^2 -\frac{1}{2\gamma}e^{-\gamma(wz-ux-b)^2}\right]$ Then the second-order derivative is $1-w^2e^{-\gamma n^2} (2\gamma n^2-1)$, where $n=wz-ux-b$. Since $e^{-\gamma n^2} (2\gamma n^2-1)<1$ for any $n\in\mathbb{R}$, and $w<1$, one can see that the second-order derivative is also positive. Hence the problem is also convex, and a point with $0\in \partial G$ can only be a local minimum. This conclusion can also be proved when $\mathbf{z}$ is a vector. From the above analysis, one can see that the equilibrium points of our GEQ are local minima.
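A small numerical sanity check (an illustration by the editor, not from the paper) of the inequality used above: with $t=\gamma n^2$, the factor $e^{-t}(2t-1)$ peaks at $t=1.5$ with value $2e^{-1.5}\approx 0.446 < 1$, so the 1-D second derivative stays positive whenever $|w|<1$.

```python
import math

def geq_second_derivative(n, w, gamma):
    # 1-D second derivative from the rebuttal:
    # 1 - w^2 * e^{-gamma n^2} * (2 gamma n^2 - 1), with n = w*z - u*x - b
    # treated as a free variable.
    return 1.0 - w**2 * math.exp(-gamma * n**2) * (2.0 * gamma * n**2 - 1.0)

# The factor e^{-t}(2t - 1) is maximized at t = 1.5, where it equals
# 2 * e^{-1.5} ~= 0.446, which is strictly below 1.
peak = 2.0 * math.exp(-1.5)
assert peak < 1.0

# Sweep n for several (w, gamma) pairs with |w| < 1: positivity holds throughout.
for w in (0.3, 0.9, 0.999):
    for gamma in (0.1, 1.0, 10.0):
        assert all(geq_second_derivative(k / 100.0, w, gamma) > 0.0
                   for k in range(-10000, 10001))
```

The sweep is only a spot check over a grid, of course; the analytic argument via the peak at $t=1.5$ is what actually covers all of $\mathbb{R}$.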
Summary: The presented work proposes a new equilibrium model by replacing the linear term in the original formulation with a Gaussian kernel. Analysis and experiments are performed to illustrate the superiority of the proposed model in terms of expressivity, generalization, stability, etc. Strengths: - The stability and infinite-width equivalence analysis appears to be novel. I didn't see similar analysis for other equilibrium models. - The exploration of matching the performance of equilibrium models and traditional deep learning models is a meaningful direction. The experiments verified the effectiveness of the proposed model. ----- I have decided to raise my rating of this paper to a clear accept because all of my concerns are well addressed by the rebuttal. Weaknesses: - In my understanding, the modification made by the presented work is essentially replacing the linear term in OptEqs with an exponential term, which results in an attention-like module in the forward layer. However, the idea of incorporating an exponential term into the optimization objective and unfolding it into an attention mechanism may not be considered entirely novel. For example, [1] also used an exponential term and derived an attention layer from it, although their model is not an equilibrium model and they did not explain it as a Gaussian kernel. - According to Section A.2, the forward process is calculated directly by Eq. (6). However, the exponential term in Eq. (6) looks dangerous, as it is widely known that exponential functions often lead to numerical instability in computations. I would expect there to be a normalization term (which would lead to a softmax-like term) in Eq. (6). - Some expressions are vague; for example, it's not clear what "feature extraction term" means at line 48. 
A minor issue: In Proposition 1 there's a term $\sqrt{{\boldsymbol{2}} \gamma} {\boldsymbol{\Phi}}_{\boldsymbol{{W}}}^{({\boldsymbol{1}})}$; what do the bolded ${\boldsymbol{2}}, {\boldsymbol{\Phi}}$ and superscript $({\boldsymbol{1}})$ mean? I guess they shouldn't be bolded and it is a typo? [1] Transformers from an Optimization Perspective Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I wonder if we can say the expressive power of GEQ is strictly stronger than the original form of OptEqs (in Eq. (3) and Eq. (4)), as the authors proved GEQ can be viewed as OptEqs with some terms projected to infinite-dimensional spaces? - Based on my understanding, Patch Splitting appears to be essential for the proposed model, since if there is no patch splitting, then Eq. (6) would essentially just add a scalar coefficient to the original update equation, which may not result in significant changes. Am I right? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. The following are our responses to your concerns.

# 1. About the exponential term and the difference between our work and [1].

In fact, the methodology and motivation of [1] and our work are different:

1. [1]'s optimization problem is for one transformer layer, while ours is for the whole equilibrium model.
2. [1]'s exponential term is applied for self-attention, which only computes the exponential of the inputs' inner products with themselves. In contrast, ours is computed on the difference between the input and our equilibrium model's output. Therefore, our GEQ is more similar to traditional attention modules, as in SE-Net [2] and Graph Attention Networks [3], than to self-attention models.
3. [1] proposes the exponential term to explain the softmax in Transformer networks, while we use Gaussian kernels to analyze equilibrium models with infinite width.

Therefore, although the formulations of our work and [1] are similar, they are two different works.

# 2. About the stability of our GEQ in Equation (6).

In our work, the exponential term in our GEQ is stable because the weights are constrained by weight normalization to ensure the convergence of Eqn (6): $\mathbf{z}^* = \sigma\left[e^{-\gamma\|\mathbf{U}\mathbf{x}+\mathbf{b}-\mathbf{Wz}^*\|^2_2} \mathbf{W}^\top(-\mathbf{Wz}^* +\mathbf{U}\mathbf{x}+\mathbf{b})\right]$ Therefore, our GEQ's exponential term is stable as long as the input is stable, which we ensure with normalization layers.

# 3. About the feature extraction term.

As we can see from OptEqs' optimization problem: $\min_{\mathbf{z}} G(\mathbf{z};\mathbf{x}) = \min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-\left \langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right].$ the feature extraction term is $\langle \mathbf{Ux+b}, \mathbf{z} \rangle$; here $\mathbf{x}$ is the input, while $\mathbf{z}$ is the output. 
When deriving the equilibrium model's architecture from the optimization problem's optimality condition, the $\mathbf{Ux+b}$ term appears, and it can be viewed as extracting useful features. Therefore, we call it the feature extraction term. We will rewrite this part to make it clearer in the future version.

# 4. About "I wonder if we can say the expressive power of GEQ is strictly stronger than the original form of OptEqs".

As our empirical results and theoretical analysis show, GEQ's performance is better than vanilla OptEqs' with a proper choice of $\gamma$. However, if $\gamma$ is too large, for example $\gamma=\infty$, then the optimization problem for GEQ: $\min_{\mathbf{z}} G(\mathbf{z};\mathbf{x})=\min_{\mathbf{z}} \left[\mathbf{1}^\top f(\mathbf{z}) + \frac{1}{2}\|\mathbf{z}\|_2^2 - \frac{1}{2\gamma}e^{-\gamma\|\mathbf{U}\mathbf{x}+\mathbf{b}-\mathbf{Wz}\|^2_2}\right]$ becomes $\min_{\mathbf{z}} G(\mathbf{z};\mathbf{x})=\min_{\mathbf{z}} \left[\mathbf{1}^\top f(\mathbf{z}) + \frac{1}{2}\|\mathbf{z}\|_2^2\right]$ and the final equilibrium state $\mathbf{z}$ will be $0$, which is a trivial solution. Thereby, we believe the expressive power of our GEQ is better than OptEqs' with a proper choice of $\gamma$. Furthermore, although the choice of $\gamma$ is important, it is not hard to find a proper $\gamma$ for better performance; as shown in the empirical section, we choose one $\gamma$ for different datasets based on our analysis.

# 5. About Patch Splitting and its necessity.

Patch splitting is an important technique and a unique feature of our GEQ, because even if we do patch splitting in OptEqs, its formulation remains the same as vanilla OptEqs: $\sum_{i=1}^N \langle (\mathbf{Ux+b})_i, \mathbf{z}_i\rangle = \langle (\mathbf{Ux+b}), \mathbf{z}\rangle,$ where $\mathbf{z} = [\mathbf{z}_1,\mathbf{z}_2,...,\mathbf{z}_N]$. 
With the patch-splitting technique, our GEQ can concentrate on different parts based on their similarity, like attentive modules. However, we would like to clarify your comment that "if there is no patch splitting then Eq. (6) would essentially be adding a scalar coefficient to the original update equation, which may not result in significant changes." Even without the patch-splitting technique, our GEQ is still not the same as trivially scaling OptEqs, because the scaling factor depends on $\mathbf{x}, \mathbf{z}$ and is also learnable.

# 6. About "In Proposition 1, there's a term $\sqrt{2\gamma}\Psi_\mathbf{W}^{(1)}$; what do the bolded 2, $\Phi$ and superscript (1) mean? I guess they shouldn't be bolded and it is a typo?"

They are typos, and we will correct them in the future version.

[1] Transformers from an Optimization Perspective [2] Squeeze-and-Excitation Networks [3] Graph Attention Networks --- Rebuttal 2: Comment: Thank the authors for the response. Most of my concerns are well addressed, but I still have some questions. I agree that the exponential term in Eq. (6) will not cause instability. However, the authors mentioned "the weights are constrained by weight normalization" in the rebuttal. I am curious about how the weight normalization is implemented. I understand the weights need to be normalized to ensure convergence of Eq. (6), but I couldn't find anything in Algorithm 1 (in Appendix A.2) that ensures this constraint. Another point: the authors mentioned that even without patch splitting, the proposed model is still essentially different from OptEqs because there is a learnable scaling factor related to $x$ and $z$. But I still don't understand why this scalar can lead to an essentially different solution. However complicated it is, it is just a scalar, isn't it? It would be helpful if the authors could show some simple cases where adding a learnable scalar leads to an essentially different fixed point. 
--- Rebuttal Comment 2.1: Comment: Thanks for your reply. The following are answers to your new questions:

1. About the weight normalization. We simply apply PyTorch weight normalization to each convolution layer, such as $\mathbf{W}$ and $\mathbf{U}$, like other equilibrium models [1,2]. It can be seen as rescaling $\mathbf{W}$ by its norm after each update. We omitted it from Algorithm 1 because that algorithm only describes the forward process; since this may lead to misunderstanding, we will clarify this point in the future version.

2. About the "learnable scalar". We think GEQ's scalar is different from trivial scaling because it is a sample-dependent scalar, and we think it may stabilize the original equilibrium model, such as OptEqs. For example, compare our GEQ with the following equilibrium model: $\mathbf{z} = \sigma(\mathbf{W}^\top(-\mathbf{Wz}+\mathbf{Ux+b}))$ (1), which is our GEQ without the exponential term; our GEQ can be formulated as: $\mathbf{z} = \sigma(e^{-\gamma\|-\mathbf{Wz}+\mathbf{Ux+b}\|_2^2}\mathbf{W}^\top(-\mathbf{Wz}+\mathbf{Ux+b}))$ As one can see, our GEQ scales down the output $\mathbf{z}$ of the equilibrium model in Eqn (1) when the difference between $\mathbf{Wz}$ and $\mathbf{Ux+b}$ is too large, because the exponential term is then small. However, when the difference is not too large, our GEQ's output is similar to the output of the model in Eqn (1), as the exponential term is around $1$. Thereby, it is different from scaling the equilibrium model with a fixed parameter. We assume that such a constraint can prevent some unstable behavior of equilibrium models, owing to the higher controllability and stability inherent in the linear model $\mathbf{Ux}$. This assumption is also an intuitive motivation for our stability analysis. 
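The sample-dependent scaling can be seen in a toy sketch (an editor's illustration with assumed values: $\sigma=\mathrm{ReLU}$, $\gamma=0.5$, and a diagonal $\mathbf{W}$ with $\|\mathbf{W}\|_2=0.5$ standing in for the normalized weights; this is not the paper's implementation) of a naive fixed-point iteration for the GEQ equation quoted above:

```python
import numpy as np

def geq_fixed_point(W, v, gamma=0.5, iters=200):
    # Naive Picard iteration for z = relu(exp(-gamma * ||v - W z||^2) * W^T (v - W z)),
    # where v stands in for Ux + b. Returns the equilibrium z and the final scale.
    z = np.zeros(W.shape[1])
    scale = 1.0
    for _ in range(iters):
        r = v - W @ z
        scale = float(np.exp(-gamma * (r @ r)))  # sample-dependent scalar
        z = np.maximum(scale * (W.T @ r), 0.0)
    return z, scale

W = 0.5 * np.eye(2)  # ||W||_2 = 0.5 < 1, mimicking the weight-normalization constraint
_, s_near = geq_fixed_point(W, np.array([0.5, 0.2]))  # input close to the model's range
_, s_far = geq_fixed_point(W, np.array([5.0, 4.0]))   # input far from the model's range
# The exponential factor stays near 1 for the "near" input but collapses toward 0
# for the "far" one, damping the update rather than applying one fixed rescaling.
assert s_far < s_near
```

The point of the sketch is only that the scale factor differs per sample, which a single fixed hyper-parameter $\eta$ cannot reproduce.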
The empirical results also show that even without patch splitting, our GEQ performs better than OptEqs, as below on CIFAR-10:

| Model | Model Size | Test Acc |
| --- | --- | --- |
| MOptEqs | $8$M | $94.6\%$ |
| GEQ(w/o patch) | $8$M | $94.9\%$ |
| GEQ | $8$M | $95.6\%$ |

However, the advantage is smaller than with the patch-splitting technique, which is consistent with your opinion. --- Rebuttal 3: Comment: Thank the authors for the further clarification. All of my concerns are well addressed, so I would like to raise my rating of this paper to a clear accept accordingly. I encourage the authors to make it clear in the future revision that the weight matrices are normalized at each step of training, which is different from the standard training process.
Summary: In this paper, the authors propose to use a Gaussian kernel in OptEqs (optimization-induced deep equilibrium models). Because of the kernel's greater expressive power, it can capture the dynamics better than linear models. Strengths: OptEqs is an interesting attempt to describe the training dynamics of DNNs, and hence making it more powerful (this paper's work) is interesting. (This does not mean that I appreciate the technical contribution of GEQ; to me, this is a natural incremental contribution building on OptEqs.) The reported performance is quite good, considering that GEQ implies a new model structure. Weaknesses: Using a Gaussian kernel instead of a linear one is simple, and it can also naturally lead to a better generalization bound. In other words, the discussion of the generalization bound is interesting, but simply saying "tighter" is somewhat trivial. In my point of view, there is an essential difference between applying GEQ to ImageNet-100 and to the whole ImageNet. I think that a new point of view on neural network training itself is already interesting; it is not necessary to defeat standard neural networks. If that is the aim of this paper, however, the experiments should be enhanced. For example, ImageNet should be considered, and the setting (SGD with a step learning rate schedule) is fair but may not be sufficient. The following link gives the best reported training strategies: https://paperswithcode.com/sota/ CIFAR-10 with ResNet-18 is 95.55; CIFAR-100 with ResNet-50-SAM is 85.2. In a word, I think the current experiments for evaluating GEQ are already good (when ImageNet is included), but they are far from supporting the conclusion that GEQ is better than ResNet, etc. My overall recommendation is currently positive, which is mainly based on the numerical experiments. There are still many doubts about the performance; if the answers are not strong, I may lower my score. 
Also, I feel the theoretical and technical contributions of this paper are weak, and hence I will not fight for my current score if other reviewers think this part should be improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Replacing the linear operator in OptEqs by a nonlinear one is nice. But can the authors explain in more detail why a kernel is chosen? Why not try, e.g., an MLP, whose parameters could be trained? Questions about the numerical experiments can be found above, especially if the authors want to claim advantages over standard neural networks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. The following are our responses to your concerns. # 1. About the contribution of GEQ. While OptEqs has already shown its ability to describe certain DNNs like ResNet, it has several limitations:
1. Firstly, vanilla OptEqs can only model the behavior of neural networks as their depth grows. Its applicability is limited when analyzing neural networks whose depth and width grow simultaneously.
2. Secondly, vanilla OptEqs is adept at describing basic neural networks built from linear layers (such as convolutions) with one pointwise non-linear activation module, but it falls short when analyzing architectures with non-linear attentive modules.
3. Thirdly, the numerical performance of OptEqs is not satisfying.

To address these problems, we decided to involve Gaussian kernels in vanilla equilibrium models, mainly for the following reasons:
1. Gaussian kernels give us more insight into equilibrium models with infinite width, since Gaussian kernels are usually related to infinite-dimensional feature spaces. Therefore, GEQ can not only be used to analyze deep neural models but can also give some insight into the training dynamics when the model's depth and width both increase.
2. To extend equilibrium models to describe networks with non-linear attentive modules, we tried different non-linear kernels (polynomial, sigmoid, and Gaussian), and the empirical results show that the Gaussian kernel is the best. We therefore take a further step in exploring GEQ's theoretical and empirical advantages in this paper. As shown in Section 3.5, our GEQ can be viewed as an equilibrium model with attentive modules, so it may be used to analyze the training dynamics of neural networks with attention.
3. 
Our approach offers the potential to enhance both the generalization capability and the stability of optimization-induced neural architectures like equilibrium models, and it provides valuable inspiration for researchers to devise more potent kernels for current models. Prior to our research, no exploration of new equilibrium models started from this distinctive perspective. # 2. About numerical experiments. Firstly, we want to point out that the best results at https://paperswithcode.com/sota/ that you cite use auxiliary data; they would not reach such performance if the CIFAR models were trained without auxiliary datasets. Secondly, we have also run the experiments for ResNet-50 with SAM on CIFAR-10 and CIFAR-100. ## CIFAR-10 with SAM:

| Model | Model Size | Test Acc |
| --- | --- | --- |
| ResNet-50 | $23$M | $95.5\pm0.4\%$ |
| GEQ | $8$M | $\mathbf{95.9\pm0.3}\%$ |

## CIFAR-100 with SAM:

| Model | Model Size | Test Acc |
| --- | --- | --- |
| ResNet-50 | $23$M | $78.4\pm0.3\%$ |
| GEQ | $8$M | $\mathbf{78.9\pm0.2}\%$ |

Our GEQ outperforms deep ResNets with less than half the parameters even when the optimizer is changed to SAM. Furthermore, we have also run the experiments for our GEQ on ImageNet with SGD. ## ImageNet:

| Model | Model Size | Test Acc |
| --- | --- | --- |
| ResNet-18 | $13$M | $70.2\%$ |
| ResNet-50 | $26$M | $75.1\%$ |
| GEQ | $16$M | $\mathbf{75.9}\%$ |

From the results, one can see that our GEQ outperforms deep ResNets with fewer parameters on the larger ImageNet dataset. # 3. About your concern "why is a kernel chosen?" We use kernels because we believe that the poor performance of the original equilibrium models is caused by the simple term $\langle \mathbf{Ux+b}, \mathbf{z} \rangle$ in the original OptEqs optimization problem. Inspired by traditional machine learning methods, we then naturally chose kernel methods such as Gaussian kernels. Apart from the Gaussian kernel, we also tried other common kernels, as shown in Appendix A.1. 
We also list the formulations and results below. The formulations of equilibrium models with commonly used non-linear kernels:

| Kernel | Hidden Optimization Problem | Equilibrium Model |
| --------- | ---------- | -------------- |
| Polynomial | $\min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-\left( \left\langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle\right)^d-\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right]$ | $\mathbf{z}^* = \sigma\left(\mathbf{W}^\top\mathbf{W}\mathbf{z}^*+ d\left( \left\langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z}^* \right\rangle\right)^{d-1}\left(\mathbf{U}\mathbf{x}+\mathbf{b}\right)\right)$ |
| Sigmoid | $\min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-{\rm{tanh}}\left( \left\langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z} \right\rangle\right) -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right]$ | $\mathbf{z}^* = \sigma\left(\mathbf{W}^\top\mathbf{W}\mathbf{z}^*+ \left(1 - {\rm{tanh}}^2\left( \left\langle \mathbf{U}\mathbf{x}+\mathbf{b},\mathbf{z}^* \right\rangle\right)\right)\left(\mathbf{U}\mathbf{x}+\mathbf{b}\right)\right)$ |
| Gaussian | $\min_{\mathbf{z}} \left[\mathbf{1}^\top f(\mathbf{z}) + \frac{1}{2}\|\mathbf{z}\|_2^2 - \frac{1}{2\gamma}e^{-\gamma\|\mathbf{U}\mathbf{x}+\mathbf{b}-\mathbf{Wz}\|^2_2}\right]$ | $\mathbf{z}^* = \sigma\left[e^{-\gamma\|\mathbf{U}\mathbf{x}+\mathbf{b}-\mathbf{Wz}^*\|^2_2} \mathbf{W}^\top(-\mathbf{Wz}^* +\mathbf{U}\mathbf{x}+\mathbf{b})\right]$ |

The corresponding results are shown below and demonstrate that GEQ performs best.

| Model | Model Size | Accuracy |
| --- | --- | --- |
| MOptEqs | $8$M | $75.6\pm0.2\%$ |
| MOptEqs (Polynomial) | $8$M | $75.1\pm0.4\%$ |
| MOptEqs (Sigmoid) | $8$M | $76.1\pm0.3\%$ |
| GEQ | $8$M | $\mathbf{78.2\pm0.2\%}$ |

About MLP: suppose we use an MLP layer in the original $\langle \mathbf{Ux+b}, \mathbf{z} \rangle$ term to make it $\langle \mathbf{Ux+b}, \mathbf{W}_{m}\mathbf{z} \rangle$. 
Since this equals $\langle \mathbf{W}_m^\top (\mathbf{Ux+b}), \mathbf{z} \rangle$, it will perform almost the same as vanilla OptEqs. --- Rebuttal Comment 1.1: Title: thanks Comment: Thanks for the additional results. I also read the discussions with the other reviewers. Regarding my question, I am still not well convinced why there is a significant improvement from simply using a kernel (the discussion about MLP is incorrect, since an MLP is not a linear mapping when there is a nonlinear activation function). So I would like to keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We would like to correct our answer about MLP, since we neglected the MLP's non-linear layers in the former answer: 1. We consider kernels instead of MLPs for the following reasons: 1. From the view of the equilibrium model's hidden optimization problem, using an MLP makes it hard to obtain the equilibrium model's formulation. For example, suppose we adopt an MLP (denoted $g$) in the equilibrium model's hidden optimization problem: $\min_{\mathbf{z}} G(\mathbf{z};\mathbf{x}) = \min_{\mathbf{z}}\left[\mathbf{1}^\top f(\mathbf{z})+ \frac{1}{2}\|\mathbf{z}\|_2^2-\left \langle \mathbf{U}\mathbf{x}+\mathbf{b},g(\mathbf{z}) \right\rangle -\frac{1}{2}\|\mathbf{W}\mathbf{z}\|_2^2 \right].$ When $\mathbf{z}$ is a scalar, this is somewhat equivalent to our GEQ, since it also induces an attention scalar $g'(z)$. However, $\nabla g(\mathbf{z})$ is a matrix when $\mathbf{z}$ is a vector in most common cases. Therefore, the formulation of the equilibrium model becomes complicated, and its convergence may also be hard to guarantee. 2. Apart from the complicated formulation, another important reason why we use kernels instead of MLPs is that kernels provide us with much more theoretical insight than MLPs. For example, Gaussian kernels suggest that our new model may be more stable and may also enable us to analyze the performance of wider models. 2. 
We also want to clarify that simply adding non-linear activations to an equilibrium model's equilibrium function cannot make it perform much better. For example, MDEQ has several non-linear activation layers in its equilibrium equation, since it adopts a residual block in its architecture. However, this does not make its performance much better compared with OptEqs and MOptEqs; what's worse, it makes MDEQ lose the ability to be interpreted via an optimization problem. We think the reason is that an equilibrium model is already a kind of deep neural network with non-linear layers, so simply adding non-linear activations inside its equilibrium function will not help much. Thus, we do not think the number of non-linear layers in MOptEqs' or OptEqs' equilibrium equation is the key reason for their weak performance. 3. Comparing our GEQ's architecture (Figure 1) with other equilibrium models, we can give one intuitive reason for our better performance against other equilibrium models: our GEQ is similar to an equilibrium model with attention modules, which is the first such attempt for equilibrium models, especially from the optimization view. We also want to note that this difference is itself a contribution of our model compared with former works: 1. Firstly, we can explain why attention models are better. Former vanilla models like ResNet and MLP can be viewed as vanilla OptEqs with linear kernels, while neural networks with attention modules can be viewed as equilibrium models with non-linear kernels. Since non-linear kernels are more expressive than linear ones, neural networks with attention perform better. 2. Secondly, just as the training dynamics of ResNets and MLPs can be analyzed with vanilla OptEqs, our GEQ may be used to analyze the training dynamics of attention networks and inspire new designs for attention modules by finding new non-linear kernels. We will further explore the properties and new architectures of attention modules from this view.
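As a purely illustrative sketch (not the authors' code), the Gaussian-kernel equilibrium equation quoted in the rebuttal, $\mathbf{z}^* = \sigma\left[e^{-\gamma\|\mathbf{Ux+b}-\mathbf{Wz}^*\|^2_2}\mathbf{W}^\top(-\mathbf{Wz}^*+\mathbf{Ux+b})\right]$, can be solved with a damped fixed-point (Picard) iteration. The function name `geq_forward`, the choice of ReLU for $\sigma$, the damping factor `alpha`, and the small weight scales are all our assumptions:

```python
import numpy as np

def geq_forward(x, U, W, b, gamma=0.1, alpha=0.5, n_iter=500, tol=1e-10):
    """Damped fixed-point iteration for the Gaussian-kernel equilibrium
    z* = sigma(exp(-gamma * ||Ux + b - W z*||^2) * W^T (Ux + b - W z*)),
    with sigma assumed to be ReLU in this sketch."""
    u = U @ x + b                  # injected input term Ux + b
    z = np.zeros(W.shape[1])       # initialize the hidden state at the origin
    for _ in range(n_iter):
        r = u - W @ z                              # residual Ux + b - Wz
        kern = np.exp(-gamma * (r @ r))            # scalar Gaussian kernel weight
        z_new = np.maximum(kern * (W.T @ r), 0.0)  # ReLU of the kernel-weighted update
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = (1.0 - alpha) * z + alpha * z_new      # damping stabilizes the iteration
    return z
```

With small weight scales the map contracts and the iteration converges quickly; the weight normalization mentioned in the discussion above plays a similar stabilizing role during training.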
NeurIPS_2023_submissions_huggingface
2023
Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models
Accept (poster)
Summary: The paper concerns the issue of concurvity in the context of Generalized Additive Models (GAMs), which can be considered an extension of multicollinearity to GAMs. The authors propose a regularization scheme to reduce the concurvity between learned functions, hoping to improve the interpretability of the model. Strengths: - Multicollinearity/concurvity are indeed serious problems in statistics, and any method that can address such issues is of interest to the community. - The proposed concurvity penalty is new and has not been discussed before. - The numerical experiments show the promise of the proposed methodology. Weaknesses: - I think the numerical results, although interesting, are rather limited. As far as I could tell, the authors only discuss 4 tabular datasets, which is not enough. More datasets and baselines should be added---see [1] as an example. [1] Chang, Chun-Hao, Rich Caruana, and Anna Goldenberg. "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning." International Conference on Learning Representations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Connections to sparse regularization: As pointed out by the authors, the issue of multicollinearity can also arise in the context of linear models. It is well known that regularization schemes such as sparse regularization can help deal with correlation; for example, see [1,2]. The notion of variable selection in GAMs is also well explored; see [3] and references therein. A natural question is how such penalization schemes deal with concurvity. Intuitively, if one can reduce the total number of features used, a more compact model can be obtained that is less likely to suffer from concurvity. - How should one choose the parameter $\lambda$? The authors mention the elbow method, but I think this will need more exploration. It seems that the concurvity regularization can reduce validation accuracy. 
So, if one uses validation performance to choose a model, they'll end up with a model with no regularization. It is not clear to me how a practitioner can use the concurvity regularization if they have to sacrifice accuracy. Going back to the previous point, in the context of linear models, sparsity regularization usually comes with improved out-of-sample performance. - Extensions: Another interesting direction is that GAMs might be too simple for more complex datasets, and methods that use higher-order interaction terms have been developed [3,4]. Would it be possible to extend the concurvity regularization to such interaction models? This definitely falls outside the focus of this paper, but a short discussion from the authors would be appreciated. [1] Figueiredo, M., & Nowak, R. (2016). Ordered Weighted L1 Regularized Regression with Strongly Correlated Covariates: Theoretical Aspects. [2] Hazimeh, H., & Mazumder, R. (2020). Fast best subset selection: Coordinate descent and local combinatorial optimization algorithms. Operations Research, 68(5), 1517-1537. [3] Chang, Chun-Hao, Rich Caruana, and Anna Goldenberg. "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning." International Conference on Learning Representations. [4] Enouen, J., & Liu, Y. Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection. In Advances in Neural Information Processing Systems. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
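To make the sparse-regularization alternative raised in the questions above concrete, here is a minimal sketch (our own illustration; `l1_contribution_penalty` is a hypothetical helper, not from the paper or its references) of a group-lasso-style penalty on a GAM's per-feature contributions $f_i(x_i)$, which drives entire shape functions toward zero:

```python
import numpy as np

def l1_contribution_penalty(F):
    """Sparsity penalty for a GAM. F is an (N, p) matrix whose columns are
    the fitted feature contributions f_i(x_i) over N samples. Penalizing the
    RMS magnitude of each column (a group-lasso-style norm) pushes whole
    shape functions toward zero, i.e. performs feature selection."""
    return np.sqrt((F ** 2).mean(axis=0)).sum()
```

Such a penalty reduces the number of active features, which indirectly limits the room for concurvity, but it targets magnitude rather than correlation.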
Rebuttal 1: Rebuttal: Dear Reviewer arAb, Thank you for the thorough review of the paper, as well as for acknowledging the importance of the issue of concurvity and the novelty of our approach to mitigating this issue. We have addressed your concerns and questions as follows: **1. On the scope of evaluation:** We address this shared concern in more detail in our general comments C2 and C3. More specifically, as we argue, quantifying interpretability is still an open question in the literature and a problem for the community to address. As such, we believe that simply evaluating accuracy and possibly sparsity or concurvity measures across a multitude of additional datasets provides limited additional insight. For these reasons we favored depth over width in our evaluation and chose a more detailed approach to assessing interpretability, illustrated in Figure 5 and, particularly, Fig. 6. We argue that understanding the impact on interpretability requires specific background knowledge on each dataset, as showcased by our investigation of the California Housing dataset, limiting the value of evaluating a large benchmark. While quantifying interpretability is an important open question for the community, we argue that it is important to provide practitioners with tools to deal with limited interpretability in their applications now, instead of waiting for progress on measuring interpretability. **2. On the connection to sparse regularization:** We agree with the reviewer that there is an interesting connection between our proposed concurvity regularization and sparse regularization, and we elaborate in more detail in general comment C2. However, a detailed comparison between these two regularization paradigms is outside the scope of our paper. For a thorough review of dealing with concurvity in spline-based GAMs via feature selection algorithms, we refer the reviewer to Kovács 2022 [1]. [1] László Kovács. 
Feature selection algorithms in generalized additive models under concurvity. Computational Statistics, pages 1–33, 2022. **3. On the choice of the regularization strength $\lambda$:** While we agree that model accuracy is paramount for a prediction model, there exist additional desiderata for interpretable models. Our concurvity regularizer enforces the requirement to eliminate self-canceling feature contributions, thereby helping to avoid drawing false conclusions. While this leads to a slight decrease in model accuracy, we feel that this is justified considering the increase in interpretability. Please see our global comment C1 for further elaboration of this argument. We propose to use the elbow method (or L-curve) to estimate the correct level of concurvity regularization, but depending on the use case, one could also choose, e.g., a maximal 5% decrease in model performance to find the optimal trade-off. We further address this issue in our global comment C4. **4. On extending the approach to GAMs with higher-order interactions:** We thank the reviewer for the suggestion. Using concurvity regularization in higher-order interaction models would be straightforward, in the sense that any additional feature term can also be considered when calculating the pairwise correlation. However, it is currently unclear to us what kind of decomposition should be expected. We will add the following to the conclusion: “Moreover, it would be interesting to see how the concurvity regularizer works in differentiable GAMs that incorporate pairwise or higher-order interactions. Specifically, contrasting this with the ANOVA decomposition proposed by Lengerich et al. (2020) [2], in terms of single and pairwise interactions, could unveil some interesting insights.” [2] Lengerich, B., Tan, S., Chang, C., Hooker, G. & Caruana, R. (2020). Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models. 
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2402-2412. Once again, we sincerely thank you for your constructive feedback and valuable insights. We look forward to the opportunity to further improve our work through continued dialogue. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications and for providing additional experiments. I think your rebuttal answers some questions, but I still think: - The paper would benefit from comparisons with sparse methods (or any other method that can reduce correlation). - Although the numerical experiments have improved, they are not 100% convincing. Based on these, I increase my score to weak accept, but I don't think a higher score would be fair. --- Reply to Comment 1.1.1: Title: Thank you for your swift response! Comment: Dear Reviewer arAb, We thank you for your valuable feedback and for taking our responses into account. We happily acknowledge your willingness to upgrade your rating! We agree that including a comparison to sparsity regularization could benefit the paper, and we are currently conducting further experiments to this end. Preliminary results indicate that L1 regularization on the feature contributions $f_i(x_i)$ can somewhat reduce concurvity, though more substantial decreases in concurvity come at a much higher cost in increased validation error and overly aggressive feature selection. This is of course not surprising, given that we are now optimizing for a different measure. In the California Housing case study, we see that L1 regularization tends to select only very few features (which may vary depending on the random seed, leading to some ambiguity in the interpretation of the model), whereas concurvity regularization assigns non-zero importance to several features, pruning mostly moderately to highly correlated features. 
This is in line with our previous statement that, unlike sparsity regularization, our proposed regularizer does not negatively affect features that do not show concurvity. We are happy to include results from these experiments in the camera-ready version. Finally, we note that there may be different motivations behind choosing sparsity regularization (as few features as possible) as compared to concurvity regularization (as decorrelated feature transformations as possible). If the goal is to remove concurvity, then sparsity regularization is a blunt tool, and vice versa. Thus, we believe these approaches to be complementary tools in building good, interpretable models. Sincerely, the Authors
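For concreteness, the concurvity measure under discussion, i.e. the mean absolute pairwise Pearson correlation between the transformed feature contributions $f_i(X_i)$, can be sketched as follows (our own illustration; details such as the handling of constant columns may differ from the paper's implementation):

```python
import numpy as np

def concurvity_penalty(F, eps=1e-12):
    """F: (N, p) matrix whose columns are the feature contributions f_i(x_i)
    evaluated on N samples. Returns the mean |Pearson correlation| over all
    distinct pairs of columns; 0 means fully decorrelated contributions."""
    Z = F - F.mean(axis=0)
    Z = Z / (Z.std(axis=0) + eps)          # standardize; eps guards constant columns
    C = (Z.T @ Z) / F.shape[0]             # p x p correlation matrix
    iu = np.triu_indices(F.shape[1], k=1)  # strictly upper-triangular pairs
    return np.abs(C[iu]).mean()
```

During training this term would be added to the loss as $\lambda \cdot R_\perp$; since every operation is differentiable, gradients flow through it, which is what makes the approach applicable to differentiable GAMs such as NAMs.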
Summary: The authors highlight concurvity as a relevant concern when developing additive models. They introduce a differentiable regularizer that aims to reduce concurvity (which will in general raise RMSE, though there may be a regularization strength that gives a good trade-off). The proposed regularizer is applied to a neural additive model, which is demonstrated on three toy examples (two 2-variable examples where $Y$ equals one of the variables, one time series example with different weekly and daily step functions) and three UCI data sets (2x regression, 1x classification). Strengths: The paper provides a nice overview of concurvity and related works and why it is an important issue to be aware of. It introduces a novel regularizer that can easily be combined with any differentiable models. Weaknesses: The authors include a large set of references to related work on concurvity. Yet there is no comparison to other models (with the same proposed regularizer) or other methods of dealing with concurvity (despite mentioning multiple related approaches). The paper claims that their concurvity regularization encourages feature selection (line 169) but there is no demonstration of this nor comparison to actual feature selection approaches. Very limited evaluation on real-world data sets; see for example [b] in the same space which has an extensive comparison both on many data sets and across multiple methods. Cited work such as [25] (Kovács 2022) uses toy examples to much better effect; a comparison of the proposed NAM/regularizer approach on the same toy example would make for a much stronger submission. Of course minimizing a given metric (such as the proposed concurvity regularizer) reduces that same metric, but I am missing a stronger demonstration of the practical benefits/relevance of this. The paper claims the resulting model is more interpretable, but does not really demonstrate this. See [a] for a discussion of interpretability of GAM features. 
Though the discussion is all about concurvity and non-linearity, only scalar correlation values are shown in Fig. 6 (a). Pair plots of both $X_i$ vs $X_j$ and $f_i(X_i)$ vs $f_j(X_j)$ might demonstrate the effect of the regularization more easily. Claims such as “their feature contributions largely [cancel] each other out” (lines 282-284) could easily be supported by showing their sum across the data set. ### References: - [a] *"How Interpretable and Trustworthy are GAMs?"* Chang, Tan, Lengerich, Goldenberg, Caruana (2021) - [b] *"Additive Gaussian Processes Revisited"* Lu, Boukouvalas, Hensman (2022) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ### Substantial questions: Q1. Time series with multiple seasonalities are in some sense a “degenerate” example as all functions depend on the same feature $t$. How, if at all, would time series have to be treated differently from “regular” data sets with multiple input features $X_1, \dots, X_p$? Q2. Definition 2.2: you claim this “revised formal definition” as a main contribution of your work. How is it different from (and what are the similarities to) previous definitions? Q3. Section 3: your description sounds like concurvity is purely a *model* issue, but is it not a property of the *data*? Q4. Proof in Appendix A.1: this seems to hinge on your definition of correlation, which assigns $\infty$ to the correlation with a constant vector. However, this correlation is simply ill-defined, and one could equally define it to be zero. Does this not invalidate your proof? Q5. Your proposed regularizer scales quadratically with the number of additive components: how would you efficiently address this by parallelization? (Does the scaling not remain quadratic?) Q6. 
“Our concurvity regularizer is agnostic to the function class …” (lines 140-142): “metrics proposed in the literature [42, 25] are not directly applicable.” [42] (Wood 2001) does not mention concurvity at all, and while [25] (Kovács 2022) discusses concurvity, I could not find any mention of a score/metric proposed there. Moreover, what is the similarity of your proposed metric to that of [35] (Ramsay et al.)? Q7.a) Toy Example 2: Fig. 3 (a) suggests that without regularization, the model learns that $f_1(X_1) = 0$ *more* accurately (range of $f_1$ ~ 0.0015) than with concurvity regularization (range of $f_1$ ~ 0.02); how does this support the claimed benefit of regularization? Q7.b) How do you see that $f_2$ approximates $|f_1|$ (lines 192-193)? Q8. lines 238-241: could it be that this is due to it being classification rather than regression? Q9. You discuss the distinction between multicollinearity and concurvity. Is it possible to have multicollinearity, but *not* have concurvity? ### Notation and clarity: C10. Regarding the origin of concurvity (lines 100-101, 293), you might want to cite the work by Buja, Donnell, Stuetzle (1986) cited within your reference [9]. C11. The sum $\sum_{l=1}^N$ in the (GAM-Fit) and (GAM-Fit$_\perp$ equations seems to be inconsistent with the vector notation (the subscript $l$ is not used anywhere). Q12. line 85: the notation used in the equation $\mathcal{H} \subset \dots$ is not immediately clear to me. Is $\mathcal{H}$ supposed to be a space of $p$-tuples? Q13. “every suitable linear combination of features can be modified by adding a trivial linear combination” (lines 93-94) can you clarify what you mean here? Having gone through some of your references I understand, but your description on its own is rather confusing. Likewise, “any non-trivial zero-combination of features can be added to a solution of (GAM-Fit)” (line 109-110): what does this mean? C14. Simply writing “(GAM-Fit)” e.g. 
in line 110 made me take a while to realize that this is referring to an equation and to find where it was. How about having the equation label include a number, e.g. “(2; GAM-Fit)”, and then you can explicitly refer to “a solution of Eq. (2; GAM-Fit)” for example? C15. Additional remarks in Appendix A.2: these are very helpful, would be great to at least summarize the points in the main text. Q16. Figure 2 (b) is great to show that there is a value of $\lambda$ that reduces the concurvity measure without affecting RMSE perceptibly, but how sensitive is this to the value? This might be easier to see in two separate plots of $\lambda$ vs $R_\perp$ and $\lambda$ vs RMSE overlaid on top of each other (to identify the range of $\lambda$ in which both are low simultaneously). Q17. line 172: what is Fig. 8 supposed to demonstrate? This is not clear to me. Q18. lines 175, 177: what would be the “scale” of $\lambda$? What does “moderate” or “considerably high” mean? Q19. line 243: Are you aware of any other references that discuss the “elbow technique”? Thorndike’s “Who belongs in the family?” does not even mention “elbow”, and I would hope a better reference in this context is available (though it was a thoroughly enjoyable read which I thank the authors for bringing to my attention!) C20. The references are inconsistent. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors acknowledge the limited validation of their approach (though I would not call it "diverse"). They also acknowledge that there is an interpretability-accuracy trade-off in concurvity regularization (though I find their demonstration of "increased interpretability" lacking). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer aFW2, We appreciate the time and effort you have invested in reviewing our submission. Your feedback is valuable and we are happy to clarify and expand on the main points raised in your review. Due to the character limit we were unable to include our answers to every question, but we will happily provide them upon additional request. The remaining questions are answered in our global comments. We have addressed your concerns as follows: **Weaknesses:** - Regarding the lack of comparison to other models and methods of dealing with concurvity: Our regularizer is only applicable to differentiable GAMs, with Neural Additive Models as the most prominent example. On the other hand, previous approaches dealing with concurvity investigate specific feature selection strategies (Kovács 2022), which are not straightforward to apply to NAMs (where all feature functions are fitted jointly). Since our work specifically focuses on differentiable GAMs, we have intentionally not included a direct comparison with previous concurvity reduction techniques. We are happy to point this out more clearly in the camera-ready version. Nevertheless, we agree that reporting a classical GAM baseline is useful. Therefore, we have added a spline-based GAM (via pyGAM (Servén et al. 2018)) as a baseline to our experiments. Please see also our global comment C3. - On the lack of demonstration of the claimed feature selection: Indeed, the concurvity regularizer encourages feature selection where features are strongly correlated, as demonstrated in Fig. 2a and 6. While feature selection is a possible consequence of concurvity regularization (when features are correlated) it is not the main aim and hence we do not compare with sparsity seeking methods, as detailed in our global comment C1. **Substantial questions:** 1. Indeed, we do not need to treat time series and regular tabular data differently in our approach. 
However, since time series form a data modality of independent interest, we decided to put this example into a separate part. But we agree that it can also be considered a “degenerate” extreme case of our tabular data experiments. 2. Previous works do not give a precise definition. Our definition is loosely inspired by that of [Ramsay et al., 2003], with an important difference: Ramsay et al. only consider model spaces of a cartesian form $\mathcal{H} = \mathcal{H}\_1 \times … \times \mathcal{H}\_p$, while we allow for any subset of $p$-tuples. This might appear to be a minor detail, but it is the key to our insights in Sec. 3 and theoretically justifies why our regularizer is useful. In particular, spaces like $\mathcal{H}\_\perp$ would be incompatible with previous definitions. 3. Yes, one could say that concurvity is primarily a model issue. As we show, by restricting the model space to $\mathcal{H}\_\perp$ in Def. 2.2, concurvity can be ruled out regardless of multicollinearity in the data. But of course, the data also plays an important role. For example, if the inputs $X_i$ are stochastically independent, so are any non-linear transformations of them. This implies that the regularization term is zero and does not affect the GAM fit at all (see Remark (3) in Appendix A.2). 4. The assignment of infinity for constant features is just for convenience, and we agree that a minor adaptation of the proof would be required for a different convention. This could be addressed by adding another constraint to $\mathcal{H}\_\perp$, excluding constant features. However, this technicality might cause confusion in the main body; hence, we decided on the simplified version and address the ill-definedness of the correlation in Footnote 5. 5. We elaborate on this issue in general comment C5. 6. We agree that the paper introducing mgcv [42] (Wood 2001) does not mention concurvity; however, the concurvity indices are implemented in mgcv and are described in its manual. 
Hence, we find the reference appropriate. (Kovács 2022) has a good description of the concurvity indices implemented in mgcv on Page 7. Regarding the similarity of our metric compared to Ramsay et al.: They compute for each feature the correlation between $f_i$ and the sum of all other functions (excluding $f_i$), whereas we compute the pairwise correlation between all $f_i$ and $f_j$. 7. a.) We agree that the range of $f_1$ is closer to zero in the unregularized case; however, we also find that the unregularized model has introduced an almost perfect correlation between $f_1(X_1)$ and $f_2(X_2)$, which isn’t present in the data. Note that both models have the same validation RMSE. b.) From Figure 3 (a) one can see that $f_2(X_2)$ approximates $|f_1(X_1)|$ up to an affine transformation. 8. $R_\perp$ is measured in the (Cartesian product) target space, i.e. the regression target space or the space of raw logits in the case of classification. In both cases, the scale will depend on the target specifics and not on the target type. That is, generally, this does not result in $R\_\perp$ being on a much smaller scale in classification as opposed to regression tasks. 9. Yes, an example is the NeuralProphet experiment in Fig. 1, where time is used as the input feature for all input components, implying perfect multicollinearity. Our approach demonstrates that the non-linearly transformed features can be decorrelated anyway. More generally, the idea of ruling out concurvity in the presence of (perfect) multicollinearity was precisely our motivation when deriving our regularizer in Section 3. Servén D., Brummitt C. (2018). pyGAM: Generalized Additive Models in Python. Zenodo. DOI: 10.5281/zenodo.1208723 László Kovács. Feature selection algorithms in generalized additive models under concurvity. Computational Statistics, pages 1–33, 2022. T O Ramsay, R T Burnett, and D Krewski. The Effect of Concurvity in Generalized Additive Models Linking Mortality to Ambient Particulate Matter. 
Epidemiology, 14(1):18–23, 2003. We appreciate your feedback. --- Rebuttal Comment 1.1: Title: Minor notes on Notation and clarity Comment: **Notation and clarity:** (10.) We thank the reviewer for the thorough analysis of the paper. To our understanding Buja, Donnell, Stuetzle (1986) is a preliminary technical report and not a reviewed paper. We were not able to find a digital version of (Buja 86) and hence decided to choose (Buja 89) which is widely cited as the seminal work on concurvity. The paper the reviewer is referring to appears to have later been published in 1994 as Donnell, Deborah J. et al. “Analysis of Additive Dependencies and Concurvities Using Smallest Additive Principal Components.” Annals of Statistics 22 (1994): 1635-1668. (11.) Thank you for spotting this inconsistency in our formulation of ERM. It slipped through and we have updated it accordingly. (12.) Yes, $\mathcal{H}$ can be any subset of $p$-tuples. But it is important to bear in mind that it is not necessarily a cartesian product of the form $\mathcal{H} = \mathcal{H}\_1 \times … \times \mathcal{H}\_p $, where $\mathcal{H}\_1 $, …, $\mathcal{H}\_p $ are individual function spaces. The set $\mathcal{H}$ may impose additional constraints between the functions of a $p$-tuple, like in the definition of $\mathcal{H}\_{\perp}$. This relaxation might appear like a subtlety but is a crucial aspect in our definition of concurvity and derivation of our regularizer. See also Q2. (13.) We are happy to clarify this point in the camera-ready version. By “suitable linear combination” we basically mean a collection of coefficients fitting a target variable, say $Y \approx d_0 + \sum_i d_i * X_i$. In the presence of multicollinearity according to Def. 2.1, we would then have $Y \approx (c_0 + d_0) + \sum_i (c_i + d_i) * X_i$. So there exist other (infinitely many) equivalent solutions with completely different coefficients, which causes an undesirable ambiguity. 
The same argument applies to non-linear features in the case of concurvity in Def. 2.2. (14.) Good point, we will keep this in mind for the camera-ready version. (15.) Thanks! We agree and will expand the paragraphs where they are referenced in the main text. (16.) Thank you for the suggestion. We have added the suggested plot in Figure 3 of the rebuttal pdf. We also refer the reviewer to global comment C4. (19.) We agree that Thorndike’s “Who belongs in the family?” may seem counterintuitive to be the originator of the elbow method (or “L-curve”) as the term was only coined later. However, to the best of our knowledge, it is considered as such. --- Rebuttal Comment 1.2: Comment: Thank you for the overall thorough response to the comments from all reviewers. Overall, I will increase my score accordingly. Reading through the reviews & rebuttals again, I just wanted to get back to two comments from my review: ### more complex toy example > Cited work such as [25] (Kovács 2022) uses toy examples to much better effect; a comparison of the proposed NAM/regularizer approach on the same toy example would make for a much stronger submission. I would strongly encourage you to apply your method to Kovács's toy examples, whether for the camera-ready if accepted, or for a resubmission elsewhere if rejected - I believe this will make your paper significantly stronger: it would fill the gap between your current toy examples, which seem overly simplistic / unrealistic, and the evaluation on real-world datasets, where it's not clear what the answer ought to be as there is no ground truth available. If possible, if you can still run this and describe the results in comment, that would make it easier for me to finalise my opinion. ### pair plots of features and of transformed features > Though the discussion is all about concurvity and non-linearity, only scalar correlation values are shown in Fig. 6 (a). 
Pair plots of both Xᵢ vs Xⱼ and fᵢ(Xᵢ) vs fⱼ(Xⱼ) might demonstrate the effect of the regularization more easily. Claims such as “their feature contributions largely [cancel] each other out” (lines 282-284) could easily be shown by showing their sum across the data set. I would have appreciated being able to see these plots; I'm assuming you can't update the rebuttal PDF/add figures at this point, but again I think this is something that would make your work easier to understand and believe in. ### visualization of how to choose $\lambda$ Thank you for preparing Fig. 3 of the rebuttal PDF - this does make it much easier to understand final choice of $\lambda$ on each dataset. For adding it to the manuscript, I would suggest also including vertical lines at the final choices of $\lambda$ for each column / dataset. E.g. based on these visuals I would expect the following choices: - California Housing: $\lambda \approx 0.1$, at which point additional regularization no longer reduces concurvity - Adult: slightly larger (maybe $\lambda \approx 0.2$ - harder to tell on a log plot without minor grid lines) until where the validation error is almost constant, while concurvity is continuously decreasing, and then the validation error suddenly starts increasing significantly - Boston Housing: similar to Adult, a bit larger still (maybe $\lambda \approx 0.4$?). Would be good to see indicated in the Figure whether this is indeed what you were thinking/choosing as well. --- Reply to Comment 1.2.1: Title: Additional results for the toy example by Kovács (2022) Comment: **Regarding the toy example from Kovács (2022)** We thank the reviewer for taking our response into consideration and raising the evaluation score. In response to the suggestion to include a more complex toy example, we have replicated the toy example from Kovacs (2022) using our NAM setup. 
To recap, this example contains 7 features:

$X_1 \sim X_2 \sim X_3 \sim U(0,1)$
$X_4 = X_2^3 + X_3^2 + N(0, \sigma_1)$
$X_5 = X_3^2 + N(0, \sigma_1)$
$X_6 = X_2^2 + X_4^2 + N(0, \sigma_1)$
$X_7 = X_1 \times X_2 + N(0, \sigma_1)$
$Y = 2X_1^2 + X_5^3 + 2 \sin (X_6) + N(0, \sigma_2)$

where $\sigma_1$ is sufficiently small to create severe concurvity among the features ($\sigma_1 = 0.05$, $\sigma_2 = 0.5$). We simulated 10,000 data points from this model and created a 7:3 train/test split. We fitted 20 random initializations of a NAM in unregularized, concurvity-regularized, and L1-regularized settings. The regularization parameter $\lambda$ was determined separately for each regularization type based on trade-off curves. For concurvity regularization we used $\lambda = 0.1$ and for L1 we used $\lambda = 0.05$. The results, measured as $R^2$ on the test set, are reported in the table below, with the top three rows from Kovács (2022) for comparison. Confidence intervals of the mean are estimated on 10,000 bootstrap samples. Features are presented in descending order of their importance (as defined in the main paper) for the best-fitting model of each setting, with importances reported in the cell below.

| **Model** | **Selected Features** | **$R^2$(test) (%)** mean, (5% / 95% conf. int) | **$R_\perp$(test)** mean, (5% / 95% conf. int) |
|---|---|---|---|
| Full model | Full model | 84.99 | |
| Stepwise | **X1**, X4, **X5**, **X6** | 85.11 | |
| Hybrid algorithm | **X1**, **X5**, **X6** | 85.31 | |
| Unregularized (ours) | **X1**, **X6**, X4, X2, **X5**, X3, X7 | 80.77, (80.31 / 80.95) | 0.22, (0.20 / 0.23) |
| ⤷ Feature Importance | **0.129**, **0.097**, 0.066, 0.053, **0.043**, 0.013, 0.004 | | |
| Concurvity Reg. (ours) | **X1**, **X6**, **X5**, X2, X7, X4, X3 | 79.28, (78.52 / 79.88) | 0.03, (0.02 / 0.03) |
| ⤷ Feature Importance | **0.132**, **0.125**, **0.088**, 0.070, 0.006, 0.002, 0.002 | | |
| L1 Reg. (ours) | **X6**, **X1**, **X5**, X7, X4, X3, X2 | 79.12, (78.50 / 79.47) | 0.21, (0.20 / 0.21) |
| ⤷ Feature Importance | **0.147**, **0.106**, **0.037**, 0.009, 0.008, 0.005, 0.0 | | |

We note that both concurvity regularization and L1 regularization correctly identify the three predictive features $X_1$, $X_5$ and $X_6$ on which $Y$ directly depends. This is not the case without regularization. Furthermore, we find that concurvity regularization effectively reduces $R_\perp$, unlike L1 regularization. We also note that the $R^2$ values of all NAM implementations are lower than those reported by Kovács, which we believe is due to the inductive biases of spline-based models being particularly well-suited to the mostly polynomial problem. We appreciate the reviewer's insightful suggestion, which has indeed underscored the efficacy of concurvity regularization. We are more than willing to provide further results for the toy example, should the reviewer have any specific requests. We would be pleased to incorporate these results into the final version of the paper.

**Regarding the pair plots** Regrettably, we are unable to update the rebuttal PDF at this point. Initially, we opted not to include the pair plots due to space limitations. Upon further examination, we found that these plots provided minimal extra insight compared to the scalar correlation plot, while taking up significantly more space. Consequently, we believe they are better suited to the appendix than the main body of the paper. We would be glad to incorporate them into the final version of the paper.

**Visualization of how to choose $\lambda$** We thank the reviewer for the suggestion and will include the suggested plots with vertical lines indicating the chosen regularization strength in the final version of the paper.
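For reference, the data-generating process from Kovács (2022) described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the equations, not the authors' experiment code; the function name `simulate_kovacs`, the seed, and the column layout are our own choices.

```python
import numpy as np

def simulate_kovacs(n=10_000, sigma1=0.05, sigma2=0.5, seed=0):
    """Simulate the concurvity toy example of Kovács (2022), as stated above."""
    rng = np.random.default_rng(seed)
    # Three independent uniform base features.
    X1, X2, X3 = rng.uniform(0.0, 1.0, (3, n))
    # Derived features with small noise sigma1, inducing severe concurvity.
    X4 = X2**3 + X3**2 + rng.normal(0.0, sigma1, n)
    X5 = X3**2 + rng.normal(0.0, sigma1, n)
    X6 = X2**2 + X4**2 + rng.normal(0.0, sigma1, n)
    X7 = X1 * X2 + rng.normal(0.0, sigma1, n)
    # Target depends directly only on X1, X5, X6.
    Y = 2 * X1**2 + X5**3 + 2 * np.sin(X6) + rng.normal(0.0, sigma2, n)
    X = np.column_stack([X1, X2, X3, X4, X5, X6, X7])
    return X, Y
```

With $\sigma_1 = 0.05$, the derived columns are almost deterministic functions of the base features (e.g. $X_5 \approx X_3^2$), which is exactly the concurvity regime the experiment probes.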
Summary: Generalized additive models (GAMs) offer greater interpretability than other machine learning methods while having greater flexibility than generalized linear models. This paper addresses how to account for concurvity (the non-linear analog to multicollinearity in linear regression). The authors suggest directly penalizing correlations between the features during training. Concurvity poses problems with interpretability (which (transformed) feature to attribute the contributions to) and can increase the variance in the model. The proposed regularizer was found to reduce concurvity while maintaining strong predictive performance. The viability and success of this method are shown in time series forecasting as well as tabular regression tasks. Strengths: - Clear motivation for the problem and general organization of the paper - Method is light on assumptions regarding the feature transformations, $f_i$ so this can be applied in many settings - Improves model interpretability while reducing variance in fitted functions Weaknesses: - The figures were very information dense (particularly fig 6). More discussion of these results and how to interpret them would aid the reader - Not much space was spent on the time-series experiments relative to how much was dedicated to it in the introduction Minor - line 75: (linear) regression - line 83: minimize over $\beta$ as well? - line 126: minimize over $\beta$? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How much do the $f_i$ differ depending on $\lambda$? How fair is it to compare feature importances across different levels of regularization if the resulting $f_i$ are not the same? (Possibly addressed in figure 6.c?) - line 126.5: in the $gam-fit_\perp$ what is the $l$ index for? It does not show up in any of the other terms - What is the additional computational burden of computing the regularizer? How well does this scale? 
- How does gam-fit perform in the presence of covariates that are neither perfectly correlated nor entirely uncorrelated (as they were in the toy examples)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 17dg, Thank you for your constructive feedback and recognition of the clear motivation for our problem, the broad applicability of our method, and its contribution to improving model interpretability and reducing variance in fitted functions. We have addressed your concerns and questions as follows: 1. On the information density of the figures: We agree that, given the space constraints of the submission, some figures contain quite a lot of insights. We happily extend our discussion in the camera-ready version. 2. On the space dedicated to time-series experiments: The main reason why we did not devote more space to time-series experiments is their “degenerated” nature in the context of GAMs. Indeed, in the NeuralProphet example, time is used as the only input feature for all additive components, so that perfect multicollinearity is present. While our approach can be very useful in such scenarios, it should be considered an extreme case of our tabular data experiments, where the input feature relationships are more complex. On the other hand, the special form of the time-series example in Fig. 1 makes it intuitive and therefore well-suited for an introduction to our approach without prior knowledge of concurvity. Hopefully, this clarifies the mismatch in dedicated space. We are happy to point out this aspect in the camera-ready version. 3. On the minor errors: We appreciate your attention to detail and have adjusted the paper accordingly. 4. On the difference in $f_i$ depending on $\lambda$: We fully agree that this is tricky in general. As also addressed in our global comment C2, quantifying increased interpretability is challenging and there exists no gold standard yet. We choose to compare the shape functions $f_i$ and aggregated feature importances of NAMs trained with and without concurvity regularization in our case study. 
After all, the shape functions describe the prediction mechanism of GAMs and are used to gain insights or even guide practitioners in making decisions alongside the aggregated feature importances. The main purpose of Fig. 6(b) is therefore not a direct quantitative comparison of feature importances between different regularization levels; it should rather be used to extract qualitative insights, such as larger variances or bi-modality. We are happy to clarify this in the camera-ready version. 5. On the additional computational burden of computing the regularizer: We address this in our global remark C5. We will clarify the computational scaling in more detail in the camera-ready paper. 6. On the performance of gam-fit in the presence of covariates that are neither perfectly correlated nor entirely uncorrelated: We thank the reviewer for suggesting this experiment, which was similarly suggested by reviewer 5zAd. We added an additional result for toy example 1 in Figure 4 of the rebuttal pdf (center row) where the features have a correlation of 0.9. We find that w/o regularization the model converges to the wrong solution in every case, while w/ regularization the model converges to the right solution in almost all cases. We hope that these responses address your concerns and we are open to further discussions. We appreciate your feedback and will make the necessary adjustments in the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifying comments. I will update my score to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you Reviewer 17dg for raising your score. We are happy to clarify any remaining concerns.
Summary: The paper proposes a regularization using pairwise correlations of shape functions for differentiable GAMs, aimed at reducing concurvity between shape functions Strengths: 1. The idea of the paper is well-explained and straightforward. The correlated terms can be self-cancelled to avoid unnecessary complexity. So reducing concurvity may potentially enhance generalizability. 2. The paper’s story, theory and the numerical experiments all demonstrate that regularization effectively controls the correlations of the non-linearly transformed features. 3. The proposed component is easy to implement and optimize and as far as I understand, it can be incorporated into any differentiable GAM. Weaknesses: 1. The foundation of this paper – the reason why correlated terms should be cancelled out is still beyond me. For the time series scenario in the introduction, I understand that high frequency terms of feature pairs hinder interpretation. But what if the correlation does lie in the ground truth? Especially for the tabular data, the visualized results in the public dataset show that most of the features have almost no predictive power for the target variable (e.g. Population, households, housing age before 30 years, total bedrooms and total rooms, etc.), which seems counterintuitive. Little concurvity is allowed, but it happens in the real world. Is it possible that restricting concurvity may in general push the shape functions away from their natural relationship (trained to be not effective but actually affects the target)? The author may provide further evidence and explanation regarding the negative effects of concurvity, to make the article more convincing. 2. While previous literature found that regularizing the cross-covariance increases generalization performance, the numerical results presented in this paper indicate that controlling the concurvity will sacrifice the generalization ability anyhow. 
The accuracy is slightly worse than the one without regularization, in exchange for the stated ‘interpretability’, which unfortunately is not straightforward in the figures. Therefore, it appears that the trade-off between accuracy and correlation may not be worth it in this case. 3. The paper showcases the motivation using a time-series example, but no visual results for time series are presented to address the issue. It is doubtful whether the regularization does control the frequency or some uninterpretable parts, rather than merely force terms to be uncorrelated (in an unnecessary manner). 4. It is not shown how to do the optimization. Is the calculation of the regularization time-consuming and unstable? What would happen if the regularization is implemented in every iteration rather than only 5% of the total optimization steps as introduced in the article? There seems to be a mismatch between the design and the realization. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In Figure 2(a), we see the model with regularization chooses a random point on a curve. This is because both features are identical in the setting, making the model unidentifiable. What if the two features are correlated but not perfectly correlated? Is the regularizer able to identify the true predictive variable X1, and not so affected by random seeds? 2. How to determine a proper level of the strength of regularization (lambda)? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 5zAd, Thank you for your comprehensive review and constructive feedback on our paper. We appreciate your recognition of the clarity of our idea, the effectiveness of our regularization in controlling the correlations of the non-linearly transformed features, and the ease of implementation and optimization of our proposed component. We have carefully considered your concerns and questions and have addressed them as follows: **Weaknesses:** 1. On the concern about the foundation of the paper and the negative effects of concurvity: Thank you for this insightful comment. Since related points were brought up by the other reviewers as well, we have added a global statement; please see comment C1. Regarding your specific concern: While we agree that ground-truth input features often exhibit natural relationships due to correlation, we do not think that the behavior of our regularization approach is counterintuitive. In fact, the idea of “suppressing” redundant features is well-established in the field of feature selection and widely accepted in the community. These methods intend to provide a particularly simple predictive model, relying on as few input features as possible. However, this does not mean that the other features have no predictive power (they almost always do). Our approach aligns well with this philosophy, as it provides a particularly simple model by decorrelating the transformed features. Having said this, we agree that it is useful to provide more evidence that ignoring concurvity can be very problematic. To this end, we have added a spline-based GAM (via pyGAM (Servén et al. 2018)) as a baseline to our experiments (see also global comment C3), which demonstrates that the resulting shape functions exhibit a large variance – an issue commonly reported in the concurvity literature (please see the review pdf). 
In other words, the shape functions do not exhibit a natural relationship, since the space of shape functions is degenerate with possibly infinitely many equivalent solutions. We demonstrate that our method can greatly reduce this ambiguity, and in that sense, make the prediction models more interpretable. We hope that this clarifies your concern about the foundation of our approach and we are happy to make this point clearer in the camera-ready version. Servén D., Brummitt C. (2018). pyGAM: Generalized Additive Models in Python. Zenodo. DOI: 10.5281/zenodo.1208723 2. On the trade-off between accuracy and correlation: We assume that you are referring to the work by Cogswell 2016 with “regularizing the cross-covariance increases generalization performance”. While the work by Cogswell et al. indeed proposes a decorrelation approach, it operates in a fairly different regime, namely decorrelating _hidden representations_ to reduce overfitting of deep neural networks. Although regularization often improves generalization of deep neural networks by reducing overfitting, we did not expect the same improvement in generalization in GAMs, since overfitting is typically not a major issue in this type of model. Rather, regularization is employed here to constrain the properties of the model with the aim of improving interpretability. The trade-off between accuracy and concurvity mirrors the well-established sparsity-accuracy trade-off in classical feature selection paradigms, and our experiments demonstrate that the sacrifice in accuracy is fairly small in all considered cases. We are not aware of any previous works where regularization has improved generalization in GAMs; if the reviewer has any such works in mind we would be happy for a pointer to these references. Regarding the gain of interpretability, we kindly refer to our answer to your first concern as well as global comment C1. M Cogswell, et al. 
“Reducing Overfitting in Deep Networks by Decorrelating Representations.” In: International Conference on Learning Representations, 2016. 3. On the lack of visual results for time series: We believe this to be a misunderstanding: the example we present in Figure 1 shows actual experimental results obtained for the three settings detailed in Section 4.2. The only learnable parameters of the model are the coefficients of the Fourier terms of which each has a distinct frequency. As a result, the decorrelation does directly influence the dominance of a frequency since the model has no other parameters to adapt. 4. On the optimization process: We thank the reviewer for raising this concern. We elaborate more on the efficiency and scaling of the regularization in the general comment C5. We want to clarify that the regularization is added after a warm-up phase of 5% of the total optimization steps, but afterwards used *in every optimization step*. We will clarify this in Appendix C.1. **Questions:** 1. On the questions about Figure 2(a) and what happens if two features are correlated but not perfectly correlated. Thank you for suggesting this experiment which was similarly suggested by reviewer 17dg. We added an additional result for toy example 1 in Figure 4 of the rebuttal pdf (center row) where the features have a correlation of 0.9. We find that w/o regularization the model converges to the wrong solution in every case while w/ regularization the model converges to the right solution in almost all cases. 2. How to determine a proper level of the strength of regularization (lambda)? We address this in our global comment C4. We hope that these responses satisfactorily address your concerns and provide further clarification on our work. We are open to continuing this dialogue to further refine our paper. We truly appreciate your insightful feedback and will incorporate your suggestions in the final version of our paper. 
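The warm-up schedule described in point 4 above (no penalty for the first 5% of steps, then the penalty at every step) can be sketched as follows. This is our own minimal illustration of that schedule, not the authors' code; the function name, signature, and default values are hypothetical.

```python
def total_loss(mse, penalty, step, total_steps, lam=0.1, warmup_frac=0.05):
    """Combine data-fit loss and concurvity penalty with a warm-up phase.

    During the first `warmup_frac` of optimization steps only the data-fit
    term is used; afterwards the penalty is applied in every step.
    """
    if step < warmup_frac * total_steps:
        return mse
    return mse + lam * penalty
```

The key point, as clarified in the rebuttal, is that the 5% refers only to the warm-up phase; outside of it the regularizer contributes to every optimization step.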
--- Rebuttal Comment 1.1: Comment: We would like to thank the reviewer again for the thorough review and hope we have addressed your concerns. We would appreciate it if you could consider adjusting your score accordingly.
Rebuttal 1: Rebuttal: We thank all reviewers for carefully reading our manuscript as well as their thoughtful comments and suggestions. We are happy that the reviewers acknowledge that “multicollinearity/concurvity are indeed serious problems in statistics” (R.arAb) and that our approach is novel (R.arAb), light on assumptions (R.17dg), easy to implement and optimize (R.5zAd), can be combined with any differentiable GAM (R.aFW2 & R.17dg), and that it improves the model interpretability (R.17dg). Thus, we believe our contribution to be of value to the NeurIPS community. We summarize and address some questions shared amongst the reviewers in the following. ### C1. Motivation of our contribution and clarifications on concurvity in general While all reviewers agree on the efficacy of our regularizer, there were some questions as to the motivation to reduce concurvity, which we address here. Although concurvity may be a property of datasets (as illustrated in our two toy examples), we argue that it is an undesirable property of a model. Akin to linear models in the presence of multicollinearity, concurvity in GAMs produces a large variance in model fits (see Fig. 2c att. pdf) and may introduce spurious correlations between the transformed features. Both phenomena may be observed in the California Housing case study. In particular, some features with positive correlation in the input space (e.g., “Total bedrooms” and “Population”) become negatively correlated in the transformed feature space of the unregularized NAM, leading to canceling contributions. In conventional statistics, it is not uncommon to drop features or employ sparsity regularization in order to remove multicollinearity (Dormann et al., 2013). However, sparsity regularization penalizes large vector norms of transformed features, thus also affecting model fit in the absence of concurvity. 
Thus, if the primary goal is to remove spurious correlations from a model rather than to reduce the number of features, concurvity regularization may be preferable. Restricting concurvity reduces variance at the expense of increasing bias, generally resulting in a reduced accuracy. As we do not regularize in order to reduce overfitting, we did not expect improved generalization from regularization. ### C2. On quantifying interpretability We note that quantifying interpretability is an unsolved issue in the community. While Chang et al. [a] attempt to quantify interpretability using sparsity and fidelity metrics, these cannot be considered gold standard or best-practice as they pose additional concerns and limitations. Whereas data fidelity can only be measured on synthetic datasets, sparsity “can hide data bias and discriminate against minority groups.” [a]. Therefore, we chose a more detailed approach to assessing interpretability, illustrated in figures 5 and 6 and the detailed discussion of the California Housing case study. Here, we note that regularization mitigates inflated feature importances in correlated features, removes spurious correlations, reduces variance in the shape functions, and prunes some correlated features while leaving uncorrelated features intact. While quantitative measures of interpretability are still lacking, we argue that practical tools to deal with this issue in the meantime are still needed. We believe our contribution serves as an alternative to sparsity-based regularization, thus contributing to the toolkit of building interpretable models. [a] C Chang, S Tan, B Lengerich, A Goldenberg, and R Caruana. “How Interpretable and Trustworthy are GAMs?” In ACM SIGKDD Conference on Knowledge Discovery & Data Mining, p 95–105, 2021. ### C3. On the evaluation of our regularizer A concern shared between most reviewers is the scope of our evaluation, currently comprising three toy examples and four real-world tabular datasets. 
In response to these concerns, we have conducted additional experiments on three additional tabular datasets. Moreover, we have added evaluation of a conventional spline-based GAM across all tabular datasets, which is depicted in the attached PDF (Fig. 1&2c) and will be included in our camera-ready version. The results are clearly in line with our previous findings and further demonstrate the applicability of our method. Finally, we will provide further evaluation of the investigated datasets, similar to our in-depth analysis presented in Fig. 6, in the appendix of the camera-ready version. ### C4. On regularization strength lambda in practice As some reviewers pointed out, choosing the regularization strength lambda is of great practical importance. We briefly mention the elbow technique (or L-curve) in our paper, using the provided tradeoff curves (e.g. Fig 2b, 3b, 4 and 5). While these curves are well suited to identify a good tradeoff point between gains (in terms of lower concurvity) and losses (in terms of less accuracy), identifying the corresponding lambda is arguably tricky in the printed version (but trivial in an interactive digital plot). Alternatively, we now also provide separate curves for concurvity and accuracy over lambda, as suggested by R.aFW2 – see the attached PDF. ### C5. On computational complexity and overhead A last general concern among reviewers was computational overhead, given the quadratic scaling of the regularizer in the number of features mentioned in the paper. An important point here is that the calculation of the pairwise correlations can be parallelized via vectorization, making it efficient to calculate while keeping the scaling constant controllable (i.e. via increased parallel compute). 
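As an illustration of this vectorized computation, a minimal NumPy sketch of the pairwise-correlation penalty might look as follows. This is our own illustrative code under the assumption that the regularizer averages absolute pairwise Pearson correlations of the transformed features, as described in the rebuttal; the function name is hypothetical, and in practice this would be written in a differentiable framework rather than NumPy.

```python
import numpy as np

def concurvity_penalty(F: np.ndarray) -> float:
    """Mean absolute pairwise Pearson correlation between the columns of F.

    F has shape (n, p): column i holds the transformed feature f_i(X_i)
    evaluated on a batch of n samples. All pairwise correlations are obtained
    from a single (p, p) matrix product, i.e. fully vectorized.
    """
    n, p = F.shape
    Fc = F - F.mean(axis=0)                    # center each column
    Fn = Fc / (F.std(axis=0) + 1e-12)          # normalize; guard constant columns
    corr = (Fn.T @ Fn) / n                     # (p, p) correlation matrix
    off_diag = corr - np.diag(np.diag(corr))   # drop the diagonal (self-correlations)
    return float(np.abs(off_diag).sum() / (p * (p - 1)))
```

Because the p(p-1)/2 pairs are computed in one matrix product, the quadratic scaling in the number of features translates into a single dense GEMM, which parallel hardware handles efficiently.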
As an example, even for a dataset of around 1000 columns/features (which is way beyond most typical datasets) at a batch size of 512 our implementation of the proposed concurvity regularizer has a negligible average runtime of 6.9 ms (tested on an M1 MacBook Pro averaged over 1000 runs). We will provide a small analysis of the runtime and overhead in the appendix of our revised paper. Pdf: /pdf/b26fffece310b06fb3b1d1f2b8cbb60668766c55.pdf
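The vectorized pairwise-correlation computation described in the rebuttal can be sketched in a few lines. This is a minimal NumPy illustration of the idea only, not the authors' implementation: the function name, the shapes, and the use of mean absolute Pearson correlation over a batch of per-feature shape-function outputs are all assumptions.

```python
import numpy as np

def concurvity_penalty(F, eps=1e-8):
    """Mean absolute pairwise Pearson correlation between per-feature
    shape-function outputs F (shape: batch x d). All d*(d-1)/2 pairs
    are computed in a single vectorized matrix product."""
    Fc = F - F.mean(axis=0, keepdims=True)                 # center each column
    Fn = Fc / (np.linalg.norm(Fc, axis=0, keepdims=True) + eps)
    R = Fn.T @ Fn                                          # d x d correlation matrix
    d = R.shape[1]
    off_diag = np.abs(R[~np.eye(d, dtype=bool)])           # drop the diagonal
    return off_diag.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 1000))   # batch of 512, 1000 features (as in the rebuttal)
penalty = concurvity_penalty(X)    # small for independent features
```

Because the pairwise work is a single matrix multiply, the quadratic scaling in the number of features is absorbed by parallel (BLAS/GPU) compute, which is consistent with the millisecond-scale runtime reported above.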
NeurIPS_2023_submissions_huggingface
2023
Mass-Producing Failures of Multimodal Systems with Language Models
Accept (poster)
Summary: The authors present MultiMon, which is a system for automatically identifying both general example categories that multimodal models struggle with, and new specific examples that would likely produce errors. MultiMon does this by using a semantic-similarity text encoder to find sufficiently different sentences in a corpus, and then giving these sentences to CLIP to see if CLIP actually produces very similar embeddings for them. If CLIP does produce similar embeddings for sufficiently different sentences, then this is evidence that a variety of multimodal models may not be able to distinguish them. MultiMon gives these sentences to powerful LLMs such as GPT-4, and asks the LLMs to identify general categories of examples that multimodal models may struggle with. The LLMs are then asked to generate specific examples of each category that they have identified. Overall the output of MultiMon is challenging test datasets with novel examples for multimodal systems. Subsets of these challenging test datasets have been verified by the authors to produce a significant number of failures on downstream text-to-image, text-to-video, and text-to-3d-model systems. Strengths: The writing and presentation is clear and fairly comprehensive, and the claims seem well supported by the evidence. As far as I am aware, the work is sufficiently original. They present an important general idea, which is to bootstrap multimodal failure knowledge from incredibly proficient text-only LLMs in an automatic way. I think that approaches like this one will only become more important in the field as LLMs improve even more. Weaknesses: The authors used themselves as annotators to evaluate how well their failure-finding procedure could actually find real failures on downstream generation tasks. It could be useful to get independent annotators who are not invested in the project though. It shouldn't be terribly expensive to spin up a task on MTurk - the authors only checked 100 pairs. 
I found myself really relying on the more meaty figures in the appendix to validate the claims in the paper. You could consider at least putting this in the main text: a table of downstream failure rates of various models as judged by human annotators (ideally not the authors). It will help people skimming the paper who just want to see how well your approach actually works. I see that you can automatically identify failures, but can you automatically fix them? Did you try incorporating the generated data into training routines? How did it do? You could at least use this automatic-fixing end goal to help motivate the paper if you have room. Hugging Face is two words, both capitalized. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Did you test MultiMon on any vision models that definitely don't use CLIP under the hood? It would be great if your failure-finding procedure could be shown to generalize beyond CLIP-based models. The 100 pairs that you manually checked are randomly sampled, right? Are the discovered failure categories in Appendix C quoted verbatim from the language models? It is unclear to me. It would also be helpful to include a specific example generated by the model for each class it discovered in Appendix C. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I believe that the limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
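The scraping step the review above describes — flagging sentence pairs that a semantic-similarity text encoder keeps apart but that CLIP embeds almost identically — can be sketched as follows. This is an illustrative sketch, not the paper's code: the function name, the use of cosine similarity, and the threshold values are assumptions, and the embeddings are taken as precomputed inputs.

```python
import numpy as np

def scrape_erroneous_agreement(clip_emb, ref_emb, clip_thresh=0.95, ref_thresh=0.5):
    """Return index pairs (i, j) of sentences whose CLIP text embeddings
    are nearly identical while a reference text encoder keeps them far
    apart -- the 'erroneous agreement' candidates described above.
    Both inputs are (n_sentences x dim) embedding matrices."""
    def cos_matrix(E):
        En = E / np.linalg.norm(E, axis=1, keepdims=True)
        return En @ En.T
    clip_sim = cos_matrix(clip_emb)
    ref_sim = cos_matrix(ref_emb)
    # keep only the upper triangle so each unordered pair appears once
    mask = np.triu((clip_sim > clip_thresh) & (ref_sim < ref_thresh), k=1)
    i, j = np.where(mask)
    return list(zip(i.tolist(), j.tolist()))

# Toy example: sentences 0 and 1 agree under "CLIP" but not the reference.
clip = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
ref = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
pairs = scrape_erroneous_agreement(clip, ref)   # -> [(0, 1)]
```

The surviving pairs would then be handed to an LLM for the categorization and generation stages.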
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive review and valuable feedback! We appreciate that you find our work “present[s] an important general idea” and think “our approach will only become more important”. We include explanations below to address your points, including new experiments on systems that do not use CLIP: --- *Did you test MultiMon on any vision models that definitely don't use CLIP under the hood? It would be great if your failure-finding procedure could be shown to generalize beyond CLIP-based models.* Thank you for your suggestion! Based on it, we test how well MultiMon can find failures on multimodal models that definitely do not use CLIP in two settings: 1. **Transfer**. We show that the inputs we find using CLIP often directly produce failures in T5-based text-to-image models; 70.8% of the inputs used to generate figures in the main body or the appendix produce failures on DeepFloyd, a text-to-image model that uses T5 to embed inputs, rather than CLIP. [Figure 1, supplementary material] 2. **Generation from scratch**. We apply MultiMon to the T5 model from scratch (i.e., replace CLIP with T5 throughout the whole pipeline) and find some systematic failures of the T5-based model that the models we study do not have [Figure 2, supplementary material]. We go into further detail on the generation experiment below. **Generation details**. To find systematic failures of DeepFloyd from scratch, we repeat the entire MultiMon pipeline from Section 3, using T5 embeddings instead of CLIP (using our existing code, this took under three hours). We find many overlapping systematic failures but also some new ones. For example, new systematic failures that MultiMon outputs include: - _The model fails to comprehend the use of pronouns properly. The sentences are similar, but the change of subject affects the visual representation significantly_ - _The model fails to distinguish the time of day. 
This is critical in visual representation, as these times would significantly change the lighting, color scheme, and potentially the activity depicted in the image_ Overall, the systematic failures we find with MultiMon on T5 have an average success rate of 77.3%. We have also attached some inputs that MultiMon generates using T5 along with their images generated with DeepFloyd [Figure 2, supplementary material]. --- *The authors used themselves as annotators to evaluate how well their failure-finding procedure could actually find real failures on downstream generation tasks. It could be useful to get independent annotators who are not invested in the project though. It shouldn't be terribly expensive to spin up a task on MTurk - the authors only checked 100 pairs.* Thanks for raising this point! We definitely think there are risks in having authors invested in the project doing labeling, so we designed our study to be deliberately *not* gameable. Each author labeled 400 chosen images: 100 random pairs of images from MultiMon, and 100 random pairs of images from the baseline system. However, we scrambled these 200 pairs together, so the authors were not given information on whether each pair was from the baseline versus from MultiMon (and thus couldn’t game which input to select, or exploit the “no match” option). Since there was a significant gap in accuracy between the baselines (80%) and MultiMon (20%), we expect that the failures are genuine. We will clarify this setup in subsequent versions of Section 5.1. We also agree that setting up a MTurk task would add further validation. We were not able to get IRB approval for the study in time for the rebuttal, but expect the results to be very similar to our study and will consider adding it to subsequent versions of our work. 
--- *It will help the readers to put more of the meaty figures from the appendix into the main text to validate the claims in the paper.* We will update based on your suggestion; thanks so much for reading our paper so carefully! --- *Can you automatically fix the failures you identified? You could at least use this automatic-fixing end goal to help motivate the paper.* Ultimately, we hope that developers can use MultiMon to improve subsequent generative systems, e.g., by using it as a source of examples for adversarial training. However, fixing these examples requires not only retraining CLIP, but also retraining the entire diffusion model to adjust to the new embeddings [300 - 301]. We will discuss this motivation in the discussion of subsequent versions of our work. --- *Hugging Face is two words, both capitalized.* Thank you for catching this mistake! We will correct every instance of “Hugging Face” in our text. --- *The 100 pairs that you manually checked are randomly sampled, right?* Yes, randomly sampled. We’ll make sure to clarify this. --- *Are the systematic failures in Appendix C verbatim from the LLM? + It would be nice to include a specific example generated by the model for each class it discovered in Appendix C.* Yes, the descriptions of systematic failures are copied verbatim from the LLM. And thank you for the advice to put examples beside each discovered class. We will update the Appendix accordingly in the revised version. --- Rebuttal Comment 1.1: Title: Response Comment: I really appreciate that you took the time to carefully respond to the critiques / questions and even ran some new experiments. I can't think of any more concerns at the moment. Definitely keeping my rating as a full "accept".
Summary: They propose MultiMon, a multimodal monitor, to automatically find, categorize, and generate failures of multimodal models. They categorize systematic failures of multimodal models using large language models and show that failures of CLIP embeddings also lead to failures in models that use these CLIP embeddings. Strengths: The authors propose an automatic evaluation pipeline for multimodal models. - Given an initial corpus for evaluation, they leverage large language models (LLMs) to systematically categorize systematic failures in that corpus as well as use LLMs to generate more examples from each category. - Their method can be steered for specific downstream applications like self-driving - Their method has 3 stages each of which are plug and play and therefore, their proposed system is flexible - They detect failures from an initial corpus without the need of generating outputs (though this might not be possible for models that don't have separate components that serve as a bottleneck for failures like CLIP embeddings do for current multimodal models) - Their proposed system is able to generate examples that fool safety filters of MidJourney Weaknesses: - The proposed system heavily relies on the quality and availability of a curated initial corpus - The authors do not provide any study on how the quality of the initial corpus affects the performance of their system - Their system is easy to use for an adversary as well (dual use) - The authors also do not provide examples of datasets that can be used as initial corpuses for their system (except for COCO that they use for experimentation) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As a suggestion, it would be good to quantify what is the minimum number of samples (and number of erroneous agreements expected, if any) that the initial corpus should have for their proposed system to actually provide a robust evaluation. 
- Examples of existing datasets that can serve as good starters (apart from COCO that has been used in the paper) would also be very valuable for researchers looking to use this system Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes they mention "We think deploying MULTIMON favors the evaluator over the adversary, as the evaluator gets to test for and fix the generated failures before release (at which point MULTIMON is useless to the adversary).". However, given the heavy reliance of the types of failures the proposed system generates on the initial corpus, it's hard to be convinced that MultiMon is useless to the adversary (they can always use a better/diverse initial corpus) even if the evaluator had initially used the system to do robust testing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions! We appreciate that you find MultiMon flexible, steerable and effective. We address your questions and comments below. --- *As a suggestion, it would be good to quantify what is the minimum number of samples (and number of erroneous agreements expected, if any) that the initial corpus should have for their proposed system to actually provide a robust evaluation.* Thanks for your suggestion; based on it, we quantify the properties of the initial corpus in two ways: - We measure **how many pairs of erroneous agreement appear in the corpus** (and find that this is much larger than the number that can fit in the context window), - We measure **how many pairs of erroneous agreement are needed to produce systematic failures** (and find that even for fewer pairs than we test in the original version, we still recover many failures). Overall, we hope these help alleviate the concern that our method is bottlenecked by the corpus, and include further details below. 1. _The number of erroneous agreements in each corpus._ While we only use 150 pairs of erroneous agreement in the prompt (due to the context window), we scrape 33922 pairs of erroneous agreements from SNLI (using 157351 examples to make pairs), and 2131440 pairs of erroneous agreement from MS-COCO (using 616767 examples to make pairs). Intuitively, even relatively small corpora may produce many examples of erroneous agreement, since the number of possible pairs scales quadratically with the size of the corpus. We think the main bottleneck is the *context window of the model*, which only allows us to input ~150 pairs of examples, rather than the corpus itself. 2. _The number of erroneous agreements the categorizer needs to generate failures._ In our paper, we show that MultiMon needs some pairs to produce high-quality systematic failures [184 - 187], and produces many systematic failures with 150 pairs (Figure 3). 
To augment this, we additionally try giving the language model the top-k pairs from the scraping step (using MS-COCO), then generate systematic failures from them. We report the results in Table 1 of the supplement, and find that the system still finds some systematic failures even with 10 examples, and 70% of the systematic failures with just 80 examples. These results suggest that even if the corpus does not have many pairs of erroneous agreement (5 orders of magnitude fewer for MS-COCO), MultiMon still produces failures. --- *The authors also do not provide examples of datasets that can be used as initial corpuses for their system (except for COCO that they use for experimentation)* In this work we study using SNLI in addition to MS-COCO as our corpus datasets, and find that different corpora produce different failures [Figure 3, main body; Section C.2, Appendix]. Beyond these two corpora, any reasonably large dataset containing sentence descriptions would be suitable, such as MNLI [1], Conceptual Captions [2], or Flickr30k [3]. In practice, we expect that model providers (e.g. Stable Diffusion, MidJourney) would simply collect inputs that real users submit, then input those to MultiMon as the corpus. --- *Their system is easy to use for an adversary as well (dual use) … [as] given the heavy reliance of the types of failures the proposed system generates on the initial corpus, it's hard to be convinced that MultiMon is useless to the adversary (they can always use a better/diverse initial corpus) even if the evaluator had initially used the system to do robust testing.* Thanks for raising this point; while adversaries could try using a different corpus than the evaluator, we think evaluators (defenders) are more likely to have better corpora, as they can see what queries users submit. For example, StabilityAI and MidJourney will have access to a large volume of submitted queries that they can use as a corpus, while the adversary does not. 
However, in scenarios where the adversary does have a better corpus, there could be an imbalance; we will add this to our discussion of risks. --- [1] Williams, Adina, Nikita Nangia, and Samuel R. Bowman. "A broad-coverage challenge corpus for sentence understanding through inference." arXiv preprint arXiv:1704.05426 (2017). [2] Sharma, Piyush, et al. "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning." Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018. [3] Plummer, Bryan A., et al. "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models." Proceedings of the IEEE international conference on computer vision. 2015.
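The rebuttal's point that the number of candidate pairs grows quadratically with corpus size is easy to verify from the quoted corpus sizes. A quick check (not the authors' code; the counts are upper bounds on candidate pairs before any filtering):

```python
from math import comb

def candidate_pairs(n):
    # Number of unordered sentence pairs in a corpus of n sentences: n*(n-1)/2.
    return comb(n, 2)

# Corpus sizes quoted in the rebuttal above.
snli_pairs = candidate_pairs(157351)   # ~1.2e10 candidates; 33,922 flagged
coco_pairs = candidate_pairs(616767)   # ~1.9e11 candidates; 2,131,440 flagged
```

Even a modest corpus therefore yields vastly more candidate pairs than the ~150 that fit in the categorizer's context window, consistent with the claim that the context window, not the corpus, is the bottleneck.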
Summary: The authors of the paper observe that CLIP is employed by many generative multi-modal models; as such, they seek to generate failing cases by determining textual inputs that are close within CLIP embedding space while being distant in DistilRoBERTa. They then determine the type of failing cases via an LLM primed to categorise the failing cases into the type of failure (negation, temporal, attribute differences etc.). Finally, they use the categories and the original examples and generalise to new additional failure cases via an additional LLM prompt. They show how different dataset-LLM pairs uncover different failure cases in the generative multimodal models. The latter two phases of the proposed approach have been shown to depend on the LLM used (GPT-3.5 performing worse than GPT-4 or Claude). The authors also show that the failures that stem from CLIP flow downstream, demonstrating the issue across text-to-{image, 3D, video} models. Strengths: - Clear framework and hypothesis (similarity in CLIP with dissimilarity in textual embeddings are likely to be problematic inputs). - Exploring the specialisation of MultiMon for a specific task, here demonstrated through self-driving, showing how a user of the framework may focus on a subset/subspace of interest for input generation - Exploring the impact of corpus-LLM combinations, showing how the approach may be improved later Weaknesses: - Some magical numbers used in the prompt are unclear (41, 82). Are these due to prompt limits, or some other limit? - The dependence on a dataset for scraping and chaining LLMs to categorise and generalise may create cascade failures in MultiMon. - Sensitivity to prompts can be an issue, however, the prompts being shared mitigate this issue to some degree. - Part of the difficulty in prompt generation is abstracted away, discussing how the prompt could be augmented for larger context window LLMs could be useful in the Appendix, perhaps related to the first point here. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The work seems to be going towards a direction that is similar to fuzz-testing. Did the authors consider how their approach compares to fuzz-testing (specialised to multimodal models)? Some of the steps are already present, input generation (based on finding candidates that are similar in one embedding and different in another) and generalisation/generation through LLMs. The missing step seems to be a fitness function, say visual similarity performed in an automatic manner if the target domain is diffusion models. ### Discussion Phase The authors have clarified the issues related to the first four weaknesses above clearly, although there are some lingering concerns despite the empirical results, I do not feel they are critical. As for the fuzzing point. I feel I was not sufficiently clear with the direction it is applied, however, it was also clearer during the discussion phase that this would be solidly outside of the scope of the paper and instead future work. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The work and failure cases can be used to bypass guard rails and filters employed by models that use CLIP. The authors do demonstrate this capability, however, this is done in the hope of enabling the detection of such issues before models are in production and hence this disclosure seems prudent. This is addressed, but only briefly in the additional material in the section demonstrating this capability. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and suggestions! We appreciate that you found our framework clear and liked our steering experiments. We respond to your questions and comments below: --- *Some magical numbers used in the prompt are unclear (41, 82). Are these due to prompt limits, or some other limit?* Yes, we set the numbers in the prompt to 41 because we find empirically that GPT-4 can output at most 41 pairs of failure instances in one response; sorry we did not specify this initially! We will add it to subsequent versions of our work. --- *The dependence on a dataset for scraping and chaining LLMs to categorise and generalise may create cascade failures in MultiMon.* This is an interesting point; at least empirically, we find that MultiMon reliably produces failures across all combinations of two corpus datasets (SNLI and MS-COCO) and three LLMs (GPT-3.5, GPT-4 and Claude v1.3) [Section 4.1, Appendix C.2, C.3, and C.4]. One hypothesis for why we don’t see cascading failures is that we only have three steps: steering, categorizing, and generating, so even if all steps fail slightly, the aggregate of these three failures may not be significant. --- *Sensitivity to prompts could be an issue + generating prompts may be difficult.* Though prompt sensitivity is an issue for other tasks, we find that our prompt produces good performance across all six of the model-corpora combinations that we test, and we expect the same prompt will continue to work for longer context windows. Nevertheless, we think there could even be room to improve MultiMon with more careful prompt selection, and will add this to the discussion. 
--- *Did the authors consider how their approach compares to fuzz-testing on multimodal models?* Thanks for your question; while our work at a high level is similar to fuzz testing (i.e., we scrape for candidates and adapt them), the inputs we choose are “in-distribution” (since they come from a corpus), and MultiMon is one shot: we simply scrape, categorize, and generate without iterating. Studying what insights from fuzz testing port well to identifying failures in ML systems seems like an interesting direction for subsequent work. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! At a glance, the low stacked error as a reason for no cascading failures makes sense. As for the fuzzing part, iteration was not included from the start to direct fuzzing towards interesting inputs. I was comparing more on the multiple samples from some assumed distribution. I agree that this is more for subsequent work, but there is perhaps potential where MultiMon acts as a sort of "prior" or starting distribution before iteration. In particular, the steering mechanism seems quite apt for this direction.
Summary: This paper proposes a system which can find and identify the systematic failures of existing multi-modal models. It measures failures by checking whether the model produces unexpectedly similar outputs for different inputs. It finds a lot of failures of the CLIP text encoder and utilizes a language model to categorize them. This paper also discusses the effect of the found failures on different downstream tasks including text-to-image generation. Those failures can be helpful for repairing multi-modal models. Strengths: 1. This paper is well-written, and the presentation is clear and easy to understand. 2. The proposed method to detect failures in multi-modal models is simple and effective. It can automatically find and categorize various failures in multi-modal systems. 3. The found failures are useful for analyzing the limitations of multi-modal models and downstream models such as text-to-image generation models. It would provide a clear direction to refine those models. Weaknesses: 1. It would be helpful if the paper conducted further analysis of the reasons for those failures, which may point out some systematic problems in the design of existing multi-modal models. 2. This paper only detects and categorizes the failures of the CLIP text encoder. The paper could add more experiments to evaluate the proposed methods on different multi-modal models and tasks. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Can the proposed methods directly detect the failures of multi-modal models when dealing with image inputs? For example, detecting that the CLIP image encoder provides similar embeddings for different images. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have addressed the technical limitation of this paper. There is no obvious negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful comments and interesting questions! We appreciate that you found our work well-written, clear, and useful for analyzing the limitations of multimodal models. We respond to your questions below. --- *This paper only detects and categorizes the failures of the CLIP text encoder. This paper can add more experiments to evaluate the proposed methods on different multi-modal models and tasks* Thank you for your suggestion! Based on it, we test how well our system can find failures on multimodal models that do not use CLIP in two settings: 1. **Transfer**. We show that the inputs we find using CLIP often directly produce failures in T5-based text-to-image models; 70.8% of the inputs used to generate figures in the main body or the appendix produce failures on DeepFloyd, a text-to-image model that uses T5 to embed inputs, rather than CLIP. [Figure 1, supplementary material] 2. **Generation from scratch**. We applied MultiMon on the T5 model from scratch (i.e., replaced CLIP with T5 throughout the whole pipeline) and found some systematic failures of the T5-based model that the models we study do not have [Figure 2, supplementary material]. We go into further detail on the generation experiment below. **Generation details.** To find systematic failures of DeepFloyd from scratch, we repeat the entire MultiMon pipeline from Section 3, using T5 embeddings instead of CLIP (using our existing code, this took under three hours). We find many overlapping systematic failures but also some new ones. For example, some new systematic failures that MultiMon outputs are: - *The model fails to comprehend the use of pronouns properly. The sentences are similar, but the change of subject affects the visual representation significantly* - *The model fails to distinguish the time of day. 
This is critical in visual representation, as these times would significantly change the lighting, color scheme, and potentially the activity depicted in the image.* Overall, the systematic failures we find with MultiMon on T5 have an average success rate of 77.3%. We have also attached some inputs that MultiMon generates using T5 along with their images generated with DeepFloyd [Figure 2, supplementary material]. --- *It would be helpful if the paper could conduct further analysis of the reasons for these failures, which may point out some systematic problems in the design of existing multimodal models.* Thanks for your suggestion! Based on the construction of MultiMon, we can say that the failures we uncover are caused entirely by the text-encoder; this means that no matter how effectively someone trains a diffusion model on top of these embeddings, failures will remain (so the embedding model itself needs to be fixed or changed) [298 - 305]. Beyond that, it is hard to come up with causal reasons for the failures; one hypothesis is that failures to encode negation, numerical differences, and bag-of-words behavior arise because there are insufficient discriminatory examples during pretraining, while another is that the dimensionality of the embeddings is too small (768) to encode all relevant features that appear in text. Robustly testing hypotheses like these is an important direction for subsequent work. --- *Can the proposed methods directly detect the failures of the multi-modal models when dealing with image inputs? For example, detecting that the CLIP image encoder provides similar embeddings for different images.* Conceptually, we think the MultiMon framework can be used to find failures of the CLIP image encoder: for scraping, we could take a dataset of images, and find examples of erroneous agreement by comparing against a pretrained image encoder. 
The challenge comes at the “categorization” step; since we use language models to identify systematic failures, the inputs must be text, not images. However, as text-guided image-to-text systems improve, a multimodal text + image-to-text model (like Google’s Bard [1], Llava [2], or multimodal GPT-4 [3]) could substitute as a categorizer, and a text-to-image model could serve as the generator. We think this highlights the generality of our framework, and would be an interesting direction for subsequent work. --- [1] Google, An important next step on our AI journey, 2023 [2] Liu, Haotian, et al. "Visual instruction tuning." arXiv preprint arXiv:2304.08485 (2023). [3] OpenAI. Gpt-4 technical report, 2023.
Rebuttal 1: Rebuttal: We thank all of the reviewers for their feedback on our work. Reviewers liked that our framework is clear (VAzu, ATN5), simple and effective (vAM5), flexible with plug-and-play components (dXH6), and steerable towards certain subdomains (VAzu, dXH6), and could have staying power, saying “approaches like this one will only become more important” (ATN5). Multiple reviewers (vAM5, ATN5) were interested in whether MultiMon can find failures in text-to-image models that are known to not use CLIP. In response, we test MultiMon on such systems in two settings. 1. **Transfer**. We show that the inputs we find using CLIP often directly produce failures in T5-based [1] text-to-image models; 70.8% of the inputs used to generate figures in the main body or the appendix produce failures on DeepFloyd [2], a diffusion model that uses T5 to embed inputs rather than CLIP. [Figure 1, supplementary material] 2. **Generation from scratch**. We applied MultiMon on the T5-based DeepFloyd from scratch (i.e., replaced CLIP with T5 throughout the whole pipeline) and found many systematic failures, some of which the CLIP model does not have (e.g., T5 struggles to encode pronouns properly) [Figure 2, supplementary material]. We respond to individual reviewer comments below. [1] Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." The Journal of Machine Learning Research 21.1 (2020): 5485-5551. [2] Alex, Misha, et al. DeepFloyd IF by DeepFloyd Lab at StabilityAI. https://github.com/deep-floyd/IF, (2023) Pdf: /pdf/b9d7bdd543a0b74d9784431bee60bc05a9683dae.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
UNSSOR: Unsupervised Neural Speech Separation by Leveraging Over-determined Training Mixtures
Accept (poster)
Summary: This paper presents an innovative approach to unsupervised neural speech separation, leveraging the conditions where the number of microphones surpasses the number of speakers. The authors propose a method, named UNSSOR, which transforms an originally ill-posed problem - one that does not have a unique solution - into a well-posed problem - one with a unique solution - thereby facilitating the separation of speakers. Key contributions of the paper include: - The authors establish a linear-filter constraint between each speaker's reverberant images at each microphone pair, which converts the ill-posed problem into a well-posed one, thereby enhancing the separation of speakers. - The authors devise loss functions inspired by the blind deconvolution problem and propose a DNN-based approach to optimize these functions. The speaker images are determined via DNNs, while the linear filters are estimated using a sub-band linear prediction algorithm named FCP, based on the mixture and DNN estimates. - To address the frequency permutation issue that arises when using sub-band FCP, the authors propose a loss term that minimizes a measure known as intra-source magnitude scattering. The authors claim that UNSSOR can be trained to perform under-determined separation, such as monaural unsupervised speech separation, based on over-determined training mixtures. Strengths: The authors propose a novel unsupervised neural speech separation method. The proposed method, UNSSOR, utilizes the multimicrophone over-determined condition to solve the unsupervised learning problem of speech separation. This is a completely new perspective compared to previous unsupervised speech separation methods. The authors have designed innovative loss functions that guide the unsupervised separation model about the desired sound objects and encourage the separation of speakers. This approach is original and shows a creative combination of existing ideas. 
Weaknesses: However, there are several limitations to this paper that undermine its true potential. Please see below for concerns and questions for the authors. 1. The article mentions that MixIT may mix two mixtures that are not from the same scene under reverberant conditions, which in turn may provide spatial a priori information, or provide a priori information due to differences in the configuration and number of microphones. However, I feel that this statement is a bit too hypothetical. I think these problems are more a result of how the dataset is constructed and not a deficiency of the MixIT method itself. More specifically, these problems should be solved by constructing a room-specific dataset. The SMS-WSJ dataset mentioned in the paper can control the room's parameters, the arrangement of microphones, and the number of microphones. By finely designing and controlling these parameters, the resulting problems can be effectively circumvented, allowing the MixIT method to be evaluated in a more consistent environment. Therefore, I believe the problems mentioned in the paper do not serve as a weakness of the MixIT method. 2. In Section 4.2, I noticed that the authors proposed the MC loss function. However, as far as I understand, the MC loss function is not original to this paper and has been described in detail in reference [1]. The authors should cite [1] and clarify whether the MC loss function in the paper is an innovation over [1] and where exactly the innovation lies. 3. The authors chose TF-GridNet as the separation DNN structure for the UNSSOR method. TF-GridNet has shown excellent performance in supervised separation tasks. However, the question I would like to raise is: Is the UNSSOR method's effectiveness due to the separation model TF-GridNet's strong performance, or is it due to the contribution of the UNSSOR method itself?
I believe that, to demonstrate the capability of the UNSSOR method better, it is necessary to try different separation structures and compare the performance on each. If the UNSSOR method performs well on other separation structures, then we can be more confident that the UNSSOR method itself is highly robust and general. Such an analysis is necessary to assess and understand the actual value of the UNSSOR method. 4. I note that the approach in the paper combines traditional methods (spatial clustering and IVA) with the DNN-based method iRAS. In addition, UNSSOR, a DNN-based scheme, is compared against only one other DNN method, which may not be comprehensive, in my opinion. Of particular note is that MixIT is a popular and widely used method for unsupervised speech separation, as shown in references [2-5]. I was puzzled while reading the paper as to why the authors did not include MixIT in the comparison. 5. The authors set different STFT window sizes and hop lengths in the comparison, which may raise questions about the fairness of the results. For a DNN separation model, different STFT window sizes and hop lengths can significantly affect performance [6]. ### References [1]. Wisdom S, Hershey J R, Wilson K, et al. Differentiable consistency constraints for improved deep speech enhancement[C]//ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019: 900-904. [2]. Wisdom S, Tzinis E, Erdogan H, et al. Unsupervised sound separation using mixture invariant training[J]. Advances in Neural Information Processing Systems, 2020, 33: 3846-3857. [3]. Tzinis E, Adi Y, Ithapu V K, et al. RemixIT: Continual self-training of speech enhancement models via bootstrapped remixing[J]. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(6): 1329-1341. [4] Tzinis E, Casebeer J, Wang Z, et al.
Separate but together: Unsupervised federated learning for speech enhancement from non-iid data[C]//2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). IEEE, 2021: 46-50. [5] Zhang J, Zorila C, Doddipatla R, et al. Teacher-student MixIT for unsupervised and semi-supervised speech separation[J]. arXiv preprint arXiv:2106.07843, 2021. [6] Peer T, Gerkmann T. Phase-aware deep speech enhancement: It's all about the frame length[J]. JASA Express Letters, 2022, 2(10): 104802. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My detailed questions are as described above. (1) Is the MC loss function in this paper an innovation from [1] and where is the innovation marked? (2) Is the effectiveness of the UNSSOR method due to the powerful performance of the separation model TF-GridNet, or is it due to the contribution of the UNSSOR method itself? (3) Why not compare MixIT method? (4) Why not use the same STFT settings, especially compared to the DNN model iRAS. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors discuss the limitation of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > (1) Is the MC loss function in this paper an innovation from [1] and where is the innovation marked? > [1]. Wisdom et al. Differentiable consistency constraints for improved deep speech enhancement, ICASSP, 2019. Sorry for the confusion. We think that it is a novel contribution, and we should have marked the innovations. We will emphasize in the paper that the proposed MC loss is very different from [1] in the following aspects. First, in [1], DNN estimates are strictly constrained to add up to the mixture (see Eq. (7) and (9) in [1]), but our MC loss only ``encourages'' the filtered DNN estimates to add up to the mixture. Second, our MC loss is applied to filtered DNN estimates rather than DNN estimates. Third, we are dealing with multi-microphone MC, while [1] only addresses the single-channel case. Overall, the proposed MC loss and the one in [1] have very different motivations and physical meanings. We now realize that it is not a good idea to use the same name, and we will change to ``mixture-constraint'' loss to indicate the differences. > (2) Is the effectiveness of the UNSSOR method due to the powerful performance of the separation model TF-GridNet, or is it due to the contribution of the UNSSOR method itself? > I believe that to elaborate the capability of the UNSSOR method better; it is necessary to try to use different separation structures and compare their performance on each structure. If the UNSSOR method performs well on other separation structures, then we can be more confident that the UNSSOR method itself is highly robust and general. Such an analysis is necessary to assess and understand the actual value of the UNSSOR method. We think that both are important. Without using UNSSOR to deal with the ill-posed problem, the modelling capability of strong DNNs cannot be unleashed to separate speakers; and without using a strong DNN, the patterns in speech cannot be modelled well to realize good separation.
We think that the proposed UNSSOR method would not just work with a particular DNN architecture. We expect that the proposed UNSSOR mechanism can work with many DNN architectures, as long as the architecture is reasonably strong and can effectively handle reverberation, and that stronger DNN architectures would likely produce better separation. To further address the comments, we replace TF-GridNet with TCN-DenseUNet [A1], and train the network using the same training configurations. TCN-DenseUNet contains a temporal convolution network (TCN) sandwiched by a UNet with DenseNet blocks. It is a reasonably strong separation model, which is fully convolutional and shares many similarities with many modern DNN architectures in the literature, and, according to [A2], it is worse than the recent TF-GridNet architecture in supervised separation tasks. We provide the unsupervised results in Table 1 of the attached .pdf file. We observe that it obtains reasonably-good separation results in unsupervised tasks. [A1] Wang et al., Leveraging Low-Distortion Target Estimates for Improved Speech Enhancement, arXiv preprint arXiv:2110.00570, 2021. [A2] Wang et al., TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation, arXiv preprint arXiv:2211.12433, 2022. > (3) Why not compare MixIT method? > The article mentions that MixIT may have two mixtures that are not in the same scene under reverberant conditions, which in turn may result in providing spatial a priori information, or providing a priori information due to differences in the configuration and number of microphones. However, I feel that this statement is a bit too hypothetical. I think these problems are more a result of how the dataset is constructed and not a deficiency of the MixIT method itself. More specifically, these problems should be solved by constructing a room-specific dataset. Therefore, I believe the problems mentioned in the paper do not serve as a weakness of the MixIT method. 
> I note that the approach in the paper combines traditional methods with the DNN-based method iRAS. UNSSOR, a DNN-based scheme, is compared against only one other DNN method, which may not be comprehensive. Of particular note is that MixIT is a popular and widely used method for unsupervised speech separation, as shown in [2-5]. I was puzzled while reading the paper as to why the authors did not include MixIT in the comparison. See our response to all the reviewers. > (4) Why not use the same STFT settings, especially compared to the DNN model iRAS. > The authors set different STFT window sizes and hop lengths in the comparison, which may raise questions about the fairness of the results. For the separation model of DNN, different settings of STFT window sizes and hop lengths can significantly affect the model's performance [6]. > [6] Peer et al. Phase-aware deep speech enhancement: It's all about the frame length, JASA, 2022. Just to clarify: for UNSSOR and iRAS, we use exactly the same STFT setting, i.e., a 32 ms window size and an 8 ms hop size. This STFT setting is very common in STFT-domain separation algorithms. If the reviewer meant that different filter taps (in time) are used in UNSSOR and iRAS, Appendix G (see Fig. 4) is provided to address this concern. The idea is that, since UNSSOR performs filtering in the time-frequency domain and iRAS performs filtering in the time domain, we configure the filter lengths (in time) to be the same for comparison. If the reviewer meant that different STFT window and hop sizes are used for IVA and spatial clustering, we emphasize that it is very common for IVA and spatial clustering to use a longer window so that enough reverberation is covered in each frame. This way, their model assumptions can be better satisfied and better separation can be achieved. It is very common in IVA and spatial clustering to tune STFT window and hop sizes. We will emphasize this in the paper.
The referred paper [6] only considers non-reverberant cases, and therefore may not tell the full story. --- Rebuttal Comment 1.1: Title: Regarding Q2+Q3 Comment: Regarding Q2 - I think you should compare speech separation approaches rather than speech enhancement approaches. I would like to see the separation performance of classical speech separation models (e.g., Conv-TasNet or DPRNN) in the UNSSOR framework to assess the generality of UNSSOR better. Regarding Q3 - I look forward to your update on the results of MixIT's experiments to re-evaluate my recommended scores. --- Reply to Comment 1.1.1: Comment: > Regarding Q2 - I think you should compare speech separation approaches rather than speech enhancement approaches. I would like to see the separation performance of classical speech separation models (e.g., Conv-TasNet or DPRNN) in the UNSSOR framework to assess the generality of UNSSOR better. Thanks for the further comments. In the literature, TCN-DenseUNet has also been applied to speaker separation tasks and obtained reasonably strong performance. See, for example, [A1-A3] listed below. Following this suggestion, we have also experimented with UNSSOR using a six-channel DPRNN architecture. The DPRNN has a window size of $4$ ms and a hop size of $1$ ms. It has $6$ layers. The number of bases is $256$. The bottleneck dimension is $128$. The number of hidden units in each BLSTM in each direction is $128$. We apply ReLU as the encoder non-linearity and as the non-linearity for embedding masking. The chunk size is set to $64$ and the chunk overlap is $50$%. To leverage spatial information for model training, we follow the strategy proposed in [A4] listed below (see its Fig. 2 to get the idea), where spectral embeddings are learned together with spatial embeddings and DNN-estimated masks are used to mask the spectral embeddings. In the six-channel case, the spatial embedding dimension is set to $360$, following [A4]. Differently from Fig.
2 of [A4], we don't use microphone-pair-wise Conv1D layers to obtain spatial embeddings; we obtain them by using a Conv1D layer with $P$ ($=6$) input channels and $360$ output channels. We first use the DPRNN to obtain intermediate separation results in the time domain, and then apply STFT (with a window size of $32$ ms, a hop size of $8$ ms, and the square root of the Hann window) to obtain $\hat{Z}(c)$ for each speaker $c$. The network is trained using only the MC loss in Eq. (9) of the paper and without the ISMS loss in Eq. (10), while we observe only minor frequency permutation. All the other procedures are the same as in the TF-GridNet based UNSSOR system. The results on SMS-WSJ (obtained by using six-channel input and loss) are shown below.

| Row | System | $I$ | $J$ | Loss | SDR (val.) | SDR | SI-SDR | PESQ | eSTOI |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| 0a | Mixture | - | - | - | 0.1 | 0.1 | 0.0 | 1.87 | 0.603 |
| 2a | UNSSOR | 19 | 0 | $\mathcal{L}_{\text{MC}}$ | 8.7 | 8.7 | 7.8 | 2.64 | 0.719 |
| 2b | UNSSOR + Corr. based freq. align. | 19 | 0 | $\mathcal{L}_{\text{MC}}$ | 8.7 | 8.7 | 7.8 | 2.64 | 0.719 |
| 2c | UNSSOR + Oracle freq. align. | 19 | 0 | $\mathcal{L}_{\text{MC}}$ | 8.8 | 8.8 | 7.9 | 2.65 | 0.722 |
| 4a | PIT (supervised) | - | - | - | 12.3 | 11.7 | 11.3 | 3.00 | 0.820 |

We observe that UNSSOR also works, to some extent, with DPRNN, in addition to TCN-DenseUNet. Both architectures have lower modelling capability than TF-GridNet. We will add these results to the paper. [A1] H. Taherian, K. Tan, and D. Wang, “Multi-Channel Talker-Independent Speaker Separation Through Location-Based Training,” IEEE/ACM TASLP, vol. 30, pp. 2791–2800, 2022. [A2] Z.-Q. Wang, G. Wichern, and J. Le Roux, "Convolutive Prediction for Monaural Speech Dereverberation and Noisy-Reverberant Speaker Separation," IEEE/ACM TASLP, vol. 29, pp. 3476-3490, 2021. [A3] Y. Liu and D.
Wang, “Divide and Conquer: A Deep CASA Approach to Talker-Independent Monaural Speaker Separation,” IEEE/ACM TASLP, vol. 27, no. 12, pp. 2092–2102, 2019. [A4] Zhang et al., “On End-to-End Multi-Channel Time Domain Speech Separation in Reverberant Environments,” in ICASSP, 2020, pp. 6389–6393. > Regarding Q3 - I look forward to your update on the results of MixIT's experiments to re-evaluate my recommended scores. We have now obtained the results of MixIT. See our responses to all the reviewers.
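As an illustrative aside, the mixture-constraint idea discussed in this rebuttal (linearly filtered source estimates should add up to each microphone's observed mixture) can be sketched with a toy time-domain analogue. The actual UNSSOR loss estimates sub-band filters in the STFT domain via FCP, so the code below is a simplified, hypothetical stand-in rather than the paper's method; all names and the tap count are illustrative:

```python
import numpy as np

def _delay_matrix(x, taps):
    """Columns are x delayed by 0..taps-1 samples (a toy FIR convolution matrix)."""
    n = len(x)
    return np.stack(
        [np.concatenate([np.zeros(t), x[: n - t]]) for t in range(taps)], axis=1
    )

def mixture_constraint_loss(mixture, estimates, taps=4):
    """Fit one short FIR filter per estimated source by least squares, then
    return the mean squared residual between the filtered-and-summed estimates
    and the observed mixture (smaller = more mixture-consistent)."""
    A = np.concatenate([_delay_matrix(e, taps) for e in estimates], axis=1)
    h, *_ = np.linalg.lstsq(A, mixture, rcond=None)
    return float(np.mean((mixture - A @ h) ** 2))

# Toy check: a mixture built from two filtered sources is explained far
# better by the true sources than by unrelated signals.
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(2048), rng.standard_normal(2048)
mix = np.convolve(s1, [1.0, 0.5])[:2048] + np.convolve(s2, [0.8, -0.3])[:2048]
good = mixture_constraint_loss(mix, [s1, s2])
bad = mixture_constraint_loss(mix, [rng.standard_normal(2048), rng.standard_normal(2048)])
```

Because the filters are re-fit for each candidate set of estimates, the loss only "encourages" the filtered estimates to sum to the mixture, rather than hard-constraining the raw estimates as in Wisdom et al. [1].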
Summary: The most popular method for training neural networks for speech separation is by artificially mixing sources, since neural network based training requires supervision. However, this kind of supervised training creates a mismatch, since the mixtures seen at test time contain real overlapping speech and noise. This paper formalizes the problem of unsupervised speech separation, and posits that it becomes a well-posed problem for over-determined conditions (i.e., when the number of microphones exceeds the number of speakers). The authors propose a method (inspired by signal processing) that uses the images at each microphone as supervision for the neural network training, and a new loss function to address the frequency permutation problem that often happens in blind source separation (BSS). They also propose a method to use these models for monaural separation. Strengths: Overall, I believe this is an extremely strong paper with landmark results for unsupervised speech separation. As mentioned above, the majority of literature in the field of speech separation, since permutation-invariant training (PIT) was proposed, has only revolved around better neural network architectures. While the SI-SDR on the synthetic WSJ-Mix benchmark has consequently gone up, significantly less attention has been paid to real overlapping speech. The paradigm of continuous speech separation (CSS) tries to address some of the issues (such as sparse overlaps), but it is also trained with PIT on synthetic mixtures. The result is that there exist no public speech separation models that can perform well on multi-speaker data such as AMI, CHiME-6, AliMeeting, etc. I think UNSSOR is quite promising and should change the face of speech separation research. The paper is strong on many levels, such as the following. 1.
The authors correctly identify that the problem with training separation networks has to do with a lack of accurate supervision, and propose that this supervision may come from estimating each speaker’s reverberant image at every microphone. Under the assumption of a linear-filter constraint between a speaker’s images at the microphones, the resulting linear system has a unique solution, as described in Section 3. 2. The authors design loss functions which are inspired from signal processing formulation of the problem. This includes (i) mixture consistency loss for the filtered estimates, (ii) forward convolutive prediction (FCP) for relative RIR estimation, (iii) causal/non-causal stacking to handle time misalignment, and (iv) an intra-source magnitude scattering loss to address the frequency permutation problem. Overall, all these losses have clear motivations and design. 3. The recently proposed TF-GridNet encoder is used as the backbone of the model. This architecture has been shown to obtain state-of-the-art results on PIT-based separation (even surpassing time-domain models), and it is interesting to see that it can be trained well with the unsupervised technique. 4. The authors compare their method with extremely strong baselines, which makes their conclusions stronger. For example, they implemented a novel variant of the recently proposed “Reverberation as Supervision” (RAS) method, called iRAS, which can perform unsupervised separation. This modification could very well be a short paper in itself. 5. The results on SMS-WSJ show that UNSSOR obtains results approaching supervised separation models. For example, for the 6-channel setting, the SDR on the test set is 15.6 dB (compared to 19.4 dB for PIT model). As a comparison, the next unsupervised method is IVA, which obtains 10.6 dB. Moreover, on reducing the number of channels from 6 to 3, UNSSOR’s performance degrades only marginally (to 15.4 dB), whereas PIT degrades to 16.8 dB. 6. 
The authors even find a way to use models trained with multi-channel inputs for performing monaural separation. 7. The code and models for UNSSOR will be released. A side effect of all of the above is that the paper is quite dense and may not be accessible to readers who do not have a source separation background. However, it is a case study in how neural methods can be designed by incorporating knowledge from other domains (in this case, signal processing). Weaknesses: 1. The main point of concern about the paper is that results are shown just for the SMS-WSJ dataset. Such an evaluation defeats the original motivation for unsupervised training, since SMS-WSJ contains synthetically mixed sources. Clearly, the PIT models outperform UNSSOR on this benchmark, which is expected. The real test of UNSSOR would have been an evaluation on real mixtures, such as AMI, CHiME-6, or AliMeeting, where PIT-based models fail completely. Nevertheless, since the paper proposes a completely novel paradigm of speech separation, I am willing to forgo this point, but at the cost of docking a point from the ratings. 2. Can UNSSOR be trained with data containing different numbers of channels or different array geometries? It seems that the choice of the hyper-parameters $I$ and $J$ may be heavily dependent on the microphone configuration. 3. The authors relegate the “Limitations” section to the appendix. I think it should be put in the main paper, since an important part of presenting our research is talking about its limitations. This would also be useful for other researchers to get ideas for building upon UNSSOR for their own work. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. How robust is UNSSOR to changes in array configuration or the number of channels for the case of monaural separation? 2. In Section 5.2, can you briefly explain what these metrics are, for the unfamiliar reader?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: None, except those presented in Appendix H. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Overall, I believe this is an extremely strong paper with landmark results for unsupervised speech separation. As mentioned above, the majority of literature in the field of speech separation, since permutation-invariant training (PIT) was proposed, has only revolved around better neural network architectures. While the SI-SDR on the synthetic WSJ-Mix benchmark has consequently gone up, significantly less attention has been paid to real overlapping speech. The paradigm of continuous speech separation (CSS) tries to address some of the issues (such as sparse overlaps), but it is also trained with PIT on synthetic mixtures. The result is that there exists no public speech separation models that can perform well on multi-speaker data such as AMI, CHiME-6, AliMeeting, etc. I think UNSSOR is quite promising and should change the face of speech separation research. > The paper is strong on many levels, such as the following. > ... > A side effect of all of the above is that the paper is quite dense and may not be accessible to readers who do not have a source separation background. However, it is a case study in how neural methods can be designed by incorporating the knowledge from other domains (in this case, signal processing). We feel so excited after reading the comments! This is such a big encouragement for us to invest more on improving the proposed methods in follow-up studies. > The main point of concern about the paper is that results are shown just for the SMS-WSJ dataset. Such an evaluation defeats the original motivation for unsupervised training, since SMS-WSJ contains synthetically mixed sources. Clearly, the PIT models outperform UNSSOR on this benchmark, which is expected. The real test of UNSSOR would have been an evaluation on real mixtures, such as AMI, CHiME-6, or AliMeeting, where PIT-based models fail completely. 
Nevertheless, since the paper proposes a completely novel paradigm of speech separation, I am willing to forgo this point, but at the cost of docking a point from the ratings. Insightful point! We made an effort to evaluate on the AMI, CHiME-6, and AliMeeting datasets, which are real-recorded meeting-style data. However, to obtain strong performance, we need to deal with many other problems (such as sparse speaker overlap, varying and unknown numbers of speakers, etc.), which would make the paper much less focused. We hence leave the evaluation as future work, and this paper focuses on showing the potential of UNSSOR, which will be the core technique we will build upon in our future work. > Can UNSSOR be trained with data containing different numbers of channels or different array geometries? It seems that the choice of the hyper-parameters $I$ and $J$ may be heavily dependent on the microphone configuration. Good point! We think that UNSSOR can be trained with the mentioned data, and this could be a good future extension. We could use DNNs that can handle a variable number of input channels and different array geometries. The loss can also be computed on training examples that have different numbers of channels, by configuring the filter taps in a smart way to deal with different array geometries (i.e., to cover a wide range of microphone distances). > The authors relegate the “Limitations” section to the appendix. I think it should be put in the main paper, since an important part of presenting our research is talking about its limitations. This would also be useful for other researchers to get ideas for building upon UNSSOR for their own work. Will change. > How robust is UNSSOR to changes in array configuration or the number of channels for the case of monaural separation? At run time, the trained model performs monaural separation. We think that it would be invariant to changes in array configurations or the number of channels.
At training time, we expect that UNSSOR can robustly deal with training examples with various array configurations or number of channels. Notice that the DNN is trained to only exploit monaural spectro-temporal patterns for separation. As long as there is a sufficient number of microphone mixtures to help pinpoint the solutions to speaker images, we expect the DNN to be trained well, although the shape of the loss surface afforded by different array configurations or numbers of channels could influence the training. This investigation could be a follow-up paper. We expect that using more microphones for loss computation would lead to clearly better separation, but we don't observe this in our current evaluations (e.g., see Table 3 and 4). We will investigate this in a future study. > In Section 5.2, can you briefly explain what these metrics are, for the unfamiliar reader. They are popular metrics in speech separation. SI-SDR and SDR measure the quality of predictions at the sample level, and PESQ and eSTOI are objective metrics of speech quality and intelligibility respectively. We will add this description to the paper. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for clarifying some of my original points. I don't think this needs much more discussion, and I congratulate the authors on a remarkable paper!
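To make the metric description above concrete, SI-SDR can be computed in a few lines from its standard textbook definition; this is a generic illustration, not code from the paper:

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: project the estimate onto the reference
    (optimal rescaling), then compare target energy to residual energy."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(residual, residual))

# Scale invariance: rescaling the estimate leaves the score unchanged.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
noisy = ref + 0.1 * rng.standard_normal(16000)
```

Plain SDR is similar but additionally allows a short distortion filter on the reference, which is why SDR values in the tables are typically slightly higher than SI-SDR.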
Summary: This paper proposes an unsupervised method for training neural networks to separate speech from multi-microphone recordings. The idea is to separate a reference microphone into separate sources, then use each microphone as a mixture signal that must match filtered versions of the reference microphone's separated sources. The linear filters are computed in subbands, which results in a well-known frequency permutation problem, for which an additional "intra-source magnitude" loss term is proposed as a remedy. Though the method requires "overdetermined" mixtures (i.e. more mics than sources), the method can also be used for underdetermined cases, e.g. 2 speakers with 1 mic, where a single mic is used as input, and multiple mics are used in the loss function. For 3-mic and 6-mic mixtures, the proposed method is shown to achieve better performance in terms of SDR, SI-SDR, PESQ, and eSTOI versus a baseline (the RAS algorithm) that works on a similar principle on the SMS-WSJ dataset. The method does not outperform a supervised PIT method, which is expected. Strengths: S1) The method is intuitive and is explained well, and the paper provides a thorough description of the method and chosen hyperparameters. S2) The proposed method can perform unsupervised training on single mixtures, unlike MixIT which requires combining at least two mixtures into a training example. S3) The authors clearly describe the differences between the proposed UNSSOR method and the RAS method, particularly why UNSSOR can successfully train unsupervised, while RAS cannot. Weaknesses: W1) The dataset used, SMS-WSJ, consists entirely of synthetic mixtures that use synthetic simulated RIRs. Thus, there is some concern that the method may not generalize to real-world acoustic scenarios. Some evaluation on real data would further improve the paper.
Also, one of the main advantages of unsupervised algorithms is to be able to adapt separation models to unsupervised real-world data, so that models can work better on those data domains. Of course, evaluation on real domains cannot be evaluated by intrusive metrics, as used in this paper, and require subjective human listening tests. I am glad the authors are considering this as a future direction, and I encourage them towards that goal. W2) As acknowledged by the authors, the method assumes stationary sources, which is potentially quite limiting when considering real scenarios where speakers may be moving. Even if speakers are seated, head movement can still produce acoustic effects. There are likely interesting extensions of the method to handle nonstationary spatial scenarios, such as allowing for slowly-varying linear filters. W3) No audio demos are provided. Providing such demos would make it a lot easier to readers to evaluate the quality of the predictions and improve understanding of the method. W4) It would be interesting to see how a single-channel unsupervised method like MixIT compares to the multi-microphone method. Also, since this paper was submitted, a multichannel version of MixIT seems to have been proposed (https://arxiv.org/abs/2305.11151), which seems to operate differently (network with multichannel input and multichannel output, and MixIT applied directly to multichannel outputs using multi-channel mixtures-of-mixtures). It would be interesting to compare both single-channel and multi-channel MixIT to the proposed approach; these MixIT methods will likely be mismatched to test time since they train on mixtures-of-mixtures, but it would be interesting to compare. Minor comments and typos a) "boot-start" -> "bootstrap" ? b) "comptue" -> "compute" c) Would be good to mention what \mathcal{F} is in description of equation (4) d) "doing this would complicates" -> "doing this would complicate" e) "assumed time-variant." -> "assumed time-variance." 
f) Maybe bold the best numbers in each column for Tables 1 and 2? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1) How much tuning of the number of filter taps was done? This seems like it could have a big effect on the performance of the model. Also, does tuning the number of taps suggest some knowledge about the unsupervised data? I suppose a practitioner could use a blind T60 model to estimate the distribution of T60 across an unsupervised dataset, which could give some insight into optimal filter taps. Besides T60 (i.e. duration of expected RIRs to model), are there other issues that affect the optimal choice of filter taps? Q2) How certain is it that frequency permutation is the primary cause of the drop in performance going from rows 1a->1b and 2a->2b in Tables 1 and 2? The appendix provides an illustrative example, but none of the objective metrics are directly measuring frequency permutation. It seems that an objective metric could be formulated that directly measures the degree of frequency permutation, but perhaps such a metric is not necessary if spot checks of the predictions indicated that frequency permutation was present? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are discussed well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W1) The dataset used, SMS-WSJ, consists entirely of synthetic mixtures that use synthetic simulated RIRs. Thus, there is some concern that the method may not generalize to real-world acoustic scenarios. Some evaluation on real data would further improve the paper. Also, one of the main advantages of unsupervised algorithms is to be able to adapt separation models to unsupervised real-world data, so that models can work better on those data domains. Of course, evaluation on real domains cannot be evaluated by intrusive metrics, as used in this paper, and require subjective human listening tests. I am glad the authors are considering this as a future direction, and I encourage them towards that goal. Great comment! We are currently working towards that goal, and will share our findings. > W2) As acknowledged by the authors, the method assumes stationary sources, which is potentially quite limiting when considering real scenarios where speakers may be moving. Even if speakers are seated, head movement can still produce acoustic effects. There are likely interesting extensions of the method to handle nonstationary spatial scenarios, such as allowing for slowly-varying linear filters. Yes, we will investigate the moving-source case, which is a common problem also in spatial processing. > W3) No audio demos are provided. Providing such demos would make it a lot easier to readers to evaluate the quality of the predictions and improve understanding of the method. An audio demo is available at https://anonymauth.github.io/ We will add this link to the paper. > W4) It would be interesting to see how a single-channel unsupervised method like MixIT compares to the multi-microphone method. 
Also, since this paper was submitted, a multichannel version of MixIT seems to have been proposed (\url{https://arxiv.org/abs/2305.11151}), which seems to operate differently (network with multichannel input and multichannel output, and MixIT applied directly to multichannel outputs using multi-channel mixtures-of-mixtures). It would be interesting to compare both single-channel and multi-channel MixIT to the proposed approach; these MixIT methods will likely be mismatched at test time since they train on mixtures-of-mixtures, but the comparison would still be informative. See our response to all the reviewers. Thanks for pointing us to the paper on multi-channel MixIT. We plan to compare with multi-channel MixIT in a future study, as it appeared online only after the submission of this paper. > a) "boot-start" -> "bootstrap" ? > b) "comptue" -> "compute" > c) Would be good to mention what $\mathcal{F}$ is in description of equation (4) > d) "doing this would complicates" -> "doing this would complicate" > e) "assumed time-variant." -> "assumed time-variance." > f) Maybe bold the best numbers in each column for Tables 1 and 2? We will make all of these changes. > Q1) How much tuning of the number of filter taps was done? > This seems like it could have a big effect on the performance of the model. > Also, does tuning the number of taps suggest some knowledge about the unsupervised data? > I suppose a practitioner could use a blind T60 model to estimate the distribution of T60 across an unsupervised dataset, which could give some insight into optimal filter taps. > Besides T60 (i.e. duration of expected RIRs to model), are there other issues that affect the optimal choice of filter taps? Great question! Fig. 4 in the Appendix shows the range of filter taps we tuned over. The number of taps indeed has a big effect: in our experience, it cannot be set too long or too short. 
If it is set too long, the model would optimize the loss well but not separate speakers (i.e., overfit); and if it is set too short, the model would not fit the loss well (i.e., underfit). We see what you mean by using a blind T60 model to estimate the filter length for each training example, but the FCP filters in this study are the relative RIRs among speaker images at closely-placed microphones, rather than the RIRs between a sound source and its far-field images. We currently do not have a good way to determine the optimal choice of filter taps. > Q2) How certain is it that frequency permutation is the primary cause of the drop in performance going from rows 1a->1b and 2a->2b in Tables 1 and 2? > The appendix provides an illustrative example, but none of the objective metrics are directly measuring frequency permutation. > It seems that an objective metric could be formulated that directly measures the degree of frequency permutation, but perhaps such a metric is not necessary if spot checks of the predictions indicated that frequency permutation was present? We are confused by the first question: in Tables 1 and 2, neither going from 1a to 1b nor from 2a to 2b shows a performance drop. A direct metric of frequency permutation may not be necessary, as you suggested. Our spot checks indicate that frequency permutation is always present, and its presence also makes sense, as FCP is performed independently in each frequency. From the results in rows 1a and 1c of Tables 1 and 2, we can see that using oracle frequency permutation produces a large improvement. This largely indicates how severe the frequency permutation problem is.
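To make the reviewer's suggested metric concrete, a per-frequency permutation rate could be sketched as below. This is a hypothetical illustration, not part of the paper; it assumes two sources and magnitude spectrograms of shape `(2, F, T)`, and compares the best per-bin source assignment against the best utterance-level one.

```python
import numpy as np

def frequency_permutation_rate(est, ref):
    """Fraction of frequency bins whose best per-bin source assignment
    differs from the best utterance-level (global) assignment.
    est, ref: magnitude spectrograms of shape (2, F, T); with two
    sources, the only assignments are identity and swap."""
    err_id = ((est - ref) ** 2).sum(axis=(0, 2))        # per-bin error, identity
    err_sw = ((est - ref[::-1]) ** 2).sum(axis=(0, 2))  # per-bin error, swapped
    per_bin_swap = err_sw < err_id
    global_swap = err_sw.sum() < err_id.sum()
    return float(np.mean(per_bin_swap != global_swap))

# Toy check: swap the two sources in the top half of the bins.
rng = np.random.default_rng(0)
ref = rng.random((2, 8, 50))
est = ref.copy()
est[:, 4:] = ref[::-1, 4:]   # frequency-permute bins 4..7
print(frequency_permutation_rate(est, ref))  # -> 0.5
print(frequency_permutation_rate(ref, ref))  # -> 0.0
```

A spot check of separated outputs with such a metric would quantify the effect the authors describe, rather than relying on visual inspection alone.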
Summary: In this paper, the authors tackle speech source separation, in which multiple fixed speech sources $X(c)$ are recorded by an array of p microphones, resulting in p observable mixtures $Y_p$. The authors propose an STFT domain source separation algorithm called UNSSOR, leveraging a complex-valued deep neural network (TF-GridNet) to estimate the individual sources at a reference microphone (e.g., mic 1). Typically, the problem is solved in a supervised fashion using PIT (permutation invariant training), where the clean sources must be known, but the authors propose an unsupervised method, where only the mixtures $Y_p$ are required at training time. The key idea is to relax the under-determined problem by leveraging physical constraints, i.e. the sources combining at a microphone p are the same sources at the reference microphone convolved with a relative room impulse response (relative RIR). As such, the authors estimate virtual sources $Z(c)$ and relative RIRs $g_p(c)$ at each microphone for each source. The relative RIRs are computed in closed form using FCP (a least squares optimization problem), leveraging only the mixtures $Y_p$ and the estimates $Z(c)$. Multiplying $g_p(c)$ with $Z(c)$, the authors obtain the FCP-estimated source images, which are summed to obtain mixture estimates $\hat Y_{p}$ at each microphone, yielding a mixture consistency loss $L_{MC}$ for training the neural network. The authors notice that solutions obtained with the model trained on $L_{MC}$ alone exhibit the frequency permutation problem, where certain spectral bands are swapped between the source estimates. To address this problem, the authors regularize the variance of the magnitude of the FCP-estimated source images over frequency bands at training time, with a novel intra-source magnitude scattering loss $L_{ISMS}$. Combining $L_{MC}$ and $L_{ISMS}$ results in improved separation metrics. 
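The FCP least squares and mixture-constraint idea summarized above can be sketched at a single frequency bin as follows. This is our illustrative reconstruction, not the authors' code: names, shapes, the tap count, and the simplified magnitude-free loss are all assumptions.

```python
import numpy as np

def fcp_filter(z, y, taps=3):
    """Least-squares FCP at one frequency: find a causal filter g of
    length `taps` such that filtering z approximates y.
    z, y: complex STFT sequences of shape (T,) for one source and one mic.
    Returns (g, filtered estimate)."""
    T = len(z)
    A = np.zeros((T, taps), dtype=complex)   # delayed copies of z
    for k in range(taps):
        A[k:, k] = z[:T - k]
    g, *_ = np.linalg.lstsq(A, y, rcond=None)
    return g, A @ g

def mixture_constraint_loss(z_list, y, taps=3):
    """Filter each source estimate toward the mixture and penalize the
    residual of the summed, filtered estimates (a simplified variant of
    the paper's loss, which also uses real/imag/magnitude terms)."""
    y_hat = sum(fcp_filter(z, y, taps)[1] for z in z_list)
    return float(np.mean(np.abs(y - y_hat) ** 2))

# Toy check: two random "sources" convolved with short relative RIRs.
rng = np.random.default_rng(1)
z1 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
z2 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
y = (np.convolve(z1, [1.0, 0.5], 'full')[:64]
     + np.convolve(z2, [0.8, -0.3], 'full')[:64])
print(mixture_constraint_loss([z1, z2], y))  # small relative to np.mean(np.abs(y)**2)
```

Note that the filters are obtained in closed form from `z` and `y` alone, so the loss indeed depends only on the observed mixtures and the estimated virtual sources, as the summary emphasizes.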
Finally, the authors try the algorithm in the monaural unsupervised source separation setting, where they provide only one mixture at training time, but optimize with multiple microphones in the loss. Strengths: - The idea of reducing the number of equations using physical constraints is natural, leading to a well-structured unsupervised multi-mixture speech source separation algorithm. Computing the RIRs with least squares is also very useful, given that in such a way the mixture consistency loss depends only on the observed mixtures and the estimated virtual sources. The authors present this idea in a very clear and formal manner, with relevant links to the existing literature. I believe that this approach is much better founded theoretically than the MixIT approach, leading to a better methodological line of research on unsupervised neural source separation. - The intra-source magnitude scattering regularizer ($L_{ISMS}$) seems very interesting, because such a loss can be incorporated not only in the presented multi-microphone speech scenario but in every source separation task working on STFT spectrograms (e.g., music, universal). Improving the results by more than 4 dB gives strong empirical support to the effectiveness of such a technique. - The method achieves good empirical results with respect to a plethora of metrics (SDR, SI-SDR, PESQ, eSTOI) on SMS-WSJ compared to other unsupervised multi-mic algorithms (spatial clustering, IVA, RAS) and fares well with respect to a supervised PIT baseline (using the same neural architecture). Weaknesses: - UNSSOR, while improving the results in the monaural speech separation setting with respect to iRAS (it improves by 1.4 dB on the SMS-WSJ test set with both the 3- and 6-channel losses), still faces a large gap with respect to the supervised monaural PIT baseline (~ 4 dB). This is expected, since not relying on supervision can negatively impact the separation results. 
The authors should have included at least a comparison with some other popular unsupervised source separation baseline such as MixIT, in order to prove the superiority of UNSSOR in the unsupervised source separation arena. - As with other unsupervised neural source separators such as MixIT, it is not really clear why one should go unsupervised if the supervised metrics are already better on the same dataset. These types of studies (including MixIT or its regularized versions such as https://arxiv.org/abs/2106.00847) should experimentally showcase that performing unsupervised source separation can benefit over supervised source separation as the dataset size increases. For example in https://arxiv.org/abs/2106.00847, the authors cannot beat a supervised baseline trained on FUSS, using an order of magnitude more data (AudioSet or YFCC100m) with an unsupervised method (regularized MixIT). Providing a scaling law should be of utmost importance in papers such as the presented one and in future papers; otherwise we will continue inventing unsupervised methods that will never surpass supervised or weakly supervised methods (using labels, learning Bayesian priors). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I will add suggestions for the improvement of the paper and ask relevant questions. - Line 65: The authors should use a term like `metric` instead of `measure`, given the precise meaning of such a term in measure theory. I know they can be synonyms in an applied field such as audio processing, but it is better to be more precise. - Line 120: `Hermitian transpose` instead of `Hermittan` - Line 146: the term mixture consistency is defined here: https://arxiv.org/pdf/1811.08521.pdf; I think it is ok to use the same term, but it could create a little bit of confusion. - Line 161: Can the authors explain briefly in the text, for better understanding, the rationale of matching the FCP-estimated images with the mixture in Eq. (6)? 
- Line 167: Regarding the $\xi$ hyperparameter, I do not understand if it multiplies only the max or the whole expression - Line 179: I found the causal analysis well developed, but I still do not fully understand why, if a source sound $c$ reaches the reference microphone earlier, it should be processed with future values (non-causal filtering). - Line 193: Given that $Z(c)$ is learned, why is it interpreted as a virtual microphone estimate and not the real dry signal at speaker $c$? - Line 232: In the monaural speech source separation setting, I do not understand what you match on the different microphones in the mixture consistency loss when using only one input $Y_{p=1}$? Do you match the only available $Y_{p=1}$ at all "virtual" microphones? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors address the limitations of their work in Appendix H, namely that they assume the number of sources is known, and that the sources are directional point sources at fixed positions. I do not believe these limitations are particularly problematic, as the main limitation is the unavailability of the data required for scaling up the algorithm with respect to the supervised baseline (if it scales). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The authors should have included at least a comparison with some other popular unsupervised source separation baseline such as MixIT See our response to all the reviewers. > As with other unsupervised neural source separators such as MixIT, it is not really clear why one should go unsupervised if the supervised metrics are already better on the same dataset. These types of studies (including MixIT or regularized MixIT https://arxiv.org/abs/2106.00847 [Wisdom'21]) should experimentally show that performing unsupervised separation can benefit over supervised separation as the dataset size increases. For example in [Wisdom'21], the authors cannot beat a supervised baseline trained on FUSS, using an order of magnitude more data with an unsupervised method. > Providing a scaling law should be of utmost importance in papers such as the presented one and in future papers; otherwise we will continue inventing unsupervised methods that will never surpass supervised or weakly supervised methods. Insightful comment! We will pay more attention to scaling laws in follow-up studies. In speech separation, a major motivation for unsupervised separation is that the models can be trained directly on real mixtures (where the clean speech signals are not available) and hence could generalize better on real data than supervised models, which need to be trained on simulated data (often mismatched with real-recorded test data). As is also pointed out by reviewer ``XWBZ'', supervised models such as PIT have had limited success so far on real-recorded multi-speaker datasets such as AMI, CHiME-6 and AliMeeting, even if a lot of data can be simulated to train PIT. A possible initial step towards solving this problem could be using an algorithm like UNSSOR, which can be trained directly on real mixtures, avoiding unrealistic synthetic mixtures for training. 
On the other hand, unsupervised separation and supervised separation may not be mutually exclusive, and they could be combined. We could, for example, adapt a supervised model to new domains via unsupervised mechanisms such as UNSSOR, or fine-tune unsupervised models such as UNSSOR via supervised mechanisms like PIT in a target domain where some high-quality labelled data is available. > Line 65: Use metric instead of measure > Line 120: Hermitian transpose instead of Hermittan Will change. > Line 146: the term mixture consistency is defined in [Wisdom 2019]; I think it is ok to use the same term but it could create confusion. Will rename the loss to "mixture-constraint" loss to avoid any confusion. > Line 161: Can the authors explain briefly the rationale of matching the FCP-estimated images with the mixture in Eq. (6)? Will explain. If $Y_p$ only contains $X_p(c)$, (6) can estimate the relative RIR relating $\hat{Z}(c)$ to $X_p(c)$. If $Y_p$ contains other sources besides $X_p(c)$, (6) can still estimate the relative RIR, following the derivations in Appendix C. > Line 167: Regarding the $\xi$, I don't understand if it multiplies only the max or the whole expression It multiplies only the max expression. We will make the equation clearer. > Line 179: I found the causal analysis well developed, but I still do not fully understand why, if a source $c$ reaches the reference mic earlier, it should be processed with future values (non-causal filtering). We give an example below. In Fig. 1(a) (see attached .pdf file), suppose that the blue signal is the DNN estimate for speaker $c$, and the orange signal is speaker $c$'s image at another microphone, which is a delayed version (i.e., reaching the microphone later). To filter the blue signal to approximate the orange signal, we only need a causal filter. 
Conversely, suppose that the orange signal is the DNN estimate for speaker $c$, and the blue signal is speaker $c$'s image at another microphone, which is an advanced version (i.e., reaching the microphone earlier). To filter the orange signal to approximate the blue signal, we need a non-causal filter. We will add this example to the Appendix of the paper. > Line 193: Given that $Z(c)$ is learned, why is it interpreted as a virtual microphone estimate and not the real dry signal at speaker $c$? In Eq. (9), $\hat{Z}(c)$ is constrained such that it can be filtered by a causal filter $\hat{\mathbf{g}}_p(c)$ to approximate $X_p(c)$, and it is not explicitly constrained to be a dry signal. Since there could be an infinite number of $\hat{Z}(c)$ and $\hat{\mathbf{g}}_p(c)$ whose convolutions would closely approximate $X_p(c)$, $\hat{Z}(c)$ is likely not the dry source signal. See Fig. 1(b) (see attached .pdf file) for an example, where each virtual microphone captures the direct-path signal of a target speaker earlier than any other microphone, so that we can use causal FCP filters. > Line 232: In the monaural speech source separation setting, I don't understand what you match on the different microphones in the mixture consistency loss, when using only one input $Y_{p=1}$? Do you match the only available $Y_{p=1}$ at all "virtual" microphones? We now think that, in the monaural case, $\hat{Z}(c)$ would all be aligned to the speakers' images at the reference microphone $1$, since the DNN only has monaural input, and in this case the DNN is not likely to align its outputs to a virtual microphone different from microphone $1$. To aid understanding, we give an example in Fig. 1(c) (see attached .pdf file), where the reference microphone captures speaker $2$'s direct-path signal later than all the other microphones. 
In this case, we need to use non-causal FCP filters when filtering $\hat{Z}(c)$ (which is estimated based on the monaural signal at the reference microphone) to approximate speaker $2$'s images captured at the other microphones. > ... the main limitation is the unavailability of data required for scaling up the algorithm with respect to the supervised baseline (if it scales). See our responses to your earlier comment on this.
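The delay argument in this rebuttal can be checked numerically: fitting a filter from an early signal to a delayed copy needs only causal taps, while fitting the reverse direction needs future (non-causal) taps. The sketch below is our own illustration with a made-up white-noise signal and a pure 3-sample delay, not the paper's FCP setup.

```python
import numpy as np

def fit_fir(x, y, past_taps, future_taps):
    """Least-squares FIR fit y[t] ~ sum_k h_k x[t-k], with lags
    k in [-future_taps, past_taps); future_taps > 0 allows a
    non-causal filter. Returns the residual mean squared error."""
    T = len(x)
    cols = []
    for k in range(-future_taps, past_taps):   # lag k uses x[t-k]
        col = np.zeros(T)
        if k >= 0:
            col[k:] = x[:T - k]
        else:
            col[:k] = x[-k:]
        cols.append(col)
    A = np.stack(cols, axis=1)
    h, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ h) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
late = np.roll(x, 3)
late[:3] = 0                                   # x delayed by 3 samples

# Early signal -> delayed copy: causal taps suffice.
print(fit_fir(x, late, past_taps=8, future_taps=0))   # ~0 (perfect fit)
# Delayed copy -> early signal: causal taps fail...
print(fit_fir(late, x, past_taps=8, future_taps=0))   # large (~signal variance)
# ...but a few future taps fix it.
print(fit_fir(late, x, past_taps=8, future_taps=4))   # small (edge effects only)
```

This mirrors the Fig. 1(a) discussion: whichever signal arrives later can be matched causally from the earlier one, and only the opposite direction demands non-causal filtering.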
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable feedback towards improving this manuscript. Here we collectively address some common concerns raised by the reviewers: $\textbf{1. Reasons for not comparing with MixIT in the first submission}$ We carefully considered using MixIT as a baseline, since MixIT also deals with unsupervised separation, but we believe that MixIT may not be a good baseline for UNSSOR, for the following reasons: $\textbf{1.1.}$ MixIT needs to be trained on synthetic mixtures of mixtures (MoM), while UNSSOR is designed to be trained directly on existing mixtures. The two models would be trained on different training examples, and our concern is that this would make the comparison difficult. We hence consider methods that can be trained (or performed) directly on existing mixtures (such as IVA, spatial clustering and iRAS) as baselines. $\textbf{1.2.}$ One could argue that we could, for example, mix the existing 2-speaker mixtures in SMS-WSJ to train MixIT, and compare it with UNSSOR trained directly on the existing 2-speaker mixtures in SMS-WSJ. However, this would require the mixtures used for creating each MoM to be recorded in the same room, by the same array, and at the same location in the room, each of which would impose restrictions on the datasets that can be used for training MixIT models, while UNSSOR does not have such restrictions. $\textbf{1.3.}$ We could simulate a particular scenario where, in each simulated room, we generate 4 speaker sources so that we can have several 2-speaker mixtures to create MoM for training MixIT models (for 4-speaker separation), and then compare the performance with that of UNSSOR trained on the same 2-speaker mixtures for 2-speaker separation. However, this simulated scenario would be highly favorable to MixIT. Note that the eventual MixIT system should synthesize MoM based on real-recorded mixtures. 
Many times, the procedure for synthesizing MoM is very tricky, since real-recorded mixtures are usually not recorded in the same room, using the same array, and at the same location in the room. $\textbf{1.4.}$ A possibly good way to compare UNSSOR and MixIT is by using real-recorded datasets such as AMI, AliMeeting and CHiME-{5,6,7}, recorded in meeting scenarios where concurrent speech naturally happens. However, for both algorithms, to achieve good performance we need to solve many other problems (such as sparse speaker overlap, a varying number of speakers, an unknown number of speakers, etc.), and there do not exist mature solutions yet; in addition, including solutions to many other problems in this paper would make the paper much less focused. We hence lean towards leaving this investigation to a future study, and focus on showing the potential of UNSSOR, which avoids using synthetic MoM. We will add these discussions and considerations to the paper. $\textbf{2. Performance comparison with MixIT}$ Following the reviewers' suggestions, in this rebuttal we have been trying to provide a performance comparison with MixIT. However, due to time limits, the training cannot finish before the rebuttal deadline, and we will update the results during the discussion phase (by the end of Aug. 13 ET). What we have been doing is to create the particular scenario (**ideal for MixIT**) described in paragraph $\textbf{1.3}$ above, where, for each existing SMS-WSJ mixture ($y = s_1 + s_2 + n$, where $n$ denotes noise), we randomly add two extra speakers in the same simulated room and use the same array placed at the same location, so that we can have two 2-speaker mixtures (i.e., mixture 1: $s_1 + s_2 + n/2$ and mixture 2: $s_3 + s_4 + n/2$) to create MoM for training MixIT for 4-speaker separation. 
The DNN architecture and training configurations for MixIT are the same as those used for UNSSOR and PIT, the loss function is defined similarly on the real, imaginary and magnitude components of the reconstructed mixtures, and the DNN can take single- or multi-channel input. At run time, the trained MixIT model is used to separate the existing two-speaker mixtures in SMS-WSJ into four outputs, and we select the two outputs with the highest energy for evaluation. This way, the numbers can be directly compared with the existing ones obtained by UNSSOR. We will update the results during the discussion phase (by the end of Aug. 13 ET). We will also add the results to the paper, if the paper is accepted. Pdf: /pdf/8086877f86604e025d25c988b1e96f998b9b5f29.pdf
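The MoM construction and assignment search described above can be sketched as follows. This is a simplified illustration with a plain MSE loss and made-up signals; the rebuttal's actual pipeline uses a DNN and a real/imaginary/magnitude loss.

```python
import numpy as np
from itertools import product

def make_mom(s1, s2, s3, s4, n):
    """Build a MixIT training example from four same-room speaker images
    and noise n, mirroring the split above (half the noise per mixture)."""
    mix1 = s1 + s2 + n / 2
    mix2 = s3 + s4 + n / 2
    return mix1 + mix2, mix1, mix2   # network input, two reference mixtures

def mixit_loss(estimates, mix1, mix2):
    """MixIT objective: search over binary assignments of the outputs to
    the two reference mixtures and keep the best reconstruction error."""
    best = np.inf
    for mask in product([0, 1], repeat=len(estimates)):
        g1 = sum(e for e, m in zip(estimates, mask) if m == 0)
        g2 = sum(e for e, m in zip(estimates, mask) if m == 1)
        best = min(best, float(np.mean((g1 - mix1) ** 2 + (g2 - mix2) ** 2)))
    return best

# Oracle check (noise-free): perfectly separated outputs give zero loss,
# regardless of the order in which the four outputs appear.
rng = np.random.default_rng(0)
s1, s2, s3, s4 = rng.standard_normal((4, 100))
mom, mix1, mix2 = make_mom(s1, s2, s3, s4, np.zeros(100))
print(mixit_loss([s3, s1, s4, s2], mix1, mix2))  # -> 0.0
```

The sketch also makes the rebuttal's point about restrictions visible: `make_mom` only makes physical sense when all four source images come from the same room, array, and location.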
Summary: The focus of this paper is unsupervised speech separation by exploiting training mixtures with more microphones than speakers (over-determined). The proposed neural separator uses the input mixture to constrain the estimated images of each speaker. It is proposed to also train the system for under-determined scenarios, e.g., for single-channel speech separation, by separating a single-channel signal while using a multichannel loss function. Strengths: The idea of using multichannel mixtures for unsupervised training is novel and reasonably effective compared to the supervised and unsupervised baselines. The experimental evaluation is sufficient, demonstrates the method's effectiveness, and includes sufficient data to support some of the design choices. The paper is well written. Weaknesses: It would be useful for readers less familiar with speech processing to include references to a few papers very relevant to relative IR estimation, mixture consistency and linear prediction. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Only measurement/modeling noise is included in the model, and background noise or discrete noise sources would complicate things, e.g., since the same RTF could not be applied to the noise component. Is there any initial data on robustness in presence of noise? ll. 119 It is claimed that the relative room impulse response in (2) is typically short. However, it should be noted that this is not necessarily true, even for the microphone array and reverberation times used in the experiments (20 cm diameter, up to 0.5 s RT60). A useful datapoint is the paper by [Talmon, 2009], Fig. 7 in particular. I think it would be helpful for many readers at NeurIPS to mention this (or a similar paper) and point out that this assumption does not always hold, and has been investigated in the literature. 
[Talmon, 2009] Talmon, Cohen, Gannot, Relative Transfer Function Identification Using Convolutive Transfer Function Approximation, IEEE Tr. ASLP, 2009. ll. 128 - Typo in “comptue” Section 4.2 Mixture consistency has already been proposed in [Wisdom, 2019]. This paper should be clearly mentioned. Also, consider renaming to multichannel mixture consistency, since you’re using P channels and RTFs. [Wisdom, 2019] Wisdom et al., Differentiable Consistency Constraints for Improved Deep Speech Enhancement, 2019. Section 4.3 The formulation in (6) is basically multichannel linear prediction using estimated sources Z instead of the original mixtures Y. Relevant works solving a problem analogous to (6) should be at least briefly mentioned, such as earlier work in [Yoshioka, 2010] and many later works. [Yoshioka, 2010] Yoshioka et al., Blind Separation and Dereverberation of Speech Mixtures by Joint Optimization, IEEE Tr. ASLP 2011. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It would be useful for the readers less familiar with speech processing to include references to a few papers very relevant to relative IR estimation, mixture consistency and linear prediction. Will include, especially [Talmon, 2009]. > Only measurement/modeling noise is included in the model, and background noise or discrete noise sources would complicate things, e.g., since the same RTF could not be applied to the noise component. Is there any initial data on robustness in presence of noise? Yes, we need to consider background noises. We do not yet have comprehensive results on this, and plan to address it in a follow-up study. We could use a number of garbage sources to absorb directional noise sources. That is, we can train UNSSOR to separate the mixture into $C+N$ sources (where $C$ is the hypothesized number of speakers and $N$ the hypothesized number of directional noise sources) so that we can have a separate RTF for each source. We envision that our method could be effective at separating a large number of directional sources (including both speech and noise sources) if there is a sufficient number of microphones to afford over-determined conditions. In this case, we can assume $\varepsilon$ to be measurement/modelling noise, as the speech and noise signals are all modelled by the $N+C$ sources. > ll. 119 > It is claimed that the relative room impulse response in (2) is typically short. However, it should be noted that this is not necessarily true, even for the microphone array and reverberation times used in the experiments (20 cm diameter, up to 0.5 s RT60). A useful datapoint is the paper by [Talmon, 2009], Fig. 7 in particular. I think it would be helpful for many readers at NeurIPS to mention this (or a similar paper) and point out that this assumption does not always hold, and has been investigated in the literature. 
> [Talmon, 2009] Talmon et al., Relative Transfer Function Identification Using Convolutive Transfer Function Approximation, IEEE TASLP, 2009. Thanks for pointing this out. We realize that our sentence "Note that $\mathbf{g}_p(c,f)$ is very short (i.e., $E$ is small)" may be misleading. Just to clarify: we intended to say that $E$ is small. In our paper, given that the STFT hop size is $8$ ms, $E=I+1+J$ equals $20$ in Tables 1-2, and equals $21$ in Tables 3-4. We do not mean that the RIR in the time domain is very short. Thanks for sharing the paper by Talmon et al. Based on our understanding of the paper, its Fig. 7(a) suggests that, in low-noise cases, when the microphone distance is larger, CTF, which can use multiple taps, can better estimate the RTF than MTF, which is restricted to only one tap. In [Talmon, 2009], the number of filter taps of CTF is set to $1/8$ of the T60. So given a T60 of $0.5$ s, the number of filter taps is roughly $0.5/8/0.016 \approx 3.9$, where $0.016$ is the STFT hop size in seconds. In other words, this setup echoes our claim that $E$ is small. We will improve the sentence and cite the referred paper. > ll. 128 Typo in “comptue” Will correct! > Section 4.2 Mixture consistency has already been proposed in [Wisdom, 2019]. This paper should be clearly mentioned. Also, consider renaming to multichannel mixture consistency, since you’re using P channels and RTFs. > [Wisdom, 2019] Wisdom et al., Differentiable Consistency Constraints for Improved Deep Speech Enhancement, 2019. We now realize that it is a bad idea to use the same name, and we will change the name of our loss to "mixture-constraint" loss to differentiate it from the "mixture consistency" term proposed in [Wisdom, 2019]. We will highlight their differences in the paper. As you mentioned, our loss differs in the number of channels used and, in addition, we filter the DNN estimates before loss computation. 
Another major difference is that [Wisdom, 2019] constrains the estimated sources to strictly add up to the mixture (see their Eq. (7) and (9)), while our method only "encourages" the filtered source estimates to add up to the mixture. > Section 4.3 The formulation in (6) is basically multichannel linear prediction using estimated sources Z instead of the original mixtures Y. Relevant works solving a problem analogous to (6) should be at least briefly mentioned, such as earlier work in [Yoshioka, 2010] and many later works. > [Yoshioka, 2010] Yoshioka et al., Blind Separation and Dereverb. of Speech Mixtures by Joint Optimization, TASLP 2011. Will mention! Although (6) appears similar to conventional multichannel linear prediction (MCLP), we would like to emphasize that it has a very different physical meaning. We consider that (6) does "forward filtering", where source estimates are filtered to approximate mixtures, while MCLP does "inverse filtering", where mixtures are filtered to approximate sources. This modification leads to non-trivial changes to the physical meaning of the computed filters (see also discussions in Section V.C of [A1] listed below). [A1] Z.-Q. Wang et al., Convolutive Prediction for Monaural Speech Dereverb. and Noisy-Reverb. Speaker Separation, in TASLP, 2021. > The main limitation is that the model does not take into account background or discrete noise, and is only demonstrated to work in the presence of uncorrelated (measurement) noise. This severely limits applicability to using real-world recordings to train a separation model. It would be very useful to include results on robustness in the presence of higher levels of noise. However, this is not essential and it may be out of scope of this paper. See our earlier response to this point. We also think this may be out of scope of this paper. > Another limitation for real-world use is the assumption of a static scenario (fixed relative IRs), which would not hold in real recordings.
This is indeed a tricky issue, and a common problem that also exists in many other algorithms. We could model time-varying filters in some way, and we will investigate this in future work.
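To make the forward-filtering direction discussed in this rebuttal concrete, here is a toy, single-channel, real-valued least-squares sketch (the actual method operates per frequency bin on complex STFT coefficients; all variable names are ours and purely illustrative):

```python
import numpy as np

# Toy "forward filtering": fit a short FIR filter g so that the filtered
# source-estimate sequence z approximates the observed mixture y, i.e.
# solve  min_g  sum_t | y[t] - sum_e g[e] * z[t - e] |^2  in closed form.
# This mirrors the direction of (6): source estimates are filtered to
# approximate the mixture (the opposite of MCLP-style inverse filtering).
rng = np.random.default_rng(0)
T, E = 400, 20                                    # frames, filter taps (E small)
g_true = rng.standard_normal(E) * np.exp(-0.3 * np.arange(E))  # decaying filter
z = rng.standard_normal(T + E)                    # source estimate (one freq.)
# design matrix of lagged copies of z: Z[t, e] = z[t + E - 1 - e]
Z = np.stack([z[E - 1 - e : E - 1 - e + T] for e in range(E)], axis=1)
y = Z @ g_true                                    # mixture = filtered source
g_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)     # closed-form filter estimate
```

With noiseless data the least-squares solve recovers the filter exactly; the point is only the direction of the fit: the source estimate is filtered to match the mixture, not the other way around.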
The medial axis of closed bounded sets is Lipschitz stable with respect to the Hausdorff distance under ambient diffeomorphisms
Reject
Summary: This paper is an extension of a result from Chazal and Soufflet, which states that the medial axis of a set is Lipschitz stable with respect to the Hausdorff distance under ambient deformations. The authors extend the result from C2 sets and C2 deformations to arbitrary closed sets and C1,1 diffeomorphisms which preserve a bounding sphere of the set. Strengths: Medial axes play a central role in vision and 3D geometry, and investigating them may lead to novel approaches and algorithms. Weaknesses: Unfortunately, I do not believe it is within my capacity to evaluate the full correctness of the theorems presented in this paper, and as such I feel uncomfortable recommending its acceptance. I am a first-time reviewer and hence will reassess my thoughts given other reviewers' input, but in a sense, placing the proofs in the supplementary is a somewhat odd choice to me, as the paper is a completely theoretical paper with its crux being the proofs themselves. Additionally, I find the theorem somewhat esoteric, being both a slight extension of Chazal and Soufflet and requiring that the ambient deformation preserve a bounding sphere to yield only a Lipschitz bound. Given the above, I find this paper better suited to a computational geometry/mathematical journal than NeurIPS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you describe some applied experiment (in, e.g., learning, vision, geometry) you would perform to show the argued practical impact of this result? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: We prove that certain algorithms involving the medial axis are correct under reasonable assumptions. The result is relevant for any algorithm whose input consists of images acquired using imperfect lenses. Our result allows one to quantify the impact of the imperfection. In practice, many machine learning algorithms (in particular for shape recognition) based on the medial axis are already used [10, 18, 25, 33, 41] (as cited in the introduction), for example for the study of root systems of plants. In this context our quantified result can be used to improve the accuracy of such algorithms. --- Rebuttal Comment 1.1: Comment: Can the reviewer please take a look at [this comment](https://openreview.net/forum?id=T47mUw8pW4&noteId=bBym4gFu57) to see if your question on application is better addressed?
Summary: The medial axis of a closed set $\mathcal{S} \subset \mathbb{R}^d$ is defined to be the set of points in $\mathbb{R}^d$ which do not have a unique closest point on $\mathcal{S}.$ The authors develop a notion of stability for such sets with respect to ambient diffeomorphisms of $\mathbb{R}^d.$ The main result proves stability with respect to $C^{1,1}$ diffeomorphisms under additional assumptions about the set $\mathcal{S}$ (for instance, that $\mathcal{S}$ is bounded; see Assumption 3.8 for a full list). This result is considered a generalization of an earlier result, which makes stronger smoothness assumptions (namely $C^2$) on both the set $\mathcal{S}$ and the class of ambient diffeomorphisms. The authors argue in the early parts of the paper that this extra generality is needed for (unspecified) applications in astrophysics. Strengths: The expository style in the early parts of the paper is inviting, where it does a good job of illustrating some basic notions, including familiar ones such as the medial axis and less-familiar ones like the generalized tangent space. Weaknesses: There are a few different criticisms one can make of this paper: 1. The topic is niche for a NeurIPS audience. 2. The main result is technical and difficult for non-experts to verify. 3. The main result, as described in the introduction, is a marginal improvement over the current state of the art in reference [13], in the sense that one gains only less restrictive assumptions about the regularity of the functions and shape that appear. 4. The connection to applications is tenuous at best, and no experiments are provided. With regards to 1 and 2 above, let me draw attention to the statement of the main result in Theorem 3.9 and the preceding assumptions it requires, which take nearly a page to write down even with many prerequisite definitions that appear before it.
One would at least hope based on the promises of the introduction that a simple definition of "stability" would be available for use in the statement of the theorem. With regards to 3 and 4, there is very little given to convince the reader that the $C^2$ results are insufficient for applications. I would not necessarily suggest that a paper with one or more of these deficiencies be excluded from NeurIPS. However, given that all four issues are present, it seems better to focus on resolving some of them, or to send the paper to another venue (e.g., a pure mathematics journal) where some of these criteria are judged to be less important. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be very helpful for readers to point out where to find your main result early on in the introduction. That would be Theorem 3.9, right? line 112: I don't understand why the assumption of $\mathcal{S}$ and its medial axis being bounded is not a further restriction needed to state your result. There is nothing in Remark 2.1 that addresses the case of an unbounded set $\mathcal{S},$ nor anywhere else in the paper. Moreover, these assumptions do appear in the statement before Theorem 3.9. Thus, it seems incorrect for you to state your theorem in the introduction simply for "closed sets" without further qualification. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The main result may differ depending on the audience. For guarantees on a specific algorithm Theorem 3.9 is indeed the most relevant. However, for a general statement about stability and computability of the medial axis, the formulation as given in Theorem 4.1 is more useful. Q2: See the answer to question 1 of the reviewer zm1a: "The assumption is technical in one sense, but necessary in another. Let us explain this: If we consider just two points in the plane, let’s say $(0,0)$ and $(0,1)$ then the medial axis is a horizontal line. If you perturb $(0,1)$ into $(\sin \theta, \cos \theta)$ the medial axis will have an angle of $\theta$ with the horizontal line (see also Figure 2 in the paper). The Hausdorff distance between two non-parallel lines is infinite, so it is impossible to give a bound on the distance between the medial axes without localizing in one way or another. However, if we restrict ourselves to a ball around the origin of size $r/2$ then the Hausdorff distance between the two restricted medial axes is $\mathcal{O} (r \cdot \theta)$. This shows that some bounding is necessary to obtain quantitative results. The assumptions on the ambient diffeomorphism (namely that it keeps the bounding sphere invariant) could be replaced by other assumptions that guarantee localization (as we tried to explain in Remark 2.1): For a given point $x$ in $\mathbb{R}^d$ we only have to consider those points of the set $\mathcal{S}$ that are relatively close (a distance at most $r/2$) to $x$ (if they exist). In other words, the medial axis of $\mathcal{S}$ will not be influenced by points that are far away: the bounding sphere of radius $r$ can be ignored (technically one can interpolate between the given diffeomorphism and a diffeomorphism that is the identity beyond bounding sphere). So if the set $\mathcal{S}$ is sufficiently dense (i.e. 
for every $x$ there are points in $\mathcal{S}$ that are not further than $r/2$ away from $x$) or locally (for points not so far from the set $\mathcal{S}$) all the stability results go through. In particular, if we consider our first example and we look at a neighbourhood of size $r/2$ of the origin then the stability bounds on the medial axis will hold in this neighbourhood. We intend to extend our explanation near Remark 2.1 in the final version, because we agree that we were too terse." --- Rebuttal Comment 1.1: Comment: I am happy with the authors' responses. I will need to read the paper more and monitor discussions before making a final decision with regards to a rating.
Summary: The authors prove that the medial axis of a closed set is Hausdorff stable without any further assumption on it. In this proof, the authors achieve stability without pruning the medial axis, which is a significant advantage. Meanwhile, the results hold for sets in arbitrary dimensions. Strengths: In terms of originality, this work holds for sets in arbitrary dimensions and removes the limitation of the manifold assumption in the proof, and it does not need to prune the medial axis, which is a significant advantage. The quality and clarity are good; it is easy to understand the motivation, outline and contribution. The proof in this paper implies that the medial axis of an imprecise shape is stable. The medial axis plays an important role in the fields of computational geometry, computer vision and graphics. Weaknesses: The result of this work shows the numerical stability of the medial axis, but there is little analysis of the impact of noise size and quantity in real-world data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In real-world data, the noise quantity and size may be different; does the result in this work mean that the stability of the medial axis of different noisy data is always guaranteed? 2. What is the meaning of `considered set` in lines 37 and 112? It seems that the word `considered` is not necessary there. 3. There are some standard examples of the instability of the medial axis mentioned in line 38. Could you give more explanation of why these kinds of instability exist? Is the instability essential or numerical? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In lines 36 to 39, the authors state the limitations of the work, but this could be clearer with more explanation. I also asked questions about it in the Questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: If the noise is due to some smooth deformation by e.g. a non-perfect lens, then the answer is yes. However, if you sample from a smooth object, it may be better to prune your axis. Stability results in the latter setting can be found in [29]. Q2: We agree with the reviewer. Perhaps it would have been better to just write `the set $\mathcal{S}$'. Q3: The instability of the medial axis is essential, and therefore has a significant numerical impact. The instability makes numerical computation unreliable, unless you are very careful about the perturbations (discussed in this paper) or the pruning (discussed in various other works). The intuition behind the existence of the instability is as follows: Roughly speaking, the medial axis is sensitive to "curvature"-like effects and global effects. Small (in terms of the Hausdorff distance) local perturbations can still have huge effects on the "curvature" and thus on the shape of the medial axis. The simplest example is the following: We start with two parallel lines. The medial axis is the line between them right in the middle. Then we perturb one of the lines a tiny bit (in Hausdorff distance) to create a small bump (with high curvature). As a consequence, the medial axis gains a large new branch that extends towards the bump. We will add some extra explanation and a figure in the final version, based on this example. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, I have no other question, I will make a final decision based on all the discussions and the revised paper.
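The parallel-lines-with-a-bump example in the answer to Q3 can be checked numerically. In the following sketch (ours; the bump is idealized as a single extra point at height $h$), the new medial-axis branch is the parabola of points equidistant from the bump and the lower line, with vertex at height $h/2$, so a Hausdorff perturbation of size $h$ moves the medial axis by roughly $1/2$:

```python
import numpy as np

# S0: two parallel lines y=0 and y=1; the medial axis of S0 is the line y=1/2.
# S1: S0 plus a "bump", idealized as one extra point at (0, h), which is a
# Hausdorff perturbation of size h.
h = 0.01

# Points equidistant from the bump (0, h) and the line y=0 satisfy
#   x^2 + (y - h)^2 = y^2   =>   y = (x^2 + h^2) / (2h),
# a parabola whose vertex (0, h/2) is a medial point of S1.
vertex = np.array([0.0, h / 2])

# Brute-force check: sample S1 and verify the vertex is equidistant from two
# distinct features (the bump and the foot (0, 0)), both at distance h/2.
xs = np.linspace(-3, 3, 60001)
lower = np.stack([xs, np.zeros_like(xs)], axis=1)
upper = np.stack([xs, np.ones_like(xs)], axis=1)
S1 = np.concatenate([lower, upper, [[0.0, h]]])
d = np.linalg.norm(S1 - vertex, axis=1)

# old axis y=1/2 vs. vertex of the new branch: the axis moves ~50x more
# than the set did
axis_displacement = 0.5 - h / 2
```

So a set perturbation of size $h = 0.01$ displaces the medial axis by about $0.495$, which is the essential instability described above.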
Summary: This work proves that the medial axis of closed sets is Hausdorff stable; this extends the existing stability result for the medial axis of C^2 manifolds under C^2 ambient diffeomorphisms. The contributions are: 1. This work makes no assumptions on the set except closedness. The stability of the medial axis of smooth manifolds has been intensively studied in the literature; this work omits the manifold assumption. 2. The stability is achieved without pruning the medial axis. A large body of work has to prune the medial axis. 3. The stability results hold for sets in arbitrary dimensions and are insensitive to the dimension of the set itself. This theoretical result plays a fundamental role in many fields, and the generalization is important to many practical applications. Strengths: The theoretical results are very general: they do not require the manifold assumption, they do not need to prune the medial axis, and they hold in any dimension. The work is clearly presented. All the key concepts are explained thoroughly, and the lemmas, theorems, and corollaries are explained in detail and rigorously formulated. The proofs are step by step, clean and easy to follow. Weaknesses: The theoretical results are elegant and convincing. It would be helpful to give some numerical experimental results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The stability result is established with respect to smooth diffeomorphisms of the ambient space; is the bounding-sphere-preserving condition intrinsically essential or technically necessary? To what extent can one remove this restriction? 2. Suppose the set S is a C2 surface; if S is deformed to generate a curvature singularity, the surface becomes C1 at the singularity, and the medial axis may change drastically. Please explain why a C^{1,1} ambient diffeomorphism avoids this situation. 3. For the conjecture, the cut locus is closely related to the sign of the Gaussian curvature on the surface.
A small perturbation changing the sign of the curvature may suddenly generate conjugate points. From this point of view, it seems the stability of the cut locus may be hard to achieve. Please explain your insights for the conjecture. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The current stability result assumes the diffeomorphism is a small perturbation of the identity and that it preserves the bounding sphere. This constraint seems artificial and inconvenient for practical applications. Maybe this requirement can be weakened, or the bounding sphere pushed to infinity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The assumption is technical in one sense, but necessary in another. Let us explain this: If we consider just two points in the plane, let’s say $(0,0)$ and $(0,1)$ then the medial axis is a horizontal line. If you perturb $(0,1)$ into $(\sin \theta, \cos \theta)$ the medial axis will have an angle of $\theta$ with the horizontal line (see also Figure 2 in the paper). The Hausdorff distance between two non-parallel lines is infinite, so it is impossible to give a bound on the distance between the medial axes without localizing in one way or another. However, if we restrict ourselves to a ball around the origin of size $r/2$ then the Hausdorff distance between the two restricted medial axes is $\mathcal{O} (r \cdot \theta)$. This shows that some bounding is necessary to obtain quantitative results. The assumptions on the ambient diffeomorphism (namely that it keeps the bounding sphere invariant) could be replaced by other assumptions that guarantee localization (as we tried to explain in Remark 2.1): For a given point $x$ in $\mathbb{R}^d$ we only have to consider those points of the set $\mathcal{S}$ that are relatively close (a distance at most $r/2$) to $x$ (if they exist). In other words, the medial axis of $\mathcal{S}$ will not be influenced by points that are far away: the bounding sphere of radius $r$ can be ignored (technically one can interpolate between the given diffeomorphism and a diffeomorphism that is the identity beyond the bounding sphere). So if the set $\mathcal{S}$ is sufficiently dense (i.e. for every $x$ there are points in $\mathcal{S}$ that are not further than $r/2$ away from $x$) or locally (for points not so far from the set $\mathcal{S}$) all the stability results go through. In particular, if we consider our first example and we look at a neighbourhood of size $r/2$ of the origin then the stability bounds on the medial axis will hold in this neighbourhood.
We intend to extend our explanation near Remark 2.1 in the final version, because we agree that we were too terse. Q2: The composition of a $C^2$ map with a $C^{1,1}$ map is itself $C^{1,1}$, so the surface can never be just $C^1$. One can associate some curvature to sets of positive reach (this is a non-trivial theory that goes back to Federer, for a complete modern introduction see [RZ19]) and the curvatures of these sets are in a certain sense bounded. In particular, sets of positive reach cannot have a curvature singularity (and $C^{1,1}$ maps preserve the positivity of reach). [RZ19] Jan Rataj and Martina Zähle. Curvature measures of singular sets. Springer, 2019. Q3: We are not completely sure that we understand the question of the reviewer. If the reviewer means: What happens if the curvature of (for example) a curve changes from slightly positive to slightly negative? In that case, the medial axis is very far away from the curve and thus excluded by our localization assumptions, see the answer to question 1. If our reading of the question is not correct, we kindly ask the reviewer to specify in particular what they mean by the cut locus, why the focus lies on the Gaussian curvature, and which conjecture they refer to. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper. Thanks, Area Chair
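The two-point example in the answer to Q1 can be verified numerically. This sketch (ours; names are illustrative) samples the two perpendicular bisectors, clips them to a ball of radius $R$ around the origin, and checks that their Hausdorff distance is on the order of $R \cdot \theta$:

```python
import numpy as np

# The medial axis of a two-point set {(0,0), p} is the perpendicular
# bisector of the segment from (0,0) to p.
def bisector_points(p, R, n=2001):
    mid, u = p / 2.0, p / np.linalg.norm(p)
    tang = np.array([-u[1], u[0]])                 # direction of the bisector
    t = np.linspace(-2 * R, 2 * R, n)
    pts = mid + t[:, None] * tang
    return pts[np.linalg.norm(pts, axis=1) <= R]   # clip to the ball

def hausdorff(A, B):
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

theta, R = 0.1, 5.0
axis0 = bisector_points(np.array([0.0, 1.0]), R)                       # y = 1/2
axis1 = bisector_points(np.array([np.sin(theta), np.cos(theta)]), R)   # rotated
dH = hausdorff(axis0, axis1)
# unrestricted, the two non-parallel lines are at infinite Hausdorff
# distance; clipped to the ball, dH stays on the order of R * theta
```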
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proves the Hausdorff stability of the medial axis of closed bounded sets. This is a mathematics paper. The authors set up a foundation for their problem, then applied Theorem 2.6 (from [19]) to complete their proof. The end result is quite beautiful, in fact. The authors also show that the result in [13] is a special case of their result. Strengths: - The paper is written well. Despite not having a mathematics background, I am able to read and understand the majority of the proof. (Nitpick: there are small typos; for example, some \pi_{S}(p_4) are annotated incorrectly in Figure 1.) - The authors proved a difficult result (as an indication, [13] is a special case of the result). The proof seems to be correct to me. Weaknesses: - I have a hard time understanding how this result can be used in machine learning / computer vision / computational geometry applications. The motivation is explained in ln45 - ln73, but I still do not see how this result can be applied. For the benefit of the readers, I think applications need to be demonstrated; otherwise NeurIPS might not be the right audience. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'd like to understand how this result can be applied in applications that are of interest to the NeurIPS audience. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Many machine learning algorithms (in particular for shape recognition) based on the medial axis are already used in practice [10, 18, 25, 33, 41] (as cited in the introduction), for example for the study of root systems of plants. Our paper gives a theoretical underpinning of these results. We show that the features extracted in these papers are stable and therefore reliable and explainable. We believe that this paper will be of interest to those at NeurIPS who are interested in explainable A.I. and provably correct algorithms. --- Rebuttal Comment 1.1: Comment: Can the reviewer please take a look at [this comment](https://openreview.net/forum?id=T47mUw8pW4&noteId=bBym4gFu57) to see if your question on application is better addressed?
Summary: In this paper, the authors analyze the stability of the medial axis of a set S when S is perturbed by a map that is Lipschitz with Lipschitz derivatives. This stability result is of interest in numerous applications in machine learning, such as astrophysics. The authors' results improve upon an existing result by Chazal and Soufflet in a few ways: 1. The authors remove the assumption that the set S must be a piecewise smooth manifold; here they only require S to be closed and bounded. 2. They do not require pruning the medial axis. 3. Their result holds in high dimensions. Strengths: I think the result is significant, and of interest to the NeurIPS community. Compared to Chazal and Soufflet, I think another significant aspect of this result is that it is quantitative, whereas Chazal and Soufflet's result is only qualitative. Weaknesses: I have some questions about how this paper's results compare to existing results, as well as about several aspects of the result (see below). These may not be considered weaknesses if the authors can address them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have the following questions regarding this result: 1) How significant is removing the manifold assumption? Even if a set is not smooth, can it not be made smooth via some infinitesimally small perturbation (e.g. a Gaussian convolution)? Can the authors elaborate on the strength of their result in the context of an application? E.g. for the astrophysics image example, can we not simply smooth the image via an infinitesimal smoothing operation, and then apply Chazal and Soufflet? 2) Are there any existing quantitative bounds that the authors can compare to (even if assumptions differ)? If there are, how does the rate in Theorem 3.9 (line 212) compare to existing rates? 3) To double check, on line 212, as $L_F \to 1$, $L_{DF} \to 0$, and $\epsilon_1, \epsilon_2 \to 0$, we have $C_L(r, L_F, L_{DF}, \epsilon_1, \epsilon_2) \to 0$, is that correct?
4) Can the authors give an interpretation of rch(S) defined on line 99? In particular, for a non-smooth S, can rch(S) be 0? 5) If rch(S)=0, the result due to Federer in Theorem 2.6 becomes vacuous. However, the result in Theorem 3.9 does not seem to depend on rch(S) at all, even though it crucially uses Federer's result. Can the authors explain why this is? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The strength of weakening the differentiability assumption is best seen when the set is not a manifold. Consider for example a Y shape in the plane. A (Gaussian) convolution cannot make this into a smooth curve. Such Y branches are common in biology (for example, the splitting of branches or roots of a plant, or the structure of cells in a plant). Note that shape recognition questions in biology inspired Blum to introduce the medial axis (although some earlier authors such as Erdos considered the same set). Given this remark we think we should have mentioned some applications in biology as well. We focused on the applications in astrophysics because there it would be apparent that high dimensional results are relevant. In the astrophysical context (part of our motivation), one could think of a planet being ripped into pieces (creating very irregular, i.e. non-smooth, pieces) while it falls into a black hole, being at the same time surrounded by gas (which is usually present in the accretion disk of a black hole). Another scenario would be some colliding objects, like asteroids (which are themselves not very regular), or several shockwaves or jets hitting each other. None of these examples are manifolds and they are thus beyond the reach of the theory of Chazal and Soufflet, who only consider smooth manifolds without boundary. Q2: Chazal and Soufflet [13], Theorems 3.2 and 3.3, do not give quantitative results; they only prove convergence, not that the convergence is Lipschitz (Theorem 4.1 of our paper). Our result gives explicit Lipschitz constants. Our bounds are significantly better than the recent contribution in Lieutier and Wintraecken [29] (the most recent paper that gives bounds in the setting where one prunes), which only gives 1/2-Hölder bounds on the Hausdorff distance. Q3: This is indeed correct.
Q4: Yes, many non-smooth sets have 0 reach: In fact, Federer [19] says that sets of positive reach are piecewise (in a very weak sense) C^{1,1}, meaning that the derivative is Lipschitz continuous. The simplest example of a set with reach 0 is perhaps two line segments meeting at a non-zero angle. Q5: Many thanks for this question, because this was rather a big surprise to us as well. One can give some intuition. It suffices to consider the balls centred on the medial axis, and we apply Federer's result to these balls and not to the original set. These balls are well defined even if the set doesn’t have positive reach. Now, roughly speaking, one has the following: If such a ball is large, meaning that you are far away from the set, then Federer’s result will give stability. However, if the ball is small then after applying the map $f$ it will remain a slightly deformed small ball and (the point of) the medial axis (we are interested in) has to lie in this ball. To put it differently, because the ball is small the point has nowhere to go. --- Rebuttal Comment 1.1: Comment: Thank you for the response. These address my concerns and I increased my score to 7.
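The simplest reach-0 example from the answer to Q4 (two segments meeting at a non-zero angle) admits a short numeric illustration. In this sketch (ours), points on the angle bisector are medial points with two symmetric nearest feet, and they come arbitrarily close to the corner, so the reach of the set is 0:

```python
import numpy as np

# S: two unit-length segments leaving the origin at angles +/- alpha.
alpha = np.pi / 6
t = np.linspace(0.0, 1.0, 100001)
S = np.concatenate([
    np.stack([t * np.cos(alpha),  t * np.sin(alpha)], axis=1),
    np.stack([t * np.cos(alpha), -t * np.sin(alpha)], axis=1),
])

# A point p = (d, 0) on the bisector has two symmetric nearest points of S
# (one foot on each segment), both at distance d*sin(alpha), so p is medial,
# while its distance to the corner of S is only d.
for dist_to_corner in [0.5, 0.05, 0.005]:
    p = np.array([dist_to_corner, 0.0])
    dist_to_S = np.linalg.norm(S - p, axis=1).min()
    assert abs(dist_to_S - dist_to_corner * np.sin(alpha)) < 1e-6
# medial points exist arbitrarily close to S, hence rch(S) = 0
```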
PUCA: Patch-Unshuffle and Channel Attention for Enhanced Self-Supervised Image Denoising
Accept (poster)
Summary: This paper proposes a novel network architecture for self-supervised image denoising. The network is built with the foundation blocks of NAFNet, and combines masked and dilated convolutions to implement the blind-spot network. Patch-unshuffle is introduced as the downsample/upsample operation, which enables a multi-scale design of the network while maintaining the blind-spot mechanism. Experimental results show the effectiveness of the proposed method on real-world denoising datasets. Strengths: 1. The idea of searching for a flexible network architecture is reasonable for improving self-supervised denoising performance. 2. Patch-unshuffle is well designed to enable downsample/upsample operations for implementing a multi-scale network architecture. 3. Experimental results show the effectiveness of the proposed method in self-supervised image denoising on real-world images. 4. The paper is well written and easy to follow. Weaknesses: 1. The overall novelty is limited. The training process is the same as AP-BSN, and the network architecture is assembled from the existing DBSN and NAFNet. The only contribution is the idea of improving the network flexibility and implementing a multi-scale network with patch-unshuffle, which is not technically sound. 2. The ablation study is not complete. The central idea of this paper is to improve the DBSN used in AP-BSN with a multi-scale architecture; however, there are existing multi-scale BSN architectures such as [1]. How does the proposed network perform compared with [1]? 3. The set of competing methods is not complete. This paper shows good performance on real-world image denoising, but the latest self-supervised denoising methods [2][3] are ignored. [1] High-quality self-supervised deep image denoising. NIPS, 2019. [2] LG-BPN: Local and Global Blind-Patch Network for Self-Supervised Real-World Denoising. CVPR, 2023. [3] Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising.
CVPR, 2023 Technical Quality: 3 good Clarity: 3 good Questions for Authors: My major concern is weaknesses 2 and 3; please complete the necessary experiments. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** The overall novelty is limited. The training process is the same as AP-BSN, and the network architecture is assembled from the existing DBSN and NAFNet. The only contribution is the idea of improving network flexibility and implementing a multi-scale network with patch-unshuffle, which is not technically sound. **WA1** As you have mentioned in the strengths section, we devised multi-scale network architectures through patch-unshuffle/shuffle to enhance self-supervised denoising performance. This outcome emerges from the expansion of network design, which was previously constrained by the requirements of BSN. Additionally, by incorporating the J-invariant block with channel attention, known as DAB, we achieved an ensemble effect from the subsamples generated by patch-unshuffle. This successful integration allowed us to effectively elevate denoising performance. --- **W2** The ablation study is not complete. The central idea of this paper is to improve the DBSN used in AP-BSN with a multi-scale architecture; however, there are existing multi-scale BSN architectures such as [1]. What is the performance of the proposed network compared with [1]? **WA2** Based on our understanding of [1], it appears that all convolutional layers employ an upward-shifted kernel with zero values below the center row and incorporate offsets. The same approach is used for downsampling/upsampling, utilizing offset methods along with either average pooling or nearest-neighbor upsampling. It seems plausible that for ablation studies: 1) DAB could be replaced with the proposed convolution and 2) patch-unshuffle/shuffle could be substituted with offset-incorporated avg. pooling/nearest-neighbor upsampling. In [1], J-invariance is maintained through a combination of downward rows with zero-value convolutions, offsets, and avg. pooling/nearest-neighbor upsampling. 
Our method is optimized for centrally masked convolutions and dilated convolutions, which is why using the alternative approach that we suggested could potentially break J-invariance. Consequently, both methods might yield results similar to pixel-unshuffle/shuffle. --- **W3** The set of competing methods is not complete. This paper shows good performance on real-world image denoising, but the latest self-supervised denoising methods [2][3] are ignored. **WA3** At the time of our proposed method's submission, neither [2] nor [3] had been published. However, in response to your request, we conducted a comparison. LG-BPN [2] augmented receptive fields through expanded convolutions and a local-global branch. SASL [3] employed a Blind-Neighborhood Network and a Locally Aware Network to separate denoising into textured and flat regions. PUCA effectively expands the receptive field by downsampling/upsampling feature maps using patch-unshuffle/shuffle. Furthermore, by extracting global context from a multi-scale representation and integrating fine details through skip connections, PUCA benefits denoising without the need for distinct regions or local-global separation, unlike other methods. The performance on the SIDD and DND benchmarks is as follows, with results from [2] and [3] extracted from the original texts: | | SIDD benchmark (PSNR/SSIM) | DND benchmark (PSNR/SSIM) | |-------------|----------------------------|---------------------------| | PUCA (Ours) | 37.54/0.936 | 38.83/0.942 | | LG-BPN [2] | 37.28/0.936 | 38.43/0.942 | | SASL [3] | 37.41/0.934 | 38.18/0.938 | Thank you for your suggestion; we will incorporate it into the final version. --- Rebuttal Comment 1.1: Comment: The rebuttal has addressed most of my concerns, and I'll improve my rating from 6 to 7. --- Reply to Comment 1.1.1: Title: Thank you for updating the score! Comment: We greatly appreciate the insightful feedback you've provided. 
The points you raised prompted us to reevaluate approaches we had overlooked. Your feedback has been instrumental in improving the manuscript, and we sincerely thank you for raising the score to 7. Your perspective has notably elevated the quality of the paper, and we're open to any additional thoughts or suggestions you want to share.
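The patch-unshuffle/shuffle rearrangement debated in this thread (WA1/WA2) amounts to a pure tensor reindexing: whole p-by-p patches, rather than single pixels, are moved onto the channel axis. The following NumPy sketch is our own illustration under assumed conventions (channel-first layout, grouping factor `s`, patch size `p`), not the authors' implementation:

```python
import numpy as np

def patch_unshuffle(x, s=2, p=2):
    """Downsample by s, moving whole p-by-p patches (not single pixels)
    onto the channel axis so local neighborhoods stay intact."""
    c, h, w = x.shape
    assert h % (s * p) == 0 and w % (s * p) == 0
    # view the image as a grid of s-by-s groups of p-by-p patches
    y = x.reshape(c, h // (s * p), s, p, w // (s * p), s, p)
    # move the within-group offsets (s, s) into the channel axis
    y = y.transpose(0, 2, 5, 1, 3, 4, 6)
    return y.reshape(c * s * s, h // s, w // s)

def patch_shuffle(y, s=2, p=2):
    """Inverse of patch_unshuffle: upsample by s."""
    cs, hs, ws = y.shape
    z = y.reshape(cs // (s * s), s, s, hs // p, p, ws // p, p)
    z = z.transpose(0, 3, 1, 4, 5, 2, 6)
    return z.reshape(cs // (s * s), hs * s, ws * s)
```

With p=1 this degenerates to the usual pixel-unshuffle; with p>1 each channel plane of the output is tiled from intact p-by-p patches, which is what lets centrally masked/dilated convolutions keep their blind spot after downsampling.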
Summary: This work addresses the problem of developing a self-supervised learning-based denoiser. To this end, this work proposes the concepts of patch shuffle/unshuffle operations to effectively downsample and upsample the features while ensuring that the J-invariance property holds with the resulting network. Using dilated attention blocks and patch shuffle/unshuffle, this work comes up with a variant of the U-Net architecture that can significantly improve J-invariant networks' spatial information aggregation ability. The proposed method improves the denoising performance on public benchmark datasets. Strengths: 1. The paper is written well. 2. The new notions of patch shuffle/unshuffle and dilated attention blocks to ensure that J-invariance holds are interesting. 3. The proposed method outperforms prior arts significantly. Weaknesses: 1. A study is missing to elaborate on which other network architectures the proposed ideas fit. There have been numerous variants of U-Net proposed in the literature. Whether those extensions can boost the performance further remains unclear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please address my comments under weaknesses. - Visual comparisons with other self-supervised methods (example, Figure 6) suggest that the proposed method loses details while removing the noise completely, whereas other methods retain more details. Is there an explanation for why this is so? Is it possible to control the denoising level of the proposed method to earn back the lost details? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations has been addressed adequately. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** A study is missing to elaborate on which other network architectures the proposed ideas fit. There have been numerous variants of U-Net proposed in the literature. Whether those extensions can boost the performance further remains unclear. **WA1** Structures employing self-attention, such as Uformer [42], are left for future work due to the potential of self-attention to compromise J-invariance. Adhering to BSN's requirements, we adapted the MIMO-UNet [8]. Our training yielded results of PSNR: 37.209 dB and SSIM: 0.875 on the SIDD validation set. Due to time constraints, extensive hyperparameter searching was not conducted. However, our modified MIMO-UNet surpassed the denoising performance of AP-BSN [21]. Thus, we anticipate that PUCA could enhance performance across various U-Net variants. --- **Q1** Visual comparisons with other self-supervised methods (example, Figure 6) suggest that the proposed method loses details while removing the noise completely, whereas other methods retain more details. Is there an explanation for why this is so? Is it possible to control the denoising level of the proposed method to earn back the lost details? **QA1** Sorry for the confusion. Because the ground truth is not visualized alongside, it might seem that some details are lost, but it is apparent from Figure 6 that PUCA in fact displays sharper edges compared to other models. Given this observation, we believe that our method does not miss the details. In our perception, the baselines preserve detail that is more akin to less-removed noise than to real image detail. Upon reviewing Figure 1 and the supplementary materials, we can confirm that PUCA represents finer details like text with greater clarity. Adjusting the level or dilation of the model does offer the possibility to control details, but along with that, denoising also becomes weaker, making it less practical. 
--- Rebuttal Comment 1.1: Title: Post-rebuttal comments Comment: The rebuttal has addressed all my concerns. I retain my original rating. --- Reply to Comment 1.1.1: Title: Thank you for the valuable review! Comment: We appreciate your review and we are glad to hear that the rebuttal has effectively addressed your concerns. Thank you for maintaining your original rating. Your feedback has been valuable in enhancing the quality of the manuscript.
Summary: This work extends the field of blind spot networks (BSN) for self-supervised image denoising. They propose patch-unshuffle/shuffle, a downsampling/upsampling technique that preserves J-invariance, which allows building U-Net-like architectures, expanding the receptive field, and utilizing multi-scale representations in BSNs. Second, the dilated attention block is introduced, a J-invariant channel attention mechanism incorporating global information. The method outperforms existing self-supervised methods by 1.6dB (SIDD dataset) and 0.7dB (DND dataset) in PSNR, resulting also in improved perceived image quality for the depicted examples. Strengths: 1. The proposed method is interesting and alleviates the constrained architecture design imposed by J-invariance, opening up possibilities for new designs of BSNs. 2. The experiments on the two most common benchmarks for self-supervised image denoising are extensive and the method is compared to a range of baseline methods. 3. The authors theoretically derive the J-invariance property of their proposed patch-unshuffle method. Weaknesses: 1. Section 3.2 introduces the dilated attention block (DAB) that preserves J-invariance. However, from the sentence "To fulfill this requirement we incorporate a d-dilated 3x3 depth-wise convolution (DDC) before gating and attention, taking inspiration from D-BSN" it is not clear to me how exactly J-invariance is preserved nor if or how the approach differs from the previous work. 2. From Section 2.2 it is not clear to me how pixel-shuffle downsampling (PD) works. From Figure 4 it seems as if PD changes the dimension of the input image; however, this is not discussed in Section 2.2, and the algorithm overview in Figure 3 shows the signal before and after PD to exhibit the same dimension. 3. I find that dropping the channel dimension in the proof in Section 3.1 is a bit confusing as now several input indices are mapped to the same output index. 
Maybe an illustration with an explicit example of how a set of indices is transformed might be helpful. 4. Summarizing the above points, the clarity of the work could be improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Choosing the by far worst-performing supervised baseline as the only visual reference for supervised training in Figure 1 seems a bit misleading with respect to the comparison of supervised and self-supervised methods. 2. The caption in Table 3 (3b with DAB) does not fit the description in the text (3b no DAB). 3. Line 256 refers to other work without citing it. 4. The paper uses a numeric citation style in the text but an alphabetic one in the references, making it impossible to match citations to entries in the references. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors briefly touch on one limitation of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** Section 3.2 introduces the dilated attention block (DAB) that preserves J-invariance. However, from the sentence "To fulfill this requirement we incorporate a d-dilated 3x3 depth-wise convolution (DDC) before gating and attention, taking inspiration from D-BSN" it is not clear to me how exactly J-invariance is preserved nor if or how the approach differs from the previous work. **WA1** In Figure 3 of the main text, the layers of the DAB block are depicted. The DAB block integrates components including LayerNorm and 1x1 convolution, DDC, SimpleGate, and SCA. LayerNorm and 1x1 convolution do not affect J-invariance. SimpleGate performs element-wise multiplication on two chunks divided along the channel axis, thereby maintaining J-invariance similarly. SCA goes through global average pooling and a fully connected layer to create channel attention. Then, it multiplies the input and the channel attention along the channel axis. Consequently, even in this scenario, it has no impact on J-invariance. In the context of the conventional channel-attention block, using a simple depth-wise convolution instead of DDC disrupts the J-invariance upheld by centrally masked/dilated convolutions, as illustrated in Figure 2. To address this, DDC (a dilated depth-wise convolution) is proposed to maintain J-invariance. While D-BSN can be used as is, channel attention was introduced to harness ensemble effects from the subsamples generated by patch-shuffle/unshuffle. As indicated in Table 3 (c) and (d), the presence of DAB leads to a PSNR increase of 0.106. Table 3: Ablation study on PUCA components with SIDD validation | | DAB | Unshuffle | PSNR | SSIM | |-----|-----|-----------|--------|-------| | (a) | - | Pixel | 23.662 | 0.328 | | (b) | V | - | 36.768 | 0.875 | | (c) | - | Patch | 37.386 | 0.880 | | (d) | V | Patch | 37.492 | 0.880 | --- **W2** From Section 2.2 it is not clear to me how pixel-shuffle downsampling (PD) works. 
From Figure 4 it seems as if PD changes the dimension of the input image; however, this is not discussed in Section 2.2, and the algorithm overview in Figure 3 shows the signal before and after PD to exhibit the same dimension. **WA2** PUCA adopts the pixel-downsampling approach from AP-BSN [21]. Similar to the observation in Figures 4 and 5 of AP-BSN, pixel-downsampling (PD) serves to arrange the subsamples that are listed along the channel axis of pixel-unshuffle onto a single plane. Consequently, Figure 3 of the main text exhibits the same channel dimensions before and after PD. --- **W3** I find that dropping the channel dimension in the proof in Section 3.1 is a bit confusing as now several input indices are mapped to the same output index. Maybe an illustration with an explicit example of how a set of indices is transformed might be helpful. **WA3** We agree with your suggestion; multiple input indices mapping to the same output index could potentially lead to confusion. We visualized the operating principle of patch-unshuffle in Figure 4 (b). Patches with the same color are assembled along the same channel axis, and patches of different colors are arranged in the same order as indicated by the red patches. --- **Q1** Choosing the by far worst-performing supervised baseline as the only visual reference for supervised training in Figure 1 seems a bit misleading with respect to the comparison of supervised and self-supervised methods. **QA1** We agree with your suggestion and we will include additional supervised models in the final version. --- **Q2** The caption in Table 3 (3b with DAB) does not fit the description in the text (3b no DAB). **QA2** As you mentioned, the table caption in the main text has indeed been changed. We will make the necessary corrections. Thank you for bringing this to our attention. --- **Q3** Line 256 refers to other work without citing it. **QA3** We will add the citation to the work at line 256, as you suggested. 
--- **Q4** The paper uses a numeric citation style in the text but an alphabetic one in the references, making it impossible to match citations to entries in the references. **QA4** We will match the bibliography style to the numeric citation style. This should help avoid any confusion. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for clarifying my questions regarding D-BSN vs. DAB and PD (pixel-shuffle downsampling) in Section 2.2 vs. pixel-unshuffle in Figure 4a. I suggest adding some of those explanations to the paper to make it more self-contained, as for now it requires detailed knowledge of the concepts in [47,21] and [36] to follow the paper. With that, together with including more supervised baselines as mentioned in QA1, I believe that the presentation of the paper has been improved, and I increased the presentation score from 2 to 3 and the overall score from 5 to 6. --- Reply to Comment 1.1.1: Title: Thank you for updating the score! Comment: Thank you for your response. Thanks to your deep insights, we can clearly distinguish between pixel-shuffle downsampling (PD) and pixel-unshuffle and clearly explain the difference between D-BSN and DAB. We fully agree with the suggestion to include these explanations in the body of the paper. This approach aligns well with our goal of making the paper more self-contained by increasing clarity and understanding. We also thank you for emphasizing the need to incorporate additional supervised baselines, as highlighted in QA1. Your comments have been very helpful in refining the manuscript, and we greatly appreciate your thoughtfulness in adjusting the score to a 6. We sincerely appreciate your valuable feedback and insightful suggestions. Your perspective has greatly improved the quality of the paper, and if you have any additional insights or recommendations, please feel free to share them with us.
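A side note on WA1 in the thread above: the gating and attention components act only along the channel axis, which is why they leave the blind spot created by the convolutions untouched. Below is a minimal NumPy sketch of the two operations as we understand them from the rebuttal; the function names and shapes are our assumptions, not the authors' code:

```python
import numpy as np

def simple_gate(x):
    """Split channels in half and multiply element-wise.
    Purely per-pixel: no spatial positions are mixed."""
    c = x.shape[0] // 2
    return x[:c] * x[c:]

def simplified_channel_attention(x):
    """Global average pool -> per-channel weights -> rescale.
    (A learned FC layer would sit between pooling and rescaling;
    a stand-in identity is used here.)"""
    w = x.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    return x * w
```

Per the rebuttal's argument, the only spatially mixing component inside the block is the depth-wise convolution, which is why it alone must be dilated (DDC) to respect the blind-spot constraint.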
Summary: This paper presents a method for self-supervised image denoising. Specifically, a patch-unshuffle (and shuffle) operation together with a dilated attention block was proposed to achieve the goal. Experimental results on two real noise datasets (SIDD and DND) show the effectiveness of the proposed method over other self-supervised methods. Strengths: + Real noisy image denoising is an important problem in the denoising community and the authors also alleviate the limitations from the $\mathcal{J}$-invariance to some extent. + The proposed method was shown to perform generally better than other self-supervised methods on the SIDD and DND real noise datasets. + The paper is generally well-written and easy to follow. Weaknesses: - From the description, it seems the authors were motivated by the flexibility in network structures. But it is unclear why the flexibility in network structure design is necessary, or how it would benefit the denoising community. - The novelty of the proposed patch-unshuffle/shuffle is a bit limited, especially considering the previously proposed pixel-unshuffle/shuffle approach [47]. The key idea is very similar, the only difference being pixel vs. patch, and the significance and novelty of such a change are not clarified with convincing justifications. For example, what if we replace the patch-unshuffle/shuffle blocks in the proposed method with the corresponding pixel-wise version? It is also unclear what the contribution of the PD and PU processing at the input/output ends is. What if we remove these processing operations and only use the proposed patch-wise blocks? On the other hand, from what was shown in Fig. 5 in [47], it seems that they already used a similar idea of patch-shuffle. - The proposed patch-unshuffle was motivated by the claimed issue of breaking the $\mathcal{J}$-invariance in pixel-unshuffle. But as also acknowledged by the authors, the proposed patch-unshuffle can still break it in some cases. 
- The novelty of the proposed dilated attention block is also a bit limited. As mentioned by the authors, it was based on the channel attention [7] being applied to U-Net, with inspirations from D-BSN [36] and SCA [7]. If there is a new design involved, the contribution and novelty are unclear, and it lacks convincing (experimental) validation, e.g., what happens without the new design? How does the performance relate to the proposed design? - It is unclear what dilation rate was used in the proposed DAB block. It is also unclear how this hyper-parameter d affects the model's performance. - The proposed method performs worse than C2N+DIDN in SSIM (Table 1). We know that SSIM usually represents the visual or structural quality of the reconstructed image, but there is no clarification for this. - In the ablation study (Table 2), the authors attribute the decreased performance when the level increased from 3 to 4 to the resolution of the latent features. To validate this, an experiment with a larger input image (thus a higher resolution for the latent features when level=4) should have been included. - Although LG-BPN [35] may not have been published (at CVPR) when the proposed method was submitted, the idea of local-global branches and the larger receptive fields through dilated convolutions is very similar to the proposed method. As a result, the novelty is weakened. - The proposed method claimed issues with the $\mathcal{J}$-invariance in existing methods and, motivated by this, claimed that the proposed method "successfully alleviates the constrained architecture design imposed" by it. But it is unclear how the constraints for architecture design were alleviated. This was not validated, unless other designs were shown to be effective with the proposed method. - Missing definition of p (L136-144). If it's the same as the p in L145, please specify (the first time it appears). 
- It is unclear what "PUCA" indicates, "Patch-Unshuffle Channel Attention", or "xxxUnetxxx"? Please clearly define it the first time it is introduced. - In the caption of Fig. 3, "During encoding...through patch shuffling..." should be "patch unshuffling"? Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be helpful if the authors could address the concerns raised in the above Weaknesses section. For example, the main motivation of the proposed method and its validity; the technical novelty of the newly proposed components (i.e. the patch-unshuffle and the DAB blocks); those concerns about the experiments. The reviewer would be happy to change the rating if the concerns could be well addressed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors clearly mention the limitations of their method (in the Conclusion) and potential societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Table 3: Ablation study on PUCA components with SIDD validation | | DAB | Unshuffle | PSNR | SSIM | |-----|-----|-----------|--------|-------| | (a) | - | Pixel | 23.662 | 0.328 | | (b) | V | - | 36.768 | 0.875 | | (c) | - | Patch | 37.386 | 0.880 | | (d) | V | Patch | 37.492 | 0.880 | **W1** BSN's evolution has shifted from input-masking to a centrally-masked kernel approach for efficiency and reduced artifacts. Due to BSN's constraints, only stacked dilated convolutions were viable. In contrast, supervised denoising, like U-Net [2,8,19,33,41,42,46], enjoys more design flexibility. U-Net extends receptive fields, extracts multi-scale global context, and combines details through skip connections, making it effective. By infusing scalability into self-supervised denoising design, we foresee new network structures emerging for enhanced performance. **W2** In self-supervised denoising, the target image matches the input noisy image. To prevent identity mapping, BSNs have been used, requiring j-invariance. However, centrally masked/dilated convolutions with pixel-shuffle/unshuffle can disrupt j-invariance and hamper denoising. [47] introduced pixel-shuffle down-sampling adaptation as shown in Figure 5 of [47]. To maintain j-invariance and successful denoising, we propose patch-unshuffle/shuffle for downsampling/upsampling. Table 3 (a) in the main text highlights that pixel-shuffle/unshuffle generates identical outputs to the input image. Real noise displays correlations distinct from synthetic noise, challenging the assumption of independent zero-mean noise in synthetic methods. Thus, synthetic-based approaches struggle with real noise generalization. PD disrupts noise correlations by arranging subsamples along the channel axis onto a plane as seen in AP-BSN’s Figures 4 and 5. PU recombines subsampled images. 
While j-invariance remains intact without PD and PU, real noise's characteristics allow predictions influenced by neighboring pixels, resembling identity mapping. This applies to AP-BSN and LG-BPN as well. **W3** D-BSN constructs its network through a combination of centrally masked/dilated convolution. As demonstrated in Noise2Kernel*, maintaining J-invariance necessitates a dilation of dilated convolution, $d \geq \mathrm{ceil}(K/2)$ (where $\mathrm{ceil}(a)$ represents the smallest integer greater than or equal to a, and $K$ signifies the kernel size of centrally masked convolution). Similar to adjusting the dilation of dilated convolution to satisfy BSN's requirement, adjusting the patch size of patch-unshuffle serves as a method to meet the demands of BSN. **W4** Using the channel attention [7] directly would break j-invariance. Hence, we made the modification of changing the convolution in the channel attention to dilated convolution. While D-BSN could be utilized as is, we incorporated channel attention to gain ensemble effects from the subsamples generated by patch-shuffle/unshuffle. In Table 3 (c) and (d), it can be observed that the presence of DAB results in a PSNR increase of 0.106. **W5** In the main text, we utilized a dilation of 2, and in accordance with your request, the effects of experimentation with different dilations are as follows: | Dilation | PSNR | SSIM | |----------|--------|-------| | 2 | 37.492 | 0.880 | | 3 | 37.284 | 0.884 | | 4 | 36.824 | 0.879 | As the dilation size increases, the PSNR decreases. We infer that this phenomenon occurs due to the simultaneous increase in dilation and patch-unshuffle size, which presents challenges in reconstructing local information. **W6** Our intuition is that C2N+DIDN's stabilizing loss term helps maintain the luminance and contrast of the dataset, and the addition of C2N-generated noise to the clean image enhances structural aspects, leading to high SSIM scores. 
Furthermore, we infer that the SIDD dataset's larger object sizes relative to the noise positively contribute to C2N+DIDN achieving high SSIM values. **W7** As per your request, we measured the performance based on level variations in the DND dataset with a size of 512x512. The results are as follows: | level | PSNR | SSIM | |---------|--------|-------| | level-3 | 38.884 | 0.942 | | level-4 | 39.030 | 0.944 | As anticipated in the main text, it is evident that as the resolution increases, performance improves with deeper levels (level 3 to level 4). **W8** LG-BPN augmented receptive fields through expanded convolutions and a local-global branch. PUCA effectively expands the receptive field by downsampling/upsampling feature maps using patch-unshuffle/shuffle. Furthermore, by extracting global context from a multi-scale representation and integrating fine details through skip connections, PUCA benefits denoising without the need for local-global separation. **W9** Sorry for the confusion. What we intended is that we introduced scalability to network design. To assess the potential for extension to U-Net variants, we modified [8], considering the requirements of BSN. As a result, we obtained a PSNR of 37.209 dB and an SSIM of 0.875 on the SIDD validation set. While further optimization seems necessary, we anticipate that the components of PUCA can be applied to various U-Net variants to enhance performance. **W10** It is the same $p$ as in L145; as you suggested, we will define it when it first appears. **W11** "PUCA" stands for "Patch-Unshuffle and Channel Attention," and we will make sure to properly introduce it in the introduction as you suggested. Thank you for the advice. **W12** We will make the correction to "Patch Unshuffling" in Figure 3 as you mentioned. [*] Noise2kernel: Adaptive self-supervised blind denoising using a dilated convolutional kernel architecture. 
Sensors, 2022 --- Rebuttal Comment 1.1: Title: Re: rebuttal Comment: Thanks to the authors' rebuttal. I have read the rebuttal and comments from other reviewers. It is good to see that most of my concerns were addressed, making the paper more clear and easier to understand. The authors are suggested to include these explanations into their final version. Although the technical novelty is still a bit limited to me, given the related prior works, I would raise my rating in response to the major concerns being addressed. --- Reply to Comment 1.1.1: Title: Thank you for updating the score! Comment: Thank you for your response. The questions you raised have helped us to clarify the role of dilation and levels again, and we completely agree with your suggestion to add more detailed explanations throughout the paper to make it clearer and easier to understand. Your insights have been very helpful in refining the manuscript, and we deeply appreciate your thoughtful adjustment of the score to a 6. Thank you very much for your valuable feedback and insightful suggestions. Your perspective has greatly improved the quality of the paper, and if you have any additional insights or recommendations, please feel free to share them with us.
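The J-invariance condition $d \geq \mathrm{ceil}(K/2)$ quoted in W3 of the thread above can be checked numerically: with a centrally masked 3x3 convolution followed by a 2-dilated 3x3 convolution, perturbing a pixel never changes the output at that pixel, while dilation 1 leaks. The following is a small self-contained NumPy check of our own, not the paper's code:

```python
import numpy as np

def conv2d(x, k, dilation=1):
    """'Same' 2-D correlation with zero padding and kernel dilation."""
    kh, kw = k.shape
    ph, pw = dilation * (kh - 1) // 2, dilation * (kw - 1) // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * dilation:i * dilation + x.shape[0],
                                j * dilation:j * dilation + x.shape[1]]
    return out

rng = np.random.default_rng(0)
k1 = rng.normal(size=(3, 3)); k1[1, 1] = 0.0  # centrally masked, K = 3
k2 = rng.normal(size=(3, 3))                  # follow-up convolution kernel
x = rng.normal(size=(9, 9))
x2 = x.copy(); x2[4, 4] += 10.0               # perturb the center pixel

# d = 2 = ceil(3/2): the blind spot survives the second convolution
blind = conv2d(conv2d(x, k1), k2, dilation=2)[4, 4]
blind2 = conv2d(conv2d(x2, k1), k2, dilation=2)[4, 4]

# d = 1 < ceil(3/2): the center pixel leaks back into its own output
leaky = conv2d(conv2d(x, k1), k2, dilation=1)[4, 4]
leaky2 = conv2d(conv2d(x2, k1), k2, dilation=1)[4, 4]
```

Here `blind == blind2` while `leaky != leaky2`, matching the rebuttal's claim that the dilation of the follow-up convolution, not the mask alone, is what keeps the network J-invariant.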
Rebuttal 1: Rebuttal: We thank reviewers for the positive comments and encouraging remarks: “The authors observe that the commonly used simple structure, such as multi-scale structure, in image denoising violates the J-invariance and thus cannot be used in BSNs.” (**nv4m**) “The proposed method is interesting and alleviates the constrained architecture design imposed by J-invariance opening up possibilities for new designs of BSNs.”(**nv4m**, **daq5**, **DQiP**, **jJkN**, **3t6u**) “The proposed method outperforms prior arts significantly” (**nv4m**, **daq5**, **jJkN**, **3t6u**) “The submission is technically sound in my opinion and the advantages and limitations of this work are discussed carefully and honestly.” (**nv4m**, **daq5**, **jJkN**, **3t6u**) “The submission is written with sufficiently clear definitions and formulas. And the organizations are well-designed.” (**nv4m**, **daq5**, **jJkN**, **3t6u**) We sincerely appreciate your thorough understanding of the method and careful review of the paper. We are truly grateful for your valuable insights and advice.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces PUCA, a J-invariant U-Net for self-supervised image denoising. Specifically, the authors propose a patch-unshuffle and dilated attention block to allow the use of the multi-scale structure for enlarging the receptive field. Extensive experiments demonstrate that the proposed PUCA outperforms existing methods by a notable margin. There is also adequate analysis to illustrate the correctness and properties of the proposed method. Strengths: 1. Good Quality. The submission is technically sound in my opinion and the advantages and limitations of this work are discussed carefully and honestly. 2. Good Clarity. The submission is written with sufficiently clear definitions and formulas. And the organization is well-designed. 3. Good motivation. The authors observe that the commonly used simple structure, such as multi-scale structure, in image denoising violates the J-invariance and thus cannot be used in BSNs. 4. Solid solution. The authors propose patch unshuffle to achieve downsampling without violating J-invariance. In addition, proof is also provided to demonstrate this property of patch unshuffle theoretically. 5. Sufficient experiments. The authors conduct sufficient experiments and ablation studies to demonstrate the effectiveness of the proposed method. The results show that the proposed method outperforms all existing methods by a notable margin. Weaknesses: No obvious weaknesses. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In L28, the authors emphasize the generalization issue of the supervised method and note that the unsupervised method can generalize better. However, this claim is not justified. Can the authors show a cross-dataset validation of both the supervised method and the unsupervised method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The paper contains an adequate discussion of social impacts and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** In L28, the authors emphasize the generalization issue of the supervised method and note that the unsupervised method can generalize better. However, this claim is not justified. Can the author show the cross-dataset validation of both the supervised method and the unsupervised method? **QA1** Sorry for the confusion. The meaning of L28 in the main text was that supervised denoising approaches require extreme cost in data collection, making it challenging to gather large-scale data, which would inevitably lead to limited generalization ability. We will revise the main text to resolve this confusion. In line with your request, we conducted cross-dataset experiments (testing the model trained on SIDD with the DND benchmark) and obtained the following reasonable results: | | SIDD (PSNR/SSIM) | DND (PSNR/SSIM) | |------------------------------|------------------|-----------------| | Restormer (SIDD trained)[43] | 40.02/0.960 | 40.03/0.956 | | PUCA (SIDD trained) | 37.54/0.936 | 38.60/0.940 | The results of Restormer, a type of supervised image denoiser, are from the results reported in the original paper [43]. We would like to emphasize the significance of our self-supervised image denoiser demonstrating comparable performance to the supervised image denoiser. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comments Comment: Thanks for the authors' feedback. It has addressed my question. I'll maintain my original rating. --- Reply to Comment 1.1.1: Title: Thank you for the valuable review! Comment: Thank you for your review, and we are pleased to hear that the concerns have been effectively addressed through the rebuttal. We appreciate your decision to maintain the original rating. Your feedback has been instrumental in improving the quality of the manuscript.
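The patch-unshuffle downsampling discussed in this review can be illustrated with a minimal NumPy sketch (the function name, patch size, and shapes are illustrative assumptions, not taken from the paper): the image is rearranged into $s^2$ sub-images, each holding every $s$-th pixel, so spatially adjacent (noise-correlated) pixels land in different sub-images and per-sub-image processing does not mix a pixel with its immediate neighbours.

```python
import numpy as np

def patch_unshuffle(img, s):
    """Rearrange an (H, W) image into s*s sub-images of shape (H//s, W//s),
    each holding every s-th pixel. Adjacent pixels end up in different
    sub-images, which is the property that makes such downsampling
    compatible with blind-spot (J-invariant) processing."""
    h, w = img.shape
    assert h % s == 0 and w % s == 0
    return (img.reshape(h // s, s, w // s, s)
               .transpose(1, 3, 0, 2)
               .reshape(s * s, h // s, w // s))

img = np.arange(16).reshape(4, 4)
subs = patch_unshuffle(img, 2)
# subs[0] holds the even-row/even-column pixels: [[0, 2], [8, 10]]
```

The inverse rearrangement (a "patch shuffle") simply reverses the reshape/transpose steps.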
null
null
null
null
null
null
Connecting Certified and Adversarial Training
Accept (poster)
Summary: The paper presents TAPS, an unsound certified training method that combines the advantages of certified training IBP and adversarial training PGD. TAPS first splits the neural network into two parts, the feature extractor and the classifier. TAPS then uses IBP to propagate the over-approximation through the feature extractor. TAPS uses PGD to estimate multiple adversarial examples inside the over-approximated box and trains with these adversarial examples. The challenge is how to backpropagate gradients through the PGD part in the middle. TAPS designs a gradient estimator to connect the backpropagation. TAPS can also be combined with the current state-of-the-art method, SABR, to further reduce the regularization in the feature extractor, leading to higher natural accuracy and certified accuracy. The experiment results show that TAPS and STAPS (TAPS+SABR) achieve the highest certified accuracy on MNIST, CIFAR10, and TinyImageNet, except for CIFAR-10 8/255. Strengths: 1. The paper presents a training method that beats the state of the art. 2. The paper conducts extensive ablation studies. Weaknesses: 1. In Table 1, STAPS has the best results in two out of five settings. And TAPS has worse results than SABR in two out of five settings. It seems SABR is as important as TAPS for STAPS. Then the discussion in Section 3.5 needs more rigorous justification. For example, the paper states that "the exponential growth of BOX abstractions still causes a strong regularization of later layers". The experimental illustration of this claim is missing. A layer-by-layer comparison of Figure 5 would be interesting to see. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Comment: 1. In line 92, $\bar{\mathbf{o}}^\Delta > 0$ should be $\bar{\mathbf{o}}^\Delta < 0$. 2. In Section 3.4, between lines 123 and 124, the gradient has an additional factor $2$. Questions: 1.
In lines 162-164, the paper states that the j-th dimension of the latent adversarial examples is independent of the bounds in the i-th dimension. However, PGD will be affected by each dimension. Is this statement an assumption, or are these dimensions in fact independent? 2. In Figure 3, it seems $c$ cannot be greater than 0.5, otherwise the gradient connector cannot be a valid function. However, in Figure 7, $c$ can be larger than 0.5, and even achieves higher natural accuracy. What do I miss here? 3. In Table 2, some settings have a time limit on MN-BaB. Does it mean the certified accuracy is over-approximated, e.g., timed-out samples are considered as not certified? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The paper does not address any limitation. How to efficiently select hyper-parameters is a large problem for this type of training method. For example, in Table 2, it might not be possible to completely compute the certified accuracy and to compare settings based on these metrics. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer $\Rf$ for their insightful feedback, helpful suggestions, and interesting questions. Below, we address their questions. **Q1: Is the independence of a given dimension of the PGD adversarial example from the IBP bounds in a different dimension an assumption?** A: Great question! This depends on the setting one considers: When performing a step of PGD, the bounds in the $i$-th dimension have no impact on the $j$-th dimension of the obtained adversarial example, as Box bounds are axis-parallel and thus affect neither the gradient sign nor the projection in that dimension. However, when assuming an optimal adversarial attack capable of finding the global minimum over its perturbation space, the resulting adversarial example would indeed depend on all dimension-wise bounds jointly. Thus, this independence holds rigorously (up to initialization) for a single-step attack and constitutes a mild assumption otherwise; we are happy to add a corresponding discussion to the relevant section. **Q2: In Table 2, some settings are noted to have timed out. What does this imply for certified accuracy?** A: Typically, branch-and-bound based neural network verifiers such as MN-BaB, the one we use, apply a time-out per sample and consider all samples where verification times out to be uncertified. However, in Table 2, we refer to a total timeout for the verification process. The affected settings in Table 2 are particularly hard to certify, leading to frequent time-outs and thus very long verification times. As the obtained partial results already showed these settings to be less attractive, we decided to only evaluate part of the test set and then report the mean across the evaluated portion. Evaluating the slowest setting would require roughly 100 GPU days. **Q3: Can you extend the discussion of STAPS in Section 3.5, e.g. expanding on the mentioned exponential growth of box abstractions?** A: Yes!
Please see the main response for a more detailed discussion of STAPS which we are happy to include in Section 3.5. While, due to space constraints, Section 3.5 focuses more on outlining the mechanics behind the complementarity of SABR and TAPS rather than a full discussion of all intricacies of STAPS, we are happy to add an extended version of the below discussion and corresponding plots to the appendix. The exponential growth of Box abstractions, mentioned there, has been established both theoretically and empirically in prior work [1,2], which we are happy to highlight in the relevant section. As Figure 5 shows the worst-case loss approximation error, which can only be computed on the output, a directly analogous layerwise comparison is not possible. While we could compute and compare the mean side lengths of the Box abstractions obtained in different layers, this is very computationally expensive for PGD propagation, as the bound in every dimension requires a separate attack to estimate. However, as the TAPS and IBP and STAPS and SABR bounds are identical in the feature extractor (before the split), their pairwise difference isolates the under-approximating effect of the PGD propagation through the classifier. While this can already be seen in Figure 5, we have added versions of the figure showing the distribution of the pairwise differences directly to the PDF attached to the general reply. We are happy to include these in our next revision. **Q4: Does the gradient in line 213-214 have an extra factor of 2?** A: No, when choosing $\alpha = 0.5$, both scaling terms ($2\alpha$ and $(2-2\alpha)$) evaluate to $1$ and we recover the standard gradient as given by the product rule. Choosing a different $\alpha$ allows us to scale the gradients of the two loss components in the employed multiplicative regularization, as is common for additive regularizations. **Q5.
Why can the parameter $c$ in the gradient link, illustrated in Figure 3, be greater than 0.5?** A: We believe the confusion might stem from Figure 3 showing two functions corresponding to the partial derivative with respect to the upper and lower bound, respectively. Both can generally have a non-zero gradient for the same coordinate of the adversarial example, as is possible for $c > 0.5$. Thanks for pointing out the typo in Line 92; we will correct it. **References** [1] Müller et al. "Certified Training: Small Boxes are All You Need.", ICLR’23 [2] Shi et al. "Fast certified robust training with short warmup." NeurIPS’21 --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks for the authors' efforts and detailed replies. I will raise my score. Overall, I'd like to see this paper being accepted. For Q1, please add the corresponding discussion to the relevant section. For Q2, please add how you compute the numbers, i.e., "evaluate part of the test set and then report the mean across the evaluated portion", to the paper or appendix.
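The $\alpha$-scaling described in the answer to Q4 can be sketched numerically (the function and variable names are illustrative assumptions; the rebuttal only specifies the two scaling terms $2\alpha$ and $(2-2\alpha)$ and that $\alpha = 0.5$ recovers the product rule):

```python
def scaled_product_grad(l1, g1, l2, g2, alpha=0.5):
    """Gradient of the multiplicative loss L = l1 * l2 with the two
    product-rule terms re-weighted by 2*alpha and (2 - 2*alpha);
    alpha = 0.5 recovers the exact product-rule gradient."""
    return 2 * alpha * g1 * l2 + (2 - 2 * alpha) * l1 * g2

l1, g1 = 3.0, 0.5   # first loss component and its gradient (toy values)
l2, g2 = 2.0, -1.0  # second loss component and its gradient
exact = g1 * l2 + l1 * g2  # standard product rule
assert scaled_product_grad(l1, g1, l2, g2, alpha=0.5) == exact
```

Larger (smaller) $\alpha$ up-weights the gradient flowing through the first (second) loss component without changing the loss value itself, mirroring how a coefficient re-weights terms in an additive regularization.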
Summary: This paper aims to improve the combination of adversarial training and certified training for certified robustness. A gradient connector is proposed for jointly conducting these two kinds of training. Results show some improvement in certified robustness after training, as well as a more precise approximation of the worst-case loss. Strengths: - This paper proposes a "gradient connector" which enables end-to-end training with both adversarial training and IBP-based certified training. - There is some empirical improvement on the certified accuracy in some settings (TinyImageNet), compared to prior works combining adversarial training and IBP training. - The proposed method can more precisely estimate the worst-case loss, compared to PGD, IBP, or prior methods doing the combination. Weaknesses: The empirical improvement on the major metric, certified accuracy, is very marginal and sometimes negative: - Compared to SABR (Muller et al., 2022a), the absolute improvement on MNIST is only 0.17% or 0.22%. - On CIFAR, simply using the proposed method does not bring any improvement but may even yield lower certified accuracy. - On CIFAR eps=2/255, combining SABR and the proposed method only has an absolute improvement of 0.14%. On CIFAR eps=8/255, doing this combination still leads to worse results compared to SABR alone. Overall, the current results on all three datasets do not sufficiently demonstrate that the proposed method is effective in practice. It is still possible that the tiny differences on MNIST and CIFAR may come from randomness, yet standard deviations are not reported. ==Updates== Thanks to the authors for the explanations. I understand that both TAPS and STAPS are contributions of this work. However, while this paper looks overall good, the empirical results still look kind of weak to me (as mentioned in the first point and the third point in my original review). Thus I am maintaining my original rating.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See "Weaknesses". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer $\Rtr$ for their insightful feedback, helpful suggestions, and interesting questions. We are delighted they appreciate our novel gradient connector and found our performance improvements on the challenging TinyImageNet satisfactory. Before addressing the reviewer’s remaining questions, we would like to respectfully ask them to outline any soundness concerns they might have (leading to a soundness score of 2), as their review does not touch on any such point. **Q1: Can you comment on the magnitude of improvement of TAPS and STAPS over SABR?** A: First, we want to highlight that both TAPS and STAPS are core contributions of this work and thus believe that TAPS alone realizing a comparable or bigger improvement over prior work than SABR (which achieved the biggest improvement in years) highlights the promise of our method rather than being a weakness. Second, in settings where low regularization strength is particularly desirable (CIFAR-10 2/255 and TinyImageNet 1/255), TAPS and SABR can be efficiently combined into STAPS to achieve even higher performance, highlighting the complementarity of the two methods (discussed in more detail in Section 3.5). Third, while the absolute improvements on MNIST might be small, they are much bigger than, e.g., those of SABR over SortNet and correspond to significant portions of the remaining error (9.5% and 3.3%), highlighting their importance. **Q2: Can you report the standard deviation of the considered performance metrics?** A: We first want to highlight that we already report standard deviations for MNIST in Table 14. We are happy to also report statistics for all other settings in the next revision of this work. However, given that prior work often only reports the best observed results, putting these results in the right context would require reproducing and reporting statistics for all baseline methods as well, thus carrying significant computational costs.
Summary: This paper proposes a method called Training via Adversarial Propagation through Subnetworks (TAPS) for improving certified adversarial robustness. Specifically, TAPS splits the network $f$ into a feature extractor $f_E$ and a classifier $f_C$. During training, TAPS first uses interval bound propagation (IBP) to bound the feature extractor's exact reachable set in the embedding space and then conducts adversarial attacks (PGD) on the classifier in the embedding space. This method can also be combined with other state-of-the-art methods like SABR (yielding STAPS), which can further improve its performance. ## post-rebuttal I've updated my score since my concerns are adequately addressed. Strengths: 1. This paper is well-motivated and well-organized. 2. The idea of connecting adversarial training and IBP training is novel and impressive. 3. The experiments show that the proposed method can outperform state-of-the-art methods in several settings. Weaknesses: 1. The certified robustness for a larger perturbation bound (8/255) is not comparable with SortNet, showing the limitation of TAPS/STAPS under certain settings. Additionally, the comparison under a larger perturbation bound (>1/255) for TinyImageNet is not provided. Based on the comparison under 8/255 for CIFAR-10, this reviewer infers that SortNet may also outperform TAPS/STAPS for a larger dataset with a larger perturbation bound. 2. The details of how to combine TAPS and SABR are not provided in Section 3.5. 3. This work only focuses on $\ell_\infty$ robustness and does not show scalability to other metrics of robustness, such as the $\ell_2$-norm. 4. It seems that several experiments in this paper were not completed when submitted (Tables 2 and 3). However, I think this is acceptable since the main comparison is complete. The authors should complement the missing results if accepted. ### Minor Comments 1. Line 11.
I personally suggest removing the claim of publication for your implementation and networks, even though you have anonymized the code link. As this paper is still under review, the code has not yet been published. 2. Line 44. The full spelling and reference for SABR are missing. Also, the details of SABR are not sufficiently introduced in this paper. 3. Line 105. I suggest replacing the claim that adversarial training is "vulnerable" under stronger attacks with a more moderate tone. Even under AutoAttack, the robustness of an adversarially trained model is only slightly lower than under PGD. So far, adversarial training is still one of the most effective methods to improve adversarial robustness. Thus, this assertion on adversarial training methods is unfair to this research area and may mislead the community. 4. Line 148. Is $\theta_F$ exactly $\theta_E$? 5. Line 595. The authors stated that they provided detailed descriptions, but the location is shown as "??". Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Line 36: Can you explain what "tractable" means in this context? 2. Lines 114-115: How is $x'$ selected for SABR? I suggest adding more details here since SABR is the main baseline of your method. 3. Although the code is provided, this reviewer struggled to understand it. In particular, where is the command and variable for splitting the network? 4. What is the constraint ($\epsilon$-ball) for the PGD attack in TAPS? Since PGD is conducted in the embedding space, the constraint on the input space may not be effective. 5. How about using PGD for the feature extractor and IBP for the classifier? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer $\Rt$ for their insightful feedback, helpful suggestions, and interesting questions. Below, we address their questions. **Q1: Why is only $\ell_\infty$-norm robustness and no $\epsilon$ larger than $1/255$ for TinyImageNet evaluated? How would SortNet perform in such a setting?** A: We follow the conventions of the field in both of these respects: First, works investigating convex relaxation based certified robustness (such as TAPS) almost always investigate only $\ell_\infty$-norm robustness [1,2,3,4,5,...]. Second, obtaining non-trivial certified accuracy on TinyImageNet is already highly challenging for $\epsilon = 1/255$, thus larger radii are typically not considered in the literature [1,3,4,6] (we are not aware of any results). Further, while SortNet’s [6] performance on CIFAR-10 for $\epsilon = 8/255$ is impressive, we believe this is not necessarily an indication of great performance on TinyImageNet at larger radii. In fact, on TIN at $\epsilon = 1/255$ it is only on par with IBP, and on MNIST it is dominated by STAPS regardless of radius. **Q2: Can you expand the background on SABR and add how TAPS and SABR combine to STAPS?** A: Please see the main response for a detailed reply! **Q3: Are the results in Tables 2 & 3 complete?** A: Great question, we will clarify this in the text. Table 3 is complete and the missing results simply show the instability of single-estimator PGD, thus highlighting the importance of our multi-estimator PGD. In Table 2, we decided not to complete the evaluation of the last two rows, as this would require well over 100 GPU days and the partial results already show a severe drop in certified accuracy. **Q4: Can you clarify the meaning of ``tractable'' (L36)?** A: Generally, the verification problem is NP-hard [7]. However, recent branch-and-bound-based approaches [8] can solve many practical instances efficiently. We will clarify this.
**Q5: In your code, what is the command and parameter for splitting the network?** A: TAPS is implemented in ```torch_model_wrapper.py``` in the class ```BoxModelWrapper```. The method ```split_net_to_blocks``` of this class splits the network into feature extractor and classifier. When the code is publicly released, we will provide documentation containing such details. **Q6: What are the constraints/bounds for PGD in the embedding space utilized in TAPS?** A: As we outline in L133ff, we first propagate $\mathcal{B}(x, \epsilon)$ via IBP through $f_E$. This results in interval bounds $[\underline{\mathbf{z}}, \overline{\mathbf{z}}]$, describing a hyper-rectangle (i.e., a stretched $\ell_\infty$-ball). We then conduct an adversarial attack using PGD within this hyper-rectangle in the latent space of the model. **Q7: Could you use PGD for the feature extractor and IBP for the classifier?** A: While this is possible in theory, in practice it is infeasible. For each bound (upper and lower) in each dimension we require one PGD attack. As the latent space in many layers has over 20,000 dimensions, this requires large amounts of compute and memory per sample. Note that, when using the last layer, we only need to upper bound logit differences and thus require only (#classes - 1) attacks. **References** [1] Müller et al. "Certified Training: Small Boxes are All You Need.", ICLR’23 [2] Balunovic and Vechev. "Adversarial training and provable defenses: Bridging the gap." ICLR’19 [3] De Palma et al. "IBP regularization for verified adversarial robustness via branch-and-bound." [4] Shi et al. "Fast certified robust training with short warmup." NeurIPS’21 [5] Zhang et al. “Towards Stable and Efficient Training of Verifiably Robust Neural Networks” ICLR’20 [6] Zhang et al. “Rethinking Lipschitz neural networks and certified robustness: a Boolean function perspective.” arXiv [7] Katz et al. "Reluplex: An efficient SMT solver for verifying deep neural networks."
CAV’17 [8] De Palma et al. "Improved branch and bound for neural network verification via lagrangian decomposition." arXiv --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the rebuttal. Most of my concerns have been addressed, and I will raise my score when review editing is allowed. For Q4, I am still a bit confused about what ``tractable'' means. Does this mean the verification problem is not NP-hard? Additionally, please repaint my name with red color, which is my favorite color. Best, --- Reply to Comment 1.1.1: Title: On the tractability of neural network verification Comment: We thank Reviewer $\textcolor{red}{8yjL}$ for their quick reply and are happy that we were able to address all their concerns. **Tractability of the Neural Network Verification Problem** Generally, neural network verification remains NP-hard. However, as with many NP-hard problems, many instances are efficiently decidable (think SAT/SMT solvers). In the case of neural network verification, whether an instance is efficiently solvable depends on the combination of input, robustness specification, network, and verifier. Until recently, so-called incomplete verification methods were commonly used, which (typically) have a fixed precision and can either decide a property or return that the result is unknown. Such verifiers (like IBP) were generally only able to verify networks heavily regularized towards verifiability (at the cost of significantly reduced standard accuracy). While complete verification methods can decide any neural network verification property given sufficient (in the worst case exponential) time, these methods were typically based on mixed integer linear programming or SAT/SMT solvers and too inefficient for neural networks of relevant size. However, recently, much more efficient branch-and-bound based complete verifiers have been proposed, which are efficient enough to be applied to much less heavily regularized networks.
This is what we mean when we say their certification has become (practically) tractable. $\textcolor{red}{\text{Unfortunately, we cannot change the color of reviewer }} \textcolor{red}{8yjL \text{ in the main response, however, we hope they enjoy this red text.}}$
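The propagation scheme described in the answer to Q6 above can be sketched in a few lines of NumPy (toy layer sizes; a random stand-in replaces the actual loss gradient — this is not the authors' implementation): IBP yields axis-parallel interval bounds after the feature extractor, and a PGD step in the latent space is kept inside that hyper-rectangle by clipping.

```python
import numpy as np

def ibp_linear_relu(x_lo, x_hi, W, b):
    """Interval bound propagation through y = relu(W @ x + b): because the
    input box is axis-parallel, each output bound follows from splitting W
    into its positive and negative parts."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x, eps = rng.normal(size=4), 0.1

# IBP through the (toy, one-layer) feature extractor gives the latent box.
z_lo, z_hi = ibp_linear_relu(x - eps, x + eps, W, b)

# One PGD-style latent step: move along a (stand-in) gradient sign, then
# project back into the hyper-rectangle [z_lo, z_hi] by clipping.
z = (z_lo + z_hi) / 2
z = np.clip(z + 0.05 * np.sign(rng.normal(size=3)), z_lo, z_hi)
assert np.all(z_lo <= z) and np.all(z <= z_hi)
```

By construction, the box $[\underline{\mathbf{z}}, \overline{\mathbf{z}}]$ contains the image of every input in $\mathcal{B}(x, \epsilon)$, so the latent attack searches an over-approximation of the truly reachable set.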
Summary: The paper proposes TAPS -- a method to combine IBP and PGD to train better certified networks. The authors observe that IBP on its own leads to an overestimation of the inner adversarial loss, whereas PGD leads to an underestimation. Therefore, intuitively, the approximation errors may compensate for each other during training. Thus, TAPS leverages PGD on the pre-classification embedding, bounded using IBP, to generate a better loss approximation. Given the non-differentiable nature of this, the authors propose a rectified linear gradient approximation to allow end-to-end training. Empirical results are provided for CIFAR10, MNIST and TinyImageNet, showing improvement over pure IBP and other IBP-approximation approaches. Strengths: 1. The proposed approach is well-motivated and the authors provide clear justification through experimental analysis. 2. The gradient connector is a novel mechanism allowing end-to-end training of PGD + IBP networks. This is a creative approach towards combining two connected but complementary methods, essentially improving on COLT, which sequentially leverages both. 3. The experiments are thorough and support the claims in the paper. 4. The paper is well-written and thoughtfully explains the intuition for every step in the algorithm. Weaknesses: While not a major weakness, I would be interested to see if leveraging stronger or weaker attacks during the PGD training step significantly changes results. Perhaps a simple experiment with varying PGD iterations, or even a stronger attack like AutoAttack, would clearly answer this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the computational overhead of TAPS/STAPS over other methods? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have clearly mentioned limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer $\Ro$ for their insightful feedback, helpful suggestions, and interesting questions. Below, we address their questions. **Q1. What is the computational overhead of TAPS/STAPS over other methods?** A: When using single-estimator PGD, TAPS is strictly faster than SABR [1], as it requires only a partial IBP propagation and an adversarial search over just the classifier component. When using multi-estimator PGD, the runtime trade-off depends on the number of classes and the size of the classifier component. Here, TAPS requires multiple PGD attacks over a smaller network (component) while SABR requires a single attack over the whole network. Compared to TAPS, STAPS requires an additional adversarial attack over the whole network and is thus always slightly slower than SABR. However, both TAPS and STAPS are notably faster than COLT [2], as IBP propagation is much faster than DeepZ [3] and the gradient connector makes COLT’s complex training in multiple stages obsolete. For TinyImageNet, we find TAPS to already be consistently slower than SABR, but believe that designing strategies interpolating between single- and multi-estimator PGD is an interesting item for future work that has the potential to change this. We provide runtimes in Tables 6 and 8 in Appendix B, for TAPS and STAPS, respectively. **Q2: What is the effect of the adversarial attack’s strength on the obtained results?** A: Great question! We have conducted a corresponding experiment using 1 to 100 attack steps with 1 or 3 restarts to investigate this for MNIST $\epsilon=0.3$ and the CNN7 architecture used for our main results and report results in the table below. Interestingly, even a single attack step and restart are sufficient to achieve good performance with TAPS, in particular, outperforming standard IBP.
As we increase the strength of the attack, we can increase certified accuracy slightly while marginally reducing natural accuracy, agreeing well with our expectation that regularization strength increases with attack strength. We are happy to include these results in the next revision of the paper. We also provide a nicely rendered version of this table in the PDF attached to the general reply. | Restarts | Number of Steps | Natural | Certified | |:--------:|:---------------:|:----------:|:----------:| | 1 | 1 | **0.9822** | 0.9336 | | | 5 | 0.9790 | 0.9315 | | | 20 | 0.9778 | 0.9343 | | | 100 | 0.9794 | **0.9346** | | 3 | 1 | **0.9822** | 0.9347 | | | 5 | 0.9790 | **0.9355** | | | 20 | 0.9799 | 0.9352 | | | 100 | 0.9799 | **0.9355** | **References** [1] Müller et al. "Certified Training: Small Boxes are All You Need.", ICLR’23 [2] Balunovic and Vechev. "Adversarial training and provable defenses: Bridging the gap." ICLR’19 [3] Singh et al. "Fast and effective robustness certification." NeurIPS’18 --- Rebuttal Comment 1.1: Title: Comment on the rebuttal Comment: Thank you for answering my questions. After reading through the other reviews and the rebuttal, I maintain my earlier score. Overall, I find the paper to present an interesting approach to connecting adversarial training and certification based methods.
Rebuttal 1: Rebuttal: $\newcommand{\Ro}{\textcolor{purple}{jSFD}}$ $\newcommand{\Rt}{\textcolor{green}{8yjL}}$ $\newcommand{\Rtr}{\textcolor{blue}{5pzA}}$ $\newcommand{\Rf}{\textcolor{orange}{hZus}}$ We thank all reviewers for their insightful feedback, helpful suggestions, and interesting questions. We were encouraged that they found our work well-motivated ($\Ro$, $\Rt$), novel ($\Ro$, $\Rt$), and well supported by our state-of-the-art empirical results ($\Ro$, $\Rt$, $\Rtr$, $\Rf$) and extensive ablations ($\Rf$). We answer the sole shared question here, before addressing the reviewer-specific ones in individual responses, and look forward to the reviewers' replies. **Q1: Can you expand the discussion on STAPS in Section 3.5 and the relevant background on SABR? ($\Rt$, $\Rf$)** Yes! We are happy to extend the background on SABR as well as the discussion of STAPS in Section 3.5, as outlined below. At a high level: IBP training propagates the entire input region $\mathcal{B}(x, \epsilon)$ for each sample $x$ in order to evaluate and then optimize the IBP loss (Eq. (2)). In contrast, SABR propagates only a small subset $\mathcal{B}(x', \tau) \subseteq \mathcal{B}(x, \epsilon)$, chosen by performing an adversarial attack to select $x'$, but otherwise uses the same loss and training procedure. In STAPS, we simply replace the IBP component with SABR, thus only propagating a small subset $\mathcal{B}(x', \tau)$ of the input region, selected using an adversarial attack, to compute the bounds for the adversarial attack in the latent space of the feature extractor. **In the attached PDF we provide a version of Figure 5 that highlights the difference between IBP and TAPS (as well as SABR and STAPS) for Q3 of $\Rf$.** Pdf: /pdf/05e73ac97d950706617da0358a2058536800917a.pdf
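As an illustration of the box-propagation contrast described above, the following sketch pushes both the full region $\mathcal{B}(x, \epsilon)$ and a small SABR-style sub-box through one affine+ReLU layer with interval bound propagation. The weights, radii, and the sub-box center are hypothetical stand-ins (in SABR the center would come from an adversarial attack); this is not the authors' code.

```python
import numpy as np

def ibp_affine_relu(center, radius, W, b):
    """Propagate an axis-aligned box through y = relu(W x + b) with IBP."""
    c = W @ center + b
    r = np.abs(W) @ radius          # interval arithmetic for the affine part
    lower = np.maximum(c - r, 0.0)  # ReLU is monotone, so clamp both ends
    upper = np.maximum(c + r, 0.0)
    return lower, upper

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.2])
x, eps = np.array([0.3, 0.7]), 0.2

# Full input region B(x, eps), as propagated in plain IBP training.
lo_full, up_full = ibp_affine_relu(x, np.full(2, eps), W, b)

# Small SABR-style sub-box B(x', tau) contained in B(x, eps).
x_adv, tau = np.array([0.4, 0.6]), 0.05   # x' would be attack-selected
lo_sub, up_sub = ibp_affine_relu(x_adv, np.full(2, tau), W, b)

print((lo_sub >= lo_full).all() and (up_sub <= up_full).all())  # tighter bounds
```

The sub-box yields strictly tighter output bounds, which is the source of SABR's reduced regularization compared to propagating the full region.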
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows
Accept (poster)
Summary: The paper proposes a way to train generative models such that they obtain a user-defined tradeoff between sample fidelity and variety. The main findings are that the definition of precision and recall from previous works (by Simon et al.) can be written as an $f$-divergence (named PR-divergence) and that other $f$-divergences, such as KL or reverse-KL, can be formulated in terms of PR-divergences. Strengths: The strengths of the paper are the thorough analysis of the $f$-divergences and the solid theoretical foundation of the PR-divergence. The paper provides interesting insights into which PR-tradeoff commonly used divergences, such as the KL, optimize for. Additionally, it proposes an algorithm to train generative models using an auxiliary divergence, since training models with $f$-GAN is difficult in practice. Weaknesses: The main weakness of the paper is the empirical results. Although the method works as expected from the theory and can be used to explicitly control the tradeoff between precision and recall, it is far behind the current state-of-the-art. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. To further demonstrate the usefulness of explicitly controlling the PR-tradeoff, I would like to see FID and PR curves from the following comparison: truncating BigGAN vs. applying classifier-free guidance to ADM models vs. training a separate model using different $\lambda$ to control the PR-tradeoff (on CIFAR-10, ImageNet 128x128 datasets). 2. Why do the FID results using BigGAN differ from the official results, reported by Brock et al. [1], on CIFAR-10 (13.37 vs. 14.73) and ImageNet 128x128 (9.83 vs. 8.7)? Two things that come to mind could be that you either use a validation set to compute FID or a different instance of the Inception-V3 network (see App. A of [2] for other possible explanations). Also, a minor note: on ImageNet 128x128 the baseline actually has the best FID instead of the model with $\lambda=0.2$. Minor notes from checking the proofs: 3.
There are different versions of defining $\lambda \in [0, \infty]$ in Definition 4.1 and Theorem 4.3. What is the difference? 4. Proof of Theorem in App. Eq. (6): Is the last term missing $\hat{p}(x)$, but it is correctly there in Eq. (7)? 5. App. Eq. (8): the first integral term has an extra $($ and is missing $\textrm{d}\boldsymbol{x}$? 6. App. B.5: Title should be Proof of Theorem 4.4. 7. App. Eq. (30): The third term is missing $u_{\textrm{min}}$ from the denominator but it reappears correctly in Eq. (31)? Why does $c(1/u_{\textrm{max}})$ suddenly change to $c(0)$ (it cancels out but I'm still curious)? 8. Why does a $\lim$-term appear in App. Eq. (32) when differentiating w.r.t. $u$? 9. Is there a typo in Theorem 5.2? Should it be $r(x) = \nabla g^*(T(x))$ instead of $f^*$? [1]: Brock et al., Large Scale GAN Training for High Fidelity Natural Image Synthesis [2]: Kynkäänniemi et al., The Role of ImageNet Classes in Fréchet Inception Distance Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your comprehensive review and insightful comments are greatly valued, and we thank you for them. We would like to address the concerns and questions you raised: **1. Empirical Results and PR-tradeoff:** We acknowledge the importance of empirical results. We've added results for truncation on the baseline BigGAN in the General Rebuttal. As observed, truncation primarily enhances the baseline precision, but it does not significantly improve recall. **2. FID Differences:** The discrepancy in FID for CIFAR-10 arises from our use of PyTorch, for which pretrained weights for BigGAN are unavailable. Consequently, we utilized a version of BigGAN that we trained ourselves. Additionally, as you rightly pointed out, we employed the PyTorch version of the Inception model, leading to FID differences from the official TensorFlow metrics. To address this, we recalculated FIDs for every model using a consistent method. **3. Addressing the Main Weakness:** The primary objective of our paper is to introduce a loss function that can effectively trade off between precision and recall. This loss can be applied to a wide range of generative models, including StyleGAN-XL and diffusion models. We firmly believe that our experimental setup validates the efficacy of our method. It's essential to note that our focus isn't solely on achieving state-of-the-art results. Instead, we aim to showcase the versatility of our loss function. The notion of "state-of-the-art" is contingent on the specific model used and the user's preference for either recall or precision. **Typos and Clarifications:** - **Lambda Domain:** The domain of $\lambda$ only differs in notation. In both cases, it encompasses all positive values, including 0 and $+\infty$. We will ensure consistent notation in the final manuscript. - **Equation Typos:** We appreciate your keen observation of the typos in Eq. (6), Eq. (8), and the title of App. B.5. These will be corrected.
- **Bounds on Lambda:** We transitioned from the $0$ and $+\infty$ bounds on $\lambda$ to $1/u_{\max}$ and $1/u_{\min}$ in Theorem 4.4. The discrepancies in the proofs that you highlighted will be addressed. Thank you for pointing them out. - **Theorem 5.2 Notation:** The notation in Theorem 5.2 is accurate. However, we acknowledge the reversed $f$ and $g$ notations in the appendix. This will be rectified in the final manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough answers to my feedback and the new interesting experiments! 1. Thank you for the additional experiment of explicitly controlling the P&R tradeoff vs. truncation. Truncation is a way to trade variation for fidelity; thus, the result I hoped to see is that much higher Recall/Coverage could be achieved by explicitly controlling the P&R tradeoff with the proposed method, and I am glad to see the new data supporting this. The ImageNet results are not in line with [1] and the original results of Brock et al. [2], as truncation should improve fidelity at the cost of variation; what could be the reason for this discrepancy? As a minor note, to improve the quality of presentation of these results, I recommend showing a figure of P&R curves as in Fig. 6 of [1] to more easily observe the overall trends. 2. Thank you for checking the FID calculation and making it consistent with the literature. 3. I agree that the experimental setup demonstrates that your method works as expected from the theory, and that "the best" generative model heavily depends on the downstream task it is used for. However, it might be valuable for future work to point out this direction of applying your method to diffusion models or larger-scale generative models in the conclusions section. With the new data that you provided, I am happy to update my score.
References: [1]: Kynkäänniemi et al., Improved Precision and Recall Metric for Assessing Generative Models [2]: Brock et al., Large Scale GAN Training for High Fidelity Natural Image Synthesis
Summary: The paper introduces a technique for training generative models using an objective which approximates the so-called PR-divergence. By adjusting a parameter, the paper claims that it is possible to train an array of models, from those that prioritize high precision to those that prioritize high recall (mode seeking vs. mode covering), as well as more balanced models. The supporting experiments involving BigGAN training on CIFAR-10, CelebA64 and finetuning on ImageNet128 and FFHQ256 are described. Strengths: 1. The authors consider the family of f-divergences and show that, given a trade-off parameter $\lambda$, for a particular element ${\cal D}_{\lambda -PR}$ of this family, the minimization of this element is equivalent to the maximization of the value at $\lambda$ of the first component of the so-called PR curve from Sajjadi et al. [40]. 2. As the minimization of ${\cal{D}}_{\lambda -PR}$ via the f-GAN approach would fail, the authors describe a way to minimize a certain approximation to this objective and give an estimate for the error of this approximation. 3. The provided experiments show that for NFs-GLOW on a 2D synthetic dataset, MNIST and FMNIST, and for BigGAN on CIFAR-10 and CelebA64, training with the approximation objective and small $\lambda$ indeed leads to models with better mode covering, and, to a much lesser extent for BigGAN, a bigger tradeoff parameter $\lambda$ leads to better quality of generated samples. Weaknesses: Although the approach looks interesting, the paper raises several red flags that prevent me from endorsing it. 1. The experimental validation is limited; in particular, the gain in precision, i.e. the sample quality, for high tradeoff is insubstantial compared with other methods. One example of this is the quality of samples in Figure D.11.d: all 100 supposedly high-quality samples, at the highest tradeoff $\lambda=20$, are clearly of bad quality. 2.
Similarly, the NF results in Fig. 1 in the simple 2D synthetic setup of eight Gaussians do not really demonstrate excellent precision, despite choosing the parameter $\lambda$ supposedly largely favoring the quality of generated samples. Compare this with, e.g., the Wasserstein-GP GAN from the 2017 paper (https://github.com/caogang/wgan-gp). 3. The code for the reproducibility check is not provided; despite the mention of an anonymized repository on page 8, the link is absent. This is especially disappointing as some evaluation numbers look somewhat strange, like the P-value at $\lambda=20$ in Table 2 compared with FID, but it is not possible to try to reproduce this number or to see the details of its calculation. 4. Despite the fact that there are numerous variants of precision-recall scores in the literature, only one variant, based on k=3-NN only, from [27] is provided for evaluation of the models, which is known to not always work properly, especially in regions of low density. 5. The related works description has some lacunae, like "Reliable Fidelity and Diversity Metrics for Generative Models" (ICML'2020), from which another variant of PR scores can be used for evaluation, or "A Domain Agnostic Measure for Monitoring and Evaluating GANs" (NeurIPS'2019). 6. Out of the big diversity of GAN models, only the training of BigGAN is tested with the proposed approximation objective. 7. In several places throughout the paper, the likelihood ratio $p(x)/\hat{p}(x)$ is used, e.g. in Theorem 5.2. However, it is known that, especially in high-dimensional spaces, the support of distributions is rather small and there can be large regions with $\hat{p}(x)=0$ on which methods involving such ratios give infinite or ill-defined answers. How should this be dealt with at each occurrence of this quantity? 8. The clarity of the presentation can be improved; in particular, the relation with other, more intuitive definitions of Precision and Recall scores from e.g.
[27] and the "Reliable Fidelity and Diversity Metrics for Generative Models" (ICML'2020) paper, is not explained. 9. The complexity of the training is not described. 10. The error bars are absent in the main experiment reported in Table 2. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What is the complexity of the training procedure? 2. In several places throughout the paper the likelihood ratio $p(x)/\hat{p}(x)$ is used. How do you deal with regions where $\hat{p}(x)=0$ at each occurrence of this ratio? 3. In the experiments reported in Table 2, was only a single initialization used in each case to test the proposed method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations related to the complexity of the training procedure are not provided. Also, the limitations concerning the convergence of the proposed minimax procedure based on two different $f$ are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
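For context, the k-NN precision metric of [27] that both the review and the paper's evaluation refer to can be sketched in a few lines. This is a hedged toy version: 2D Gaussian samples stand in for real Inception feature embeddings, and k=3 matches the evaluation setting criticized above.

```python
import numpy as np

def knn_radii(X, k):
    """Distance from each point in X to its k-th nearest neighbor within X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the zero self-distance, so column k is the k-th NN

def manifold_fraction(queries, support, k=3):
    """Fraction of queries inside the union of k-NN balls around support points."""
    radii = knn_radii(support, k)
    d = np.linalg.norm(queries[:, None, :] - support[None, :, :], axis=-1)
    return float((d <= radii[None, :]).any(axis=1).mean())

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))
fake_good = rng.normal(0.0, 1.0, size=(200, 2))   # matches the real distribution
fake_far = rng.normal(8.0, 1.0, size=(200, 2))    # a mode far from the real data

precision_good = manifold_fraction(fake_good, real)  # fakes inside real manifold
precision_far = manifold_fraction(fake_far, real)
print(precision_good > 0.5, precision_far < 0.2)
```

Recall is the symmetric quantity with `real` and `fake` swapped. The review's point 4 is visible here: a single real outlier inflates its k-NN radius and can claim empty space, which is why complementary metrics such as density/coverage are often reported alongside.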
Rebuttal 1: Rebuttal: Thank you for your thorough review and feedback on our paper. We appreciate the time and effort you've dedicated to understanding our work. We would like to address the concerns you raised: **1. Improving Recall:** While methods like rejection sampling, instance selection, or truncation primarily focus on enhancing precision, often at the expense of recall, our method aims to improve recall without significantly deteriorating precision. For the sake of completeness, we have added results for BigGAN using truncation for comparison to further illustrate this point. **2. Experimental Setup and Model Choice:** The primary objective of our experimental setup is to demonstrate the efficacy of our method in tuning any given model. Our theoretical framework establishes that our approach can be applied universally across GAN or NF architectures to balance precision and recall. We believe it's unnecessary to test our method on every existing architecture, especially considering the environmental impact of training large models like StyleGAN-XL. Moreover, in our 2D experiments, we intentionally used a model with limited expressivity to clearly illustrate how our method operates. A more complex model would have matched the distribution perfectly, unlike in real-world settings, thus making our method redundant. **3. Code Availability and Additional Results:** We apologize for the oversight regarding the code. We have provided the code to the area chair. Additionally, we've included extra results in the general rebuttal, showcasing Precision Recall (Kynkäänniemi et al.) for k=5 and Density Coverage (Naeem et al.) for k=5. These new metrics further validate our findings. Addressing the questions: **Training Complexity:** As discussed in our paper, our training algorithm closely mirrors the original GAN training procedure. Consequently, both share similar algorithmic complexities, contingent on the neural network architecture used.
We will make this clear in the paper, and add precise training times of our architectures. **Estimating PR Divergence:** We estimate the PR divergence using the primal form of the f-divergence. This primal form is estimated by sampling generated data points from $\widehat{P}$: $$ \mathcal{D}_{\lambda}(P \Vert \widehat{P}) = \mathbb{E}_{\widehat{P}} \left[ f_{\lambda}\left(\frac{p(x)}{\widehat{p}(x)}\right)\right]. $$ Points where $\widehat{p}(x) = 0$ will *not* be sampled when estimating the primal form. Thus, the value of the density ratio estimator at those points has no effect on the estimation of the PR divergence. We will make this clearer in the paper. **Metrics Evaluation:** Due to computational considerations, the metrics in Table 2 are evaluated on only one instance of the model. We observed similar numbers on other runs as well, and are open to adding error bars for the revised manuscript. **Related Works:** The two papers you mention do make a good addition to the related works section - we thank you for pointing them out. However, we emphasize that our training method is focused on improving precision and recall as defined in the works of [40, 43], and thereafter used in numerous follow-up works. **Complexity of training:** The complexity of our training method (both computational and memory) is comparable to or no greater than that of existing approaches like f-GAN. We will make this clear in the paper. We hope that our clarifications address your concerns, and we are committed to refining our manuscript based on your valuable feedback. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: I'm thankful to the authors for their response, which elucidated some points. However, several issues were not properly addressed, among them: 1. From the paper's abstract: "our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either __precision__ or recall".
The weakness W1 was concerned with the __poor precision__ of the method applied to the BigGAN model. Indeed, Figure D.11.d shows the 100 samples obtained by the paper's method applied to BigGAN with the tradeoff parameter set to $\lambda=20$, which corresponds to the proposed method's highest precision. The samples are obviously of unsatisfactory quality. Why is the rebuttal answer about improving recall? The question was about performance with respect to precision. In the rebuttal, in their response to the weakness W1 concerning the problematic precision, the authors admitted that "our method is aimed at improving recall", not precision. This changes the entire paper narrative concerning "improving either precision or recall" and achieving a "specified precision-recall trade-off" as described in the paper's abstract and throughout the paper. In the reviewer's opinion, this important change of narrative requires an update of the paper and, after that, another round of reviewing. 2. The rebuttal contains the results of an experiment comparing the proposed method with the truncation method applied to the BigGAN model trained on the ImageNet 128 dataset. The authors report very low precision values for the truncation method, in the range 20-28. The authors also state that the truncation method fails in this case as it diminishes both the recall and the precision. The results for the truncation method reported in the literature in this case are actually much higher, in the range 82-88; see e.g. "Improved precision and recall metric for assessing generative models", NeurIPS 2019, Figure 6. There is also a clear trend of increasing precision as truncation diminishes in that Figure 6 from the literature. This raises the question as to whether there was a fair comparison between the methods, as the authors' results on the baseline method do not match the results from previous works.
3. The paper contains numerous occurrences of formulas with division by $\hat{p}(x)$ in the theoretical proofs of the principal results; however, $\hat{p}(x)=0$ in vast regions of the ambient space. When asked to clarify this, the authors didn't explain how to rigorously interpret these expressions in their theoretical proofs, but only mentioned how they approximate such quantities in their experimental part. 4. The anonymous GitHub repository linked in the rebuttal, last modified on the 8th of August, contains the code for one of the paper's experiments. However, I could not use it for a thorough verification of the reproducibility of the paper's results because of ethics considerations, as this would be unfair with respect to concurrent papers which didn't have the opportunity to have an extra two months for preparing their supporting materials. Because of the outlined issues, I'm maintaining the score. --- Reply to Comment 1.1.1: Comment: **Response to Reviewer's Comments:** Thank you for your detailed feedback. We appreciate the time and effort you've put into reviewing our work. We'll address each of your concerns in turn: **Precision vs. Recall:** In our rebuttal, we emphasized the improvement in recall because our method is unique in its ability to enhance recall compared to other methods like truncation, rejection sampling, or instance selection. We understand the confusion arising from our rebuttal's narrative. There is no change in the paper's narrative: we propose a method to trade off precision and recall, and for some datasets and some values of the parameter $\lambda$, we can achieve state-of-the-art results. While we accept that the visual quality might not be properly assessed for datasets like CIFAR-10, our method has demonstrated its effectiveness on MNIST, FashionMNIST, CelebA, FFHQ, and ImageNet.
**BigGAN on ImageNet128:** In various experiments, particularly in "Improved Precision and Recall" [1], the authors utilized the TensorFlow pretrained version of BigGAN, likely the "BigGAN-deep" version, explaining their superior results (there is no mention of BigGAN in their official GitHub repository). It's worth noting that the truncation experiments in [2] are not as straightforward as they might seem. Some works have shown that the relationship between truncation and precision and recall is not as clear-cut as expected (see Figure 6 in [2]). Moreover, the maximum precision and recall for k=5 and 10k samples (the setup we adopted based on your recommendations) for ImageNet128 are 84 and 82, respectively (see Table 5 in [2]). In [3], the precision and recall for BigGAN are reported as 86 and 35. This demonstrates that metrics from the literature can vary significantly and should be used to compare different models within the same framework, as we did. **Mathematical Expressions of the likelihood ratio:** As we mention in the `Rebuttal`, the fact that $p$ and $\widehat p$ can be mutually singular in practice does not pose any problems to the definition of PR divergences and the proofs. Wherever you find a likelihood ratio $p(x)/\widehat p(x)$ in the theorems and proofs, either in a function $f$, $\min$, or $\max$, it is multiplied by $\widehat p(x)$. Hence, for every $x$ in the sample space $\mathcal{X}$ where $\widehat p(x)=0$, we have $\widehat p(x) f(p(x)/\widehat p(x)) = 0$. In fact, for similar reasons, the framework of $f$-divergences can be extended to mutually singular measures, as outlined in [4]. **Anonymous GitHub Repository:** We understand your ethical concerns regarding the verification of reproducibility using the provided code. Our intention was to provide as much support as possible for our claims, and we appreciate your understanding in this matter. Thank you again for your insights and constructive feedback.
We hope this response addresses your concerns more comprehensively. [1]: Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved Precision and Recall Metric for Assessing Generative Models. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada., October 2019. arXiv: 1904.06991. [2]: Terrance DeVries, Michal Drozdzal, and Graham W. Taylor. Instance Selection for GANs, October 2020. arXiv:2007.15255 [3]: Axel Sauer, Katja Schwarz, and Andreas Geiger. StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, pages 1–10, Vancouver BC Canada, August 2022. ACM. ISBN 978-1-4503-9337-9. doi: 10.1145/3528233.3530738 [4]: Polyanskiy, Yury; Yihong, Wu (2022). Information Theory: From Coding to Learning (draft of October 20, 2022)
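The primal-form estimation argument from this thread can be sanity-checked numerically for a standard f-divergence. This is a hedged Monte Carlo sketch with 1D unit-variance Gaussians standing in for the model and data distributions, and the KL generator $f(u) = u\log u$ standing in for the paper's $f_\lambda$ (whose exact form is not reproduced here): sampling from $\widehat{P}$ means points with $\widehat{p}(x)=0$ never enter the average.

```python
import numpy as np

# Densities of P = N(mu_p, 1) and Phat = N(mu_q, 1).
mu_p, mu_q = 0.0, 1.0
p = lambda x: np.exp(-0.5 * (x - mu_p) ** 2) / np.sqrt(2 * np.pi)
q = lambda x: np.exp(-0.5 * (x - mu_q) ** 2) / np.sqrt(2 * np.pi)

f_kl = lambda u: u * np.log(u)  # generator of KL(P || Phat)

rng = np.random.default_rng(0)
x = rng.normal(mu_q, 1.0, size=200_000)       # sample from Phat, never where q = 0
estimate = float(np.mean(f_kl(p(x) / q(x))))  # primal form: E_Phat[f(p/q)]

# Closed-form KL between unit-variance Gaussians, for comparison.
analytic = 0.5 * (mu_p - mu_q) ** 2
print(abs(estimate - analytic) < 0.05)
```

The same mechanism applies to any generator $f$ with $f(u)\,\widehat{p}(x) \to 0$ where $\widehat{p}$ vanishes, which is the authors' point about the ratio being harmless in the primal form.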
Summary: The paper aims to balance the precision and recall of generative models including GANs and normalizing flows. Theoretically, the paper shows that the f-divergences to minimize can be reformulated as sums of PR-divergences, which provides a theoretical explanation of the relation between the optimization objective and the precision-recall trade-off. Empirically, the paper proposes a series of techniques to optimize the precision-recall divergence. Strengths: The paper is qualified to be accepted to NeurIPS in the following aspects: 1. Novelty: Unlike previous heuristic works on the precision-recall trade-off, this paper establishes a fundamental theoretical understanding of the PR trade-off. To the best of my knowledge, the induced method is the first work to achieve a good PR trade-off from the perspective of divergence training. 2. Significance: The proposed theoretical results can not only serve as a guide in training generative models but also as an extension of studies on the traditional PR trade-off, and thus might be applicable to other tasks like retrieval and recommendation systems. 3. Clarity: The paper is overall clear and well-written even with the heavy mathematics. 4. Soundness: No obvious errors were found in the proofs. The theories are consistent with the empirical results (Fig. 4). Weaknesses: The main concern is that the proposed optimization algorithm involves an additional hyperparameter $\lambda$, which needs to be tuned for a new dataset. Therefore, it might be an alternative to optimize the Area Under the PR Curve (AUPRC) instead of a single point on the PR curve, which has been studied in ranking problems [1,2,3,4]. The paper could be more instructive if the theoretical results could inspire an extension along this path. Ref: [1] Wang et al. Momentum accelerates the convergence of stochastic AUPRC maximization. ICML, 2022. [2] Wen et al. Exploring the algorithm-dependent generalization of AUPRC optimization with list stability. NeurIPS, 2022.
[3] Cakir et al. Deep metric learning to rank. CVPR, 2019. [4] Chen et al. AP-loss for accurate one-stage object detection. T-PAMI, 2020. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply appreciative of your thorough review and the positive feedback on our paper. Your insights are invaluable, and we would like to address the concern you raised: **Optimizing for AUC:** Your suggestion to optimize for the Area Under the Curve (AUC) is indeed insightful. In fact, we have previously explored this avenue for lower-dimensional datasets such as 2D, MNIST, and CIFAR-10. Our approach was formulated as maximizing $$ \mathrm{AUC} = \int_{0}^{+\infty}\alpha_\lambda(P\Vert \widehat P)^2 \, d\lambda. $$ By parameterizing with $\lambda=\tan(\theta)$ where $\theta\in[0, \pi/2)$, we trained models to optimize the AUC. However, our observations indicated that the model behavior was closely aligned with $\lambda=1$. Given that this did not truly lead to a trade-off between precision and recall, we chose not to include this study in the main paper. However, considering your interest, we are more than willing to add a dedicated section in the Appendix detailing our experiments with AUC optimization. --- Rebuttal Comment 1.1: Comment: Thank you for your feedback. After reading other reviewers' comments and the authors' responses, I have decided to keep the original rating. Looking forward to your thoughts on optimizing the AUC.
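The change of variables behind this parameterization (mapping $\lambda \in [0, \infty)$ onto a bounded interval via $\lambda = \tan\theta$) can be checked numerically. Since $\alpha_\lambda$ depends on the trained model and is not given here, a hypothetical integrand $g(\lambda)^2 = (1+\lambda)^{-2}$ with known integral 1 is used purely for illustration:

```python
import numpy as np

# Stand-in for alpha_lambda(P || Phat)^2, chosen so the integral has a
# closed form: integral over [0, inf) of (1 + lam)^(-2) d(lam) = 1.
g_sq = lambda lam: 1.0 / (1.0 + lam) ** 2

# Substitute lam = tan(theta), d(lam) = d(theta) / cos(theta)^2, theta in [0, pi/2).
theta = np.linspace(0.0, np.pi / 2 - 1e-6, 20_001)
integrand = g_sq(np.tan(theta)) / np.cos(theta) ** 2

# Manual trapezoid rule, to stay portable across numpy versions.
auc = float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(theta)))
print(abs(auc - 1.0) < 1e-3)
```

The transformed integrand stays bounded as $\theta \to \pi/2$, which is what makes the compactified parameterization convenient for optimization.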
Summary: This paper focuses on the fine-grained optimization of generation precision and recall. More specifically, considering the ambiguity of FID, the authors propose to directly optimize the precision and recall of generated images by developing a certain class of f-divergence, namely the PR-divergence. The paper also shows that a linear combination of the proposed PR-divergences can represent an arbitrary existing f-divergence. It further proposes a practical primal-dual estimation of the PR-divergence to relieve gradient vanishing issues during training. The experimental results verify the feasibility of adjusting $\lambda$ to adjust the generation precision and recall. Strengths: This paper proposes a unique class of f-divergence named PR-divergence (up to an affine transform) that can explicitly optimize the precision and recall of GANs. The established theory of the PR-divergence is well aligned with the precision and recall of GANs, namely the PR curve. The authors prove its connection to the vanilla f-divergence. The primal estimation of the divergence has also been provided for ease of optimization, which is proved to be equivalent to minimizing the dual objective in terms of the Bregman divergence. The presentation of this paper is clear to me. Experimental results have verified that optimizing the PR-divergence for different values of $\lambda$ can effectively alter precision and recall. Weaknesses: One of my main concerns is whether the PR-divergence can impede generation quality, although it is proved to be able to alter precision and recall during training. I was wondering whether optimizing the PR-divergence for some $\lambda$ could at least retain the same best FIDs as other existing methods, under the same architecture and batch sizes.
For example, as I read from the details in the supplementary material, the authors employed a larger batch size (128) than the default setting (64) when training on CIFAR-10 under the BigGAN architecture. However, the default setting of bs=64 already achieved FIDs lower than 10, as reported at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN. All the FIDs for CIFAR-10 reported in this paper are larger than 10, even when trained with bs=128. Why would this happen? Will the PR-divergence deteriorate the overall generation quality? If I understood correctly, the proposed PR-divergence can be applied to divergence-based or likelihood-based generative models. So I was wondering whether this method can be applied to diffusion models. Also, for the normalizing flows, did the authors compare the NFs on real-world datasets, other than the synthetic datasets? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The notation $\leq$ or $\geq$ when comparing two distributions should also be explained. For example, the notation $\geq$ appears in $P\geq\beta\mu$ in Definition 3. 2. It seems Theorem 4.4 is not straightforward; a proof or brief explanation is necessary. 3. The authors propose the g-divergence for the primal estimation. How is the g-divergence chosen in practice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see my weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you dedicated to reviewing our paper, and we would like to address the concerns you raised: **1. Training Settings and FID Scores:** We acknowledge the discrepancies in the batch sizes used for training. Due to the unavailability of pretrained weights for BigGAN on CIFAR-10 (in PyTorch), we had to retrain the model ourselves. The batch size was chosen based on the computational resources available to us. Our primary goal is to demonstrate the tunability of models using our method, and we believe that a minor difference in FID scores (13.37 (ours) vs. 14.73 ([1])) does not significantly impact our main findings. It's worth noting that, as per your reference, most of the FID scores for CIFAR-10 are indeed higher than 10. **2. Applicability to Diffusion Models:** Our method can be extended to diffusion models. While the KL divergence in diffusion models is computed using a closed form between two Gaussians, our PR-Divergence doesn't have a closed form. However, we can approximate it using our technique. To achieve this, the discriminator would need to be $t$-dependent, as seen in some papers [2]. We are confident that this adaptation is feasible, but this would be an entirely different paper. **3. Training Normalizing Flows:** In our experiments (Section 6), we trained GLOW, a type of Normalizing Flow, on MNIST and Fashion MNIST using our method. Given the large model size of Normalizing Flows, we deemed it unnecessary to train on higher dimensions for the scope of this paper. **4. Choice of g-divergence and Theorem 4.4:** In practice, we utilize the $\chi^2$ divergence. Theorem 4.4 essentially states that the auxiliary function $g$ should exhibit strong convexity. In Appendix C, we demonstrate that using an auxiliary $\chi^2$ divergence yields better results compared to KL. 
We will certainly provide a clearer explanation for the notation $P\geq \mu$ and offer more insights into Theorem 4.4 in our revised manuscript. [1]: Andrew Brock, Jeff Donahue, and Karen Simonyan. Large Scale GAN Training for High Fidelity Natural Image Synthesis, February 2019. arXiv:1809.11096 [cs, stat]. [2]: Dongjun Kim, Yeongmin Kim, Wanmo Kang, and Il-Chul Moon. Refining generative process with discriminator guidance in score-based diffusion models. ArXiv, abs/2211.17091, 2022. --- Rebuttal Comment 1.1: Comment: My concerns have been partially addressed by the authors. However, I still feel unsatisfied with the response on the experimental results. The authors claimed that there are no pre-trained weights, so they may need to train everything from scratch. However, why bs=128 is used for CIFAR-10 (and even for CelebA) is still unclear to me, given the fact that most existing GANs use bs=64. I would expect bs=128 to yield even smaller FIDs, which I did not find in this paper. By the way, I found the proof of Theorem 4.4 by myself; the authors wrongly referred to Theorem 8 in the supplementary. After reading the comments from the other reviewers, I am still skeptical about the effectiveness of the PR-divergence in terms of improving the overall generation quality. Given that this paper focuses on precision and recall, and the method does work, I tend to keep my score. --- Reply to Comment 1.1.1: Comment: **Response to Reviewer's Comments:** Thank you for your feedback and for taking the time to re-evaluate our work. Regarding the batch size concern: we understand the common practice of using `bs=64` for CIFAR-10 and CelebA in many GANs, in particular for older-generation GANs such as SA-GAN, WGAN-GP, or Progressive GAN. Note that while larger batch sizes can sometimes lead to better FID scores, it is not a guaranteed outcome. 
The relationship between batch size and performance is complex and can be influenced by various factors, including model architecture, optimization techniques, and dataset specifics. In particular, the GAN architecture we have used in this work, *BigGAN*, is known to achieve better results with large batch sizes at low resolutions ($32\times 32$ and $64\times 64$). As a matter of fact, more recent works such as the BigGAN paper [1] and StyleGAN-XL [2] are indeed advocating for larger batch sizes: * "We begin by increasing the batch size for the baseline model, and immediately find tremendous benefits in doing so." [1] * "The main factors for BigGANs success are larger batch and model sizes." [2] * "We find it beneficial to use a large batch size [...] on lower resolution ($16^2$ and $32^2$), similar to [1]." [2] Therefore, having trained multiple baseline models, and in accordance with our available computational resources, we found that the best model we could train used a batch size of $128$. We apologize for the oversight in referencing Theorem 8 in the supplementary material. We appreciate your diligence in locating the proof of Theorem 4.4. Lastly, our primary focus in this paper is indeed on the trade-off between precision and recall. We believe that our method offers a novel approach to balance these two aspects, and our experiments demonstrate its effectiveness in this regard. We'll continue our research to further validate and refine our approach. Thank you again for your insights and constructive feedback. [1] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large Scale GAN Training for High Fidelity Natural Image Synthesis, February 2019. [2] Axel Sauer, Katja Schwarz, and Andreas Geiger. StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings, pages 1–10, Vancouver BC Canada, August 2022. ACM. ISBN
Rebuttal 1: Rebuttal: Dear Reviewers, In response to the feedback from the reviewers, we have added some additional results for a more comprehensive comparison. Two reviewers inquired about a comparison with other methods such as truncation, and one recommended adding Density and Coverage metrics from Naeem et al. Moreover, Precision and Recall are computed for $k=5$. Below are the results for four different datasets: ### CIFAR-10 | Model | FID | Precision | Recall | Density | Coverage | |---|---|---|---|---|---| | Baseline BigGAN $\psi=1.0$ | 13.38 | 86.54 | 65.63 | 0.76 | 0.81 | | Baseline BigGAN $\psi=0.7$ | 22.25 | 90.81 | 48.01 | 0.90 | 0.67 | | Baseline BigGAN $\psi=0.5$ | 36.10 | 92.34 | 22.11 | 1.00 | 0.48 | |---|---|---|---|---|---| | $\lambda=0.05$ | 13.88 | 85.29 | 68.40 | 0.72 | 0.83 | | $\lambda=0.10$ | 11.62 | 81.78 | 74.58 | 0.66 | 0.83 | | $\lambda=0.20$ | 13.36 | 84.85 | 65.13 | 0.74 | 0.82 | | $\lambda=0.30$ | 14.41 | 84.24 | 69.42 | 0.71 | 0.82 | | $\lambda=0.50$ | 14.50 | 83.27 | 68.23 | 0.70 | 0.81 | | $\lambda=0.67$ | 15.15 | 82.57 | 68.34 | 0.69 | 0.81 | | $\lambda=1.00$ | 15.26 | 81.96 | 72.51 | 0.65 | 0.80 | | $\lambda=1.50$ | 16.68 | 84.64 | 63.92 | 0.73 | 0.79 | | $\lambda=2.00$ | 18.25 | 79.53 | 72.90 | 0.59 | 0.78 | | $\lambda=3.00$ | 26.66 | 85.28 | 55.49 | 0.76 | 0.74 | | $\lambda=5.00$ | 32.54 | 83.39 | 56.94 | 0.68 | 0.73 | | $\lambda=10.00$ | 39.69 | 84.11 | 39.29 | 0.75 | 0.67 | | $\lambda=20.00$ | 67.03 | 89.64 | 20.52 | 0.97 | 0.56 | ### CelebA 64 | Model | FID | Precision | Recall | Density | Coverage | |---|---|---|---|---|---| | Baseline BigGAN $\psi=1.0$ | 9.17 | 78.48 | 51.36 | 0.89 | 0.49 | | Baseline BigGAN $\psi=0.7$ | 23.72 | 87.82 | 31.11 | 1.29 | 0.49 | | Baseline BigGAN $\psi=0.5$ | 43.64 | 91.01 | 11.54 | 1.53 | 0.39 | |---|---|---|---|---|---| | $\lambda=0.2$ | 8.79 | 83.37 | 44.07 | 1.09 | 0.54 | | $\lambda=0.5$ | 6.03 | 77.60 | 55.98 | 0.88 | 0.50 | | $\lambda=0.7$ | 9.24 | 81.08 | 46.71 | 1.03 | 0.51 | | 
$\lambda=1.0$ | 13.07 | 81.70 | 36.85 | 1.00 | 0.47 | | $\lambda=1.5$ | 13.21 | 83.56 | 38.89 | 1.09 | 0.51 | | $\lambda=2.0$ | 14.23 | 82.98 | 32.87 | 1.16 | 0.49 | | $\lambda=5.0$ | 22.44 | 84.04 | 25.67 | 1.21 | 0.43 | ### ImageNet 128 | Model | FID | Precision | Recall | Density | Coverage | |---|---|---|---|---|---| | Baseline BigGAN $\psi=1.0$ | 9.84 | 27.97 | 40.92 | 0.14 | 0.17 | | Baseline BigGAN $\psi=0.7$ | 11.39 | 23.12 | 31.77 | 0.11 | 0.15 | | Baseline BigGAN $\psi=0.5$ | 15.49 | 20.25 | 20.08 | 0.10 | 0.14 | |---|---|---|---|---|---| | $\lambda=0.2$ | 9.92 | 26.69 | 42.04 | 0.13 | 0.17 | | $\lambda=0.5$ | 10.82 | 26.83 | 42.38 | 0.13 | 0.16 | | $\lambda=1.0$ | 20.42 | 29.72 | 28.21 | 0.15 | 0.15 | | $\lambda=2.0$ | 20.21 | 30.27 | 30.49 | 0.14 | 0.14 | | $\lambda=5.0$ | 20.76 | 30.87 | 28.38 | 0.15 | 0.15 | ### FFHQ 256 | Model | FID | Precision | Recall | Density | Coverage | |---|---|---|---|---|---| | Baseline BigGAN $\psi=1.0$ | 41.42 | 65.54 | 10.02 | 0.52 | 0.47 | | Baseline BigGAN $\psi=0.7$ | 56.44 | 76.60 | 4.83 | 0.70 | 0.41 | | Baseline BigGAN $\psi=0.5$ | 82.04 | 84.51 | 1.50 | 0.89 | 0.32 | |---|---|---|---|---|---| | $\lambda=0.2$ | 35.66 | 78.70 | 9.45 | 0.88 | 0.60 | | $\lambda=0.5$ | 35.24 | 78.41 | 9.66 | 0.89 | 0.60 | | $\lambda=1.0$ | 35.91 | 78.95 | 8.32 | 0.90 | 0.57 | | $\lambda=2.0$ | 36.33 | 81.10 | 8.69 | 1.05 | 0.64 | | $\lambda=5.0$ | 38.16 | 84.31 | 8.52 | 1.15 | 0.63 | **Observations on the Tables:** 1. **Cifar and CelebA:** - Truncation primarily enhances Precision at the expense of Recall, yielding results comparable to our method. - However, when it comes to improving Recall, our method stands out. By training models with $\lambda < 1$, we've been able to boost the baseline Recall, a feat truncation fails to achieve. 2. **Imagenet:** - Truncation appears to be counterproductive, diminishing both Precision and Recall. 
This suggests that the mean of the Gaussian might be mapped outside the support of the target distribution. 3. **FFHQ:** - Our method demonstrates a superior trade-off compared to truncation. For instance, at a Precision of 84%, our method achieves a Recall of 8.52% versus truncation's 1.5%. --- Thank you for the constructive feedback. We've made efforts to address the concerns raised and hope that the additional results and explanations provided here shed more light on our approach.
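For context on the $\psi$ rows in the tables above: the truncation baseline follows the BigGAN-style truncation trick, in which latent components whose magnitude exceeds a threshold are resampled. The exact resampling scheme used for these baselines is our assumption; a minimal sketch:

```python
import numpy as np

def truncated_z(rng, batch, dim, psi):
    """Sample z ~ N(0, I) and resample every component with |z| > psi.

    Smaller psi concentrates latents near the mode of the prior: precision
    tends to rise while recall (diversity) drops, as in the psi rows above.
    """
    z = rng.standard_normal((batch, dim))
    mask = np.abs(z) > psi
    while mask.any():
        z[mask] = rng.standard_normal(int(mask.sum()))
        mask = np.abs(z) > psi
    return z

rng = np.random.default_rng(0)
z = truncated_z(rng, 64, 128, psi=0.5)
print(z.shape, float(np.abs(z).max()) <= 0.5)  # (64, 128) True
```

Unlike retraining with a different $\lambda$, this only reshapes the latent distribution of an already-trained generator, which is why it can trade recall for precision but cannot raise recall above the untruncated baseline.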
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a training method for generative models (normalizing flows and GANs), which can control the precision-recall trade-off of the generative models. The method is to design a new divergence named precision-recall divergence to bridge the precision-recall curve and the f-divergence. Then, the generative models optimized using the precision-recall divergence can control the precision-recall tradeoff by adjusting the hyperparameter lambda. The experiments show that the proposed method can control the precision-recall tradeoff and improve the performance of the baseline model (i.e., BigGAN). Strengths: 1. This paper proposes a divergence that is directly related to the precision-recall curve, and the proposed method can control the precision-recall tradeoff during training. 2. The results show that the proposed method can control the precision-recall tradeoff. 3. The results show that the proposed method can improve the performance of BigGAN. Weaknesses: 1. I think some combination of different kinds of f-divergence can also control the precision-recall tradeoff during the training. For example, KL-divergence + lambda * reverse KL-divergence. The paper does not compare with this kind of method, and does not explain the advantages of the proposed method in practice. 2. Although the existing methods including truncation and rejection sampling can not control the precision-recall during training, they can control the tradeoff through post-processing. The authors do not compare with these methods. What are the practical advantages of the proposed method compared to these methods? 3. The proposed method is not evaluated on larger resolutions (e.g., 512*512) and larger datasets. The ImageNet dataset is just used in the fine-tuning setting. 4. The proposed method is only evaluated on the BigGAN backbone. I am more interested in the use of StyleGAN, since StyleGAN is more widely used. 5. 
In Table 2, StyleGAN-XL performs much better than the proposed method. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In addition to the theoretical ones, what are the practical advantages of the proposed method compared to existing methods such as truncation, rejection sampling, and combinations of different f-divergences (e.g., KL + reverse KL)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
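Since the comparisons in this review and in the rebuttal tables hinge on FID values, it may help to recall that FID is the Fréchet distance between Gaussian fits to feature statistics: $\mathrm{FID}=\lVert\mu_1-\mu_2\rVert^2+\mathrm{Tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}\big)$. An illustrative sketch (real evaluations compute the statistics from Inception features):

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # clip tiny negative eigenvalues from round-off
    return (v * np.sqrt(w)) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    s1h = sqrtm_psd(sigma1)
    covmean = sqrtm_psd(s1h @ sigma2 @ s1h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu, sigma = np.zeros(3), np.eye(3)
print(fid(mu, sigma, mu, sigma))          # ≈ 0.0: identical statistics
print(fid(mu, sigma, np.ones(3), sigma))  # ≈ 3.0: mean shift only
```

The symmetric form $(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}$ is used so the square root can be taken via an eigendecomposition of a symmetric matrix, avoiding a general (non-symmetric) matrix square root.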
Rebuttal 1: Rebuttal: First, we would like to express our gratitude for the time and effort you dedicated to reviewing our paper. We appreciate the constructive comments. We would like to address the concerns you raised: **1. Combination of Different Kinds of $f$-divergence:** You rightly pointed out that a combination of different f-divergences, such as the KL-divergence and the reverse KL-divergence, might also control the precision-recall tradeoff. However, a combination of KL and reverse KL, while being a suitable f-divergence, does not have an easy closed-form convex conjugate function $f^*$; the conjugate depends on the Lambert W function. So, to train a model to minimize a combination of KL and reverse KL, one would require either two discriminators (one per divergence) or a more complex version of our algorithm. While a nice intuitive idea, combining different divergences is more complex than it appears to be. Secondly, even if we were to train for a combination of KL and reverse KL using multiple discriminators or another method, it is not a priori clear what PR trade-off is achieved by a combination, whereas the PR-divergence makes this trade-off very explicit. We discuss this in lines 193-199, but we will add a paragraph explaining this more clearly in the paper. **2. Comparison with Pre- or Post-processing Methods:** We recognize the importance of comparing our method with post-processing techniques like truncation and rejection sampling. Our method's primary advantage is the ability to control the precision-recall tradeoff during training. The other methods, based on sampling [1] or instance selection [2], can only shift the trade-off to improve the Precision while degrading the Recall. We have added to the general rebuttal some results for a truncated latent distribution on the baseline model (BigGAN) and show that the method marginally improves precision at the cost of recall. **3. 
Evaluation on Larger Resolutions, Larger Datasets, and More Popular Models:** We acknowledge the limitation regarding the evaluation of larger resolutions and datasets. Our primary focus is to demonstrate the efficacy and versatility of our method, and we chose the settings that best facilitate this. Our choice of BigGAN was based on its compatibility with our method and the easier implementation/training procedure of BigGAN in practice. As a matter of fact, BigGAN is often used as a base to test new techniques and approaches [2, 3, 4]. We strongly believe that our experimental setup shows the efficiency of the method for a variety of models and datasets. In particular, a larger dataset, a higher dimension, or a larger model (like StyleGAN-XL) would drastically increase the required computational resources and the environmental impact. **4. Practical Advantages Over Existing Methods:** Beyond the theoretical advantages, our method offers several practical benefits, the most important being: **Explicit Trade-off:** The precision-recall tradeoff is made explicit in our approach, allowing for more predictable and controlled outcomes during training. We directly optimize well-established precision and recall measures, which others do not. Also, our method provides: **Minimal Cost:** One of the standout features of our method is that it introduces minimal computational overhead. The complexity of training remains unchanged, ensuring that the benefits of our approach come at no additional cost. **Improved Recall:** Our method has the potential to enhance recall, a feat that other techniques like rejection sampling, importance sampling, and truncation cannot achieve. [1] Humayun, A. I.; Balestriero, R.; and Baraniuk, R. 2022. Polarity Sampling: Quality and Diversity Control of Pre-Trained Generative Networks via Singular Values. ArXiv:2203.01993 [cs]. [2] Terrance DeVries, Michal Drozdzal, and Graham W. Taylor. Instance Selection for GANs, October 2020. 
arXiv:2007.15255 [3] Hanxiao Liu, Andrew Brock, Karen Simonyan, and Quoc V Le. Evolving normalization-activation layers. In NeurIPS, 2020. [4] Mario Lucic, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, and Sylvain Gelly. High-fidelity image generation with fewer labels. arXiv:1903.02271, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the response. However, my major concerns are not fully addressed. 1. Combination of different kinds of f-divergence. The authors stated that it needs to incorporate two discriminators. Actually, this is not needed, and two different divergences can be incorporated into the objective function directly. There are papers discussing this kind of method. Thus, I am still not sure about the advantages of the proposed method over a combination of different divergences. 2. Comparison with pre- or post-processing methods, and the practical benefit of the proposed method. I still don't get the point of the practical advantage of controlling the precision-recall tradeoff during training. For example, if we need a specific precision-recall tradeoff, for the proposed method we need to train many different models and select the model that satisfies our requirement. But for the post-processing methods, we just need to train one model and then adjust the sampling scheme. This is good for the environment (since the authors mentioned the environment in the response). 3. Evaluation on larger resolutions, larger datasets, and more popular models. I still believe that generalizing the proposed method to a larger dataset and to StyleGAN is important. Besides, I agree with some points raised by Reviewer Pocn. Overall, I would like to change my score to reject.
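The disagreement above about combining divergences can be made concrete numerically. For an f-divergence with generator $f$, single-discriminator training uses the convex conjugate $f^*(y)=\sup_{t>0}(ty-f(t))$; for KL this has the closed form $e^{y-1}$, while for a KL + reverse-KL mixture no elementary closed form exists (the rebuttal notes it involves the Lambert W function) and one must fall back on a numerical Legendre transform. An illustrative sketch:

```python
import numpy as np

def conjugate(f, y, ts):
    """Grid-based Legendre transform: f*(y) = sup_{t>0} [t*y - f(t)]."""
    return float(np.max(ts * y - f(ts)))

ts = np.linspace(1e-4, 50.0, 200_000)  # grid over t > 0

f_kl = lambda t: t * np.log(t)              # generator of KL
f_rkl = lambda t: -np.log(t)                # generator of reverse KL
f_mix = lambda t: f_kl(t) + 0.5 * f_rkl(t)  # mixture: no elementary closed-form f*

# KL's conjugate is exp(y - 1); the grid estimate should match it closely.
y = 0.3
print(conjugate(f_kl, y, ts), np.exp(y - 1.0))
# The mixture's conjugate is still finite and computable, just not in closed form.
print(conjugate(f_mix, y, ts))
```

This is only a one-dimensional illustration of the conjugate itself, not a training procedure; it shows why a plug-in $f^*$ is available for standard divergences but must be replaced by numerical or multi-discriminator machinery for mixtures.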
null
null
null
null
null
null
Learn to Follow: Lifelong Multi-agent Pathfinding with Decentralized Replanning
Reject
Summary: This paper addresses the multi-agent pathfinding problem by proposing an approach that utilizes a combination of a planning algorithm for constructing a long-term plan and reinforcement learning for reaching short-term sub-goals and resolving local conflicts. The results show that the proposed method outperforms decentralized learnable competitors and a centralized planner. Strengths: 1. The method is straightforward and concise. 2. The writing is clear and easy to understand. Weaknesses: 1. The proposed method follows a hierarchical reinforcement learning framework, which has been extensively studied in previous works. There are limited contributions to the design of sub-goal selection. 2. In the heuristic sub-goal decider, A* is used to construct a path, which requires global information. As the sub-goal decider will be used multiple times during the episode, the overall method seems not fully decentralized. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How is the non-stationarity problem of multi-agent reinforcement learning addressed, given that the policy directly optimizes $r^i$? 2. Could the authors show the makespan of all methods? 3. As the policy is shared, could the authors explain how the agents handle situations where their paths cross? Given that the shared policy tends to result in similar actions, there is a possibility that the agents might end up both staying still or moving together into a collision. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This work has little negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
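As background for Question 2: throughput is the standard objective in lifelong MAPF (each agent receives a new goal upon reaching its current one), whereas makespan is defined for single-shot MAPF. A minimal illustration of both metrics on hypothetical traces:

```python
def throughput(goal_times, episode_len):
    """Lifelong MAPF throughput: total goals reached by all agents per timestep.

    goal_times: one list per agent with the timesteps at which it reached a goal.
    """
    return sum(len(times) for times in goal_times) / episode_len

def makespan(finish_times):
    """Single-shot MAPF makespan: the timestep at which the last agent finishes."""
    return max(finish_times)

# two agents over a 128-step episode (hypothetical traces)
goals = [[10, 60, 120], [35, 90]]
print(throughput(goals, 128))   # 5 goals / 128 steps
print(makespan([40, 73, 55]))   # 73
```

The distinction matters for the rebuttal's point: in the lifelong setting no agent ever "finishes", so makespan is undefined without first converting the instances to single-shot form.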
Rebuttal 1: Rebuttal: ### Weakness 1: Thank you for pointing out that the hierarchical reinforcement learning framework is familiar from previous research. We acknowledge that the hierarchical RL framework has indeed been a subject of extensive investigation in the literature. However, in our work, we advance beyond the conventional application of hierarchical RL by introducing a distinctive emphasis on sub-goal selection within the context of multi-agent interactions in the MAPF domain. Our method departs from the typical hierarchical RL paradigm by allowing the Follower agent to pursue and accomplish multiple sub-goals along its trajectory. Unlike traditional approaches where an agent focuses on a single sub-goal per episode, our approach recognizes the potential benefits of considering a sequence of sub-goals. This opens up new avenues for more sophisticated decision-making policies, such as avoiding conflicts with other agents or strategically deferring immediate rewards for the sake of higher cumulative rewards. Furthermore, it's worth noting that the application of hierarchical RL to MARL has not been as extensively explored [1]. In our work, we successfully apply this technique to the challenging MAPF domain, offering a significant advancement over many existing works that primarily focus on simpler, toy examples. This demonstrates the practical relevance and effectiveness of our approach. Once again, we appreciate your comment, as it certainly holds merit for inclusion in the methodological and related work sections. [1] Pateria, Shubham, et al. "Hierarchical reinforcement learning: A comprehensive survey." ACM Computing Surveys (CSUR) 54.5 (2021): 1-35. ### Weakness 2: Our method, Follower, is decentralized in the sense that it can be executed on a single agent and no communication with the other agents and/or a central controller is needed (as Follower does not need to know the goals/paths/actions of the other agents). 
Indeed, Follower uses A* under the hood to find a path to the goal. We assumed in this work that each agent knows the static map of the environment, and this map is utilised in A*. However, even if the full static map of the environment is not available, we can easily substitute A* with one of its numerous variants tailored to partially observable maps, like D* Lite. This will still keep Follower decentralised (in the sense described above). ### Question 1: The issue of non-stationarity within multi-agent environments represents a crucial challenge in the realm of Multi-Agent Reinforcement Learning (MARL). In our work, we use an implementation of the PPO algorithm with a decentralized critic. In a number of papers [1,2], it has been shown theoretically and empirically that such an implementation copes well with the problem of non-stationarity in small-sized problems. Our work proposes integration with a high-level planner, which reduces a high-dimensional non-stationary task (with up to 128 agents) to a set of small-sized non-stationary tasks, which PPO with a decentralized critic successfully handles. [1] Yu, Chao, et al. "The surprising effectiveness of ppo in cooperative multi-agent games." Advances in Neural Information Processing Systems 35 (2022): 24611-24624. [2] Sun, Mingfei, et al. "Trust region bounds for decentralized ppo under non-stationarity." Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems. 2023. ### Question 2: Makespan is a performance indicator that is not directly applicable to lifelong MAPF; this measure is used in the case of single-shot MAPF. To show the makespan, we modified the code of the algorithms to solve single-shot MAPF instances and ran an additional experiment. The evaluation was made on the maps/instances taken from PICO’s repository with 20x20 grid size and 30% density of obstacles. 
Follower and PICO were not retrained for this type of instance, while for PRIMAL2 we took the weights provided by the authors, which were specifically trained for single-shot MAPF. The episode length was set to 256. If the algorithm was not able to find a solution within the given number of steps, the makespan for the corresponding instance was set to 256. The obtained results (presented in the attached file of the Author Rebuttal section) demonstrate that Follower significantly outperforms the competitors on the instances with up to 32 agents, while on the instances with 64 agents all the approaches demonstrate poor performance. Overall, out of all the instances that were solved by at least one of the evaluated approaches, Follower found a better solution in ~83% of cases, PICO in ~13%, and PRIMAL2 in ~4%. ### Question 3: Though the policy is shared between the agents, it is conditioned on each agent’s current sub-goal. This sub-goal is determined using the agent’s individual global goal (which is unique and not shared between the agents) and the cost penalty heat map (which is also unique for each agent, as it is constructed from the individual experience of that agent). In addition, each agent has its unique observation history $\tau^u$, which helps it prevent conflicts and make informed decisions based on its past experience. Finally, the policy is stochastic, meaning that each agent samples an action from its action distribution, allowing the agents to take different actions in the same situations. These concepts are well demonstrated in the examples provided in our supplementary materials, specifically in Appendix A. The appendix contains a link to an anonymized repository (the code in the repository is unchanged since the initial submission) that includes examples of animations showcasing the effectiveness of our approach in conflict resolution. --- Rebuttal Comment 1.1: Comment: Thank you for the response. 
However, the contribution still seems more focused on applying hierarchical RL to a specific task like MAPF. --- Reply to Comment 1.1.1: Title: HRL and the MAPF task Comment: We thank the reviewer for engaging in the discussion and sharing the post-rebuttal opinion. First, we note that most of the initially raised concerns (W2, Q1, Q3, Q4) seem to have been adequately addressed by us in the rebuttal, as they are not mentioned in the reviewer’s reply. Thus, we wish to discuss the remaining concern (W1) regarding the novelty of our approach in the context of HRL. First, we would like to note that we are not positioning our method as an HRL approach due to fundamental differences from classical HRL methods like the Options framework and the Feudal approach. While in HRL, one of the main ideas is to allocate sub-tasks specific to the environment (like ‘moving to the door’) and to learn low-level abstract policies (skills) that can be reused during further learning, a set of different skills, tailored to different sub-tasks, is not formed in our approach. In our approach, sub-goals reduce the sparsity of the reward function and allow a low-level RL-based policy to focus not on the pathfinding aspect but on conflict resolution (a very important skill that is hard to design in a deterministic/heuristic fashion). Still, we agree that our approach is relevant to HRL and can potentially give an impetus to developing new methods within HRL. Please also note that the number of works where (classical) HRL is applied to MARL is very limited. In most of them, toy environments with few agents are considered. Second, it is true that we have been focused on a specific multi-agent problem setting, i.e., multi-agent pathfinding (MAPF). The choice is not arbitrary but rather well-grounded. This setting is particularly challenging as practical scenarios may involve dozens, hundreds, and thousands of simultaneously acting (moving) agents. 
The desired output, i.e., the policy, should be highly generalizable (to unseen instances and types/topologies of maps) since in MAPF we are interested not in solving a decentralized POMDP in a certain environment (in RL terminology) but rather in an (a priori unknown) distribution of environments (i.e., different maps). To our knowledge, no current MARL methods can efficiently solve this nontrivial problem. Moreover, MAPF is a very hot topic in the search community (as conventional ‘gold standard’ search techniques like A* struggle in multi-agent settings, and specific involved machinery should be introduced to cope with the curse of dimensionality). However, the methods the researchers from this community typically develop are centralized and therefore do not scale well to many agents. We suggest tackling this problem by including RL techniques in the loop. Of course, we are not the first to follow this line. However, as the paper shows, our approach leads to a policy that consistently outperforms the state of the art (in learnable MAPF). In this context, we would also like to mention the Flatland competition [1], well known in the community, which has been held several times and was an official NeurIPS contest in 2020. This competition assumes solving a variant of the MAPF problem (similar to the one considered in our paper) without restricting methodology, i.e., both learnable and search-based solvers are allowed. Nevertheless, as the results of the previous competitions have shown, the learnable methods (RL) seriously lagged behind the classical ones in performance. Our work bridges the gap between these approaches and demonstrates how the two can leverage each other when combined thoughtfully. [1] https://arxiv.org/abs/2012.05893
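The planner/RL integration discussed above rests on grid A* with modified transition costs. A minimal 4-connected grid A* with an additive per-cell penalty (a stand-in for the congestion/cost-penalty heat map; the penalty values here are hypothetical) might look like:

```python
import heapq
import itertools

def astar(grid, start, goal, penalty=None):
    """4-connected grid A* with an optional additive per-cell transition penalty.

    grid: 2D list, 0 = free, 1 = obstacle. penalty: dict {(row, col): extra cost}.
    A non-negative penalty keeps the Manhattan heuristic admissible, so A*
    remains complete and cost-optimal for the modified cost function; the
    penalty merely steers paths away from penalized (e.g., congested) cells.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    penalty = penalty or {}
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tie-breaker so the heap never compares cells/parents
    open_heap = [(h(start), 0.0, next(tie), start, None)]
    parents, closed = {}, set()
    while open_heap:
        _, g, _, cell, parent = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        parents[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and nxt not in closed:
                ng = g + 1.0 + penalty.get(nxt, 0.0)
                heapq.heappush(open_heap, (ng + h(nxt), ng, next(tie), nxt, cell))
    return None

free = [[0] * 3 for _ in range(3)]
# penalizing the center cell steers the path around it
print(astar(free, (0, 0), (2, 2), penalty={(1, 1): 10.0}))
```

In a decentralized setting each agent would run this independently on its own copy of the static map with its own penalty heat map, which is why no inter-agent communication is required for the planning step.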
Summary: This paper introduces a decentralized hierarchical approach without agent-to-agent communication for Lifelong Multi-agent Pathfinding (MAPF). The framework adds a congestion-based heuristic to an A* planner and a low-level Reinforcement Learning (RL)-based controller to follow the provided sub-goals. Experimental results show that the proposed method has a higher throughput (or rate of reaching new goals) for a range of maps. Strengths: - Easy to understand. Uses the well-studied A* planner with additional heuristics and a low-level RL-based controller simply trained to reach goals, promoting long-term performance. - The hierarchical framework is simple and could be an effective way to achieve decentralized control in an MAPF problem. Weaknesses: - The change of the heuristic in the A* planner seems weakly substantiated. While empirical results are promising, the need for hyperparameter tuning for the score and the lack of guarantees on behavior may impede the use of this new heuristic. - Confidence intervals for higher-density experiments may be too large to claim better performance. (E.g., Table 1, 16 agents: the proposed approach has throughput $0.56 \pm 0.34$ vs. Primal2's $0.31 \pm 0.14$.) This may point to noisier behavior in the presence of more obstacles. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Does the learnable follower explicitly handle collisions between agents? If I understand correctly, it is rewarded for reaching the goal (sub-goal and global), thus implicitly handling collision avoidance. 2. What other metrics of interest are there apart from throughput? Is there a measure of the success rate of the given algorithm on a map, such as the one mentioned in Primal2 [1]? 3. Some typos/clarifications: 1. L205: “An crucial” → “A crucial” 2. L207: “congestion often arise” → “congestion often arises” 3. L238: “If while reaching the current goal the agent goes too far away from it,” → This is referencing the global goal? 4. 
L263: “the reward function used is simple and does not require involved manual shaping.” → L237 mentions an empirically determined reward, so this appears incorrectly stated. References: [1] PRIMAL2: Pathfinding via Reinforcement and Imitation Multi-Agent Learning - Lifelong, Damani et al., RAL 2020 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: - Access to the global map is assumed for use of the A* algorithm. - Several empirically determined reward-function components may hinder generalizability to different maps. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weakness 1: On the one hand, the suggested penalizing-transitions technique for A* does not violate any of its properties (i.e., A* with such a modified cost function is still guaranteed to be complete and to find the cost-optimal path). On the other hand, we agree that this technique has no strict theoretical guarantees to increase the performance of Follower w.r.t. throughput. Still, it provided a substantial boost in the performance of Follower in all the scenarios that were evaluated. Moreover, there was no tuning of the transition-penalty hyperparameter (i.e., C) for each of the experiments on different maps. Instead, this hyperparameter (as well as all the others) was tuned only once - during training on maze-like maps. ### Weakness 2: You're correct; agent behavior and results become more erratic as the number of obstacles increases. Additionally, significant variations arise from the fact that the outcomes are averaged across different maps with identical density. Despite having the same density, the maps' complexities differ considerably. To address this issue, we have conducted an additional experiment on randomly generated maps with a density of 30%. We increased the number of different LMAPF instances to 100 per map per number of agents, in contrast to the 10 instances mentioned in the submitted version of the manuscript. Comprehensive box-and-whisker plots are presented in Figure 2, which is enclosed in the attached document. Overall, the results clearly show the superiority of Follower. ### Question 1: Yes, you are right. The agent handles collisions implicitly through sub-goals and a global goal reward signal. When agents collide, they stay in their positions and do not receive rewards. The absence of this positive signal can be interpreted as an implicit method of collision avoidance. ### Question 2: In the original Primal2 the authors considered two MAPF scenarios: single-shot and lifelong.
In the single-shot scenario, when an agent reaches the goal it never receives a new one. In this case it is possible (and natural) to consider measures such as Success Rate and Makespan. In the lifelong variant of MAPF, each agent, upon arriving at the goal cell, is immediately assigned a new one; thus the measures from single-shot MAPF cannot be straightforwardly applied. The most common measure of success for Lifelong MAPF is the Throughput. It is this measure that we used in our evaluation, as our work is focused on Lifelong MAPF. Please note that in the Primal2 paper the authors also use only the Throughput for measuring the success of lifelong MAPF (and use the other measures for single-shot MAPF only). Still, for this rebuttal we have performed an additional experiment comparing Follower, PICO and Primal2 on single-shot instances on random maps with 30% obstacle density. Follower and PICO were not retrained for this type of instances, while for Primal2 we took the weights provided by the authors, which were specifically trained for single-shot MAPF. The plots regarding the obtained success rate and makespan are presented in the attached file (Author Rebuttal section). In brief, Follower outperformed the other approaches in this experiment. ### Question 3: (1, 2, 4): Thank you, we will fix these shortcomings. (3): *"If while reaching the current goal the agent goes too far away from it"* refers to the sub-goal. We will explicitly state this in the main text. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I appreciate the response provided by the authors and the additional experiments providing evidence that, even accounting for noise, their approach performs reasonably better than other MAPF algorithms like Primal2. The point that the hyperparameters tuned on one set of maps worked on different maps during test time is also impressive. For this reason I am willing to raise my score to a weak accept.
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their involvement in the discussion. We are glad our rebuttal clarified the points raised in the initial review. We are committed to improving the paper following the reviewer’s comments.
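For readers unfamiliar with the transition-penalty idea discussed in this thread, a minimal sketch can illustrate how a per-cell congestion penalty folds into A* edge costs without breaking completeness or cost-optimality (with respect to the penalized costs). This is an illustrative toy with hypothetical helper names, not the authors' implementation:

```python
import heapq
import itertools

def a_star(grid, start, goal, heat=None, C=1.0):
    """A* on a 4-connected grid (True in `grid` marks an obstacle).

    `heat` maps cells to a congestion score (e.g. how often other agents
    were observed there); entering a cell costs 1 + C * heat[cell]. The
    Manhattan heuristic stays admissible because the penalty only increases
    edge costs, so the search remains complete and cost-optimal w.r.t. the
    penalized cost function.
    """
    rows, cols = len(grid), len(grid[0])
    heat = heat or {}
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()  # tie-breaker so heap tuples never compare cells beyond ties
    open_set = [(h(start), next(tie), 0.0, start)]
    parents, g_best, closed = {start: None}, {start: 0.0}, set()
    while open_set:
        _, _, g, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:  # reconstruct path by walking parents back to start
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + 1.0 + C * heat.get(nxt, 0.0)
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parents[nxt] = cell
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt))
    return None  # goal unreachable
```

With a heavy penalty on a cell (e.g. `heat={(0, 1): 10.0}`), the planner detours around the congested cell instead of taking the geometrically shortest route, which is the qualitative behavior the rebuttal describes.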
Summary: The paper considers a decentralized multi-agent pathfinding (MAPF) problem. The main idea is to combine heuristic-based search and reinforcement learning. This work first determines subgoals and uses this information as intrinsic rewards. Empirically, it outperforms two baselines in the literature, PRIMAL2 and PICO, in domains with different sizes and different numbers of agents. The method also demonstrates the ability to generalize to domains unseen during training. Strengths: The main observation is that pure heuristic search would not have a good performance in complex domains, where collaborative behaviors like congestion avoidance may not emerge. This work shows an inspiring combination of heuristic-based search and reinforcement learning. The proposed algorithm also outperforms a centralized control algorithm (RHCR) when the number of agents is large or when the computational budget is small (so the centralized algorithm is expensive). Weaknesses: Some concerns about the practicality of the algorithm: 1) It requires some hyperparameter selection. 2) It requires pre-training a neural model. While it outperforms RHCR in some settings where RHCR is run for 10s, we need to consider the computational overhead of running the RL algorithm during training. In terms of performance, the performance between FOLLOWER and PRIMAL2 is very close in some domains. **Minor points.** Line 203, duplicate “node.” Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are the concerns in the weakness section correct? In case I misunderstood the results. Usually, hand-designing the intrinsic reward is difficult. It’s possible that the agent keeps collecting intrinsic rewards without reaching the real goal. Have the authors tried different intrinsic reward values? It seems that both PRIMAL2 and PICO have access to more information than FOLLOWER (information about goals on the global map, communication between agents), but FOLLOWER still outperforms the baselines?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have mentioned the limitations of this work to be the assumptions of static environment, perfect perception. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weakness 1: Hyperparameter selection is a necessary part of almost any learnable method. Moreover, in path planning domains even the non-learnable search-based methods often require setting their parameters. E.g., the state-of-the-art search-based LMAPF solver with which we compare, i.e., RHCR, requires setting the re-planning frequency, planning horizon, etc. (It is worth mentioning here that we did try different values of the RHCR parameters and finally picked the ones that performed best.) Meanwhile, we tuned the hyperparameters of Follower to optimize the throughput on the maps from the training dataset, i.e., only maze maps from the Primal2 paper. Then, at test time, we did not re-tune any hyperparameter; they were all left the same as when training had finished. Subsequently, the results of Follower on out-of-distribution maps (i.e., on maps with topologies that it did not see during training) clearly demonstrate that Follower can generalize to unseen problem instances without requiring parameter re-tuning. ### Weakness 2: Regarding the computational overhead, we acknowledge that training a neural model requires resources, and running the RL algorithm during training can add to the computational burden. However, we want to clarify that the Follower model was trained only once using the training set of maze maps. The evaluation phase involved using the same pre-trained model with frozen weights for all experiments, including experiments on maps that notably differ in their topology from the ones used for training. There was no additional training performed on these maps during evaluation. Therefore, we believe it is correct not to consider the training time of Follower as a computational overhead during the evaluation phase. ### Weakness 3: In terms of performance, Follower and Primal2 exhibit very close results in the maze-like domains, such as mazes and warehouse maps.
It is worth noting here that Primal2 was tailored specifically to reason about corridors and possible deadlocks occurring in them (e.g. see Figure 3 of the Primal2 paper), while our method is not (as we purposefully avoided any narrow specialization of our approach). That means that even without special corridor reasoning, Follower is able to be on par with Primal2 in corridor-rich domains and even outperform it. When the environment is not maze-like, e.g. on random maps, Follower’s superiority is clearly visible. Moreover, for this rebuttal we additionally conducted extra experiments on 6 maps of varying topology (without any re-training of Follower/Primal2) and, again, Follower significantly outperformed Primal2 on each of these maps - see Fig. 1 in the attached file (see Author Rebuttal section) for the detailed plots. ### Question 1: Yes, designing the reward function is a tricky process. Thus, in our research, we tried to keep it as simple as possible. We used only two reward components: $r_g = 1$ and $r_s = 0.1$, for reaching the goal and sub-goal, respectively. We believe these rewards are easy to interpret. Moreover, we did not specifically tune these values in the hyperparameter search procedure; instead, we used the values determined during preliminary experiments. ### Question 2: Yes, formally, Follower has access to less information than Primal2 and PICO. However, in our experiments, it outperforms the latter. We believe this is due to a proper combination of both search-based and learning-based components, as suggested in the paper: using search to find a path and utilizing an RL policy to navigate along it. Both Primal2 and PICO also incorporate demonstrations during training, which could introduce additional inductive bias to their policies. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks to the authors for the responses and the additional results.
In the additional experiments, it's clear that Follower outperforms the baseline algorithm. My questions have been answered. I believe this work has its merits. However, I think the authors also acknowledge in the general response that this HRL framework is not completely novel. It's also tailored for a very specific setting (lifelong MAPF). So I'll keep the score the same. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the discussion and are glad that our initial response (as far as we can tell from the reply) has helped to answer all of the questions/concerns, i.e., W1, W2, W3, Q1, Q2, explicitly formulated in the original review. We have also elaborated on our motivation to focus on the MAPF problem and the novelty of the approach in the reply to VcwW.
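As a concrete complement to the throughput discussion in this thread: in lifelong MAPF, an agent that reaches its goal is immediately assigned a new one, and throughput is the number of goals achieved divided by episode length. A minimal sketch of this bookkeeping (all function names hypothetical), assuming the caller supplies a movement policy and a goal sampler:

```python
def lifelong_throughput(positions, goal_sampler, policy, steps):
    """Run a lifelong-MAPF episode and return its throughput.

    `positions` is a list of agent positions, `goal_sampler()` draws a new
    goal, and `policy(positions, goals)` returns next positions. Whenever an
    agent stands on its goal, the goal counter increments and a fresh goal
    is assigned immediately -- the defining property of the lifelong setting.
    """
    goals = [goal_sampler() for _ in positions]
    achieved = 0
    for _ in range(steps):
        positions = policy(positions, goals)
        for i, (p, g) in enumerate(zip(positions, goals)):
            if p == g:
                achieved += 1
                goals[i] = goal_sampler()  # immediate reassignment
    return achieved / steps  # goals achieved per time step
```

For a single 1-D agent shuttling between cells 0 and 2 under a greedy policy, each goal takes two steps, so the throughput comes out to 0.5.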
Summary: This paper proposes a novel method for decentralized lifelong MAPF. The method consists of two components: a heuristic sub-goal decider, which assigns sub-goals for each agent using a heuristic (e.g., A*), and a learning-based policy network, which outputs actions for achieving the short-term subgoals. The paper compares the proposed method with both learning-based decentralized methods and the search-based centralized method on extensive setups and demonstrates the proposed method's superiority. The paper also provides insightful ablation studies. Strengths: 1. The idea of using heuristics to solve long-term planning and utilizing learning-based policies for achieving short-term sub-goal is reasonable and also commonly used in many other tasks. 2. The paper compares the proposed method with both learning-based decentralized methods and a search-based centralized method on extensive setups and demonstrates the proposed method's superiority and generalization capability. 3. The paper also provides insightful ablation studies and verifies the necessity of each proposed component. 4. The paper is well-written and easy to follow. Weaknesses: 1. Since I'm not active in the MAPF field right now, I am unsure if there is literature sharing similar ideas in the MAPF tasks. But at least I know that the idea of using heuristics for long-term sub-goal assignment and learning-based policy for low-level sub-goal achievement is quite common in the RL field. 2. How the RL policy handles the collision and deadlock is unclear to me. What will happen if the agent (or two agents) choose the action(s) that will cause a collision? What will happen if there is a deadlock (e.g., two agents want to pass a narrow corridor)? 3. I am a little confused about what the RL policy can learn if the K is set to 2, which means the sub-goal is just two steps away from the current location. Are there many candidate paths to a sub-goal, which is just two steps away? 4. 
Lines 211-218: Since the agent doesn't know the future locations of other agents, how does the method count the "number of times the other agents were seen" in a future step? Does the method use a static heat map (for only the current step) to count that? 5. Lines 242 and 247: the symbol H was used twice with different meanings. 6. Figure 1 is not mentioned in the text. 7. Lines 175-176: "as the ratio of the episode length to the number of goals achieved" -> "as the number of goals achieved to the ratio of the episode length"? 8. Line 203: "node node" Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you provide more concrete qualitative examples to demonstrate that the learning-based policy is better at avoiding congestion or collision? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weakness 1: Learnable low-level policies and heuristic sub-goal allocation procedures are commonplace in many hierarchical RL approaches tailored to single-agent problems. However, such techniques are rarely explored in the context of multi-agent RL (MARL). Existing studies primarily demonstrate their results within simplistic environments, leaving ample room for further research in this area. In this context, we reference a fairly recent review paper [1] that highlights only two methods (PoEM [2] and HQMIX [3]) in the domain of decentralized partially observable multi-agent tasks. Among these, PoEM, a method closely related to ours, utilizes preexisting demonstrations to identify sub-goals, which poses a significant limitation. In contrast to our approach, all the methods we are aware of present their findings using scenarios with a small number of agents. Nonetheless, we intend to augment the related-work section by incorporating a comprehensive discussion of the advancements in the field of hierarchical MARL. [1] Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan, and Chai Quek. 2021. Hierarchical Reinforcement Learning: A Comprehensive Survey. ACM Comput. Surv. 54, 5. [2] Miao Liu, Christopher Amato, Emily P. Anesta, J. Daniel Griffith, and Jonathan P. How. 2016. Learning for decentralized control of multiagent systems in large, partially-observable stochastic environments. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI’16) [3] Hongyao Tang, Jianye Hao, Tangjie Lv, Yingfeng Chen, Zongzhang Zhang, Hangtian Jia, Chunxu Ren, Yan Zheng, Changjie Fan, and Li Wang. 2018. Hierarchical deep multiagent reinforcement learning. arxiv:1809.09332. ### Weakness 2: The collision model adopted in this work is borrowed from Primal2 (our main learning-based competitor).
When two or more agents decide to occupy the same cell at the next time step, only one of them (decided by the environment) succeeds and the others stay where they were. When two agents wish to swap locations simultaneously (at the same time step), they both stay where they were. As for the deadlocks, they indeed can happen, and it is the responsibility of the policy to resolve them. The suggested hybrid policy (heuristic search + RL) handles the deadlock pattern mentioned by the reviewer - ‘two agents want to pass a narrow corridor’ - quite effectively. This can be seen in the animation, which is visible if one follows the link in Appendix A (this is a link to an anonymized repository, which contains a readme with an animation on the title page). ### Weakness 3: We understand your confusion about the effect of setting K to 2, which implies that the sub-goal is only two steps away from the current location of the agent. Please note that Follower essentially aims to maximize rewards through the accomplishment of multiple sub-goals on the way to the (global) goal. Consider two illustrative scenarios: (1) The agent receives a reward for reaching a sub-goal but subsequently encounters a conflict with another agent, impeding its further progress. (2) The agent allows another agent to pass and does not obtain an immediate reward for reaching the sub-goal, but this action could actually lead to a more substantial reward later on (after successfully accomplishing several subsequent sub-goals). Our approach facilitates the agent's ability to learn the second type of behavior, thereby adapting its actions based on the potential for higher cumulative rewards in the future. In terms of the training process, the agent learns using rewards discounted over full-length trajectories (512 steps), which are divided into rollouts of size 8 for the RNN head. Thus, an agent's decision-making process isn't confined within the immediate two-step radius of its sub-goal.
Instead, it extends towards a more distant time horizon. The rationale behind the selection of the hyperparameter value K=2, which emerged as the optimal choice in our hyperparameter sweep, is its ability to provide a dense reward signal. ### Weakness 4: We assume that an agent does not know the future locations of the other agents. Thus, it uses past observations to construct a heat map of the cost penalties. ### Weaknesses 5-8: Thank you, we will address these minor issues. ### Question 1: During training, each agent gains experience showing that in cases of congestion it might be preferable to yield in order to achieve a higher long-term reward (i.e., sacrifice the short-term gain of achieving the local sub-goal in favour of achieving a sequence of sub-goals later on). Thus all agents (as the policy is shared) naturally learn to cope with deadlocks/congestion. If this learned policy is removed, the performance drops significantly, as confirmed by our experiments with Randomized A*, which is essentially Follower w/o the learnable policy (see Fig. 5 in the original submission). --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thanks to the authors for the detailed reply. Most of my concerns have been addressed.
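The collision model described in this thread (vertex conflicts let exactly one agent through; swap conflicts stop both agents) can be made concrete with a small sketch. This is an illustrative simplification - for instance, the contested-cell winner is the lowest agent index here, standing in for the environment's choice - and not the simulator's actual code:

```python
def resolve_moves(positions, proposals):
    """Resolve simultaneous grid moves under the two rules from the rebuttal:
    agents proposing to swap cells both stay, and of several agents proposing
    the same cell only one succeeds while the rest stay where they were.
    """
    n = len(positions)
    final = list(proposals)
    # Rule 1: swap conflicts -- both agents stay put.
    for i in range(n):
        for j in range(i + 1, n):
            if proposals[i] == positions[j] and proposals[j] == positions[i]:
                final[i], final[j] = positions[i], positions[j]
    # Rule 2: vertex conflicts -- iterate because a reverted agent can block
    # further moves into the cell it now occupies.
    changed = True
    while changed:
        changed = False
        claims = {}
        for i in range(n):
            claims.setdefault(final[i], []).append(i)
        for cell, claimants in claims.items():
            if len(claimants) > 1:
                # An agent already standing on the cell keeps it; otherwise
                # the lowest index wins (stand-in for the environment's pick).
                stayers = [i for i in claimants if positions[i] == cell]
                winner = stayers[0] if stayers else claimants[0]
                for i in claimants:
                    if i != winner and final[i] != positions[i]:
                        final[i] = positions[i]
                        changed = True
    return final
```

Blocked agents simply keep their positions, which matches the rebuttal's point that staying put yields no reward and thus implicitly discourages collisions.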
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their insightful reviews and comments. We appreciate that you found our work to be well-written and easy to follow. Additionally, we are pleased that you recognized the strength of our approach over both centralized planning and decentralized learnable methods, and your appreciation of the insights provided by our ablations. We did our best to address the raised concerns (formulated both as weaknesses and as direct questions to us). Here, we highlight three general points of our rebuttal. 1. Novelty. We agree with the reviewers that a combination of learnable low-level policies and heuristic sub-goal allocation procedures arises in many hierarchical RL approaches. However, they are mostly tailored to single-agent problems and are rarely explored in the context of multi-agent RL (MARL) and multi-agent pathfinding (MAPF). Moreover, existing studies primarily demonstrate their results within simplistic environments, leaving ample room for further research in this area. In most hierarchical RL frameworks, an episode for a low-level agent typically revolves around a single sub-goal. Our method departs from this paradigm by allowing the Follower agent to pursue and accomplish multiple sub-goals along its trajectory. This creates opportunities for advanced decision-making policies, such as preventing collisions with other agents by delaying immediate rewards to achieve greater cumulative rewards. Additionally, a noteworthy innovation of our methodology is that the high-level subgoal generator tackles the challenge of conflict resolution in the long term by strategically distributing agents across the map (via the introduced cost-penalty heatmap). 2. Hyperparameters. On the one hand, it is true that the suggested hybrid method for solving (decentralized) Lifelong MAPF, i.e., Follower, requires setting various hyperparameters.
On the other hand, hyperparameter selection is a necessary part of almost any learnable method. Moreover, in path planning domains, even the non-learnable search-based methods often require setting their parameters. E.g., the state-of-the-art search-based LMAPF solver with which we compare, i.e., RHCR, requires setting the re-planning frequency, planning horizon, etc. That said, we wish to emphasize that we tuned the hyperparameters of Follower only during training, which was carried out only on the maze-like maps (from the Primal2 paper). Then, at test time, we did not re-tune any hyperparameter; they were all left the same as when training had finished. Subsequently, the results of Follower on out-of-distribution maps (i.e., on maps with topologies that it did not see during training) clearly demonstrate that Follower can generalize to unseen problem instances without requiring parameter re-tuning, and thus the ‘hyperparameter concern’ is not crucial for Follower. 3. Empirical evaluation. For this rebuttal, we have conducted a range of additional experiments to address the concerns raised by the reviewers. I.e., we compared Follower to Primal2 on six more maps whose topology differs from the maps used for training Follower/Primal2 (see Fig. 1 in the attached file). Additionally, we performed further comparisons with Primal2 on randomly generated maps (with a high obstacle density of 30%), increasing the number of problem instances per map per agent count (see Fig. 2 in the attached file). We evaluated Follower, PICO, and Primal2 in the single-shot setup, measuring the success rate and the makespan (see Fig. 3 in the attached file). In all cases, Follower outperformed the competitors, providing additional evidence of the superiority of the suggested approach. Pdf: /pdf/7ee4b90dc0f77015c2be0df88c24657b6fed38d3.pdf
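To make the hierarchy described in this rebuttal concrete, here is a toy sketch of its two ingredients: a sub-goal taken a fixed number of cells ahead along the planned path (K=2 was the optimal value in the authors' sweep), and the two-component reward ($r_g = 1$ for the global goal, $r_s = 0.1$ for a sub-goal). Function names are hypothetical, and this is a minimal reading of the rebuttal, not the released code:

```python
def subgoal(path, pos_index, K=2):
    """Sub-goal K cells ahead of the agent along its planned path,
    clamped to the path's end (the global goal)."""
    return path[min(pos_index + K, len(path) - 1)]

def reward(reached_cell, subgoal_cell, goal_cell, r_g=1.0, r_s=0.1):
    """Two-component reward from the rebuttal: r_g for the (global) goal,
    r_s for an intermediate sub-goal, and 0 otherwise (e.g. after a
    collision, where the agent stays put and earns nothing)."""
    if reached_cell == goal_cell:
        return r_g
    if reached_cell == subgoal_cell:
        return r_s
    return 0.0
```

Because reaching a sub-goal pays only 0.1, yielding to another agent (forgoing one sub-goal reward now) can still be worthwhile if it unlocks a longer chain of future sub-goal and goal rewards, which is the behavior the rebuttal argues the discounted objective encourages.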
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Implicit Variational Inference for High-Dimensional Posteriors
Accept (spotlight)
Summary: The paper proposes LIVI, a variational-inference-based approach for Bayesian NNs. It hinges on implicit VI (IVI), which allows one to easily sample from an arbitrary distribution, yet obtaining the gradients w.r.t. its parameters or the density is hard. The ELBO in VI decomposes into two terms, an expected likelihood term and a KL term; under this framing, the KL term cannot be evaluated and may even be ill-defined. To remedy that, the authors propose to use a Gaussian deep latent variable model (DLVM), i.e., the generator network in VAEs, as a replacement for the implicit density. And, to approximate the entropy term in the KL divergence, the authors propose to linearize the output of the network. The authors also propose a lower bound to the ELBO by approximating the log-determinant of the covariance using the minimal singular value, which can be calculated more efficiently compared to all of the values. The authors compare their method to several Bayesian methods on in-distribution and out-of-distribution tasks, showing improved performance over baseline methods. Strengths: * A novel approach for using hypernetworks to learn a Bayesian model using VI. * Good results on OOD tasks compared to baseline methods. * The paper is written clearly and addresses relevant related studies. * The results seem to be reproducible - exact experimental details were given along with the code. Weaknesses: **Method** * Although I think the idea is nice, a major shortcoming of this method is that it is challenging to apply it to large networks because of the use of hypernetworks. Even in the experimental section, the largest network has 2.7 million parameters, which is considered small these days. How do the authors propose to handle that issue and scale it to networks 1-2 orders of magnitude larger? * To estimate the entropy term, the std of the noise is taken to be zero on only one summand (the bilinear function), but not on the logdet of the covariance.
How do the authors justify that? **Experiments** * The compared methods are somewhat outdated. Many Bayesian models were published in recent years, and all of them show that they beat deep ensembles in one way or another. I believe that more recent baselines should have been evaluated. * I find it a bit odd that the authors didn't compare LIVI to methods with similar pipelines, such as flow-based methods [1, 2]. * The method was evaluated on UCI benchmarks, MNIST and CIFAR-10. In my opinion, it is not enough to showcase the advantage of the method and more challenging datasets should be considered. [1] Krueger, D., Huang, C. W., Islam, R., Turner, R., Lacoste, A., & Courville, A. (2017). Bayesian hypernetworks. arXiv preprint arXiv:1710.04759. [2] Louizos, C., & Welling, M. (2017, July). Multiplicative normalizing flows for variational bayesian neural networks. In International Conference on Machine Learning (pp. 2218-2227). PMLR. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * How did your method of approximating the entropy is compared to standard MC sampling as suggested in the beginning of section 3.2? * How does your method perform compared to baseline methods in terms of memory and run-time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors did not address the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for a detailed review. We answer your concerns below. **W1: How could we make the method scale to tens or hundreds of millions of dimensions?** We absolutely agree that current state-of-the-art DNNs are much larger than the networks our approach can serve. However, this does not make our approach any less relevant to the Bayesian deep learning community, as research in this area does not solely focus on transformers. No prior work in the Bayesian literature has been able to scale expressive variational approximations to such a large number of latent variables. To scale our method even further, one could consider independent hypernetworks for each layer of the BNN, which would reduce the size of the hypernetwork at the cost of losing correlations across layers of the BNN. This should help increase the modelling capacity significantly. One could also consider efficient ways of putting priors over deep networks to curb dimensionality, such as [1], which proposes implicit BNNs that have priors over activations rather than weights and biases, and [2], which assumes priors over units in a neural network and models weights using these latent distributions. While we consider scaling to even higher dimensions as future work, we agree that the topic is important for our paper and have added a discussion on the issue of scaling to the paper. [1] Trinh et al., "Scalable Bayesian neural networks by layer-wise input augmentation", arXiv 2020, https://arxiv.org/abs/2010.13498 [2] Karaletsos et al., "Probabilistic Meta-Representations Of Neural Networks", UDL workshop 2018, https://arxiv.org/abs/1810.00555 **W2: Why is $\sigma_m^2$ not taken to be zero in the logdet of the covariance (e.g., in Eq. (A.18))?** It is correct that both $E_{z\sim q(z)} [\log \det C(z)]$ and $E_{z\sim q(z)} E_{\theta \sim q_\gamma(\theta | z)} [ h(\theta, z) ]$ in Eq. (A.13) depend on $\sigma^2$.
For smaller networks, $E_{z\sim q(z)} [\log \det C(z)]$ can be computed analytically, but $E_{z\sim q(z)} E_{\theta \sim q_\gamma(\theta | z)} [ h(\theta, z) ]$ involves a matrix inverse, which requires a computationally expensive and unstable procedure. We, therefore, choose to approximate this term, which we do by letting $\sigma^2 \rightarrow 0$. We could have chosen other approximations, but this one makes sense, as $\sigma^2$ is already defined to be small. We do not apply the same approximation to $E_{z\sim q(z)} [\log \det C(z)]$ since we do not have to (as long as we consider small networks), and introducing an unnecessary approximation makes little sense. To derive our second bound, $\mathcal{L}''$, rather than approximating $\mathbb{E}_{z\sim q(z)} [\log \det C(z)]$, we introduce a lower bound, which was presented by Geng et al. (2021) [1]. From an optimisation perspective, a bound makes more sense than an approximation, and this bound is furthermore efficient to compute. [1] Geng et al., "Bounds all around: training energy-based models with bidirectional bounds". NeurIPS 2021. **W3: The baselines are somewhat outdated. Many recent Bayesian models beat deep ensembles.** Thank you for raising this concern. We do our best to make fair comparisons by including both recent and well-performing models. We are not aware of the models you refer to, but if you have specific ones in mind, we will happily try to test them and report back to you. **W4: There should be comparisons to flow-based methods.** NFs are powerful models, but scaling them to millions of dimensions is incredibly difficult and computationally demanding. In fact, these challenges were some of the main motivators for our work. Regarding the multiplicative normalising flows (MNF), we have added results for these on the UCI datasets (tables 1 and 2) and MNIST (table 3), see the attached PDF. The flows are difficult to get to converge, however. 
Additionally, the training time and memory consumption are shown in table 4. Compared to MNF, LIVI is faster to train, consumes much less memory, and performs better. **W5: There should have been more challenging datasets in the experiments.** While we certainly agree that more challenging datasets exist, we chose these ones for a number of reasons. The datasets are all commonly used in the literature, which makes it easier to find competing methods and compare our work to others. The simpler datasets, such as rotated MNIST, are easy to understand intuitively and can therefore provide us with insights into the models' behaviours and their abilities to learn inductive biases. However, if you have specific datasets in mind, we will happily try to test LIVI on them and report back here. We should note, however, that due to compute limitations, any such dataset must allow the models to run on a single GPU. **Q1: How does LIVI compare to standard MC sampling?** We compare the LIVI bounds to an importance sampling estimate in figure F.1 in the supplementary. Although the two bounds systematically underestimate the entropy (which makes sense, as they are lower bounds), crucially, they largely follow the behaviour of the sampling estimate. This gives us some confidence that they are useful objectives. Note, however, that the experiment is only feasible for very small models, as importance sampling is challenging to get to work in more than a few hundred dimensions. **Q2: How does LIVI compare to other methods in terms of memory and runtime?** We agree that these are very important metrics; the only reason the runtime was not reported in the submission is that, during the deadline rush, the experiments were performed on different GPU devices, and hence we did not have one-to-one time comparisons for all the methods. 
We have now completed benchmark measurements on MNIST, see table 4 in the attached PDF, which shows the training time required for convergence as well as the memory consumption. Compared to other expressive VI methods like AVB and MNF, LIVI is faster to train and consumes less memory. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: I thank the authors for the response. I also appreciate the effort in evaluating MNF on the UCI datasets toward this rebuttal. The two main shortcomings of this paper, in my opinion, remain. * **Scalability issues**. First, I didn't mention transformers; even standard ResNet architectures generally have tens of millions of parameters, which is more than an order of magnitude larger than the largest network used in this paper. Second, in terms of "relevant to the Bayesian deep learning community" - I respect the authors' opinion, but I do not feel the same. I think that in this era of deep learning, an important aspect of the model is the ability to scale it beyond the network sizes in this paper. While I do not expect the method to be readily adjusted to massive networks, scaling to network sizes such as ResNet-18 and ResNet-34 is a reasonable requirement. The authors suggested several alternatives for scaling their model which sound great. In my opinion, the submission is incomplete without showcasing that. * **Experimental section**. I stand behind my original comment that neither the datasets nor the baseline methods are strong enough. Therefore, I cannot attribute much value to the empirical evaluation in this paper when comparing LIVI to the proposed baseline methods. The authors wanted me to suggest alternatives. Well, in my opinion, this is the job of the authors, as the setups in the paper are quite ubiquitous in the literature. 
Nevertheless, here are some suggestions: * In terms of datasets, CIFAR-100 is already more challenging than CIFAR-10; fine-grained classification datasets such as CUB, Cars, and Pets are another alternative. All have appeared in the Bayesian literature before, and there are many more, of course. * In terms of baselines, to name a few: the SWA/SWAG family, which often shows good performance [1], deep kernel learning and its recent follow-up works [2], VI in function spaces and follow-up works [3], infinitely-deep NNs [4, 5], and partially stochastic BNNs [6, 7]. There are other methods that I may have missed. I do not expect the authors to compare LIVI to all of these methods, but to at least some of them, I do. Overall, I would like to state that I do value the proposed approach and its merits. Nevertheless, I think that currently, this paper is not ready to be published at NeurIPS. Hence, I decided to raise the score to 4 and not further. [1] Maddox, W. J., Izmailov, P., Garipov, T., Vetrov, D. P., & Wilson, A. G. (2019). A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32. [2] Wilson, A. G., Hu, Z., Salakhutdinov, R., & Xing, E. P. (2016). Deep kernel learning. In Artificial Intelligence and Statistics (pp. 370-378). PMLR. [3] Sun, S., Zhang, G., Shi, J., & Grosse, R. (2018). Functional Variational Bayesian Neural Networks. In International Conference on Learning Representations. [4] Nazaret, A., & Blei, D. (2022). Variational inference for infinitely deep neural networks. In International Conference on Machine Learning (pp. 16447-16461). PMLR. [5] Xu, W., Chen, R. T., Li, X., & Duvenaud, D. (2022). Infinitely deep Bayesian neural networks with stochastic differential equations. In International Conference on Artificial Intelligence and Statistics (pp. 721-738). PMLR. [6] Sharma, M., Farquhar, S., Nalisnick, E., & Rainforth, T. (2023). 
Do Bayesian Neural Networks Need To Be Fully Stochastic? In International Conference on Artificial Intelligence and Statistics (pp. 7694-7722). PMLR. [7] Daxberger, E., Nalisnick, E., Allingham, J. U., Antorán, J., & Hernández-Lobato, J. M. (2021). Bayesian deep learning via subnetwork inference. In International Conference on Machine Learning (pp. 2510-2521). PMLR. --- Reply to Comment 1.1.1: Comment: Thank you for the detailed reply and for taking the time to provide a set of references for us to consider. We address your concerns below. **Scalability issues** We have started training LIVI to do inference in a WideResNet(28,10) on CIFAR-100. This architecture contains 36.5M parameters and has been used in multiple works like [1, 6, 8]. Without much tuning of LIVI, it achieves an accuracy of $76.7\%$ and an NLL of $0.617$, which can be compared to the $77.68\% \pm 0.29\%$ and $0.944 \pm 0.002$ reported by [6] for full-network VI. We will continue tuning LIVI and include these results in our paper, but we wish to highlight that the hypernetworks used for LIVI in all experiments, including this one, only contain about twice as many parameters as the networks they are modelling the posterior over - the same number of parameters required for mean-field VI. Thus, we hope this experiment shows that LIVI can scale to modern network sizes. **Experimental section** While we respect the reviewer's opinion, we wish to state that we based our setup on experiments and baselines from [5, 7, 9]. By following their setups, as other works do too, a reader can compare and place LIVI in the broader literature on the topic. Moreover, we compare to KIVI [10] and AVB [11], which both focus on implicit VI, and hence we believe our experimental evaluation is up-to-date and comprehensive. Regarding the suggested references, we briefly discuss them below. We understand that the list is not meant to be comprehensive, but we wish to clarify why we did not include them in the submission. 
In short, LIVI is an approximate inference method where the downstream tasks are merely an assessment of the quality of our approximation, not the goal itself. We have therefore focused our experiments on comparisons with other approximate inference methods, in particular implicit VI methods, not general methods for solving the downstream tasks. SWAG, proposed in [1], was used as a baseline in [9], which we compare LIVI to. [9] found SWAG difficult to tune and DEs to be a stronger baseline overall, which is the reason we chose DEs to compare with. We will clarify this in the paper. Deep kernel learning, proposed in [2], aims at making Gaussian process (GP) modelling more expressive. As the GP posterior is available in closed form, this work is not directly related to the task we are trying to solve. [3] considers BNN inference in function-space using process-based inference, not distribution-based inference, which LIVI handles, so it is a different line of research. [4] introduces an infinitely deep BNN and a VI scheme for this model, and [5] focuses on continuous-depth neural networks, where they use an SDE to implicitly parametrise the posterior over the infinitely many weights. While both are interesting, they do not present inference methods for general BNNs, which is what we are concerned with. Both [6] and [7] focus on partially-stochastic networks in contrast to fully-stochastic networks, which was our primary target with LIVI. [6] is mainly a discussion and comparison paper, arguing that networks do not need to be fully stochastic - an open discussion in the community to which LIVI adds new evidence. While [6] compares a few simple strategies for choosing subsets of weights, their aim is to compare these to fully stochastic networks only, not to suggest an optimal selection strategy. They do show results for a WideResNet(28,10) on CIFAR-100, and we will thus compare to their results. 
The partially stochastic method of [7] builds on linearised Laplace, which we now compare to using the same library from [9] that both [6] and [7] use. We also consider last-layer Laplace approximations, which is a partially stochastic method as well. We do appreciate the list of references, many of which are important to discuss in the context of LIVI. We will therefore add them to our related works section. References: [1] Maddox et al. (2019). A simple baseline for Bayesian uncertainty in deep learning. NeurIPS. [2] Wilson et al. (2016). Deep kernel learning. AISTATS. [3] Sun et al. (2018). Functional Variational Bayesian Neural Networks. ICLR. [4] Nazaret & Blei (2022). Variational inference for infinitely deep neural networks. ICML. [5] Xu et al. (2022). Infinitely deep Bayesian neural networks with stochastic differential equations. AISTATS. [6] Sharma et al. (2023). Do Bayesian Neural Networks Need To Be Fully Stochastic? AISTATS. [7] Daxberger et al. (2021). Bayesian deep learning via subnetwork inference. ICML. [8] Nado et al. (2021). Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning. arXiv:2106.04015. [9] Daxberger et al. (2021). Laplace Redux - Effortless Bayesian Deep Learning. NeurIPS. [10] Shi et al. (2018). Kernel Implicit Variational Inference. ICLR. [11] Mescheder et al. (2017). Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks. ICML.
Summary: This work proposes one approach to implicit variational inference that avoids using adversarial training. This is achieved by first applying a small amount of Gaussian noise to the implicitly generated weights $\theta$ to ensure that they are equipped with a valid distribution (enabling valid divergences), and then the now valid ELBO is approximated through linearizing the entropy regularization term. A further lower bound to this approximation is also proposed that allows for better scaling in high-dimensional settings, which is common for applying VI to neural models. Strengths: I felt that the approach presented was well justified, easy to follow in terms of motivation and development, and enables much more complex settings for variational inference to be applied to (in a stable manner) without having to suffer from the typical limiting mean-field assumptions. The experimental results were impressive as well, seemingly achieving much better uncertainty quantification than the other competing methods while still retaining quality predictive performance. Weaknesses: The paper states in the contributions (line 49) that it "derive[s] a novel lower bound for variational inference"; however, I believe this is not quite the case. It is my understanding that the "novel lower bound" being mentioned here refers to $\mathcal{L}'$ and the further lower bound $\mathcal{L}''$ used for better scaling. This statement is in direct contradiction to equation 18 that states: $\log p(\mathcal{D}) \geq \mathcal{L}(\gamma) \approx \mathcal{L}'(\gamma) \geq \mathcal{L}''(\gamma)$. Strictly speaking, $\mathcal{L}'$ is not a lower bound, but rather an approximation to an actual lower bound $\mathcal{L}$. To be clear, I do not think this is bad by any means, it just needs to be communicated clearly. Aside from this, I think the experiments can be bolstered with a few additional comparisons. 
Namely: - The impact of using $\mathcal{L}''$ over $\mathcal{L}'$ when the latter is still eligible (i.e., when using smaller models such as the UCI dataset tasks). - How much performance (both in and out of distribution) differs between your proposed method and HMC (or other MCMC methods). This again would need to be done in low-dimensional settings, but I believe it should be feasible for at least a smaller neural network or linear model on some of the UCI dataset tasks. Lastly, one of the contributions listed cited a novel generator architecture (line 53); however, the details of this seem to be relegated to the appendix. In future revisions, I believe if you are going to cite this as one of the major contributions then the details should at least be partially included in the main paper. I understand this probably wasn't done due to space limitations, but it is worth considering in my opinion. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I do not have any direct questions, see the weaknesses for my main comments to be addressed please. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors did discuss the limitations adequately. Negative societal impact is not directly applicable here in my opinion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for a careful review of our work. We are delighted that you find our exposition easy to follow. We address your individual questions and suggestions below. **W1: The contributions state a novel lower bound; however, it is actually an approximation to a lower bound.** Thank you for highlighting this. We agree that we should have been clearer that $\mathcal{L}'$ is an approximation to a lower bound. We will clarify this in the paper. **W2: An experiment on the impact of using $\mathcal{L}''$ over $\mathcal{L}'$ would be good.** Thank you for suggesting this experiment - it would indeed be informative. We now include experiments comparing our two bounds, $\mathcal{L}'$ and $\mathcal{L}''$, on the UCI datasets, see tables 1 and 2 of the attached PDF. The results show that models trained with either $\mathcal{L}'$ or $\mathcal{L}''$ perform quite similarly both in terms of test log-likelihood and test RMSE, giving empirical justification for the lower bound on $\log\det(J J^\top)$. **W3: An experiment on the performance difference between LIVI and HMC in low-dimensional settings would be good.** This is also a great suggestion. For a qualitative assessment, Figure 1 of the main paper shows such an experiment for a toy problem. Furthermore, we now include experiments comparing our two bounds, $\mathcal{L}'$ and $\mathcal{L}''$, and HMC on the UCI datasets in tables 1 and 2 of the attached PDF. The results show that the model trained with $\mathcal{L}'$ performs close to or as well as HMC, suggesting that the local linearisation does not harm the expressivity dramatically. A model trained with $\mathcal{L}''$ is not much worse. **W4: The novel generator architecture should be briefly discussed in the main paper.** Thank you for pointing this out; it is a very good point. We will add a short description of the architecture to the main paper. 
--- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for answering my concerns and providing more experimental results. I am satisfied by the response and maintain my original score assuming promised changes are incorporated in the camera-ready version of the paper.
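The construction debated in this thread - add a small amount of Gaussian noise to the sampler output so the weights are equipped with a valid density, then linearise the sampler so the entropy term reduces to a Gaussian log-determinant - can be sketched in a few lines of toy numpy. The two-layer tanh generator and all sizes below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 5           # latent dim < weight dim: the implicit density lies on a manifold
sigma = 1e-2          # std of the Gaussian noise added to the sampler output

# Hypothetical tanh generator g_gamma: R^k -> R^d (weights chosen at random).
W1 = rng.standard_normal((d, k))
W2 = rng.standard_normal((d, d))
g = lambda z: W2 @ np.tanh(W1 @ z)

z = rng.standard_normal(k)
theta = g(z) + sigma * rng.standard_normal(d)   # noisy weight sample

# Analytic Jacobian of g at z for this toy generator, shape (d, k).
J = W2 @ np.diag(1.0 - np.tanh(W1 @ z) ** 2) @ W1

# Local Gaussian covariance of the noisy sampler; its log-determinant is
# what enters the linearised entropy term.
C = J @ J.T + sigma**2 * np.eye(d)
sign, logdet_C = np.linalg.slogdet(C)
assert sign > 0       # positive definite thanks to the sigma^2 jitter
print(f"log det C(z) = {logdet_C:.3f}")
```

With $\sigma = 0$ and $k < d$, the matrix $J J^\top$ alone would be singular and its log-determinant ill-defined; the $\sigma^2 I$ term is precisely what restores a valid density.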
Summary: The paper studies the problem of approximating high-dimensional multi-modal posteriors through neural samplers specifying implicit distributions. While Bayesian methods promise a variety of benefits in terms of generalization and calibrated predictions, in practice they have seen limited success due to the intractability of exact Bayesian approaches and the tradeoffs made in approximate Bayesian methods. Implicit Variational Inference provides an alternative to approximating exact Bayesian posteriors by maintaining distributions implicitly by transforming samples from simple distributions, allowing it to admit much richer distributions. Implicit Variational Inference typically requires density-ratio-based adversarial objectives, which can fail on high-dimensional problems (e.g. parameters of neural networks). The authors note two major issues in existing approaches: a) the KL being ill-defined due to the implicit density lying on a low-dimensional manifold, and b) the intractability of the entropy of the implicit density and its gradients. To tackle these issues, the authors first introduce Gaussian noise to the output of the sampler, making it a Gaussian DLVM and resulting in a well-defined KL over the parameter space. Next, the authors approximate the generator with a local linearization, resulting in a Gaussian approximation of the output density, and obtain an easy-to-compute approximation of the differential entropy of the output density. This results in a novel approximation to the ELBO based on the entropy approximation, which is scalable to high-dimensional parameters. The authors discuss two variants based on computing the whole Jacobian or using a differentiable lower bound on the determinant, which trade off compute and quality of the approximation. Finally, the authors evaluate the method on a variety of tasks, including impressive results on fairly large BNNs (WideResNet). 
Strengths: * The paper studies the important problem of approximating expressive Bayesian posteriors on high-dimensional spaces. Due to the general applicability and promising results, the work is significant and relevant to the community. * To the best of my knowledge the main contributions of the paper namely addressing the ill-defined KL, local linearization for the entropy and the LIVI bound on the ELBO, are all novel. * Introducing Gaussian noise to the sampler to induce a Gaussian DLVM is a neat and simple way of fixing the KL with minimal additional restrictions to the model * Similarly, the local linearization of the neural sampler is a nice idea to obtain a cheaper estimator for the entropy. * The experimental results are quite impressive - in particular the results on the WideResNet. I also appreciate the authors including the code with their submission to aid reproducibility of the results. * Overall the paper is well-written with a clear exposition of ideas and most relevant details covered. Weaknesses: * The paper proposes a local linearization to make the entropy computation tractable. However, what is not clear to me is how the local-linearization affects the expressivity of the posterior and general performance. The experiments indicate that the effect is not large, but these are still “relatively” simple tasks so a thorough study of this would be useful * The empirical comparisons also do not consider alternative approximate Bayesian methods. * Some recent work [1] proposes a closely related approach which might be worth discussing in the paper [1] Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks. Kristiadi et al., NeurIPS 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * What do you think would be the critical challenges in applying the method to even larger modern networks, e.g. transformers? * What is the challenge in implementing the KIVI baseline? 
(since that is not included in all the experiments as the authors note) * The runtime of the method is not explicitly mentioned anywhere in the paper except for the remark relative to Deep Ensembles in the appendix. This is important information which should be included in the main paper. * Minor typos: L117 “in the following” -> “in this section” L123 “aGaussian” -> “a Gaussian” Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Despite the impressive results in the paper, there still remains a gap between the models studied in the paper and the size of models considered in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful review. We very much appreciate that you find our work significant and relevant to the community and that the manuscript is well-written. We answer your specific concerns below. **W1: How does the local linearisation affect the posterior in terms of expressivity and performance?** This is a great question. We linearise the neural sampler $g_\gamma(z)$, but only when it is used within the estimation of the intractable entropy and its gradients (see Eqs. (9) and (12)). However, this could indeed lead to sub-optimality in how we train the implicit approximation and in the quality of the resulting distribution. We hope to obtain a highly flexible implicit distribution, and as you point out, our results indicate that the obtained posterior approximation is at least as good as the one found by our competitors. To investigate this further, we have added a comparison of our method with HMC on the UCI datasets, see tables 1 and 2 in the attached PDF. The results show that the model trained with $\mathcal{L}'$ performs close to or as well as HMC, suggesting that the local linearisation does not harm the expressivity dramatically. If the reviewer has suggestions for other experiments assessing the expressivity of the posterior, we would be happy to hear them. **W2: The empirical comparisons do not consider alternative approximate Bayesian methods.** We agree that comparing our proposed method to other approximate Bayesian methods is crucial. In the paper, we consider approximate Bayesian methods like MFVI, Laplace and Adversarial Variational Bayes, the last of which also falls into the category of implicit methods. We now also include experiments with two versions of full Laplace, see tables 1 and 2 in the attached PDF. Furthermore, we have added results for multiplicative normalising flows (MNF, Louizos and Welling, 2017, [1]) on the UCI datasets (tables 1 and 2) and MNIST (table 3). The MNF models are difficult to get to converge, however. 
All these results have been added to our paper too. If you have specific baselines in mind that you think we are missing to make a comprehensive benchmark, we will happily take them into consideration. [1] Louizos and Welling, "Multiplicative normalizing flows for variational Bayesian neural networks", ICML 2017. **W3: Posterior refinement (Kristiadi et al., 2022) is closely related and should be discussed.** Thank you very much for pointing us to this work. It is indeed relevant, and we will add it to the related works section of the paper. Posterior refinement works by using the Laplace approximation as a clever base distribution for a normalising flow, which is then optimised to model the posterior of the BNN. This can work well given a sufficiently expressive flow, which, however, typically comes at a high computational cost, especially in high dimensions. This is why Kristiadi et al. (2022) focus on last-layer approximations. In contrast, LIVI uses a neural sampler to implicitly represent the posterior of the BNN, which means we can model the full posterior in an expressive manner. While there are pros and cons to both methods, it is difficult to imagine posterior refinement being scaled to millions of dimensions as we do here. **Q1: What is needed to apply LIVI to large, modern networks?** The hypernetwork poses the biggest challenge, in our opinion, as this is the network that is supposed to efficiently parametrise the high-dimensional implicit posterior. We do think that there are still ways to go beyond what we have done here. One approach would be to consider independent hypernetworks for each layer of the BNN, which would reduce the size of the individual hypernetworks at the cost of losing correlations across layers of the BNN. This alone should significantly increase the modelling capacity of the framework. 
One could also consider efficient ways of putting priors over deep networks to curb dimensionality, such as [1], which proposes implicit BNNs with priors over activations rather than weights and biases, thus reducing the dimensionality of the latent variables, and [2], which assumes priors over units in a neural network and models weights using these latent distributions. [1] Trinh et al., "Scalable Bayesian neural networks by layer-wise input augmentation", arXiv 2020, https://arxiv.org/abs/2010.13498 [2] Karaletsos et al., "Probabilistic Meta-Representations Of Neural Networks", UDL workshop 2018, https://arxiv.org/abs/1810.00555 **Q2: Why was KIVI not implemented?** The official code for KIVI is written in a probabilistic programming subpackage developed by researchers at Tsinghua University and used by researchers there. The package is built with TensorFlow 1 and hence is not easy to port. We tried to implement our own version, but we were not confident in our implementation and decided not to include results from this method when they were not available in the original paper. **Q3: The runtime of the method should be included in the paper.** We agree that this is a very important metric by today's standards, and we will include it in the main paper if space allows, otherwise in the supplementary. The only reason the runtime was not reported in the submission is that, during the deadline rush, the experiments were performed on different GPU devices, and hence we did not have one-to-one time comparisons for all the methods. We have now completed benchmark measurements on MNIST, see table 4 in the attached PDF, which shows the training time required for convergence as well as the memory consumption for LIVI and four other methods. When compared with other expressive VI methods like AVB and MNF, LIVI is faster to train and consumes less memory. **Q4: Minor typos.** Thank you very much for pointing these out. They have been corrected. 
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the response and apologies for the delayed response! > How does the local linearisation affect the posterior in terms of expressivity and performance? Thanks for the clarification and additional results! It is indeed a bit challenging to test claims about expressivity, but I appreciate the additional experiment. It indeed appears to be the case that the performance / expressivity are not impacted considerably. > The empirical comparisons do not consider alternative approximate Bayesian methods. Thanks for the additional results. > Posterior refinement (Kristiadi et al., 2022) is closely related and should be discussed. Thanks for the explanation. I think this does merit some additional experimental validation but at the very least I hope the authors include this in the paper. > What is needed to apply LIVI to large, modern networks? Thanks for sharing these insights. On the topic of hypernetworks I would also mention recent advances (e.g. [1]) on this topic. > The runtime of the method should be included in the paper. Thanks for these results, these are indeed quite impressive and it would be great to have these in the paper. I am satisfied by the author response and encourage the authors to make the relevant changes for the camera ready version. I will maintain my score. [1] Knyazev, B., Hwang, D., & Lacoste-Julien, S. (2023). Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?. arXiv preprint arXiv:2303.04143.
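The per-layer hypernetwork idea floated in the exchange above (Q1) can be sized with a quick back-of-the-envelope count: a joint hypernetwork's final dense layer, mapping a hidden width $h$ to all $d$ sampled weights, costs $h \cdot d$ parameters, while independent per-layer hypernetworks with smaller hidden widths pay only $\sum_l h_l d_l$, at the price of dropping cross-layer correlations. All widths and layer sizes below are hypothetical:

```python
# Hypothetical BNN layer sizes (weights per layer) and hypernetwork widths.
layer_dims = [784 * 256, 256 * 256, 256 * 10]
d = sum(layer_dims)

h_joint = 512      # hidden width of one joint hypernetwork over all weights
h_layer = 64       # hidden width of each independent per-layer hypernetwork

# Parameter cost of the final dense layer(s) mapping hidden units to weights.
joint_cost = h_joint * d
per_layer_cost = sum(h_layer * d_l for d_l in layer_dims)

print(f"joint: {joint_cost:,} params, per-layer: {per_layer_cost:,} params")
assert per_layer_cost < joint_cost   # exactly 8x smaller here (h_joint / h_layer = 8)
```

The saving here comes entirely from the smaller per-layer hidden widths; whether those widths suffice in practice is exactly the open question of the thread.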
Summary: ## Post Rebuttal Update I have engaged with the authors for the rebuttal, and found their responses informative, prompting me to increase my score from a 7 to an 8. ## Original Review The paper addresses two issues that are common in implicit variational inference methods such as amortised neural samplers, namely - 1. The implicit density often lies on low-dimensional manifolds, making the KL infinite/ill-defined 2. The gradients and entropy of the implicit density are intractable The paper addresses these issues by (i) adding Gaussian noise to the output of the neural sampler, making it continuous w.r.t $\theta$, and (ii) linearising the neural sampler, resulting in a Bayesian linear model approximation to the non-linear model, giving closed-form solutions for the distributions. On different benchmarks the authors show that these methods work well on a range of small-scale experiments compared to other state-of-the-art methods, such as last-layer Laplace and deep ensembles. Strengths: The paper is well motivated, has clear mathematical development, and solves common issues in implicit variational inference models. I like the use of linearisation to make the entropy and gradients tractable by getting closed-form solutions. The experiments are clear, and the baselines are well-considered. The results are also quite impressive in this domain, as it is often really difficult to beat deep ensembles, and the method seems to consistently perform really well. Weaknesses: In general I think the paper is well-written, however I have one major criticism - 1. I'm not convinced that the current approximation to $\log\det(J J^T)$ has been well-motivated or properly ablated. It would be nice to see some ablations on smaller-scale problems where calculating the full log determinant is tractable, and comparing it to the approximation made using just the highest singular value. Or doing a sweep over adding subsets of singular values vs the highest. 
Or even, plotting the eigenspectrum of $J J^T$ to show that it is true that the highest singular value is often much more dominant than the others. 2. If the paper is using the full-rank Jacobian of the neural sampler in order to estimate the entropy, I think a fairer comparison to make would be against full Laplace, not last-layer Laplace, which should be possible for UCI datasets and MNIST at least, using the Laplace library the authors cite. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I think there are some really interesting connections between the obtained linearised implicit variational model, and existing methods such as linearised Laplace and regular variational inference, and I would love for there to be a small discussion about these things potentially. For example, in linearised Laplace, you assume a fully Bayesian posterior over a NN parameterised by $\theta$, then linearise the model around the MAP estimate, resulting in a tractable Gaussian approximation to the posterior. However, this posterior is more rich compared to a mean-field variational approximation, because the covariance of the posterior is given by $J_{\theta}^T \Lambda J_{\theta} + \sigma^2 I$, where $\Lambda$ is the prior over weights, and $\sigma^2$ is the noise variance. This form of the posterior looks very close to what is obtained by implicit variational inference, where the neural sampler can be considered similar to the NN model in linearised Laplace. I would be really interested in seeing discussions about these potential connections if possible. In fact, optimising the marginal likelihood for a linearised Laplace model is akin to doing ELBO with a Gaussian approximation. I would be really interested in seeing these parallels. 2. 
Given that the authors are using the entirety of the Jacobian of their neural sampler, and comparing to only last-layer Laplace, where only the last layer is modelled probabilistically, I would be really interested in seeing if they can run experiments on full linearised Laplace, especially on the small UCI and MNIST datasets, where this should be tractable. In fact, there are methods that perform full probabilistic inference for linearised Laplace using samples from the posterior, such as in [1]. 3. I am also really interested in seeing what approximations to the Jacobian are best for estimating log det (J J^T), such as last-layer only, or through samples using Hutchinson's estimate, or by using more than one singular value and ablating through what the optimal number of singular values to consider is. [1] Antorán, Javier, et al. "Sampling-based inference for large linear models, with application to linearised Laplace." arXiv preprint arXiv:2210.04994 (2022). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful evaluation of our paper. We are happy that you liked our proposed method and found the results impressive. We respond to your specific questions and concerns below. **W1: The lower bound on $\log\det(JJ^\top)$ has not been motivated sufficiently nor properly ablated.** The motivation for the bound on $\log\det(J J^\top)$ comes from Geng et al. (2021) [1]. To clarify a possible source of confusion, the bound uses the *smallest* eigenvalue, not the largest. Essentially, it simply states that the sum of all log singular values is larger than (or equal to) the smallest singular value times the number of dimensions. While one can view this as a trivial and potentially quite loose bound, the smallest singular value is efficiently computed using the LOBPCG algorithm, and efficiency is very important for the high-dimensional problems we are focusing on in this work. To empirically assess the effect of the bound, we now include experiments to compare our two bounds, $\mathcal{L}'$ and $\mathcal{L}''$, which demonstrate the effect of the bound on the log determinant. The results, which can be found in tables 1 and 2 of the attached PDF, show that models trained with either $\mathcal{L}'$ or $\mathcal{L}''$ perform quite similarly both in terms of test log-likelihood and test RMSE, giving empirical justification for the lower bound on $\log\det(J J^\top)$. [1] Geng et al., "Bounds all around: training energy-based models with bidirectional bounds", NeurIPS 2021. **W2: Full Laplace is a fairer comparison than last-layer Laplace.** This is a great suggestion. Full Laplace using a full-rank Hessian is not feasible, even on MNIST, due to the size of the BNN that we use in our experiments. 
However, we have added results for two versions of full Laplace to the paper, 1) a non-linearised full Laplace using a low-rank approximation to the Hessian, and 2) a linearised full Laplace using a KFAC factorisation of a Generalised Gauss-Newton approximation to the Hessian, see table 3 and figures 1 and 2 in the attached PDF. The non-linearised version of full Laplace underfits due to the quite crude approximation to the posterior over the full network. The linearised version works slightly better in terms of ECE and NLL on the rotated-MNIST benchmark (figure 1) and in the OOD entropy-CDF test (figure 2). Note, however, that linearised Laplace is not directly comparable to our proposed model, see the discussions for Q1 and Q2 below. **Q1: It would be nice to see a discussion on the connections between LIVI and existing methods such as linearised Laplace and regular variational inference.** We would like to clarify that we do not linearise the posterior; we linearise the neural sampler $g_\gamma(z)$, but only when it is used within the estimation of the intractable entropy and its gradients (see Eqs. (9) and (12)). The resulting posterior approximation is, therefore, still a highly flexible (non-linearised) implicit distribution. It is correct that the terms appearing in a Laplace posterior and our linearised approximation are similar, but it is hard to draw direct analogies between the two approximate Bayesian methods as they are fundamentally quite different. Whereas linearised Laplace linearises the BNN and the predictive function, we linearise the neural sampler/hyper-network within the variational distribution $q_\gamma(\theta)$, thus obtaining the approximation $\tilde{q}_z(\theta)$, but only when it is needed to approximate the entropy. This is also the reason why we do not linearise our BNN for predictions, which is common practice with linearised Laplace.
We acknowledge that the distinction was not presented clearly enough in the original manuscript and will update the paper accordingly. **Q2: Please run experiments on full linearised Laplace on the UCI datasets and MNIST.** Thank you for this suggestion. We have added results for full Laplace without linearisation (low-rank Hessian) and full Laplace with linearisation (KFAC factorisation of a GGN approximation) to the paper and the PDF attachment here, see table 3 and figures 1 and 2 for results on MNIST. We tried our best to make the Laplace library work for UCI regression but could not due to errors thrown by the second-order backends used (BackPack and Asdl); we will keep working on this and report back soon. While these are both relevant baselines, we would like to emphasise that, because of the reasons noted above, our results cannot be compared one-to-one with linearised Laplace as that posterior measures the uncertainty of a generalised linear model approximation and not the actual BNN. **Q3: Are other approximations to the Jacobian better for estimating $\log \det (JJ^\top)$?** This is a very valid question, although one that is perhaps best answered in future work given its scope. We did not consider a last-layer-only approximation, as we have focused on the conventional Bayesian treatment where all latent variables (weights and biases) of the respective BNNs in all experiments have been modelled probabilistically. Basing the bound on more singular values than just the smallest one is a good idea, as it might give us a tighter bound. However, a motivating factor for using just the smallest singular value is that this can be found in linear time using the LOBPCG algorithm and, as we show in tables 1 and 2 in the attached PDF, the bound empirically works well. Using Hutchinson's estimator is a good idea too; however, empirically this estimator provides an upper bound, not a lower bound; please see Geng et al.
(2021) who compare the smallest singular value estimate against Hutchinson's estimator for $\log \det(J^\top J)$. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, and for the clarifications. I see now that I had a fundamental misunderstanding in where the models are being linearized, and it is indeed not fair to compare this method to Linearized Laplace. However, since LLA is a high performing method in this domain anyways, I appreciate the authors running LLA as a comparison. Regarding the linearized LLA version, I'm curious why the authors need to run a KFAC approximation to the GGN matrix? Is it not possible to invert the GGN matrix in closed form using a decently sized GPU? It might not be, and this is merely a matter of curiosity, as the current results the authors added are quite satisfactory to me. Thanks for the pointers regarding $JJ^\top$, and the comparisons of $\mathcal{L}'$ and $\mathcal{L}''$ as well; these clear up any doubts I might have had. I'm happy to increase my score to an 8, and thank the authors for an engaging rebuttal.
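To make the bound under discussion concrete, here is a minimal numerical sketch (a hypothetical illustration, not the paper's code: it uses a random stand-in Jacobian $J$ and a full SVD in place of the LOBPCG routine the rebuttal describes) of the inequality $\log\det(J^\top J) = \sum_i \log \sigma_i^2 \ge k \log \sigma_{\min}^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 200, 10                           # stand-in sizes: k latent dims -> p weights
J = rng.standard_normal((p, k))          # hypothetical Jacobian of the neural sampler

s = np.linalg.svd(J, compute_uv=False)   # singular values of J
exact = np.sum(2.0 * np.log(s))          # log det(J^T J) = sum_i log sigma_i^2
bound = k * 2.0 * np.log(s.min())        # k * log sigma_min^2, the lower bound

assert bound <= exact                    # holds since each log sigma_i^2 >= log sigma_min^2
```

Since every $\log \sigma_i^2$ is at least $\log \sigma_{\min}^2$, the sum of $k$ such terms dominates $k \log \sigma_{\min}^2$; the appeal of the bound is that only the smallest singular value is required, which LOBPCG can find without a full decomposition.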
Rebuttal 1: Rebuttal: We thank all the reviewers for carefully reading our paper and providing constructive feedback. We appreciate that the reviews found that “the paper is well motivated, has clear mathematical development, and solves common issues” (MstJ) and that “experimental results were impressive as well” (Q4Sr). We have replied to each of you individually and have further attached a PDF with additional results that were requested. We look forward to further discussions with you. Pdf: /pdf/6ac7f9554b0f24262059fce5158fe5c8ca9adabd.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management
Accept (poster)
Summary: This paper tackles issues with using RL for dialogue management related to covariate shift when using offline RL and the requirement for many online human-bot interactions when using online RL, and both suffering from a large action space. Current NLP models lack the ability to plan dialogue interactions beyond the next interaction, which is a crucial aspect of successful conversation. To address these points, the authors propose a mixture-of-expert language model (MoE-LM) and design several offline RL algorithms with different benefits. The structure of the MoE-LM is a hierarchical one, where there are a number of experts each optimised for a different intent (things like empathy, rage, etc.) and a dialogue management model that chooses one of the expert utterances conditioned on the conversation history. This latter aspect should help with non-myopic objectives that are naturally part of conversations. The authors compare their MoE-LMs trained with several different algorithms to SotA offline RL algorithms as well as behavior cloning and a bandits method that greedily optimises the next conversation turn (just reward maximisation). On two datasets, they show their method outperforms the offline RL baselines in terms of return in a simulated conversation (DialoGPT). They also show that the best performing RL algorithm for MoE-LMs which they propose has a more uniform selection of experts than the worst performing one. Strengths: Very clear and detailed explanation of the algorithms, well-motivated and important problem. Strong results of proposed method compared to baselines on two different dialogue tasks. Weaknesses: Some of the claims seem to not be substantiated by the results of the experiments: - It seems likely that the MoE-LMs generate more diverse conversations, but strictly you can't assume that based on a higher return in terms of sentiment; can you supplement the results with metrics that evaluate diversity? 
- It's not clear to me from experiment 2 that the MoE method is able to do long-term planning. Again, it seems likely given the performance increase over the offline methods and especially the Bandit, but simply based on the higher return in sentiment this can't be claimed. Can you isolate / evaluate the effect of long-term planning? How do we know from these results that the method is actually better at long-term planning and the performance increase is not due to other aspects that result in a better sentiment? - I might have missed it but there does not seem to be an explicit treatment of the sample-efficiency of your method over others; can you quantify this? It's pretty hard to interpret these results based solely on sentiment calculated over conversations with a simulated user. A baseline that would help interpret this to some extent is a simple prompted LLM to hold these conversations; what kind of sentiment would it achieve? Additionally, can you do a small human eval comparing your method to bandits and the best-performing offline RL method? Perhaps this can be combined with the questions above about diversity and long-term goal achieving / planning, asking humans to rate the conversations along these axes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Answers to the questions written in Weaknesses would address my main concerns with this paper. Some small other questions/suggestions that are not related to my score: - I don't really follow contribution 2; how does leveraging pre-trained LMs and prior regularization result in high-level dialogue management? - I would rewrite line 58-62, very hard to parse with usage of "-- --" twice - Line 102 should be textual citation (citet not citep) - Table 1 and 2 need more comprehensive captions. Without reading the main text a reader should understand what the numbers and error bars refer to. What are your methods, what are the takeaways from these numbers, etc. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There's no separate limitations section or discussion of the limitations of your proposed method; that would be a welcome addition. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the useful feedback. We will address the individual comments in the following shortlist of responses. ### Diversity of MoE responses; Diversity metrics We concur with the reviewer that high sentiment improvement in a conversation doesn't necessarily equate to utterance diversity. Though our return metric doesn't directly promote diversity, diversity is facilitated by the diverse semantic representation space and various candidate utterance generators within the pretrained MoE LMs. Empirically, as seen in Figures 2 and 3, the MoE-specific offline RL methods, while scoring some of the best returns, also tend to select responses from more diverse experts. This implies a greater variety in utterances. Conversely, utterance diversity doesn't guarantee high conversation return. For instance, the agent trained with the KLC offline RL method produces diverse responses but ranks among the worst performers, as shown in Table 1. ### Can the MoE method perform long-term planning; How to justify the efficacy of long-term planning The reviewer astutely questioned the long-term planning capabilities of the MoE-based RL method. Initially, we didn't compare with bandit-based baselines, as the advantages of RL-based methods for long-term planning are well-established in dialogue management (see Snell et al., 2023, Jaques et al., 2020). We followed similar considerations to Jaques et al., 2020, using the same datasets. To address the question, we evaluated the MoE-VRL offline RL method against a bandit agent, using a discounting factor of 0.0 (gamma=0.0), on the Cornell dataset. The 5-step cumulative model-based return for the bandit agent was **1.53**, significantly lower than the MoE-VRL results at a discounting factor of 0.8 (**3.62**). This experiment highlights the effectiveness of RL-based MoE dialogue management for long-term planning.
We will include a discussion and the results in the final paper to highlight the benefit of long-term planning. ### Sample efficiency The improved efficiency arises from both modeling and algorithmic choices. First, we adopt the MoE LM framework (Chow et al., 2023) and greatly simplify the action space of the RL dialogue management problem, as we no longer rely on RL to directly control the token-level autoregressive generation of the language model but rather to select the best utterance to output at the current conversation turn from the pool of candidate responses generated by the MoE-based model. Second, we developed our offline dialogue RL planning algorithms for MoE-LMs under the IQL offline RL methodology, which was shown in Snell et al., 2023 to improve performance. While these factors intuitively improve sample efficiency, we acknowledge the reviewer’s comment that an explicit analysis has not been conducted in our paper, and therefore we will update the conclusion section in the final paper to clarify these points and soften our claims about sample efficiency improvement. ### Baseline comparisons with prompting LLMs Comparing the responses of our MoE-LM agents with the ones generated by prompting LLMs is an interesting direction for future research. We would like to bring to the reviewer's attention that (i) rather than developing SOTA chatbots, the motivation of the work is to research different offline RL methods that make MoE LMs effective for multi-turn dialogue management; and (ii) the MoE LMs in our experiments are much smaller (~42M parameters) than standard LLM-based chatbots that generate diverse responses with various personas (e.g., a full GPT2 model has 1.5B parameters). With such a difference in model sizes, one may not expect the current MoE LMs to match the behaviors of any commercialized chatbots.
On the other hand, it is already quite impressive to see that MoE LMs do possess different language skills and personas, and have the capability to “smartly” switch among different language skills to improve the conversation. ### Human evaluation on diversity and RL performance Appendix E provides a human evaluation of different offline RL methods w.r.t. fluency and sentiment improvement in the overall conversation. Performance comparison of RL-based agents and their myopic, bandit counterpart has only been done via their corresponding cumulative returns (see above comments). We will add the human evaluation of the bandit agent in the final paper. ### Contribution 2 This contribution summarizes our arguments about the sample-efficiency improvements (see above comments) of specialized offline RL methods for dialogue management with MoE-LMs, via leveraging the MoE structure to simplify the RL dialogue planning problem and utilizing specialized offline RL methods to better solve this problem. We will clarify the presentation of contribution 2 in the introduction by including the above explanations. ### Table 1 & 2 We acknowledge the reviewer’s confusion caused by the condensed presentation of these tables and will include more detailed explanations in the final paper. These tables present the average return (discounted sum of per-turn rewards in the dialogue conversations) of the dialogue agent of interest, accumulated over a 5-turn conversation. The return is averaged over 100 conversations, and the standard error is also provided. A higher value indicates the corresponding agent is able to perform better dialogue planning, resulting in better overall sentiment improvement. ### Formatting issues Thanks for suggesting several modifications to improve the readability of our paper; we will incorporate them in the final draft. ### References Snell, C., Kostrikov, I., Su, Y., Yang, M., & Levine, S. (2023).
Offline RL for Natural Language Generation with Implicit Language Q Learning. https://arxiv.org/abs/2206.11871 Chow, Y., Tulepbergenov, A., Nachum, O., Ryu, M., Ghavamzadeh, M., & Boutilier, C. (2022). A Mixture-of-Expert Approach to RL-based Dialogue Management. https://arxiv.org/abs/2206.00059 --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: **Diversity** Though I understand that your method promotes diversity, that doesn't mean you shouldn't also evaluate diversity if you want to claim it actually generates diverse dialogue. Just saying it probably will generate diverse conversations based on the method without evaluating it explicitly does not suffice. **Long-term planning** I still don't see how this experiment shows that your method is better at planning than just any non-bandit RL method. Again, the higher return might be due to other aspects. **Baseline comparison** I understand that you won't be able to surpass SotA LLMs without also using those as a base, of course; however, my comment is directed at a baseline that helps in interpreting sentiment scores. **Human eval** Since you claim in the main text that your methods outperform offline RL baselines, and since the human eval in Appendix E seems to show 2/3 of your methods do worse than baselines in terms of fluency and sentiment, actually showing the method is more sample-efficient seems important for the current work. I would like to see the human eval section brought to the main text. All in all, I remain with my points that the claims about the diversity and sample-efficiency as well as overall performance (shown by the human eval) are overstated, and will keep my rating. --- Reply to Comment 1.1.1: Comment: **Diversity** We appreciate your feedback on the evaluation of diversity. As previously mentioned, our work builds upon the work by Chow et al., 2023, which has already demonstrated that MoE dialogue managers are capable of generating diverse utterances with different experts.
Similar to Chow et al., 2023, Tables 3 and 4 in our appendix also quantitatively showcase the diversity and skill-related scores of our MoE experts. Furthermore, our contribution, as illustrated in Figure 2, is to develop a compositional dialogue manager that better utilizes the distinctiveness of these experts while achieving higher returns. This implies that the diversity of our MoE utterances stems not only from the diversity of each individual MoE expert but also from the RL-based utilization of diverse intents. If deemed necessary, we can also add a diversity-based rater’s evaluation in the final paper. **Long-term planning** Thank you for raising this point. To the best of our knowledge, the standard measure in RL for evaluating long-term planning is to compare cumulative returns. While we acknowledge that this might not be the perfect metric for dialogue planning, we emphasize that our primary objective in this paper is to introduce RL methods to the MoE framework, rather than to develop SOTA planning-based dialogue managers for particular applications. Our evaluation methodology is consistent with Jaques et al., 2020, which also utilized similar metrics and evaluations. That being said, designing evaluation methods to gauge the planning ability of dialogue managers presents an exciting avenue for future research. Please also let us know if you have any specific ideas in mind. **Baseline comparison** Our apologies for the earlier oversight. We understand now that you're suggesting using a Large Language Model (LLM) as an oracle for sentiment scoring. While it's a valuable suggestion and offers a compelling reward signal for future studies, in this work we decided to stick with a RoBERTa-based sentiment classifier for sentiment scoring because it is also what other related work, e.g., Jaques et al., 2020, used to set up their dialogue management environments.
Nonetheless, when our methods are applied to larger-scale problems, adopting such an LLM score would be very beneficial. We will make a note of this in the final paper. **Human Evaluation** We acknowledge your concerns about the human evaluation results. It is important to highlight that most of our proposed methods (IQL, MoE-VRL, FtRL) significantly outperform earlier baselines, notably KLC and BC, in these open-domain dialogue management tasks; in particular, the offline RL methods we designed for MoE (MoE-VRL, FtRL) consistently outperform the KLC and BC baselines. We'd also like to mention, albeit cautiously, that human evaluations inherently have a degree of variability. However, we believe our results are indicative of the effectiveness of our approaches. **Sample Efficiency** Based on your feedback, we will revisit our claims regarding sample efficiency in the final paper. Our intention is to provide clarity and avoid overstating our results. In the revised manuscript, we will further temper our assertions to ensure accuracy and reduce any potential ambiguity.
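For readers, the return metric referenced throughout this thread can be made concrete in a few lines. A minimal sketch, with hypothetical per-turn rewards standing in for the sentiment-classifier scores, contrasting the bandit setting (gamma = 0.0) with the RL setting (gamma = 0.8):

```python
def discounted_return(rewards, gamma):
    """Discounted sum of per-turn rewards: sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# hypothetical per-turn sentiment rewards over a 5-turn conversation
rewards = [0.2, 0.5, 0.4, 0.9, 0.6]

myopic = discounted_return(rewards, gamma=0.0)  # bandit: only the current turn counts
rl = discounted_return(rewards, gamma=0.8)      # RL: future turns are weighted in
```

With gamma = 0.0 the return reduces to the first reward (0.2 here), which is why a bandit agent gets no credit for downstream sentiment improvement, while a nonzero discount rewards planning over later turns.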
Summary: The authors of the paper proposed a suite of offline reinforcement learning methods utilizing Mixture-of-Expert Language Models (MoE LMs) to train dialogue management agents. Moreover, they experimented with their RL methods on two open-domain dialogue datasets and showed better overall performance of their methods (MoE-specific offline RL) over SOTA offline RL methods. Strengths: S1: In their experimental section, it is shown that their MoE-specific RL methods outperformed the SOTA offline RL methods. S2: Human evaluation is also done by recruiting 80 workers. S3: It is shown that those agents which have better performance utilize all the knowledge of different experts in a balanced way. Weaknesses: W1: In the conclusion section of the paper, it is stated that their specialized offline RL methods have better sample efficiency; however, I did not see any experimental proof of it. W2: In the introduction section (page 2), they described the first component of their methods two times: “Our methods consist of three main components: 1) a primitive LM which, using a probabilistic encoder and decoder, is capable of generating diverse semantic intents 1) a primitive LM that uses a probabilistic encoder-decoder pair to generate sentences with diverse semantics and intents”. W3: In section 6 (page 7), before experiment 1, the second appendix is not referenced properly: “More details and results can be found in Appendix E and ??” W4: The implementation is not provided, so one cannot reproduce their results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the useful feedback aimed at improving our paper. Please find individual responses to comments below. ### Conclusion indicates improved sample efficiency The improved efficiency arises from both modeling and algorithmic choices. First, we adopt the MoE LM framework (Chow et al., 2023) and greatly simplify the action space of the RL dialogue management problem, as we no longer rely on RL to directly control the token-level autoregressive generation of the language model but rather to select the best utterance to output at the current conversation turn from the pool of candidate responses generated by MoE-based model. Second, we developed our offline dialogue RL planning algorithms for MoE-LMs under the IQL offline RL methodology, which has been shown in Snell et al., 2023 to have improved performance. While these factors intuitively improve sample efficiency, we acknowledge the reviewer’s comment that explicit analysis has not been conducted in our paper, and therefore we will update the conclusion section in the final paper to clarify these points and soften our claims about sample efficiency improvement. ### Typo in the introduction section Thanks for catching this, we will remove the duplicate texts in the final draft. ### Reference in the experiment section We apologize for the formatting issue, we meant to say Appendix A and E, we will fix the appendix references in the final version of the paper. ### Implementation details are unclear Unfortunately due to IP concerns at this point, our institution has not approved our request for open-sourcing code. We indicated that restriction in our initial submission checklist and will try to release the code by the final submission timeline. 
In the meantime, we tried our best to provide detailed explanations about experimental setup, model architectures, and RL training procedures in Appendix B to D (and we also follow the implementation details of the original MoE paper: Chow et al., 2023), so that the reader can implement these concepts. ### References Snell, C., Kostrikov, I., Su, Y., Yang, M., & Levine, S. (2023). Offline RL for Natural Language Generation with Implicit Language Q Learning (arXiv:2206.11871). arXiv. https://arxiv.org/abs/2206.11871 Chow, Y., Tulepbergenov, A., Nachum, O., Ryu, M., Ghavamzadeh, M., & Boutilier, C. (2022). A Mixture-of-Expert Approach to RL-based Dialogue Management (arXiv:2206.00059). arXiv. https://arxiv.org/abs/2206.00059
Summary: This paper introduces multiple reinforcement learning algorithms for dialogue management, in particular when combined with mixture-of-expert language models. Generally, a primitive (general) language model, as well as expert language models which have a specific intent or personality, generate candidate utterances. The dialogue management module then learns to choose among them. Strengths: The paper introduces multiple novel RL algorithms tailored for dialogue management. Using RL-based dialogue management over mixture-of-expert LMs is well-motivated. The language models may generate fluent and diverse outputs, while the limited size of the action space allows for efficient learning. Coordinating multiple language models (or especially a single language model with different adaptors/parameter-efficient fine-tuning modules) could be impactful by improving generation quality and diversity given a fixed number of parameters. Weaknesses: The description of the algorithms is quite dense. As multiple approaches are introduced, a more thorough discussion of which one(s) to prefer under certain conditions would be helpful. The number of turns is fixed in the experiments. Conversations could be much richer if the conversation length was more flexible. Without sharing the code, some of the experiments may be difficult to reproduce. The results in tables 1 and 2 are difficult to interpret. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could you clarify "Our experiments demonstrate that model-based evaluation can significantly improve dialogue management over the model-free counterpart [...]"? Does the evaluation method change the DM policy? Could you describe the evaluation approaches in more detail? What do the reported numbers exactly represent? Could the approach be adapted to work without a primitive LM (i.e. only expert LMs with a specific personality)? Do you have examples of conversations generated with different approaches? 
[L49/50] Component #1 is repeated. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Some limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for the useful feedback and the appreciation of the promise of our work. Please find individual responses to comments below. ### Dense algorithmic descriptions Our contributions lie in developing several MoE-specific offline RL algorithms and comparing different offline RL approaches on MoE dialogue management, which accounts for the sheer amount of technical detail in the main paper. We will shift some of these details to the appendix and add more intuitive explanations of the different methods (when one is preferred over another in practical scenarios) in the final paper. ### Fixed conversation turns in experiments In our experiments, the number of conversation turns has been fixed as a way to evaluate the different methods. This quantity can be regarded as the horizon of the RL problem. The underlying number of conversation turns can still vary because the user/agent can indicate an end of the conversation by outputting an EOS token before the maximum conversation turn is reached. To check for the quality/richness of our generated conversations, we also evaluate our methods in rater studies whose results are given in Appendix E. ### Tables 1 and 2 We acknowledge the reviewer’s confusion caused by the condensed presentation of these tables and will include more detailed explanations in the final paper. These tables present the average return (discounted sum of per-turn rewards in the dialogue conversations) of the dialogue agent of interest, accumulated over a 5-turn conversation. The return is averaged over 100 conversations, and the standard error is also provided. A higher value indicates the corresponding agent is able to perform better dialogue planning, resulting in better overall sentiment improvement. Table 1 and Table 2 respectively compare the results between offline RL methods (that are not necessarily adapted to the MoE framework) and offline RL methods that are specifically designed with the MoE framework in mind.
We can see that MoE-specific offline RL methods perform significantly better than standard offline RL methods in our dialogue management experiments. ### Model-based/free evaluation Model-based and model-free approaches differ in the way of policy extraction. In the model-free methods, the MoE policy relies only on the learned Q-function as the scoring function, while in the model-based experiments the policy is constructed as the softmax of the value-to-go function (V-function) that corresponds to the augmented conversation history w.r.t. the predicted next-user utterance (given by the utterance prediction model that takes the conversation history and candidate utterances as inputs). Details of the evaluation approach: each numerical result in our evaluation is the average return over 100 conversations driven by the RL agents of interest. As mentioned in Line 328, this metric corresponds to the discounted sum of the reward for each dialogue turn. The reward function is defined on the user sentiment score in the next conversation turn (affected by the current conversation history and the bot response), as described in Line 318. ### Without primitive LM (Please let us know if we misinterpret your question) Yes, while our offline RL approach is built on top of the more general MoE framework, which consists of a universal representation space that embeds diverse semantics and a gamut of expert generators, it does not rely on the details of these experts. Therefore, our approach can also be directly applied to a (simpler) setting where there is only an expert LM with a particular personality. ### Code sharing Unfortunately, due to IP concerns, our institution has not yet approved our request to open-source the code. We already indicated that restriction in our initial submission checklist and will try to release the code by the final submission timeline.
In the meantime, we tried our best to provide detailed explanations of the experimental setup, model architectures, and RL training procedures in Appendices B to D (and we also follow the same MoE implementation details illustrated in the original paper, Chow et al., 2023) so that the reader can implement these concepts. ### Sample conversations We apologize for not including sample conversations; we have included some in the extra PDF posted above and will include them in the final paper. ### Typos Thanks for catching this; we will correct it in the final draft. ### References Chow, Y., Tulepbergenov, A., Nachum, O., Ryu, M., Ghavamzadeh, M., & Boutilier, C. (2022). A Mixture-of-Expert Approach to RL-based Dialogue Management (arXiv:2206.00059). arXiv. https://arxiv.org/abs/2206.00059 --- Rebuttal Comment 1.1: Comment: Thank you for your response. For `Without primitive LM`, I wanted to more clearly understand the differences between $\mathcal{G}_{\ge 1}$ and $\mathcal{G}_0$. $\mathcal{G}_0$ appears to have a particular and distinct status, but the notation is very similar to the expert distributions. Could it make any sense to only have $\mathcal{G}_{\ge 1}$, but not $\mathcal{G}_0$? --- Reply to Comment 1.1.1: Comment: Thank you for your comment. We will try to clarify the difference between $\mathcal{G}_{\ge 1}$ and $\mathcal{G}_0$. $\mathcal{G}_0$ has a distinct status in the methodology: it was designed and trained to discover the semantic space via an embedding (and a generic sampler for sampling in this latent embedding space), while also learning the encoder and decoder. On the other hand, $\mathcal{G}_{\ge 1}$ leverages the encoder and decoder learned by $\mathcal{G}_0$, but further fine-tunes the latent-space sampler to represent the respective experts' intent/behavior, using the sentiment-based reward. Yet, when integrated into the RL method for DM, both entities operate *without* any distinction.
To address your query: you are right, it would be perfectly fine to have only $\mathcal{G}_{\ge 1}$ and not $\mathcal{G}_0$.
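The return metric described in this rebuttal (a discounted sum of per-turn sentiment rewards, averaged over simulated conversations with a standard error) can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the authors' code; the function names and the discount factor are assumptions.

```python
import math

def discounted_return(per_turn_rewards, gamma=0.9):
    # Discounted sum of per-turn sentiment rewards for one
    # conversation (gamma is a hypothetical discount factor).
    return sum(gamma ** t * r for t, r in enumerate(per_turn_rewards))

def evaluate(conversations, gamma=0.9):
    # Average return over many simulated conversations,
    # plus the standard error of the mean.
    returns = [discounted_return(c, gamma) for c in conversations]
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return mean, math.sqrt(var / n)
```

With a 5-turn horizon as in Tables 1 and 2, `evaluate` would be called on 100 such per-conversation reward sequences for each agent being compared.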
Summary: The authors address offline RL training of dialogue models. They note that gathering trajectories with humans in the loop is potentially expensive (and perhaps even dangerous). Their method is an MoE involving a general LM and several specialized, intent-specific LMs, which generate candidates for a dialogue manager to select from: this process reduces the action space significantly (because selecting the next utterance becomes a task at the utterance level, rather than at the word level). The reward is a RoBERTa sentiment classifier applied to simulated dialogue responses. Experiments compared to several single-policy/MoE methods show the proposed method is better able to elicit positive-sentiment responses in simulation. Strengths: The authors study an interesting problem --- offline RL for dialogue --- and frame the task well: DialoGPT offers a nice simulation environment, and I understand the motivation for the reward function the authors optimize. I commend the authors for working in a multidisciplinary area, mixing RL and dialogue-modeling contributions. The experiments cover a broad range of RL algorithms, as well as two dialogue corpora. Weaknesses: - I would have liked to have seen some human evaluations of response quality/elicited sentiment (I think the question that operationalizes the reward may be: "Which model's response would cause you to write a more positive response?"). Current evaluations are limited to reward optimization, which makes sense from an RL perspective. But, at the very least (if a small-scale human evaluation were prohibitive), it was odd to see a paper about dialogue with no dialogues shown. - The reward seems a bit reductive --- 1) What is the RoBERTa classifier trained on? Unfortunately, Liao et al. 2021 is locked behind a paywall so I can't check.
I know sentiment models are often trained on specific domains (e.g., movie reviews or Yelp reviews), so it might not generalize well (I am fearful of reward hacking); and 2) should the optimized reward really be based entirely on sentiment? This seems like it might risk just pushing the dialogue manager to be a sycophantic "yes-man". I would have appreciated some justification for this choice. (In fact, few potential limitations of the work are discussed --- this type of reflection would have been appreciated.) - The model itself seems a bit convoluted --- the idea of compressing the primitive encoder's output into a single vector (and then sampling from a mixture of normal distributions conditioned on that encoding as the MoE) felt a bit roundabout. Why not, e.g., skip the pooling and train a separate decoder for each expert, à la T5? Latent variable modeling is cool, but it felt superfluous, even to the core message of this paper (which, for me, focuses on MoE for state-space reduction) --- I at least would have liked to have seen an ablation of simpler methods, e.g., over-generation with differently prompted LMs in a zero-shot way. UPDATE: the authors do have a human evaluation (and will surface it from the appendix) and made a few more clarifications in their response. I have raised my score. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is the reward model from Liao et al. 2021 trained on? - Is it possible to run a human evaluation or include discussion of the outputs to make sure reward hacking isn't occurring? - Why the latent variable model instead of, e.g., just an encoder/decoder? It seems a bit orthogonal, and some simpler baselines suggested by this encoder/decoder choice are missing. - Can better discussion be added of the potential limitations of this particular reward in this particular setup?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No, see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful feedback. Please find individual responses to the comments below: ### Human evaluation We conducted a human evaluation of the generated conversations by asking raters to score how well our MoE bot managed to improve the overall sentiment within conversations under different RL algorithms. The results can be found in Appendix E (Table 10), which compares the overall fluency and sentiment improvement of the conversations. Details of the rater evaluation can be found in the Appendix. This experiment sheds light on how well our chosen automatic evaluation metrics align with human evaluations, demonstrating that our offline RL methods, when paired with MoE models, can improve the overall conversation objective. We will also move these experimental results into the main paper (given the extra page provided), as they are important for demonstrating the advantages of using specialized offline RL methods. Additionally, Appendix A2 contains evaluations of the MoE embedding space and different experts (Tables 3 and 4), echoing the original MoE paper by Chow et al., 2023. ### Sample dialogues We acknowledge the reviewer’s comments on the lack of sample dialogues. We focused on displaying quantitative studies in the original submission. We included a snippet of sample dialogues in our rebuttals to showcase the effectiveness of our methods in sequential conversations, and we will add more sample dialogues in Appendix E in the final version. ### Sentiment Classification Model We regret the confusion regarding the referencing of sentiment classifiers. In our work, we utilized the HuggingFace RoBERTa model trained on the Twitter dataset to recognize sentiment (https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). Initially, we cited Liao et al., 2021, aiming for specificity in sentiment classification models.
Acknowledging the reviewer's concern about potential confusion, we will update the reference to TimeLMs (Loureiro et al., 2022), an open-source model utilized beyond sentiment analysis. It is essential to clarify that our paper's primary focus is on introducing specialized offline RL methods for dialogue management within the mixture-of-experts framework. The selection of the sentiment classifier and the reward design are experimental decisions, unrelated to the underlying RL algorithms. ### Reward Hacking and Model Generalization: We acknowledge the reviewer’s concern about reward hacking in sentiment optimization. As mentioned above, our experiments are meant to demonstrate the effectiveness of different offline RL dialogue management methods rather than to develop a full-blown universal conversation bot. We decided to optimize w.r.t. the user sentiment transition partially because of Table 2b of Jaques et al., 2020, which experimented with the same two conversation datasets and showed that the user sentiment signal is the most correlated with real human feedback on conversation quality (measured w.r.t. raters’ upvotes). To avoid language model overfitting, our agent adopts the MoE framework, which restricts dialogue responses to be selected from among the set of utterances generated by experts with different skills (e.g., positive/negative sentiment, semantic coherence/diversity, question, etc.). Experiment results show that our RL method tends to select a more diverse set of experts (Figure 2a), avoiding sycophantic responses. Human evaluation (Appendix E) further shows that our agent tends to be more fluent and leads to user sentiment improvement over multiple turns. ### MoE framework; Shared latent is superfluous; Separate decoder per expert The MoE framework's core idea lies in representing multiple experts within a language model across various parts of the semantic latent space.
This allows the experts to generate candidate utterances in a modular way that is suitable for downstream dialogue management tasks. This approach not only lessens computational demands by enabling a range of responses with different intents, but also streamlines the token-level MDP formulation in dialogue RL, resulting in more effective management. For a detailed motivation, refer to the original work by Chow et al., 2023. Our research builds on this, focusing on enhancing offline RL capabilities, and we will enrich the paper's introduction with more insights into the MoE-LM. The shared encoder in the MoE framework, trained for both accuracy and diversity, encodes the conversation history into a versatile embedding space. This serves as the foundation for expert utterance generation (phase 2) and the MoE-MDP state space (phase 3). Specifically, it permits (i) the creation of responses with varied intents by sampling from specific latent-space regions, and (ii) RL planning in a reduced, continuous state space (see Eqns. 4-6 in our paper). Utilizing the same decoder minimizes distribution shifts during data generation, simplifying RL training compared to non-MoE approaches (refer to the original paper's Table 2 for details and ablation studies). ### Better reward discussions Our RL reward choices follow primarily from the open-domain offline RL dialogue management paper, Jaques et al., 2020, and the MoE paper, Chow et al., 2023, covering both fluency/coherence (quality) and sentiment improvement (task success). We will add additional discussion of the RL reward choices in the final paper. ### References Chow, Y., Tulepbergenov, A., Nachum, O., Ryu, M., Ghavamzadeh, M., & Boutilier, C. (2022). A Mixture-of-Expert Approach to RL-based Dialogue Management. https://arxiv.org/abs/2206.00059 Jaques, N., Shen, J. H., Ghandeharioun, A., Ferguson, C., Lapedriza, A., Jones, N., Gu, S. S., & Picard, R. (2020). Human-centric Dialog Training via Offline Reinforcement Learning.
http://arxiv.org/abs/2010.05848 Loureiro, D., Barbieri, F., Neves, L., Anke, L. E., & Camacho-Collados, J. (2022). TimeLMs: Diachronic Language Models from Twitter. https://arxiv.org/abs/2202.03829 --- Rebuttal Comment 1.1: Title: Thanks! Comment: Thanks for your response. Here are some responses to your response: - Human eval: thank you, this is great! I did not see these and would definitely recommend moving them into the main body. - Thanks for sharing these! I think, ideally, in addition to showing a few examples, some commentary could be added about the relative strengths of the predictions of each approach. Even better would be an error analysis. But --- thanks for this, it helps! - Sentiment: thanks for the clarification. - MoE vs. a simpler baseline: I do appreciate the authors' points about the advantages of MoE. But I do feel that training a separate decoder for each intent/expert is a similarly simple-to-train baseline that would have been nice to see. I will raise my score in light of these updates.
Rebuttal 1: Rebuttal: ## Sample Utterances We acknowledge the reviewer’s comments on the lack of sample dialogues. We focused on displaying quantitative studies in the original submission. We included a snippet of sample dialogues in the attached PDF to showcase the effectiveness of our methods in sequential conversations, and we will add more sample dialogues in Appendix E in the final version. Pdf: /pdf/ce459b8101df575de1725ea118623bb70a86de23.pdf
NeurIPS_2023_submissions_huggingface
2023
DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation
Accept (poster)
Summary: The paper presents a stable hand pose estimation method leveraging a neural network for stability estimation, trained via simulation, to improve the physical stability of estimated hand poses. The main idea is training a neural network on simulated results. The network can further provide smooth gradients to refine estimated hand poses. Experiments on two datasets demonstrate the effectiveness of the method and the ability of the DeepSim network to provide better gradients from which the overall learning framework can benefit. Strengths: - The idea of training a neural network as a simulator that can both provide accurate simulation results regarding stability and provide smooth, training-friendly gradients is sound. Gradients provided by the network are also demonstrated to be of higher quality compared to analytical gradients or those from finite differences. - Experiments are valid and reasonable and are able to demonstrate the superiority of the proposed method. Implementation details are also provided. Weaknesses: - The idea of designing neural networks as a differentiable simulator is not new [1,2]. The network designed in the paper does not have explicit physics priors and only approximates the stability value, which makes its generalization ability on unseen and out-of-distribution data unclear. - Using networks to learn the stability prediction process is not fully explored. For instance, does the current strategy generalize well to out-of-distribution test data? Is it possible to improve the generalization ability and train a very powerful stability prediction network by creating and leveraging a large-scale synthetic dataset via the simulator? Is there any opportunity to fuse physical priors into the design of DeepSim to enhance generalization? [1] Mezghanni, M., Bodrito, T., Boulkenafed, M., & Ovsjanikov, M. (2022). Physical simulation layer for accurate 3d modeling.
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 13514-13523). [2] Mezghanni, M., Boulkenafed, M., Lieutier, A., & Ovsjanikov, M. (2021). Physically-aware generative network for 3d shape modeling. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 9330-9341). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there any in-depth analysis of the DeepSim network (please see the weaknesses for details)? - Some works in robotics propose to relax and improve contact models so that smooth gradients can be provided for optimization [1,2]. It is hard to compare with them directly since they are not open-sourced. However, is it possible to conduct a toy analysis comparing the effectiveness of softened contact models and the neural simulator proposed in the paper? [1] Pang, T., & Tedrake, R. (2021, May). A convex quasistatic time-stepping scheme for rigid multibody systems with contact and friction. In *2021 IEEE International Conference on Robotics and Automation (ICRA)* (pp. 6614-6620). IEEE. [2] Jain, S., & Liu, C. K. (2011, December). Controlling physics-based characters using soft contacts. In *Proceedings of the 2011 SIGGRAPH Asia Conference* (pp. 1-10). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are stated in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer fSiU: We thank you for providing valuable feedback and acknowledging the strengths of our work. We hope the responses below address your concerns. ### 1. Difference to Previous Physics-based Works We thank the reviewer for providing other related works. However, the suggested works [B4, B5], similar to other works in grasp synthesis [6, 46], all address the task of *synthesizing* physically valid results, which is not directly comparable to our method, **as we address the more challenging task of jointly estimating *stable* and *accurate* hand-object poses *conditioned on a monocular image observation***. In addition, **we tackle the problem of effectively jointly training the DeepSim and base networks to avoid overfitting and ensure generalizability, which is not addressed by previous works.** For completeness, we will include [B4, B5] in the discussion of related works in the revised paper. We also refer the reviewer to general response #2 for more discussion and insights about our method compared to previous works. ### 2. Generalizability of the DeepSim Network The trained DeepSim network can generalize to unseen test data thanks to our training strategy using large-scale perturbed hand-object data. Specifically, we mentioned in L189 of the main paper that, to avoid overfitting, we randomly perturb the initial hand and object poses before forwarding them to the simulator when training the DeepSim network (note the augmentation is for training the DeepSim only). **This is essentially the same as generating large-scale synthetic pose-stability data pairs as the training progresses**. In addition, the ablation study in Table 3 of the main paper further quantitatively demonstrates the generalizability of the DeepSim network.
In Table 3, we observe that the Approximation Error (AE, measuring the distance between the predicted stability and the ground-truth stability evaluated by the simulator on unseen test data) is significantly smaller than the stability threshold used in the physics metric, **indicating that the DeepSim is sufficiently accurate to distinguish whether the estimated hand-object pose is stable during testing**, which is the key to the success of our method. ### 3. Integrating Physics Priors into the DeepSim Network In addition to the initial hand and object poses, we included two feature vectors $\hat{\mathbf{c}}^h$ and $\hat{\mathbf{c}}^o$ as contact priors in the input of the DeepSim network to improve its effectiveness, as mentioned in L174 of the main paper. Since the stability of the estimated hand-object pose depends primarily on the contact force, which in turn depends on the contact configuration, *i.e.* contact points, penetration volume, etc., **these two vectors embed rich physics priors and help to improve the precision of stability prediction**. There are various ways to fuse additional physics priors into the DeepSim network. First, the DeepSim network can condition on other physical properties, *e.g.* mass, gravity, etc., for more accurate regression. In addition, since collision detection and penetration resolving often cause numerically unstable gradients, the DeepSim network could instead approximate these subroutines by regressing components of the contact forces, *e.g.* contact positions, penetration direction and volume, etc., leaving other routines, such as object acceleration and velocity calculation, to the simulator. We believe these are promising directions for future work. ### 4. Analysis of the Softened Contact Model We thank the reviewer for providing additional works on improved softened contact models.
However, since we need to compute the overall gradient $\frac{\partial \mathcal{L}_s}{\partial \mathbf{q}_0}$ in order to refine the base network in an end-to-end fashion, **an improved softened contact model alone is not sufficient to guarantee the robustness of the overall gradient**. First, as the base network can potentially generate initial poses where the hand penetrates the object, the penetration must first be resolved by the simulator by applying a large normal force to separate the hand and object meshes. We observe that numerically unstable gradients often arise at this stage due to the sudden velocity change in penetration resolving. Furthermore, existing methods that successfully leverage differentiable physics simulators mostly consider primitive shapes only, where the collision detection procedure is straightforward. However, in our task the contacting object meshes often have complex and discontinuous geometry, causing the state gradient associated with collision detection to be numerically unstable as well. Therefore, we believe adopting only a softened contact model cannot fully address the gradient issue, and we emphasize the necessity of the proposed DeepSim network. ### Bibliography [B4] Mezghanni, M., Bodrito, T., Boulkenafed, M., & Ovsjanikov, M. (2022). Physical simulation layer for accurate 3d modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13514-13523). [B5] Mezghanni, M., Boulkenafed, M., Lieutier, A., & Ovsjanikov, M. (2021). Physically-aware generative network for 3d shape modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9330-9341). --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for your clarification. After reading the rebuttal and other reviewers' comments, I think the paper is of some value to be published somewhere. But I still have some concerns about its significance and potential impact.
In short, the paper does not conduct a deep and insightful discussion of an interesting and valuable problem. **The DeepSim network:** The network and the representations used are not designed with careful thought or after rigorous exploration. There are many works on learning simulation that are carefully calibrated to inject priors into the network architecture, like [PINNs, NCLaw]. In this work, the authors are expected to conduct a thorough discussion w.r.t. how to design a network to predict the physical stability of a grasping pose. This involves the network architecture, input and output, what to predict, and so on. Presenting the design process to readers, either in the ablation study or in the supp, would make the work more inspiring to others. The current content of the paper cannot fully convince readers that the MLP structure is the most suitable one for the stability regression. It seemingly was not designed with careful consideration. Besides, how to represent the grasp should be discussed in depth. The current approach simply leverages signed distances from hand to object and from object to hand. An intuitive illustration through figures to demonstrate its effectiveness in representing grasps (which are sometimes incorrect, with penetrations) would help others get its insights. **The simulator:** Is the current simulator sufficient to provide correct simulation? MuJoCo was published in 2012. Since then, simulators for graphics and robotics have undergone fast development. A comparison is presented in [Dojo]. Besides, RK4 is used as the integrator in MuJoCo. However, it seems that explicit Euler is used here (Eq. 2)? Have you tried other simulators? A discussion w.r.t. which offline simulator should be leveraged to provide GT stability scores is expected to be covered in the paper as well. [PINNs] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019).
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. *Journal of Computational Physics*, *378*, 686-707. [NCLaw] Ma, P., Chen, P. Y., Deng, B., Tenenbaum, J. B., Du, T., Gan, C., & Matusik, W. (2023). Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics. *arXiv preprint arXiv:2304.14369*. [Dojo] Howell, T. A., Le Cleac’h, S., Kolter, J. Z., Schwager, M., & Manchester, Z. (2022). Dojo: A differentiable simulator for robotics. *arXiv preprint arXiv:2203.00806*, *9*. --- Reply to Comment 1.1.1: Comment: Dear Reviewer fSiU: We thank you for acknowledging the value of our paper. We hope the responses below address your further concerns. ### 1. The DeepSim network We acknowledge that the design of the DeepSim network is important; however, we wish to highlight that in this paper, **our main focus is effective and efficient learning from physics simulation to improve the stability of hand-object pose estimation, rather than proposing novel network designs**. Moreover, since the DeepSim network tackles a *simplified regression task* instead of directly approximating the entire simulation process, **we observe that the proposed designs are sufficiently accurate to achieve the goal, as justified in Table 3 of the main paper**. To further prove the effectiveness, we have compared several design-choice variants in the ablation study, including architectures (MLP/LSTM) and predictions (S/T/RT); we will revise and provide more discussion of this in the final version. Finally, we acknowledge that our current designs can potentially be refined in future works to further improve the overall performance; however, exhausting all design variants is impractical and beside the main point of the paper.
As we mentioned in the rebuttal, motivated by the simulation process, we include the signed distance vectors to reflect the initial contact configuration and better facilitate the regression. We thank you for your suggestion and will add an additional figure illustrating this to clarify the insight in the final version. ### 2. The Simulator While MuJoCo was first published in 2012, its codebase [B6] is still actively maintained to provide improved simulation precision. In addition, it is commonly applied in related works, *e.g.* [B7], given its robustness and efficiency in collision detection and contact-related simulation. We therefore follow previous works in applying the MuJoCo simulator and empirically observe that the simulation reasonably aligns with real-world physics. We mentioned in L88 of the main paper that while many other differentiable simulators like [25, 12, 8] exist, including [Dojo], they are still under development and currently only support gradient calculation for *primitive collision shapes*, which is not applicable to our task since we tackle objects with complex *mesh collision shapes*, where the gradient with respect to the contact geometry is often problematic. Eq. (2) uses a commonly adopted *implicit* Euler integration scheme, which is the default setting in MuJoCo. Please note that MuJoCo provides different solver options, as mentioned in [44] and implemented in [B6]. We have also tested the NimblePhysics [50] simulator, which is a recent feature-complete differentiable simulator that supports mesh contact shapes. The ablation study shows that the DeepSim network produces gradients of higher quality and better facilitates back-propagation. All GT results are obtained using the same MuJoCo settings mentioned in the implementation details; we will clarify this in the final version. ### Bibliography [B6] https://github.com/deepmind/mujoco [B7] Dasari, Sudeep, Abhinav Gupta, and Vikash Kumar.
"Learning dexterous manipulation from exemplar object trajectories and pre-grasps." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.
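As a toy illustration of the gradient issue debated in this thread (not the authors' implementation): central finite differences recover the gradient of a smooth function essentially exactly, but a tiny high-frequency perturbation, a crude stand-in for the non-smooth outputs of contact-rich simulation, is amplified by the 1/(2h) factor. All functions below are hypothetical.

```python
import math

def central_diff(f, x, h=1e-3):
    # Central finite-difference estimate of df/dx.
    return (f(x + h) - f(x - h)) / (2 * h)

def smooth(x):
    return x * x  # true gradient at x is 2x

def noisy(x):
    # Same quadratic plus a tiny high-frequency term mimicking
    # discontinuous collision/contact events in a simulator.
    return x * x + 0.01 * math.sin(1e4 * x)

g_smooth = central_diff(smooth, 1.0)  # essentially exactly 2.0
g_noisy = central_diff(noisy, 1.0)    # error can be as large as 0.01 / h = 10
```

A regressor fit to many (pose, stability) pairs averages over this kind of noise, which is one intuition for why a learned DeepSim-style surrogate can yield better-behaved gradients than differentiating the simulator itself.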
Summary: This work proposes using an external physics simulation to aid in monocular joint 3D hand and object pose estimation. By analyzing the stability of a perceived grasp inside the simulation, the proposed models learn to factor in grasp dynamics when estimating hand and object pose, producing stable and physically plausible grasps. To circumvent the problem of non-differentiable simulation, a DeepSim model is used as a proxy for learning the dynamics of the physics simulator and enables gradients to be propagated through. Quantitative and qualitative results show that the proposed method produces state-of-the-art results in pose estimation while improving physical realism. Strengths: 1. Leveraging the laws of physics effectively can benefit vision-based systems by providing physical priors. However, the effective use of physical laws and/or simulation is difficult, as it creates extra overhead which may lead to intractable systems. This paper provides an effective hand/object pose estimation method that intelligently leverages a (simplified) world model (DeepSim) to learn physical priors from simulation. I find the formulation intuitive, and since the final estimation pipeline is no longer reliant on the simulator, the pipeline is efficient. Essentially, the network is trained in a physics-aware fashion through the use of the world model. 2. I think the simplified world model (DeepSim) formulation is interesting and effective. By directly estimating the stability loss, the model is easier to learn and can be directly used to optimize the objective. 3. The motivation of the paper is clear, and the proposed solution and components solve the raised issues. The experiments on the analytical gradient and numerical gradient show the necessity of learning the DeepSim model and provide a clear view of the limitations of current differentiable physics simulators (namely, unstable gradients when dealing with complex contact geometries). 4. Results show that the learned pose and hand estimator outperforms SOTA methods in terms of physical plausibility and is comparable in terms of pose estimation accuracy. Qualitative results and simulation videos also show that the proposed method is effective in estimating physically stable grasps. 5. The proposed stability analysis is general and could be applied to other base pose estimation networks, as well as other domains such as stable human pose estimation. Weaknesses: 1. I find the applied adhesion force a little questionable. In the real world, humans grasp objects by applying forces, which is akin to small penetration. Applying an adhesion force is similar to having extra suction cups on the fingertips, which is not realistic. How is the force modeled? Is it constant, or is it a function of the contact forces, like static friction? 2. The current formulation essentially biases the model towards firm grasps and static holding of objects. All of the examples shown in the results are grasps that require almost all fingers. What about when the object is resting on the palm or does not require the support of the hand? Since no ground or tabletop is modeled, would the model also predict grasping when the object is resting on a table? In that case, the object is supported by other forces and the fingers do not need to grasp. Would the model still be biased toward a solution where all fingers close in on the object? 3. As dexterous manipulation is a study of motion, the current setup can be quite limited in modeling faster motion and movement of the objects. Similar to 2, the method is biased toward static and firm grasps, which is not always true when handling an object. The object's own momentum and movement can have a large effect on its stability, which a single-frame model would not factor in.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would like to see some discussion on how the adhesion force is applied and modeled, and how well the model handles non-grasp poses. --- After rebuttal, my questions about the adhesion force and some other details were addressed. I would like to maintain a positive rating of this work. --- Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses. I think the method is biased toward stable grasps and would not be able to model hand and object motion (as opposed to pose). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer bTxd: We thank you for providing valuable feedback and acknowledging the strength of our work. We hope the responses below can address your concerns. ### 1. Modeling Adhesion Force In practice, we find that applying hand-object penetration to emulate the effects of joint torques can degrade simulation realism, as it often encourages the model to bias towards deeper penetration, as also observed by [52]. In such cases, contact stability can be achieved by having deep but balanced penetration over multiple sides of the object, or even penetrating through the object, given the limitations of penetration resolution in modern physics simulators. **Such artifacts are undesired in applications and violate physical realism; therefore, we choose not to model the interaction via hand-object penetration**. For the details of adhesion force modeling, we investigate the training data and **carefully adjust the strength of the adhesion force so that a simple touch cannot form a stable grasp**. The overall adhesion force strength depends on the number of hand-object contacts, where for each contact point, the adhesion force is applied in the direction of the contact normal with a fixed strength. The effect of the adhesion force can be observed in the supplementary video, which demonstrates that it reasonably approximates real-world physics **without the need for undesired hand-object penetration**. **Overall, we find it to be a better model for emulating the effects of joint torques when the hand is required to remain static during simulation**. ### 2. Bias Towards Grasping Our method does not bias towards grasping or specific forms of grasping, as it is additionally conditioned on the input image and supervised with accuracy losses.
We mentioned in L219 of the main paper that we include training samples whose ground truth poses are both stable and unstable, *e.g.* with no hand-object interaction, and only impose the stability loss on samples whose ground truth poses are stable. For samples whose ground truth poses are unstable, we train on them using only the accuracy losses. Therefore, our method avoids blindly producing firm grasps when no hand-object interaction is indicated in the input image. **Please also refer to Fig. 1 of the PDF in the general responses for qualitative results on such cases**. In addition, we make no assumptions on the contacting fingers and forms of grasping, but rely on the simulator to evaluate the effect of contact on *all detected contacting vertices*. Specifically, unlike [52], which only studies contact on predefined anchors, we consider contact forces on the entire hand mesh for all contact points detected by the simulator during collision detection. This allows us to explore various forms of contact other than grasping with all fingers. **We include more qualitative results in Fig. 1 in the PDF of the general responses to show that our method can generalize to various forms of grasping, *e.g.* resting on the palm or interacting with a few fingers**. ### 3. Generalizing to Fast Object Motion In this work, we address the task of estimating hand and object poses from only a *single image input*, **which is already a challenging task**. Since we do not have information about the object's previous states, from the training data we make a reasonable assumption that the movement is slow and the internal object acceleration is negligible compared to gravity. However, **our method can be easily extended to estimating hand-object motion from a sequence of input frames with various object initial states**. First, the object velocity $\dot{\mathbf{q}}_0$ can be adjusted to appropriate values if the object is not initially static.
Besides, if the object has its own internal driving force, *e.g.* if it is equipped with a motor, Eq. (2) can be modified accordingly to take additional sources of force into account. We believe these extensions are beyond the scope of our work and are suitable for future research. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: I thank the authors for the detailed response. My concerns about "Bias Towards Grasping" and "Generalize to Fast Object Motion" have been addressed. The only remaining question is still centered on the adhesion force. What is the "gain" (L259) in the context of adhesion force? Is the adhesion force also applied to other SOTA methods in the supplementary video? --- Reply to Comment 1.1.1: Comment: Dear Reviewer bTxd: We thank you for your comments on our rebuttal. For your remaining questions, the *gain* is a simulator-specific parameter used in the MuJoCo solver to scale the effect of the adhesion force. We describe it for completeness so that our simulation process can be reproduced with the same settings. In addition, we apply the same physics models and simulator, *i.e.* including the application of the adhesion force, to all compared methods in both quantitative and qualitative (including the supplementary video) evaluation for a fair comparison.
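The adhesion model described in this thread — a fixed-strength force along the contact normal at every detected contact point, tuned so that a single touch cannot hold the object — could be sketched as follows. This is a toy NumPy illustration; the paper uses MuJoCo's built-in adhesion mechanism, and the function and argument names here are hypothetical:

```python
import numpy as np

def net_adhesion_force(contact_normals, strength=10.0):
    """Sum a fixed-magnitude adhesion force applied along each contact
    normal (one row per hand-object contact detected by the simulator).

    contact_normals: (N, 3) array of unit normals at contact points.
    Returns the (3,) net force pulling the object toward the hand.
    """
    normals = np.asarray(contact_normals, dtype=float).reshape(-1, 3)
    if normals.shape[0] == 0:
        return np.zeros(3)
    # Every contact contributes the same fixed strength, so the total
    # adhesion grows with the number of contacts: a single fingertip
    # touch should not be enough to balance the object's weight.
    return strength * normals.sum(axis=0)
```

With two opposing contacts the net adhesion cancels, while several contacts on the same side add up, which is why the rebuttal stresses tuning the per-contact strength against the training data.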
Summary: This manuscript presents a new approach for generating physically realistic estimates of hand and object pose during hand-object interaction. They argue that a weakness of existing approaches is that while they include mechanisms to avoid hand-object penetration or enforce contact, they do not enforce physically realistic contact that would obey e.g. gravitational forces, which they term dynamical constraints. Their approach is based on the assumption that the hand-object configuration should be stable; that is, if physical forces were applied to an observed frame for a period of a few hundred milliseconds, the hand and object should show minimal displacement. First, following estimation of hand pose and mesh parameters and object rotation using existing approaches, they use a physics simulator to forecast displacements of the hand-object configuration forward 200 ms, and estimate a stability loss equal to the L2 norm of the displacement. They argue that the gradient of this loss relative to state in the physics simulator is unstable, and therefore they use a separate MLP they call DeepSim to approximate the stability loss, taking as input a concatenation of the hand and object signed distance fields and the hand-object configuration. They alternately train DeepSim and the base network. They compare their DeepSim approach on two benchmark datasets with state-of-the-art techniques and achieve competitive, albeit weaker, performance on hand and object estimation metrics, but improved performance on a set of more physical criteria – the penetration depth, distance of object displacement, and fraction of frames with contact. They motivate the chosen DeepSim architecture through ablations, and show decreases in the magnitude and variance of the gradient using the approximation approach. Overall I found the manuscript fairly complete, although of a fairly narrow scope that may mean it is more suited to a specialized computer vision conference.
I have concerns about some of the assumptions made about the physical environment and the number of evaluations performed. If some of these questions are addressed I would consider moving up the score. Strengths: • The manuscript is well motivated, clearly explained, and well contextualized within the field. There are evaluations across multiple datasets and a set of ablations. The conclusions for the most part match the results. This should be published somewhere. • The architecture and approach are novel to me, and the approach of having a network learn to smooth the state gradient of a physics simulator could be more broadly useful. • The qualitative examples are quite compelling and there are seemingly significant increases in the physics scores for the hand-object interactions. Weaknesses: • The method does not achieve state of the art in hand and object pose performance on the chosen datasets. • While the approach of smoothing the state gradient could be quite general, the application is narrow in scope. Not only is it limited to hand-object interactions, but to the subset of interactions where the grasp is stable. It would be a bit awkward to use this method in practice since it is restricted to the subset of stably grasped frames and those would have to be pre-detected. • There are assumptions about contact force strength and the orientation of the gravity axis that are made and it is not clear to what degree they impact the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: • How were hyperparameters for the losses chosen? • Can you provide accuracy comparison values (e.g. MJE) for the chosen examples? It would be nice to see if they are representative. Alternatively, it would help to see examples of what the failure modes of the techniques were. • Why are results from only a single random seed presented? • L258 “We set the gravity acceleration as 9.8 m/s2 in the y direction of the camera frame.
For the adhesion force, we empirically set the gain as 100 and the force as 10N, so that a simple touch is not sufficient to stably grasp the object. “ To what degree is it accurate to assume the gravity axis in these datasets is aligned with the images? To what degree do slight variations in the gravity axis produce large changes in the object pose? Does this affect the comparison between techniques in Tables 1 and 2. • It is disappointing that the hand ground truth is not available in Table 2 and it makes it difficult to evaluate the technique as there is really only a single dataset with joint errors. Are there other splits or datasets that could be used? • Can you report the performance of networks trained with NimblePhysics and FiniteDifference gradients? It is unclear, especially with the Finite Difference network, if the networks simply don’t converge well or if there is a more catastrophic reason why they cannot be used for training. This was a key motivation for the DeepSim approach and it would be nice to verify that they are problematic. It would be interesting if they produced e.g. more physically plausible results. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
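The stability criterion this review summarizes — roll the estimated hand-object configuration forward roughly 200 ms in a physics simulator and take the L2 norm of the resulting displacement — can be sketched with a toy point-mass Euler integrator standing in for the actual simulator. All names and the integrator itself are illustrative, not the paper's implementation:

```python
import numpy as np

def simulate_displacement(pos, vel, net_accel, horizon=0.2, dt=0.002):
    """Toy stand-in for the physics simulator: integrate a rigid body
    (here a point mass) forward for `horizon` seconds and return its
    displacement from the initial pose."""
    p = np.asarray(pos, dtype=float).copy()
    v = np.asarray(vel, dtype=float).copy()
    a = np.asarray(net_accel, dtype=float)
    for _ in range(int(round(horizon / dt))):
        v += a * dt          # semi-implicit Euler step
        p += v * dt
    return p - np.asarray(pos, dtype=float)

def stability_loss(pos, vel, net_accel):
    """L2 norm of the displacement after the 200 ms rollout: zero for a
    balanced (stable) configuration, large when the object falls out of
    the grasp."""
    return float(np.linalg.norm(simulate_displacement(pos, vel, net_accel)))
```

Under balanced forces the loss is exactly zero, while an unsupported object under gravity accumulates roughly 0.5·g·t² ≈ 0.2 m of displacement over the 200 ms horizon.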
Rebuttal 1: Rebuttal: Dear Reviewer gag8: We thank you for providing valuable feedback and acknowledging the strength of our work. We hope the responses below can address your concerns. ### 1. Concerns about Performance in Accuracy Please refer to the general responses #1 for the discussion about accuracy performance. ### 2. Scope of Application **Our proposed method is a general framework and works with any task-specific base networks and physics models, and is therefore not restricted to hand-object interaction only**. For instance, other tasks including human pose [B3] and motion estimation [B2] also require the results to be physically stable under interaction with the ground and scene. Our method can be easily extended to these tasks with modified base networks and physics simulators, using the DeepSim network to smoothly connect the two parts. This is also acknowledged by reviewer `bTxd` in strength 4. **Besides, our method is also not restricted to stable grasping frames only, since we train the model jointly using accuracy and stability losses**. We mentioned in L219 of the main paper that during training, we include samples whose ground truth poses are both stable and unstable, *e.g.* having no hand-object contact. To avoid biasing towards grasping, we mask out the stability loss for samples whose ground truth is not stable, and train on these samples using only accuracy losses. Therefore, our method can generalize to both types of test samples. We include more results on non-grasping samples in Fig. 1 of the PDF in the general responses for illustration. **In summary, since our model is conditioned on the input image, it will not blindly predict all samples as having a grasping hand if the image does not indicate so**. ### 3. Impact of Design Choices We make assumptions about the contact force strength and the direction of gravity based on observation of the training data.
We empirically find these design choices reasonably align with real-world physics. Please refer to the supplementary video for qualitative evaluation. Since in practice it is difficult to perfectly simulate real-world physics, we take special care in training to avoid the model being misled by imperfect simulation. Specifically, we examine all training data and *do not* impose the stability loss on samples whose ground truth poses are *unstable under our assumption*. Firm grasping cases and non-contacting cases remain stable/unstable regardless of the gravity direction. Of the other samples, only those that align with our assumption are included for training with the stability loss. **Consequently, the impact is restricted mainly to a reduction of training data**. In testing, we also ensure that the selected samples follow our assumption (see the GT in Table 1 of the main paper) or were manually verified by previous work [52]. Hence the physics metrics shown in Tables 1 & 2 are meaningful and comparable. ### 4. Hyper-parameters in Losses We follow the same training pipeline as [32] and set the same weights for the accuracy losses. For the stability loss, we set the weight as $\lambda_s = 0.1$ so that all losses are roughly on the same scale. ### 5. Accuracy Comparison for Selected Examples Please refer to Table 4 of the PDF in the general responses for the accuracy of selected examples in the main paper. ### 6. Analysis of Failure Cases Please refer to Fig. 2 of the PDF in the general responses for the analysis of failure cases. ### 7. Random Seed We show in Table 3 of the PDF in the general responses the results of multiple runs on the two datasets. The results show low variance and indicate stable performance of our model. ### 8. More Quantitative Evaluation We further follow [52] to compare on the HO3Dv1 split for more quantitative evaluation. We adopt the same setting as [52] for a fair comparison.
Please refer to Table 1 of the PDF in the general responses for the result. Note that both [19, 52] are state-of-the-art methods that explicitly adopt physics priors when modeling hand-object interaction. Compared to them, our method achieves consistently better performance in both accuracy and stability. ### 9. Results for NimblePhysics and FiniteDifference As shown in Fig. 4(a) of the main paper, both methods fail to converge as the training losses cannot be decreased, **leading to significantly worse performance and out-of-range scores**. For FiniteDifference (green curve), we can clearly observe an increase of the loss as training progresses, indicating a failure of training. This is because it produces incorrectly large gradients due to the sudden velocity change during penetration resolution. For NimblePhysics, the noisy gradient, as demonstrated in Fig. 4(b), also prevents the loss from decreasing, which illustrates the necessity of our method in smoothing the gradient. In addition, we mentioned in L37 of the supplementary material that NimblePhysics takes around 120 hours to train a single epoch, as computing the state gradient for complex contact geometry is computationally expensive. **Consequently, it is also intractable to apply it when training on large-scale datasets**. ### Bibliography [B2] Gärtner, E., Andriluka, M., Coumans, E., et al. Differentiable dynamics for articulated 3D human motion reconstruction. CVPR 2022: 13190-13200. [B3] Tripathi, S., Müller, L., Huang, C. H. P., et al. 3D human pose estimation via intuitive physics. CVPR 2023: 4713-4725. --- Rebuttal Comment 1.1: Title: Thank you, slightly raising the scores Comment: I appreciate the authors' response.
I like this paper and while I acknowledge that the use of physics simulators is not entirely novel, I do think there is a lot of value in the current approach. Because of this I am going to slightly bump my score, but I would not strongly advocate for acceptance. I am somewhat less concerned on a re-read about the method not hitting SOTA for object and pose keypoint detection, but I also think the physics metrics could be presented in a realistic use case to be more convincing. I don't feel like two of my points are really addressed. I don't see how this method will apply to cases where the stability levels are unclear, e.g. during walking phases where your heel is off the ground, higher stability can be achieved by increasing surface area. This is also noted by bTxd. I also don't see sufficient consideration of just guessing the gravity axis. --- Reply to Comment 1.1.1: Comment: Dear Reviewer gag8: We thank you for your kind support in raising the score and your further comments about this paper. We hope the responses below can address your remaining concerns. We acknowledge that the simulation process may not perfectly align with real-world physics as we infer from only a single image input. This suggests a future research direction: exploiting additional knowledge, such as temporal information from video inputs or statistical analysis of human physics, to better conform to reality. Nevertheless, we would like to highlight that our method can generalize to other use cases with modified physics models and simulators, **without relying on oversimplified rule-based heuristics or affecting the design of the DeepSim network**. For instance, we can follow the design of the physics models and the simulator in other works like [B2] when modeling stability in human-ground interaction. In terms of the gravity axis, we acknowledge that we assume the gravity direction is known.
In future work, we could calibrate a more precise gravity direction from the image observation by further exploiting semantics and normal map information. [B2] Gärtner, E., Andriluka, M., Coumans, E., et al. Differentiable dynamics for articulated 3D human motion reconstruction. CVPR 2022: 13190-13200.
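The training objective described in this thread — accuracy losses on every sample, with the simulator-derived stability term (weight $\lambda_s = 0.1$, chosen so the losses are roughly on the same scale) masked out whenever the ground-truth pose is itself unstable — might look like the following sketch. Function and argument names are hypothetical:

```python
import numpy as np

def total_loss(accuracy_losses, stability_losses, gt_is_stable, lambda_s=0.1):
    """Per-batch training objective: accuracy terms are always on; the
    stability term is masked out for samples whose ground-truth pose is
    itself unstable (e.g. no hand-object contact), so the model is not
    biased toward producing firm grasps on every input."""
    acc = np.asarray(accuracy_losses, dtype=float)
    stab = np.asarray(stability_losses, dtype=float)
    mask = np.asarray(gt_is_stable, dtype=float)
    return float(np.mean(acc + lambda_s * mask * stab))
```

For a batch of one stable and one unstable sample, only the first sample's stability term contributes; the second is trained purely on accuracy.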
Summary: This work presents a novel pipeline for 3D hand-object pose estimation, focusing on improving arbitrary base hand/object pose estimators by applying physical simulation and its induced physical loss (fitted as a neural network). The performance is tested on the DexYCB and HO3D datasets and the method is compared against many SotA baselines. Results show improved performance mostly on the physics metrics. Strengths: - The proposed idea and method are reasonable, interesting, and valid. - The performance on the introduced physics metrics improves over baseline methods. - The paper writing is good and the paper is easy to read and follow. - Qualitatively, the proposed method does generate more physically realistic hand/object interaction poses. Weaknesses: - My major concern is that the performance regarding hand/object pose estimation is worse than the baselines [32, 52] in Tables 1 & 2. Since the proposed method particularly emphasizes optimizing physical realism, it is expected that the method performs better in terms of the proposed physical metrics. But the whole point of doing this should be to improve the hand/object pose estimation results, which is not achieved as shown in the tables. - I'm confused by Fig. 1. The gt object pose looks quite different from the input image. Is this a dataset annotation issue or a visualization issue? Why is the gt object pose annotated so off? It's hard to say it's the problem of the baseline [32] if the gt is so off. - Why are the related works [47, 24] not compared in Tables 1/2? I think it's important to compare against them as they also considered physical realism for the same task, and the authors have discussed in related work that they have disadvantages. It's better to show them in numbers.
- In addition, I feel the idea of using physical simulation as a loss to supervise interaction-oriented perception tasks is not new, as shown by the many papers cited by the authors in the related work section and others. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: no issue found Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer oW7V: We thank you for providing valuable feedback and acknowledging the strength of our work. We hope the responses below can address your concerns. ### 1. Concerns about Accuracy Performance We assume the comment intends to say that our method performs worse than the baseline methods [32, 48] rather than [32, 52]. In Table 2 of the main paper we show that our method achieves superior performance in both accuracy and stability compared to [52], which also explicitly enforces physical realism in hand-object pose estimation. Our accuracy is comparable to [32, 48] even though they use more augmented data during training, and our method achieves superior results when the baseline methods are trained with the same amount of data. **Please refer to the general responses #1 for a detailed discussion related to accuracy performance.** ### 2. Clarification for Fig. 1 We believe there are some misinterpretations of Fig. 1. In Fig. 1, the visualization of the ground truth annotation refers to the figure in the *first row*, second column, instead of the second row, second column, *i.e.* the caption at the *bottom* of the figure. The figure in the second row, second column instead visualizes the hand-object pose from a rotated view angle in order to better demonstrate that our method generates stable contact in the occluded area. **In the correct figure for visualizing the ground truth annotation, we project the ground truth hand and object mesh into the image space and show that they align well with the input image; hence there should be no significant errors in the annotation.** ### 3. Comparison to [47, 24] Neither related work [47, 24] releases code for evaluation. [47] also does not release the physics models and exact settings used in simulation. We are therefore unable to reproduce their methods for evaluation. Besides, both [47, 24] are evaluated on self-collected small-scale datasets, which are also not publicly available.
Hence we are unable to evaluate our method on their datasets and compare with their performance. Nevertheless, in the quantitative comparison, we emphasize the comparison with [52], which is a recent work that also enforces physical realism in hand-object pose estimation. Table 2 of the main paper shows that our method consistently outperforms [52] in terms of both accuracy and stability. ### 4. Clarification of Contribution and Novelty **Our main contribution is not about being the first to integrate a physics simulator or simulator-induced losses in the refinement pipeline, but proposing a more *effective* and *efficient* method that learns from physics simulation for estimating stable hand-object poses with complex contact geometry**. Due to the intrinsic discontinuity in the simulation process and the resulting noisy state gradient, directly imposing losses from differentiable physics simulators and refining the base network via gradient descent is challenging. To this end, previous works perform brute-force search [47] over a limited configuration space, or rely on additional global optimization [B2]. **Such a test-time optimization strategy is computationally expensive and therefore has limited application**. Alternatively, other works [39, 11] propose to adopt a deep reinforcement learning framework to work with non-differentiable simulators. However, **these methods do not generalize to unseen data and are also difficult to converge in training**. In contrast, we propose to adopt a neural network, DeepSim, that can smoothly approximate the state gradient from the simulator and effectively refine the base network via back-propagation, **which is not addressed by related works**. Besides, since our method does not require test-time optimization, more stable results can be produced via a simple forward call of the refined base network, which is more efficient and practically applicable compared to previous works.
The ablation study in **Section 4.5 of the main paper also demonstrates the effectiveness and superiority of our method compared to directly using the physics simulator as supervision**. Furthermore, we refer the reviewer to the general responses #2 for the novelty of our method compared to other works that adopt neural networks for simulation approximation. ### Bibliography [B2] Gärtner, E., Andriluka, M., Coumans, E., et al. Differentiable dynamics for articulated 3D human motion reconstruction. CVPR 2022: 13190-13200. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I've raised my score to Borderline Accept.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and valuable comments. Below, we first clarify common concerns raised by multiple reviewers. ### 1. Concerns about Accuracy Performance (Reviewer oW7V, gag8) In this paper, we aim to address the task of estimating stable hand and object poses from single-image inputs, which is **important for applications that demand *robust* hand-object interaction**. For instance, in dexterous manipulation, ensuring a successful grasp and manipulation of the target object often takes precedence over precisely replicating the exact contact points. However, previous learning-based methods such as [32, 48] often produce suboptimal results where, although the hand fingers are close to the ground truth contact positions, a stable grasp is not actually formed. In consequence, while they exhibit higher estimation accuracy, i.e. with reduced hand and object pose errors, they are unsuitable for these applications. To this end, we place a stronger emphasis on physics metrics to better cater to the requirements of such applications. **Compared to previous methods [52, 19] that also explicitly optimize for physics realism, our method achieves state-of-the-art performance in both accuracy and stability**. In particular, [52, 19] impose over-simplified assumptions on contact dynamics; in contrast, we learn complete dynamics priors from the physics simulator and thus avoid biasing towards a restricted set of stable poses. We also highlight that our method is more efficient compared to [52, 19] since no test-time optimization is needed. We note that the baseline methods [32, 48] have higher accuracy performance; this is because we evaluate them using their officially released model weights, which are **trained with significantly more augmented data**. In particular, they synthesize additional hand and object images and corresponding annotations from various rotated views to mitigate occlusion ambiguity.
However, since rotating the hand and object poses may alter the stability status of the original configuration, we did not apply the same augmentation strategy and only used a reduced amount of data for training with the stability loss. Nevertheless, our method still achieves comparable accuracy thanks to the dynamics priors learned from the physics simulation. Qualitative results also demonstrate that estimated hand and object poses visually align with input images. **In Table 4 of the attached PDF, we show that our method achieves better accuracy than [32] (which uses the same base network as us) when training with the same amount of data, proving that the accuracy of our method is comparable**. Furthermore, the evaluation also shows that higher accuracy performance does not necessarily result in better stability, which justifies the motivation of our work. ### 2. Difference to Methods using Neural Networks to Approximate a Simulator (Reviewer HRq6, oW7V, fSiU) Unlike previous methods [42, 37] that attempt to directly approximate the entire physics simulator by regressing *complete simulated states*, **our key insight in designing DeepSim is to regress the *scalar* stability loss supervised by the simulator, which is a simplified task and practically more feasible to achieve**. This improved design allows the DeepSim network to accurately infer pose stability and better generalize to unseen test data, as shown in the ablation study, *i.e.* Table 3 of the main paper. To elaborate on the motivation: our goal during physics refinement is to quantitatively evaluate the stability of the hand and object poses estimated from a base network and refine the estimation's stability. While modern physics simulators can serve as such an evaluator, due to the *complex contact geometry* and resulting noisy state gradient, the stability loss evaluated from the simulator cannot be directly back-propagated to refine the base network.
We therefore propose the DeepSim network to learn from the simulator, *i.e.* it is asked to replicate the same evaluated stability supervised by the simulator, *instead of being a simulator itself*, while preserving smooth gradients that are suitable for back-propagation. We believe the design of the DeepSim shares more similarity with the *discriminator* in a GAN [15]; however, it has no adversarial relationship to the base network (akin to the generator) and is thus much easier to train. Pdf: /pdf/f059094b4e12407c1324873fca1ff4a2113e7953.pdf
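The surrogate idea described in this rebuttal, a network trained to regress the simulator's scalar stability loss so that its smooth input gradients can refine the pose, can be sketched as follows. Everything here is illustrative, not the paper's implementation: the stand-in loss function, network sizes, and hyperparameters are assumptions, and a real system would query a contact simulator for the training labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the simulator's scalar stability loss: here,
# mean squared pose error (a real system would run a contact simulation).
def simulator_stability_loss(pose):
    return float(np.mean(pose ** 2))

# Tiny two-layer MLP surrogate in the spirit of DeepSim: pose -> scalar.
D, H = 6, 32                        # 6-DoF pose, hidden width (assumed)
W1 = rng.normal(0.0, 0.5, (H, D)); b1 = np.zeros(H)
w2 = rng.normal(0.0, 0.5, H);      b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return float(w2 @ h + b2), h

# Supervised training: replicate the simulator-evaluated stability loss.
lr = 1e-2
for _ in range(3000):
    x = rng.normal(0.0, 1.0, D)
    y = simulator_stability_loss(x)
    pred, h = forward(x)
    err = pred - y                          # gradient of 0.5 * err**2
    g_z = (err * w2) * (1.0 - h ** 2)       # backprop through tanh
    W1 -= lr * np.outer(g_z, x); b1 -= lr * g_z
    w2 -= lr * err * h;          b2 -= lr * err

# Because the surrogate is smooth, its input gradient can refine a pose,
# unlike the simulator's noisy contact gradients.
pose = rng.normal(0.0, 1.0, D)
pred_before, _ = forward(pose)
for _ in range(50):
    _, h = forward(pose)
    g_pose = W1.T @ (w2 * (1.0 - h ** 2))   # d(prediction)/d(pose)
    pose -= 0.1 * g_pose
pred_after, _ = forward(pose)
```

The refinement loop plays the role of back-propagating the stability loss into the base network's pose output: descending the surrogate's prediction decreases the predicted instability of the pose.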
NeurIPS_2023_submissions_huggingface
2023
Summary: Authors propose an approach for 3D pose estimation for hand-object interaction from a single image. Unlike prior work, which is purely data-driven (except for some works in the robotics literature) and focuses on visual quality, this work also aims to capture grasp stability. To this end, authors use physics-based simulation to model grasp stability. Since PBS is typically non-differentiable, authors propose to train a neural network (with full supervision from the simulation) to emulate the simulation. This allows them to differentiably approximate the simulation. Authors outperform baselines on DexYCB and HO3D datasets. Strengths: + Most existing works focus on visual plausibility. It is interesting to see this work also reason about physical stability. + The idea of approximating non-differentiable simulation has previously been used () but it is still underexplored. It would be better if authors provided more context about prior work in this space. + Authors outperform the baselines. Weaknesses: [Technical] 1. Eq. 2: Can the authors provide a bit more motivation/intuition around the equation? What does it do? Why is it correct? Is it a general formulation or does it have specific applications? The current reference ([14]) is a 650+ page book on Classical Mechanics, which does not help the reader much. Maybe point to the specific portion of the text that elaborates on the equation. It is not clear how the authors arrived at this equation from the Euler-Lagrange equation. These derivations should be included in supp. mat. at least. 2. Sec 3.3: Approximating non-differentiable functions with a neural network is a well-known problem. The most straightforward solution is to train an MLP with supervised data. Isn't this exactly what DeepSim is doing? This can be traced back to "Approximation of functions and their derivatives: A neural network implementation with applications", Nguyen-Thien et al.,
Applied Mathematical Modelling, 1999. Other domains such as cloth simulation have also used similar techniques to train a neural network to predict the outcome of a physical simulation (Holden et al., Eurographics'19). Can the authors elaborate on how their proposed DeepSim is different? This is an underexplored area, so it is okay to have some similarities with prior work, but authors should provide more context and flesh out what the new insights are here. 3. L142: How are the physical properties of the hand and object, e.g. mass, coefficient of friction, etc., obtained from the input image for simulating forces? Does this generalise? 4. Eq. 7: Do we need to mark 8 corners on all template meshes? Isn't this restrictive? Why not put the loss on object mesh vertices directly? 5. Eq. 3: What about rotation? A grasp is still not stable if the object rotates despite a static grasp. Why consider only translation? 6. L140: If M is a matrix, what does M(\cdot) mean? [Minor] - Missing related work: TOCH, Zhou et al. ECCV'22: they learn to predict stable grasps from data. Jiang et al. ICCV'21: they learn to predict stable contacts on the object and synthesise a hand to match the contacts. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some key formulations are unclear to me (see pt. 1); this is important to clarify as it is one of the main contributions of the work. More clarity around design choices would also help the manuscript (see pts. 2-6). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors discuss potential limitations and broader impact in the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer HRq6: We thank you for providing valuable feedback and acknowledging the strength of our work. We hope the below responses can address your concerns. ### 1. & 6. Clarification of Eq.(2) For the notation, $\mathbf{M}(\mathbf{q}_t)$ denotes that the object inertia matrix $\mathbf{M}$ is determined by the object configuration $\mathbf{q}_t$, where the parenthesis $(\cdot)$ indicates the dependency. Other quantities in Eq.(2) follow the same convention. The notation is commonly used in related literature [44, 50]. In addition, we wish to correct a typo that the left-hand side of the first equation in Eq.(2) should be $\mathbf{M}(\mathbf{q}_t)$ instead of $\mathbf{M}(\mathbf{q}\_{t+1})$. We apologize for the confusion caused in understanding the equation and will revise it in the final version. To elaborate further, Eq.(2) defines how the object state, including the configuration $\mathbf{q}_t$ and velocity $\dot{\mathbf{q}}_t$, is calculated and updated at each simulation time $t$. Specifically, we formulate the first equation of Eq.(2) to state that the system momentum changes due to the corresponding impulses, *i.e.* $\mathbf{M}\dot{\mathbf{q}}\_{t+1} = \mathbf{M}\dot{\mathbf{q}}_t + \mathbf{f}\Delta t$, where $\mathbf{f}$ consists of the gravitational and Coriolis force $\mathbf{c}$ as well as contact-induced forces $\mathbf{f}_C, \mathbf{f}_A$. We use the above Lagrangian dynamics equation in order to work in generalized coordinates. Once we have solved the updated velocity $\dot{\mathbf{q}}\_{t+1}$, we can then compute the updated configuration $\mathbf{q}\_{t+1}$ using the discrete time Euler integration scheme, as defined in the second equation. The current citation [14], also co-cited in [B1], refers to the definition of the general Lagrangian dynamics equation for rigid bodies (Chapter 5), while the first equation of Eq.(2) is a specific instantiation of it in order to introduce relevant forces in $\mathbf{f}$ for our task. 
This equation is implemented by the MuJoCo simulator [44] used in our work. We will include more references [B1, 44] to provide a better clarification on this equation. ### 2. Difference to Network-based Simulation Approximation Please refer to the general response #2 for a discussion of our contribution and the difference to other works that use neural networks to approximate physics simulation. ### 3. Obtaining Physics Properties We set the object physics properties based on the previous annotations (see the footnote on page 2 of the supplementary materials). Since the hand remains static during simulation, the hand mass and inertia are irrelevant, and we set them as constants on a similar scale to the object. For all physics coefficients, *e.g.* the coefficient of friction, we use default values in MuJoCo, which are optimized to be particularly suitable and generalizable for simulating common objects. We manually verified that all simulation parameters reasonably align with real-world physics, where the effects are demonstrated in the supplementary video. However, we mentioned in our limitations that the simulation settings may not be perfect for objects with complex and rare physics properties, and are best modified for task-specific requirements. Note that we do not estimate physics properties from input images. ### 4. Using Corner Losses We follow the same training pipeline as the baseline method [32] to use object corner losses for a fair comparison. Specifically, we compute the 8 object corners from the ground-truth mesh by taking the max and min values along each of the 3 axes, resulting in the 8 vertices of the tightest object bounding box. No manual annotation of these corners is required. Since we are refining the 6-DoF object pose as the base network output instead of individual vertex positions, training with corner losses should be equally effective in principle and computationally more efficient than a vertex loss.
It also better supports batched training when objects have different numbers of vertices. ### 5. Using Rotation in Stability Loss We design the stability loss using only object center displacement after simulation, as it is empirically sufficient to determine the stability of the estimated pose. In the implementation, since we simulate for a relatively long time (T = 200ms) and set a small threshold (displacement less than 1cm) to classify a pose as stable, we empirically observe that samples with a large object rotation change after simulation also tend to exceed the displacement threshold, and so can be correctly classified as unstable. In addition, we show in Table 3 of the main paper that the model MLP + RT (using both rotation and translation) performs worse than MLP + T (translation only) due to the increasing difficulty of regression with the DeepSim. We therefore choose to consider object translation only. ### [Minor] Other Related Works We thank the reviewer for providing other related works. However, both suggested papers focus on generating or refining contact given *3D* inputs, which is not directly comparable to our method as we estimate poses from *image* inputs. For completeness, we will include a discussion of these papers in Section 2 in the final version. ### Bibliography [B1] Andrews, S., Erleben, K., and Ferguson, Z. Contact and Friction Simulation for Computer Graphics. ACM SIGGRAPH 2022 Courses, 2022: 1-172. --- Rebuttal Comment 1.1: Title: Post rebuttal update Comment: Thanks authors for the rebuttal. It clarified my doubts. After reading other reviews and rebuttals, I maintain my positive view of the work. The discussion around integrating rotation in the stability loss is interesting and can be briefly incorporated in the "limitations/future works" section. --- Reply to Comment 1.1.1: Comment: Dear Reviewer HRq6: We thank you for maintaining a positive view of the work.
We will further discuss the design of the stability loss in the final version.
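Two concrete design choices discussed in this thread, bounding-box corners for the corner loss (response #4) and the displacement-threshold stability test (response #5), can be sketched as follows. The mesh is synthetic and `simulate_object_center` is a placeholder (a real version would step a simulator such as MuJoCo for the hand-object scene); the 200 ms horizon and 1 cm threshold follow the rebuttal.

```python
import itertools
import numpy as np

# Hypothetical object mesh: N vertices in meters (stand-in for a real mesh).
rng = np.random.default_rng(1)
vertices = rng.uniform(-0.05, 0.05, size=(500, 3))

# Corner loss: the 8 corners are the vertices of the tightest axis-aligned
# bounding box, i.e. all combinations of the per-axis min and max.
lo, hi = vertices.min(axis=0), vertices.max(axis=0)
corners = np.array(list(itertools.product(*zip(lo, hi))))   # shape (8, 3)

# Stability test: simulate for T = 200 ms with the hand frozen, then classify
# the pose as stable if the object center moved less than 1 cm.
def simulate_object_center(center, seconds=0.2):
    # Placeholder dynamics: pretend the object settles by 4 mm.
    return center + np.array([0.0, 0.0, -0.004])

def is_stable(center_before, center_after, threshold_m=0.01):
    return bool(np.linalg.norm(center_after - center_before) < threshold_m)

c0 = vertices.mean(axis=0)
stable = is_stable(c0, simulate_object_center(c0))   # 4 mm < 1 cm -> stable
```

As the rebuttal notes, no manual corner annotation is needed: the corners fall out of the per-axis extrema, and the same construction works for any template mesh.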
The Distortion of Binomial Voting Defies Expectation
Accept (poster)
Summary: The authors extend the notion of a distortion of a voting rule to a distributional setting. That is, given a distribution over utilities (that the voters have i.i.d. on the candidates), the authors replace the classic distortion with a random variable that depends on the distribution. The expected value of this random variable is the expected distortion. The authors mostly seek rules that have good expected distortion, as compared to a voting rule with the best one possible. The two biggest issues with the paper are that the assumed model is highly unrealistic and, worse yet, that it does not seem to lead to any sort of robust conclusions. The assumption that each voter can have an arbitrary (and, possibly, correlated with other voters) preference order, but all the utilities come from the same distribution (i.e., there is a common distribution for each voter and each candidate, except that there is the hidden correlation that utilities that a voter generates for the candidates must respect their preference order) is highly artificial. Then, based on this highly artificial assumption, the authors claim that (under some further, arguably mild assumptions) some specific positional scoring rules (such as m/2-approval and the binomial rule from the title) perform particularly well (however, if I understand correctly, they do not perform _objectively_ well, but simply with respect to the best possible rule, which itself may or may not be very good---I missed if this is clarified in the paper). Now m/2-approval is well understood to be a poor rule because it can be very indecisive (indeed, if all voters agree that the ranking of the candidates is a > b > c > ... then the rule would not realize that a is the best candidate). The binomial rule is certainly better, but---as argued by the authors---is not too far off from m/2-approval.
So, all in all, the authors make a highly questionable assumption and conclude that some rules that do not look too attractive at the outset optimize some criterion. This is far too weak a conclusion for a paper that expects to be competitive at NeurIPS. On the positive side, the paper certainly includes high-quality mathematics. Still, I cannot really see it as more than an incremental addition to the distortion literature (I mean incrementality in terms of conceptual contribution, which---I guess---the authors wanted to stress; on the technical level the paper is far from being incremental). Strengths: The mathematics behind the results is certainly appealing. Weaknesses: The underlying assumption of drawing utilities IID is highly unnatural (especially when merged with the assumption that the utilities still have to respect submitted preference orders, which creates a weird form of correlation). The conclusions of the paper do not seem to have much practical meaning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: You write that "the metric assumption is arguably difficult to justify in most domains of interest". Could you provide the arguments that you have in mind? Q2: Does your research lead to high-level conclusions beyond "this rule performs well in our setting"? The only thing I could think of is that---perhaps---using some sort of family of distributions that either are IID as yours, or become highly correlated (between the candidates), one could show that the closer we are to the IID setting, the more natural it is to use positional scoring rules, whereas the more highly correlated the utilities are, the better Condorcet-consistent rules are. However, this is just a guess and I am not sure how such a result could be obtained. However, generally, resolving the argument between Borda and Condorcet based on the correlations between voters' utilities would certainly be something I would be far more willing to recommend for NeurIPS.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors admit that their assumption is highly unrealistic, but argue that others make similar assumptions. I understand it is tempting to say so, but given that their work does not seem to give truly valuable conclusions, I think the argument is too weak. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer comment:** > The assumption that each voter can have a arbitrary (and, possibly, correlated with other voters) preference order, but all the utilities come from the same distribution (i.e., there is a common distribution for each voter and each candidate, except that there is the hidden correlation that utilities that a voter generates for the candidates must respect their preference order) is highly artificial. **Response:** This seems to be your main concern; we believe that it stems from a misunderstanding and that we can provide an effective rebuttal. When the utilities of the voters for alternatives are drawn i.i.d. from some distribution, the induced preference profile consists of rankings drawn independently and u.a.r. (as you may know, this is called "impartial culture" in social choice). One could define expected distortion with respect to the distribution over utilities, without conditioning on the preference profile. But this is a very "easy" setting for the analysis of distortion, because (especially in the large) all alternatives would have roughly equal social welfare. Therefore, this setting would not differentiate between different voting rules. When we condition on a preference profile, we are not making an assumption (nor imposing some hidden correlation). Rather, we are setting a tougher requirement. Think of it this way: in most cases, the voting rule would observe a preference profile that is almost symmetric, and then it doesn't really matter which alternative it chooses. But we are requiring the voting rule to do well in *every* situation, regardless of the preference profile it receives as input. This tougher requirement allows us to pinpoint voting rules that truly stand out in terms of their expected distortion (and expected welfare). To be clear, we are not saying that i.i.d. utilities are not an assumption we'd like to see relaxed in future work — this is something we openly discuss in the paper. 
But we believe your concern about correlation (which is later referred to as a "weird form of correlation") is unfounded. **Reviewer comment:** > Now m/2-approval is well understood to be a poor rule because it can be very indecisivle (indeed, if all voters agree that the ranking of the candidates is a > b > c > ... then the rule would not realize that a is the best candidate). **Response:** We do think binomial voting is more attractive, but let us say a few words in defense of m/2-approval. First off, the family of k-approval rules has received quite a bit of attention. A prominent member of the family is the veto rule, which corresponds to (m-1)-approval (each voter vetoes their bottom-ranked alternative). Veto would obviously have the same issue in your example. From a broader perspective, our model and results provide a novel way of evaluating voting rules. There are many other criteria, including axiomatic desiderata, maximum likelihood estimation under various noise models, distance rationalizability, etc. Your example shows that m/2-approval, without suitable tie-breaking, fails the unanimity axiom. Imposing such additional criteria narrows the search for suitable voting rules; in particular, binomial voting satisfies unanimity. In summary, the new criterion we propose should be seen as another tool in the social choice toolbox, which is meant to be used in conjunction with others. **Reviewer question:** > Q1: You write that "the metric assumption is arguably difficult to justify in most domains of interest". Could you provide the arguments that you have in mind? **Response:** We note that you are asking us to justify that something is hard to justify :-) But let us respond by way of an anecdote. When one of us talked in 2014 with Elliot Anshelevich, who pioneered the metric view of distortion, the primary example he gave was voting over movies. But we are not convinced that there's a choice of dimensions that gives rise to metric preferences. 
For example, if the dimensions were quality of the script, quality of acting, etc., then all voters would be located in the same place: the maximum in each dimension. And if one adds dimensions where voters disagree, such as genre, then preferences are no longer single-peaked (imagine a voter who likes comedies and science fiction movies). Admittedly, this example is a straw man. And don't get us wrong: we love the metric distortion literature and believe it's very valuable. But we also strongly believe in the importance of models that do not impose the metric assumption — especially when one is able to relax the common unit sum assumption, as we do. **Reviewer question:** > Q2: Does your research lead to high-level conclusions beyond "this rule performs well in our setting?". The only thing I could think of is that---perhaps---using some sort of family of distributions that either are IID as yours, or become highly correlated (between the candidates) perhaps one could show that the closer we are to the IID setting, the more natural it is to use positional scoring rules, whereas the more highly correlated are the utilities the better are Condorcet-consistent rules. **Response:** We believe this question stems from the misunderstanding about correlation, which is addressed above. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I certainly see your contribution as something that adds possibly useful results to the distortion literature, but I do not see it as valuable enough to be accepted for NeurIPS. In particular, I am not convinced that your results are useful beyond the distortion literature. If, indeed, you showed a rule that were appealing under a number of criteria _and_ additionally was good with respect to your distortion notion (and, better yet, the distortion view would help you in selecting among several such rules), I would be far more convinced. 
Q1: "We note that you are asking us to justify that something is hard to justify :-) " <-- No, I am asking you to be responsible for your words. If you write that something "arguably holds" you need to be able to give the arguments. "Weird form of correlation" <-- I see your point of view and I understand that it is better than looking at impartial culture. That said, I think it still involves a form of correlation that is hard to justify. I guess that if you want to convince readers like myself in the future (or in the revised version of the paper) then you would have to make clear your view as to why this unrealistic assumption makes sense (the argument that "we know it is not really realistic, but is a clear improvement over the status quo and this is the best we can do for now" would be good for me, for most venues, but is not sufficient for NeurIPS in my view). All in all, your paper shows that two rules are good according to a criterion you invented. Should we use these rules? Should we recommend them? Why is your result important beyond the realm of the distortion literature? --- Reply to Comment 1.1.1: Comment: Thank you for your response; we appreciate the opportunity to engage in a discussion with you. **Reviewer comment:** > If, indeed, you showed a rule that were appealing under a number of criteria and additionally was good with respect to your distortion notion [...] I would be far more convinced. **Response:** This comment is helpful, and will lead to a stronger presentation of our results. We do believe we can make a convincing case for the general appeal of binomial voting. You mentioned the debate between Borda and Condorcet in your original review, so you are likely aware of the arguments in favor of the family of positional scoring rules (which includes Borda).
In particular, while no positional scoring rule is Condorcet consistent, they are (when viewed as social choice correspondences) the only voting rules that are anonymous, neutral, and consistent (unifying two profiles with identical winners doesn't change the winner). Importantly, binomial voting is not an outlandish voting rule designed purely to achieve low expected distortion. Rather, it is a positional scoring rule, and as such inherits the desirable properties of this family of rules. Furthermore, we are aware of very few positional scoring rules that have received attention in their own right, as it's typically hard to justify any specific choice of scores. Examples include plurality, Borda and veto. Another rare example is the harmonic scoring rule of Boutilier et al. [2015]; it was singled out because of its worst-case distortion guarantees, and has become rather well known. To summarize, expected distortion helps us pinpoint a new, "special" positional scoring rule, binomial voting, which inherits a number of desirable properties as a member of this family, and additionally guarantees low expected distortion. Much like the harmonic scoring rule, we strongly believe this would be of interest beyond the distortion literature. **Reviewer comment:** > I think it still involves a form of correlation that is hard to justify. [...] the argument that "we know it is not really realistic, but is a clear improvement over status quo and this is the best we can do for now" [...] is not sufficient for NeurIPS in my view. **Response:** To clarify, we did not suggest that correlation is "not really realistic" -- our comment regarding an assumption we'd like to relax was about i.i.d. utilities, which is a different matter. Instead, our point was that correlation is not even an assumption. Since our previous explanation was unsuccessful, let us offer *two* alternative explanations: 1. Consider a policy maker in charge of choosing a voting rule. 
There are various guarantees such policy maker might wish for the voting rule to satisfy with respect to expected distortion. A first guarantee is that *ex ante*, before an election is held, the distortion is expected to be low. Note that this expectation is over the entire space of voter utility profiles. While this is an appealing guarantee, it is rather weak (and easy to satisfy). Indeed, consider our policy maker not on the day they choose the voting rule, but rather a while later, in a specific election that uses this rule, after the votes have been cast and officially tallied. At this point in time, the ordinal preferences are public knowledge, and, depending on what they are, it might be the case that despite the expected distortion having been low *ex ante*, it turns out that the expected distortion is high *ex post*, that is, conditioned on all of the public information so far — i.e., on the *ordinal* preference profile — the (conditioned) expected distortion is high. A forward-looking policy maker might want a guarantee that this scenario cannot happen, i.e., a worst-case guarantee on the *ex post* expected distortion. This is of course a much stronger guarantee. In particular, it implies the same guarantee on *ex ante* expected distortion, and is considerably harder to satisfy. *This is our guarantee*. To technically phrase a guarantee on the *ex post* expected distortion one needs to condition on the realized publicly announced ordinal preferences, and such conditioning creates correlations. 2. Without conditioning on preference profiles, expected distortion is defined with respect to the joint distribution over utilities, and the requirement is to do well in expectation over the entire space of utility profiles. Preference profiles partition the space of utility profiles into regions, with each region containing utility profiles that induce the same preference profile. 
What we're asking is that the voting rule do well in expectation on each and every region of this space, which is a stronger requirement than doing well in expectation over the entire space. We feel strongly that our view of this issue of correlation is justified. If the arguments above are still unconvincing to you, we'd be thankful if you would allow us to further elaborate on points that are unclear.
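The conditioning described above, requiring low expected distortion on every region of utility space induced by an ordinal profile, can be illustrated with a small Monte Carlo sketch. The profile, the U(0,1) utility distribution, and the choice of plurality as the rule are all illustrative assumptions (the paper's binomial rule is not reproduced here); conditioning on a voter's ranking is implemented by assigning the order statistics of m i.i.d. draws in ranked order, which is exactly the distribution of i.i.d. utilities given that ranking.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed ordinal profile: profile[v] lists candidates best-to-worst.
profile = [[0, 1, 2], [0, 2, 1], [1, 0, 2], [0, 1, 2]]   # n = 4, m = 3
n, m = len(profile), len(profile[0])

def sample_consistent_utilities(profile, rng):
    # Conditioned on a voter's ranking, i.i.d. U(0,1) utilities are
    # distributed as the descending order statistics of m draws,
    # assigned to candidates in ranked order.
    U = np.empty((len(profile), m))
    for v, ranking in enumerate(profile):
        draws = np.sort(rng.uniform(0.0, 1.0, m))[::-1]
        U[v, ranking] = draws
    return U

def plurality_winner(profile):
    scores = np.zeros(m)
    for ranking in profile:
        scores[ranking[0]] += 1
    return int(np.argmax(scores))

# Monte Carlo estimate of the ex post expected distortion of plurality on
# this one region (profile): optimal welfare relative to the winner's.
winner = plurality_winner(profile)
ratios = []
for _ in range(20000):
    U = sample_consistent_utilities(profile, rng)
    sw = U.sum(axis=0)                 # social welfare of each candidate
    ratios.append(sw.max() / sw[winner])
expected_distortion = float(np.mean(ratios))   # >= 1 by construction
```

The worst-case guarantee discussed above corresponds to bounding this quantity over all profiles simultaneously, not just averaging over the unconditioned utility space.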
Summary: This paper studies the expected distortion of deterministic voting rules in a distributional setting proposed by Boutilier et al. in 2015. The setting assumes each voter has a random utility for each alternative drawn i.i.d. from a distribution $D$. Then for a given preference profile $\sigma$, the expected welfare of a voting rule is defined as the expectation of winner's total utility conditioned on utilities are consistent with the preference profile $\sigma$. The expected distortion of a voting rule $f$ for a fixed $\sigma$: $ddist(f, \sigma)$ is defined as the expected ratio between total utility of the winner determined by the voting rule and the total utility of the candidate with the highest utility. Note that although utilities are i.i.d. from a distribution, the preference profile can be arbitrary and considered in the worst case. The first contribution is the authors' demonstration that the majority voting rule results in optimal expected distortion in cases where only two candidates are present. This finding aligns with extensive literature suggesting that majority (or plurality) is the sole "reasonable" voting protocol when faced with a two-candidate scenario. The paper also explores the connections between expected social welfare and expected distortion. By some standard concentration bounds, the author prove that any voting rule that approximately maximizes expected social welfare also approximately achieves optimal distortion under mild conditions on the distribution. Finally, Boutilier et al. [2015] identifies the optimal voting rule in terms of expected welfare. However, this optimal voting rule requires the knowledge of the utility distribution. The main contribution of this paper is that they propose a class of positional voting rules called binomial voting that has good guarantees on expected welfare/distortion for a wide range of distributions. 
In particular, (generalized) binomial voting requires no (or limited) information about the distribution. Strengths: - The distributional model considered in the paper is reasonable. Instead of resorting to questionable random ranking assumptions such as impartial culture, this paper only makes distributional assumptions for utilities but not preferences. - The new definition of expected distortion under this model is well motivated and the authors provide diverse results under various assumptions on the distribution. - The binomial scoring rule proposed in the paper is novel. It is a nice result that a single voting rule achieves good expected welfare/distortion for a wide range of distributions. - This paper is very well-written. Weaknesses: - Though preferences are considered in the worst case, the assumption that all utilities are i.i.d. seems a bit strong. - The results for $m=2$ and connections between expected welfare and expected distortion are not very surprising and the techniques involved are fairly straightforward. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Do authors know any results if utilities are still independent but from different distributions for different candidates? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors addressed all limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer question:** > Do authors know any results if utilities are still independent but from different distributions for different candidates? **Response:** Relaxing the i.i.d. assumption across voters $i$ or across alternatives $j$ are both interesting directions for future work. We suspect that we may be able to relax the i.i.d. assumption across voters (allowing each voter to have their own distribution for all candidates), as we may be able to leverage similar arguments regarding order statistics. Relaxing the i.i.d. assumption across candidates is likely much more difficult, as the expected utilities of a given voter can no longer be described by order statistics. --- Rebuttal Comment 1.1: Title: Update Comment: Thanks for the response, I have no further questions.
Summary: This paper studies the distortion of voting rules, which is a measure of how well a voting rule performs with respect to optimal social welfare while having access to limited information about preferences. In particular, the focus is on expected distortion, where the underlying utility vectors are drawn from an arbitrary prior distribution and the goal is to design a voting rule that doesn't use any information about such a prior distribution, similar in spirit to the design of prior-independent auction mechanisms. The main approach is to first design an approximate expected welfare maximizing rule (EWMR) and then derive conditions under which an EWMR approximates an expected distortion maximizing rule (EDMR). When there are only two alternatives ($m=2$), it turns out that the majority rule is both an EDMR and an EWMR. However, this is not true for $m>2$, and the authors consider scoring-rule-based methods. The authors show that approving the top half is a $1/3$-approximate EWMR for all symmetric distributions. For asymmetric distributions, the authors propose a new scoring rule called the binomial voting rule, which is a $\nu/2$-approximate EDMR for any distribution with median $\nu$. Furthermore, it is also shown that the voting rule can be adapted to incorporate partial knowledge about the underlying distribution, e.g. various quantiles. Strengths: 1. I think the notion of prior-independent voting rules is quite interesting as the voting rules don't need to depend on the underlying distribution. Furthermore, the results show that prior-independent voting rules can achieve constant expected distortion. 2. The connection between EWMR and EDMR voting rules is non-trivial and might turn out to be an interesting approach to designing voting rules that aim to maximize welfare. 3. Finally, the binomial voting rule is quite interesting and I also liked the result showing that the voting rule can be adapted to incorporate additional information regarding the prior distribution. Weaknesses: 1.
The main weakness of the paper is that the utilities of the voters for the $m$ items are assumed to be i.i.d. from the prior distribution. In practice, the utilities of different alternatives are often correlated and this assumption excludes distributions like a Gaussian with a general covariance matrix. However, the authors have mentioned the limitations in the paper. 2. I am also a bit concerned about the guarantees provided by prior-independent voting rules. Repeated auctions are quite frequent in online platforms and providing bounds on performance in expectation makes sense. On the other hand, voting rules are used less frequently -- political elections happen every couple of years, and people's preferences change from one iteration to the next. In such a situation, providing instance-dependent bounds on distortion is more practical. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. If the assumption of symmetric distribution is violated, can you obtain $\alpha$-EWMR for some distribution independent constant $\alpha$? I understand that Theorem 5.2 provides a negative result but the distribution in the theorem needs to depend on $n$ and $m$. Is it still the case for asymmetric distributions independent of $n$ and $m$? 2. Since the distortion of the binomial voting rule depends on the median of the prior distribution, a natural question is why the median and why not some other statistic? In particular, if the median of the prior distribution is very small, should the distortion of any voting rule be small as well? Note that the mean can be large in this case as $E[X]$ can be as large as $\textrm{Median}(X) + \sqrt{\textrm{Var}(X)}$. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations of the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer comment:** > I am also a bit concerned about the guarantees provided by prior-independent voting rules. Repeated auctions are quite frequent in online platforms and providing bound on performance in expectation makes sense. On the other hand, voting rules are used less frequently -- political elections happen every couple of years, and people's preferences change from one iteration to the next. In such a situation, providing instance dependent bound on distortion is more practical. **Response:** If we understand correctly, your concern is not about *prior-independent* voting rules, as infrequent elections make it unlikely that much information about the distribution would be available, which strengthens the case for prior independence. Instead, your concern seems to be about the very idea of optimizing *expected* distortion, and by "instance-dependent bound" you seem to be referring to the worst case over utilities consistent with the given profile. We believe we can provide some useful perspective. First, to say the obvious, this worst-case bound could be extremely bad; in fact, without the unit-sum assumption, there exist preference profiles for which no bound exists. Second, even when given a preference profile where the worst-case bound is reasonable, we would still prefer to optimize expected distortion (assuming these two measures disagree on the outcome), as it is *intuitively* (albeit not formally) likely to lead to an outcome with higher social welfare. (Formalizing this intuition is an interesting problem!) **Reviewer question:** > If the assumption of symmetric distribution is violated, can you obtain $\alpha$-EWMR for some distribution independent constant $\alpha$? I understand that Theorem 5.2 provides a negative result but the distribution in the theorem needs to depend on $n$, and $m$. Is it still the case for asymmetric distributions independent of $n$, and $m$? 
**Response:** The distributions in the proof of Theorem 5.2 are both Bernoulli distributions that only depend on $\alpha$. Therefore, if we consider all asymmetric distributions independent of $n$ and $m$, both of these distributions would be included. The proof in the theorem is dependent on $n,m$ because it requires $m$ and $n$ to be sufficiently large. Therefore, Theorem 5.2 could be restated as "For any $\alpha$, there is no rule that is $\alpha$-EWMR for all distributions if we allow $n$ and $m$ to be arbitrarily large, even if we restrict to distributions that are independent of $n,m$". An interesting extension possibly related to your question is, if we restrict to $n,m < 100$ (or any constant), whether $\alpha$-EWMR is possible for some constant $\alpha$. The answer is yes (for example plurality will guarantee $\frac{1}{100}$-EWMR in this case), but characterizing the exact constant is left for future work. **Reviewer question:** > Since the distortion of the binomial voting rule depends on the median of the prior distribution, a natural question is why the median and why not some other statistic? In particular, if the median of the prior distribution is very small, should the distortion of any voting rule be small as well? Note that the mean can be large in this case as $E[X]$ can be as large as $\textrm{Median}(X) + \sqrt{\textrm{Var}(X)}$. **Response:** You are correct that the median is not inherently special. The binomial voting rule can be generalized to any quantile or combination of quantiles with the generalized binomial voting rule in Theorem 5.11. If the median of the prior distribution is very small, then using higher quantiles (75%, 95%, etc.) would give stronger guarantees with the generalized binomial voting rule. As you point out, medians are not necessarily representative of the mean or higher moments of the distribution. Our results use medians and quantiles because the EWMR calculates the order statistics of the underlying distribution. 
Moments/means do not provide much information about order statistics; however, quantiles can be used to lower-bound order statistics, as in Theorems 5.7 and 5.11. This naturally leads to voting rules that use quantiles, such as the generalized binomial rule. That being said, it is definitely possible that another rule could use information about moments of the underlying distribution to better approximate the order statistics. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Dear authors, thank you for your rebuttal and answering my questions. I have read the rebuttal and other reviews as well. Overall, I think this work presents an interesting take on distortion, and would vote to accept the paper.
Summary: The paper studies voting rules in the context of expected distortion and expected welfare. In more detail, one of the trending topics in social choice theory is the design of voting rules with low distortion. The underlying assumption for this is that voters have utilities over the alternatives, but only report ordinal preferences. Then, the distortion quantifies the loss of social welfare (= the sum of the voters' utilities) caused by not knowing the cardinal utilities. To this end, one typically investigates the utility profile (and induced preference profile), where the ratio between the social welfare of the optimal alternative and the alternative chosen by the voting rule is maximal. This is clearly a worst-case measure and allows, without additional assumptions, only for rather negative results. The paper at hand therefore analyzes expected distortion: given a preference profile, the authors assume that, for each voter, the utilities for the alternatives are drawn i.i.d. from some distribution, and then try to find alternatives that have in expectation a high social welfare (and therefore a good distortion). Clearly, if we have knowledge about the distribution, it is theoretically possible to compute the alternative that maximizes the expected welfare. The authors thus rely on a model where the prior is not known (and we are thus not in the Bayesian setting). In particular, while there is an underlying distribution, it is not known to the mechanism designer. In this setting, the authors then show that for 2 alternatives, the majority rule is the rule that optimizes the expected social welfare and the expected distortion for every underlying distribution. 
For more than 2 alternatives, the authors furthermore design a new rule called binomial voting (which is a scoring rule relying on the binomial coefficient for the score vector), which always chooses an alternative that has an expected welfare of at least v/2 times the expected welfare of the optimal alternative (where v is the median of the underlying, unknown distribution of utilities). For deriving this result, the authors establish a link between the maximization of expected social welfare and expected distortion. Finally, the authors also discuss several variants of their main result and the limitations of their approach. Strengths: The paper is well-written and, despite its technical complexity, I found it easy to follow the main ideas of the paper. Furthermore, the paper opens an interesting new direction for going beyond the worst-case distortion in voting by studying the expected distortion. Since the worst-case distortion is known to be prohibitive and the new model seems reasonable, I find the approach quite attractive. Moreover, the results are interesting as they give the first, clearly non-trivial bounds for this new setting. Finally, distortion in voting is a topic that has appeared at NeurIPS before. Weaknesses: On the negative side, it should be mentioned that many of the main claims of the paper cannot be verified based on the paper itself (as all proofs are deferred to the appendix). (I also did not read the appendix and can therefore not vouch for the correctness of the given results; the proof of Theorem 3.1, which is the only one that is (almost completely) presented in the paper seems correct). Furthermore, I find the comparison to the literature somewhat lacking. For instance, Ebadian et al. (Optimized Distortion and Proportional Fairness in Voting, 2022) suggest several voting rules with close to optimal worst-case distortion and it seems interesting or even necessary to reason why these rules are not good in the given setting. 
In particular, the harmonic rule by Boutilier et al. (Optimal social choice functions: A utilitarian view, 2015) is also a scoring rule and it thus seems like an interesting question whether it also has a good expected distortion. However, I have to acknowledge that the paper is already quite dense and that the discussion of these results might be too much. Feedback: 1) I am not a fan of the pun in the title. 2) I think it should be better motivated why the authors use the distribution to draw utilities. This assumption is crucial for the results, not the only way to define expected distortion, and I am not entirely sure about how convincing this model is. In particular, the assumption implies that the preference intensities of voters are (on average) similar, which may not capture reality. 3) Instead of Section 5.2 (which is certainly interesting, but rather dense and technical), one could think of introducing more detailed proof sketches or of discussing the expected distortion of known rules. In principle, I feel that this space is just not used in the most effective way (but the authors may disagree with me here; this is clearly subjective). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) Can the authors say something about the expected distortion of known rules? This seems necessary to get some intuition about how strong the results are. 2) Similarly, can the authors say more about upper bounds on expected distortion? Currently, Theorem 5.2 rules out the existence of voting rules with constant expected distortion, but can we, e.g., hope to find a voting rule which is v-EDMR (where v is the median of the distribution) rather than v/2-EDMR? 3) The importance of the results hinges on some implicit assumption. For instance, the main result of the authors is only appealing if the median of the underlying (but unknown) distribution is not too small. Can the authors comment on whether there is some evidence that this might be the case? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer question:** > Can the authors say something about the expected distortion of known rules? This seems necessary to get some intuition about how strong the results are. **Response:** Since expected distortion is defined with respect to a specific distribution and preference profile, we would need to know these parameters as well as the rule of interest. Given a specific distribution, preference profile, and voting rule, we can directly calculate or approximate the expected distortion using nested integrals or Monte Carlo sampling. That being said, our results do connect back to some known rules. For uniform distributions (or distributions close to uniform), Borda Count is the expected welfare maximizing rule and therefore a good approximation of the expected distortion maximizing rule. For the two-alternative case, Theorem 3.1 also shows that plurality maximizes expected distortion. For more than two alternatives, Theorem N.1 shows that plurality is an $\alpha$-EDMR rule for $\alpha = \max(\frac{1}{n}, \frac{1}{m})$. One reasonable (and interesting!) question for future work could be: what is the *worst* distribution and preference profile for a specific rule, and how does the expected distortion compare to that of the EDMR in this setting? Our guess would be that for most rules, there is some distribution/preference profile pair for which the rule performs very poorly relative to the EDMR. If so, this would imply that our decision of whether or not to use a certain voting rule (e.g. harmonic) should depend on our hypothesis of the underlying distribution. This makes a lot of sense -- we might want to use Borda Count when the underlying distribution is approximately uniform, but not when the underlying distribution is approximately Bernoulli. We could also choose to use a voting rule with distribution-independent guarantees, such as binomial voting. 
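The Monte Carlo approach mentioned in this response can be made concrete. The sketch below is an illustration by the editor, not code from the paper: `draw_utilities` implements the consistency process described in the submission (each voter's $m$ i.i.d. draws are sorted and assigned along that voter's ranking), and `expected_distortion` averages the welfare ratio over sampled utility profiles. The function names and signatures are hypothetical.

```python
import random

def draw_utilities(profile, dist, rng):
    """Draw a utility profile consistent with the ordinal profile.

    profile[i] lists alternatives from voter i's most to least preferred;
    dist(rng) returns one i.i.d. utility draw in [0, 1].
    """
    n, m = len(profile), len(profile[0])
    u = [[0.0] * m for _ in range(n)]
    for i, ranking in enumerate(profile):
        # Sort the voter's m i.i.d. draws from highest to lowest, then
        # assign them to alternatives in the order of the voter's ranking.
        draws = sorted((dist(rng) for _ in range(m)), reverse=True)
        for pos, alt in enumerate(ranking):
            u[i][alt] = draws[pos]
    return u

def expected_distortion(rule, profile, dist, samples=10000, seed=0):
    """Estimate E[ sw(f(sigma), u) / max_j sw(j, u) ] by Monte Carlo."""
    rng = random.Random(seed)
    winner = rule(profile)  # the rule sees only the ordinal profile
    m = len(profile[0])
    total = 0.0
    for _ in range(samples):
        u = draw_utilities(profile, dist, rng)
        sw = [sum(row[j] for row in u) for j in range(m)]
        total += sw[winner] / max(sw)
    return total / samples
```

As a sanity check, on a unanimous profile the top-ranked alternative is welfare-optimal in every sample, so the estimate is exactly 1 for any rule that selects it.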
**Reviewer question:** > Similarly, can the authors say more about upper bounds on expected distortion? Currently, Theorem 5.2 rules out the existence of voting rules with constant expected distortion, but can we, e.g., hope to find a voting rule which is v-EDMR (where v is the median of the distribution) rather than v/2-EDMR? **Response:** We found proving upper bounds (such as no rule can achieve $\nu$-EDMR) to be significantly harder than proving lower bounds. While we do not have a tight upper bound for the best $\alpha$-EDMR rule, the example in the proof of Theorem 5.5 can give some insight into a possible upper bound. In that proof, the two distributions (which happen to be both symmetric) have medians of $0.5$ and $1$, respectively. The proof shows that it is impossible to achieve a $\sqrt{1/3}$-EDMR for both of these distributions. Therefore, no rule can do better than a $1.14\nu$-EDMR where $\nu$ is the median. Clearly, this still does not rule out the possibility of a $\nu$-EDMR rule, but a more tailored example than the above would almost certainly give stronger results. **Reviewer question:** > The importance of the results hinges on some implicit assumption. For instance, the main result of the authors is only appealing if the median of the underlying (but unknown) distribution is not too small. Can the authors comment on whether there is some evidence that this might be the case? **Response:** While the binomial voting rule does rely on the median being not too small, the generalized binomial voting rule can give stronger results that depend on higher quantiles. If the median is too low in a specific application for the binomial rule to give a strong result, then using higher quantiles in the generalized binomial voting rule will improve the guarantees. Therefore, one solution is to use a high enough quantile in the generalized binomial rule to get a useful guarantee. 
If even the higher quantiles are too small to give interesting guarantees, then the distribution has mass concentrated at $0$. A distribution with mass mostly concentrated at $0$ does require a substantially different approach, but this case may be "easier" because most voters have utility $0$ for most alternatives. --- Rebuttal Comment 1.1: Title: Thank you! Comment: We would like to thank the authors for replying to our questions. We have no further comments or questions at this point.
Dataset source: NeurIPS_2023_submissions_huggingface (conference year: 2023)
Summary: This work introduces the notion of the expected distortion of voting rules. The setup involves n voters and m alternatives. Each voter $i$ has a utility $u_{ij}$ for alternative j, which induces a ranking $\sigma_i$ of the alternatives, where $j$ is ranked higher than $k$ in $\sigma_i$ only if $u_{ij} \geq u_{ik}$. A deterministic voting rule $f$ only has access to the ordinal preference profiles $\boldsymbol{\sigma}$ instead of the actual underlying utilities $\mathbf{u}$. In the standard worst-case analysis, the distortion of a voting rule is defined as the loss in social welfare incurred by this restriction to ordinal information, i.e.,$$\sup_{\boldsymbol{\sigma}} \sup_{\mathbf{u}: \mathbf{u} \triangleright \boldsymbol{\sigma}} \frac{\max_{j \in A} sw(j,\mathbf{u})}{sw(f(\boldsymbol{\sigma}),\mathbf{u})}.$$ This work considers a different setting. Given a preference profile $\boldsymbol{\sigma}$, a utility vector consistent with the preference profile $\mathbf{u}$ is drawn from some distribution $D$ as follows: each voter $i$ draws $m$ utilities i.i.d. from $D$, and these $m$ utilities are assigned from highest to lowest to the $m$ alternatives according to the order of $\sigma_i$. The paper then defines the distributional distortion of a voting rule $f$ for a given preference profile $\boldsymbol{\sigma}$ as a random variable $$ddis(f,\boldsymbol{\sigma}) = \frac{sw(f(\boldsymbol{\sigma}), \mathbf{u})}{\max_{j \in A} sw(j,\mathbf{u})}.$$ Subsequently, the expected distortion of $f$ for a given $\boldsymbol{\sigma}$ is the expectation of this ratio, taken over the random process described above. Similarly, it defines the distributional social welfare $dsw(f, \sigma) = sw(f(\boldsymbol{\sigma}), \mathbf{u})$ and the expected social welfare. Unlike the worst-case distortion, the benchmarks the paper considers are agnostic to the actual underlying utility profile: they are the best voting rules that know only the distribution $D$ from which the utilities are drawn. 
The best such rules are termed EDMR (expected-distortion-maximizing rule) and EWMR (expected-welfare-maximizing rule) for the expected distortion and expected social welfare objectives, respectively. Regarding the results, for the two-alternative case, the paper shows that the Majority rule is both EDMR and EWMR. For more than two alternatives, by restricting attention to distributions that are independent of $n$ and $m$, and for sufficiently large $n$ or sufficiently large $m$, the expected distortion of an EWMR is a $1-\epsilon$ approximation of the expected distortion of an EDMR. The paper then attempts to approximate the expected welfare, which in turn approximates the expected distortion of the EDMR under the stated conditions. To achieve this, they first present a negative result, demonstrating that it is impossible for a single voting rule to achieve any constant $\alpha$ approximation of the expected social welfare for all distributions supported on $[0,1]$. To overcome this negative result, the paper shows that for symmetric distributions, a scoring rule that assigns a score of 1 to alternatives ranked in the top half of positions and 0 to others, and outputs the alternative with the highest score, achieves an expected welfare of 1/3 of the welfare of the EWMR. It also shows that no voting rule can achieve a $\sqrt{1/3}$ approximation in this setting. Another way to overcome the aforementioned negative result is by assigning a score of $\sum_{\ell = k}^{m} {m\choose\ell}$ to an alternative ranked in position $k$. The paper demonstrates that this smoother version of the top-half score rule is a $v/2$ approximation to the EWMR, where $v$ is the largest median. Strengths: The paper is well organized. The high-level idea of moving beyond worst-case analysis for distortion is novel and of great interest. The observation of the connection between the optimal expected welfare and optimal expected distortion under different distributional assumptions is interesting. 
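The score vector described here (position $k$ gets $\sum_{\ell=k}^{m}\binom{m}{\ell}$) is easy to compute. The following small sketch of binomial voting is the editor's illustration of that description, not the authors' code:

```python
from math import comb

def binomial_scores(m):
    # Position k (1-indexed) gets s_k = sum_{l=k}^{m} C(m, l),
    # a smoothed version of the top-half approval score vector.
    return [sum(comb(m, l) for l in range(k, m + 1)) for k in range(1, m + 1)]

def binomial_voting(profile):
    # profile[i] lists alternatives from voter i's most to least preferred.
    m = len(profile[0])
    scores = binomial_scores(m)
    totals = [0] * m
    for ranking in profile:
        for pos, alt in enumerate(ranking):
            totals[alt] += scores[pos]
    return max(range(m), key=lambda j: totals[j])  # ties broken by lowest index
```

For $m = 3$ the score vector is $[7, 4, 1]$, so a first-place vote outweighs a second- and third-place vote combined, which illustrates how the rule smooths top-half approval rather than reproducing it.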
The distribution-independent adaptation of the scoring rule of Boutilier et al. [2015] is intuitive (in a good way) and clever. Weaknesses: I find the benchmark to be relatively weak, and the positive results are somewhat marginal. Additionally, there are some correctness issues in the proofs. To expand on the first point regarding the benchmarks, let us recall that the standard worst-case distortion measures the greatest difference between the social welfare of a voting rule that only has access to the ordinal preference and the optimal social welfare of the utilities that are consistent with the preference. This work relaxes the measure in two ways: First, the ratio is now measured in expectation, which aligns well with the idea of the paper. Second, the benchmarks are voting rules that are also agnostic to the underlying utilities but can have knowledge of the distributions from which the values are drawn (EDMR and EWMR). However, it is not clear to me why the second relaxation is needed or interesting, and no motivation is provided regarding the choice of such benchmarks. In fact, without the second relaxation, we can define the expected distortion of a voting rule for $D$ as follows: $$\inf_{\boldsymbol{\sigma}} \mathbb{E}\left[\frac{sw(f(\boldsymbol{\sigma}),\mathbf{u})}{\max_{j \in A} sw(j,\mathbf{u})}\right].$$ Results regarding the above evaluation seem to better capture the ''average-case analysis'' the paper refers to. To this end, it is unclear how well the EDMR (possibly distribution-dependent) performs with respect to the above benchmarks, which is also interesting and arguably a more natural question to tackle first before distribution-independent voting rules. Regarding the second point, please note that the positive results in Section 5 are all with respect to the expected social welfare considered by Boutilier et al. [2015]. 
The results presented in this work are stronger since they are distribution-independent, but they are weaker since the distributions are assumed to be bounded (supported on [0,1]). As far as I know, no such restriction is needed for Boutilier et al.'s results to hold. To translate these expected social welfare results into expected distortion results, one needs to further restrict distributions to the ones that are independent of both $n$ and $m$ and with sufficiently large $n$ or $m$, which weakens the positive results. Additionally, none of the bounds in Section 5 are tight. There are a few incorrect arguments in the proofs, the most important one I spotted being in the proof of Lemma F1. The first sentence states that by linearity of expectation, $\mathbb{E}[dsw(j, \sigma)] = \sum^n_{i = 1} \mathbb{E}[u_{ij}] = n\mu$ for all $j$, where $\mu$ is the mean of distribution $D$. The second equality seems to be incorrect. Consider a simple example with 2 alternatives and 2 agents with $D$ being Uniform$[0,1]$, and alternative $1$ is ranked second by every voter $i$. Then $u_{i1}$ will be the first-order statistic out of two draws for both agents, making $\mathbb{E}[u_{i1}] = 1/3$ instead of $\mu = 1/2$. This, in turn, makes the proof of F1 invalid. With that said, I believe the statement of Lemma F1 is correct. A possible alternative proof is as follows: since $\mathbb{E}[\sum_{j} dsw(j,\sigma)] = \mathbb{E}[\sum_{j}\sum_{i} u_{ij}] = mn\mu$, the maximum expected $dsw$ is always weakly higher than the average expected $dsw$, which equals $n\mu$. Similarly, in the proof of Theorem 4.2, the claim that the social welfare of any alternative $j$ can be represented as $dsw(j,\sigma) = \sum^n_{i=1} u_{ij}$, where $u_{ij}$ are $n$ i.i.d. random variables, is also incorrect since $j$ could be ranked in different positions for different voters $i$, and $u_{ij}$s are therefore only independent, not identically distributed. This issue is again not major, as Hoeffding’s inequality only requires the variables to be independent and bounded. 
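The reviewer's counterexample can be checked against the classical fact that the $j$-th smallest of $m$ i.i.d. Uniform$[0,1]$ draws has expectation $j/(m+1)$. The quick simulation below is the editor's own sketch (the function names are hypothetical), independent of the paper's code:

```python
import random

def uniform_order_stat_mean(j, m):
    # E[j-th smallest of m i.i.d. Uniform[0,1] draws] = j / (m + 1).
    return j / (m + 1)

def simulated_bottom_ranked_mean(m=2, samples=50000, seed=0):
    # Mean utility of an alternative ranked last by a voter:
    # the minimum of that voter's m i.i.d. uniform draws.
    rng = random.Random(seed)
    return sum(min(rng.random() for _ in range(m)) for _ in range(samples)) / samples
```

With $m = 2$ this gives $1/3$, not the distribution mean $1/2$, matching the counterexample to the first equality in the quoted proof of Lemma F1.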
However, these false claims hinder my confidence in the rest of the proofs that I did not check to the same level of detail. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Are there any results or intuition regarding how bad the worst expected distortion of the EDMR is for different $D$? For example, for the special case of two alternatives, what is the worst expected distortion of majority for different $D$? It seems like having all the utilities drawn i.i.d. from the same distribution is needed for a lot of the concentration bounds to go through. Do the results still hold if each voter $i$ draws the values for candidates i.i.d. from $D_i$? Regarding the assumption that $D$ is supported on $[0,1]$: is it true that for the results to go through one only needs the utilities to be bounded? Or does the maximum of the distribution indeed affect the guarantees? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The assumptions on distributions for the positive results to hold are rather restrictive. The results are not tight. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer comment:** > I find the benchmark to be relatively weak. [...] We can define the expected distortion of a voting rule for $D$ as follows: $$\inf_{\boldsymbol{\sigma}} \mathbb{E}\left[\frac{sw(f(\boldsymbol{\sigma}),\mathbf{u})}{\max_{j \in A} sw(j,\mathbf{u})}\right].$$ Results regarding the above evaluation seem to better capture the ''average-case analysis'' the paper refers to. To this end, it is unclear how well the EDMR (possibly distribution-dependent) performs with respect to the above benchmarks, which is also interesting and arguably a more natural question to tackle first before distribution-independent voting rules. **Response:** To paraphrase, you're asking why we aren't providing direct expected distortion (or expected welfare) bounds. This benchmark was indeed our starting point, but in the course of 9 months of working on this project (from September 2022 until May 2023) we shifted focus to approximating the EDMR and EWMR, where we could paint a much more complete picture. Importantly, our proofs can be adapted in order to restate our results in terms of direct bounds on expected distortion and welfare. For example, the following statements follow from our proofs: *Thm 4.2 restated:* For every $\epsilon > 0$, there exists $n_0$ such that if $n \ge n_0$, then the expected distortion of the EWMR is at least $1-\epsilon$. *Thm 5.7 restated:* Let $\mathcal{D}$ be a distribution supported on $[0,1]$ whose largest median is $\nu$. Then binomial voting achieves expected welfare of at least $\nu n/2$. Note that both statements are true for all $\sigma$, and therefore also hold when expected distortion is defined with an $\inf$ as in your comment. In light of your comment, we absolutely agree that more discussion of the benchmarks is needed. In our revision, we commit to adding 1-2 paragraphs about this to the paper, as well as including an appendix to elaborate on the technical connection between the benchmarks. 
We believe that such a revision would address your concern and would be well within the scope of a conference revision. **Reviewer comment:** > There are a few incorrect arguments in the proofs, the most important one I spotted being in the proof of Lemma F1. The first sentence states that by linearity of expectation, $\mathbb{E}[dsw(j, \sigma)] = \sum^n_{i = 1} \mathbb{E}[u_{ij}] = n\mu$ for all $j$, where $\mu$ is the mean of distribution $D$. The second equality seems to be incorrect. [...] Similarly, in the proof of Theorem 4.2, the claim that the social welfare of any alternative $j$ can be represented as $dsw(j,\sigma) = \sum^n_{i=1} u_{ij}$, where $u_{ij}$ are $n$ i.i.d. random variables, is also incorrect since $j$ could be ranked in different positions for different voters $i$, and $u_{ij}$s are therefore only independent, not identical. **Response:** Both issues are valid — thanks for catching them and kudos on reading Appendix F! The latter issue is a typo ("i.i.d." should be "independent"), but the former issue is an embarrassing, albeit easily fixable, mistake. The correct argument is actually commented out in our LaTeX file; it appears that in the last proofreading round, one of us introduced the mistake by attempting to slightly simplify the argument. The paper (including appendices) had been carefully proofread by multiple authors over several days before submission. We therefore hope that the minor mistake will be seen as a fluke that doesn't reflect on the soundness of our results. **Reviewer question:** > Are there any results or intuition regarding how bad the worst expected distortion of the EDMR is for different $D$? For example, for the special case of two alternatives, what's the worst expected distortion of majority for different $D$? **Response:** This question is partially addressed above. 
We do not know the answer to your specific question about two alternatives, but not for lack of trying — we spent a while looking at regular and MHR distributions in this context. **Reviewer question:** > It seems like having all the utilities drawn i.i.d. from the same distribution is needed for a lot of the concentration bounds to go through. Do the results still hold if voters draw the values for candidates i.i.d. from $D_i$? **Response:** Relaxing the i.i.d. assumption across voters $i$ or across alternatives $j$ are both interesting directions for future work. We suspect that we may be able to relax the i.i.d. assumption across voters (allowing each voter to have their own distribution for all candidates), as we may be able to leverage similar arguments regarding order statistics. Relaxing the i.i.d. assumption across candidates is likely much more difficult, as the expected utilities of a given voter can no longer be described by order statistics. **Reviewer question:** > Regarding the assumption that $D$ is supported on $[0,1]$: is it true that for the results to go through one only needs the utilities to be bounded? Or does the maximum of the distribution indeed affect the guarantees? **Response:** Bounded utilities are sufficient for the results to go through — we chose to focus on $[0,1]$ to make the proofs clearer mathematically; that is, the assumption is made purely for ease of exposition. The maximum of the distribution does not affect our guarantees, as expected distortion is invariant to scaling of the distribution. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have no further questions at this point.
Summary: This paper studies the concept of expected distortion for voting rules, where distortion measures the worst-case ratio between the maximum social welfare and the rule's welfare. Expected distortion considers the expectation over consistent utility profiles drawn from an underlying i.i.d. distribution. The paper shows majority is optimal for two alternatives. For more alternatives, expected welfare maximization approximates expected distortion maximization for large electorates or alternatives. A novel voting rule called binomial voting is proposed and shown to approximate expected welfare maximization in a distribution-independent manner. Its guarantee depends on the median of the distribution. Strengths: Provides an interesting perspective by analyzing distortion in an average-case Bayesian setting rather than worst-case. Establishes asymptotic equivalence between expected distortion and welfare maximization. Binomial voting is intuitive and has strong guarantees dependent on the median. Approaches expected welfare maximization. Solid theoretical analysis with proofs of key hardness and approximation results. Does a good job exploring the space. Weaknesses: Assumption of i.i.d. utilities may be unrealistic for some domains like politics. Correlated utilities more reflective of reality. The paper oversells itself. In the abstract it works for “all distributions”; in the intro, i.i.d. distributions; and then in the main body consistent utility profiles drawn where each value is drawn from some underlying i.i.d. distribution. I think I am in the minority on this, but I don’t love distortion as something to optimize. It assumes truthful voting, which is very dubious in positional voting schemes (like is suggested in the paper). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I got lost in the proof of Theorem 3.1. In particular, could not see why a coupling argument was necessary (or sufficient). 
I am not suggesting that the proof is wrong, rather (likely) unclear in its current condensed form (it could just be a me thing, but I am not a newbie to complex proofs). Could you help me out here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reviewer comment:** > The paper oversells itself. In the abstract it works for “all distributions”; in the intro, i.i.d. distributions; and then in the main body consistent utility profiles drawn where each value is drawn from some underlying i.i.d. distribution. **Response:** Thank you for letting us know this was unclear, and we are happy to augment the abstract with further details! The different wordings here are because we provide more specific details about the same model as the paper progresses, and not an attempt to oversell the paper. To clarify, the use of the term "all distributions" in the intro refers to the fact that the underlying distribution D (from which i.i.d samples are drawn) can be any distribution. In addition, consistent utility profiles are necessary when conditioning on a preference profile, which we believe is a harder and more interesting setting than taking an expectation over all preference profiles. This is because without conditioning on a preference profile, all alternatives have roughly the same social welfare (especially as the electorate grows large), which obviates the need for a good voting rule. If interested, we provide further discussion of this point in our response to reviewer gjAU. **Reviewer comment:** > I got lost in the proof of Theorem 3.1. In particular, could not see why a coupling argument was necessary (or sufficient). I am not suggesting that the proof is wrong, rather (likely) unclear in its current condensed form (it could just be a me thing, but I am not a newbie to complex proofs). Could you help me out here? **Response:** Here is a higher-level overview of how we used a coupling argument in Theorem 3.1. Please let us know if there are any parts that are still unclear! In Theorem 3.1, we want to show that the majority winner also has weakly higher expected distortion than the majority loser. 
To do this, it is sufficient to show that the expected distortion of alternative 1 when it is ranked first $k$ times is weakly lower than the expected distortion of alternative 1 when it is ranked first $k+1$ times. This is exactly the equation on page 5 following the sentence beginning with "Recall that...". Showing this is sufficient because, by definition, the majority winner is ranked first at least as many times as the majority loser. This brings us to the coupling argument itself. Define event $A$ as the event that alternative 1 is ranked first $k$ times, and define event $B$ as the event that alternative 1 is ranked first $k+1$ times. The only difference between these two events is that one voter prefers alternative 2 under event $A$ and prefers alternative 1 under event $B$. Therefore, because these two events only differ by a single voter, we can use a coupling argument to compare the expected distortion of alternative 1 under these two events. Assume WLOG that voter $n$ is the differing voter. Fixing the values of the utilities of the first $n-1$ voters (which are the same under both event $A$ and $B$), the only voter that changes the expected distortion is the $n$th voter. This allows us to use the law of total expectation to separate out the randomness of this $n$th voter from the randomness of the rest of the voters. Therefore, the coupling argument is that these first $n-1$ voters are coupled between the two events, with the only change coming from the $n$th voter. The proof concludes with the algebraic equations on the top of page 6, which show exactly the desired inequality between the expectation of the distortion under event $A$ versus event $B$. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for the thoughtful rebuttal.
Scaling Up Differentially Private LASSO Regularized Logistic Regression via Faster Frank-Wolfe Iterations
Accept (poster)
Summary: This paper studies how to equip differentially private (DP) Frank-Wolfe (FW) with the ability to cope with sparse data. The key insight is the use of proper priority-queue data structures for handling the FW subproblem. To this end, a Fibonacci queue is proposed for non-DP cases, and a big-step little-step sampler is designed for DP cases. The complexities of the proposed approaches have improved dependence on N and D. Numerical results also suggest that embracing sparsity speeds up practical performance on large datasets. Strengths: (+) The proposed data structures are useful for handling sparse data for DP-FW. They provide a much better computational complexity with improved N and D dependence. (+) Empirically, the smart use of these data structures leads to significantly faster performance that is up to 20x - 30x. Here I am comparing Alg. 2 + 4 with Alg. 2, since Alg. 2 is possibly a more reasonable benchmark if one hopes to advance FW with sparsity. (+) The numerical improvement tends to be more significant given a smaller $\epsilon$. This is helpful for settings with high requirements on privacy. Weaknesses: 1. At a high level, this work focuses on developing efficient data structures for solving the FW subproblem (in the DP setting). Comparison with other data structures, such as the locality-sensitive hashing ones of https://arxiv.org/abs/2111.15139, is missing. 2. Can the proposed method benefit other FW approaches, such as https://arxiv.org/abs/2110.04243? 3. The assertion in line 19 should be demonstrated more carefully. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your key points that we will answer below and incorporate into the revision. We hope they satisfy your concerns, please let us know if we can further clarify anything. We respectfully note that our method is more than $20-30\times$ faster, as Alg. 2 is one of our contributions in this work. The purpose of Table 2 was to perform an ablation experiment to show that both Alg. 2 and Alg. 4 are important individually and together to obtain our speedups, which at a real-world privacy value of $\epsilon = 0.1$ range from $20\times$ up to $2451\times$ faster. >as those locality sensitive hashing ones https://arxiv.org/abs/2111.15139, is missing. Thank you for this highly related work we were unaware of. We will include it in the revision and related work for comparison and a more complete scope of literature. 2111.15139 tackles a very different setting from our own: they are focused on $N > D$, and non-private regression. We reached out to the authors during the rebuttal period who confirmed two further issues in performing a comparison. First, quoting the authors, "I believe it is possible to state that our [2111.15139] paper focused on the setting where d= log N", due to the LSH algorithm's domain of effectiveness being in situations where $D << N$. Second, the authors confirmed that no code was written for the paper and it is a theoretical work. Indeed, the manuscript does not state which of multiple possible data structures should be used for the maximum inner product search. While we made our own attempt in the time available, it does not yet work, and indeed, there are no baselines to compare against to know what is an expected speedup. Additionally, the paper is focused on the non-private case, meaning we would need to invent a new DP maxIPS data structure to use in our desired DP scenario. We suspect this alone would constitute a whole new paper of work and results. 
Other important differences include that the paper does not address sparsity in $D$, and more completely, their big-O complexity is $O(D + D N^\rho)$ where $\rho \in (0, 1)$ is a factor dependent on the maxIPS structure's efficiency on the current data. This thus does not tackle the $O(D)$ iteration cost we are concerned with in sparse data scenarios. In addition, if $T$ is the number of iterations needed to converge, their work proves they need $O(T/c^{2})$ iterations, an increase based on a maxIPS-dependent constant $c \in (0, 1)$. The tradeoff between more iterations but faster per-iteration time would be another non-trivial factor. That said, the work is highly important in establishing a different approach to "queue maintenance" as we described in our work, by instead transforming the representation of the data, labels, and iterates to accommodate the maxIPS data structures. They also tackle the large-$N$ case, instead of the large and sparse $D$ case of our own. We will synthesize these points into our revised related work and framing of our contribution. >benefit other FW approaches Our approach should be able to benefit many other FW approaches, but we can't state that it will work for _all_ other FW solvers. For the cited example of 2110.04243, we see that it should be a fairly direct application of our method to integrate with their own, with a few extra derivations of sparse updates for the added momentum term. We will cite and add this discussion to our related work, thank you. > line 19 The reviewer's point is well taken. To avoid ambiguity we will revise Line 19 as a statement to ``all \textit{iterative} DP regression algorithms we are aware of''. We provide a table listing the training complexity of iterative private training procedures for high-dimensional regression. Please note that the statement can be refined to $\mathcal{O}(TND)$ for non-sparse-aware algorithms. 
| Method | Complexity | |:---:|:---:| | Frank-Wolfe Methods [1, 2, 3, 4] | $\mathcal{O}(TND)$ | | ADMM [5] | $\mathcal{O}(TNDM)$ | | Iterative Gradient Hard Thresholding Methods [4, 6, 7] | $\mathcal{O}(TND)$ | | Coordinate Descent [8] | $\mathcal{O}(TND)$ | | Mirror Descent [3] | $\mathcal{O}(TNDM)$ | Note that $M$ represents an iterative parameter which is greater than or equal to $1$. Methods with $M$ have a double-for loop and thus have two iterative parameters ($T$ and $M$). We thank the reviewer for this comment and will insert this table into the final version of our paper. [1] Talwar, Kunal, Abhradeep Guha Thakurta, and Li Zhang. "Nearly optimal private lasso." Advances in Neural Information Processing Systems 28 (2015). [2] Bassily, Raef, Cristóbal Guzmán, and Anupama Nandi. "Non-euclidean differentially private stochastic convex optimization." Conference on Learning Theory. PMLR, 2021. [3] Asi, Hilal, et al. "Private stochastic convex optimization: Optimal rates in l1 geometry." International Conference on Machine Learning. PMLR, 2021. [4] Hu, Lijie, et al. "High dimensional differentially private stochastic optimization with heavy-tailed data." Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems. 2022. [5] Wang, Puyu, and Hai Zhang. "Differential privacy for sparse classification learning." Neurocomputing 375 (2020): 91-101. [6] Wang, Lingxiao, and Quanquan Gu. "Differentially private iterative gradient hard thresholding for sparse learning." 28th International Joint Conference on Artificial Intelligence. 2019. [7] Wang, Lingxiao, and Quanquan Gu. "A knowledge transfer framework for differentially private sparse learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020. [8] Mangold, Paul, et al. "High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. 
--- Rebuttal Comment 1.1: Comment: Thank you for the response. It is a nice contribution and my score remains the same. --- Reply to Comment 1.1.1: Comment: We are glad we could satisfy the reviewer's questions and for your support of the paper. Please do not hesitate to let us know of any further questions we can clarify.
Summary: The paper presents an approach to train differentially private regression models on sparse input data. It leverages a modified version of the Frank-Wolfe algorithm to reduce the training time significantly by making the algorithm sensitive to sparse inputs. The algorithmic complexity is improved from the standard linear complexity to sub-linear. The authors have used their proposed method on multiple high-dimensional datasets, improving the accuracy by 26.3% compared to previous methods. Strengths: The paper is eloquently written, with algorithms clearly delineated and accompanied by ample commentary. The proposed algorithm showcases its versatility across various scenarios, offering a remarkable improvement in performance. Weaknesses: The paper does not provide proven error bounds, and all theoretical results appear to revolve around the concept of speed optimization. It remains unclear whether this speed enhancement comes at the expense of accuracy. Additionally, the lack of released code poses a challenge to reproducibility. Privacy accounting is implicit and lacking; the authors didn't directly explain how their algorithm satisfies the DP condition (advanced composition?). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you please clarify how the proposed algorithm ensures compliance with the Differential Privacy (DP) condition? Are there any alternative privacy accounting methods that may potentially enhance the overall performance? For example, better privacy accounting [1], using GDP/RDP [2] or tighter composition theorems [3]. The authors suggest that their implementation requires fewer FLOPs compared to the standard Frank-Wolfe approach. Does this reduction in FLOPs translate directly to a commensurate speed increase, or is computational efficiency still constrained by factors such as RAM and cache access? Could you elaborate on the current computational bottlenecks? 
The experimentation appears to have been conducted using a single core. Would there be any potential benefits, such as efficiency or performance improvements, if this method were to be adapted for multicore processing? Can this method be extended beyond LASSO Regularized Logistic Regression? If not, what's the main challenge? [1]: Altschuler, J., & Talwar, K. (2022). Privacy of noisy stochastic gradient descent: More iterations without more privacy loss. Advances in Neural Information Processing Systems, 35, 3788-3800. [2] Liu, Y., Sun, K., Jiang, B., & Kong, L. (2022). Identification, amplification and measurement: A bridge to gaussian differential privacy. Advances in Neural Information Processing Systems, 35, 11410-11422. [3] Kairouz, P., Oh, S., & Viswanath, P. (2015, June). The composition theorem for differential privacy. In International conference on machine learning (pp. 1376-1385). PMLR. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Could you please clarify how the proposed algorithm ensures compliance with the Differential Privacy (DP) condition? The original Alg. 1 has already been proven to be DP. Our work makes no changes that alter what is computed, and only avoids performing redundant calculations (i.e., multiplication by 0 gets 0), and thus all proofs are still applicable. The reviewer's questions helped us realize this is not sufficient; see the below proof for additional confidence: Here we prove that our algorithm is $(\epsilon, \delta)$-DP. First, note that the sensitivity of each update step is $\frac{L\lambda}{n}$, where $L$ is the Lipschitz constant of the loss function with respect to the $L_1$ norm and $\lambda$ is the scaling factor of the $L_1$ ball to achieve the constraint region $\mathcal{C}$. This is done using Lemma 2.6 from [1], in which we directly bound $$\lvert \langle s, \nabla \mathcal{L}(\mathbf{w}; D) \rangle - \langle s, \nabla \mathcal{L}(\mathbf{w}; D') \rangle \rvert$$ where $s$ is any vertex of $\mathcal{C}$. Now that we know the sensitivity, we can use the advanced composition theorem for pure differential privacy to find that $\epsilon = 2\epsilon'\sqrt{2T \log (1/\delta)}$. Rearranging, $\epsilon' = \frac{\epsilon}{\sqrt{8T \log (1/\delta)}}$. Thus, composing $T$ exponential mechanisms with privacy $\epsilon'$ produces a final result which is $(\epsilon, \delta)$-DP. In our algorithm, we use the Laplace distribution to implement the report-noisy-maximum version of the exponential mechanism at every iteration [2]. Thus our algorithm is $(\epsilon, \delta)$-DP. [1] Shalev-Shwartz, Shai. "Online learning and online convex optimization." Foundations and Trends® in Machine Learning 4.2 (2012): 107-194. [2] Bhaskar, Raghav, et al. "Discovering frequent patterns in sensitive data." Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. 2010. >Are there any alternative privacy... 
Our algorithm works by composing a number of $(\epsilon', 0)$-DP steps to produce an $(\epsilon, \delta)$-DP algorithm with advanced composition, where $\epsilon'$ is chosen appropriately [4]. Composing $(\epsilon', 0)$-DP steps with the advanced composition theorem is tight [5]. The works provided in this review address composition under different settings. [1] describes the setting in which privacy parameters are to be computed after training with Gaussian noise. This is common in DP deep learning systems. In our work, we set $(\epsilon, \delta)$ and $T$ prior to training, so accounting is not necessary. [2] describes Gaussian/Rényi differential privacy, which provides a tighter privacy composition when composing multiple $(\epsilon, \delta)$-DP algorithms. In our case, we compose multiple $(\epsilon, 0)$-DP algorithms, in which case the advanced composition theorem is tight. Finally, [3] describes a tight composition theorem for $(\epsilon, \delta)$-DP. Since we are not composing $(\epsilon, \delta)$-DP steps, this is not necessary. We will ensure to cite these papers in our work and explain why using advanced composition to compose multiple $(\epsilon, 0)$-DP steps is tight. [4] Bhaskar, Raghav, et al. "Discovering frequent patterns in sensitive data." Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. 2010. [5] Near, Joseph P., and Chiké Abuah. "Programming Differential Privacy." (2021). >Does this reduction in FLOPs translate directly The reviewer is correct that it is not a one-to-one translation of FLOPs reduction to speedup; we will make this more explicit in the revision. For the non-private case, _we obtain no speedups_, as the Fibonacci heap is highly cache inefficient. The numerous cache misses cause the program to hit the canonical "memory wall", meaning our throughput is completely IO bound on main-memory access. This is true of the standard FW in Alg. 1, and so it has almost identical runtime. 
This was mentioned on line 372. We are not concerned about the lack of speedup in the non-private case, because much faster algorithms than FW already exist in this scenario (e.g., Liblinear is several hundred times faster in our testing). In the DP case, Alg. 1 is always compute bound, and our Alg. 2 + 4 combination vacillates between compute and memory bound on a per-iteration basis due to fairly atypical memory access patterns. Different dimensions have different sparsity patterns, resulting in cache misses and different ratios of compute-to-memory accesses. This is the cause of the higher speedups at lower $\epsilon$ we mentioned: at high privacy, highly non-informative, sparser features are selected more often, which avoids the compute bound. > ...if this method were to be adapted for multicore processing? Not directly, as many parts are memory bound and multiple threads polling memory would only increase this issue, while also adding synchronization overhead. Other prior works on parallelizing FW might be adaptable, but because we do less work, this is unclear as the ratio of work-per-thread will change dramatically. > Can this method be extended beyond [LR]? Yes, this method should work for any FW-optimizable objective (LR, Linear Regression, Hinge-loss SVM, Gambler's loss, etc.). --- Rebuttal Comment 1.1: Comment: Thank you for providing clarifications. I find that this paper presents a good contribution to regression methods under Differential Privacy. I have decided to raise my score accordingly. --- Reply to Comment 1.1.1: Comment: We are very glad we could satisfy your questions and appreciate the raised score! Please note that an error appears to have occurred with OpenReview and we can only see your original review/score apparently. If the AC could confirm that they see the correct version we would appreciate it. Thank you for your time and valuable feedback!
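[Editor's note] The per-step budget $\epsilon' = \epsilon/\sqrt{8T \log (1/\delta)}$ and the Laplace-based report-noisy-max step discussed in the rebuttal thread above can be sketched as follows. This is an illustrative toy with our own function names, not the authors' implementation; the Laplace sampler uses the standard inverse-CDF construction.

```python
import math
import random

def per_step_epsilon(eps, delta, T):
    """Per-iteration budget from the advanced-composition bound in the
    rebuttal: eps' = eps / sqrt(8 * T * log(1/delta))."""
    return eps / math.sqrt(8 * T * math.log(1.0 / delta))

def laplace(scale, rng):
    """Draw one Laplace(0, scale) sample via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def report_noisy_max(scores, sensitivity, eps_step, rng):
    """Report-noisy-max selection: add Laplace(2 * sensitivity / eps_step)
    noise to each score and release only the argmax index."""
    scale = 2.0 * sensitivity / eps_step
    noisy = [s + laplace(scale, rng) for s in scores]
    return max(range(len(scores)), key=noisy.__getitem__)
```

With a very generous per-step budget the noise is negligible and the true argmax is returned; at small budgets the selection becomes increasingly random, which matches the behavior the rebuttal describes at high privacy.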
Summary: This paper proposes new private variants of the Frank-Wolfe algorithm that takes advantage of the sparsity in the data. The proposed methods are shown to significantly speed up Frank-Wolfe algorithm in a multitude of tasks while still achieving similar accuracy. Strengths: - The authors make an effort to explain a lot of what is going on in their algorithm. - Since the result of the paper is about reducing computational complexity, I appreciate that the author denotes the complexity of every operation in their algorithms. - The proposed method seems to do well in practice. The speed-up is quite significant. Weaknesses: - Even though I appreciate the effort the author put in to explain their algorithms, I still feel like the explanation is very confusing. The paper would really benefit if the author can add a bit more discussion in the appendix to make things a bit clearer. - Some of the technical terms are used in the paper before being defined properly. For example, the term FLOPs is used in line 187 but only is defined in line 319. Or the Fibonacci Heap is used without even having a few sentences defining what it is. Again, I think the author should consider having more discussions in the appendix. - Section 3.3 presents a private algorithm without having a Theorem showing that the algorithm is $\epsilon$-DP. - Figure 1 and 2 are very confusing. For example, in the caption of Figure 1, it says that Algorithm 2 is the dotted line but every line is a dotted line. The legend of the chart also does not have either Algorithm 2 or Algorithm 1. - The authors use the phrase "handle something sparsely" often but it is a bit unclear what it means. Maybe explain it in the beginning or say something like "allow us to handle the update in 1 dimension"? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - In Algorithm 1, what do we need $g_t$ for? It isn't used anywhere else in the algorithm. 
Based on the experiment, seems like the authors want to output $g_t$ as the convergence gap, then the algorithm should output both $w_T$ and $g_t$? - I'm a bit confused about the statement in line 139. Isn't $\bar v_t^{(i)} = \sum_{j=1}^dX^{i,j}w_t^{j}$. Thus, when 1 single coordinate of $w_t$ changes, every $\bar v_t^{(i)}$ will also change? - Can the author explain how using $\gamma$ to update allow us to not use $\bar y$ anymore? - What are $\textbf{c}$ and $z_\Sigma$ in Algorithm 4? Those 2 are used without being defined. - What is the sparsity of the data used in the experiments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We believe, if accepted for the conference, the historical extra camera-ready page will greatly improve the readability of the manuscript and allow us to have more detailed exposition. Please see below for the answers to your questions, which we will incorporate into the revision. > what do we need $g_t$ for? $g_t$ is a measure of the convergence gap. Once $g_t = 0$, the algorithm has completely converged at the function minima. Use of $g_t$ is desirable in non-DP settings to confirm that a sufficiently good quality solution has been reached, and to confirm that the implementation is indeed converging. As Figure 1 shows, we converge to the solution at the same rate as the original dense Frank-Wolfe formulation. In the DP case, $g_t$ becomes noisy and is not as useful. >line 139 We must be precise about the sparsity patterns. Every $\bar v_t^{(i)}$ **where** $X^{i,j} \neq 0$ will change. Because most values are equal to 0, the majority of $\bar v_t^{(i)}$ will not change. Otherwise, the reviewer is correct in the definition. To restate, only one value of $j$ in $\boldsymbol{w}^{(j)}$ will change each iteration (the others are handled by our scaling factor). In most high-dimensional regression problems (like all the ones we have used), most coordinates $j$ are unused by most rows $i$, and so the update of $\boldsymbol{\bar v}$ can be done in a sparse fashion. >how using $\gamma$ to update allow us to not use $\bar y$ anymore Note that in Algorithm 1, the values of $\boldsymbol{\bar y}$ are fixed during each iteration, and do not change. $\boldsymbol{\bar y}$ only impacts the algorithm by offsetting the values stored in $\boldsymbol{\bar \alpha}$. Because Alg. 2 updates $\boldsymbol{\bar \alpha}$ sparsely by the amount of change that occurs, and the value of $\boldsymbol{\bar y}$ will never change, we can ignore the $\boldsymbol{\bar y}$ factor after the initialization of line 12 in Alg. 2. 
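[Editor's note] The sparse update of $\boldsymbol{\bar v}$ described in the "line 139" answer above can be illustrated with a toy sketch. The names and data layout here are our own (a dense list plus a per-column dictionary of nonzeros), not the paper's code.

```python
def sparse_margin_update(v_bar, col_j_nonzeros, delta_wj):
    """Apply v_bar[i] += delta_wj * X[i, j], touching only rows with X[i, j] != 0.

    v_bar          : dense list of per-example values (the rebuttal's v-bar)
    col_j_nonzeros : dict {row i: X[i, j]} holding column j's nonzero entries
    delta_wj       : the change applied to coordinate j of w this iteration

    Cost is O(nnz(column j)) rather than O(N); rows where X[i, j] == 0
    are never visited, exactly as described above.
    """
    for i, x_ij in col_j_nonzeros.items():
        v_bar[i] += delta_wj * x_ij
    return v_bar
```

On the highly sparse datasets listed later in this thread, the dictionary holds far fewer entries than there are rows, which is where the per-iteration savings come from.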
>What are $\boldsymbol c$ and $z_\Sigma$ in Algorithm 4? $\boldsymbol c$ was defined on lines 280-281. There are $\lfloor \sqrt{D} \rfloor$ groups of variables in Alg. 4, and so $\boldsymbol c$ is a vector of size $\lfloor \sqrt{D} \rfloor$. The $j$'th value $\boldsymbol{c}^{(j)}$ contains the cumulative weight of all variables contained in the $j$'th group. It is used to skip the $j$'th group if its cumulative weight is smaller than the current step size, allowing us to perform a "Big Step". $z_\Sigma$ was defined on line 282. It is the cumulative weight of all variables being sampled from (i.e., $z_\Sigma = \mathop{LogSumExp}( \boldsymbol{c}) = \log \left( \sum_{j=1}^{\lfloor \sqrt{D} \rfloor} \exp\left(\boldsymbol{c}^{(j)}\right) \right)$). It is necessary as the normalizing constant so that we ensure our sample is a proper weighted uniform random sample. Please note Alg. 4 uses $N$ instead of $D$ to match the notation of the original A-ExpJ paper, since the page limit of NeurIPS prevents us from a more thorough explanation in text. >What is the sparsity of the data used in the experiments? These datasets are highly sparse; see the below table for the % of non-zero values. | Dataset | % of non-zero values | |---|---| | RCV1 | 1.5% | | News20 | 0.03% | | URL | 0.004% | | Web | 0.022% | | KDDA | 0.00018% | >add a bit more discussion in the appendix We will add a brief explanation and intro in the appendix for each major section to ensure the accessibility of the work. > used in the paper before being defined We scoped our assumption of background too narrowly, and we will go through the paper again to ensure this does not remain in the revised manuscript. FLOPs: *F*loating-*P*oint *Op*erations, normally taken to be the number of multiplications, divisions, and other more expensive functional primitives (e.g., computing the $\exp$ is a single instruction in modern hardware). 
Counting the FLOPs provides a standardized and hardware-independent way of quantifying the amount of expensive computation being performed, as floating-point operations are generally many times more expensive than other integer operations. Fibonacci Heap: A classic heap data structure for inserting, finding-the-minimum, and removing-the-minimum value from the data structure. The Fibonacci heap is relatively unique in supporting a decrease-key function that allows altering the value of an item already in the heap. All operations on a Fibonacci heap can be performed in amortized $O(1)$ time, with the exception of removal, which takes $O(\log n)$ time. Using the decrease-key operation allows us to devise the queue maintenance strategy of Algorithm 3. >a private algorithm without having a Theorem Please see our reply to reviewer LVpw, which provides a proof. Our method is mathematically equivalent to the original Alg. 1, and so all proofs of Alg. 1 still apply to Alg. 2 + 4, but reviewer feedback has made it clear that making this more explicit would aid in understanding the equivalence and validity. >Figure 1 and 2 are very confusing Please see the 1-page attachment that adds new versions of these figures; we hope it clarifies your concern. > handle something sparsely We will revise the manuscript to make explicit what we mean by this: if the original data $X^{i, j}$ has a value equal to zero, then as few operations as possible should occur involving the $i$ and $j$ values, as the zero values will have no impact on the solution. We do not prove a minimum number of operations, but show for the first time that it is sub-$O(D)$ via the existence of our method. --- Rebuttal Comment 1.1: Comment: Thanks for the response! My questions are pretty well-addressed. I will raise the score to 6. --- Reply to Comment 1.1.1: Comment: We are glad we are able to satisfy your questions, and very appreciative of the score raise! 
Please let us know if there is anything else that comes to mind.
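As a concrete aside for readers, the decrease-key interface of the Fibonacci heap mentioned in the rebuttal above can be mimicked in a few lines with an ordinary binary heap and lazy invalidation. This is a sketch of the interface only, not the paper's Algorithm 3, and its amortized costs differ from a true Fibonacci heap (stale entries cost extra pops):

```python
import heapq

class LazyHeap:
    """A stand-in (not a Fibonacci heap) supporting decrease-key via
    lazy invalidation: superseded entries stay in the heap and are
    skipped when popped."""
    def __init__(self):
        self._heap = []   # (key, item) pairs, possibly stale
        self._best = {}   # item -> current key

    def push(self, item, key):
        self._best[item] = key
        heapq.heappush(self._heap, (key, item))

    def decrease_key(self, item, key):
        # Only record strictly smaller keys; the old entry goes stale.
        if key < self._best[item]:
            self._best[item] = key
            heapq.heappush(self._heap, (key, item))

    def pop_min(self):
        while self._heap:
            key, item = heapq.heappop(self._heap)
            if self._best.get(item) == key:  # skip stale entries
                del self._best[item]
                return item, key
        raise IndexError("empty heap")
```

A true Fibonacci heap makes decrease-key amortized $O(1)$ without leaving stale entries behind, which is what makes the queue-maintenance strategy efficient; the sketch above only shows the interface shape.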
Summary: This paper studies the DP regression problem when the data is $\ell_1$-sparse and aims to improve computational efficiency. Specifically, they consider the LASSO-regularized logistic regression model and a sparse-aware Frank-Wolfe algorithm. The proposed method computes mathematically equivalent results to the original algorithms but is computationally much more efficient for large sparse datasets. The experiments confirm that the proposed methods are significantly faster than existing methods. Strengths: 1. This paper provides a practical tool for DP regression on large sparse datasets. Prior works on sparse regression are either purely theoretical or non-scalable. As far as I know, this is the first paper that focuses on computational efficiency. 2. This paper is well-written and clearly states its contribution. Weaknesses: This paper does not discuss any utility guarantees in terms of their theoretical dependence on $d, \lambda, L, \varepsilon, n$. I am uncertain about the performance of sparse-aware DP Frank-Wolfe on this problem. Previous work [1] is known to be near-optimal in terms of utility. If the proposed method is sub-optimal, accelerating a sub-optimal algorithm may be of less interest. [1] Cai, T.T., Wang, Y., and Zhang, L. (2021). The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. The Annals of Statistics, 49(5), 2825-2850. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper has not addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We should have made the DP bounds more explicit; thank you for bringing this to our attention. Please note that our algorithm's utility is near-optimal. We provide a proof sketch below, which follows that of [1]. Let $L$ be defined as in the paper: the Lipschitz constant of the loss function with respect to the $L_1$ norm. Let $S$ be the set of vertices of the constraint region, which we denote $\mathcal{C}$. Let $\lambda$ be the scaling factor of the $L_1$ ball to achieve the constraint region $\mathcal{C}$. Then for a loss with upper bound $\Gamma_{\mathcal{L}}$ on the curvature constant as defined in [2], running Algorithm 2 for $T = \frac{\Gamma_{\mathcal{L}}^{2/3}(n\epsilon)^{2/3}}{(L\lambda)^{2/3}}$ iterations, we have $$\mathbb{E}[\mathcal{L}(\mathbf{w}^{priv}; D)] - \min_{\mathbf{w} \in \mathcal{C}} \mathcal{L}(\mathbf{w}; D) = \mathcal{O} \left( \frac{\Gamma_{\mathcal{L}}^{1/3}(L\lambda)^{2/3}\log(n \lvert S \rvert) \sqrt{\log (1/\delta)}}{(n\epsilon)^{2/3}} \right).$$ To prove this statement, we use Lemma 5 and Theorem 1 from [2]. Lemma 5 states that at each step of the Frank-Wolfe algorithm, an inexact gradient can be used so long as its score is within $\kappa$ of that of the true minimum vertex. [1] computed the value of $\kappa$ and showed that with probability $1 - \xi$, this holds over all steps. Theorem 1 in [2] then states that if Lemma 5 holds, an exact bound on the overall loss holds. Thus it follows that with probability $1 - \xi$, this statement holds. Finally, [1] used standard learning theory arguments to convert the bound in probability to one in expectation. Plugging in the desired value of $T$ finishes the proof. Note that this proof is not affected by our sparse-aware framework. Indeed, the algorithm presented in [1] is simply Algorithm 2 in our paper with dense calculations. (Lines 7-13 of our Algorithm 2 calculate scores for the exponential mechanism, like line 3 in Algorithm 2 of [1]. 
Line 15 in our algorithm corresponds to line 4 of Algorithm 2 in [1]. Lines 16-21 of our algorithm correspond to line 5 of Algorithm 2 in [1].) For this reason, at every iteration our updates are equivalent to those of [1]. Note that $\Gamma_{\mathcal{L}}$ can be upper bounded for logistic regression; see [3] for details. Finally, using a fingerprinting codes argument, Theorem 3.1 of [1] showed that under weak conditions, an optimal DP-learning algorithm $\mathcal{A}$ has $$\mathbb{E}[\mathcal{L}(\mathcal{A}(D); D) - \min_{\mathbf{w} \in \mathcal{C}} \mathcal{L}(\mathbf{w}; D)] = \widetilde{\Omega} \left( \frac{1}{n^{2/3}} \right).$$ For this reason, the utility bound provided above is nearly optimal. [1] Talwar, Kunal, Abhradeep Guha Thakurta, and Li Zhang. "Nearly optimal private lasso." Advances in Neural Information Processing Systems 28 (2015). [2] Jaggi, Martin. "Revisiting Frank-Wolfe: Projection-free sparse convex optimization." International Conference on Machine Learning. PMLR, 2013. [3] Khanna, Amol, Fred Lu, and Edward Raff. "Sparse Private LASSO Logistic Regression." arXiv preprint arXiv:2304.12429 (2023). --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. It would be great if you could add such a discussion to the revision. --- Reply to Comment 1.1.1: Comment: We will absolutely be including this discussion in the revision. Your feedback and that of the other reviewers have helped us realize the DP proofs needed further elaboration beyond what our original manuscript presented. We believe the final version will be a much stronger article because of it. We hope this has satisfied all of your concerns; please let us know if there are any outstanding questions.
Rebuttal 1: Rebuttal: Per the rebuttal instructions, we could include one page of a PDF file with any new plots or figures. To address Reviewer Fgow's concern, we have new versions of Figure 1 and Figure 2 in the attached PDF, displayed as Figures 4 & 5 in the PDF file. Figure 4 replaces Figure 1, and we plot both the normal FW (Alg. 1) and our improvement in the non-DP case (Alg. 2 + 3) side by side, showing that they are identical, which is the desired outcome. We have changed none of the underlying mathematics of the FW algorithm, and so we converge to the same solutions (barring floating-point differences). Figure 5 replaces Figure 2. We plot, with iteration $T$ on the x-axis, how many times fewer cumulative FLOPs our method performs. At $T=1$ we perform essentially the same number of FLOPs due to initial setup, but each subsequent iteration performs sparse updates that need far fewer FLOPs, resulting in a multiple-orders-of-magnitude reduction in FLOPs as the number of iterations increases. Pdf: /pdf/fbbc2cd3e814cdbb314632483e5f1adc750bddd2.pdf
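The claimed pattern in the rebuttal above (near-parity at $T=1$, with the cumulative-FLOP ratio growing over iterations) follows from simple arithmetic, sketched below with made-up costs (these numbers are purely illustrative, not the paper's measurements):

```python
def cumulative_flops(setup, per_iter, T):
    # Total FLOPs after T iterations: a one-time setup cost plus a
    # constant cost per iteration.
    return setup + per_iter * T

# Hypothetical costs: both methods pay the same setup, but the
# sparse-aware iterations touch only non-zero entries.
setup = 10_000_000
dense_iter, sparse_iter = 1_000_000, 1_000

ratios = [cumulative_flops(setup, dense_iter, T) /
          cumulative_flops(setup, sparse_iter, T)
          for T in (1, 10, 100)]
# The ratio starts near 1 (setup dominates) and grows toward
# dense_iter / sparse_iter as iterations accumulate.
```

With these illustrative numbers the ratio rises from roughly 1.1x at $T=1$ toward 1000x in the limit, matching the qualitative shape described for Figure 5.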
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Topological Parallax: A Geometric Specification for Deep Perception Models
Accept (spotlight)
Summary: The paper presents a framework for analyzing the validity of deep classification models by comparing their multi-scale geometric features with those of the training dataset. The comparison is conducted through tools of topological data analysis, mainly using persistence diagrams to detect and characterize nontrivial topological features (connected components, 1-cycles, 2-cycles, etc.) as they are revealed by changing the radius of data points to construct the Rips complex. The main challenge is to conduct such analysis on the implicit space of the deep classification model; the paper addresses this challenge by devising distance-estimation algorithms for the implicit space, and comparing it with the ambient Euclidean distance to decide the matching of geometric features. The paper shows an application of the analysis on a single dataset and accompanying simple classification models. Strengths: The paper asks the important question of how to decide if a deep perception model faithfully captures the true characteristics of a dataset. It proposes to check if the learned implicit space of the perception model has geometric features that closely match the dataset distribution. The paper develops a theoretical framework to characterize the geometric matching of model and data. The important notion is to construct a parallax complex, obtained by the co-filtration of the Rips complex by metrics of both the dataset and the learned model. Based on this construction, the comparison of model and data becomes homological computations on the series of complexes, which draws on topological data analysis extensively. The paper discusses applications through a concrete example, and the possibility of using the assessment as an objective function for training more accurate networks. Weaknesses: The paper is theoretical in nature. 
To ease understanding, it would be better to provide tables summarizing the notation introduced and to use more figures to motivate and illustrate the ideas. I find this can be helpful particularly for the perturbation lemmas and local simplicial matching. Drawing more connections to TDA can be helpful too, to motivate the constructions introduced in this paper. After developing the theoretical framework, it is desirable to immediately position Sec. 6 in context, maybe through the example of Sec. 8 or other cases. More application examples could be used. The cyclo-octane dataset has a very unintuitive structure. If more intuitive datasets and perception tasks can be tested, the usefulness of this analysis would be more convincing. Such intuitive datasets can come from computer vision data, e.g., MNIST, CIFAR. Deep learning models are witnessing a shift from perception to generative modeling, achieved mostly through minor variations in objective function and output format. How do the motivation and framework proposed by this paper apply to generative models? This paper does not include any such discussion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The major questions have been posed in the above discussion of weaknesses. Minor questions: In Definition 1.1, $K^\circ, \overline{K^\circ}$ should be defined clearly. A summary of important notations could be given in the text. Line 54: the order of $M$ and $M^*$ seems to be reversed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations extensively. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Regarding readability We thank the reviewer for their comments regarding readability and understandability. We will include a table of notation in the Supplemental Material to help the reader understand our mathematical notation. We are also planning on replacing Figure 2 with Figure 1 in the attached PDF to provide a high-level understanding of parallax, as well as the importance of matching the geometry between model and data. We will also expand our references to include standard TDA textbooks, such as Edelsbrunner and Harer's computational topology textbook, as well as Hatcher's algebraic topology text. We will also add a preview of Sections 6 and 8 to the introduction to help orient the reader. ### Regarding additional examples We agree that additional applications and examples (particularly in imaging data) are desirable to advance the field. Please see the general rebuttal for the description of another imaging dataset example. We are preparing a separate manuscript that surveys imaging data and popular vision networks. The rich field of convolutional neural networks deserves special attention due to the high extrinsic dimension and counter-intuitive metric geometry. In preparing this submission, we found that discussion of the often counter-intuitive metrics took too much attention away from the important discussion of topological interpretation and stability analysis. In this submission, we intentionally restricted the scope to an introduction to the core definitions and algorithms, and we demonstrated their meaning on examples that did not rely on convolutional layers, but we look forward to presenting follow-up work on imaging data in the near future. ### Regarding application to generative modeling We agree that generative modeling is increasingly important, especially in the context of adversarial robustness and safety. 
We are preparing a separate manuscript on that topic, but the mathematical formulation of generative models is more subtle than perception models. In this submission, we restricted the scope to perception models to keep the (already heavy) mathematical formulation manageable within the space available. ### Regarding minor comments We thank the reviewer for catching these mistakes. We will be sure to define all of the notation used in Definition 1.1 and we will add a table summary of notation used to the Supplemental Material. Indeed, the order of \\(\\mathcal{M}\\) and \\(\\mathcal{M}^*\\) in Line 54 is reversed and should be corrected. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I remain positive about the submission.
Summary: The authors suggest that a model is good if its geometry matches the geometry of data, and introduce a persistent-homology based method to evaluate this similarity. Strengths: There is a number of theoretical results that seem to support the soundness of the proposed approach. Weaknesses: (W1) The paper is extremely hard to read, which significantly limits its impact. For example, you mention that you rely on bi-filtered persistence module, but it is still unclear to me what the two filtrations are (what is reflected by alpha and epsilon), and why each of them is useful. As another example, rho_K(Y) is a crucial notion for your work, but you never name or describe it, let alone motivate it or provide intuitive explanation; also, this requires an explicit Definition. As a final example, Figure 2 remains a mystery. All definitions and lemmas could be named, but more importantly, they could be first motivated and then interpreted. The latter could be done in the beginning of a section, by summarizing all the results and their relevance in a paragraph or two, or throughout the section in a story-like manner. The same holds for the whole text, e.g., when you describe your procedure, you could describe the concepts in words (even if you did this before, it is good to remind the reader, especially since the paper is extremely notation-heavy): (2) for a model [nice] K ∈ M(X), compute [what is this] λlo,X(K) and [what is this] λhi,X(K)”, or “In this section, we provide algorithms to estimate [what is this] P_{alpha, epsilon}”. A well-organized notation table might also be helpful. For this reason, a lot of my feedback is an educated guess, and I am definitely open to substantially changing my score for this paper if it is presented more clearly, so that I can actually assess the soundness and contributions. (W2) Experiments consider only one data set. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: (Q1) Can you motivate the name Parallax? 
(Q2) You write: “We suggest that a model K is ‘good’ if the geometry of K matches the geometry of X.” How is this related to “If a certain architecture is incapable of expressing a decision region that is equivalent in topology to training data, then there is no hope of it ever generalizing to the true data” in [1]? More generally, articles [1]-[3] also consider persistent homology of the data and model, can you comment if and how they are related to your approach? (Q3) Can you elaborate on the relationship between your work and the related articles you mention in Section 1.3, in particular with [17] and [28]? (Q4) Where is the proof of Lemma 2.6? (Q5) “In Section 4, we introduced ‘local simplicial matching’ as a way to compare small-scale geometry. In this section, we introduce ‘homological matching’ as way to compare large-scale geometry.” Should this not be mentioned earlier in the paper? Is it possible to provide an illustration? (Q6) “We apply Algorithm 7.2 to estimate which edges in R are accepted by k1, and discover 2λlo,X(K1) = 3.45, which is the longest edge available. So, the Rips complex cannot distinguish K1 from the convex hull; the model does not reflect the geometry of X.” How do you make this conclusion, can you elaborate? This is related to weakness (W1). Other minor comments: - Line 28: should be applied -> could be applied? - Figure 1: It would be better if the axes ranges are also provided in the left plot, to make a connection with its PD. - Line 168: Def’n -> Definition - Explicitly mention that the code is made publicly available. - Is Limitations Section 9? [1] Guss, William H., and Ruslan Salakhutdinov. "On characterizing the capacity of neural networks using algebraic topology." arXiv preprint arXiv:1802.04443 (2018). [2] Ramamurthy, Karthikeyan Natesan, Kush Varshney, and Krishnan Mody. "Topological data analysis of decision boundaries with application to model selection." International Conference on Machine Learning. PMLR, 2019. 
[3] Khrulkov, Valentin, and Ivan Oseledets. "Geometry score: A method for comparing generative adversarial networks." International Conference on Machine Learning. PMLR, 2018. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Limitations are discussed in the final section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Re: the bi-filtration The bifiltration is the most technically challenging concept in this project, but necessary for the stability results (Lemma 3.4 and Theorem 5.4). The first parameter \\(\\alpha\\) is the filtration radius (that is, half the length) of the \\(K\\) geodesic. The second parameter \\(\\varepsilon\\) is the difference between the length of that \\(K\\) geodesic and the corresponding \\(V\\) geodesic. This meaning is embedded in Definition 2.2. Consider two antipodal points on a unit circle, where the model is a tight annulus matching that circle. In the ambient \\(V\\), the Rips edge between those two antipodal points would have filtration radius 1. In the annular model \\(K\\), the Rips edge between those two points would have filtration radius \\(\\approx \\pi/2\\), because the geodesic has to go around the circle, so \\(\\varepsilon \\approx \\pi/2-1\\) in this case. \\(\\varepsilon\\) keeps track of the local distortion of length between \\(V\\) and \\(K\\). This can be used to estimate the size of voids and perhaps to make estimates of the curvature of \\(\\partial K\\), for example. We would be happy to elaborate upon the meaning of the \\(\\varepsilon\\) parameter briefly, near Definition 2.2. We could also discuss the use of \\(\\varepsilon\\) to estimate the size of voids in the Supplemental Material. ### Re: \\(\\rho_K(Y)\\) The definition of \\(\\rho_K(Y)\\) is in lines 105--107, where the Rips complex is defined. For a particular edge \\(e=(x_0, x_1)\\), \\(\\rho_K(e)\\) is half the geodesic distance between \\(x_0\\) and \\(x_1\\) in \\(K\\). We agree that this jargon can be difficult for readers outside our speciality. We will help orient the reader by providing this brief clarification along with a reference. ### Re: Figure 2 Figure 2 is intended to show the structure of the bi-filtration, and how births and deaths of the bifiltration can be detected using TDA. 
Figure 2 is not crucial, and we will replace it with Figure 1 of the included PDF. ### Re: naming of lemmas We agree that this is a good strategy, and we named the definitions, theorems, etc. which we see as the most crucial statements. A minority of statements (2.1, 2.4, 2.6, 4.2, 4.3, 4.4, 4.5, 4.6, 5.2, 5.4, 7.1) were unnamed because they provide intermediate technical bounds needed elsewhere. We are open to additional suggestions for which statements deserve specific names. ### Re: motivation Sections 1.1 and 1.2 were intended to give this sort of story-like motivation and summary, but it is clear from reviewer comments that they could be improved. We will rephrase section 1.2 to provide a clear roadmap of the major definitions and lemmas, and add a table of notation to the Supplemental Material to help orient a reader. We will replace Figure 2 with Figure 1 of the attached PDF to help elucidate the main ideas behind parallax and the importance of understanding the geometry of models. ### Re: (W2) We agree that the paper would be improved with further exploration of parallax on other datasets, in particular on imagery datasets. We are planning a suite of experiments in which we will apply parallax to a broad set of commonly used imaging data, but could not include it here due to page constraints. The present work is meant to introduce the main definitions and provide justification for the theorems and ideas of the paper. ### Re: (Q1) Yes! Parallax was named by analogy to the method in astronomy. There is an inaccessible object that cannot be measured directly (in this case, the geometry of the model \\(K\\)), so we must infer its location by comparing multiple observations from the available vantage points (in this case, the points in the dataset \\(X\\)). We will add a sentence to this effect to the introduction. ### Re: (Q2) We thank the reviewer for reminding us of this paper and we will reference it. 
The key difference between this work and ours is that our theory quantifies the ability of a specific model, independent of parameters or network architecture, to match the shape of a dataset in a specified way. The Guss et al. paper discusses the ability of any model produced from a specific network architecture to perform well on datasets of a specific shape. ### Re: (Q3) Both of these references track and study the topology of a dataset as it evolves through the layers of a neural network. They make empirical claims about what tends to happen as training accuracy increases, but do not provide geometric specifications about what should happen if one is to trust the model, and these specifications are sorely needed because, as has been shown in the literature, metrics such as perfect training accuracy are not sufficient. ### Re: (Q4) The proof is as follows, and should be added to the Supplemental Material or immediately after Lemma 2.6. If \\(\\varepsilon \\geq \\alpha \\), then the second condition in Definition 2.2 becomes \\(\\rho_K(Y) - \\rho_V(Y) \\leq \\alpha\\), which is a trivial condition for \\(0 \\leq \\rho_V(Y) \\leq \\rho_K(Y) \\leq \\alpha\\). Thus, the sets \\(P_{\\alpha,\\varepsilon}\\) are identical for all \\(\\varepsilon \\geq \\alpha\\). ### Re: (Q5) Yes, we should have foreshadowed these terms in Section 1. Figure 1 of the attached PDF could be added in Section 1 or in the Supplemental Material to clarify the interpretation. ### Re: (Q6) Yes. Since \\(\\lambda_{lo}\\) equals the filtration level of the longest edge, all edges are included in the parallax complex. For every edge in the Rips complex of the original dataset, the geodesic representing that edge is contained in the model. Thus, the Rips complexes \\(R(X,V)\\) and \\(R(X,K)\\) are identical. This does not imply that \\(K=V\\), but any differences are undetected by the pairwise geodesics between the points of \\(X\\). 
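The antipodal-point example from the bi-filtration discussion above can be verified with a quick numeric check (our own illustration, not code from the paper):

```python
import math

# Two antipodal points on the unit circle sit at Euclidean distance 2,
# so their Rips edge in the ambient space V appears at filtration
# radius rho_V = 1. In a tight annular model K, the geodesic between
# them must wrap around the circle (length ~= pi), so the same edge
# appears at rho_K ~= pi / 2.
rho_V = 1.0
rho_K = math.pi / 2
eps = rho_K - rho_V  # distortion parameter epsilon ~= 0.5708
```

The positive `eps` is exactly the local length distortion between the ambient space and the model that the second filtration parameter tracks.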
### Re: Minor Comments We thank the reviewer for these wording changes and figure formatting comments, and we will be happy to make them. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response and the additional clarifications (please make sure to include them in the paper, e.g. comments on the bi-filtration, \rho, the name parallax). As indicated earlier, I raised my rating. However, I still worry whether the authors are fully aware of the readability issues of the paper, even though they were raised by multiple reviewers and acknowledged in the general comment. For example, you say that the definition of \rho is in lines 105 -- 107, but the point I was trying to make is that it should be named and included in a separate Definition environment, since it is a crucial concept for the paper (any two lines in the text are easy to miss, and additional references are not going to help much here). I am curious to see the promised notation table. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the additional feedback. We take the readability concerns seriously. We have reserved the Definition environment in LaTeX for definitions novel to the paper. Perhaps additional bolding or highlighting would help draw the reader's attention to the first use of mathematical jargon. The first sentence of Section 2 could be improved with the following introductory comment: "In the geodesic space \\(V\\), \\(B_{\alpha}(x)\\) denotes the geodesic ball of radius \\(\alpha\\) around \\(x\\). For a formal edge \\(e=(x_0,x_1)\\) between points in \\(X\\), \\(\rho_V(e)\\) is the minimum radius for which \\(B_{\rho_V(e)}(x_0)\\) intersects \\(B_{\rho_V(e)}(x_1)\\). Thus, \\(2 \rho_V(e)\\) is the geodesic distance in \\(V\\) between \\(x_0\\) and \\(x_1\\). The Rips complex \\( R(X,V) \\) is the simplicial complex generated by these edges, as filtered by \\( \rho_V(e)\\) [13, Section III.1]. More generally, for any \\(K \in \mathcal{M}(X)\\) ..." 
Here is an example notation table that might help orient the reader.

| Notation | Plain Meaning | First Appearance | Term |
|----------|---------------|------------------|------|
| \\(V\\) | A geodesic space, such as \\(\\mathbb{R}^n\\) | p.2 | Ambient Space |
| \\(X\\) | A finite set in \\(V\\) | p.2 | Dataset |
| \\(k\\) | A perception function on \\(V\\) | p.2 | Model (as function) |
| \\(K\\) | Support set of \\(k\\) | p.2 | Model (as set) |
| \\(\\mathcal{M}(X)\\) | Models compatible with dataset \\(X\\) | p.2 | |
| \\(\\mathcal{M}^*(K)\\) | Datasets compatible with model \\(K\\) | p.2 | |
| \\(K^{\circ}\\) | Interior of set \\(K\\) in topological space \\(V\\) | p.2 | |
| \\(\overline{K}\\) | Closure of set \\(K\\) in topological space \\(V\\) | p.2 | |
| \\(K^{c}\\) | Complement of set \\(K\\) in \\(V\\) | p.2 | |
| \\(\Omega\\) | Bounded open set in \\(K^c\\) | p.2 | Void |
| \\(R(X,K)\\) | Rips complex of \\(X\\) in geodesic space \\(K\\) | p.4 | |
| \\(\\alpha\\) | A filtration level or radius | p.4 | |
| \\(B_{\alpha}(x)\\) | Geodesic ball of radius \\(\\alpha\\) about \\(x\\) | p.4 | |
| \\(Y\\) | Chain (formal sum of simplices) in a Rips complex | p.4 | |
| \\(\rho_K(Y)\\) | Minimal filtration radius for \\(Y\\) in \\(R(X,K)\\) | p.4 | |
| \\(\\varepsilon\\) | Difference of filtration radius between \\(V\\) and \\(K\\) | p.4 | |
| \\(P\\) | Parallax bi-complex for \\(X, K, V\\) | p.4 | Parallax |
| \\(HP\\) | Homology of \\(P\\) | p.4 | |
| \\(L\\) | A 1-parameter path through \\(P\\) | p.4 | Rips-like Path |
| \\(HL\\) | Homology of \\(L\\) | p.4 | |
| \\(\\overset{\\kappa}{\\approx}\\) | Pointwise perturbation of \\(X\\) in \\(V\\) | p.5 | Perturbation |
| \\(\\overset{\\kappa}{\\approx}_K\\) | Pointwise perturbation of \\(X\\) in \\(K\\) | p.5 | \\(K\\)-Perturbation |
| \\(f_{\sharp}\\) | Induced map on a simplicial complex | p.5 | |
| \\(f_*\\) | Push-forward map on homology | p.5 | |
| \\(\lambda_{.}\\) | A meaningful filtration value in \\(HP\\), e.g., "lo", "ball", "sup", "hi" | p.6 | |
Summary: This paper introduces topological parallax, a theoretical framework for analyzing the similarity of multiscale geometric structures between datasets and models. It estimates the topological features in the model by examining the effect on the Rips complex of geodesic distortions using the reference dataset. It shows the stability of the proposed framework under dataset perturbations. It also provides a practical computational method on top of the theory. Strengths: - The paper provides a novel point of view. It claims that it is the first work to use TDA to express a desired geometric relationship between models and datasets, and to the best of my knowledge, I don't see any existing work studying this problem. - This paper provides interesting insights into the concepts of "overfitting" and "generalization" of neural networks, which are important for the safety and robustness of AI. - It proposes a theoretical framework with abundant derivations and proofs. - It also introduces a practical computational method and demonstrates it on concrete data and network examples. Weaknesses: - Section 8 shows an interesting example of a data space with novel topological structures. But is this method also applicable to other real-world scenarios such as image recognition? What is the complexity of the computation w.r.t. dataset size and dimensionality? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The theoretical framework is built upon a binary classification problem. Is it potentially generalizable to more complex model outputs, e.g. multi-class classification (especially when different output channels are inter-dependent)? How would the geodesics be defined in such a case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are well-discussed in the paper. As a theory work, I believe it won't have direct social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Regarding real-world scenarios such as image recognition Yes! See the general rebuttal for a discussion of further experimentation with imaging datasets. ### Regarding computational complexity The computation of parallax is the same "big-O" as the computation of Rips complexes and their persistence diagrams---albeit with a larger constant. This is because parallax merely inserts a model-evaluation step upon the examination of each edge. The cost is therefore \\(O(t N^2)\\) for \\(N\\) points and a model that takes time \\(t\\) to evaluate. There are very interesting dimension- and structure-dependent estimates for the real-life/expected timing of Rips computations (https://arxiv.org/abs/2211.09075), and we would be happy to include a brief discussion of these considerations in either Section 7 or the Supplemental Material. ### Regarding multi-class classification This is a very interesting question. The theory is presented in this submission for single-class perception problems, but as emphasized by the reviewer, many key applications will involve multi-class problems (such as overlapping families in multi-label imaging datasets). To handle these situations, we typically consider each label separately (against the others) or study semantically meaningful collections of labels. The overall theme in this case is that the geometry of each class and the combined geometry of relevant mixtures of classes should be respected by the respective models. One can manipulate the definition of the model oracle \\(k\\) to account for relative likelihoods of various labels, and then study the output of parallax as those relative likelihoods vary. We can include a brief discussion of this consideration in the Supplemental Material. A more detailed analysis of multiclass problems, especially in the context of imaging datasets, will appear in a forthcoming manuscript focused on that topic. --- Rebuttal Comment 1.1: Comment: Thanks for your reply! 
I think my questions are well-answered, so I'll keep my positive rating.
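The \\(tN^2\\) cost argument in the rebuttal above can be made concrete with a small sketch. This is not the authors' implementation: the function name and the midpoint-probe placement are illustrative assumptions, chosen only to show where the extra factor of \\(t\\) enters the edge-examination loop.

```python
import numpy as np

def count_model_evaluations(points, model, alpha):
    """Toy cost accounting, not the parallax algorithm itself: examining
    each candidate Rips edge at scale alpha triggers one extra model
    evaluation, so for N points the overhead is on the order of t * N^2,
    where t is the cost of a single model evaluation.
    The midpoint check below is an assumed stand-in for that step."""
    n = len(points)
    evaluations = 0
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= alpha:
                midpoint = (points[i] + points[j]) / 2.0  # assumed probe point
                evaluations += 1
                if model(midpoint):
                    edges.append((i, j))
    return edges, evaluations
```

For a large enough `alpha`, every one of the \\(N(N-1)/2\\) pairs is examined, recovering the quadratic count mentioned in the rebuttal.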
Summary: In this paper, topological parallax is introduced to compare a trained model with a dataset and determine if they share similar multiscale geometric structures. The authors argue that the model is "good" if the geometries are similar. To determine the similarity between the model and the data, they calculate a homological matching that can be applied to many ML systems. This method can thus be used to assess whether a model has good generalization or is robust to perturbations. The authors validate topological parallax with a toy dataset, and the qualitative and numerical results support the authors' claim. Strengths: Topological data analysis is emerging as a powerful tool for understanding AI systems. Based on TDA, the proposed topological parallax measures the geometric similarity between model and data, which helps to understand whether the trained model is good or not. Moreover, it has many good properties, e.g., it is stable to perturbation, which is crucial for AI attack detection. Weaknesses: I think that further experiments could be helpful to understand the proposed tool. For example, Figure 3 shows only the results of one neural network, and it is expected to show the comparison of multiple AI models using topological parallax. For example, compare the decision tree model and the neural network in Figure 1. Also, try to compare several known neural networks. For example, compare a ReLU MLP with a Sin MLP. Given an image, the ReLU MLP may not fit the data well, but if you replace ReLU with Sin in the MLP (called Sin MLP), the latter method can fit the data with almost zero error. Sin-MLP, also known as SIREN, was described in the article "Implicit neural representations with periodic activation functions". The reviewer is not familiar with this area and therefore basically respects the other reviews in principle. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: How can topological parallax be used to derive geometric regularities and improve neural network training? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Regarding further experiments We agree with the reviewer that further exploration of real-world datasets is strongly desired in future work. Please see the comments in the general rebuttal in this regard, as well as the PDF of attached figures for an example of parallax applied to a model on an imagery dataset. We thank the reviewer for the suggestion of comparing the decision tree model with the neural network of Figure 1, as well as comparison with Sin-MLP and ReLU-MLP. The comparison within Figure 1 would be easy to perform, and importantly, would provide readers with another example of parallax and its interpretability. We will put this discussion and analysis in the Supplementary Material. The comparisons between Sin-MLP (SIREN) and ReLU-MLP are quite interesting, and we would like to pursue these and additional comparisons in followup work, in order to keep this submission from becoming overly complex. As the reviewer noted with Sin-MLP, we may assume that all methods will have "zero" or "almost zero" test error. One of the motivating ideas behind parallax is to distinguish models in the cases when traditional metrics (such as accuracy) give (near) perfect accuracy and thus are indistinguishable through the lens of the metric. We believe that parallax provides an additional metric by quantifying geometric consistency between model and data. ### Regarding deriving geometric regularities and improving training Recent advances in topologically-inspired loss functions make including topological properties within the loss function feasible (see refs [7, 22, 24] in original submission). The workflow laid out in Section 7.1 highlights how parallax can be used in neural network training. We hope to implement this technique in code soon and describe its implementation and a suite of experiments in future work.
Rebuttal 1: Rebuttal: We thank the editors and reviewers for their high-quality work. It is clear that the reviewers read the submission carefully and thoughtfully, and that the overall editorial process at NeurIPS is efficient and productive. Overall, the weaknesses raised by the reviewers fell into two clear categories. ### 1. Exposition of Definitions and Notation The reviewers were very helpful in identifying sections that may be difficult for readers due to field-specific jargon. We agree that expository clarification of the definitions and notation is extremely important for readability among a broad interdisciplinary audience. We believe we can address these concerns by inserting some additional explanatory words (e.g. ``... the interior \\(K^\\circ\\)''), adding a table of symbols in the Supplementary Material, and by including citations to the standard definitions in the most common topology textbooks (Hatcher's *Algebraic Topology* and Munkres' *Topology, 2nd edition*) to orient the reader. Specific alterations are suggested in the individual rebuttals. ### 2. More comprehensive experimentation, particularly for imaging applications We agree that additional applications and examples (particularly in imaging data) are desirable to demonstrate the utility of parallax and to advance the field of topological analysis of ML models. The comments by the various reviewers are exactly in line with our overall research agenda. We are preparing a separate manuscript that surveys imaging data and popular vision networks. The rich field of convolutional neural networks deserves special attention due to the high extrinsic dimension and counter-intuitive metric geometry. In preparing the current submission, we found that discussion of the often counter-intuitive metrics took too much attention away from the important discussion of topological interpretation and stability analysis. 
Therefore, we intentionally restricted the scope to an introduction to the core definitions and algorithms, and we demonstrated their meaning on examples that did not rely on convolutional layers. We look forward to presenting followup work on imaging data in the near future, but we feel that the present submission must stand on its own as laying the structural groundwork for that experimentation and exploration. However, we believe we will be able to offer one additional example in the Supplemental Material without distracting too much from the intended scope of the manuscript. In Figure 2 of the attached figures PDF, we have included an additional experimental example based on a simplified version of the Utah teapot dataset consisting of images of teapots with their spouts and handles removed, essentially images of jars with lids. We construct a simple model which accepts all test data points but has very poor interpolation properties. We show that parallax detects this poor interpolation and we highlight 6 interpolated images that the model rejects. In particular, this example exemplifies the concern of Section 6: "if step (2) yields \\(\\lambda_{lo,X} (K) = 0\\), then \\(K\\) has voids between every pair of points in \\(X\\), possibly due to under-sampling or over-fitting, and should not be trusted for any interpolative purpose". Additional description of this dataset and CNN could be included in the Supplemental Material. Pdf: /pdf/e3c3533e4a30f5c0963e5d4754a9788c169292a3.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces the concept of parallax as a bi-filtered persistence module that measures the geodesic distortion between the dataset and the model. The paper also proves that parallax is stable under perturbations of the dataset and provides a criterion of homological matching to assess whether the model captures the persistent features of the dataset. The paper demonstrates the effectiveness of parallax on two models using the cyclo-octane dataset and discusses the limitations and future directions of the method. Strengths: This paper presents an interesting and novel approach to evaluate the geometric similarity between a dataset and a model using topological data analysis. The paper is well-written, clear, and provides sufficient background and motivation for the problem. Weaknesses: There is little experimental analysis on real-world datasets. Perhaps the authors could consider merging some parts into the appendix and including more analysis. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. This paper presented a novel theoretical way to measure the performance of a model by analyzing the geometry of the model and data. However, we already have lots of metrics, such as accuracy, which can be measured once the trained model and the corresponding dataset are available, so it is not clear to me how the proposed method advances the existing methods. 2. The paper is more related to math/statistics, while the analysis seems to be very solid. The analysis of overfitting and generalization capability (especially under the setting of covariate shift), which is claimed as the major contribution by the authors, is not easy to follow; more concrete examples are needed for illustration. 3. This paper does not discuss the details of models, i.e., ConvNet, RNN, transformer. I am curious whether the proposed theory could be applied to all models trained by back-propagation? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Regarding real-world datasets We agree that further exploration of real-world datasets is strongly desired in future work. See the comments in the general rebuttal as well as the included PDF for an additional example of using parallax to interpret a model of an imagery dataset. ### Regarding comparison to other methods We agree that there are a variety of well-studied metrics for understanding the performance of models. We believe that parallax can be used in conjunction with other metrics to provide additional evidence that a model is well-behaved. For example, it is now common to train models to (near) perfect test accuracy, and in such scenarios, metrics like accuracy provide no distinguishability. In fact, this is one of the major points made by Belkin's survey *Fit Without Fear*, which we discuss briefly in the paper but are happy to add more detail about. Parallax may be used to differentiate and highlight certain models compared to others. We agree that the connection between parallax and generalization capability is not fully explored in this work. Our intention was to introduce the idea of homological matching provided by parallax, prove statements about its stability, and show via an example that it matches what we believe is a widely held intuition about "the shape" of manifolds under the manifold hypothesis. The cyclo-octane example and modified Utah-teapot example do go a long way, we feel, toward demonstrating that perception models which satisfy our criterion accept points which are reasonable and reject points which are not, and that the opposite is true of perception models which do not satisfy our criterion. This is intuitively connected to the idea of generalization capability; however, we agree that it is far from a rigorous argument connecting our criterion to precisely quantified statistical concepts like the generalization gap. 
On the other hand, as the *Fantastic Generalization Measures...* survey shows, such rigorous arguments are few and far between in the literature. We are planning a new paper with a large scale experimental suite attempting to generate such a rigorous argument, at least empirically. ### Regarding details of models (ConvNet, RNN, transformer, etc) A fascinating aspect of parallax is that it does not depend whatsoever on the architecture or the training system of the model \\(k\\). As long as an evaluation oracle is available, parallax will apply. Thus, parallax provides a method to compare how well different models match the topology of a dataset, without relying on any architectural properties of the models. As discussed in the general rebuttal, we do intend to pursue this sort of comparison, particularly for image datasets, in followup work. --- Rebuttal 2: Comment: Thanks for the response. I would keep the rating.
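The architecture-independence point in the rebuttal above (parallax requires only an evaluation oracle) can be sketched in a few lines. The wrapper name and the threshold are hypothetical, and any trained model with point-wise outputs would do:

```python
def as_oracle(model, threshold=0.5):
    """Wrap any trained model (ConvNet, RNN, transformer, ...) as a 0/1
    oracle k. Only point-wise evaluation is required, so no architectural
    details of the model are ever inspected."""
    def k(x):
        return 1 if model(x) >= threshold else 0
    return k

# The model set is then K = {x in V : k(x) = 1}, probed only through k.
```

Any method that consumes `k` in this form is, by construction, agnostic to how the underlying model was built or trained.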
Summary: The authors propose a method to evaluate how well a model learned a data distribution based on topological data analysis. The authors assume that the model is a classifier (the output of the model is binary, or consists of a finite set of classes that can be evaluated separately), and topologically compare the set of positive data samples to the set of data points where the model outputs a positive value. Topological properties are computed using filtrations of the Rips complex on the data points. The quality of a model is evaluated by checking 1) at which scale the complex in the full ambient space starts to diverge from a complex that is restricted to the subspace where the model is positive, and 2) if persistent features of the full complex are also present in the restricted complex. The authors give an example of measuring and comparing the quality of two models applied to a small dataset with known geometry of the data manifold. Strengths: - The approach for measuring the similarities between the manifold learned by a model, and the manifold implicitly described by a set of data samples seems interesting, and is novel as far as I can tell (although I am not familiar with topological data analysis). - It seems like it could be useful for finding adversarial examples for a model, or identifying regions where a model does not perform well. - If this approach would work with real-world data (or if it can be extended to do that), it could be applicable quite broadly to evaluations of models that learn a distribution in high-dimensional spaces. Weaknesses: - The empirical evaluation is not thorough enough. I do not have a good intuition in which situations the proposed measure of model quality would be useful, and I do not think it is obvious from the description of the method (see below for a discussion). Therefore, a more thorough empirical evaluation is needed to show in which situations the measure is useful. 
Specifically, more datasets should be evaluated, and the metric used to evaluate success could be improved as well: the current metric using bond lengths is specific to the dataset used, and would not be useful for comparing the performance across different datasets. A more useful metric might be to use a dense held-out test set, and compare the model quality predicted by the proposed method to the model quality according to the test set. - The method is not compared to any alternatives for measuring the quality of the distribution learned by a model, or for finding adversarial examples. For example, what are advantages/disadvantages compared to the standard approach of using a held-out test set? (I can imagine that the proposed method might have advantages if the test set cannot sample the space densely enough, but disadvantages for regions outside the convex hull of the training samples.) Or compared to other methods for finding adversarial examples (there is a large body of literature, a lot of work can be found, for example, by searching for "adversarial examples" in Google Scholar)? This should be at least discussed in the related work, and since the advantages/disadvantages are not clear from a theoretical standpoint, ideally an empirical comparison should be provided. - It seems like the measure could be less useful for detecting errors of the model (false positives or false negatives) outside the convex hull of the data samples, since the space outside the convex hull is not explored by the Rips complex. But it seems like real-world data might have true positives outside the convex hull (i.e. the true data manifold may extend significantly outside the convex hull of the data samples). See details for a discussion. - The computational complexity of the method is not provided. 
Ideally the authors should provide the number of evaluations of the model that is typically needed, in addition to the overhead from building the Rips complex and computing the algorithm described in Section 7. - The exposition is very hard to follow for non-experts in topological data analysis, and additionally some symbols/variables are undefined (if these are common symbols in topological data analysis, I am not familiar with them). More details: * More empirical evaluation is needed to understand in which situations the proposed measure gives good results. From the description of the proposed measure alone, it does not seem clear to me in which situations the measure gives a good estimate of model quality. On one hand, I can see how topological properties could be relatively stable descriptors of a data manifold in high-dimensional space, but on the other hand, I do not have a good intuition how the data manifold typically behaves in high-dimensional space. For example in the manifold of natural images or their latent features, I can imagine that the true data manifold could lie significantly outside the convex hull of the data samples, and that it would be hard to capture enough data samples to cover the full data manifold in their convex hull. In that case, a good model would not just be a "thickening" of the manifold given by the data samples (as described by the authors), or even a thickening of their convex hull, but would also include samples significantly outside the convex hull. For example, a dataset like CLEVR may contain images showing a blue sphere at multiple random locations in the image, but some regions of the image may not be covered by the blue sphere in any of the samples; in that case, we would still expect a good classifier of the data manifold to also classify images as positive where the blue sphere is in a location that was not directly observed in the data samples. 
Thus, it seems to me that a good model needs to extrapolate significantly outside of the convex hull of the data samples. Would the Rips complex constructed on the data samples be able to capture the part of the true data manifold that extends beyond the convex hull of the samples? And would the proposed method therefore be able to handle such cases, which might be quite common in real-world datasets? Arguably the manifold would be better behaved in latent spaces that have a more semantical representation of the data, but this then means that the latent space that the proposed measure is applied to has to be chosen carefully, and it is unclear how to choose the latent space. * The exposition is missing definitions in several places: - In Section 1, there are a few missing definitions: - $K^\circ$ is not defined. - $\overline{K}$ is not defined. Does this denote the set complement? If so, would this not mean that $K^\circ$ is the complement of $K$, and in that case, why have two different notations for the complement? - $K^c$ is not defined. It could also be the set complement, but then there would be three different notations for the complement, so I guess both $K^\circ$ and $\overline{K}$ do not actually denote the set complement. - In Section 2, there are several missing definitions that make it hard to follow the exposition: - A Rips complex is not defined. Rips complex may be well-known in topological data analysis, but I think that only a small fraction of NeurIPS readers will be experts in topological data analysis, so giving a short definition would be good. Also, even if the Rips complex is known to readers, the arguments to the Rips complex that are used here may need to be defined. The first argument X is quite clear, but second argument $K$ is less clear. Does the second argument restrict the metric used to construct the Rips complex to geodesics in the subspace $K$? - A filtration of a Rips complex is not defined. - A chain $Y$ is not defined. 
- $B$ is not defined. - In Corollary 2.5, $i_*$ is not defined. * Eq. 2.2 could probably be simplified. Since $K$ is defined as a subspace of $V$, $\rho_V(Y) \le \rho_K(Y)$ is true for all $Y \in R$ (according to Lemma 2.1) and does not need to be mentioned explicitly in the definition. Therefore a simpler definition would be: $P(X, K, V)_{\alpha,\epsilon} =$ {$Y \in R\ |\ \rho_K(Y) \le \rho_V(Y) + \epsilon, \rho_K(Y) \le \alpha$} * in Algorithm 7.5, step 2, should f(p) be k(p) instead? * On Line 21: I would define $K$ as {$x \in V\ |\ k(x) = 1$}, since $k^{-1}$ is not well defined for non-injective functions. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: A preview of the discussion of the advantages/disadvantages compared to related work might be useful, as well as a discussion of the issue with evaluating regions outside the convex hull of the data samples. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: A few limitations have been discussed, including that the authors are not certain if the method would work on more complex real-world data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for helping identify sections that may be difficult for readers due to the field-specific jargon. We agree that clarification of notation is extremely important to readability for a broad interdisciplinary audience. Please see the discussion of clarifying notation in the general rebuttal. Here are some specific changes that may be beneficial to the exposition. In addition we will include Figure 1 in the attached PDF to help the reader visually connect our definitions to meaning. - In Definition 1.1, add prose: We define a model \\(K\\) to be the closure of an open set, colloquially known as a "solid," \\(K= \\overline{K^\\circ}\\). For any finite dataset \\(X\\), we consider the collection \\(\\mathcal{M}(X)\\) of all models for which \\(X\\) is contained in the interior of the model, \\( X \\subset K^\\circ \\). - In Definition 1.2, add prose: A void is a bounded open set \\(\\Omega\\) in the complement of a model, \\(\\Omega \\subset K^c\\), such that ... - In Section 2, Equation (2.1), we will introduce basic definitions of Rips complexes on subsets and cite standard TDA textbooks. - To clarify the notion of a "chain" by adding the parenthetical "(a formal sum of simplices)" and referencing Hatcher's *Algebraic Topology*. - Just before line 105, add a sentence: Let \\(B_\\alpha(x)\\) denote the closed geodesic ball in \\(V\\) centered at \\(x\\) of radius \\(\\alpha\\). - Note that \\(\\iota_*\\) in Definition 2.5 is defined in the previous sentence; \\(\\iota_*\\) is the homomorphism on homology that is induced by \\(\\iota\\) on complexes. To help orient the reader, we would provide a citation to *Algebraic Topology* by Hatcher. - We agree that Eq 2.2 can be simplified as stated. It was written as shown to emphasize the two-sided bound on \\(\\rho_K(Y)\\), but either way is acceptable. 
- On line 21, and also line 290, replace \\(K=k^{-1}(1)\\) with \\(K=\\{x \\in V : k(x)=1\\}\\) to avoid any confusion regarding the pre-image notation. - Yes, in Algorithm 7.5, step 2, \\( f( p ) \\) should be \\( k( p ) \\). ### Regarding comparison outside the convex hull We agree that models with good generalization will necessarily allow extrapolation outside the convex hull of the training set. This idea is expressed in three ways in the submission. First, in Section 1.1, lines 59--79, we comment on the relationship of reference [2] *Learning in High Dimension Always Amounts to Extrapolation* and the related topology. Second, our definitions of \\(X\\) and \\(K\\) in Definition 1.1 require that the dataset \\(X\\) is contained in the *interior* of the model \\(K\\), so there are necessarily points outside the convex hull of \\(X\\) that \\(K\\) would accept. However, the essence of parallax is to ask ``what can I detect about \\(K\\) using only \\(X\\)?'' We do not assume that \\(X\\) is an original training set, only that it is an available dataset. If more datapoints were available from an additional source, such as a generative model or from additional data collection, then those points should also be used for parallax. Third, recall that the main result is a stability result (Theorem 5.3), based on pointwise perturbation (Section 3). Thus, it exists to provide confidence that these methods remain consistent, even if the dataset is pushed "outwards" (or any other direction) by a modest amount. We absolutely agree that semantic interpretation of the latent data manifold is an extremely slippery concept, and we hope that parallax helps capture one (incomplete) aspect of that relationship. ### Regarding comparison to other methods We agree that it would be beneficial to the field to compare parallax to other ways of measuring generalization and robustness. 
The most comprehensive reference here is *Fantastic Generalization Measures and Where to Find Them* by Jiang et al (arxiv id 1912.02178). As discussed in the general rebuttal, we do plan on a broad survey that applies parallax to a wide variety of datasets and network architectures, and in that survey we will attempt to compute as many of these as is practical. We could add a brief conjectural discussion to this submission; however many of the generalization measures available do not have topological foundations (instead, information-theoretic or statistical foundations), so meaningful comparisons are difficult without a years-long interdisciplinary research program. --- Rebuttal Comment 1.1: Comment: Thanks for the interesting discussion and additional results. Adding the clarifications of the notation to the paper would help a lot. Regarding data outside the convex hull, the fact that the test set would also be used to construct the Rips complex, not only the training set, is a good point (it might be good to mention this explicitly in the paper, maybe as part of a description of how the method would be used in practice). Although it still seems likely to me that the true data manifold would extend significantly beyond the convex hull of these data points, so it seems unclear to me how much the Rips complex could help identify false negatives/positives of the model that are outside the convex hull of the data samples, beyond what a held-out test set already provides. A small perturbation of the data samples would probably not help a lot with this problem either, as the distance of the samples outside the convex hull would likely be much larger than any perturbation can be without introducing too many false positives. The added teapot experiment is appreciated, this is closer to the image domain most readers will be interested in (although something like a small 2D circle at random locations in e.g. 
the upper half of the image might address extrapolation outside the convex hull a bit more directly). However, it is likely that this overfitted model does not only have false negatives outside the convex hull, but also introduces holes inside the convex hull that the proposed method can detect. It would be more interesting to show that the proposed method can more accurately distinguish between a well-trained model, and an overfitted model than existing measures of model generalization/robustness, like a held-out test set. So I think the effectiveness of the method described in the paper is still quite hard to judge given the lack of comparisons to existing methods and the small set of toy experiments. But the idea of using topological data analysis to evaluate a model seems quite novel and interesting, and might inspire future work. Considering this and the clarifications promised by the authors, I raise my score by one point.
null
null
null
null
Dataset Diffusion: Diffusion-based Synthetic Data Generation for Pixel-Level Semantic Segmentation
Accept (poster)
Summary: - The paper introduces a method of generating synthetic training data using Stable Diffusion (SD) for semantic segmentation. - The class labels are appended to captions, which are then used as text prompts to SD to generate synthetic images. - The segmentation map is generated by refining the cross-attention map (using only the class name as text prompt) using the exponentiated self-attention map. - The generated segmentation masks with uncertainty regions are used as pseudo-labels for training segmentation models. - Self-training is performed and the resulting model is evaluated using Test Time Augmentation Strengths: - Use of self-attention to improve the obtained segmentation mask - Including uncertain regions in the generated segmentation masks is a good idea, as the masks are extracted from attention maps and hence not high-quality - Ablation of different components in Table 3 helps understand relative contributions (though more details on the ablation experiment setup would be helpful for the reader) - Visualization of failure cases Weaknesses: - L107: “Their text-prompts inputs to SD are simple” - which is inaccurate, as the other methods also explore different ways of prompting. Similar to this work, the use of ChatGPT to generate prompts is also explored in [9] - Finetuning on a small amount of real data improves the performance of [8] significantly and it reaches mIoU higher than training on real data only. A similar experiment here would show how a small amount of real data can be leveraged - Ablation: simple text prompt with all class labels (i.e. 
Row1 + Row3 in Table 2) - In Table 2, the biggest boost seems to come from using all class labels - If simple text prompts with all class labels work well, we could do away with the time-consuming extra step of using ChatGPT, BLIP, for generating complex prompts - The idea of using self-attention for improving the segmentation mask obtained from cross-attention is also explored in [9] -- not referenced in text when discussing - Missing comparison with [9] Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - is TTA used when evaluating other methods as well? - visualization of self-attention maps would be good to have - [8] shows results on open vocabulary segmentation and domain generalization as well, which would make the work more comprehensive Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations have been discussed in the paper. - limited to domains/classes for which stable diffusion can generate images - limited by the complexity of images stable diffusion can generate - segmentation masks obtained from the method can be noisy/low-quality - amount of synthetic data possible to generate depends on the inference speed of SD Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: L107: "Their text-prompts inputs to SD are simple" - which is inaccurate as the other methods also explore different ways of prompting. Similar to this work, the use of ChatGPT to generate prompts is also explored in [9].**\ **A1:** What we meant by saying these text-prompt inputs to SD are simple is that only a single object per image is considered in these prompts, not that the prompt construction is simple. Thank you for pointing this out; we will make it clearer in the revised version. **Q2: Finetuning on a small amount of real data improves the performance of [8] significantly and it reaches mIoU higher than training on real data only. Similar experiment here would show how a small amount of real data can be leveraged.**\ **A2:** Thanks for your comments. Actually, our finding is the opposite, as shown in Tab. E of reviewer *2Ket*: pretraining on a synthetic dataset does not help much in the presence of real data. Moreover, when real data are available, other strategies such as few-shot or semi-supervised learning can improve performance with a small number of labeled images. Thus, we decided not to involve any real data in our approach and evaluation. **Q3: Ablation: simple text prompt with all class labels (i.e. Row1 + Row3 in Table 2). In Table 2, the biggest boost seems to come from using all class labels. If simple text prompt with all class labels work well, we could do away with the time consuming extra step of using ChatGPT, BLIP, for generating complex prompts**\ **A3:** Yes, using all class labels boosts the performance significantly compared to just using the simple prompts. However, there is still a very big gap between all class labels alone and combining them with captions (57.4 vs 62.0 in mIoU).
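The caption + class-label prompt construction discussed in A3 amounts to appending all class names to the image caption. A minimal sketch (the function name and the `"; "` separator are illustrative assumptions, not the paper's exact implementation):

```python
def build_prompt(caption: str, class_names: list[str]) -> str:
    """Append all class labels to the image caption, as in the
    'Caption + Class Labels' configuration (illustrative sketch)."""
    return f"{caption}; {' '.join(class_names)}"

# Example matching the Table 2 row:
prompt = build_prompt("a large white plane sitting on top of a boat",
                      ["aeroplane", "boat"])
# -> "a large white plane sitting on top of a boat; aeroplane boat"
```

The resulting string is then fed to Stable Diffusion as the text prompt.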
**Q4: The idea of using self-attention for improving the segmentation mask obtained from cross-attention is also explored in [9] -- not referenced in text when discussing**\ **A4:** Thanks for pointing this out; we will include it in the revised version. DiffusionSeg [9] uses self-attention and cross-attention to generate segmentation masks, however, in a very different way from our approach. In particular, its self-attention is employed for constructing pixel-wise objectness and pair-wise affinity. Given that information, an energy function for each mask is defined, and the mask is then generated by minimizing that function with an off-the-shelf graph-cut algorithm. In contrast, our method is much simpler: we just multiply the powered self-attention with the cross-attention to obtain the refined cross-attention. **Q5: Missing comparison with [9]**\ **A5:** Please refer to A3 of reviewer *gEQM*. **Q6: is TTA used when evaluating other methods as well?**\ **A6:** Please refer to A5 of reviewer *2Ket*. **Q7: visualization of self-attention maps would be good to have**\ **A7:** Please refer to A2 of reviewer *KgKS*. **Q8: [8] shows results on open vocabulary segmentation and domain generalization as well, which would make the work more comprehensive**\ **A8:** Thank you for your question. We agree that results on open-vocabulary segmentation and domain generalization would make our work more comprehensive in terms of comparison with previous work [8]. For open-vocabulary segmentation, the experiments in [8] (Tab. 3) are conducted as follows: training on a synthetic dataset of the 20 VOC classes, testing on all 20 classes, and splitting the results into two groups: 15 seen classes and 5 unseen classes. This is not the standard setting where the model is trained on seen classes and tested on unseen classes. For completeness, we provide the results of our approach in the same setting as [8] in Tab. H.
**Table H: Zero-shot evaluation**

| Method | Segmenter | Seen | Unseen | Harmonic |
|:--------:|:--------:|:--------:|:--------:|:--------:|
| DiffuMask 60k | Mask2Former | 60.8 | 50.4 | 55.1 |
| Dataset Diffusion 40k | Mask2Former | **62.7** | **50.9** | **56.2** |

For domain generalization or the cross-dataset setting, our method is dataset-agnostic, as it depends on the given class names only. Therefore, training with our synthetic data and then evaluating on different datasets such as COCO or VOC can already be considered cross-dataset evaluation. Since [8] has not released its code yet, we conduct the following similar experiments. For each dataset in Tab. I, we extract the subset of six shared classes between VOC and Cityscapes, i.e., bicycle, bus, car, motorbike, person (human + rider), and train. Results in Tab. I suggest that our generated data achieves competitive results on the VOC and Cityscapes datasets, compared to using other cross-domain training data.

**Table I: Cross-dataset evaluation**

| Train set | Test set | bicycle | bus | car | motorbike | person | train | mIoU |
| -------- | -------- |:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Cityscapes | VOC | 61.3 | 73.2 | 57.2 | 69.8 | 89.9 | 61.2 | 68.7 |
| Dataset Diffusion | VOC | 75.9 | 92.3 | 88.8 | 87.1 | 93.6 | 92.1 | 88.3 |
| VOC | Cityscapes | 74.5 | 48.4 | 94.8 | 49.8 | 87.5 | 19.0 | 62.3 |
| Dataset Diffusion | Cityscapes | 71.2 | 44.7 | 92.0 | 27.8 | 79.2 | 5.0 | 53.3 |

--- Rebuttal Comment 1.1: Comment: We hope that our answers address your concerns. If you have any other concerns, please let us know. Thanks! --- Rebuttal Comment 1.2: Comment: Thank you for providing detailed answers for all issues raised. I am inclined to raise my rating to borderline accept. I had a few additional clarifications: - For Q3 above, what I meant by combining Row1 + Row3 was to provide a combination of a simple text prompt with all labels, which in the example of Table 2 would be "A photo of aeroplane and boat".
If this performs the same as Row 4 (using captions + class labels), we could do away with generating captions. - Is it possible to compare to Table 1b from [9] like they do by converting segmentation maps to bounding boxes? --- Reply to Comment 1.2.1: Comment: Thanks for your questions. **Regarding Q3**, we have taken your suggestion into consideration. We ran the proposed configuration that combines simple text prompts with all class labels (referred to as "simple text prompts with all labels"). We have updated Table 2 in the main paper to the new version, Table 2-new, which now includes an additional row (row 4). The table clearly demonstrates that the performance of the newly proposed prompt aligns with that of class labels alone. This indicates that the captions generated by BLIP contribute significantly to the performance enhancement.

**Table 2-new: Utilizing Simple Text Prompts and Class Labels**

| Method | Example | mIoU |
|--------|--------|:--------:|
| 1: Simple Text Prompts with 1 Label | a photo of **an aeroplane** | 54.7 |
| 2: Captions Only | a large white **airplane** sitting on top of a **boat** | 50.8 |
| 3: Class Labels Only | **aeroplane boat** | 57.4 |
| 4: Simple Text Prompts with All Labels | a photo of **an aeroplane** and a **boat** | 57.6 |
| 5: Caption + Class Labels | a large white plane sitting on top of a boat; **aeroplane** **boat** | **62.0** |

**Regarding the conversion of segmentation maps into bounding boxes**, we have followed your recommendation and report the results in Table M. The table shows that the bounding boxes inferred from our segmentations outperform those inferred from [9] (by approximately 3 points). We intend to incorporate this table into the supplementary material.
**Table M: Single Object Localization**

| Method | VOC07 | VOC12 | COCO20K |
| -------- |:--------:|:--------:|:--------:|
| AttentionCut [9] | 67.5 | 70.2 | 54.9 |
| DiffusionSeg [9] | 75.2 | 78.3 | 63.1 |
| Dataset Diffusion | **77.4** | **81.5** | **66.6** |

--- Rebuttal 2: Comment: Dear wdax, we would love to hear your thoughts. Did the rebuttal and the other reviews change your mind?
Summary: The present paper aims to address the issue of expensive annotation in dense prediction tasks by generating synthetic images and masks with a frozen Stable Diffusion model. First, the authors use captions extracted by BLIP or from the original COCO dataset to generate images with Stable Diffusion, then combine self-attention and cross-attention maps to produce semantic masks. Finally, the segmenter is supervised by the generated masks with an uncertainty-aware operation, and subsequently self-trained on pseudo-labels predicted by the segmenter after first-stage training. This work introduces a simple yet effective method of utilizing text-driven Stable Diffusion to synthesize images and masks, as opposed to relying heavily on labour-intensive annotations, yielding impressive results on the VOC and COCO validation datasets. Strengths: 1. This work applies the pretrained Stable Diffusion model to generate synthesized VOC and COCO datasets, which also enables evaluation of the diffusion model's capability for real-world scene generation and generalization. This approach could substantially reduce the annotation cost of the semantic segmentation task while achieving impressive performance on the VOC and COCO validation datasets. 2. For text prompt generation, this work uses the original captions from the COCO dataset and leverages the BLIP model to generate captions for VOC, and further introduces a calibration operation to address issues of mismatched and missing class names. Compared with DiffuMask, which primarily focuses on generating a single object per image, this work can generate more complex scenes encompassing multiple categories. 3. For synthesized semantic mask generation, this work ensembles the attention maps of both self-attention and cross-attention. Additionally, the authors employ an uncertainty-aware segmentation loss to avoid unconfident parameter updates.
Weaknesses: This work proposes a simple but effective strategy to leverage text-driven Stable Diffusion; however, it lacks novelty. Generating data via diffusion models might introduce computational burdens but holds the potential to effectively address the problem of imbalanced data distribution. Moreover, the submitted version lacks experiments, as detailed in the Questions section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: 1. This work lacks sufficient comparisons of finetuning on real data after training on synthesized data, which has been provided in DiffuMask. 2. The key contribution of this work lies in reducing annotation costs, yet it significantly increases computational costs during training. It would be more convincing to see further comparisons with other generative or self-supervised methods on semantic segmentation tasks in terms of both performance and efficiency. 3. Is Table 1 a fair comparison with the DiffuMask method? Were self-training and TTA applied in the results of this method? 4. Since Stable Diffusion is kept totally fixed, how about the results across datasets? 5. The submission could benefit from meticulous proof-reading and clearer claims. For instance, there are inconsistencies between the values in Table 1 and those reported in the ablation study. Additionally, could the authors specify the type of segmenter and backbone used in the ablation study? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: This work proposes a simple but effective strategy to leverage text-driven stable diffusion however lacks novelty.** \ **A1:** Our proposed method introduces a simple and unique mask generation process: self-attention maps are raised to the power $\tau$ and then multiplied with the cross-attention maps. To the best of our knowledge, no prior work has explored this particular technique; therefore, we believe the method should not be considered to lack novelty. **Q2: This work lacks sufficient comparisons of finetuning on real data after training on synthesized data which has been provided in DiffuMask**\ **A2:** We provide comparisons of fine-tuning on real data after training on synthesized data in Tab. E below. We conduct this experiment with the Mask2Former segmenter and a ResNet-50 backbone. Training directly on 5k real images yields 77.0 mIoU (\%), while pre-training on the 60k synthetic images generated by DiffuMask and then fine-tuning on the 5k real images yields only a marginal improvement to 77.6 mIoU (\%). Thus, once real data are provided, it is not necessary to pretrain on synthetic data as proposed in one of the experiments of DiffuMask. Therefore, we decided not to include any real data in our approach or evaluation, so as to study the quality of the synthetic data in isolation. However, to fulfill the request, we also ran Mask2Former pretrained on our synthetic data and obtained slightly better results than DiffuMask.
**Table E: Comparisons of fine-tuning on real data after pre-training on synthetic data.**

| Pretrained on synthetic data | Real data | mIoU (\%) |
|:--------:|:--------:|:--------:|
| Not used | VOC 5k | 77.0 |
| DiffuMask (60k images) | VOC 5k | 77.6 |
| Dataset Diffusion (40k images) | VOC 5k | **78.0** |

**Q3: The key contribution of this work lies in reducing annotation costs, yet it significantly increases computational costs during training.**\ **A3**: Our approach only employs a pretrained Stable Diffusion model without retraining or fine-tuning it, so generating synthetic data is a fairly affordable and fully automated process. In contrast, a real dataset requires manual data collection and annotation, which arguably costs more to obtain the same number of annotated images as a synthetic dataset. Furthermore, by employing synthetic data, we can effectively address the issue of imbalanced data distribution, as suggested by the reviewer. **Q4: It would be more convincing to see further comparisons with other generative or self-supervised methods on semantic segmentation tasks in terms of both performance and efficiency.**\ **A4:** For other generative models like GANs, please refer to A5 of reviewer *gEQM*. For self-supervised methods, we provide the comparison in Tab. F below. Our synthetic data generation approach significantly outperforms self/un-supervised approaches using real images by a very large margin.

**Table F: Comparisons with other self-supervised methods on VOC2012 *val set***

| Method | mIoU (\%) |
| -------- |:--------:|
| CLIP$_\text{py}$ ViT-B | 54.6 |
| MaskDistill+CRF | 48.9 |
| Leopart | 47.2 |
| MaskDistill | 45.8 |
| Dataset Diffusion (ours) | 64.8 |

**Q5: Is Table 1 a fair comparison with the DiffuMask method? Were self-training and TTA applied in the results of this method?**\ **A5**: In Table 1, we ensure a fair comparison with DiffuMask [8] because both methods train the segmenter with only pure synthetic data.
In addition, DiffuMask did use self-training; however, TTA was not mentioned. To address this potential discrepancy and ensure a comprehensive analysis, we present additional results without TTA in Tab. G below. We still outperform DiffuMask by a margin of 2.0 mIoU even without TTA.

**Table G: Comparative results without using TTA.**

| Training set | Segmenter | Backbone | mIoU |
| -------- | -------- | -------- |:--------:|
| VOC | DeepLabV3 | ResNet50 | 76.2 |
| VOC | DeepLabV3 | ResNet101 | 78.7 |
| Dataset Diffusion | DeepLabV3 | ResNet50 | 59.9 |
| Dataset Diffusion | DeepLabV3 | ResNet101 | 63.1 |
| DiffuMask | Mask2Former | ResNet50 | 57.4 |
| Dataset Diffusion | Mask2Former | ResNet50 | 59.4 |

**Q6: Since the stable diffusion is totally fixed, how about the results of crossing datasets?**\ **A6**: Our method is a dataset-agnostic approach like DiffuMask; it depends on the given class names only. Therefore, training with our synthetic data and then evaluating on different datasets such as COCO or VOC can already be considered cross-dataset evaluation. Nevertheless, for completeness we also provide the results of cross-dataset evaluation following the setting of DiffuMask [8] in Tab. I of A8 of reviewer *wdax*. **Q7: The submission could benefit from meticulous proof-reading and clearer claims. For instance, there are inconsistencies between the values in Table 1 and those reported in the ablation study. Additionally, could the authors specify the type of segmenter and backbone used in the ablation study?**\ **A7:** Thanks! To clarify, the ablation study was conducted using 20k images without self-training and test-time augmentation (TTA) unless explicitly stated in the experiment (L247), whereas the main results presented in Table 1 were achieved by utilizing 40k captions, self-training, and TTA. We used the DeepLabV3 segmenter and ResNet101 backbone for the ablation study.
--- Rebuttal Comment 1.1: Comment: I appreciate the authors' dedication to providing further clarification and incorporating additional experimental findings. After reading the review from reviewer gEQM, I share some of the same concerns about whether the proposed method is still useful for non-common scenes. I will maintain my rating of 'borderline accept'. --- Reply to Comment 1.1.1: Comment: Thanks for your response. We are still working on the new image domains suggested by reviewer gEQM, since we have to rerun the whole experiment for them. Stay tuned! We will keep you posted.
Summary: The paper addresses the problem of training data preparation for machine learning tasks using generative models. The paper proposes a way to generate a pixel-level semantic segmentation dataset using Stable Diffusion. Given a set of target classes, ChatGPT produces input text prompts that, along with real captions, are used to prompt Stable Diffusion, which can sample images. To generate the corresponding segmentation maps, the self-attention map is used to refine the cross-attention map for each of the target classes. The estimated segmentation masks are then used to train a semantic segmentation network using an uncertainty-aware loss and a self-training methodology. The efficacy of the method is presented through experiments on the PASCAL VOC and COCO datasets. The existing datasets are further enriched by including captions generated from BLIP. Comparisons are made by training DeepLabV3 and Mask2Former on the real dataset, the synthetic dataset from DiffuMask, and the dataset from the proposed methodology. Ablations along the impact of different design choices are clearly presented in Table 3, along with feature scale in Table 4 and the hyperparameters for defining the uncertainty in the generated mask. Strengths: The paper is well presented, with motivations first, followed by technical details described in enough depth to reproduce the results. The idea of using Stable Diffusion as a data source is not novel, as presented in StableRep and InstructPix2Pix, though the current paper lays out the method to use its intermediate activations for generating synthetic segmentation masks for training. The comparisons are promising, and furthermore the ablations provide enough justification for the design choices. The method is well described, to the extent that it is sufficient for reproducing the results. The presented results are commendable and in line with other results on the use of synthetic data for downstream tasks and representation learning [StableRep].
I am positive about the use of Stable Diffusion priors for extracting useful training data. While the results in this paper don't beat training with the real datasets from COCO and PASCAL VOC, the results point to the promise of synthetic data and the priors of generative models. Weaknesses: 1. The method requires a fixed set of test classes to be defined beforehand. This is a limitation, as the method cannot be used to generate segmentation masks for unseen classes or be extended in an open-vocabulary manner [See OpenSeg]. 2. I would like some more discussion on why self-attention has the information to further refine the cross-attention map. Given the NxN structure of the self-attention map, understanding the process that leads to refinement would be helpful for the reader. 3. How sensitive is the method qualitatively to the choice of the hyperparameters for defining the uncertainty in the generated mask? It would be helpful to see how the uncertainty parameters impact the segmentation masks qualitatively. 4. Since the model is heavily dependent on the quality of data in the LAION dataset, the bias in the data probably transfers to the generated dataset? A section in the main paper or supplementary about some of the known biases in the generated dataset would be helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the uncertainty parameters also be conditioned on the input text prompt and the generated segmentation mask using existing segmentation data and its augmentation? 2. Minor suggestion: Please add a few more examples to Figure 6 as there is still some space. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Sufficient discussion was included.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The method requires a fixed set of test classes to be defined beforehand. This is a limitation as the method cannot be used to generate segmentation masks for unseen classes or extend it in an open-vocabulary manner [See OpenSeg]**.\ **A1:** Since our method is one of the first works using a text-to-image model to generate images and semantic segmentation, we evaluate in the standard semantic segmentation setting and have not considered the unseen-class or open-vocabulary setting. Moreover, given the synthetic dataset, we can treat it as a real dataset and apply SOTA approaches for unseen-class or open-vocabulary settings such as OpenSeg; we believe they would work fine. However, this is out of the scope of this paper, and including it might make the paper more complicated and cluttered, since we focus on the quality of the generated synthetic dataset. **Q2: I would like some more discussion on why self-attention has the information to further refine the cross-attention map. It seems like because of the NxN structure of the self-attention map, understanding the process that leads to refinement would be helpful for the reader.**\ **A2:** We really appreciate your comment. Cross-attention maps capture the correlation between each position of the latent representation and the tokens of the text embedding. However, the cross-attention map only captures salient parts of the object and ignores non-salient ones. In these cases, the self-attention maps, with their ability to capture pairwise correlations among positions within the latent representation, can help propagate the initial cross-attention maps to highly similar positions, e.g., non-salient parts of the object, thereby enhancing their quality. Also, we provide an illustration of correlation maps extracted from self-attention maps in Fig. 1 of the global attached PDF. **Q3: How sensitive is the method qualitatively to the choice of the hyperparameters for defining the uncertainty in the generated mask?
It would be helpful to see how the uncertainty parameters impact the segmentation masks qualitatively.**\ **A3**: Thanks! We provide qualitative results with different hyperparameters for defining the uncertainty in the generated masks in Fig. 3 and Fig. 4 of the global attached PDF. **Q4: Since the model is heavily dependent on quality of data in LAION dataset, the bias in the data probably transfers to generated dataset?**\ **A4**: We really appreciate your insight. Yes, the bias in the LAION dataset may be transferred to the generated dataset. This is a current limitation of Stable Diffusion, as it was trained on a large-scale uncurated dataset like LAION. However, there are several studies addressing the bias problem in generative models: + **Seshadri, Preethi, Sameer Singh, and Yanai Elazar. "The Bias Amplification Paradox in Text-to-Image Generation." arXiv preprint arXiv:2308.00755 (2023)**: examines bias amplification in text-to-image generation, focusing on gender biases. + **Friedrich, Felix, et al. "Fair diffusion: Instructing text-to-image generation models on fairness." arXiv preprint arXiv:2302.10893 (2023)**: mainly discusses biases related to gender and human behavior. + **Su, Xingzhe, et al. "Manifold-Guided Sampling in Diffusion Models for Unbiased Image Generation." arXiv preprint arXiv:2307.08199 (2023)**: proposes a method to estimate the data manifold from the training data; the manifold is then used as a constraint to guide the sampling process in diffusion models, mitigating general data bias. We believe that these studies and future work on fairness in GenAI will help mitigate the bias in the generated images. We will include this discussion in the revised version.
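As a rough illustration of the uncertainty handling discussed in A3 above: pixels whose refined attention score falls in an ambiguous band between a low and a high threshold are marked with an ignore index, so they do not contribute to the segmentation loss. This is a minimal sketch under assumed shapes, thresholds, and a background-class convention, not the paper's exact implementation:

```python
import numpy as np

IGNORE_INDEX = 255  # convention used by common segmentation losses

def masks_with_uncertainty(attn, low=0.3, high=0.6):
    """attn: (K, H, W) refined cross-attention scores for K classes,
    assumed normalized to [0, 1]. Returns an (H, W) label map where
    pixels with ambiguous scores are marked IGNORE_INDEX.
    The thresholds `low`/`high` are illustrative assumptions."""
    scores = attn.max(axis=0)     # confidence of the best class per pixel
    labels = attn.argmax(axis=0)  # provisional class per pixel
    labels[scores < low] = 0      # confidently background (class 0 assumed)
    labels[(scores >= low) & (scores < high)] = IGNORE_INDEX  # uncertain band
    return labels
```

A segmentation loss configured with this ignore index (e.g. a cross-entropy loss with `ignore_index=255`) then skips the uncertain pixels during training.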
**Q5: Can the uncertainty parameters also be conditioned on the input text prompt and the generated segmentation mask using existing segmentation data and its augmentation?**\ **A5**: If we understand your question correctly, you are asking whether we can have an adaptive threshold for each text prompt rather than a fixed threshold. Yes, we can adapt the threshold to the given text prompt. However, doing so is not straightforward without significant modifications, such as a network that predicts the adaptive threshold given the text prompt and the produced segmentation map. Training such a network requires a separate dataset, which is also a non-trivial effort. Therefore, we opted to use a fixed threshold for all text prompts as proposed. We still think your idea is interesting and worth future work. Thanks! **Q6: Minor suggestion: Please add a few more examples to Figure 6 as there is still some space.** \ **A6**: Thank you for the suggestion. --- Rebuttal Comment 1.1: Comment: Hopefully, our responses addressed your questions and concerns. If you have any further questions, please let us know. Thank you! --- Rebuttal 2: Comment: Dear KgKS, we would love to hear your thoughts. What do you think of the rebuttal and the other reviews? --- Rebuttal 3: Comment: Thanks to the authors for addressing all concerns in the review. - It would be helpful if the section on why self-attention helps in refining the cross-attention could be expanded with a simple experiment to demonstrate this, or by building the intuition using the case where this method is not used. - Discussion on the bias transfer can be added to the limitations or discussion section so that the reader is aware of this when using the proposed method. Possible ideas for mitigating this bias would be helpful as well. I will keep the score the same as Accept. Thanks for writing an insightful paper. --- Rebuttal Comment 3.1: Comment: Thank you for your valuable suggestions and encouraging comments.
We greatly appreciate your input and will incorporate these discussions into the revised version. Regarding the query about self-attention, we plan to enhance Tab. 5 in the main paper with the updated Table 5-new provided below. Specifically, we introduce a new column with $\tau=0$, signifying the absence of self-attention for refining cross-attention. The revised table demonstrates that self-attention significantly enhances performance by refining cross-attention, resulting in a notable increase of approximately +15 mIoU. Additionally, we intend to integrate Fig. 1 from the attached global PDF into the main paper. This inclusion will clearly illustrate how self-attention aids in refining cross-attention.

**Table 5-new. Absence of Self-Attention Refinement**

| $\tau$ | 0 | 1 | 2 | 3 | 4 | 5 |
| -------- |:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| mIoU | 44.8 | 59.5 | 60.5 | 60.2 | **62.0** | 60.5 |
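The refinement ablated above can be written compactly. Below is a minimal NumPy sketch under our reading of the rebuttal ("self-attention powered to $\tau$, then multiplied with cross-attention"): a row-stochastic self-attention map $S$ over the $N$ image tokens is raised to the matrix power $\tau$ and multiplied with the cross-attention map $C$ (image tokens vs. $K$ class tokens), propagating attention mass from salient positions to similar non-salient ones; $\tau=0$ leaves $C$ unchanged, matching the 44.8 mIoU column. The shapes and normalization are assumptions, not the paper's exact implementation:

```python
import numpy as np

def refine_cross_attention(self_attn, cross_attn, tau=4):
    """self_attn: (N, N) row-stochastic self-attention over N image tokens.
    cross_attn: (N, K) cross-attention between image tokens and K class tokens.
    Returns the refined (N, K) map S^tau @ C; tau=0 applies no refinement,
    since the zeroth matrix power is the identity."""
    S_tau = np.linalg.matrix_power(self_attn, tau)
    return S_tau @ cross_attn
```

With each row of `self_attn` summing to 1, repeated multiplication behaves like diffusing the cross-attention scores along token-similarity edges, which is one way to read why larger $\tau$ (up to 4) helps before over-smoothing sets in.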
Summary: This work is about automatically generating synthetic data for training semantic segmentation models. Such synthetic data includes realistic input images along with their corresponding ground-truth semantic masks. The authors employ text-to-image diffusion models (Stable Diffusion in this work) and propose a specific prompting and ground-truth mask generation scheme. The input prompts for Stable Diffusion are a concatenation of an image-level caption (given or generated) and the list of category names contained in an image. The ground-truth segmentation masks are extracted by combining the self-attention (image tokens) and cross-attention (image and class-name text tokens) maps. The evaluation is done on two datasets, Pascal and COCO, and shows superior accuracy compared to one concurrent work (DiffuMask). The ablation study demonstrates the (positive) impact of all aspects of the proposed framework. Strengths: - The general problem of generating synthetic training data for segmentation models from vision-language models is interesting and promising. - The proposed prompting for Stable Diffusion is simple but effective. - The paper is well written and easily comprehensible. The figures give a good overview and also explain each of the proposed aspects of the work well. - All aspects of the proposed solution are evaluated in the ablation study. Weaknesses: - L123: Unfortunately, I think creating synthetic data is most useful for exactly those applications that are not based on everyday scenes. There are plenty of datasets with everyday scenes already available, like Pascal, COCO or ADE20K. - Figure 2 and L133: Doesn't using captions from VOC and COCO bias the whole system and make it unfair when evaluating on those same datasets? Other methods that do not leverage these captions may be at a disadvantage. Using only generated captions would be fine, though.
It would have been great to see the difference in final accuracy for ground truth captions and generated captions. - Table 1: The comparison to prior work is lacking. There is only a single comparison to a prior/concurrent work. Did DiffuMask [8] also use self-training? And why is there no comparison to DiffusionSeg [9]? Why is there no comparison to GAN-based methods for synthetic dataset generation? Is there a way to use the evaluation protocol from [8, 9] for a comparison? **Post-rebuttal:** I acknowledge that I read all reviews and the author's feedback. The author's feedback clarified several of my concerns and I raised my rating to "borderline accept". I'd love to see a discussion about the limitations (and potentially some supporting numbers) in the paper, as discussed in the comments. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Questions:** - The feature scales in Table 4 look sensitive. Do you get a similar conclusion for other datasets? **Suggestions:** - L35: I think it would be useful to the reader if some high-level context is given on how DiffuMask (or the proposed method) achieves generation of segmentation masks. - L147: Shouldn't $M$ rather be $M_i$ if it is image-dependent? - L148: Does concatenating two strings really need a method name like "text appending operation" or "class-prompt appending technique"? It's a very basic operation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, limitations have been addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: I think creating synthetic data is most useful for exactly those applications that are not based on everyday scenes. There are plenty of datasets with everyday scenes already available, like Pascal, COCO, or ADE20K.** \ **A1:** Thanks for your question. We want to clarify that our approach supports all image domains that a text-to-image model (Stable Diffusion, in our case) can generate. In the paper, we present the experiments on everyday scenes like VOC and COCO since we only have ground-truth semantic segmentation from this image domain for evaluation. Furthermore, even for the everyday image domain, there are still cases such as imbalanced data distributions, long-tail distributions of object categories, or rare classes that these datasets cannot represent, but our approach can generate synthetic datasets for these cases very well, as exemplified by some examples of rare classes in Fig. 2 of the global attached pdf. We also select 5 rare classes in the LVIS dataset to train a semantic segmenter on limited real training images (30 images in total) and a sufficient synthetic set (250 images, 50 per class) and report the results in Tab. A below. Our synthetic data generation is an effective tool for tackling rare classes.

**Table A: Results on five rare classes of the LVIS dataset**

| | horse buggy | garbage | hippopotamus | dice | mallet | mIoU |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Training set of LVIS | **84.1** | 19.6 | 39.7 | 42.9 | 21.9 | 41.6 |
| Dataset Diffusion | 82.6 | **68.9** | **63.8** | **64.2** | **43.0** | **64.5** |

**Q2: Figure 2 and L133: Doesn't using captions from VOC and COCO bias the whole system and make it unfair when evaluating on those same datasets? Other methods that do not leverage these captions may be at a disadvantage. Using only generated captions would be fine, though. 
It would have been great to see the difference in final accuracy for ground truth captions and generated captions.**\ **A2:** Thanks! We consider two kinds of generated captions: from an image captioner and from an LLM like ChatGPT. For the former, we already tested it in the VOC experiments since we do not have GT captions for this dataset. For the latter, we show the results on VOC in Tab. B below with 40k prompts generated using ChatGPT. We observe no considerable differences in performance between the two kinds of generated captions. An example of a text prompt generated by ChatGPT is "A yellow compact car driving through a city next to buses; car, bus". On COCO, we are the first to report results of using synthetic data to train a semantic segmenter; thus, no comparison is available. It's worth noting that our proposed evaluation protocol based on the image caption aims to serve as a standard benchmark for future work in this direction. That is, we primarily focus on better segmentation techniques rather than on better prompt engineering with ChatGPT.

**Table B: Results in mIoU (%) on the VOC test set with captions generated by BLIP and text prompts generated by ChatGPT, using DeepLabV3 without TTA and self-training**

| Source captions | ResNet50 | ResNet101 |
|:--------|:--------:|:--------:|
| Captions from BLIP | 58.5 | 62.2 |
| Prompts from ChatGPT | 58.3 | 61.2 |

**Q3: Table 1: The comparison to prior work is lacking. There is only a single comparison to a prior/concurrent work. Did DiffuMask [8] also use self-training? Why is there no comparison to DiffusionSeg [9]? Is there a way to use the evaluation protocol from [8, 9] for a comparison?** \ **A3:** It's worth noting that we are one of the first works exploring the direction of generating a synthetic dataset for semantic segmentation using a text-to-image diffusion model. Other concurrent works like [8, 9] were published on arXiv and have not published their code yet. 
Among them, only DiffuMask [8] can be directly compared to ours, and we follow their evaluation setting on VOC. [8] also reported results with the self-training strategy on VOC. On the other hand, DiffusionSeg [9] focuses on saliency detection instead and did not release code. Therefore, we cannot compare our approach with them. **Q4: Why is there no comparison to GAN-based methods for synthetic dataset generation?** \ **A4:** GAN-based approaches like DatasetGAN [1], BigDatasetGAN [3], [38], and [39] are tested on different tasks such as part segmentation and keypoint detection in [1], and single object segmentation or saliency detection in [3], [38], and [39]. Also, it is not trivial to modify their code to work with multiclass semantic segmentation as in VOC or COCO. Therefore, we cannot compare directly with these GAN-based approaches. However, we still did our best to adapt [39] to work with VOC and report the results in Tab. C, where the 2 classes "person" and "horse" are excluded. The results in the table demonstrate the superior performance of our approach (a diffusion-based method) over the GAN-based approach for multiclass semantic segmentation.

**Table C: Comparison results with [39] on 18 VOC classes**

| Method | mIoU |
|:--------:|:--------:|
| [39] (GAN-based) | 20.3 |
| Dataset Diffusion | **62.5** |

**Q5: The feature scales in Table 4 look sensitive. Do you get a similar conclusion for other datasets?** \ **A5:** Yes, we also get a similar conclusion on COCO with 20k captions. We conduct this ablation study with the same setting as Tab. 4. We also achieve the best results when using a cross-attention map at resolution 16 and a self-attention map at resolution 32, as shown in Tab. D below. Hence, the results are consistent across VOC and COCO. 
**Table D: Study on different feature scales on COCO**

| Cross-attention | Self 32 | Self 64 |
|:--------:|:--------:|:--------:|
| 8 | 16.2 | 15.5 |
| 16 | **25.1** | 23.9 |
| 32 | 21.6 | 21.7 |
| 64 | 15.7 | 15.2 |
| 16,32 | 24.2 | 23.5 |
| 16,32,64 | 23.7 | 23.8 |

**Suggestions**: Thanks so much, we will revise the main paper accordingly. --- Rebuttal Comment 1.1: Comment: **Q1:** > In the paper, we present the experiments on everyday scenes like VOC and COCO since we only have the ground-truth semantic segmentation from this image domain for evaluation

There exist many segmentation datasets that could be used for evaluation I guess. [PapersWithCodes](https://paperswithcode.com/datasets?task=semantic-segmentation&page=1) provides a long list, including datasets for domains like satellite images, facial part segmentation, driving scenes, etc. **Q4:** > GAN-based approaches like DatasetGAN [1], BigDatasetGAN [3], [38], and [39] are tested on different tasks such as part segmentation

You could also evaluate your method on part segmentation I guess, no? --- Reply to Comment 1.1.1: Comment: Thanks for your suggestions. We are working on your suggested image domains and will report the results soon. We really appreciate your patience. --- Rebuttal 2: Comment: Thanks for doing the additional experiments. These certainly help address my concerns. And I think they also do point out the limitations of the proposed method better. Domains that are not well handled by the underlying diffusion method won't work that well. Kind of obvious, but still important I think. I'll raise my rating, but I'd love to see a discussion about the limitations (and potentially some supporting numbers) in the paper. --- Rebuttal Comment 2.1: Comment: We would like to express our gratitude for your valuable comments helping point out the image domains where our approach does not work well. As promised earlier, we will add the discussion and supporting numbers to the paper. Thanks so much!
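The feature-scale ablation above (Tab. 4 / Tab. D) picks cross-attention at resolution 16 and self-attention at 32; before the two maps can be combined they must be brought to a common grid. A minimal sketch under assumed details — nearest-neighbour upsampling and a 64×64 target grid are illustrative choices, not the paper's stated implementation (which would more plausibly use bilinear interpolation):

```python
import numpy as np

def upsample(attn, target):
    # Nearest-neighbour upsampling of a square attention map by an
    # integer factor (a simple stand-in for bilinear resizing).
    factor = target // attn.shape[0]
    return attn.repeat(factor, axis=0).repeat(factor, axis=1)

# The resolutions the ablation selects: cross-attention at 16,
# self-attention at 32, both resized to a common 64x64 grid.
cross_up = upsample(np.random.rand(16, 16), 64)
self_up = upsample(np.random.rand(32, 32), 64)
```

Once both maps live on the same grid, any per-pixel combination (e.g. refining cross-attention with self-attention affinities) is well defined.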
Rebuttal 1: Rebuttal: We thank all reviewers for their positive feedback. All the reviewers recognize that the problem of generating synthetic training data for semantic segmentation from a text-to-image diffusion model is interesting and promising. In addition, they also find that our paper is well-written and easy to follow. Moreover, reviewers *2Ket* and *wdax* praise that the proposed approach, incorporating self-attention and cross-attention maps to generate the semantic masks, is simple but effective and can generate more complex semantic segmentations than DiffuMask. They also agree that employing uncertainty-aware segmentation is a good idea to alleviate the effect of imperfect semantic masks. Furthermore, reviewers *gEQM*, *KgKS*, and *wdax* also compliment our comprehensive ablation study. Also, *gEQM* and *2Ket* appreciate our proposed simple but effective prompting for Stable Diffusion. Finally, *wdax* comments that we have a good visualization of failure cases. Below, we address other comments point by point. Pdf: /pdf/2129cd2ae41c8205c4159482e838a4a8015ef1bc.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer
Accept (poster)
Summary: The authors propose training a small(er) classifier model (‘Cappy’) to predict an answer given a set of possibilities (if the answer set is closed) or multiple generations from an LLM (in the case of open-ended/generative tasks). The model is trained on data from T0, using LLMs to generate partially correct responses alongside positive and negative examples. They evaluate performance on T0 evaluation tasks (classification) and big-bench (generative) and find that Cappy outperforms same-size models, and consistently provides improvements over a base model when picking from generations. Strengths: The proposed approach is straightforward, using a trained classifier as a reranker for model generations. The improvements seem significant, and the fact that a classifier outperforms a similar-size generative model is interesting to see. The data gathering scheme for Cappy’s training data is interesting, and using rouge-l as a proxy for gold labels is interesting and seems to work well in ablations. The writing is clear and the paper is easy to follow. Weaknesses: **Comparisons to Best-of-N** - I’m somewhat concerned about the novelty of this work. The proposed method is very similar to the best-of-N sampling originally proposed in [1] (“Best-of-N sampling”) and discussed (sometimes under ‘reranking’) in [2] [3], [9], inter alia. These methods work by sampling multiple LM generations and then picking the answer with the highest reward as chosen by a trained reward model. The three core differences between these prior works and Cappy I can detect are: 1. Different use in classification - Cappy directly uses the set of possible answers for classification tasks, while these prior works do not. 2. Training data - Cappy is trained on prompt source-based data with augmentation, which is interesting and seems to help performance, while (afaik) best-of-n uses trained reward models, which are often trained using preference datasets. 3. 
Application - best-of-n approaches are often applied and focussed on how they aid with aligning to human preferences, while the focus in this work is on improving over benchmarks. While these are interesting differences, the authors do not compare to these prior works or make these comparisons explicit. At the very least, I would expect these methods to be discussed in a related work section, and the decisions made in training Cappy contrasted to them. Ideally, they would be integrated as baselines/ablations to justify differences, highlighting the merits of Cappy over these approaches and covering what is needed to adapt these approaches to Cappy’s chosen setting. **Missing prior work for reranking** - Cappy is effectively serving as a reranker in the generative case. There is a long history of work in answer reranking that would be useful to discuss as related work (e.g., [4] for multi-choice qa, [5,7] for summarisation, [5,6] for open-domain qa). **Further baselines and ablations** - There are other methods for training and applying rerankers, including contrastive training (as used to train a reranker in [7], with rouge as a proxy for human judgements), or encoding multiple samples at once as opposed to one-at-a-time (as done in [4]). It would be useful to test these changes, especially given the authors suggest that making use of contrastive information is helpful for Cappy in line 202. It would also be interesting to compare against self-consistency [8], another approach for improving models that makes use of multiple generations. Given all of these things, I am inclined to recommend rejection for this paper in its current state. Further changes differentiating this approach against best-of-n sampling and further experiments exploring the full space of answer reranking would greatly improve this work, and may sway my opinion, but require significant extra work. 
I think the overall idea is interesting and evidently effective, but requires further work to make a substantial contribution. I hope the authors make these changes and are successful in the future! [1] Stiennon et al. (2020). Learning to summarize from human feedback. NeurIPS. https://arxiv.org/pdf/2009.01325.pdf [2] Bakker et al. (2022). Fine-tuning language models to find agreement among humans with diverse preferences. ArXiv. https://arxiv.org/pdf/2211.15006.pdf [3] Glaese et al. (2022). Improving alignment of dialogue agents via targeted human judgements. ArXiv. https://arxiv.org/abs/2209.14375 [4] Kratzwald et al. (2019). RankQA: Neural Question Answering with Answer Re-Ranking. ACL. http://aclanthology.lst.uni-saarland.de/P19-1611.pdf [5] Revaut et al. (2022). SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization. ACL. https://aclanthology.org/2022.acl-long.309.pdf [6] Lee et al. (2018). Ranking Paragraphs for Improving Answer Recall in Open-Domain Question Answering. EMNLP. https://aclanthology.org/D18-1053.pdf [7] Liu et al. (2021) SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. ACL. https://aclanthology.org/2021.acl-short.135/ [8] Wang et al. (2023). Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR. https://arxiv.org/pdf/2203.11171.pdf [9] Dubois et al. (2023). AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. ArXiv. https://arxiv.org/abs/2305.14387 Edit: The authors have responded with some of the experiments and comparisons I asked for, with positive results, and so I'm happy to raise my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is cappy different to best-of-n techniques proposed in the papers mentioned above? Do you think using preference data already available could help or augment your data collection strategy? 2. Did you explore contrastive objectives in pretraining cappy? 
How about encoding multiple answers at once? 3. Do you have further details on the distribution of rouge-l scores you used in the data augmentation setup? It would be interesting to see this, and do you think there are any limitations in using rouge-l against something like cosine similarity for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think the authors effectively discuss limitations. They miss some elements such as Cappy requiring multiple generations (thus increasing its inference cost over using a single-generation) and complicating the model pipeline, but cover Cappy’s weakness in the realm of complex logical problem-solving / mathematics, and its current reliance on supervised datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
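The review's central comparison point — best-of-N sampling and Cappy's use at inference — boils down to the same argmax selection: sample several candidate responses and keep the one a scorer rates highest. A minimal sketch with hypothetical `generate` and `score` stand-ins for an LLM sampler and a Cappy-style (instruction, response) scorer:

```python
from typing import Callable, List

def best_of_n(instruction: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    # Sample n candidate responses, then return the one the scorer
    # rates highest -- the argmax selection shared by best-of-N
    # sampling with a reward model and Cappy's LLM augmentation.
    candidates: List[str] = [generate(instruction) for _ in range(n)]
    return max(candidates, key=lambda r: score(instruction, r))
```

For closed answer sets (the classification case), the same function applies with `candidates` replaced by the fixed list of possible answers, so no sampling is needed.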
Rebuttal 1: Rebuttal: Thank you for your feedback that our data construction method is interesting, that Cappy’s improvements over other methods are significant, and that our writing is clear and easy to follow. **Comparison with Reranking and best-of-N sampling**\ We appreciate the suggestions on various sample selection methodologies and your insights on the distinctions between Cappy and these techniques. To summarize and extend your valuable observations, the key difference between Cappy and previous reranking and best-of-N sampling methods lies in two aspects: (1) Compared with methods that conduct ranking based on reward models [1, 2, 3, 9], Cappy doesn’t rely on expensive human-annotated data, which enables our large-scale pretraining; (2) Unlike reranking methods specifically tailored for QA [4, 6] or summarization [5, 7], Cappy exhibits broader generalizability across multi-task settings. However, we would also like to clarify that our key contribution is constructing a large-scale dataset, delivering the pretrained model Cappy, and applying Cappy to multi-task applications. In terms of sample selection, we actually use a rather simple argmax strategy that picks the sample with the largest score. This sample selection is also used in our Self-scoring baseline. As of now, we have not incorporated any fancy sample selection techniques such as best-of-N rejection sampling [1] or answer aggregation [8]. That being said, we will definitely explore more suitable sample selection strategies for Cappy in our future research. **Contrastive objective, encoding multiple answers at once, human preference data**\ We do incorporate contrastive information during the training of our model. However, this is not achieved through an explicit contrastive loss function or a model that encodes multiple answers at once. 
Instead, the contrastive information is sourced from our pretraining data, where several examples share the same instruction but with different responses and score annotations. We have demonstrated the effectiveness of this incorporation by Cappy’s performance in the multi-task scenarios presented in our experiments. As for the use of human preference data, we acknowledge this can be very beneficial. However, integrating human-annotated data would require further algorithmic design. It is also worth noting that obtaining human preference data can be costly, and the volume might be insufficient for large-scale pretraining. We remain committed to exploring strategies for effectively integrating human feedback in our future work. **Using Rouge-L score as a weak supervision**\ Thanks for the suggestion to analyze the Rouge-L distribution. Among all of our pretraining data of 160M data points, 30M samples are labeled 1.0, 90M samples are labeled 0.0, and the other 40M have score annotations between 0 and 1. Specifically, the average score of the third part is 0.3, and the whole distribution can be visualized in our author rebuttal pdf. From this analysis, we can see that all the scores ranging from 0 to 1 have a significant number of samples, which demonstrates the effectiveness of our data construction. We agree that Rouge-L may not be the optimal proxy of the correctness of model generations. Cosine similarity may measure the semantic information better than the n-gram-based Rouge-L. However, using Rouge-L could be a more suitable design choice for our large-scale pretraining, because (1) cosine similarity is less efficient: it would be time-consuming to get an embedding and run a similarity score for every example in the large pretraining dataset. (2) It would require specific embedding models for tasks in a specific domain, while Rouge-L doesn’t have this limitation. (3) The community has not reached a consensus on the best metric across all the tasks. 
Nonetheless, Rouge-L is commonly used in multi-task scenarios to report model performance for generation-style tasks, such as in the OPT-IML paper. (4) As a weak supervision for a subset of data in our large-scale pretraining, we acknowledge and accept that some data examples might not be perfectly labeled. (5) Moreover, Cappy’s performance in our experiments is further evidence that Rouge-L is a reasonable design choice. Investigating the most suitable metric for multi-tasking is a highly valuable research direction. Thank you for pointing this out, and we will keep exploring this in the future. Thanks for suggesting more potential limitations of our work. We will add our discussion to all these points in our next paper update. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Hi, thank you for the detailed response! I agree that the collection/generation of the dataset for training the cappy model is novel and interesting, and the way cappy is applied is simple. Additionally, thank you for the clarification with the contrastive data, that makes sense, and thank you for the rouge-l data. It’s interesting that it's mostly centred around 0.2, which suggests lots of generations with low overlap - it makes me wonder if better balancing the distribution might improve results somewhat (but this is beyond the scope of this paper). However, I disagree that the methods in [1,8] are significantly different or fancier than what is proposed here. As stated on page 24 of [1], their best-of-n procedure is to “Sample N summaries from a supervised baseline at temperature 0.7, score them with a reward model, and take the summary with the highest score”. This is very similar to Cappy (argmax based on an answer rating module), but using slightly different decoding strategies, and using a reward model instead of Cappy. There are publicly available datasets for training reward models (e.g. 
https://huggingface.co/datasets/stanfordnlp/SHP, https://huggingface.co/datasets/Anthropic/hh-rlhf), and I think it is reasonable to compare a model trained using the Cappy data to models trained using these datasets - my guess is cappy would do better since it is focussed on correctness instead of helpfulness/harmfulness, but since the method is so close to prior best-of-n work, I think it is a necessary comparison. In my personal opinion, this is why I lean still reject and am not updating my score, but if other reviewers and AC are satisfied the method is novel enough, and the comparisons are rigorous enough, I think the other aspects of the work are solid and the empirical results certainly still useful for the community. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging that our response has addressed most of your previous concerns regarding our novelty, the incorporation of contrastive information, and the usefulness of our empirical results. We would like to emphasize that the focus of our work is multi-task learning where tasks are clearly defined, and the incorporation of human feedback is largely orthogonal to this work. Specifically, our application domain is consistent with those well-accepted pretrained multi-task LLMs mentioned in our paper, such as T0, FLAN, and OPT-IML. Notably, all of these models do not rely on costly human annotations during pretraining, and they often would not be directly compared with models using human preference data, in either the multi-task learning or RLHF literature, mainly based on the consideration of (1) fair comparison, and (2) their different application domains. Nonetheless, we appreciate your suggested comparison mentioned on page 24 of [1], i.e., using a reward model trained on human preference data to conduct argmax sample selection without applying RL algorithms. We include an experiment on this below. 
Specifically, we add two baselines with publicly available reward models trained by LAION-AI (OpenAssistant), including

* RLHF RM-large: https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2
* RLHF RM-base: https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base

They are trained on the combination of these four human preference datasets:

* Summarize_from_feedback (the human preference dataset in [1]): https://huggingface.co/datasets/openai/summarize_from_feedback
* Anthropic_hh-rlhf (the one in your response): https://huggingface.co/datasets/Anthropic/hh-rlhf
* Webgpt_comparisons: https://huggingface.co/datasets/openai/webgpt_comparisons
* Synthetic-instruct-gptj-pairwise: https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise

As a result, our Table 1 is updated as below. Consistent with our initial expectations, Cappy outperforms the baselines with RLHF reward models. It is also interesting that the reward models trained from human preference data can outperform some multi-task LLMs like OPT-IML-30B. That also reflects the advantage of incorporating contrastive information over relying exclusively on ground truth data. We will update the results and add the discussion in our next paper update.

| Model | Accuracy |
|----------------------|----------|
| BART0-base (140M) | 45.7 |
| BART0-large (400M) | 50.2 |
| OPT-30B | 47.6 |
| OPT-IML-30B | 51.3 |
| OPT-175B | 49.3 |
| OPT-IML-175B | 56.5 |
| T0-11B | 58.2 |
| **RLHF RM-base (185M)** | 43.6 |
| **RLHF RM-large (435M)** | 53.3 |
| Cappy-base (120M) | 49.9 |
| Cappy-large (360M) | 56.6 |
Summary: This paper introduces an auxiliary module called Cappy, which aims to enhance the performance of large language models. Cappy operates by taking a task instruction and a proposed response as input, and it estimates the quality score for the response. The training process involves creating a dataset that combines correct and mismatched answers with various instructions, and the Cappy scorer is then trained using a regression objective. In downstream adaptation, the small scorer, Cappy, is fine-tuned on a small dataset that follows the same approach as the training set. During testing, predictions are determined by selecting the answer choice with the highest score. In essence, Cappy acts as a valuable filter that enhances the overall quality of generated responses. Strengths: The idea of leveraging a lightweight scorer as a ‘’filter’’ is interesting. Fine-tuning the filters on downstream tasks can be efficient and practical. Weaknesses: Cappy itself is only a filter for in-domain responses and cannot impact the intrinsic ability of a pre-trained LLM. This method cannot handle tasks outside the LLM's expertise, emphasizing the need for a sufficiently general and powerful LLM. Alternatively, fine-tuning LLMs, as shown in Table 3, may be necessary but costly. By comparing Table 2 and Table 3, fine-tuning is still essential for task adaptation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: No additional limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments that our idea is interesting, and our proposed downstream adaptation approach is efficient and practical. **Not handling tasks outside the LLM's expertise**\ Indeed, the primary objective of Cappy is to enhance performance on tasks where the backbone LLM possesses a fundamental understanding of the data input. Notably, many multi-task LLMs, including FLAN-T5 utilized in our experiments, exhibit proficiency across a wide range of domains, encompassing areas like medicine, law, and coding. **Comparison of Cappy with finetuning**\ We would like to clarify that Cappy is not meant to beat other adaptation methods, especially finetuning. Compared with other adaptation approaches, Cappy is an alternative free from the constraints associated with storage, device memory, model accessibility, and training sample limitations. Moreover, Cappy doesn’t make any assumption about the backbone model, enabling seamless integration with other adaptations. As we show in Table 3, Cappy provides steady and significant performance improvement with little additional cost. --- Rebuttal Comment 1.1: Comment: I appreciate your response. The strength of Cappy seems to lie in its ability to **adapt a single LLM to several domains** using lightweight filters, which is difficult to achieve through fine-tuning. I think emphasizing this in the paper could enhance its presentation. However, this is a good and practical work. I will keep my initial rating. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback that this is a good and practical work! Thanks for the suggestion regarding our presentation. We agree that our adaptation experiments can be summarized as “adapting a single LLM to several domains using a lightweight filter”. And we will definitely make this point clearer in the revised version. To add a little bit, in the future, Cappy as a pretrained model can potentially be used in other creative ways beyond single LLMs. 
For example, Cappy might be used as a filter for generations from multiple LLMs. In this case, Cappy plays the role of selecting the best LLM for a specific input. We will add the discussion, potentially with related experiments, to the next version of our paper.
Summary: Adapting SotA LLMs to novel tasks is difficult given their extreme size, yet is generally more effective than in-context learning alone. To address this, the authors propose “Cappy”, a scoring model (leveraging RoBERTa as a backbone) which is trained to score pairs of (instruction, response) for any task instruction and corresponding response. Cappy is pre-trained on many examples of ((instruction, response) -> score); after pre-training, Cappy can be easily fine-tuned (because of its small size) to a given task and then used as an augmentation to LLM generation by acting as a ranking mechanism atop several LLM generated responses. To create pre-training data for Cappy, the authors use the PromptSource dataset, training the model to predict 1 for correct pairs of (prompt, response) and 0 for mismatched pairs. Additionally, the authors generate additional responses using a large LLM (e.g. FLAN) and use the Rouge-L between the generated response and gold response as the regression target for the pair (prompt, generated response). To test the validity of Cappy, the authors evaluate Cappy in isolation, with no additional fine-tuning, on 11 held-out classification tasks from PromptSource. They show that Cappy consistently ranks the correct answer higher than many instruction-tuned LLMs, including OPT-IML-175B. Next, the authors evaluate Cappy on LLM augmentation using the BIG-Bench benchmark. For each task, Cappy is first fine-tuned, then used as a scoring function for multiple generations from a frozen FLAN-T5 LLM. Cappy consistently outperforms the proposed sampling methods, most notably improving upon the LLM's own self-scoring and In-Context Learning. Moreover, the authors show that Cappy can improve upon a FLAN-T5 model’s performance even when the LLM parameters are fine-tuned as well. 
Finally, the authors present an ablation study of their proposed training strategy for Cappy, showing that both the overall pre-training strategy and using Rouge-L regression targets from LLM generations are important to Cappy’s success. Strengths: - Cappy represents a light-weight option to adapt LLMs to a given target task which requires no updating of parameters to the LLM nor any back-propagation through the LLM whatsoever (e.g. this is not true for adapter layers or prompt tuning). - Moreover, even without adapting to a given task, Cappy presents some benefits over generic LLM sampling methods for task-specific predictions. - Additionally, Cappy can ostensibly be added on top of any LLM without additional training (i.e. train once, use anywhere). - Cappy is a very simple idea, and is easy to understand, yet the usage of a scorer in this capacity, as well as the training and data augmentation scheme, is novel to the best of my knowledge. - The ablation study is helpful in demonstrating the benefits of the proposed pre-training and data-augmentation scheme. Weaknesses: - Cappy is not compared to other parameter-efficient adaptation methods. While there is some justification given, because other methods require storing forward activations and back-propagating through LLM parameters, it nevertheless makes judging the effectiveness of Cappy difficult. - Moreover, Cappy’s improvements over a frozen LLM are comparatively small, e.g. while Cappy improves Flan-T5 Large performance by ~7%, fine-tuning alone increases performance by ~22%. It’s therefore unclear how significant Cappy’s benefit is without comparison to other adaptation methods. - In the zero-shot experiments (4.1) it is claimed that the improved performance of Cappy over much larger LLMs can be attributed to Cappy’s scoring-based learning strategy. 
However, T0 outperforms Cappy and T0 is significantly smaller than the OPT models; moreover, T0 is also trained on PromptSource only, similar to Cappy, and unlike the other LLMs. Thus, it seems that at least some of Cappy’s performance benefits compared to other LLMs may arise from domain mismatch between training and test domains, rather than the proposed scoring function. - While Cappy does only require a small number of parameters to train, inference requires a number of decoder forward passes to generate multiple candidate responses from an LLM (it is also not clear how Cappy scales with the number of samples provided); to my knowledge, other adaptation methods such as Adapter Layers do not have the same limitation. So while it saves computation on one end, it does slightly increase computation on another end, and exploring this trade-off would be useful. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - I think it would be really helpful to understand how Cappy compares to other parameter-efficient adaptation methods. Do you have any experiments that indicate that Cappy is e.g. comparable to prompt-tuning in terms of its benefits? - Can you provide an analysis of how much additional computation is required to sample many generations for Cappy to score? - Also, do you know how Cappy's performance scales with the number of generated samples? e.g. does its performance flatten out after only 4 samples, or can its performance increase significantly if several generations are considered? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
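The pre-training data construction described in the review summary above (score 1 for correct pairs, 0 for mismatched pairs, Rouge-L labels for LLM generations) can be sketched roughly as follows; the function name, input format, and negative-sampling details are illustrative assumptions, not the authors' actual pipeline:

```python
import random

def build_training_triples(examples, generations, rouge_l, seed=0):
    """Build (instruction, response, score) regression triples.

    examples:    list of (instruction, gold_response) pairs
    generations: dict mapping instruction -> list of LLM-sampled responses
    rouge_l:     callable(candidate, reference) -> float in [0, 1]
    """
    rng = random.Random(seed)
    triples = []
    for instruction, gold in examples:
        # Correct pair: score 1.0
        triples.append((instruction, gold, 1.0))
        # Mismatched pair: a gold response from a *different* example, score 0.0
        other = rng.choice([g for i, g in examples if i != instruction])
        triples.append((instruction, other, 0.0))
        # LLM generations: weakly labeled with Rouge-L against the gold response
        for cand in generations.get(instruction, []):
            triples.append((instruction, cand, rouge_l(cand, gold)))
    return triples
```

A regression model (e.g. a RoBERTa-based scorer, per the review) would then be trained to predict the third element from the first two.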
Rebuttal 1: Rebuttal: Thank you for your supportive comments, including that our approach is novel, our delivered model Cappy is light-weight and beneficial, and our ablation study is well conducted. **Comparison of Cappy with other adaptation methods**\ We would like to clarify that Cappy is not meant to beat other adaptation methods such as fine-tuning, in-context learning, and prompt tuning. Compared with these approaches, adaptation with Cappy is an alternative that is free from the constraints associated with storage, device memory, model accessibility, and training-sample limitations. Moreover, Cappy makes no assumptions about the backbone model, enabling seamless integration with other adaptation methods, on top of which Cappy also provides steady and significant performance improvement at little additional cost. To illustrate this, we add an experiment below that combines Cappy with in-context learning and prompt tuning. Specifically, we add comparisons with prompt tuning and in-context learning to our BIG-Bench adaptation experiment with FLAN-T5-Large as the backbone model. For prompt tuning, we apply prefix tuning, which is usually considered suitable for generation tasks, with 20 virtual tokens. As demonstrated by the results presented below, Cappy offers a further performance boost on top of in-context learning and prompt tuning.

| Setting | Rouge-L |
|-------------------------------------------|---------|
| frozen FLAN-T5-Large + Cappy-Large (ours) | 30.08 |
| In-context learning + Nucleus | 22.59 |
| In-context learning + Self-scoring | 27.00 |
| In-context learning + Cappy-Large (ours) | **31.84** |
| Prompt-tuning + Nucleus | 34.00 |
| Prompt-tuning + Self-scoring | 38.43 |
| Prompt-tuning + Cappy-Large (ours) | **42.71** |

**Train/test domain mismatch**\ In fact, PromptSource is part of OPT-IML's pre-training data, and OPT-IML encompasses even more training tasks than Cappy, so the performance gap cannot be attributed to a train/test domain mismatch. Thanks for asking this! 
We will make this clearer in our next paper update. **Computational cost of multiple generations**\ Given Cappy’s small size, the predominant computational overhead arises from generating multiple samples with the backbone LLM. That is, if N samples are generated, the computational cost is N times that of a standard single-sample generation, assuming the samples are not batch-processed. However, collecting multiple samples for a single prediction is a common operation in many algorithms, such as self-consistency [1] and some re-ranking techniques designed for QA [2] and summarization [3]. Furthermore, by leveraging the batch-processing capabilities of GPUs/TPUs, the actual time overhead compared with single-sample generation can be significantly less than N-fold. This efficiency potentially explains the continued popularity of these techniques.

[1] Self-Consistency Improves Chain of Thought Reasoning in Language Models\
[2] RankQA: Neural Question Answering with Answer Re-Ranking\
[3] SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization

**How performance scales with the number of samples**\ We appreciate such a meaningful suggestion. We conduct this experiment on a frozen FLAN-T5-11B model with three settings:

* 1 sample: a single nucleus sample (top-p=0.95)
* 4 samples: 4 nucleus samples
* 20 samples: 4 samples from each of 5 decoding methods (Random Sampling, Temperature, Top-K, Nucleus, Beam Search)

Results are shown below: as the number of samples increases, Cappy consistently enhances task performance significantly, in contrast with the Self-scoring baseline.

| | 1 sample | 4 samples | 20 samples |
|--------------------|----------|-----------|------------|
| Self-scoring | 27.33 | 31.15 | 32.62 |
| Cappy-Large (ours) | 27.33 | **33.64** | **36.56** |

Thanks for the suggestions and meaningful points above! We will add all the discussion above to the next version of our paper.
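The sample-then-rerank procedure discussed in the rebuttal above (draw N candidates from the backbone LLM, score each, keep the argmax) can be sketched as follows; `generate` and `score` are placeholders standing in for the LLM sampler and a trained Cappy-style scorer, not real APIs:

```python
def rerank(instruction, generate, score, n_samples=4):
    """Pick the candidate response that the scorer rates highest.

    generate: callable(instruction) -> str   (one sampled LLM response)
    score:    callable(instruction, response) -> float in [0, 1]
    """
    candidates = [generate(instruction) for _ in range(n_samples)]
    scored = [(score(instruction, c), c) for c in candidates]
    return max(scored)[1]
```

Because the scorer only reads (instruction, response) pairs, no gradients ever flow through the LLM, which is the property the rebuttal emphasizes; the cost is the N generation passes discussed above.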
Summary: In this paper, the authors tackle the challenge of computational requirements and memory constraints in fine-tuning Large Language Models (LLMs) by introducing an innovative approach that enhances LLM performance without the need for backpropagation through the LLM or access to its parameters. They propose Cappy, a pre-trained scorer that evaluates texts generated by LLMs based on downstream instructions and assigns them a score ranging from 0 to 1. To create Cappy, the authors curate a diverse collection of 39 samples from the PromptSource dataset, including good (ground truth), bad (random), and intermediate examples. The intermediate examples are generated using top-k and top-p sampling from BART0 and T0-3B models. Cappy assigns a score to each example based on the ROUGE-L metric compared to the ground truth response. This dataset is used to create a regression task and train a RoBERTa model that serves as the foundation for Cappy. The authors demonstrate improved accuracy across 11 classification held-out tasks from PromptSource compared to OPT, OPT-IML, T0, and BART0. They also showcase Cappy's effectiveness as an LLM booster for downstream adaptation on the BIG-Bench generative tasks, outperforming other selection strategies and achieving higher ROUGE-L scores on both frozen and fine-tuned FLAN-T5 models. Additionally, the paper discusses scenarios where Cappy performs less favorably than the "self-scoring" strategy on certain BIG-Bench tasks, suggesting that the lack of "memory" may contribute to this performance difference. Furthermore, the authors conduct an ablation study, revealing that data augmentation using LLMs is more crucial for improved performance than pre-training Cappy, although pre-training still offers benefits. Overall, the paper presents an innovative approach, Cappy, for boosting LLM performance without backpropagation or parameter access. 
The experiments showcase its superiority in classification tasks and downstream adaptation, while highlighting the importance of data augmentation and the potential limitations related to memory. Strengths: - The idea of an auxiliary performance booster is an intriguing concept. The demonstrated improvements over the selected baselines highlight the effectiveness of this approach in enhancing language model performance. - The utilization of LLM generations for data augmentation and the regression-based evaluation using ROUGE-L scores is a novel and innovative methodology employed in this work. - Cappy introduces a unique integration of samples from LLMs and ground-truth information, leveraging correctness scores to enhance the quality of text generations. This ability to distinguish between good and poor responses is a novel idea that has the potential to enhance the performance of all language models. - Despite having significantly fewer parameters compared to large language models used in zero-shot baselines, pre-trained Cappy achieves comparable performance. This contribution is valuable as it demonstrates that Cappy can achieve performance on par with larger models, offering a more efficient and resource-friendly solution. - The versatility of Cappy is showcased by its potential for downstream adaptation and further fine-tuning to improve task-specific performance. This adaptability highlights the broad applicability and utility of Cappy as a solution in various contexts. Weaknesses: - The performance of Cappy, trained on BART0 and T0 generations, does not consistently outperform T0 and OPT-IML. While the zero-shot performance is better than BART0 (a relatively smaller model), it falls short compared to T0 in most cases. The evidence supporting the claim of boosting language model performance is not sufficiently strong, and the improvements are not consistently observed. 
- The training of Cappy to optimize ROUGE-L scores against ground truth data raises concerns about the choice of this metric. ROUGE-L may not necessarily be the most optimal metric for evaluation, as alternative responses with low ROUGE-L scores may be valid and informative. Including additional metrics for evaluation would provide a more comprehensive understanding of the overall increase in performance. - The discussion on the tasks in BIG-Bench where Cappy performs worse than the "self-scoring" strategy is insufficient. Given the expectations for Cappy to perform at least as well, if not better, further analysis and exploration of these instances would strengthen the paper and provide a clearer understanding of the limitations of Cappy's performance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Could you provide an example illustrating how the limitation of instruction lengths affects in-context learning? Adding a specific example would enhance the understanding of this limitation and its implications. - In the context of downstream adaptation, it is not entirely clear whether the same LLM that is being augmented is used to generate the synthetic data. Clarifying this aspect in the paper would help readers better comprehend the process of downstream adaptation and the relationship between the augmented data and the LLM. - The reasoning behind the requirement for more "memory" in tasks like "sufficient_information" is unclear. It would be beneficial to provide a more detailed explanation either in the paper or in the appendix to shed light on the specific aspects that require increased memory in such tasks. - The results show that removing the Cappy pre-training step does not significantly impact the downstream score on BIG-Bench, implying that fine-tuning with augmented data from LLMs is sufficient. Have you explored the performance of RoBERTa initialization with data augmentation using LLMs? 
It would be insightful to investigate and report the results of this comparison to further validate the efficacy of the proposed approach. Minor Issues: - Line 186: PromprSource -> PromptSource - Line 285: 360 -> 360 M - Line 222: freezed -> frozen - This issue exists at several places in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors thoroughly discuss various limitations, including mathematical complexities, reliance on supervised datasets, absence of multi-lingual extensions, and other important concerns. However, it would be beneficial to further address limitations related to the reliance on generations from existing LLMs and potential biases that may arise as a result. Additionally, the use of ROUGE-L as the sole metric for regression and evaluation may not provide a comprehensive and accurate portrayal of the overall performance. It would be valuable to acknowledge this limitation and discuss the potential impact on the interpretation and generalization of the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive feedback that our idea of an auxiliary performance booster is intriguing, our methodology is innovative, our delivered model Cappy is versatile, and our multi-task application with Cappy is valuable, efficient and resource-friendly. **Boosting language model performance** \ Indeed, Cappy doesn’t beat T0 in the zero-shot setting. However, it actually achieves an accuracy very close to that of T0 (Cappy 56.6 vs. T0 58.2, as illustrated in Table 1), considering the substantial difference in their model sizes (Cappy 360M vs. T0 11B). Furthermore, our BIG-Bench adaptation experiments demonstrate that Cappy provides steady performance improvement for LLMs of varied sizes, whether frozen or fine-tuned. Based on all these observations, we conclude overall that Cappy boosts multi-task language models. **Using Rouge-L score as weak supervision**\ We agree that Rouge-L may not be the optimal proxy for the correctness of model generations. Our choice of Rouge-L is primarily based on three considerations: (1) The community has not reached a consensus on the best metric across all tasks. Nonetheless, Rouge-L is commonly used in multi-task scenarios to report model performance for generation-style tasks, such as in the OPT-IML paper. (2) As weak supervision for a subset of data in our large-scale pre-training, we acknowledge and accept that some data examples might not be perfectly labeled. (3) Moreover, Cappy’s performance in our experiments provides further evidence that Rouge-L is a reasonable design choice. That being said, investigating the most suitable metric for multi-task applications is a highly valuable area of research. Thank you for pointing this out, and we will keep exploring this in the future. **Further analysis on BIG-Bench experiments**\ We appreciate the suggestion and we will dive deeper into more tasks on which Cappy doesn't perform better than Self-scoring. 
Here we present two more such tasks below – “operators” and “physics_questions”. In line with the tasks elaborated on in our paper, they also demand heavy math and commonsense/physics knowledge, respectively. “operators”:\ *Instruction: Given the definition of the op operator, compute the result. op n1 n2 ... nn extracts the last multiple of 8 from the n listed numbers. op 4 32 128 132 =*\ *Target: 128* “physics_questions”\ *Instruction: Q: The historic Stanley Center for the Arts in Utica, New York is the proud owner of the world’s largest LED chandelier. The chandelier is 35 feet wide, 17 feet tall and has a mass of 2900 kg. It is directly supported by four cables which make an angle of 63° with the horizontal. Determine the tension in the cables. A:*\ *Target: 7974 N* **Response to Questions** * For example, consider an LLM with a large context length of 2048, and a task such as NLI, where the average length of each training example is approximately 32. In this case, in-context learning can accommodate at most about 2048 / 32 = 64 demonstrations within the input context. However, many complex downstream tasks have tens of thousands of training examples, such as many tasks in BIG-Bench. * Yes, in the downstream adaptation, the augmented data for Cappy’s fine-tuning comes from the model to be enhanced. * The task requiring memory ability is actually “codenames”, where a long list of words is given and the model needs to remember and identify a couple of them with a given association. * Our ablation of Cappy pre-training already uses RoBERTa initialization with data augmentation. We will make this clearer in our next paper update. Thanks for pointing out our typos and suggesting more potential limitations! We will fix/add them in the next version of our paper.
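The Rouge-L weak supervision discussed in this rebuttal can be illustrated with a minimal LCS-based Rouge-L F1 over whitespace tokens; real implementations (e.g. in evaluation toolkits) add tokenization, stemming, and a configurable F-beta, so this is only a sketch of the core metric:

```python
def rouge_l(candidate, reference):
    """LCS-based Rouge-L F1 over whitespace tokens (no stemming/tokenizer)."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming longest common subsequence
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ct == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

A score like this, computed between an LLM generation and the gold response, is what serves as the regression target for the weakly labeled portion of the pre-training data.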
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their insightful comments. We are encouraged by the reviewers' appreciation that our idea of auxiliary performance booster is intriguing (uzD3), that our methodology is novel and innovative (uzD3, XRdH), that our delivered model Cappy is valuable, efficient, resource-friendly, versatile and practical (uzD3, hbCr), that our ablation study is well conducted (XRdH, K4oy), and that our paper writing is clear and easy to follow (K4oy). Pdf: /pdf/0d4538523318e3f3b32d813993bddb26dafe6bf8.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model
Accept (poster)
Summary: >**Rebuttal:** The provided details satisfy my concerns. I think this paper should be accepted after applying the agreed changes. >**TL;DR:** **Good paper.** The proposed WTA-CRS algorithm is based on the existing CRS algorithm and is used to reduce activation memory during training. WTA-CRS achieves up to 2.7× peak memory reduction with almost no accuracy drop and enables up to 6.4× larger batch size. However, WTA-CRS comes with computational overhead, which is discussed and explored. Addressing my concerns and questions would improve my score. The paper proposes the WTA-CRS algorithm to reduce the activation memory of neural network training, where the paper claims that activation memory is the primary memory bottleneck during training. The WTA-CRS algorithm is an unbiased estimator for matrix products with reduced variance, which only requires storing the sub-sampled activations for calculating the gradient. WTA-CRS achieves up to 2.7× peak memory reduction with almost no accuracy drop and enables up to 6.4× larger batch size. The WTA-CRS algorithm works by sampling columns and rows to create an unbiased estimate of the original GEMM for the backpropagation. The WTA-CRS algorithm does not alter the neural architecture, and therefore the inference speed is left intact. The experimental section shows that WTA-CRS outperforms existing prior work and is compatible with existing PEFT techniques. WTA-CRS adds a computational overhead due to sampling; however, WTA-CRS enables training with much larger batch sizes, which results in a 1.2× higher training throughput. Strengths: * **S.1.** The proposed WTA-CRS algorithm tackles an important problem in existing PEFT techniques, which makes LLM PEFT training more accessible to researchers with low resources. * **S.2.** The paper provides a theoretical analysis of WTA-CRS. * **S.3.** The proposed WTA-CRS algorithm outperforms existing algorithms. 
* **S.4.** An anonymized code repository is provided as part of the submission for reproduction. Weaknesses: * **W.1.** Popular existing memory efficient training techniques such as tensor rematerialization (gradient checkpointing) [2][3] and ZeRO [1] are not compared to, although some are partially discussed in Appendix A. * **W.2.** The experiments are conducted on a single neural network architecture (T5), although the proposed technique does not seem to be confined solely to that setting. * **W.3.** It is common practice today to train neural networks at a lower precision (quantization); however, it is not clear whether quantization (16-bit) was used. Therefore, there is insufficient evidence that the combined noise of WTA-CRS and quantization would be compatible. **Typos.** * Line #62: "Thus" → "Thus," * Line #240: "mAccording" → "According" * Line #297: "Thus" → "Thus," [1] Ren, J., Rajbhandari, S., Aminabadi, R.Y., Ruwase, O., Yang, S., Zhang, M., Li, D. and He, Y., 2021, July. ZeRO-Offload: Democratizing Billion-Scale Model Training. In USENIX Annual Technical Conference (pp. 551-564). [2] Jain, P., Jain, A., Nrusimha, A., Gholami, A., Abbeel, P., Gonzalez, J., Keutzer, K. and Stoica, I., 2020. Checkmate: Breaking the memory wall with optimal tensor rematerialization. Proceedings of Machine Learning and Systems, 2, pp.497-511. [3] Beaumont, O., Eyraud-Dubois, L. and Shilova, A., 2021. Efficient combination of rematerialization and offloading for training dnns. Advances in Neural Information Processing Systems, 34, pp.23844-23857. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * **Q.1.** In line #43 and Figure 2 it is noted that "storing activations (or feature maps) is the main memory bottleneck during training". Does this hold true for all model architectures? What about LLM training where the fine-tuning batch size is usually very small? 
* **Q.2.** Why was the WTA-CRS algorithm compared to the Deterministic top-k from [1] but not to the Bernoulli-CRS from [1]? What are the key differences between WTA-CRS and Bernoulli-CRS? * **Q.3.** The paper proposes WTA-CRS, which sacrifices computation speed in exchange for lower peak memory. There are several existing common approaches (such as gradient checkpointing and DeepSpeed) for general memory efficient training which are compatible with PEFT techniques. Why are these comparisons not explored or detailed in the main paper? [1] Adelman, Menachem, Kfir Levy, Ido Hakimi, and Mark Silberstein. "Faster neural network training with approximate tensor operations." Advances in Neural Information Processing Systems 34 (2021): 27877-27889. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
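For reference, the CRS estimator that WTA-CRS builds on (see Q.2) approximates a product A @ B by sampling k column-row pairs with probability proportional to the product of their norms and rescaling each draw so the estimate stays unbiased; a minimal numpy sketch of plain CRS (not the paper's WTA variant or implementation):

```python
import numpy as np

def crs_matmul(A, B, k, rng):
    """Unbiased estimate of A @ B from k sampled column-row pairs.

    Pair i (column A[:, i], row B[i, :]) is drawn with probability
    p_i proportional to ||A[:, i]|| * ||B[i, :]||, and each draw is
    rescaled by 1 / (k * p_i), which makes the estimator unbiased.
    """
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(len(p), size=k, p=p)
    return sum(np.outer(A[:, i], B[i, :]) / (k * p[i]) for i in idx)
```

During backpropagation only the k sampled columns of the activation need to be stored, which is where the memory saving comes from; WTA-CRS reduces the variance of this estimator by concentrating on the highest-probability pairs.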
Rebuttal 1: Rebuttal: **[W1, Q3] Popular existing memory efficient training techniques such as tensor rematerialization (gradient checkpointing) and ZeRO are not compared to, although some are partially discussed in Appendix A.** Thank you for the suggestion. We conduct a more detailed comparison between gradient checkpointing and WTA-CRS using the huggingface backend: we set 'gradient_checkpointing = True' in the huggingface backend and report the final memory saving.

Table: Comparison between gradient checkpointing and WTA-CRS in terms of memory footprints (GB)

| Method | T5-Base | T5-Large |
| :---: | :---: | :---: |
| FP | 17.66 | 45.85 |
| Grad-checkpoint | 13.91 (1.27×) | 36.5 (1.25×) |
| LoRA+WTA-CRS@0.3 | 8.44 (2.1×) | 21.58 (2.1×) |
| LoRA+WTA-CRS@0.1 | 7.30 (2.4×) | 18.46 (2.5×) |

Regarding ZeRO [1], to our knowledge, it is mainly designed for offloading the optimizer states, which is orthogonal to activation memory saving. **[W2] The experiments are conducted on a single neural network architecture (T5)** We appreciate the reviewer for this thoughtful comment. We respectfully point out that we already conducted experiments on **both an encoder-only architecture (BERT) and an encoder-decoder architecture (T5) in Table 1**. To further respond to the reviewer's comment, we also conduct additional experiments using the decoder-only architecture OPT. For your convenience, we summarize the results of these experiments in the three tables below: one each for BERT-Large, OPT-350M, and T5-Large. We observe that WTA-CRS exhibits almost no drop in accuracy compared to Full training and LoRA on all three architectures. This observation strongly indicates the effectiveness of WTA-CRS, especially considering its consistent performance across diverse transformer architectures. 
Table: Encoder-only architecture: BERT-Large

| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Full | 66.8 | 89.5 | 72.6 | 90.2 | 79.775 |
| LoRA | 65.9 | 90.8 | 71.3 | 90.3 | 79.575 |
| LoRA+WTA-CRS@0.3 | 66 | 89.7 | 72.4 | 89.7 | 79.45 |

Table: Decoder-only architecture: OPT-350M

| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Full | 49.84 | 85.47 | 72.56 | 84.43 | 73.075 |
| LoRA | 52.3 | 88.36 | 74.01 | 87.21 | 75.47 |
| LoRA+WTA-CRS@0.3 | 51.8 | 88.43 | 74.01 | 86.61 | 75.2125 |

Table: Encoder-Decoder architecture: T5-Large

| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Full | 61.3 | 93.4 | 85.3 | 91.8 | 82.95 |
| LoRA | 63.3 | 93.5 | 84.2 | 91.7 | 83.175 |
| LoRA+WTA-CRS@0.3 | 62.9 | 93.6 | 83.9 | 91.3 | 82.925 |

**[W3] It is not clear whether quantization (16-bit) was used. Therefore, there is insufficient proof that the combined noise of WTA-CRS and quantization would be compatible.** We sincerely appreciate the thoughtful comment provided by the reviewer. We agree with the reviewer's suggestion that including the experimental results of WTA-CRS with bfloat16 quantization is essential to demonstrate its effectiveness under different settings. To address this concern, we have conducted additional experiments of WTA-CRS@0.3 on the T5-Base model with bfloat16 quantization applied to both the weights and activation maps during training. The results of this experiment are presented in the following table. They reveal that WTA-CRS@0.3 with bfloat16 quantization shows almost no drop in accuracy when compared with Full training and LoRA. This result explicitly demonstrates the effectiveness of WTA-CRS even when combined with bfloat16 quantization, indicating its robustness against the underflow noise caused by bfloat16 quantization. Table: Accuracy of WTA-CRS@0.3 on T5-Base with bfloat16 quantization. 
| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| FP32 | 60.1 | 91.5 | 79.4 | 90.6 | 80.4 |
| LoRA-FP32 | 60.6 | 92.2 | 80.6 | 90.7 | 81.0 |
| LoRA+WTA-CRS@0.3-FP32 | 60 | 92 | 80.1 | 90.4 | 80.6 |
| LoRA+WTA-CRS@0.3-BF16 | 60.3 | 92.4 | 80.1 | 90.19 | 80.7 |

**[Q1] Is activation memory still the bottleneck for LLM training?** This is a great question. The short answer is that the bottleneck depends on the fine-tuning setting. In the data-parallel/single-GPU training setting, we need to store the model and optimizer states in GPU memory, and the remaining space is left for holding activations. Thus, when the size of the LLM goes beyond a certain threshold, the model weights/optimizer states become the memory bottleneck. This is also why LLM fine-tuning often comes with a small batch size, resulting in low GPU utilization. However, when the LLM becomes so large that it and its optimizer cannot be held on a single GPU, we must tune it with pipeline/tensor/model parallelism. Such settings require dividing the model into smaller segments, which are then distributed across multiple devices. In this case, each GPU holds only a small part of the model, and the space left for activations becomes much larger; if we then enlarge the sequence length and/or batch size, the activations are still the bottleneck. **[Q2] Why was the WTA-CRS algorithm compared to the Deterministic top-k from [1] but not to the Bernoulli-CRS from [1]?** We thank the reviewer for this comment. We compare our algorithm to Deterministic top-k mainly because Deterministic top-k works better than Bernoulli-CRS in the context of neural networks (Theorem 2 in [2], Figure 1b and Figure 3 in [2]). Thus we follow [2] and compare against top-k CRS instead of Bernoulli-CRS. **[Typos] Line #62: "Thus" → "Thus,"; Line #240: "mAccording" → "According"; Line #297: "Thus" → "Thus,".** We appreciate the reviewer for the thoughtful comments. 
We will fix these typos in our camera-ready version. [1] ZeRO: Memory Optimizations Toward Training Trillion Parameter Models [2] Faster Neural Network Training with Approximate Tensor Operations --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the detailed answers and results. The provided results and details satisfy my concerns. I will update my review accordingly.
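Gradient checkpointing, which the rebuttal above compares against, saves memory by discarding intermediate activations in the forward pass and recomputing them during the backward pass; a minimal numpy sketch for a single ReLU layer (illustrative only, not the paper's or Hugging Face's implementation):

```python
import numpy as np

def checkpointed_layer_grads(x, W1, W2, grad_out):
    """Gradients of y = relu(x @ W1) @ W2 without caching h = relu(x @ W1).

    A standard backward pass would store h during the forward pass; here
    only the layer input x is kept and h is recomputed when the gradients
    are needed, which is the memory-for-compute trade-off of checkpointing.
    """
    h = np.maximum(x @ W1, 0.0)          # recomputed, not cached
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T
    grad_W1 = x.T @ (grad_h * (h > 0))   # ReLU mask from the recomputed h
    return grad_W1, grad_W2
```

This contrasts with WTA-CRS, which instead stores a sub-sampled activation and accepts an approximate gradient; checkpointing keeps the gradient exact at the cost of an extra forward recomputation.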
Summary: In this paper, the authors propose a new method called WTA-CRS (Winner-Take-All Column Row Sampling) to address the main memory bottleneck during training, which arises from storing feature maps. To reduce memory usage during training, the method samples the most likely column indices during backpropagation. Furthermore, the proposed method demonstrates the ability to significantly reduce peak memory usage, by up to approximately 2.7×, when fine-tuning on downstream tasks. It also showcases the potential for higher throughput, enabling more efficient training. Strengths: 1. The work clearly states its motivation and its solution and is easy to follow. 2. The authors show that their method reaches comparable performance with backpropagation using the full activation when combined with LoRA. 3. They also empirically measure throughput gains obtained by increasing batch size, which demonstrates the practical applicability of their method. Weaknesses: 1. The paper needs a comparative analysis against other research aimed at reducing activation memory during the training phase, such as gradient checkpointing/recomputation and CRS, as shown in Fig. 6 and Fig. 9. 2. The paper should include an analysis of the overhead associated with the proposed WTA-CRS method, which involves sampling rows and columns. It is crucial to consider factors such as the computational cost of Equation 3 and any potential effects of lowering on the overall performance. Providing this analysis would enhance the clarity and completeness of the research. 3. There is a need for an analysis of the effectiveness of the proposed approach, WTA-CRS, in distributed training environments such as tensor parallelism or pipeline parallelism. 4. It seems necessary to conduct performance evaluations on various LLMs of the GPT family, such as LLaMA and OPT. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In Figure 9, it can be observed that the throughput of WTA-CRS is lower than that of full when the batch size is small. Is this due to the overhead caused by lowering? * When comparing the training throughput, how does CRS differ from full in terms of throughput? * Could the authors include statistics for GPU utilization in their experiments? It would be helpful to analyze the causes of improved performance more thoroughly. * Considering that most large models are trained using multiple levels of parallelism, would it be possible to verify results for pipeline parallelism, tensor parallelism, etc.? Also, it is unclear from the paper whether the data parallelism used was distributed data parallelism or naïve data parallelism. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: * As previously mentioned, it would be valuable to include additional experimental results for models that are more challenging to quantify, such as GPT-series (OPT, LLaMA). This would enhance the validity and applicability of the proposed method across a broader range of models. * Considering that most large-scale models are trained using multiple levels of parallelism, it is important to assess how much the proposed method can increase throughput under schemes such as pipeline parallelism and tensor parallelism, while taking into account overhead (such as GPU-to-GPU or node-to-node communication), memory reduction, and computational cost. Furthermore, it is not clear from the paper whether the data parallel processing used is distributed data parallelism or naïve data parallelism. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1,Q2] Compare against gradient checkpointing/recalculation and CRS** For the comparison to CRS, from the accuracy perspective, we already compared them in Figure 8. We observe that CRS cannot maintain the accuracy. Thus, it is less applicable to fine-tuning, let alone memory saving. From the memory-saving perspective, CRS and WTA-CRS share the same implementation. Thus, if the number of column-row pairs is the same, their memory saving is exactly the same. For the comparison to gradient checkpointing, **we discussed it in Appendix A**. Here we conduct a more detailed comparison between gradient checkpointing and WTA-CRS within the Hugging Face backend: we set `gradient_checkpointing = True` in the configuration and report the final memory saving.

Table: Comparison between gradient checkpointing and WTA-CRS in terms of memory footprint (GB)

| Method | T5-Base | T5-Large |
| :---: | :---: | :---: |
| FP | 17.66 | 45.85 |
| Grad-checkpoint | 13.91 (1.27×) | 36.5 (1.25×) |
| LoRA+WTA-CRS@0.3 | 8.44 (2.1×) | 21.58 (2.1×) |
| LoRA+WTA-CRS@0.1 | 7.30 (2.4×) | 18.46 (2.5×) |

**[W2,Q1] The analysis of the overhead associated with WTA-CRS** We kindly draw your attention to **Appendix E.2**, where we already conducted an in-depth analysis. We present the table here for your convenience. The following table provides a breakdown of the latency in our implementation: 'Fwd', 'Bwd', and 'F-B' represent the time of the forward pass, the backward pass, and the total time for both, respectively. We summarize that, under the same workload, **the current implementation of WTA-CRS may experience a roughly 20% slowdown in linear operations**. This can be attributed to the extra sampling process. Although we remove 70% of column-row pairs, the backward time is only slightly faster than the baseline. 
This is mainly because the current implementation separately indexes a subset of the gradient tensor before multiplying with the subsampled activations, which incurs substantial extra I/O [2] (also, please check Figure 13 in [2]). **Fortunately, this overhead can be significantly reduced with kernel fusion using Triton [3]. According to Figure 13 in [2], we expect a 2× speedup for the backward pass with this Triton implementation [3].**

Table: Latency (ms) of forward, backward, and forward-backward pass.

| | Method | T5-Attention | T5-FFN | T5-Block | T5-Large |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Fwd | Full | 8 | 10 | 17 | 1052 |
| Fwd | WTA-CRS@0.3 | 22 | 16 | 37 | 2013 |
| Bwd | Full | 16 | 19 | 34 | 2073 |
| Bwd | WTA-CRS@0.3 | 15 | 14 | 30 | 1738 |
| F-B | Full | 24 | 29 | 51 | 3125 |
| F-B | WTA-CRS@0.3 | 37 | 30 | 67 | 3751 |

**[W3, Q4, Limitation2] Evaluate WTA-CRS under tensor parallelism or pipeline parallelism. Also, it is unclear from the paper whether the data parallelism used was distributed data parallelism or naïve data parallelism** For the model/tensor parallelism setting, first, to our knowledge, it is rarely used in the fine-tuning scenario [4], which is the main focus of this paper. For fine-tuning, the ideal case is to use a single GPU to tune a model as large as possible [4]. Second, pipeline/model parallelism requires dividing the model into smaller segments, which are distributed across multiple devices. Thus, it requires the communication of activations between consecutive model parts, potentially causing substantial overhead [5]. In this context, WTA-CRS significantly reduces the communication volume by compressing the activations, thus reducing this overhead [5]. We leave it as future work. 
For the "data parallelism" question, the "data parallelism" in this paper refers to **distributed data parallelism.** **[W4, Limitation1] Conduct performance evaluations on LLMs of the GPT family, such as LLaMA and OPT.** Here we conducted additional experiments applying WTA-CRS to the OPT model, shown in the table below: LoRA+WTA-CRS@0.3 shows almost no drop in accuracy when compared with full training and LoRA. WTA-CRS has been applied to various transformer architectures, including encoder-only (BERT), decoder-only (OPT), and encoder-decoder (T5). With this comprehensive evaluation, we believe that the experiments sufficiently demonstrate the effectiveness of WTA-CRS across diverse transformer architectures.

Table: Experiment results on OPT-350M.

| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Full | 49.84 | 85.47 | 72.56 | 84.43 | 73.075 |
| LoRA | 52.3 | 88.36 | 74.01 | 87.21 | 75.47 |
| LoRA+WTA-CRS@0.3 | 51.8 | 88.43 | 74.01 | 86.61 | 75.2125 |

**[Q3] Statistics for GPU utilization** WTA-CRS has extra I/O (the sampling process) in return for reduced computation (FLOPs). Thus, the GPU utilization of WTA-CRS is expected to be lower than that of standard training. Below we measure GPU utilization using `torch.cuda.utilization` during training of T5-Large. Our results indicate that WTA-CRS reduces the GPU utilization of the forward pass by about 30% due to the extra sampling process. However, we note that **GPU utilization cannot reflect the wall-clock speed**: although the GPU utilization of the backward pass in full training is 100%, the wall-clock time may still be longer than WTA-CRS@0.3, as full training requires 70% more FLOPs (workload), especially with a kernel-fusion implementation [3]. Table: GPU utilization of training the T5-Large model on an NVIDIA A5000 GPU. 
| Method | Fwd | Bwd | Average |
| :---: | :---: | :---: | :---: |
| Full | 75.2% | 100% | 87.6% |
| WTA-CRS@0.3 | 40.6% | 100% | 70.3% |

[1] GACT: Activation compressed training for generic network architectures [2] Deja vu: Contextual sparsity for efficient LLMs at inference time [3] https://github.com/FMInference/DejaVu/blob/master/Dejavu/src/ops/triton/gather_gemv.py [4] QLoRA: Efficient Finetuning of Quantized LLMs [5] Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers and results. I have read the authors' rebuttal as well as the other reviews. I would like to keep my rating. Some of my concerns have been addressed, but I am unsure how memory-efficient the proposed method is compared to other parameter-efficient adaptation techniques like QLoRA [1] and AlphaTuning [2]. Unlike other parameter-efficient adaptation methods, I believe that the approach suggested by the authors may not yield benefits in terms of memory efficiency during the inference process in actual service execution. [1] QLoRA: Efficient Finetuning of Quantized LLMs [2] AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models --- Reply to Comment 1.1.1: Title: Additional Clarification Regarding Inference Time Memory Efficiency Comment: We thank the reviewer for your recognition and active engagement. We find your additional question regarding *inference-time memory efficiency* interesting, though this question might require some background/terminology clarifications to be properly addressed. First, we'd like to point out that **classic parameter-efficient fine-tuning (PEFT) techniques can NOT reduce inference memory usage** (e.g., LoRA [1]). However, **such efficiency can indeed be achieved with PEFT utilized in conjunction with model quantization.** For this conversation, we can roughly categorize PEFT techniques into three groups: 1. 
**Full-precision PEFT**: where both the base model and the PEFT add-ons are in high FP precision (so no inference memory saving), e.g., LoRA [1] and adapter tuning [2]. 2. **PEFT utilized with quantization**: where a standard PEFT technique from #1 is applied to a quantized base model, e.g., QLoRA [3], which uses standard LoRA on an NF4-quantized model (with some extra optimization designs engineered). 3. **Quantization-aware PEFT techniques**: much like #2, but this group of PEFT techniques is designed/adjusted to interact with the quantization procedure, e.g., AlphaTuning [4], which tunes the scaling factors of the quantized base model. Under this landscape, inference-time memory efficiency can only be gained with PEFT #2 and #3, where the fine-tuned model is (at least partially) quantized. We argue that **by leveraging its orthogonality with QLoRA-like techniques, WTA-CRS may achieve the same goal of delivering a quantized fine-tuned model, thus reducing memory usage during inference** (given that WTA-CRS is by nature a randomized algorithm applicable to any matrix multiplication operation, which happens to be prevalent in LoRA-like setups). Below in Table 1, we demonstrate that **applying QLoRA and WTA-CRS over a base model quantized in NF4 results in no performance loss against the QLoRA baseline**. Moreover, such joint applications may enjoy the exciting training-time memory efficiency offered by WTA-CRS (over naïve QLoRA), as illustrated in Table 2. Table 1: Applying WTA-CRS over quantized T5-Base in 4-bit NormalFloat (NF4) data format using `bitsandbytes` [5]. 
| | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| LoRA | 60.6 | 92.2 | 80.6 | **90.7** | 81.025 |
| LoRA + WTA-CRS@0.3 | 60 | 92 | 80.1 | 90.4 | 80.625 |
| QLoRA (NF4) | 61.3 | **92.2** | 81.2 | 90.5 | 81.3 |
| [NEW] QLoRA (NF4) + WTA-CRS@0.3 | **62.1** | **92.2** | **82.7** | 90.1 | **81.775** |

Table 2: Peak memory usage (GB) of fine-tuning T5-Base and T5-Large with different methods.

| Method | T5-Base | T5-Large |
| :---: | :---: | :---: |
| LoRA | 13.84 | 36.83 |
| QLoRA | 13.64 | 36.12 |
| LoRA + WTA-CRS@0.3 | 6.50 | 17.44 |
| QLoRA + WTA-CRS@0.3 | **6.31** | **16.75** |

We believe our added experiments/discussion justify the soundness of WTA-CRS with regard to inference memory efficiency, and we hope the reviewer may consider raising the score should you find it the same way, or specify what else we can offer to facilitate your judgment. --- [1] Hu & Shen et al., LoRA: Low-Rank Adaptation of Large Language Models. ICLR 2022 [2] Houlsby et al., Parameter-Efficient Transfer Learning for NLP. ICML 2019 [3] Dettmers & Pagnoni et al., QLoRA: Efficient Fine-tuning of Quantized LLMs. arXiv 2023 [4] Kwon et al., AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models. EMNLP 2022 [5] https://github.com/TimDettmers/bitsandbytes
Summary: The authors studied fine-tuning LLMs with limited memory. Given the increased scale of current LLMs, the memory cost during fine-tuning is of great importance when adapting pretrained LLMs to downstream tasks. In contrast to existing work that mainly focuses on the number of updated weights, this paper proposed to reduce the number of stored activations, i.e., the inputs to each layer. Given the widely used stochastic gradient descent optimization pipeline, the authors proposed to store a subset of activations that can generate an unbiased gradient estimate. This way, the training memory and the training time decrease significantly. The authors provide both theoretical and experimental analysis of their CRS methods. Strengths: - This paper studied an important problem in LLM fine-tuning, i.e., how to fine-tune LLMs with less memory consumption without increasing the computation cost. The authors provided solid quantitative results to show that the main memory consumption is from storing the intermediate activations. - The authors provided a general solution for fine-tuning LLMs under memory constraints. The solution can be applied to most transformer-based network architectures. - The authors provided a solid mathematical proof of the unbiased gradient estimate, which is especially encouraged. - The extensive experiments on different network architectures showed the efficacy of the methods. - The released code can benefit researchers studying efficient LLM fine-tuning. Weaknesses: - I am not fully convinced by the comment made in Lines 241-244, i.e., that the method in the paper is orthogonal to activation quantization. When activations are quantized into a lower bit width, it is very possible that the number of less important activations will decrease. In this case, the selection of the top-k columns in activation matrices with the proposed method may hurt the training accuracy or convergence. 
It would be great if the authors can provide some theoretical analysis or experimental results on this combination. Otherwise, it would be necessary to provide some comparison results w.r.t. the activation quantization. - It would be great if the authors can discuss the main difference of their paper w.r.t. [Randomized Automatic Differentiation, ICLR2021]. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Overall, I think this paper has a relatively high quality in both writing and scientific contribution. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Whether WTA-CRS is compatible with activation quantization or not** (experiment done) We thank the reviewer for this thoughtful comment. We acknowledge the importance of demonstrating the orthogonality of WTA-CRS with activation quantization. To address this concern, we conducted additional experiments combining WTA-CRS@0.3 with activation quantization@8bit on the T5-base model. The experiment results are presented in the following table. It is notable that the combination of LoRA+WTA-CRS@0.3+Quant@8bit exhibits almost no drop in accuracy compared with full training, LoRA, WTA-CRS@0.3, and LoRA+WTA-CRS@0.3. This observation clearly demonstrates the orthogonality of WTA-CRS with activation quantization, indicating that they can be effectively applied together without compromising performance.

Table: Combination of WTA-CRS with quantization on the T5-base model.

| Method | CoLA | MRPC | RTE | STS-B | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Full | 60.1 | 91.5 | 79.4 | 90.6 | 80.4 |
| LoRA | 60.6 | 92.2 | 80.6 | 90.7 | 81.0 |
| WTA-CRS@0.3 | 60.9 | 91.1 | 78.7 | 90.5 | 80.3 |
| LoRA+WTA-CRS@0.3 | 60 | 92 | 80.1 | 90.4 | 80.6 |
| LoRA+WTA-CRS@0.3+Quant@8bit | 60.3 | 92.06 | 81.2 | 90.4 | 81.0 |

**[W2] Discuss the difference between WTA-CRS and RAD** WTA-CRS and RAD share the same spirit in the sense that they both trade gradient noise in return for reduced memory. However, the main difference between them lies in how they generate the noisy gradient. Specifically, WTA-CRS focuses on approximating the expensive matrix product operation. RAD proposes two noisy-yet-cheap gradient estimators, i.e., path sampling (sampling the computation path) and random matrix injection (applying random projections to activations). These techniques are orthogonal to each other. We will include this discussion in the updated version. --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal as well as the other reviews. I would like to keep my rating.
Summary: The paper's contribution is in proposing a practical, intuitive, yet non-trivial unbiased approximation to gradient training of matrix multiplication. It shows that even though fully deterministic sampling is biased, partially deterministic sampling is unbiased, and a judicious allocation of sampling to those pairs favored by deterministic thinking can allow the use of a larger batch size with empirically negligible performance loss. This reviewer must declare that he did not check the derivations very carefully. Strengths: The proposed idea is practical and can be readily combined with virtually all first-order gradient-based training methods. The paper also derived why deterministic sampling is a biased estimator and empirically showed the associated poor performance, thus establishing that the additional complexity of stochastic sampling over deterministic sampling is not only beneficial but also necessary. Weaknesses: It's just a few empirical comparisons, but the performance gap between CRS and WTA-CRS seems modest. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: This reviewer does not have a question. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for spending time and effort reviewing our paper. We appreciate your constructive comments and suggestions for improving the quality of this work. The feedback truly encouraged us to increase our efforts in conducting quality and impactful research. --- Rebuttal Comment 1.1: Title: Authors' clarification Comment: **The performance gap between CRS and WTA-CRS seems modest.** We sincerely apologize that we previously forgot to reply to this weakness. Here we would like to clarify that (1) we theoretically and empirically show that WTA-CRS has smaller variance than CRS, and (2) the performance gap seems modest due to the extended y-axis range of Figure 8. We summarize the performance gap from Figure 8 in the table below. We can observe that the gap is about 1-3%, which shows the effectiveness of WTA-CRS.

| Method | SST2 | MNLI | QQP | Average |
| :---: | :---: | :---: | :---: | :---: |
| CRS@0.1 | 93.9 $\pm$ 0.1 | 82.2 $\pm$ 0.05 | 85.5 $\pm$ 0.2 | 87.2 $\pm$ 0.1 |
| WTA-CRS@0.1 | 94.7 $\pm$ 0.01 | 85.3 $\pm$ 0.01 | 86.7 $\pm$ 0.1 | 88.9 $\pm$ 0.04 |
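As a numerical complement to the bias/variance discussion above, the claim that fully deterministic top-k selection is biased while properly rescaled stochastic sampling is unbiased can be checked with a small sketch (our own illustration on random matrices, not the paper's code; `k_keep` and the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8))
B = rng.normal(size=(8, 3))
exact = A @ B

# Sampling probabilities proportional to column/row norm products.
norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
p = norms / norms.sum()
k_keep = 4

# Deterministic: always keep the top-k_keep pairs. The contribution
# of the dropped pairs is lost entirely, so the estimate is biased.
top = np.argsort(p)[-k_keep:]
det_est = A[:, top] @ B[top, :]

# Stochastic: sample pairs with probability p_i and rescale each kept
# term by 1 / (k_keep * p_i); averaging many draws recovers A @ B.
draws = []
for _ in range(20000):
    idx = rng.choice(len(p), size=k_keep, replace=True, p=p)
    draws.append((A[:, idx] / (k_keep * p[idx])) @ B[idx, :])
stoch_mean = np.mean(draws, axis=0)
```

On random data the averaged stochastic estimate converges to the exact product, while the deterministic estimate retains a fixed gap; this mirrors the variance comparison in the table above, where WTA-CRS keeps the high-probability pairs deterministically but stays unbiased by sampling the remainder.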
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive comments and helpful feedback. We are encouraged to find that they have found our contributions to be technically solid (ky3t, cMiu, GDNX), timely and relevant for LLM research (cMiu, GDNX), mathematically solid (cMiu, GDNX), and easy to follow (cMiu, j1mL, GDNX). We have additionally performed experiments to address some of the evaluation concerns. Please find below our detailed responses to the questions and concerns raised by the reviewers. We will incorporate all these comments and comprehensive experimental evaluations into the revised manuscript. We are grateful to the reviewers for all the suggestions to improve our work. Best regards, Authors ## Summary of Rebuttal We thank all the reviewers for their constructive comments and helpful feedback. We value their comments sincerely, and do our best to address the concerns. During the rebuttal, we provide the following new supplementary results and analysis: - (cMiu) We have conducted an additional experiment that combines WTA-CRS with activation map quantization. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=S24PVc9nC9) - (cMiu, GDNX) We provide a detailed discussion on the distinctions between WTA-CRS and RAD [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=S24PVc9nC9), as well as ZeRO [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=3w86hHloPJ). - (j1mL) We present a technical comparison of WTA-CRS with CRS and gradient checkpointing, focusing on accuracy and memory cost. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=C8D3lWSbd2) - (j1mL) We emphasize the importance of Appendix E.2 in our paper, as it addresses the overhead analysis of WTA-CRS, as requested by the reviewer. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=C8D3lWSbd2) - (j1mL) We conduct an in-depth analysis of the performance of WTA-CRS in distributed training environments. 
[[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=C8D3lWSbd2) - (j1mL) We provide statistics for GPU utilization to offer further insights into the efficiency of WTA-CRS. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=C8D3lWSbd2) - (j1mL, GDNX) We include an additional architecture, OPT-350M, which is a decoder-only transformer, in our experiments, to demonstrate the effectiveness of WTA-CRS. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=3w86hHloPJ) - (GDNX) We have conducted an additional experiment involving the deployment of WTA-CRS in bfloat16 fine-tuning. [[Redirection]](https://openreview.net/forum?id=SquMNyrk1O&noteId=3w86hHloPJ)
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Persuading Farsighted Receivers in MDPs: the Power of Honesty
Accept (poster)
Summary: This paper studied a Bayesian persuasion problem where the sender and receiver act sequentially. Below are the two main changes in the problem setting of this work: (1) The authors assume that the sender stops providing recommendations to the receiver if the receiver does not follow a recommendation. (2) Under this assumption, the authors also consider farsighted receivers, as opposed to the myopic receivers of previous works. In the new setting, the authors showed that Markovian signaling schemes are not optimal (additionally, finding the optimal Markovian signaling scheme in the previous problem settings is NP-hard), and they instead introduced a new class of promise-form signaling schemes for the new problem setting. The authors also show that promise-form signaling schemes can be found in polynomial time while guaranteeing that the schemes satisfy the $\epsilon$-persuasive property. Strengths: 1. Clearly listed all key theoretical findings under the assumption that the sender will stop providing recommendations upon deviation. 2. A novel class of promise-form signaling schemes is given, with approximation algorithms that are relatively practical and run in polynomial time. Weaknesses: 1. Overall a difficult-to-read paper (more difficult than the popular papers the authors cited) due to inadequate description and a lack of concrete examples of the problem. Why sequential moves in Bayesian persuasion are an important research topic is not highlighted. For a general audience not familiar with Bayesian persuasion, it is hard to tell why the new scenario is important. 2. Lack of justification for the critical assumption that the sender will stop providing recommendations. Further justification is needed to show that it is rational for the sender to prefer to stop recommending over other commonly used strategies like tit-for-tat, etc. If this is the typical case in real-world applications, the authors should also point that out to improve the soundness of this assumption. 
Since all findings in this paper are based on this assumption, I encourage the authors to put more emphasis on this. 3. Lack of discussion on possible limitations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can you please provide application scenarios that fit the new case well? 2. Can you please add a discussion on the potential limitations? 3. In all problems that involve strategic manipulation, can you comment on the potential fairness issues? 4. Why are numerical experiments included in some of the related works, e.g., "Bayesian Persuasion in Sequential Decision-Making", but not here in this paper? What are some of the major differences that result in the decision to skip the numerical experiments? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Not that I can find due to my limited background in this topic. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the Reviewer for providing useful feedback despite being unfamiliar with the topic of the paper. We provide below detailed answers to their questions. Finally, we are sorry to hear that they find the paper to be a difficult read. We will make our best effort to make the final version of the paper accessible to a general audience despite its technical nature. 1. *“Can you please provide application scenarios that fit well the new case”* Due to space constraints, we deferred an illustrative example to Appendix A (lines 436-446). We agree with the Reviewer that a concrete example would help in understanding the problem at hand, and with the additional page provided in the final version of the paper we will include this example in the main body of the paper. 2. *“Can you please add a discussion on the potential limitations”* As Reviewers Rxqa and q53d suggested, we will highlight the fact that our work assumes common knowledge of the environment, and we will discuss how this assumption could be lifted in future work. 3. *“In all problems that involve strategic manipulations, the potential fairness issues”* Research on the connection between information design and fairness is still understudied. While we agree with the Reviewer that it would be worth investigating, we feel that this topic falls outside the scope of our work. 4. *“Why are numerical experiments included in some of the related works, e.g., "Bayesian Persuasion in Sequential Decision-Making" but not here in this paper? What are some of the major differences that result in the decision of skipping the numerical experiment part?”* We think that the main contribution of this work is theoretical, as we answer an open question in the literature. We will leave the experimental evaluation of our algorithm to future research. 
Moreover, we feel that an experimental evaluation of the planning problem would be less interesting compared to the one in [Bayesian Persuasion in Sequential Decision-Making], as we are not currently considering the learning problem.
Summary: The paper discusses a history-dependent signaling scheme for persuading a farsighted receiver. It first shows that it is necessary for the sender to adopt a non-stationary and non-Markovian signaling scheme. Specifically, for every step and state reached at that step, this scheme defines a randomized mapping from the sender's private observations to action recommendations for the receiver, based on the whole history of states and receiver's actions observed up to that step. While such a signaling policy could be intractable to describe, the paper provides a crucial simplification, promise-form signaling schemes, which allows the sender to design only a finite-size signaling scheme with optimal performance. Finally, the paper proposes a PTAS algorithm to determine the optimal promise-form signaling schemes. Strengths: 1. The paper is very well written! The authors clearly explain the motivation, the model, the results, and the proofs with vivid intuitions, making the technical concepts very easy to follow. 2. The paper extends the previous work on signaling schemes in MDPs to a more general setting, where the receiver is farsighted. The paper makes several important technical and conceptual contributions to this problem, including the necessity of non-stationary and non-Markovian signaling schemes, the simplification to promise-form signaling schemes, and the PTAS algorithm to determine the optimal promise-form signaling schemes. Weaknesses: 1. The method is related to the literature on dynamic Stackelberg equilibria. The authors should discuss the relationship between their methods. 2. I expect the authors to provide some real-world applications of their model and methods, e.g., expand on the ride-sharing example in Appendix A. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How do the authors think about the learning problem under this farsighted setup? 
Is it also possible to design a no-regret learning algorithm for the sender to learn the optimal promised-form signaling scheme? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to hear that the Reviewer finds our paper to be making several important technical and conceptual contributions. We provide below a detailed answer to their question on the learning problem, while we will follow Reviewer’s suggestions to include a discussion of related literature on dynamic Stackelberg equilibrium and to expand the ride-sharing example in a final version of the paper. - *“How does the author think of the learning problem under this farsighted setup? Is it also possible to design a no-regret learning algorithm for the sender to learn the optimal promised-form signaling scheme?”* Our main focus with this work was to deal with the known model setting. Recent literature considers the learning problem in sequential BP settings, e.g. [Wu et al., 2022, Bernasconi et al., 2022, Gan et al., 2022a,b]. By extending our interactions model as in the above works we think that the techniques introduced there can also tackle the learning problem in our setting. In particular, we conjecture that an estimation phase that explores uniformly and then commits to an optimal signaling scheme of the estimated model would work. This should lead to an optimal $T^{2/3}$ bound on the regret and constraint violation. We will happily add a brief yet detailed discussion on this point. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. After reading the rebuttal and other reviews, I decide to maintain my initial score.
Summary: This paper considers a specific model of information design, where the receiver takes actions in a sequential decision process under a global, unknown natural state $\theta$. To keep the model simple, the work assumes the sequential decision process to be an MDP with a known model, plus the ability of the receiver to exactly optimize the cumulative reward once a belief over $\theta$ is given. In this case, the signaling scheme represents a (more general) mapping from the natural state and the trajectory up to the current step to a distribution over actions as information revelation. Further, when making decisions the receiver is not allowed to use the posterior distribution/belief of the natural state obtained from previous steps (which means only $\hat\theta_h$ is used). Given the above assumptions in the model, the work provides a polynomial-time algorithm to obtain an $\epsilon$-persuasive signaling scheme. This disagrees with the NP-hard-like claims given in previous works. Such disagreement stems from the use of trajectory information and thereafter its simplified version, the promise form. The algorithm is natural but quite creative. Strengths: 1. The work provides new models of information design in sequential decision problems. The new model no longer possesses theoretical hardness. 2. The work proposes a new polynomial-time algorithm to find an $\epsilon$-persuasive signaling scheme. Weaknesses: Several limitations persist: 1) MDPs are assumed to be exactly solved; 2) MDP models are known; 3) the receiver cannot aggregate historical information about the natural state; 4) no experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This conclusion indeed applies to the model-based scenario (meaning that the receiver knows the state set $S$, the observation set $\Theta$, the state transition distribution $p$, and the observation distribution $\mu$). However, does this conclusion also apply to the model-free case? 
This question might not be within the scope of the work, but the conclusion drawn is a bit too broad if such model estimation is involved. If this situation is not discussed, the authors should emphasize this important assumption in the abstract and introduction. Specifically, it should be clarified which information the receiver is assumed to know and base its decisions on. This assumption represents a strong capability for the receiver. If it does not possess this knowledge, the sender's manipulation could be more powerful, and Markovian signaling schemes might be viable. For instance, if the receiver is unaware of the state and can only observe the sender's signals, and it needs to estimate the state and its transitions based on those signals, does the sender have the opportunity to confuse the receiver's judgment and achieve stronger persuasion? I find the claim "We consider the most general setting" in line 85 a bit too strong. Additionally, can the revelation principle argument still be applied in a sequential setting? Does recommending only one action for each state achieve the goal of persuading a receiver in an MDP? I could not find any relevant discussion on this. Have the authors considered "future-dependent" signaling schemes: recommending a set of future actions for each state instead of just one action? Or sending a signal $m$ that encodes a set of future actions they wish to recommend? Moreover, in the aforementioned scenario, can the sender confuse multiple states by sending signal $m$? If the revelation principle is abandoned, is a history-dependent signaling scheme still necessary? If there is no discussion on the validity of the revelation principle, this assumption should be prominently emphasized. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the Reviewer for their positive feedback. We report below detailed answers to their questions, which we hope will make them appreciate our paper even more. - *“This conclusion indeed applies to the model-based scenario (meaning that the receiver knows the state set $\mathcal{S}$, the observation set $\Theta$, the state transition distribution $p$, and the observation distribution $\mu$). However, does this conclusion also apply to the model-free case? This question might not be within the scope of the work, but the conclusion drawn is a bit too broad if such estimation of model is now involved. If this situation is not discussed, the authors should emphasize this important assumption in the abstract and introduction. Specifically, it should be clarified which information the receiver is assumed to know and base their decisions on.”* In this work we assume that everyone knows everything. We agree with the Reviewer that the learning problem in this setting is an interesting and worthwhile direction. However, this introduces some difficulties since, when the model parameters are unknown, the receiver cannot even know whether the signaling scheme employed by the sender is persuasive. See also the response to Reviewer Rxqa for a related discussion. - *“This assumption represents a strong capability for the receiver. If it does not possess this knowledge, the sender's manipulation could be more powerful, and Markovian signaling schemes might be viable. For instance, if the receiver is unaware of the state and can only observe the sender's signals, and it needs to estimate the state and its transitions based on those signals, does the sender have the opportunity to confuse the receiver's judgments and achieve stronger persuasion? I find the claim "We consider the most general setting" in line 85 a bit too strong.”* If the receiver does not know the model, it is not clear what the “right” definition of rationality is.
We can assume that the receiver “learns” over time, but there are too many possible options (do they use confidence bounds? Do they use a regret minimizer? Do they follow the recommendations when they are uncertain or not?). At the other extreme, we can assume that the receiver never learns anything. This is the myopic setting studied in previous works. - *“Additionally, can the revelation principle argument still be applied in a sequential setting? Does recommending only one action for each state achieve the goal of persuading a receiver in an MDP? I could not find any relevant discussion on this. Has the author considered "future-dependent" signaling schemes: recommending a set of future actions for each state instead of just one action? Or sending a signal $m$ to encode a set of future actions they wish to recommend? Moreover, in the aforementioned scenario, can the sender confuse multiple states by sending signal $m$? If the revelation principle is abandoned, is a history-dependent signaling scheme still necessary? If there is no discussion on the validity of the revelation principle, this assumption should be prominently emphasized.”* We followed the recent extensive literature on information design (even in sequential settings, e.g., [Wu et al., 2022, Bernasconi et al., 2022, Gan et al., 2022a,b]), in which the signals are single-action recommendations and the revelation principle is more or less assumed to hold. A formal proof of the statement would require a large amount of notation and labor, and would deviate from our main focus in this work. Moreover, it would use standard techniques to derive a non-surprising result. However, we will happily underline this choice better in the final version of the paper. --- Rebuttal Comment 1.1: Title: Response Comment: I've read the other reviews and the rebuttal. My evaluation remains the same after the rebuttal (as not much more information is provided in the rebuttal). I thank the authors for the response.
Summary: The paper considers a (finite-horizon) dynamic persuasion problem between a sender and a receiver, both of whom are long-lived. At each time $t$, there is a publicly-observable state $s_t$ and a payoff-relevant quantity $\theta_t$, which is observed only by the sender, and whose distribution depends on the current state (and is independent of other quantities). Based on the observation of $\theta_t$ the sender recommends an action to the receiver (which may or may not be followed). The publicly-observable state then updates to $s_{t+1}$ according to a transition kernel that depends on the current state $s_t$, the quantity $\theta_t$, and the action chosen by the receiver. Both the sender and the receiver seek to maximize their total expected payoffs. Previous work on this topic, with few exceptions, has focused on myopic receivers, motivated by settings in which the receiver is short-lived. The difference here, then, is the focus on a long-lived, far-sighted receiver. In this setting, the paper first shows (via an example) that the class of Markovian signaling schemes (whose recommendations only depend on the current state $s_t$) is insufficient for optimal persuasion, and the sender can do better using a signaling scheme that takes into account the history of the process. Due to the computational difficulties in working with general history-dependent schemes, the paper then considers promise-form signaling schemes, which make recommendations based not only on the current state, but also on a (history-dependent) "promise", which is a guarantee on the receiver's continuation payoffs. Essentially, the promise succinctly summarizes the history, thereby reducing the computational complexity to be polynomial in the size of the set of promises. The authors show that, upon imposing an honesty condition on the promises across time, the class of promise-form signaling schemes suffices for optimal persuasion.
The authors also propose an approximation scheme for computing approximately-persuasive promise-form signaling schemes with good payoff guarantees, which is polynomial in the approximation factor. Strengths: + The paper considers an interesting variation of the sequential persuasion problem, allowing for far-sighted receivers. This makes the problem substantially more complex. Nevertheless, the paper identifies a class of relatively simple and approximately persuasive signaling schemes that nevertheless achieve optimal payoffs for the sender, and are furthermore computationally tractable. + The paper illustrates well the insufficiency of the class of Markovian signaling schemes, and furthermore (adapting existing results) shows that finding a constant-factor approximation within the class of Markovian signaling schemes is NP-hard. + The class of promise-form signaling schemes is fairly simple and easy to implement; furthermore, it seems that approximately optimal such schemes can be computed by (repeatedly) solving an LP. Weaknesses: + While the class of promise-form signaling schemes is interesting, there is a significant line of work in economics that studies the use of promises in repeated games with incomplete information. The paper does not cite those papers, nor does it place its contributions within that context. A particularly relevant paper in this line is Abreu, Pearce and Stacchetti (Econometrica, 1990), whose results imply the sufficiency of the class of "promise-form" strategies for discounted repeated games with imperfect monitoring. + Similarly, the paper would benefit from connecting with the general literature on repeated games (with or without incomplete information). For instance, the insufficiency of Markov signaling schemes is very much in the same vein as the inefficiency of Markov perfect equilibria in, say, the repeated prisoner's dilemma for sustaining cooperation.
With far-sighted receivers, it is not surprising that Markov signaling schemes are not optimal for the sender. + $\epsilon$-persuasiveness: In the analysis of history-dependent (or promise-form) signaling schemes, the authors relax the persuasiveness requirement to $\epsilon$-persuasiveness. There is a subtle issue in interpreting this relaxation. To explain, a natural relaxation would be that the receiver's expected continuation payoff from following the recommendation is at most $\epsilon$ worse than choosing any other action, *after* receiving the recommendation. Specifically, the expectation taken here would be with respect to the posterior belief after receiving the recommendation. However, the condition in Definition 1 requires something different; it states that the receiver's expected payoff from following a recommendation, *multiplied* by the probability of receiving that recommendation, should be at most $\epsilon$ worse. In particular, there is an extra factor equaling the probability of recommending a particular action. While this may seem like a minor technical issue, it has substantial implications for the assumption that the receiver would adopt such a recommendation. For instance, it suggests that as long as the probability of recommending an action is small, the sender can recommend an action that yields a substantially lower continuation payoff for the receiver, and still expect the receiver to accept the recommendation. This seems to be a very strong assumption on the receiver's behavior, one that does not align with the assumption that the receiver is (approximately) Bayesian. Moreover, with such a strong assumption, it is no longer clear if $OPT$ is the right benchmark for comparison. A potential fix to this issue would be to impose the relaxation on the conditional expectation, i.e., to replace the $\epsilon$ term in the definition with $\epsilon \sum_{\theta} \mu_h(\theta|s_h)\phi_\tau(a|\theta)$.
However, it is not clear if the later approximation results continue to apply with this change. + Finally, while the paper makes sound and rigorous technical contribution, there is not enough discussion motivating the specific model being studied. For instance, there is no discussion of the motivation behind far-sightedness assumption; the myopic behavior of the receivers in previous work is frequently motivated by assuming a series of short-lived receivers. In particular, are there any specific applications where a single sender and a single receiver interact in the manner studied? (I think this is especially useful given the somewhat complicated form of the signaling scheme proposed.) Some discussion here would benefit the paper by grounding the theoretical results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + Do the approximation results continue to hold if the relaxation of the persuasiveness constraint is imposed on the conditional expectation? + With the current definition of $\epsilon$-persuasiveness, it may be possible to design mechanisms that achieve payoffs substantially better than $OPT$. Are there any guarantees on how small (or large) this difference can be? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The assumptions are stated clearly. Some discussion of the limitations induced by relaxing the persuasiveness/honesty requirements would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the Reviewer for their thoughtful comments and for mentioning pieces of related literature, which we will reference and discuss in the final version of the paper. We provide below detailed answers to their questions. - *“Do the approximation results continue to hold if the relaxation of the persuasiveness constraint is imposed on the conditional expectation?”* Is the Reviewer referring to the fact that the relaxation could be normalized by the probability with which the action is recommended? In that case, we believe that our techniques can be extended without any major difficulties by changing the definition of $\epsilon$-persuasiveness and modifying the proofs accordingly. Let us know if you need further technical details. - *“With the current definition of $\epsilon$-persuasiveness, it may be possible to design mechanisms that achieve payoffs substantially better than $OPT$. Are there any guarantees on how small (or large) this difference can be?”* In general, this difference can be arbitrarily large. Consider a simple example with a single node, a single state, and two actions $a_1$ and $a_2$. The first action is $\epsilon$ better for the receiver than the second one. The sender's utility is $1$ if the receiver plays $a_2$ and $0$ if the receiver plays $a_1$. The optimal persuasive mechanism recommends $a_1$, yielding sender's utility $0$. The optimal $\epsilon$-persuasive mechanism recommends $a_2$ and has sender's utility $1$. This is an edge case. In non-degenerate settings, the utilities of the optimal persuasive and $\epsilon$-persuasive mechanisms are close. Moreover, we want to remark that this is standard in any setting with relaxed IC constraints, like $\epsilon$-Nash equilibria, $\epsilon$-Stackelberg, etc.
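The single-state example above can be replayed in a few lines; a minimal sketch (the payoff numbers are illustrative, matching the structure of the example rather than the paper's formal model):

```python
# One node, one state, two actions; a1 is exactly eps better for the receiver,
# while the sender strictly prefers a2.
eps = 0.1
u_receiver = {"a1": 1.0, "a2": 1.0 - eps}
u_sender = {"a1": 0.0, "a2": 1.0}

best_r = max(u_receiver.values())

# Exactly persuasive: only recommendations that are optimal for the receiver.
persuasive = [a for a, u in u_receiver.items() if u >= best_r]
# eps-persuasive: any recommendation within eps of the receiver's best.
eps_persuasive = [a for a, u in u_receiver.items() if u >= best_r - eps]

opt = max(u_sender[a] for a in persuasive)          # 0.0 (must recommend a1)
opt_eps = max(u_sender[a] for a in eps_persuasive)  # 1.0 (may recommend a2)
print(opt, opt_eps)
```

The sender's gain from the $\epsilon$ relaxation is the full utility range, illustrating why the gap can be arbitrarily large in such degenerate instances.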
--- Rebuttal Comment 1.1: Comment: I was referring to the point that the actual persuasiveness constraint is on the conditional expectation, something along the lines of $\sum_{\omega} \phi(\omega | a) u(\omega, a) \geq \sum_{\omega} \phi(\omega | a) u(\omega, a')$. Here $\phi(\omega|a)$ is the conditional belief after receiving signal $a$. (I am using simplified notation here but hopefully it is still clear.) If this is relaxed to $\sum_{\omega} \phi(\omega | a) u(\omega, a) \geq \sum_{\omega} \phi(\omega | a) u(\omega, a') - \epsilon$, then this is a perfectly fine model of receiver behavior where one posits that any $\epsilon$-optimal action will be accepted by the receiver. However, currently the persuasiveness constraint is first modified to an unconditional form $\sum_{\omega} \phi(\omega , a) u(\omega, a) \geq \sum_{\omega} \phi(\omega , a) u(\omega, a')$, where $\phi(\omega,a)$ is the joint distribution of state and signal. (This is a common trick in exact static settings to get to an LP formulation.) Then, this modified constraint is relaxed to an approximate version: $\sum_{\omega} \phi(\omega , a) u(\omega, a) \geq \sum_{\omega} \phi(\omega , a) u(\omega, a')- \delta$. However, if one goes back to the original (meaningful) persuasiveness constraint, this two-step relaxation amounts to modifying the original persuasiveness constraint as follows: $\sum_{\omega} \phi(\omega |a) u(\omega, a) \geq \sum_{\omega} \phi(\omega | a) u(\omega, a') - \frac{\delta}{\phi(a)}$, where $\phi(a)$ is the unconditional probability of sending signal $a$. Now, this constraint posits a behavioral assumption on the receiver that they will be willing to follow the recommendation as long as it is approximately persuasive, where the approximation can depend also on the probability with which each signal is sent. In particular, as long as a signal is sent with small enough probability (i.e., as long as $\phi(a) \ll 1$), the receiver will gladly adopt the signal.
Moreover, the behavior of the receiver changes based on the signaling mechanism chosen by the sender. From a behavioral perspective, this situation seems very suspect. My question was whether the results in the paper continue to hold if we adopt the (more appropriate) approximation of $\sum_{\omega} \phi(\omega | a) u(\omega, a) \geq \sum_{\omega} \phi(\omega | a) u(\omega, a') - \epsilon$. --- Reply to Comment 1.1.1: Comment: In the following we use *unnormalized* to describe the current definition of $\epsilon$-persuasiveness and *normalized* for the one proposed by the Reviewer. We thank the Reviewer for carefully clarifying this point. We decided to consider the unnormalized version of $\epsilon$-persuasiveness since it is the standard way of defining approximate IC constraints in similar lines of work (e.g., [Farina, Gabriele, et al. “Simple uncoupled no-regret learning dynamics for extensive-form correlated equilibrium”]). However, we are now persuaded that the normalized version of the $\epsilon$-persuasiveness constraint is more appropriate for our work, in which Bayesian rationality of the agents is a central concept, and we agree that this makes the definition more sound from a behavioral perspective. Moreover, changing the definition to the normalized version comes at almost no cost. Indeed, we never directly use the definition of $\epsilon$-persuasiveness in the algorithmic part of the paper (Sections 5 and 6). We use the definition of $\epsilon$-persuasiveness only in the proof of Lemma 3. In particular, we only use the fact that $\eta$-honest promise-form signaling schemes are $H \eta$-persuasive (unnormalized definition), which is proved in Lemma 3. However, the proof of Lemma 3 can be easily modified to show that $\eta$-honest promise-form signaling schemes are $H \eta$-persuasive (normalized definition).
Specifically, we only need to maintain the term $\sum_{\theta\in\Theta}\mu_h(\theta|s_h)\varphi_h(a| s_h,\iota_\tau^\sigma,\theta)$ in front of the $\eta(H-h)$ term in the last Equation after line 574. Notice that here we used an equivalent definition of normalized $\epsilon$-persuasiveness. In particular, using your notation, $\sum_\omega \phi(\omega|a) u(\omega,a) \ge \sum_\omega \phi(\omega|a) u(\omega,a’) - \epsilon$ is equivalent to $\sum_\omega \mu_\omega \phi(a|\omega) u(\omega,a) \ge \sum_\omega \mu_\omega \phi(a|\omega) u(\omega,a’) -\epsilon \sum_\omega \mu_\omega \phi(a|\omega)$. This shows that all the results of our paper still apply to the new normalized version of approximate persuasiveness constraints. We will implement these changes in the final version of the paper. We are very thankful to the reviewer for the discussion which we think greatly improved our paper.
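The equivalence stated above between the two forms of the normalized constraint is an algebraic identity (the two slacks differ exactly by the factor $\phi(a)$), which can be sanity-checked numerically; a small sketch with made-up values for $\mu$, $\phi$, and the utilities:

```python
# Made-up one-signal instance: prior mu over three states, scheme phi(a|theta),
# receiver utility for following the recommendation (u_a) vs. deviating (u_b).
mu = [0.2, 0.5, 0.3]
phi = [0.7, 0.4, 0.9]
u_a = [1.0, 0.3, 0.6]
u_b = [0.4, 0.8, 0.5]
eps = 0.05

p_a = sum(m * p for m, p in zip(mu, phi))  # unconditional probability of signal a

gap = sum(m * p * (ua - ub) for m, p, ua, ub in zip(mu, phi, u_a, u_b))

# Normalized (conditional-expectation) form: E[u_a | a] >= E[u_b | a] - eps
slack_norm = gap / p_a + eps
# Unnormalized form with the eps term scaled by p_a, as in the reply above
slack_unnorm = gap + eps * p_a

# The two constraints are the same up to multiplication by p_a > 0,
# so they are satisfied (or violated) together.
assert abs(slack_norm * p_a - slack_unnorm) < 1e-12
```

Since $\phi(a) > 0$ whenever the signal is actually sent, multiplying through by it never changes the sign of the slack, which is exactly why the modified proof of Lemma 3 goes through.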
Rebuttal 1: Rebuttal: We want to thank the reviewers for their useful feedback and for praising the technical contribution of our work. We will take their suggestions into careful consideration to improve the final version of the paper in terms of the discussion of additional related works, the considered assumptions, and real-world applications. Before providing detailed replies to the reviewers' questions below, we want to defend two important choices in our problem formulation. While the reviewers rightly noted that the known-model assumption is restrictive, we think that solving the planning setting already required overcoming significant technical hurdles, leaving no space for additional contributions. Moreover, we believe that the planning problem is a natural preliminary step towards addressing the corresponding learning problem, in which the model is (at least partially) unknown to the agents. The latter problem is a promising direction for future research on this topic. Finally, we will make an additional effort to explain why the farsighted receiver assumption matters in this setting. First, it complements previous works considering myopic receivers in MDPs, an assumption that is sometimes unrealistic in real-world applications, such as ride-sharing platforms where the same driver typically interacts with the platform multiple times. Second, it better aligns this line of research on information design in MDP settings with the classical MDP literature, which crucially assumes farsighted decision makers.
NeurIPS_2023_submissions_huggingface
2023
Solving a Class of Non-Convex Minimax Optimization in Federated Learning
Accept (poster)
Summary: The paper focuses on the minimax optimization problem within the federated learning framework. To address this problem, the authors introduce two algorithms, named FedSGDA+ and FedSGDA-M, to handle distinct types of losses, focusing in particular on the concavity of the maximization component. The study presents a comprehensive analysis encompassing both theoretical and numerical evaluations. Strengths: This paper contributes two novel algorithms to solve various types of minimax optimization problems within the federated learning framework, which holds significant impact. As many machine learning problems are closely tied to minimax optimization, the developed algorithms offer valuable solutions for a wide range of applications. The solid theoretical analysis not only validates the effectiveness of the proposed algorithms but also lays a foundation for future investigations in the fields of minimax optimization and federated learning. Weaknesses: The paper needs presentation improvements as well as more clarification of the motivation and algorithm development. Currently, I am inclined towards borderline acceptance with my questions below. I encourage the authors to provide responses for my better assessment. 1. Lines 27-29: The authors claim that federated learning was proposed to tackle the communication issue. But to my best knowledge, federated learning has many advantages, such as preserving local data privacy, utilizing more clients' data, and enhancing computational power. Only highlighting the communication advantage might be one-sided. 2. Line 145: The second inequality in (2) seems to be a strong assumption. It requires the deviation of the local data from the global average to be bounded. However, under severe data heterogeneity and large parameter magnitudes, such a bound might not hold. Many federated learning works have studied how to avoid such an assumption, e.g., [1]. Can we also relax this assumption here? 3.
Line 167: This assumption seems to be an additional assumption compared with the existing works in Table 1. Basically, it requires the gradient of $F$ w.r.t. $x$ to be bounded. It might not hold for many applications. 4. I wonder whether linear speedup w.r.t. the number of clients holds for this study. Can the authors provide some numerical validation? 5. For the algorithm presentation, can the authors separate the server side and client side for better understanding? 6. Is FedSGDA-M motivated by the STORM-type variance reduction method? Why is the algorithm not applicable to the NC-C case? 7. Section 4.1: The loss looks to be a constrained one. How do the proposed algorithms enforce the constraint? 8. Figure 2: Is FedSGDA in the figure the proposed FedSGDA+ or FedSGDA-M? 9. Can the authors provide numerical studies on data heterogeneity, the number of clients, and computation cost? [1] Karimireddy, Sai Praneeth, et al. "Scaffold: Stochastic controlled averaging for federated learning." International Conference on Machine Learning. PMLR, 2020. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors acknowledged the need for further numerical studies and a more comprehensive analysis as a limitation of the paper. Since the research primarily focuses on theoretical algorithm development, there is no anticipated potential for negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply thankful for your time and comments. > Q1. "The authors claim the federated learning is proposed to tackle the communication issue..." Thanks for your suggestion. Due to the page limitation, we mention only some of the advantages of FL, such as the communication issue in Line 27 and privacy in Line 32. We will add more details about the advantages of FL. > Q2. "Line 145: The second inequality in (2) seems to be a strong assumption ... Many federated learning works have studied to avoid such assumption, e.g. [1] ..." The variance bound assumption is a popular assumption in federated optimization. [1] use > Q3. "Line 167: This assumption seems to be an additional assumption compared with the existing works in Table 1." This assumption is used for the theoretical analysis in the NC-C setting. We follow the only NC-C algorithm in Table 1, Local SGDA+ [2]; please see Assumption 6 in [2]. > Q4. "Wonder whether linear speedup w.r.t. the clients' number holds for this study." Remark 3.8 and Remark 3.14 show that our two algorithms achieve linear speedup with respect to the number of worker nodes. > Q5. "For the algorithm presentation, can authors separate the server-side and client-side for better understanding?" Thanks for your suggestion. Because we only perform the averaging operation on the server, we denote it via the test mod(t, Q) = 0 in Line 11 of Algorithm 1 and Line 5 of Algorithm 2. > Q6. "Is FedSGDA-M motivated by the STORM-type of variance reduction method? Why is the algorithm not applicable for NC-C case?" FedSGDA-M uses a STORM-type variance reduction technique. The study of its application to the NC-C case is in progress. > Q7. "Section 4.1: The loss looks to be a constrained one. How do the proposed algorithms hold the constraint?" We follow the task in [2], and the constraint is not considered in the algorithm. > Q8. "Figure 2: Is FedSGDA in the figure the proposed FedSGDA+ or FedSGDA-M?" Thanks for the reminder. This is a typo. FedSGDA in the figure is FedSGDA-M.
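For concreteness, the periodic-averaging structure described in the answer to Q5 (clients take local descent-ascent steps and the server averages whenever mod(t, Q) = 0) can be sketched as follows. The toy saddle objective, function name, and noise model here are illustrative, not the paper's exact FedSGDA algorithms:

```python
import numpy as np

def local_sgda(n_clients=4, T=20, Q=5, eta=0.1, seed=0):
    """Each client runs local (noisy) gradient descent-ascent steps;
    every Q iterations (t mod Q == 0) the server averages x and y."""
    rng = np.random.default_rng(seed)
    c = rng.normal(size=n_clients)   # per-client data shifting the objective
    x = np.zeros(n_clients)          # one local copy of (x, y) per client
    y = np.zeros(n_clients)
    for t in range(1, T + 1):
        # Toy local objective f_i(x, y) = 0.5*(x - c_i)**2 + x*y - 0.5*y**2
        gx = (x - c) + y + 0.01 * rng.normal(size=n_clients)  # noisy grad wrt x
        gy = x - y + 0.01 * rng.normal(size=n_clients)        # noisy grad wrt y
        x -= eta * gx                # descent step on x
        y += eta * gy                # ascent step on y
        if t % Q == 0:               # communication round: server averages
            x[:] = x.mean()
            y[:] = y.mean()
    return x, y

x, y = local_sgda()
# The run ends on a communication round, so all clients agree on (x, y)
assert np.allclose(x, x[0]) and np.allclose(y, y[0])
```

Writing the server step as a condition on the iteration counter, as above, is why the single combined listing in the paper can be split into server-side and client-side blocks without changing the algorithm.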
[1] Karimireddy, Sai Praneeth, et al. "Scaffold: Stochastic controlled averaging for federated learning." International Conference on Machine Learning. PMLR, 2020. [2] P. Sharma, R. Panda, G. Joshi, and P. Varshney. Federated minimax optimization: Improved convergence analyses and algorithms. In International Conference on Machine Learning, pages 19683–19730. PMLR, 2022. https://proceedings.mlr.press/v162/sharma22c/sharma22c.pdf --- Rebuttal Comment 1.1: Title: Rebuttal Comment: > Q2. "Line 145: The second inequality in (2) seems to be a strong assumption ... Many federated learning works have studied to avoid such assumption, e.g. [1] ..." The variance bound assumption is a popular assumption in federated optimization, and it is used in many existing FL minimax works [1] [2] and FL works [3] [4]. Thanks for your suggestion; the relaxation of this assumption will be considered in future work. > Q5. "For the algorithm presentation, can authors separate the server-side and client-side for better understanding?" We will separate the algorithms in the final version to make them clearer. > Q6. "Is FedSGDA-M motivated by the STORM-type of variance reduction method? Why is the algorithm not applicable for NC-C case?" FedSGDA-M uses a STORM-type variance reduction technique. The current theoretical analysis of FedSGDA-M depends on the PL-condition assumption. The study of its application to the NC-C case is in progress. > Q9. "Can the authors provide numerical studies on the data heterogeneity, clients' number and computation cost?" Since we cannot upload images, we will add them to the final version. [1] Deng, Y. and Mahdavi, M., 2021. Local stochastic gradient descent ascent: Convergence analysis and communication efficiency. In International Conference on Artificial Intelligence and Statistics. PMLR. http://proceedings.mlr.press/v130/deng21a/deng21a.pdf [2] P. Sharma, R. Panda, G. Joshi, and P. Varshney.
Federated minimax optimization: Improved convergence analyses and algorithms. In International Conference on Machine Learning, pages 19683–19730. PMLR, 2022. https://proceedings.mlr.press/v162/sharma22c/sharma22c.pdf [3] Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S. and McMahan, H.B., 2020. Adaptive federated optimization. https://arxiv.org/pdf/2003.00295.pdf [4] Khanduri, P., Sharma, P., Yang, H., Hong, M., Liu, J., Rajawat, K. and Varshney, P., 2021. Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning. Advances in Neural Information Processing Systems, 34, pp. 6050-6061.
Summary: This paper proposes new algorithms for solving federated minimax problems in both the nonconvex-concave and nonconvex-PL (or strongly concave) settings under the assumption of data heterogeneity. The authors propose a novel way to incorporate the variance reduction technique into the federated setting and provide a solid convergence analysis in both settings. Numerical experiments on AUC maximization and fairness problems are reported to show the advantage of the proposed algorithms. Strengths: 1. The proposed algorithms achieve the best convergence rate compared with existing works in the same setting. The final rate comes with linear speedup under the heterogeneous setting. 2. The idea of updating y with a fixed x in the concave case helps to improve the communication complexity. 3. The numerical experiments are convincing. The authors compare with the majority of existing works. 4. The paper is easy to follow. Weaknesses: The step size $\eta$ needs to be very small to guarantee convergence since it is inversely proportional to q, the number of local updates. This requirement hurts the performance in practice. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Both algorithms in this paper use the same batch of data to compute the x gradient and the y gradient. In that case, those gradients ($\nabla_x f$ and $\nabla_y f$) are not independent of each other. Will this cause problems in the proof? Can the authors double-check? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The algorithm names in the table are not consistent with the names in the algorithms.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your time and comments. > Q1. "the step size \eta needs to be very small to guarantee the convergence since it's inverse propositional to q, ...." Thanks for pointing this out. The relationship between the learning rate $\eta$ and the number of local training iterations q is easy to understand: if we do not put a restriction on the learning rate, a large q will lead to divergence due to client drift in federated learning. In addition, this relationship is common in federated learning. For example, Corollary 1 in [1] requires $\eta_l = \Theta (1/(KL \sqrt{T}))$, where K is the number of local updates, and Theorem 1 in [2] shows that $\eta_y \leq \frac{1}{8L_f \tau}, \frac{\eta_x}{\eta_y} \leq \frac{1}{8\kappa^2} $, where $\tau$ is the number of local training iterations. The relationship between the local step size and the number of local training iterations guarantees convergence. > Q2. "... those gradients (\nabla_x f and \nabla_y f) are not independent of each other ..." This brings significant challenges to the proof, and thus some works use a double-loop architecture, namely multiple samples and updates for y but only one step for x. We use a simpler single-loop architecture: we use \nabla_x f and \nabla_y f to update x and y separately. Lemma B.1, Lemma B.3 and Lemma C.2, Lemma C.3 show how to balance the variables x and y. This challenge also shows the value of our work. [1] Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečný, J., Kumar, S. and McMahan, H.B., 2020. Adaptive federated optimization. https://arxiv.org/pdf/2003.00295.pdf. [2] Sharma, P., Panda, R., Joshi, G. and Varshney, P., 2022, June. Federated minimax optimization: Improved convergence analyses and algorithms. In International Conference on Machine Learning (pp. 19683-19730). PMLR. https://proceedings.mlr.press/v162/sharma22c/sharma22c.pdf
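To make the client-drift point in Q1 concrete, here is a minimal toy sketch (not from the paper; a plain minimization example with a hypothetical `fedavg_quadratics` helper) showing that with a fixed learning rate, more local steps q biases FedAvg away from the global optimum, while shrinking eta roughly as 1/q removes most of the bias:

```python
def fedavg_quadratics(eta, q, rounds=300):
    """FedAvg on two heterogeneous clients:
       f1(x) = 0.5 * 1 * (x - 1)^2 and f2(x) = 0.5 * 3 * (x + 1)^2,
       whose global optimum is x* = (1*1 + 3*(-1)) / (1 + 3) = -0.5."""
    clients = [(1.0, 1.0), (3.0, -1.0)]   # (curvature a_i, local minimizer c_i)
    x = 0.0
    for _ in range(rounds):
        local_models = []
        for a, c in clients:
            xi = x
            for _ in range(q):             # q local gradient steps per round
                xi -= eta * a * (xi - c)   # gradient of f_i is a*(x - c)
            local_models.append(xi)
        x = sum(local_models) / len(local_models)   # server averaging
    return x

# q = 1 recovers (near) the global optimum -0.5; large q with the same eta
# drifts toward the plain average of the client minimizers; eta ~ 1/q helps.
```

With a fixed eta, each client's q local steps pull its model almost all the way to its own minimizer, so the averaged iterate forgets the relative curvatures; this is exactly the divergence-from-optimum effect that the eta-versus-q restriction prevents.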
Summary: (1) This paper describes two algorithms, FedSGDA+ and FedSGDA-M, for solving non-convex minimax optimization problems. (2) Theoretical guarantees are established for these two algorithms, under several assumptions. (3) Empirical tests are conducted. Strengths: **Originality:** The techniques used in this paper are not novel from an optimization perspective. However, they can indeed be applied to minimax problems and have a theoretical bound. **Quality:** The notations, problem statements, and mathematical proofs appear rigorous, although I have not scrutinized them word by word. **Clarity:** The readability of this article is good. **Significance:** This paper provides algorithms with better convergence rates (although some extra assumptions were made in the paper). The techniques (and tricks) may inspire and guide future researchers. Weaknesses: 1. The algorithms described in "Federated minimax optimization: Improved convergence analyses and algorithms" [36] and the ones in this paper are very similar. Specifically, FedSGDA-M and Local SGDA-M in [36] are basically the same, and the difference between FedSGDA+ and Local SGDA in [36] is very small, only in the step size, which may not be the most crucial part of the FedSGDA+ algorithm. Therefore, I believe that there is not much improvement in the algorithmic aspect of the paper.   2. On the other hand, the convergence rates established in this paper appear to be better than those of previous algorithms, but the assumptions used are different from those of the previous work. Thus, this comparison seems somewhat unfair and does not necessarily indicate a substantial improvement in convergence rates.   3. As for the experiments, firstly, this paper does not describe whether the experimental settings conform to the theoretical assumptions, so it is unclear whether the empirical studies can support the theoretical results. 
Secondly, the relationship between the complexity established in this paper and the number of worker nodes is not reflected in the experiments. Lastly, only Local SGDA was put into the experimental baseline set; some algorithms that perform well in this field, such as FEDNEST and SAGDA, were unfortunately not included as baselines. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I have detailed the issues I identified as weaknesses in the previous message. Please address these questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, they have. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of the quality, clarity, and significance of our paper. >Q1. "... FedSGDA-M and Local SGDA-M in [1] are basically the same, and the difference between FedSGDA+ and Local SGDA in [1] is very small, ..." We respectfully disagree with this statement. SGDA is a very classical algorithm in minimax optimization and many minimax algorithms are designed based on it. 1) Although both FedSGDA-M and Local SGDA-M are momentum-based algorithms, they are completely different since our FedSGDA-M introduces a momentum-based variance reduction technique, and reduces the communication complexity and sample complexity. 2) FedSGDA+ and Local SGDA+ are both used for NC-C settings, but the introduction of global learning in FedSGDA+ clearly improves the communication complexity. It should be mentioned that Local SGDA+ in [1] is from [2], but [1] provides a better theoretical analysis. Since [1] does not propose a new algorithm for the NC-C setting, the convergence rate of Local SGDA+ cannot match our results. >Q2. "... the assumptions used are different from those of the previous. ..." The assumptions for FedSGDA+ and Local SGDA+ are $\textbf{completely the same}$. Although Assumption 3.11 is different from the one used in [1], it is still a widely used assumption in optimization analysis. Many typical centralized stochastic algorithms use this assumption, such as SREDA [3] and VR-SMDA [4]. Similarly, it is also used in FL algorithms such as MIME [5] and Stem [6]. >Q3. "... experimental settings ..." We consider two tasks. 1) Fair Classification is from [1], and since FedSGDA+ and Local SGDA+ use the same assumptions, we compare these two methods in NC-C settings. 2) AUPRC maximization is from [7]. Since [7] not only uses a similar assumption (Assumption 1 (iv)), but also uses a stronger assumption (Assumption 1 (ii)), we use this task to evaluate our algorithms. > Q4. "... 
the relationship between the complexity established in this paper and the number of worker nodes is not reflected in the experiments ..." We provide the theoretical analysis in our work. Since we cannot upload images in the rebuttal, we will add it in the final version. > Q5 "... only Local SGDA was put into the experimental baseline set... " We completely disagree with this statement. We politely note that Local SGDA and Local SGDA+ in [1] are two different algorithms and only Local SGDA+ is used for the NC-C setting [1]. SAGDA [8] considers the NC-PL setting and cannot be used for the NC-C setting. This is why we only use Local SGDA+ as a baseline in Fair Classification. In the AUROC maximization tasks, we use Local SGDA, CODA+, Momentum SGDA, CODASCA, and SAGDA as baselines, not only Local SGDA. [1] P. Sharma, R. Panda, G. Joshi, and P. Varshney. Federated minimax optimization: Improved convergence analyses and algorithms. In International Conference on Machine Learning, pages 19683–19730. PMLR, 2022 [2] Y. Deng and M. Mahdavi. Local stochastic gradient descent ascent: Convergence analysis and communication efficiency. In International Conference on Artificial Intelligence and Statistics, pages 1387–1395. PMLR, 2021. [3] Luo, L., Ye, H., Huang, Z. and Zhang, T., 2020. Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems. Advances in Neural Information Processing Systems, 33. [4] Feihu Huang, Xidong Wu, and Heng Huang. Efficient mirror descent ascent methods for nonsmooth minimax problems. Advances in Neural Information Processing Systems, 34, 2021. [5] Karimireddy, S.P., Jaggi, M., Kale, S., Mohri, M., Reddi, S.J., Stich, S.U. and Suresh, A.T., 2020. Mime: Mimicking centralized stochastic algorithms in federated learning. arXiv preprint arXiv:2008.03606. [6] Khanduri, P., Sharma, P., Yang, H., Hong, M., Liu, J., Rajawat, K. and Varshney, P., 2021. 
Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning. Advances in Neural Information Processing Systems, 34, pp.6050-6061. [7] Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang. Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks. ICML 2020 [8] Yang, Haibo, Zhuqing Liu, Xin Zhang, and Jia Liu. "SAGDA: Achieving Communication Complexity in Federated Min-Max Learning." arXiv preprint arXiv:2210.00611 (2022). https://openreview.net/pdf?id=wTp4KgVIJ5 --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your response. I will consider your feedback. --- Reply to Comment 1.1.1: Comment: Thanks. Looking forward to your reply. --- Rebuttal 2: Title: Thanks for your review Comment: We truly appreciate your review. Since the discussion period has already begun, we would appreciate it if you could check our responses and let us know if there are any further questions. --- Rebuttal 3: Title: Thanks for your review Comment: We want to express our gratitude for taking the time to review our work and for providing feedback. We have submitted a rebuttal addressing the concerns and comments raised in your review. We kindly request that you take a moment to review our rebuttal because the discussion will end soon. Your feedback is of utmost importance to us, as it will help ensure the quality and rigor of our work. Thanks.
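As a companion to the single-loop architecture and momentum-based variance reduction discussed in this exchange, here is a minimal single-machine sketch (an illustration only, on a toy quadratic saddle; `sgda_vr` and the toy objective are our own hypothetical names, and the federated parts of the algorithm, local updates and server averaging, are omitted):

```python
import random

def grad(x, y, noise=0.0):
    """Gradients of the toy saddle f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
       whose unique saddle point is the origin; optional sampling noise."""
    gx = x + y + noise * random.gauss(0, 1)   # df/dx
    gy = x - y + noise * random.gauss(0, 1)   # df/dy
    return gx, gy

def sgda_vr(eta=0.05, beta=0.9, steps=2000):
    """Simultaneous SGDA with a STORM-style momentum estimator:
       d_t = g(w_t) + (1 - beta) * (d_{t-1} - g(w_{t-1})).
       With exact gradients (noise = 0) d_t equals the true gradient; the
       correction term only matters under sampling noise, where in STORM
       the same minibatch must be used at both evaluation points."""
    x, y = 3.0, -2.0
    dx, dy = grad(x, y)                       # initialize with a full gradient
    for _ in range(steps):
        gx_old, gy_old = grad(x, y)           # gradient at the current point
        x_new, y_new = x - eta * dx, y + eta * dy   # descent on x, ascent on y
        gx_new, gy_new = grad(x_new, y_new)
        dx = gx_new + (1 - beta) * (dx - gx_old)
        dy = gy_new + (1 - beta) * (dy - gy_old)
        x, y = x_new, y_new
    return x, y
```

This is only the estimator idea behind momentum-based variance reduction, not FedSGDA-M itself; the federated version runs this estimator locally on each client between communication rounds.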
Summary: This paper studied a class of federated nonconvex minimax optimization problems. The authors proposed FL algorithms and provided sample complexities under three different settings, including nonconvex-concave, nonconvex-strongly-concave, and nonconvex-PL. The authors showed that the derived rates achieve the best sample complexity. Experimental results demonstrated the superiority. Strengths: 1. The theoretical analysis is one main contribution of this paper. It's nontrivial to derive sample complexities for different settings. 2. Experiments also validated the efficacy of the proposed algorithms. 3. Code is available and looks neat. Weaknesses: I have a major concern regarding the discussion and comparison of existing works in the paper. Specifically, when addressing local algorithms, the authors solely focus on SGDA (with and without momentum), which is considered a classical and relatively old algorithm. However, it is worth noting that several recent works have demonstrated improvements in convergence rates. For instance, in [1], Section 4.2 provides insights into enhanced rates, while [2] presents Theorem C.9, which showcases accelerated convergence rates for convex-nonconcave minimax problems. Additionally, [3] introduces Theorem 4.1, which is relevant to this discussion. Although these works may not directly address federated learning, their findings on accelerated convergence rates for general minimax problems should be brought into the discussion to avoid confusion among readers in the future. ### reference: 1. Mahdavinia et al., Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems 2. 
He et al., GDA-AM: On the Effectiveness of Solving Minimax Optimization via Anderson Mixing, ICLR 2022 3. Lee et al., Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems, NeurIPS 2021 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors discussed limitations, although generic. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your valuable and constructive comments! >Q1. " ... existing works in the paper ... Although these works may not directly address federated learning, their findings on accelerated convergence rates for general minimax problems " Thanks a lot for your suggestions. Due to page limitations, we pay more attention to FL minimax works. We have added these minimax works in Sec. 2.1 (Single-Machine Minimax). Given that we cannot resubmit a new version, we will present them in the final version. --- Rebuttal Comment 1.1: Title: Rebuttal received Comment: I've read the rebuttal and other reviewers' comments. Most concerns are addressed. I have one additional question: ### Q1: the use of GDA. GDA has **poor convergence properties** for solving general minimax problems. Simultaneous GDA tends to diverge. Even alternating GDA can only show bounded convergence. This phenomenon is hidden when solving deep learning problems and the applications considered in the paper. But it is problematic when solving bilinear/quadratic games or general 1d minimax functions. Have the authors tried replacing GDA with our algorithms, and what's the performance? --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: We sincerely thank the reviewer for the response. Minimax training is usually difficult and, as shown in [1], the choices of the learning rates for x and y are important. We show the relationship between the two learning rates in Corollary 3.7 and Corollary 3.13. In addition, in deep learning problems, the selection of the learning rate of the variable x, i.e. model parameter updating, is more critical. In this paper, we mainly focus on GDA (Gradient Descent Ascent) based approaches, as GDA is still the most commonplace algorithm to solve minimax problems, especially when solving deep learning problems and the applications considered in this paper. 
We note [2] conducts an in-depth investigation of the limitations of the GDA algorithm (e.g., smaller learning rate, cycling/divergence issues) and gives a systematic analysis of how to improve GDA dynamics. Nonetheless, integration of the proposed GDA improvement is not within this paper’s scope. We will discuss the limitations pointed out by [2] in greater detail in our final version. And we thank the reviewer for pointing out a future direction. [1] Lin, Tianyi, Chi Jin, and Michael Jordan. "On gradient descent ascent for nonconvex-concave minimax problems." In International Conference on Machine Learning, pp. 6083-6093. PMLR, 2020. [2] Huan He, Shifan Zhao, Yuanzhe Xi, Joyce Ho, Yousef Saad, GDA-AM: On the Effectiveness of Solving Minimax Optimization via Anderson Mixing, ICLR 2022
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: Authors propose a federated stochastic gradient descent-ascent method for solving minimax problems in the federated learning setting and demonstrate that the oracle and communication complexities of their method are significantly better than those of analogues in the nonconvex-(concave/non-concave/PL) cases with respect to the dependence on the desired accuracy. Strengths: Theoretical analysis of the proposed method shows that its complexity is the best among alternatives, the construction of the algorithm is simple enough, and important practical cases of nonconvex and PL functions are considered. Weaknesses: I guess, within the goals of the paper there are no significant weaknesses: the theoretical results are good and well-justified, and minimal experiments confirm the advantages of the proposed method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I suggest the authors reflect the deviation of accuracy across runs with different realisations of randomness on their figures, in addition to a particular realisation of the convergence curve, for example as a shaded region. It's important for understanding the stability of the advantage of the proposed method. And maybe the title should be changed to correctly reflect the content of the paper: "solving a class" seems to have unclear meaning, etc. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Everything is okay. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your appreciation. We will add the deviation of accuracy across runs in the final version and modify the paper according to your suggestions. Thanks! --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your work on the final version of your paper! The rebuttal has clarified my questions. I decided to keep my overall rating the same.
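For reference, the shaded-deviation figure the reviewer asks for boils down to a mean curve plus a band of one standard deviation across seeds. A generic numpy sketch with synthetic curves (not the paper's data; the actual rendering call is left as a comment):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in for convergence curves from 5 runs with different random seeds:
# an exponentially decaying loss with 10% multiplicative noise per run
iters = np.arange(100)
curves = np.exp(-0.05 * iters)[None, :] * (1 + 0.1 * rng.normal(size=(5, 100)))

mean = curves.mean(axis=0)
std = curves.std(axis=0)
lo, hi = mean - std, mean + std   # the "shadow" band around the mean curve

# with matplotlib, the band would be drawn as:
#   plt.plot(iters, mean)
#   plt.fill_between(iters, lo, hi, alpha=0.3)
```

Plotting the band rather than a single realisation makes it visible whether one method's advantage over another is stable across seeds or lies within run-to-run noise.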
null
null
null
null
null
null
On Generalization Bounds for Projective Clustering
Accept (poster)
Summary: The authors study the generalization bounds for two clustering problems: center-based clustering and subspace (projective) clustering. The authors argue that the problems reduce to bounding the Gaussian complexity of the set of cost functions over all possible solutions. To achieve this, they apply the union bound on the telescoping sum over a nested sequence of solution sets that are increasingly more accurate. To find a set of solutions at a specific level of accuracy, the authors use the \emph{terminal embedding} for center-based clustering, and they propose a new dimensionality reduction method for projective clustering. Strengths: - The authors provide the first provable bound for projective clustering. For a specific case with the squared cost, they show that the previous bound by Fefferman, Mitter, and Narayanan (2016) is optimal. - I find the chaining technique to be quite interesting, and it could be applied to other learning problems. - The authors did a decent job of giving a high-level idea of their proofs. - The experimental results on real datasets agree with the theoretical results. Weaknesses: My only concerns are regarding the applications and experiments. Since this is mainly a theoretical paper, these should not be major concerns. - The problem of projective clustering should be better motivated in the introduction. What are some real applications of projective clustering? What are the common choices of $j$? - The experiments are performed with $j\in \{1,2,5\}$, with only $j=1$ presented in the main paper. In my opinion, there should be a bit of discussion on how the experimental results conform to (or deviate from) theory as $j$ increases. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - As the authors mentioned in D.2, projective clustering is severely limited in computational aspects. Has there been any attempt to solve this issue? 
The authors probably should mention this limitation somewhere in the paper as well. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have mentioned several open problems related to this work, but I think the most important problem is whether the projective clustering rate of $\tilde{O}(\sqrt{kj^2/n})$ is optimal for $z\not= 2$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Regarding the questions and weaknesses: To the best of our knowledge, the most commonly used choices of $j$ are small constants. For example, [1] never uses $j$ larger than 4. The reason why we focused on $j=1$ in the main body is that there already exists a phase transition in terms of computational complexity between the standard $k$-median and $k$-means problems and clustering with lines as centers, while lines still admit more positive results than other subspace clustering problems [2,3,4]. In addition, the problem is often considered in computational geometry, as it can be interpreted as finding the ($k$) closest cylinders. But we are open to putting more attention on other values of $j$. In terms of tractability: the $(k,1,z)$ problem is inapproximable up to any factor, even in 2 dimensions, for any choice of $z$ [5]. Similarly, the $(1,j,z)$ problem is APX-hard [6,7] unless $z=2$, in which case the problem is variously known as PCA or low-rank approximation. To run an EM-like algorithm for $(k,j,z)$ clustering, as we did in the experiments, we require a very accurate (read: $(1+\epsilon)$) approximation. The currently best known algorithms for $z=1$ and $n$ points run in time $O(\exp((j\cdot\epsilon^{-1})^{O(z)}))$ [6,8]. Unfortunately, running these algorithms even for small values of $j$ is still very impractical. Thus we relied on heuristic EM-like methods, which is also what [1] suggests. [1] René Vidal: Subspace Clustering. IEEE Signal Process. Mag. 28(2): 52-68 (2011) [2] Dan Feldman, Amos Fiat, Micha Sharir: Coresets for Weighted Facilities and Their Applications. FOCS 2006 [3] Dan Feldman, Amos Fiat, Micha Sharir, Danny Segev: Bi-criteria linear-time approximations for generalized k-mean/median/center. SCG 2007 [4] Pankaj K. Agarwal, Cecilia Magdalena Procopiuc, Kasturi R. Varadarajan: Approximation Algorithms for a k-Line Center. Algorithmica 42(3-4): 221-230 (2005) [5] V. S. Anil Kumar, Sunil Arya, H. 
Ramesh: Hardness of Set Cover with Intersection 1. ICALP 2000 [6] Kenneth L. Clarkson, David P. Woodruff: Input Sparsity and Hardness for Robust Subspace Approximation. FOCS 2015 [7] Amit Deshpande, Madhur Tulsiani, Nisheeth K. Vishnoi: Algorithms and Hardness for Subspace Approximation. SODA 2011 [8] Dan Feldman, Michael Langberg: A unified framework for approximating and clustering data. STOC 2011 --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for addressing my concerns and for the references. As the proved convergence rates are quite general and distribution-free, I think the paper provides significant contribution.
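As a concrete companion to this exchange, here is a small numpy sketch of the $(k, j, z)$ objective, with the subspace-fitting step shown only for the tractable $z = 2$, $k = 1$ case (PCA/low-rank approximation, as noted in the rebuttal). Function names are ours, for illustration; the general problem is solved heuristically with EM-like methods, as discussed above.

```python
import numpy as np

def best_subspace(X, j):
    """Best j-dimensional subspace through the origin under squared loss:
       the span of the top-j right singular vectors of the data matrix
       (the z = 2, k = 1 case, i.e. PCA / low-rank approximation)."""
    _, _, Vt = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    return Vt[:j].T                                 # d x j orthonormal basis

def projective_cost(X, bases, z=2):
    """(k, j, z) cost: each point pays its Euclidean distance, raised to the
       power z, to the nearest of the k candidate subspaces. For a subspace
       with orthonormal basis B, dist(x)^2 = ||x||^2 - ||B^T x||^2."""
    total = 0.0
    for x in np.asarray(X, dtype=float):
        dists = [np.sqrt(max(x @ x - (B.T @ x) @ (B.T @ x), 0.0))
                 for B in bases]
        total += min(dists) ** z
    return total
```

For $k > 1$ an EM-like heuristic alternates this cost's assignment step (nearest subspace per point) with refitting each cluster's subspace via `best_subspace`.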
Summary: This paper investigates generalization bounds for center-based and subspace clustering, providing upper bounds on the excess risk for $(k,z)$ and $(k,j,z)$ clustering and a lower bound for $(k,j,z)$ clustering in the special case of $z=2$. The bounds for $(k,j,z)$ clustering are novel, and the lower bound helps establish the optimality of previously known bounds. Strengths: The paper is well written and the proof sketch is easy to follow. This work provides improvements over existing work in extending $(k,z)$ clustering bounds. Getting a chaining-type analysis to work in this setting is non-trivial. The techniques used for proving upper bounds for subspace clustering are novel, and using multiple dimensionality reductions for the analysis is indeed interesting. The lower bound construction is clean. Overall, I like the results in this paper. I have not verified all the proofs in the appendix, but the proof sketch and flow seem right to me. Weaknesses: The bounds are interesting in certain parameter regimes, specifically when $j$ and $z$ are constants. Limitations in the current approach to extend beyond this are not discussed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1: Could this approach also help obtain improvements when additional structural assumptions are imposed on $\mathcal{D}$? Q2: What changes in the bound for Lemma 4.4 when points do not lie in a low-dimensional space? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please refer to weaknesses and questions. Work is theoretical in nature; negative societal impact is not apparent. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We acknowledge the reviewer's comments. Regarding the limitations in $j$ and $z$: for $z\rightarrow\infty$, no generalization bounds are possible, at least in this problem setup. The main issue is that problems with $z\rightarrow\infty$ are too sensitive to outliers, and even regions with an extremely low density are important. An exponential dependency on $z$ is likely necessary, as for $z'\in \Omega(\log n)$, where $n$ is the sample size, the cost of a $(k,j,z')$ clustering and a $(k,j,\infty)$ clustering is very close. For large values of $j$, there are no limitations in our analysis, barring perhaps a quadratic dependency for $z\neq 2$. The only reason why we mentioned that $j$ is a constant is because this is typically assumed in practice [7]. The main limitations are only in terms of finding a tractable algorithm that solves the problem. For example, for $(k,1,z)$ clustering, the problem is inapproximable even in the Euclidean plane [1]. Exponential-time algorithms that solve these problems exist [2,3], but are mainly of theoretical interest. Q1: It may be possible. For $k$-means, previous work has achieved a learning rate of $O(1/n)$ under certain assumptions, see [4,5]. It may be possible to weaken these assumptions and/or extend these ideas to k-median. It is, for example, a very interesting open problem to show that, assuming ORSS-stability [6], a learning rate of $O(k/n)$ is possible. Doing so will likely require a different approach than what we are currently doing. For $(k,j,z)$ clustering, we are not aware of any prior results. It is likely that some assumptions exist such that this is possible. Q2: This lemma itself does not change. But using our ensemble of dimension reductions, we obtain the bounds for Lemma 4.5, which are independent of $d$. We only use Lemma 4.4 once we know that we can map the points to some low-dimensional space. 
If the dependency on $d$ can be eliminated entirely, one would bypass the necessity for dimension reduction and directly achieve an optimal learning rate for $(k,j,z)$ clustering. [1] V. S. Anil Kumar, Sunil Arya, H. Ramesh: Hardness of Set Cover with Intersection 1. ICALP 2000 [2] Kenneth L. Clarkson, David P. Woodruff: Input Sparsity and Hardness for Robust Subspace Approximation.FOCS 2015 [3] Dan Feldman, Michael Langberg: A unified framework for approximating and clustering data. STOC 2011 [4] C. Levrard. Nonasymptotic bounds for vector quantization in hilbert spaces. The Annals of Statistics 2015 [5] Shaojie Li, Yong Liu: Sharper Generalization Bounds for Clustering. ICML 2021 [6] Rafail Ostrovsky, Yuval Rabani, Leonard J. Schulman, Chaitanya Swamy: The effectiveness of lloyd-type methods for the k-means problem. J. ACM 59(6): 28:1-28:22 (2012) [7] René Vidal: Subspace Clustering. IEEE Signal Process. Mag. 28(2): 52-68 (2011) --- Rebuttal Comment 1.1: Title: Official comment Comment: Thanks for the response and indulging the questions. I believe the paper makes significant contribution so I maintain my score.
Summary: This paper presents several generalization bounds for clustering objectives such as k-median and subspace clustering. When the centers are points or constant-dimensional subspaces, the upper bounds are optimal up to logarithmic terms. For projective clustering, this work gives a lower bound showing that the results obtained by [34] are nearly optimal. A key technique was using an ensemble of dimension reduction methods with guarantees. Strengths: This paper has the following contributions: For center-based objectives, this work shows a convergence rate, which matches the known optimal bounds for k-means, and extends it to other important objectives such as k-median. For subspace clustering with j-dimensional subspaces, this work also shows a convergence rate. For the specific case of projective clustering, which generalizes k-means, a convergence rate is provided. Weaknesses: 1. Insufficient research on related work. This work is not the first random projection clustering work; see, e.g., papers [1,2]. These papers are not cited in this paper. The superiority of this work cannot be verified. Please compare them in terms of theoretical analysis, experimental results, algorithm complexity, etc. [1] Yin R, Liu Y, Wang W, et al. Randomized Sketches for Clustering: Fast and Optimal Kernel $ k $-Means[J]. Advances in Neural Information Processing Systems, 2022, 35: 6424-6436. [2] Yin R, Liu Y, Wang W, et al. Scalable Kernel $ k $-Means with Randomized Sketching: From Theory to Algorithm[J]. IEEE Transactions on Knowledge and Data Engineering, 2022. 2. The experiments are not sufficient. There are many related works in this field. If this paper could be compared with related work through experiments and the performance of the work analyzed in detail, it would be better. 3. The presentation of references is not standardized, such as: [20] P. Chou. The distortion of vector quantizers trained on n vectors decreases to the optimum as Op(1/n). 
In Proceedings of 1994 IEEE International Symposium on Information Theory, pages 457–, 1994. doi: 10.1109/ISIT.1994.395072. [28] V. Cohen-Addad, D. Saulpic, and C. Schwiegelshohn. Improved coresets and sublinear algorithms for power means in euclidean spaces. Advances in Neural Information Processing Systems, 34, 2021. [50] Y. Liu, S. Liao, S. Jiang, L. Ding, H. Lin, and W. Wang. Fast cross-validation for kernel-based algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP:1–1, 01 2019. doi: 10.1109/TPAMI.2019.2892371. 4. The organization and presentation of this paper can be further improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See “Weaknesses”. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We will add the desired aforementioned references. We can also standardize the references. However, we seriously question the validity of the criticism that we did not compare "theoretical analysis, experimental results, algorithm complexity, etc". In the words of the reviewer, "This work is not the first random projection clustering work" and the reviewer seems to imply that the two proposed references are the first to do so. 1. The proposed references focus on kernel k-means. k-means is not the primary subject of this study, having been solved optimally in previous work, and our techniques go beyond what is possible for k-means, and indeed what is being done in those two papers. Our considered objective functions and settings are different. 2. The reviewer seems to think that random projections are a core part of our work. This is false. Nowhere do we use random projections for $(k,j,z)$ clustering in any way and we specifically made a point to discuss shortcomings of all existing dimension reduction methods. 3. Random projections for clustering have been used as early as 2010 by [BZD], and subsequently studied in [CEMMP,MMR,BBCGS], all of which have appeared before the two papers mentioned by the reviewer and, it must be said, both failed to cite any of these earlier works. With the exception of [BZD], which we admittedly forgot and will rectify, these papers *are* in fact cited by us. Thus we strongly believe that the criticism that we did not compare "theoretical analysis, experimental results, algorithm complexity, etc" to the references suggested by the reviewer or that we did not do due diligence with the related work has no basis whatsoever. As for the organization, we would appreciate if the reviewer could provide details. [BZD] Christos Boutsidis, Anastasios Zouzias, Petros Drineas: Random Projections for $k$-means Clustering. NIPS 2010 [CEMMP] Michael B.
Cohen, Sam Elder, Cameron Musco, Christopher Musco, Madalina Persu: Dimensionality Reduction for k-Means Clustering and Low Rank Approximation. STOC 2015 [MMR] Konstantin Makarychev, Yury Makarychev, Ilya P. Razenshteyn: Performance of Johnson-Lindenstrauss transform for k-means and k-medians clustering. STOC 2019 [BBCGS] Luca Becchetti, Marc Bury, Vincent Cohen-Addad, Fabrizio Grandoni, Chris Schwiegelshohn: Oblivious dimension reduction for k-means: beyond subspaces and the Johnson-Lindenstrauss lemma. STOC 2019
Summary: The authors study two clustering problems from the perspective of generalization: -standard center-based clustering objectives such as k-means, k-median and more generally the different norms associated with the objective -projective subspace clustering: where the goal is to find k subspaces such that projecting the points onto them minimises a natural distance objective. The main question addressed for both of these problems is: If we are given a sample set of n data points drawn independently from a fixed (unknown) distribution, and we perform clustering on those n points, how fast will the solution on the sample converge to the optimal clustering on the fixed (unknown) distribution? Strengths: +presentation and literature review are done in a careful manner +results for both clustering objectives are novel and interesting +generalization bounds almost tight Weaknesses: -A lot of the tools needed by the authors seem to have been used in prior works too. Having said that, there are some proposed new techniques for dimension reduction which, though tailored to the problem at hand, seem interesting. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
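The convergence question posed in this summary can be illustrated with a toy simulation (our own, not from the paper): fix one candidate center set and watch the gap between the empirical and population clustering cost shrink as the sample grows. The paper's bounds control this gap uniformly over all center sets, which is the hard part; the simulation only shows the pointwise $\approx 1/\sqrt{n}$ behavior.

```python
# Illustrative sketch (not from the paper): for one fixed 1-D center set,
# the gap between the empirical k-means cost of n i.i.d. samples and the
# population cost shrinks roughly like 1/sqrt(n). The generalization bounds
# under review control this gap uniformly over *all* center sets.
import math
import random

random.seed(0)
CENTERS = (-2.0, 2.0)  # a fixed candidate center set, k = 2

def empirical_cost(points):
    """Average squared distance to the nearest center (the z = 2 objective)."""
    return sum(min((p - c) ** 2 for c in CENTERS) for p in points) / len(points)

# For p ~ N(0, 1) and centers at +-2 the population cost has a closed form:
# E[(|p| - 2)^2] = E[p^2] - 4 E[|p|] + 4 = 5 - 4 * sqrt(2 / pi).
POP_COST = 5 - 4 * math.sqrt(2 / math.pi)

def mean_gap(n, trials=30):
    """Average |empirical - population| cost gap over independent samples."""
    total = 0.0
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(n)]
        total += abs(empirical_cost(sample) - POP_COST)
    return total / trials

print(mean_gap(100), mean_gap(10_000))  # the second gap is markedly smaller
```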
null
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies generalization bounds for clustering problems, including center-based clustering and subspace clustering. For center-based clustering, they recover the optimal bound (up to log term) of $\widetilde{O}(\sqrt{k/n})$; the technique can be extended to k-median also. For subspace clustering, they derive the first bound $\widetilde{O}(\sqrt{kj^2/n})$ for the generalized $(k, j, z)$-clustering objectives, which had been established only for the scenario $z = 2$; for the special case $z=2$, they further refine the bound to $\widetilde{O}(\sqrt{kj/n})$ that meets the current upper-bound in the literature, and prove that this is tight. Experiments are given. Strengths: The paper advances the knowledge on generalization bounds for general subspace clustering, which is the main merit of this paper. The authors also prove the tightness for the case $z=2$. Given the intricate connections between the clustering problem and coreset construction, dimension reduction and so on, the paper did a good job of discussing the relevant work and laying out their proof sketch, while pointing out the challenges for generalizing the results (from $z=2$ as in the paper's reference [34]) to the case $(k, j, z)$. It is then clear that the chaining technique, which has resulted in tight bounds for coreset clustering and especially $(k, j, 2)$ clustering, is not compatible with existing (generic) dimension reduction techniques. From there, the authors design an ad-hoc dimension reduction technique for subspace clustering to overcome the obstacle. Several insights and techniques in the paper are thus new and of independent interest, though such arguments seem applicable only to the clustering case. Weaknesses: While the paper is good in terms of technicality and novelty, it can be made more interesting if the authors discussed in more depth the use cases and importance of $z > 2$.
In fact, the case $z=2$, which would correspond to the $\ell_2$ loss seems much more prevalent and important in practice. The current few-line discussion in the paper is a bit succinct and also does not point to any reference. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N.A. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are addressed. There is no negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. The loss functions for different values of $z$ are often considered in theoretical papers (for instance [26,35]). The most important variant for large values of $z$ is undoubtedly $z\rightarrow \infty$, as it generalizes problems such as the minimum enclosing ball and $k$-center. Nevertheless, for the learning setting, large values of $z$ render the problem infeasible. As a more feasible alternative with provable guarantees, one can interpolate between the $\ell_2$ loss and $\ell_{\infty}$ by considering higher values of $z$, which has been done, for example, in [28]. Moreover, as observed by [*] (whom we forgot to add to the references but should be included), learning bounds for $k$-means in $\ell_1$ space can be obtained by studying $(k,4)$ clustering in $\ell_2^2$ space. We will add these references and discussion in the potential final version. [26] V. Cohen-Addad, D. Saulpic, and C. Schwiegelshohn. A new coreset framework for clustering. STOC 2021 [28] V. Cohen-Addad, D. Saulpic, and C. Schwiegelshohn. Improved coresets and sublinear algorithms for power means in euclidean spaces. Advances in Neural Information Processing Systems, 34, 2021. [35] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. STOC 2011 [*] Lingxiao Huang, Nisheeth K. Vishnoi: Coresets for clustering in Euclidean spaces: importance sampling is nearly optimal. STOC 2020
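The interpolation described in this rebuttal can be seen in a tiny 1-D computation (our own illustration; `kz_cost` is a hypothetical helper, not the paper's code): as $z$ grows, the normalized $(k,z)$ cost approaches the $k$-center (maximum-distance) objective.

```python
# Toy 1-D illustration (our own, not from the paper) of how the (k, z)
# objective interpolates between the l2 loss and the k-center (l_inf) loss:
# the normalized cost (sum of d^z)^(1/z) approaches the maximum distance
# to the nearest center as z grows.
def kz_cost(points, centers, z):
    """(k, z)-clustering cost: sum over points of (distance to nearest center)^z."""
    return sum(min(abs(p - c) for c in centers) ** z for p in points)

points, centers = [0.0, 1.0, 5.0], [0.0]
print(kz_cost(points, centers, 2))              # squared loss: 0 + 1 + 25 = 26
print(kz_cost(points, centers, 50) ** (1 / 50)) # ~5.0, the k-center radius
```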
null
null
null
null
null
null
Robust and Actively Secure Serverless Collaborative Learning
Accept (poster)
Summary: The main contribution of this work is a decentralized learning protocol that is robust to malicious clients attempting to both hinder the learning process (model poisoning attacks) as well as break confidentiality (reconstruction attacks). Robustness against poisoning attacks is achieved by modifying a class of robust aggregation techniques that require the individual clients' updates to be in a restricted space (referred to as "computational surjectivity" in the paper) and verifying the validity of the inputs (compliance with the restrictions) through distributed zero knowledge proofs (DZKPs) and using verifiable secret sharing (VSS) for aggregation. Defense against reconstruction attacks is achieved through malicious-secure multi-party computation (MPC) for aggregation. It has been claimed that all the above protocols can be implemented with reasonable efficiency for small-scale models (with up to 10^6 parameters) applied to simple problems like MNIST. Strengths: 1) The paper addresses an important problem in collaborative learning, which is how to orchestrate collaboration when no party (clients and servers) can be fully trusted. 2) It uses a whole array of cryptographic tools (secure multi-party computation, verifiable secret sharing, distributed zero knowledge proofs) in addition to existing robust aggregation techniques to achieve the stated objectives. Weaknesses: 1) First and foremost, the framing of double robustness (against malicious clients and servers) appears to be incorrect because there are no servers involved in the proposed protocol. Instead, the double robustness can be interpreted as robustness against poisoning and reconstruction attacks. 2) The most critical weakness of the proposed approach is the failure to demonstrate how all the cryptographic components will work together.
While it is true that ZKPs can verify the validity of inputs, malicious-secure MPC and VSS can be used for checking the correctness of aggregation, and random committee selection can be done in a distributed fashion, it is not clear how all these tools can be tied together to form one complete system. For example, let us consider the formation of the aggregation committee. This step itself will require a number of cryptographic operations to ensure that the committee is indeed selected "randomly" without any collusion (otherwise, the binomial approximations used to prove that the committee satisfies the honest majority condition would fail). Secondly, the honesty condition in the committee selection is merely about following the MPC protocol honestly. It does not guarantee that these committee members will not be providing malicious inputs (updates) to the collaborative learning protocol. Similarly, it does not guarantee that the committee members will verify the ZKPs honestly. There could be clients who can follow one part of the protocol honestly, but not some other part. Finally, it is not clear if VSS, malicious-secure MPC, and DZKPs can all be implemented simultaneously. If such an implementation is possible, what are the underlying trust assumptions? Note that each of the above cryptographic primitives operates under its own trust assumptions (in terms of key generation, distribution, etc.). 3) Some of the claims in the paper appear unbelievable. For example, one of the experiments talks about having 20 peers of which there are 10 malicious workers. In this case, how can an aggregation committee with honest majority be formed using random selection? For the same experiment, it has been claimed that security does not impact robustness. Is this true even if all the malicious workers collude with each other? 4) What is the real impact of all the robustness modifications on the final utility/accuracy?
Specifically, what is the loss in accuracy compared to a vanilla FedAvg based FL aggregation by a single server? Also, what is the collaboration gain compared to stand-alone model training? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) Can you demonstrate a pipeline of how all the cryptographic primitives can be tied together to implement the complete system? What are the trust assumptions of such a complete system? 2) Is it possible to come up with new poisoning attacks within the restrictions on the update space, possibly through collusion? 3) How will the proposed approach work in the case of real-world non-iid (heterogeneous) scenarios with distribution shifts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No limitations have been discussed in the paper. The paper does not have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. >**W1. Interpretation of ‘double robustness’.** “Double robustness” communicates protection against malicious attacks mounted from the client side and server side of typical collaborative learning approaches (e.g. federated learning). We guarantee security against attacks mounted from the server side of FL by *delegating the server’s work* to a cryptographic protocol conducted by an ‘aggregation committee’ – a subset of the workers. We use ‘server’ in place of ‘aggregation committee’ in order to align with prior literature. We achieve robustness to many malicious server behaviors beyond reconstruction attacks. In most previous works a malicious server may tamper with the global model by excluding updates of certain clients, adding fabricated updates, or simply altering the global model *arbitrarily* before sending parameters to clients for the next round. **We protect against all of these attacks** and more. Specifically, clients are guaranteed to receive the correct result of the aggregation algorithm on the submitted updates. >**W2a. Security while using multiple cryptographic components** **All building blocks in our framework are secure under universal composability [UC] and thus their compositions (i.e. using them together, either in sequence or in parallel) are secure.** Concretely, in our context maliciously secure MPC protocols and DZKP all take verifiable secret shares as input; by UC these protocols can be composed together without any extra steps or concern. Key generation or distribution are not required assuming point-to-point channels, which are provided by standard TLS. *Also see answer to Q1.* [UC] Canetti, Ran. "Universally composable security: A new paradigm for cryptographic protocols." Proceedings 42nd IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 2001. >**W2b. Formation of aggregation committee.** Secure random selection of committee members is indeed required by our framework. 
This can be achieved using secure coin flipping, a standard and efficient cryptographic technique [CF]. We will add text to the Appendix recalling this building block (omitted for brevity). [CF] Blum, Manuel. "Coin flipping by telephone a protocol for solving impossible problems." ACM SIGACT News 15.1 (1983): 23-27. >**W2c. Security if committee members submit malicious updates or don’t follow protocol.** Our security model accounts for all of these behaviors. Submitting malicious updates (even from committee members) is tolerated via robust aggregation. VSS, malicious-secure MPC, and DZKP ensure aggregation is computed correctly – any cheating will be caught by honest committee members. Finally, the standard composition theorem [UC] applies as all building blocks are secure under composition as explained in W2a. >**W3a. Number of malicious workers in section 6.1.** Typical empirical evaluations in Byzantine robustness literature (e.g. [22]) use higher adversarial proportions than are tolerated by the cryptographic elements of our framework, and it is important to ensure that our underlying aggregation algorithms meet these standards (even when modified for efficiency). Accordingly, we benchmark *accuracy* and *robustness* of the aggregation algorithms *outside* of the cryptographic elements, finding that the robustness guarantees are retained. We will clarify that these experiments use higher adversarial proportions than are tolerated by the end-to-end protocol in the main-text. >**W3b. Security against collusion.** Our security model accounts for *arbitrary* behavior of malicious workers, including collusion. >**Q1. Pipeline of cryptographic primitives in the complete system.** We lay out the whole protocol step-by-step, elaborating on the security assumptions. 1. **Committee Election** All clients use the method from W2b to randomly select the aggregation committee $C$ with malicious security. Our analysis in Appendix C1 guarantees that $C$ has honest majority. 
2. **Client Local Computation.** Each client computes $F^C$ and $F^P$ to obtain a preprocessed local model update given their data, and the global model parameters. 3. **Verifiable Secret Sharing of Updates.** Each client secret shares their update with threshold $|C| / 2$, and sends a share to each committee member. Since $C$ has honest majority, this guarantees that adversaries cannot alter or reveal the client updates. 4. **DZKP of valid update.** Clients prove to the committee that their updates are valid using a DZKP protocol that takes the secret shares as input. E.g. in P2P RSA, client updates must be binary-valued, so committee members create shares of a check value which is guaranteed to be $0$ if the update was binary-valued, while leaking no further information (see Appendix C.3.1 for details). Here security follows from the security of the DZKP and the VSS schemes. 5. **MPC for computing global updates.** Committee members compute $F^R$ in MPC, using client shares as input, to obtain a global model update. E.g. in P2P RSA, committee members sum the shares of all client updates using the standard secure addition protocol on Shamir secret shares. The committee members then reconstruct the shared sum to obtain the global update (correct reconstruction is guaranteed by VSS). As above, security follows from the security of the MPC and VSS schemes. 6. **Global updates sent to clients.** All committee members send the recovered value to all clients. Since $C$ has honest majority, the clients are guaranteed to recover the correct global update by accepting the majority result. >**Q2. New poisoning attacks possible within the setting** We guarantee correct computation of the underlying robust aggregation algorithm. Thus our work **only reduces** the space of possible poisoning attacks – any new attack developed in our setting would also be possible in the standard FL setting. >**Q3. Non-iid Data:** We show experiments on non-IID data (see attached pdf). 
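Steps 3 and 5 of the pipeline above rely on the additive homomorphism of Shamir secret sharing: committee members sum the shares they hold locally and can reconstruct only the aggregate, never an individual update. The following is our own minimal sketch of that property (toy field size, no commitments/verifiability, no secure channels), not the paper's VSS scheme itself:

```python
# Minimal, insecure-for-production sketch (illustrative only) of the additive
# homomorphism of Shamir secret sharing used for secure aggregation:
# each committee member sums its shares locally; any t+1 members reconstruct
# the aggregate, while no individual client update is ever revealed.
import random

P = 2**61 - 1  # a prime modulus; real deployments use a properly sized field

def share(secret, n, t):
    """Split `secret` into n shares of a random degree-t polynomial; any t+1 reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over the prime field."""
    total = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

# Three clients secret-share their (toy scalar) updates with a 5-member committee.
updates = [7, 11, 20]
all_shares = [share(u, n=5, t=2) for u in updates]
# Each committee member i sums the shares it holds -- a purely local step.
summed = [(i + 1, sum(s[i][1] for s in all_shares) % P) for i in range(5)]
# Any 3 committee members can reconstruct the aggregate update.
print(reconstruct(summed[:3]))  # 38
```

Real VSS layers commitments on top of this so that cheating shareholders are caught, which is what the malicious-security argument in the rebuttal depends on.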
--- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: I thank the authors for their detailed response, which address some of my core concerns (regarding the composition of the various cryptographic primitives and the overall pipeline). However, I would like to see a more clear presentation of the threat model and trust assumptions in the final version. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We agree these will help the presentation and will alter the main-text Section 3 along with additions to Appendix. The main-text additions aim to clearly disambiguate the different components and highlight the key assumptions. In the appendix, we provide the full details. We made some minor alterations to the existing Section 3, and included more details at its end. The new section 3 is shown below. “ Collaborative learning is conducted among a set of parties performing one of two roles: a *client* (or *worker*) who performs learning on a local repository of data, or a *server* that aggregates the many client updates. Our protocol differs in two main ways: first, it is conducted among a set of *peers* (parties) which can perform either role, and second, the role of the *server* is performed by a subset of peers termed the *aggregation committee*. To align with prior literature, we sometimes refer to peers as clients or servers when they are performing those respective roles. *We consider a malicious threat model where clients and servers may perform arbitrary adversarial actions to interfere with the protocol.* Malicious behavior in the two roles may include, but is not limited to the following. 1. **Malicious Clients** may attempt to (1) lower the quality of the trained model by sending distorted model updates. This may take the form of both (a) intentional model poisoning attacks, and (b) unintentional problems such as errors in computation, and skewed or incorrect local data sets. 
They may also attempt to (2) steal information about the other peers’ data, i.e. break confidentiality, e.g. by colluding with other malicious peers and sharing transcripts of the protocol execution. 2. **Malicious Servers / Committee Members** may attempt to (1) reconstruct individual data points from the clients' updates, thus breaking data confidentiality, which can be achieved by arbitrarily modifying model parameters or colluding with other parties (Committee Members or Clients), (2) inappropriately change the shared model by e.g. omitting updates from selected clients, adding in bogus updates, or otherwise altering the global model updates. We compose multiple cryptographic primitives, including secure committee election, verified secret sharing, distributed zero knowledge proofs, and secure multiparty computation. The assumptions and guarantees of the individual primitives are laid out in the Appendix C6, and their composition is secure under universal composability [UC]. Our overall protocol operates under the standard assumptions of authenticated point-to-point secure channels between peers and a bounded proportion of adversarial peers (see Appendix for details). The following are the formal guarantees of our protocol. - **Correctness of aggregation.** Given clients that submit local updates $x_1, x_2, …, x_n$, the returned global update will be equal to $F^R(x_1, x_2, …, x_n)$, where $F^R$ is a publicly known function for update aggregation. See the following section for details. - **Confidentiality of client updates.** During protocol execution, all parties gain no information about any individual client update $x_i$ beyond what is implicitly revealed by the resulting aggregation $F^R(x_1, x_2, …, x_n)$. - **Robustness to poisoning.** An accurate model will be trained even in the presence of some subset of the clients which submit poisonous updates which may take arbitrary values. 
Our framework compiles existing robust aggregation algorithms into a stronger security model, and thus the details of this guarantee depend on the underlying algorithm. - **Malicious security.** The above conditions hold even in the presence of a subset of parties that may perform arbitrary malicious behavior, including but not limited to collusion between malicious peers, attempts to deviate from any part of the protocol, and submission of poisonous local updates. “ [UC] Canetti, Ran. "Universally composable security: A new paradigm for cryptographic protocols." Proceedings 42nd IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 2001.
Summary: This paper proposes a framework building on existing secure aggregation schemes to protect users from a malicious server while protecting the training from malicious users. If the MPC scheme used is secure, then it shows that their framework is doubly robust. It illustrates this approach by leveraging 3 existing algorithms (robust stochastic aggregation, centered clipping (CC), and FLTrust (FLT)) for running a gradient descent on MNIST and EMNIST. Strengths: - The paper mixes several existing ideas and techniques to achieve its "doubly robust" guarantees - The paper stays within a decent cost that allows training a small machine learning model and reports the runtime of their experiments (fig 5) - The paper tackles the problem of float updates and is robust to poisoning Weaknesses: - The paper does not seem to mention the possibility of users dropping out, which is quite common in FL. A scheme that doesn't allow it is quite impractical - The paper does not discuss the leakage that comes from the aggregation itself, which the nodes will observe. In ML we know that the aggregated gradients still leak information about individual contributions and this problem is often addressed by using Differential Privacy. Here, it seems that the authors overlook this issue. - The paper does not seem primarily addressed to the ML community but rather to the security community. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - could you clarify if you can handle dropout and how? - could you discuss the risks of privacy leakage in your analysis? - are the two curves equal in Fig 4? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors include a paragraph.
However, it should be made clearer that the security guarantees rely on the existing MPC schemes' guarantees and do not protect against leakage from the learnt model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. >**User Drops:** We appreciate the question, discussion of user dropout is a great addition to our paper. We show that our framework can indeed tolerate substantial dropout. We will add the following analysis to the end of Section 4 and the Appendix. >**Tolerance of Users Dropping:** Our protocol includes two areas where peers must collaborate on the cryptographic protocol: the (client) work of computing updates and the (server / committee) work of aggregating updates. In terms of the former, our protocol tolerates any number of clients dropping, as long as the pool of clients that stays online meets the assumptions of the underlying robust aggregation algorithm. In this case the output of our protocol would be equivalent to as if only the subset of online clients submitted updates. In terms of committee member dropout, our protocols can tolerate drop out of committee members by proportionally increasing the committee size (due to the reconstruction guarantees of VSS). We analyze the committee size required to tolerate a given level of drop out in the text below, which we will add to the Appendix. We find that substantial levels of drop out can be tolerated with only modest increases to committee size. With the appropriate increases to committee size, committee member drop out (up to the specified proportion) would have no impact on the output of the protocol. >**Appendix – Tolerance of Committee Members Dropping:** In general, our protocol requires that the number of adversaries in the aggregation committee be kept below a certain proportion in order to guarantee security. The committee size is chosen as the smallest number of parties such that (except with negligible probability) a random sample from the pool of clients has less than $1/2$ adversarial proportion (in the case of RSA, CC), or less than $1/3$ (in the case of FLT). 
To tolerate drop out of honest committee members, we simply need to select an increased committee size such that the proportion of adversaries in the committee stays beneath these thresholds even if some number of the honest parties drop out. In particular, if we choose a committee size which guarantees (except with negligible probability) that a random sample from the pool of clients has less than $1/2 - (q/2)$ adversarial proportion, where $q$ is the proportion of tolerated dropouts from honest parties, we will guarantee that the adversarial proportion with reference to the number of committee members that stay online is at most $1/2$. We can find the necessary committee sizes by reasoning with the binomial distribution similarly to our original analysis of committee size. **For example, to tolerate 5%, 10%, and 15% dropout of honest committee members, RSA and CC would require committee sizes of 53, 60, and 69 respectively (compared to 46 with no dropout tolerance), and FLT would require 157, 218, and 326 respectively (compared to 121 with no dropout tolerance).** >**Privacy versus Security and Robustness:** We thank you for your valid concerns surrounding data privacy. We agree that our scheme provides no *data anonymization* guarantees, i.e., what can be inferred from observing the aggregated gradients, as would be protected by differential privacy (DP). We discussed this in Supplemental Appendix B and will use the extra page to move it to the main-text. Our paper provides guarantees on security (i.e. *data confidentiality*) and robustness, which are distinct from DP and of independent importance. Consider providing DP without any form of cryptographic security guarantee: this would either require local DP (which is useless in machine learning due to utility loss) or a trusted server which is often not a practical assumption and is independently vulnerable to other attacks even with DP (see our Table 1). 
We further remark that [7] shows that unless the server is trusted, DP is largely ineffective against reconstruction attacks whereas, as our work shows, security prevents these attacks. Though our work also provides some limited data anonymization guarantees by ensuring only the aggregated gradients are revealed, we opt to not discuss this in the paper to not create ambiguity in our contributions. We believe it is important future work to determine how to combine our work with differential privacy, which has in the past been shown to require non-trivial analysis [20] or an additional honest-but-curious privacy guardian [15]. Finally, we note that we limit our usage of the term “privacy” in the main-text to avoid confusion surrounding cryptography and differential privacy. We hope that by moving the limitations to the main-text, this will further facilitate their distinction. >**Question about Fig 4** Yes, the curves are roughly equal, indicating that substituting floats for fixed points does not degrade robustness. --- Rebuttal Comment 1.1: Title: Raising my score as authors addressed my concerns Comment: I thank the authors for their clear and detailed rebuttal. I am happy to see that my question on dropouts led the authors to extend their results to take this possibility into account, which makes their contribution much more practical. I am also happy that the clarification on the sense of privacy will be in the main text. Finally, the authors clarified and extended the experimental part, so the contribution seems quite strong.
(Please change your table formatting when including the new experiments in the paper) --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the especially helpful feedback and for the engagement with our rebuttal. We will indeed incorporate these suggestions into the final version of the text.
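The committee-size reasoning in the rebuttal above can be sketched numerically. This is a hypothetical illustration, not the authors' code: the adversarial fraction of the client pool (`pool_adv`) and the acceptable failure probability (`fail_prob`) are assumed parameters, since the rebuttal does not state the exact values behind its reported committee sizes.

```python
from math import ceil, comb

def tail_prob(m, p, k):
    # P[Binomial(m, p) >= k]: probability that at least k of the m sampled
    # committee members are adversarial when each client in the pool is
    # adversarial with probability p.
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

def min_committee_size(pool_adv, dropout, fail_prob=2**-20, max_m=2000):
    # Smallest committee size m such that, except with probability < fail_prob,
    # the committee's adversarial fraction stays strictly below 1/2 - dropout/2,
    # so adversaries remain a minority of the members still online even after
    # a `dropout` fraction of honest members leave.
    threshold = 0.5 - dropout / 2
    for m in range(1, max_m + 1):
        k_bad = ceil(m * threshold)  # fewest adversaries that break the bound
        if tail_prob(m, pool_adv, k_bad) < fail_prob:
            return m
    raise ValueError("no committee size up to max_m suffices")
```

Tolerating more dropout lowers the threshold, so the required committee size can only grow, matching the qualitative trend (53, 60, 69 vs. 46) in the rebuttal.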
Summary: The paper proposes a generic P2P learning framework that is simultaneously secure against malicious servers and robust to malicious clients. This is achieved by combining peer-based secure aggregation with existing robust aggregation techniques in an optimized way. The P2P approach eliminates the centralized server and instead has peers that take turns aggregating the updates. This removes the power asymmetry that allows servers to breach privacy. Strengths: * The approach is shown to be computationally efficient, training models with up to 1 million parameters on standard datasets with 100s of peers. * Strong approach towards P2P robustness Weaknesses: * The experiment on IID EMNIST is rather weak. The paper should have more convincing empirical experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. >**Clarification of contributions** We emphasize that the primary contributions of our work are: 1. Proposing a strengthened security model for collaborative learning that protects against malicious behavior from both clients and servers 2. Providing a flexible framework for realization of *existing* robust aggregation algorithms within this security model 3. Implementing three efficient examples of our framework applied to *existing* robust aggregation algorithms As such, the primary aim of our empirical evaluation (in particular Figures 4 and 12) is not to test the performance of the underlying robust aggregation algorithms, but rather to show that our protocols do not degrade in robustness in comparison to the centralized setting. Beyond this, the relative strength in performance of the robust aggregation algorithms (RSA, CC, FLT) is a matter of concern for those respective works. Byzantine robust aggregation is an active area of research, and our framework is intentionally modular so that new aggregation algorithms with better performance can be lifted to our security model easily. >**Additional Empirical Evaluation** Our approaches directly leverage robust aggregation algorithms, which perform as well in our case as in the FL setting. As requested, we do include an additional experiment with CC on non-iid EMNIST and iid CIFAR100 finding that it performs as well as in the FL setting. *Please also see the attached PDF to the main response with additional experimental results.* | Dataset | Type | No attack | sf attack | lf attack | ipm attack | alie attack | |----------|--------|-----------|-----------|-----------|------------|-------------| | EMNIST | IID | 91.68 | 91.09 | 90.83 | 91.2 | 91.43 | | EMNIST | nonIID | 91.69 | 91.07 | 90.88 | 91.19 | 87.2 | | CIFAR100 | IID | 48.04 | 33.77 | 44.94 | 45.64 | 32.77 |
Summary: Collaborative ML methods protect user data; however, they typically remain vulnerable to either the server or clients deviating from the protocol. Both clients and servers require a guarantee when the other cannot be trusted. This paper proposes a learning scheme that is secure against malicious servers and clients. Strengths: - The paper is tackling a very important problem in machine learning that is very relevant to the NeurIPS community. - The paper provides the first collaborative protocol that is robust to both malicious clients and servers and operates under a malicious threat model. - Almost any aggregation algorithm can be converted to the proposed P2P security model -- it is very flexible. The authors do this for some popular and widely used methods. - The authors prove the cryptographic security of their protocol. - The presentation of the paper is very good and it is easy to follow. - The authors perform extensive experiments demonstrating the byzantine robustness benefits and computational efficiency (and their tradeoffs) and their method performs well in all metrics. Weaknesses: This is a good paper and I think it would be of interest to the NeurIPS community. However, the idea of this paper is very simple. This is not a weakness on its own, but it raises a question about the optimality of the proposed method. Although the model performs well in experiments, I am not sure about its optimality. It would be good if the authors could comment on this. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review! >**Optimality of our Approach:** This is an interesting question. There are two axes under which we may consider optimality: robustness and security. Unfortunately, along either there exist no strong lower bounds which could be used for such an argument. We elaborate below. In the area of robustness, learning in the face of corrupted data remains an open challenge that has numerous different approaches depending on the exact threat model. This actually serves as motivation for our work which we discuss, e.g., in the conclusion—we design our protocol to be generic so as to enable broad classes of future robust aggregation algorithms to be instantiated in our protocol. Though we cannot claim or discuss optimality in terms of robustness, we hope our empirical evaluation of the protocol highlights that it is both computationally efficient and compatible with many robust aggregation algorithms, enabling the best robustness guarantees to be extended efficiently. Regarding security, we assume a security model with stronger guarantees than (to our knowledge) any previous work in collaborative learning and design a flexible framework for realizing these guarantees. The security model specifies malicious security against clients and servers, along with protection from poisoning attacks. Regarding the performance of our cryptographic protocols, although we cannot argue optimality due to the lack of lower bounds, it is unlikely that the performance could be further improved significantly. We used state-of-the-art cryptographic building blocks and took full advantage of the underlying learning algorithm. However, there could be other trade-offs between security and performance: if we assume a weaker security model, e.g., assume that fewer committee members can be corrupted by a malicious adversary, further performance improvements could be explored. 
As such, the optimal solution depends on the exact application setting. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response. I have read the rebuttals and all the other reviews. I still believe this is a paper with solid contributions and will keep my score as is. I think the authors also did a very good job in the rebuttal period. I believe including some parts of the rebuttal in the final version would be very beneficial -- especially the response to Q1 by reviewer 5nT4. --- Reply to Comment 1.1.1: Title: Thank you for your response & recommendation Comment: We are grateful to the reviewer for the positive evaluation and for upholding the high score for our submission. Per the suggestion to include the response provided to Question 1 from Reviewer 5nT4, we will incorporate this material in the camera-ready version of the main paper by adding the allowed extra page of content, if our work is accepted. We believe this will enable us to fully address the question in the main text, benefiting readers and improving the completeness of the work. We appreciate the reviewer taking the time to provide this recommendation to strengthen our paper, and for recognizing the value of our research contributions.
Rebuttal 1: Rebuttal: We thank the reviewers for their positive feedback, insightful comments, and clarifying questions. Your reviews helped us to improve the paper. Particular thanks are due to reviewers 7EiB and 8qrt – the former facilitated an analysis of user dropout. We find that our framework can tolerate substantial dropout with only a modest increase to committee size. The latter facilitated discussion of a setting where at each round, clients are subsampled from a large user pool. We find that for little additional overhead, this setting would enable a much larger set of users to participate in training. Both discussions improve the practicality of the proposed framework. Overall, we tackle an important problem of how to provide secure and robust collaborative learning protocols and show how to do so under the strongest setting where collaborating parties can act maliciously (reviewers 8qrt, HYce, 5nT4). This is the first framework designed as a generic compiler that can convert robust aggregation algorithms to efficient approaches in the malicious P2P learning setting (reviewers 8qrt, HYce). Our approach combines many cryptographic tools (secure multi-party computation, verifiable secret sharing, distributed zero knowledge proofs) in a generic yet tailored way to existing robust aggregation techniques to achieve the stated objectives (reviewer 5nT4). The approach is shown to be computationally efficient, training models with up to 1 million parameters on standard datasets among 100s of peers (reviewer taqA). We illustrate the approach by leveraging 3 existing robust aggregation algorithms (reviewer 7EiB). Overall, the submission “has a very nice flow, is technically sound with interesting notation and proofs, and has experiments to back up the claims and efficiency of the proposed algorithms” (reviewers 8qrt, HYce). Pdf: /pdf/d41f71284fb7033cd976eb5ea48bd30aac651497.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors provide a solution for collaborative learning, where there are risks associated with collaboration due to clients or server(s) acting maliciously. Malicious clients can submit corrupted updates, which leads to the failure of creating a useful shared model. Similarly, the server can also act maliciously, e.g., in data aggregation. The authors propose a peer-to-peer (P2P) learning framework that provides a doubly robust protocol against malicious clients and server(s) to train a shared model without a central party. The paper has a very nice flow, is technically sound with interesting notation and proofs (I enjoyed reading), and has experiments to back up the claims and efficiency of the proposed algorithms. The framework is designed as a generic compiler that can efficiently convert robust aggregation algorithms to the P2P learning setting with the guaranteed malicious-secure protocol. Strengths: The paper is easy to follow. I enjoyed reading this paper. The paper presents an interesting and important setting where without a central party we want to have a doubly robust protocol against malicious clients and server(s). The contributions are solid, and worth sharing with the world. The paper is technically sound. The notations and graphs are clear, making it easy to follow the paper. Weaknesses: I do not see much. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My question is related to scalability. In a cross-device setting with a pool of millions of clients available, where we select on the order of 1000-5000 clients per FL round, are we looking at a million-client runtime, or a thousand-client runtime? If it's millions, is it going to be on the order of days per round? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The only weakness I can see is the scalability of the solution and the runtime. I see runtimes on the order of hours for thousands of clients. Not sure how this can be scalable for very large setups. For example, a per-round CPU time of 46 seconds with 10^5 parameters (a tiny model) when trained by 1000 peers shows the limitation of the scalability (and this is done on pretty strong hardware, an AWS m5.metal instance). Can the authors comment on this? Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review! >**Question regarding scalability when subsampling clients from a larger pool** This is a great question. To maintain security in this setting, it is necessary for the subsample of clients and selection of committee members to be known by participants in each round. This could be efficiently accomplished using standard techniques such as secure coin flipping [CF], e.g., before the protocol commences. Besides this addition, the computational cost of our protocol in this setting is the same as running it among the number of clients subsampled for a given round (in Reviewer’s example, 1000-5000 clients) for which the empirical results can be seen in Figure 5 (b). To answer the question directly, we are looking at **thousand client** runtime in this case rather than million client runtime. [CF] Blum, Manuel. "Coin flipping by telephone a protocol for solving impossible problems." ACM SIGACT News 15.1 (1983): 23-27. >**Limitations – Scalability** Indeed, there are still limitations that prevent full pre-training of large models. We note that this study is the first work in our security model, and further optimizations are likely possible. For example, taking advantage of parallelization, parameterizing the size of field elements used for Shamir secret sharing, and/or training using lower-precision fixed points may all provide substantial increases in efficiency. These finer-grained optimizations may be interesting to explore in future work. However, we note that the present efficiency of our scheme would enable parameter-efficient techniques [LoRA] for fine-tuning of large models, central pretraining with downstream tuning (e.g., [Gboard]), or pre-training of medium to smaller models. [LoRA] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). [Gboard] Xu, Zheng, et al. "Federated Learning of Gboard Language Models with Differential Privacy." 
arXiv preprint arXiv:2305.18465 (2023). --- Rebuttal 2: Title: Great paper, considering the rebuttal and other comments Comment: I appreciate the complete response from the authors to my concerns and the other reviewers'. I read all the comments and reviews from the other reviewers, and I decided to keep my score. This is a solid and strong submission, and I think it would be beneficial for the community to hear about it. Some suggestions: 1. Include in the paper the following, which you stated in the rebuttal: > To maintain security in this setting, it is necessary for the subsample of clients and selection of committee members to be known by participants in each round. This could be efficiently accomplished using standard techniques such as secure coin flipping [CF], e.g., before the protocol commences. 2. (can ignore this if you like) The phrase "doubly robust" is hard for someone to understand without reading the paper. I believe another simpler name could attract more people to this paper :) --- Rebuttal Comment 2.1: Title: Rebuttal, additional statement, and the title Comment: We appreciate the reviewer's response, engagement in the discussion, and the prompt assessment of the rebuttal. We included the recommended statement in Section 6.2 in the main paper. Thank you also for the suggestions regarding the title - we are considering a few options, for example: "Maliciously Secure and Robust Peer-to-Peer Collaborative Learning" or "Efficient Maliciously Secure and Robust Peer Learning". We are also open to other recommendations.
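The secure coin flipping cited in the rebuttal ([CF], Blum 1983) can be illustrated with a commit-then-reveal construction. This is a textbook-style sketch under assumed simplifications (a hash-based commitment, two parties, a single bit), not the construction the authors would necessarily deploy:

```python
import hashlib
import secrets

def commit(bit):
    # Commit to a bit by hashing it together with a random nonce; the
    # commitment hides the bit yet binds the committer to it.
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def verify(commitment, nonce, bit):
    # Check that an opened (nonce, bit) pair matches the earlier commitment.
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == commitment

def coin_flip():
    # Party A commits to a random bit and sends only the commitment.
    a = secrets.randbelow(2)
    commitment, nonce = commit(a)
    # Party B, seeing only the commitment, replies with its own bit.
    b = secrets.randbelow(2)
    # A opens the commitment; B verifies before accepting the outcome.
    assert verify(commitment, nonce, a)
    return a ^ b  # neither party alone can bias the XOR
```

In the subsampling setting discussed above, such a jointly unbiased coin could seed the public choice of the per-round client subsample and committee.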
null
null
null
null
null
null
ARTree: A Deep Autoregressive Model for Phylogenetic Inference
Accept (spotlight)
Summary: This work proposes a new approach, ARTree, for obtaining more complex tree-topology approximations by combining deep autoregressive models and GNNs. The existing works in black-box VI for phylogenetic inference predominantly rely on SBNs as approximations of the tree-topology distribution, but here ARTree is shown to be superior in terms of KL divergence to the true distribution (obtained via MCMC). ARTree, in contrast to SBNs, does not rely on presampled tree topologies and thus explores the full tree-topology space, not a subset. Strengths: Due to the combinatorial nature of the tree-topology space, designing efficient and powerful density approximation algorithms of the tree-topology posterior is arguably the most complicated aspect of Bayesian phylogenetic inference. In the VI setting, SBNs have been the SOTA algorithms, but, as is mentioned in the paper, SBNs rely on presampled tree topologies. This means SBNs require candidate trees from other tree-sampling algorithms, making them not stand-alone. The algorithm proposed in the paper is an important contribution to a field that is receiving increased interest in the ML community (most VI for phylogenetics papers have been published in NeurIPS, ICLR and UAI). Additionally, there have not been many attempts to improve over SBNs. As such, the proposed work appears to be relevant and could be of significant interest to the NeurIPS community. The writing and clarity of the paper is good, although I have some questions below. The experiments are performed using appropriate baselines (SBNs and VBPI w. SBNs). However, I am missing some important references like VaiPhy (Koptagel et al., selected as oral at NeurIPS 22) and VCSMC (Moretti et al., from UAI 21). More on this below. Weaknesses: **Related work**: As this paper proposes a new algorithm for doing VI for phylogenetics, VaiPhy by Koptagel et al. (2022) should be referenced, preferably also VCSMC by Moretti et al. (2021). These works are highly related. 
I do not consider it necessary to experimentally compare ARTree with these algorithms though, as they do not compare favorably with VBPI empirically. Furthermore, in the VaiPhy paper a sequential algorithm (SLANTIS) is designed for sampling tree topologies. I believe it can be seen as a conditional sampler in the sense that it decides whether to replace (labeled) edges in a topology based on a Bernoulli probability, given the existing edges in the topology. It does not use GNNs, and the algorithms are clearly distinct as SLANTIS uses precomputed maximum spanning trees. However, I think a conceptual distinction should be included in the paper, nonetheless. **Evaluating tree topologies**: It is clear that the likelihood of a topology, $\tau$, simulated by ARTree can be efficiently evaluated. At each decision, the probability of making that decision can be computed and then the probability of proposing $\tau$ is the product of these intermediate probabilities (as in Eq. 5; is this correct?). Now, suppose I give you another tree topology, $\tau'$, that was not simulated by ARTree. Can ARTree compute the likelihood of $\tau'$, i.e. $Q(\tau')$? To me it is not clear how this is achieved by reading the paper. This seems like an important downstream task which SBNs can handle. If it is possible, I recommend emphasizing this feature, and how a practitioner would achieve it. If it is not possible, I think that this missing feature should be discussed. Note that I do not regard this to be a crucial feature, ARTree is still an impressive algorithm. However, it would add transparency and promote future work on ARTree. **"Domain expertise"**: It is repeated multiple times that a key flaw of SBNs is the required domain expertise. I do not see, and it is not explained, how ARTree diminishes this requirement? In fact, how do SBNs require more domain expertise? 
Running MrBayes or UFBoot to get the presampled tree topologies can be done without understanding these algorithms in depth, as the software tools are very neatly provided. Especially, does not implementing and understanding ARTree require the same domain expertise from the practitioner as implementing and understanding SBNs? This should be carefully clarified in the text. Alternatively, I think, the "domain expertise" argument should be removed as it does not add information as the paper is written at the moment. There are plenty of compelling arguments for ARTree over SBNs as is. **Experiments**: In the caption of Table 2: "The KL results are averaged over 10 independent trainings." I was expecting to see standard deviations of these 10 KL numbers. How come they are not included? Also in the same caption: "For ELBO, LB-10, and ML, the results are averaged over 100, 100, and 1000 independent runs". Do independent runs imply "independent trainings" here too? If not, why not use the 10 trained models used for the KL values to get uncertainties w.r.t. the learned model parameters? Finally note that it says "100, 100 and 1000", which I figure is a typo. **Stds of ML results**: I am aware that previous works reward low-variance estimators of the marginal log-likelihood, as is done here in Table 2. My guess is that this is an appropriate way to compare models that provide estimates of lower bounds of the ML with models that can harshly over-estimate the ML, like the stepping-stone algorithm used with MrBayes. However, here the comparison is between two VBPI models, using either ARTree or SBNs, which both use estimates of lower bounds. Could the authors please expand on why the standard deviation is then an appropriate measure of the success of the models? For instance, I can come up with models with estimators that have zero standard deviation by sacrificing bias. Does this make these models "better"? 
If my concerns above are discussed and clarified, I may be willing to raise my score. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **Critique of SBNs**: Could the authors please point to where in Zhang and Matsen (2022) I may find the discussion regarding "the limited parent-child subsplit patterns in the observed samples" (line 37 in the submission)? This is an important argument for ARTree which I have not seen investigated before. To me it could make sense to include this discussion in the Appendix of this submission, so as to make the paper more stand-alone. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and helpful feedback! We address your concerns and questions as follows. **Weakness 1**: Related work **Response**: Thanks for suggesting these related works! We will reference them and clarify the distinction between SLANTIS and ARTree in our revision. More discussions can be found in the global response. **Weakness 2**: Evaluating tree topologies **Response**: Yes, ARTree can compute $Q(\\tau')$ even if $\\tau'$ is not simulated by ARTree. It is this property that allows us to calculate the KL divergence to the ground truth. This fact comes from the decomposition process in Appendix C, where we will add more detailed explanations in our revision. To calculate $Q(\\tau')$, we can sequentially remove the taxa one by one in reversed order, starting from the last taxon being added (Algorithm 2). This way, we would get the corresponding decision sequence in linear time (Lemma 1) and use it to compute the likelihood of $\tau'$. **Weakness 3**: ''Domain expertise'' **Response**: Thanks for raising this issue! In terms of domain expertise, what we indeed want to emphasize is that SBNs require a pre-selected sample of good candidate trees to provide subsplit supports for parameterization. Although running MrBayes or UFBoot does not require much domain expertise, the choice of using MCMC or bootstrapping indeed demands domain expertise (see Section 4.2 of Zhang and Matsen [2022]). Moreover, those are just some heuristic approaches that are commonly used so far, and designing efficient support estimation methods for SBNs, especially when the posterior is diffuse, remains an unsolved challenge for SBN-based VBPI. We apologize for not making this clear enough. In our revision, we will adopt your suggestion to remove the ''domain expertise'' argument for better clarification (e.g., replacing it with more specific descriptions such as pre-selected tree topology samples). 
**Weakness 4**: Experiments **Response**: We are sorry that our description confused you. Let us illustrate our experiments more clearly. For each dataset, we repeat the experiment 10 times (i.e., ''trainings''). For the $i$-th repetition, we: (i) calculate the KL divergence (a deterministic number) denoted by $KL_i$; (ii) estimate the ELBO 100 times (i.e., ''runs''), denoted by $ELBO_{i,1},\ldots, ELBO_{i,100}$, whose sample mean is $mean_{ELBO,i}$ and sample std is $std_{ELBO,i}$. We then report $mean_{KL}=\sum_i KL_i/10$, $mean_{ELBO}=\sum_i mean_{ELBO,i}/10$, and $std_{ELBO}=\sum_i std_{ELBO,i}/10$ in Table 2. The results for LB-10 and ML are obtained in the same way as for the ELBO. Therefore, the stds of ELBO, LB-10, and ML across different runs reflect the variance of the variational lower bounds, which is a common concern in VI. The std of the KL divergence (see the following table) across different trainings reflects the uncertainties w.r.t. the learned model parameters, and we did not report it due to its different meaning. Table: KL divergence averaged over 10 independent trainings with standard deviation in brackets. |-|DS1|DS2|DS3|DS4|DS5|DS6|DS7|DS8| |-|---|---|---|---|---|---|---|----| |SBN|0.0707(0.0002)|0.0144(0.0019)|0.0554(0.0082)|0.0739(0.0012)|1.2472(0.0113)|0.3795(0.0015)|0.1531(0.0044)|0.3173(0.0257)| |ARTree|0.0097(0.0006)|0.0004(0.0001)|0.0064(0.0003)|0.0219(0.0014)|0.8979(0.0175)|0.2216(0.0014)|0.0123(0.0020)|0.1231(0.0078)| Finally, we want to clarify that "100, 100, and 1000" is not a typo. In fact, we use 1000 runs for the ML estimation for a more accurate estimation of the variance. **Weakness 5**: Stds of ML results **Response**: In our experiments, the ML (in nats) was estimated with importance sampling $$\hat{L}_K = \log\left(\frac{1}{K}\sum\_{i=1}^{K}\frac{P(Y,q_i,\tau_i)}{Q(q_i,\tau_i)}\right)$$ using $K=1000$ samples $(q_i,\tau_i)\sim Q(q,\tau)$, where $Q(q,\tau)$ is the variational approximation. 
With that many samples, the ML estimate $\hat{L}_K$ is more like an exact ML $\log p(Y)$ than a lower bound of it. This strategy is commonly used to assess the ML of models (Normalizing Flow: http://proceedings.mlr.press/v37/rezende15.pdf; VIMCO: http://proceedings.mlr.press/v48/mnihb16.pdf). Moreover, the variance of an importance sampling estimator is often used as a measure of the approximation accuracy of the importance distribution to the target (e.g., in adaptive importance sampling methods). The importance sampling estimator $\hat{L}_K$ is valid only when $Q(q,\tau)=0\Rightarrow P(Y,q,\tau)=0$. For estimators that have zero standard deviation by sacrificing bias, it seems that this condition is violated because $Q(q,\tau)$ would collapse to a point. Finally, please note that for ML in VBPI, the comparison of the stds is reasonable only when the means are in their correct range (see the following table). Table: ML estimates with std in the brackets. |-|DS1|DS2|DS3|DS4|DS5|DS6|DS7|DS8| |----|----|----|----|----|----|----|----|----| |ARTree|-7108.41(0.19)|-26367.71(0.07)|-33735.09(0.09)|-13329.94(0.17)|-8214.59(0.34)|-6724.37(0.46)|-37331.95(0.27)|-8650.61(0.48)| |MrBayes stepping-stone|-7108.42(0.18)|-26367.57(0.48)|-33735.44(0.50)|-13330.06(0.54)|-8214.51(0.28)|-6724.07(0.86)|-37332.76(2.42)|-8649.88(1.75)| **Question**: Critique of SBNs **Response**: Several relevant expressions can be found in Zhang and Matsen [2022]. For example, in the first two lines of page 9, it reads that ''$P_{\pi_i}(j\to i)$ is the conditional probability for the parent-child subsplit pair representing the local splitting pattern of ...''. Our use of the phrase ''parent-child subsplit patterns'' follows this expression. In the last 11 lines of page 11, it reads that ''if we can find a sufficiently large collection of subsplits from these favorable trees and restrict the support of CPDs accordingly ...''. 
This is just why we say the ''parent-child subsplit patterns'' are ''limited'' in observed samples. We will clarify this argument in Appendix A in our revised manuscript. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the time invested in responding to my concerns. I only have one remaining concern and one follow-up question, which I expand on below. First, I would like to clarify that I do not deem it necessary to integrate the new table provided in the global response into the revised version of your paper. I.e., from my point of view, you do not need to update your tables with the results from VaiPhy or VCSMC as they are barely comparable in terms of ML estimates. Regarding removing the phrase "domain expertise", I still think this is a good idea, and the modification proposed by the authors in their rebuttal (response to Weakness 3) is much more informative. Concerning Weakness 2, I must have missed this in the Appendix. Maybe you can place a clear, but brief, pointer in the main text to this part of the Appendix? I think this is a feature of ARTree that deserves amplification. Now, to my outstanding concern. **Weakness 5** The generative model is parameterized with what are assumed to be known parameter values (i.e., they are not learned), meaning that the marginal log-likelihood, $\log p(y)$, is the same number regardless of the choice of variational distribution. And, indeed, as $K\rightarrow\infty$, $\hat{L}_K \rightarrow \log p(y)$, irrespective of $Q$. *"With that many samples, the ML estimate is more like an exact $\log p(y)$"*. I parse this statement as assuming that the ML estimates have zero bias from the true $\log p(y)$. Aligned with what I stated above, I agree with this intuition, and the ML estimates provided in the paper are very similar in terms of their means. So, when evaluating the ML estimates, you are in fact interested in the qualities $Q$ has as an importance sampler? 
Did I understand your response correctly? I have to say that, if my interpretation is correct, I like this way of framing what you are testing more, as the significance of achieving small std's makes more sense in the context of importance sampling. I sincerely think this should be included in the text in Sec. 4.2, as the reward of small std's right now feels ad hoc, in my opinion. In the context of learning $Q$'s that serve as good importance samplers, isn't it very important to train multiple $Q$'s and reason about their produced std's on average? The std's differ very little in Table 2 right now. Do you see my point? As it stands, it seems a bit inconclusive whether SBNs or ARTrees produce the most reliable estimators? *Less important comments on your response*: First, I don't think "this strategy" is used in the NF paper by Rezende and Mohamed? The importance weighted autoencoder, and hence the new tighter objective (let's call it IWELBO), by Burda et al. was not proposed until later the same year. Also, Rezende and Mohamed report their scores as lower bounds of the negative log-likelihood (see Table 2). Second, just to finalize the argument about the silly estimator, the importance sampling condition you mentioned is often referred to as a rule of thumb, not a criterion for the estimator to be valid? If we choose $Q$ such that $P=0 \implies Q=0$, then $\hat{L}_K$ is still "valid" in the sense that its expectation is finite. So if I choose $Q(\tau)$ to be a categorical distribution with all its probability mass in one topology (I can only sample one topology), and $Q(q|\tau)$ to be LogNormals with super small standard deviations, my estimator of $\log p(Y)$ would have very small standard deviation. However, as you have now clarified that you consider the ML comparisons to be reasonable only when their means are in the "correct" range, this probably rules out the silly estimator here.
**Follow-up question** It is unclear to me how the KL divergence is computed to the ground truth distribution, $P$, in Table 2. Is it KL($P||Q$) or KL($Q||P$)? Since the expectations in the KLs here are taken also over continuous distributions, how do you compute these quantities? Can you evaluate a branch length sampled from $Q$ in $P$? --- Reply to Comment 1.1.1: Comment: Cheers for our consensus on weaknesses 1-4! Thanks for your additional suggestions! We will revise our paper accordingly. Here are our responses to weakness 5 and the follow-up question. **Response to weakness 5** Your understanding is absolutely right. We are indeed interested in the qualities $Q$ has as an importance sampler. We appreciate that you agree that the significance of achieving small std's makes sense in the context of importance sampling. We will clarify this interpretation in Section 4.2, as you suggested. In the context of interpreting $Q$ as an importance sampler, we repeated VBPI 10 times on each dataset, i.e. 10 independently trained $Q$s, and reported their average std in Table 2 (please see our response to weakness 4). This way, we expect that the std estimation is more accurate. Just as you have pointed out, the stds of ML indeed differ little in Table 2. Here we provide two explanations:\ (i) According to our experience, the ML (as well as ELBO and LB-10) estimates in VBPI are more sensitive to the quality of the branch length model $Q(q|\tau)$ than to the tree topology model $Q(\tau)$. As SBN and ARTree use the same branch length model, we do not expect a large improvement of (the stds of) ML.\ (ii) The support of ARTree spans the entire tree topology space. This adds to the difficulty of training $Q(q|\tau)$, which is conditioned on tree topology $\tau$, as discussed in Appendix E. Therefore, we only expect ARTree to be comparable to SBNs in terms of ML: this indicates that ARTree works well together with a collaborative branch length model for VBPI.
The strong power of ARTree for modeling tree topologies is mainly reflected by the KL results (please see our response to the follow-up question for more details). *About the ''less important comments''* First. We agree that the interpretation of ML results in Table 2 in the NF paper is different from our paper and they did not use a multi-sample lower bound for training. What we wanted to express in our response is that the idea of importance sampling is used in the NF paper to estimate ML (page 7: ''The true marginal likelihood is estimated by importance sampling using 200 samples from the inference network''). As far as we know, although the IWELBO was later proposed as an optimization objective, the idea of importance sampling for marginal likelihood evaluation, or more generally numerical integration, has long existed. Second. We apologize that we probably had a different understanding of ''validity''. In our response, we said $\hat{L}_K$ is valid in the sense that it is a strongly consistent estimator of $\log p(Y)$ as $K\to\infty$. This requires $Q=0\Rightarrow P=0$. We think it's just a different usage of this word. Finally, thank you for providing this interesting example! **Response to the follow-up question** We are sorry for the confusion. The KL divergence in our paper is $KL(P(\tau|Y)\|Q(\tau))$ instead of $KL(P(q,\tau|Y)\|Q(q,\tau))$, i.e. the approximation accuracy of the **marginal distribution of tree topology** (please see line 228 and lines 238-240). Therefore, the strong power of ARTree for modeling tree topologies is reflected by the significantly improved KL results over SBNs.
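The interpretation agreed on in this thread (the mean of $\hat{L}_K$ checks unbiasedness; its std across repetitions measures the quality of $Q$ as an importance sampler) can be illustrated with a minimal sketch. This is our toy example, not from the paper: a one-dimensional Gaussian target with a known normalizer stands in for the phylogenetic posterior.

```python
import numpy as np

def log_ml_estimate(rng, K, q_mu, q_sigma, log_Z=2.0):
    """One K-sample importance-sampling estimate of log p(Y).
    Toy unnormalized target: p(x) = exp(log_Z) * N(x; 0, 1),
    so the true log marginal likelihood is log_Z."""
    x = rng.normal(q_mu, q_sigma, size=K)
    log_p = log_Z - 0.5 * x**2 - 0.5 * np.log(2 * np.pi)
    log_q = (-0.5 * ((x - q_mu) / q_sigma) ** 2
             - np.log(q_sigma) - 0.5 * np.log(2 * np.pi))
    log_w = log_p - log_q
    m = log_w.max()                      # stable log-mean-exp of the weights
    return m + np.log(np.exp(log_w - m).mean())

rng = np.random.default_rng(0)
good = [log_ml_estimate(rng, 1000, 0.1, 1.1) for _ in range(50)]  # Q close to target
bad = [log_ml_estimate(rng, 1000, 2.0, 0.5) for _ in range(50)]   # mismatched Q
# Both estimators target log_Z = 2.0; the well-matched Q attains a far
# smaller standard deviation across independent repetitions.
print(np.mean(good), np.std(good), np.std(bad))
```

With the well-matched proposal the 50 repeated estimates concentrate tightly around the true value, while the mismatched proposal produces a visibly larger spread: exactly the diagnostic used when comparing stds in Table 2, valid only because the means are in the correct range.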
Summary: The paper introduces ARTree, a deep generative autoregressive model for phylogenetic tree reconstruction. The authors define an autoregressive sequential process for generating tree topologies, $\tau$, and prove that there is a one-to-one mapping between the resulting topologies and the decision sequence, $D$, instantiated by the process. Utilizing this fact, the authors define a distribution over $D$, letting each decision at time $n$ be drawn from a Categorical distribution given previous decisions $1,\ldots,n-1$, and parameterize these Categoricals by calculating and passing learnable topological features to graph neural networks (GNNs) with a recurrent unit and a unit to incorporate time embeddings. ARTree is then used as the variational distribution for tree topologies $Q(\tau)$, along with a GNN-based parameterization of the branch length distribution, within variational Bayesian phylogenetic inference (VBPI). The representative power of $Q(\tau)$ is evaluated by comparing it with subsplit Bayesian networks (SBNs) against a "ground truth" posterior distribution (based on the posteriors of long-running MrBayes experiments) and within the context of VBPI on benchmark datasets. Strengths: The vastness of the tree topology space in phylogenetic inference is a well-known obstacle in both classical and tumor phylogenetics. The VI approach to Bayesian phylogenetics is an ongoing area of research and is in need of more sophisticated tree topology variational distributions and experiment designs to evaluate these variational distributions. The tree topology density experiment in 4.1 not only shows the strong representative power of ARTree, but the experiment design itself is a contribution to the research field. Furthermore, ARTree is a large improvement when compared to previous VI methods in phylogeny that use unconfined $Q(\tau)$ (however, this is not highlighted in the paper; see Weaknesses section).
Weaknesses: The paper fails to mention other works within VI in phylogenetics, e.g., VaiPhy (https://arxiv.org/abs/2203.01121) and VCSMC (https://arxiv.org/abs/2106.00075); these methods do not confine the $Q(\tau)$ support either and are relevant related work. The paper fails to highlight the strong performance of ARTree in VBPI w.r.t. other VI methods with unconfined $Q(\tau)$ - adding a row to Table 2 with results from, e.g., VCSMC and VaiPhy would greatly accentuate the contribution of the paper. ARTree relies on several subroutines to be able to construct and parameterize the generative decision sequences. This makes the method complicated to grasp and implement, which, given the complex problem at hand, can be regarded as a strength in the authors' perseverance rather than a weakness of the paper. However, the number of steps involved, e.g., L steps of message passing and calculating the topological node embeddings, naturally invokes questions regarding inference runtime and memory usage. The lack of runtime comparisons between VBPI with ARTree, other methods in VI, and MrBayes is a weakness of the paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Overall, a strong contribution to the field of Bayesian phylogenetics and a well-written paper. Addressing the following points could make me increase my score further: 1. Experiment on runtime of ARTree in the context of VBPI 2. Addressing the issue of minimal increase of raised in Weaknesses Failing to incorporate the related works mentioned in the Weaknesses-section could make me decrease my score. Misprints: 56 "edges of current…" -> "edges of the current…" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations, except for potential runtime concerns (see Weaknesses-section), are properly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
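The one-to-one mapping between decision sequences and tree topologies that the review summarizes can be checked by brute force for small leaf counts. The sketch below is ours (illustrative only, not the paper's code): it builds every unrooted binary topology by sequentially attaching leaf $k$ to one of the $2k-5$ edges of the current tree, and verifies that the number of distinct topologies equals both the number of decision sequences and the classical count $(2n-5)!!$.

```python
from itertools import count
from math import prod

def add_leaf(edges, edge, new_leaf, internal):
    """Attach new_leaf by subdividing `edge` with a fresh internal node."""
    u, v = edge
    rest = [e for e in edges if e != edge]
    return rest + [(u, internal), (internal, v), (internal, new_leaf)]

def splits(tree, n):
    """Canonical form of a topology: its set of leaf bipartitions."""
    adj = {}
    for u, v in tree:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    leaves = frozenset(range(n))
    out = set()
    for u, v in tree:
        stack, seen, side = [u], {u, v}, set()   # leaves on u's side of (u, v)
        while stack:
            x = stack.pop()
            if x < n:
                side.add(x)
            stack.extend(y for y in adj[x] if y not in seen)
            seen.update(adj[x])
        side = frozenset(side)
        out.add(min(side, leaves - side, key=sorted))
    return frozenset(out)

def enumerate_topologies(n):
    """All decision sequences: leaf k (k >= 4) picks one of 2k-5 edges."""
    fresh = count(n)                    # labels for internal nodes
    c = next(fresh)
    trees = [[(0, c), (1, c), (2, c)]]  # the unique 3-leaf topology
    for leaf in range(3, n):
        trees = [add_leaf(t, e, leaf, next(fresh)) for t in trees for e in t]
    return trees

n = 6
trees = enumerate_topologies(n)
distinct = {splits(t, n) for t in trees}
double_factorial = prod(range(3, 2 * n - 4, 2))   # (2n-5)!! = 3 * 5 * 7 = 105
print(len(trees), len(distinct), double_factorial)
```

For `n = 6` all three counts agree at 105, confirming that distinct decision sequences produce distinct topologies and that the process spans the whole space.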
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable feedback! Below are our answers to your comments: **Weakness 1**: The paper fails to mention other works within VI in phylogenetics, e.g., VaiPhy and VCSMC; these methods do not confine the $Q(\tau)$ support either and are relevant related work. **Response**: Thank you for pointing out these missing related works. We will include the comparison between ARTree and the two important related contributions, VaiPhy[1] and VCSMC[2], in our revision. Please see our global response for more discussions. **Weakness 2**: The paper fails to highlight the strong performance of ARTree in VBPI w.r.t. other VI methods (...) greatly accentuate the contribution of the paper. **Response**: Thanks for the suggestion! We will add a comparison between ARTree and other VI methods, e.g. VaiPhy and VCSMC (see the following table), in Table 2 in our revised manuscript. We did not include this in our original paper because SBN, as the SOTA model, significantly outperforms the other methods. Table: Marginal likelihood (ML) estimates with one standard deviation in the brackets. $\phi$-CSMC is proposed along with VaiPhy in [1] and works on bifurcating tree topology space, making it comparable with other methods.
| ---- | ARTree | SBN | VCSMC[2] | $\phi$-CSMC[1] | | ---- | ---- | ---- | ---- | ---- | | DS1 | -7108.41(0.19) | **-7108.41(0.15)** | -9180.34(170.27) | -7290.36(7.23) | | DS2 | **-26367.71(0.07)** | -26367.71(0.08) | -28700.7(4892.67) | -30568.49(31.34) | | DS3 | **-33735.09(0.09)** | **-33735.09(0.09)** | -37211.20(397.97) | -33798.06(6.62) | | DS4 | **-13329.94(0.17)** | -13329.94(0.20) | -17106.10(362.74) | -13582.24(35.08) | | DS5 | **-8214.59(0.34)** | -8214.62(0.40)| -9449.65(2578.58) | -8367.51(8.87) | | DS6 | -6724.37(0.46) | **-6724.37(0.43)** | -9296.66(2046.70) | -7013.83(16.99) | | DS7 | **-37331.95(0.27)** | -37331.97(0.28) | N/A | N/A | | DS8 | **-8650.61(0.48)** | -8650.64(0.50) | N/A | -9209.18(18.03) | [1] Koptagel, Hazal, et al. "VaiPhy: a Variational Inference Based Algorithm for Phylogeny." NeurIPS 2022.\ [2] Moretti, Antonio Khalil, et al. "Variational combinatorial sequential Monte Carlo methods for Bayesian phylogenetic inference." UAI 2021. **Weakness 3**: ARTree relies on several subroutines to be able to construct and parameterize the generative decision sequences. (...) The lack of runtime comparisons between VBPI with ARTree, other methods in VI and MrBayes is a weakness of the paper. **Response**: Thank you for pointing out the lack of runtime comparisons. We will add runtime comparisons in Appendix E in the revised manuscript. The following table is the CPU time and memory of each method in the VI setting on DS1. We do not compare them with the MCMC-based MrBayes because it seems hard to determine a fair time criterion as MrBayes is written in C++ and VBPI is written in Python. Table: The CPU time and memory usage in the VI setting on DS1. The CPU time is averaged over 100 trials with one standard deviation in the brackets. The experiments are run on a single core of MacBook Pro 2019. N/A: not available due to unresolved memory leak issues. 
| ---- | SBN | ARTree | VCSMC | VaiPhy | | ---- | ---- | ---- | ---- | ---- | | CPU time of passing 100 trees (seconds) | 0.99(0.14) | 5.61(0.22) | 11.54(1.50) | 34.97(0.63) | | Memory (MB) | 611.74 | 605.78 | N/A | N/A | Although VCSMC and VaiPhy seem to take longer when the number of trees is fixed, they generally require hundreds of iterations to converge, since their variational distributions only have a few parameters and are highly structured. In contrast, SBN and ARTree require more than 100,000 iterations, since they both build machine-learning models with enormous parameters and rely heavily on optimization. ARTree takes more time than SBN because it relies on several submodules which, although complicated, are designed to improve the expressive power to accommodate the complex tree space and are widely used strategies in the literature. The inefficiency of autoregressive generative models is also an inherent issue. The following strategies may help to reduce the computational cost of ARTree. (i) Training on GPUs. As a deep model, ARTree is mainly implemented using vectorized tensor operations in PyTorch. (ii) Early stopping. Although ARTree is trained for 400,000 iterations in VBPI to get the best numerical results, 100,000 iterations are enough to reveal the ground truth trees. (iii) More efficient architecture. Several efforts have been made to accelerate autoregressive models, e.g. GraphGEN (https://arxiv.org/abs/2001.08184). Designing efficient architectures for ARTree is an important future direction. **Question 1**: Experiment on runtime of ARTree in the context of VBPI. **Response**: Please see our response to weakness 3. **Question 2**: Addressing the issue of minimal increase of raised in Weaknesses. **Response**: We are sorry that we could not understand this question. We could not find relevant expressions about 'minimal increase' in Weaknesses. **Question 3**: Incorporating the related works mentioned in Weaknesses-section.
**Response**: Please see our response to weaknesses 1 and 2. **Question 4**: Misprints: 56 "edges of current…" -> "edges of the current…". **Response**: Thank you for your careful check. We will fix this misprint in our revision. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thank you for addressing the concerns raised in my review. The suggested updates in the global response and to weaknesses 1 and 2 will serve the paper well. However, I still have concerns regarding Weakness 3, and I clarify question 2 below. Weakness 3: The added experiment on the time to pass topologies is interesting; however, the issue I raised was regarding the runtime of ARTree for VI, which was supposed to show the runtime needed to produce the results of Table 2. This way readers can see the current trade-off between performance and runtime for different VI approaches. The extension of Table 2 together with the added CPU table conceals this trade-off in learning time and performance, which is very misleading. I find the argument regarding runtime comparisons to MrBayes fair. Question 2: My apologies, the full formulation of that question must have been lost in one of my offline sessions. It should have been: What is the reason behind the seemingly minimal increase of ELBO between ARTree and VBPI? Does this come from a different $q(B | T)$ or does it in fact come from the different $q(T)$? Maybe this is hard to disentangle, but addressing this fact would be interesting for the discussion. Currently, I will retain my score as the rebuttal did not address weakness 3 in a satisfactory manner. --- Reply to Comment 1.1.1: Title: Thanks for your response! Comment: Thanks for your response! We addressed weakness 3 and question 2 as follows. **Response to weakness 3** We apologize for misunderstanding your concern. Here is the runtime comparison of different methods in the VI setting on DS1 (will be added to the limitation section). Table: Runtime comparison in the VI setting on DS1.
SBN* and ARTree* refer to the early stopping of SBN and ARTree that surpass the $\phi$-CSMC baseline in terms of marginal likelihood estimation (-7290.36), respectively. | ---- | VCSMC | VaiPhy | $\phi$-CSMC | SBN | ARTree | SBN* | ARTree* | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | Total training time (minutes) | 248.3 | 45.1 | N/A | 659.3 | 3740.8 | 10.2 | 79.5 | | Evaluation time (one estimate of ML, minutes) | 2.4 | 1.6 | 102.2 | 0.15 | 0.41 | 0.15 | 0.41 | *Remarks of the table*: (i) **Training**. We trained all models following the settings in their original papers: VCSMC was trained for 100 iterations with 2048 particles per iteration; VaiPhy was trained for 200 iterations with 128 particles per iteration; $\phi$-CSMC directly estimates ML based on VaiPhy, and therefore does not need extra training; both ARTree and SBN were trained for 400,000 iterations with 10 particles per iteration. (ii) **Evaluation**. We find that the evaluation strategies in their original papers are quite different, e.g. VaiPhy, SBN, and ARTree used importance sampling to estimate ML with different repetition times; VCSMC and $\phi$-CSMC instead estimated ML with sequential Monte Carlo (SMC), also with different repetition times. To be fair, we report the time for producing one estimate of ML from each of these models (VaiPhy, SBN, and ARTree used importance sampling with 1000 particles; VCSMC and $\phi$-CSMC used SMC with 2048 particles). We want to emphasize that although ARTree (and SBN) takes longer to converge (complete training) when compared to other methods with unconfined support, it takes a comparable amount of time to provide good enough approximations for marginal likelihood estimation of similar accuracy (see ARTree* and SBN* in the table above). Moreover, the evaluation time of ARTree (and SBN) for marginal likelihood estimation would be much shorter than that of other methods.
The suggestions for reducing the computational costs in our rebuttal are still applicable, among which designing a more efficient architecture for ARTree is an important future direction. **Response to question 2** Thanks for this question. Just as you have pointed out, the improvement of ELBO in Table 2 is minor. Here we provide two explanations: (i) The ELBO estimates in VBPI are more sensitive to the quality of the branch length model $Q(q|\tau)$ than to the tree topology model $Q(\tau)$. As SBN and ARTree use the same parametrization of the branch length model, we do not expect a large improvement in ELBO. (ii) The support of ARTree spans the entire tree topology space. This adds to the difficulty of training $Q(q|\tau)$, which is conditioned on tree topology $\tau$, as discussed in Appendix E. To investigate whether the increase of ELBO comes from a different $Q(q|\tau)$ or a different $Q(\tau)$, we conducted the following experiment (see the table). Table: The ELBO estimates on DS1 obtained by different combinations of the tree topology model $Q(\tau)$ and the branch length model $Q(q|\tau)$. | Model combination | ELBO | | ---- | ---- | | SBN + branch length model trained along with SBN | -7110.24(0.03) | | SBN + branch length model trained along with ARTree | -7110.26(0.03) | | ARTree + branch length model trained along with ARTree | -7110.09(0.04) | Therefore, it seems that the increase of ELBO indeed comes from a different $Q(\tau)$, as evidenced by the result of the 'SBN + branch length model trained along with ARTree' combination. This observation also coincides with explanation (ii).
Summary: In this paper, the authors propose a tractable distribution over tree spaces that can be fit for use in density estimation or variational inference. The key idea is to build a tree by sequentially adding leaves one at a time by adding an additional branch to the tree. Each step of this process has a reasonable state space, and it is easy to see that such a process generates distributions that span all of tree space. The authors then parameterize the "action space" of this process using (recurrent) graph neural networks, which can be trained using either maximum likelihood in the density estimation case or VIMCO to optimize the ELBO in the VI case. The authors apply their method to 8 standard phylogenetic benchmarking datasets, finding comparable or superior performance to existing density estimation or VI methods. Strengths: * The ideas presented in this paper are extremely simple and elegant. * The big picture of the approach is easy to describe and conceptually straightforward. * The performance of the method seems to be an advance over existing methods, even by metrics that favor existing methods (e.g., inclusive KL is kind to SBNs, as exclusive KL would be infinite for SBNs that do not have support on all of tree space). Weaknesses: * A more thorough description of the technical details of the parameterization of the model would be helpful. In particular, it would be useful to have a schematic representing all of the components and how they fit together. Equations 7-11 had a lot of subcomponents (e.g., $P$ and $R$ and $b_n$ and emb, etc... etc...) which were hard to keep track of and see how they all fit together. Minor: * I know that it is common in the field, but it is not obvious to me why one would want to take $K$ greater than $1$ in equation (4). If $K$ is $1$, then (4) is exactly the usual ELBO. Maximizing the $K=1$ ELBO corresponds to minimizing the KL between the variational and true posteriors, which seems desirable.
Taking $K$ larger than one certainly tightens the lower bound on the evidence, but that doesn't necessarily mean that it will result in a better variational approximation to the posterior. In fact, as $K \to \infty$ equation (4) should become independent of $Q$, which seems undesirable. See for example https://proceedings.mlr.press/v80/rainforth18b.html * Many of the references at the end have minor formatting issues (e.g., lacking capitalization: "bayesian", "markov", "monte carlo", "Graphrnn", etc...) Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Have the authors explored how necessary it is to condition the decisions on all previous decisions? Does a Markov decision process perform substantially worse? That is, does one need the recurrent GNN, or would a non-recurrent GNN be sufficient? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations of their study, and I do not foresee any potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback and suggestions! Here are our responses to them. **Weakness 1**: A more thorough description of the technical details of the parameterization of the model would be helpful. In particular, it would be useful to have a schematic representing all of the components and how they fit together. Equations 7-11 had a lot of subcomponents (e.g., $P$ and $R$ and $b_n$ and emb, etc... etc...) which were hard to keep track of and see how they all fit together. **Response**: Thanks for the suggestion! We will modify our description and notations accordingly in our revision to make it more clear to the readers. **Weakness 2**: I know that it is common in the field, but it is not obvious to me why one would want to take $K$ greater than $1$ in equation (4). (...) See for example https://proceedings.mlr.press/v80/rainforth18b.html. **Response**: Thanks for asking! There are mainly two reasons for taking $K>1$. (i) The gradient of the variational bound w.r.t. the discrete component $\tau$ is generally unstable and suffers from large variance. Taking $K>1$ allows us to use efficient stochastic gradient estimators such as VIMCO (which are designed for multi-sample ELBOs) for the tree topology variational parameters. (ii) A sample size $K$ larger than $1$ may encourage exploration over the vast and multimodal tree space to avoid being trapped in local modes. We agree that taking $K$ larger does not necessarily lead to a better variational approximation, as the signal-to-noise ratio decreases as $K$ increases (https://proceedings.mlr.press/v80/rainforth18b.html). In practice, a moderate $K$ would be a good choice, and we leave a more thorough investigation of the effect of $K$ to future work. Thanks for bringing up this discussion, and we will cite this paper in our revision.
**Weakness 3**: Many of the references at the end have minor formatting issues (e.g., lacking capitalization: "bayesian", "markov", "monte carlo", "Graphrnn", etc...) **Response**: We appreciate your careful review of the references. We will carefully address all the formatting issues in the revised version of the paper. **Question**: Have the authors explored how necessary it is to condition the decisions on all previous decisions? Does a Markov decision process perform substantially worse? That is, does one need the recurrent GNN, or would a non-recurrent GNN be sufficient? **Response**: Thanks for your insightful question! We have not explored the option of using a Markov decision process. However, we expect it to work fairly well, given that the current tree topology is a summary of all previous decisions. --- Rebuttal Comment 1.1: Comment: Thank you for the clear response! --- Reply to Comment 1.1.1: Comment: Thanks for your careful review and helpful suggestions again!
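The $K>1$ discussion in this thread (the multi-sample bound tightens toward $\log p(Y)$ as $K$ grows, and in the limit becomes insensitive to $Q$) can be seen numerically on a toy model. The sketch below is ours, not the paper's: a Gaussian target with a known $\log p(Y)$ stands in for the phylogenetic model.

```python
import numpy as np

log_Z = 1.5   # true log marginal likelihood of the toy model

def log_w(x):
    # log p(x, Y) - log q(x) for an unnormalized target exp(log_Z) * N(0, 1)
    # and proposal q = N(1, 1); the Gaussian normalizing constants cancel.
    return log_Z - 0.5 * x**2 + 0.5 * (x - 1.0) ** 2

def L_K(rng, K, reps=20000):
    """Monte Carlo estimate of the multi-sample lower bound
    L_K = E[log (1/K) sum_k w_k]."""
    x = rng.normal(1.0, 1.0, size=(reps, K))
    lw = log_w(x)
    m = lw.max(axis=1, keepdims=True)          # stable log-mean-exp per row
    return float(np.mean(m[:, 0] + np.log(np.exp(lw - m).mean(axis=1))))

rng = np.random.default_rng(1)
bounds = [L_K(rng, K) for K in (1, 10, 100)]
# L_1 < L_10 < L_100 <= log_Z: larger K tightens the bound toward log p(Y),
# which is also why, as K grows without limit, the bound depends less on Q.
print(bounds, log_Z)
```

Here $L_1$ is the usual ELBO (about $1.0$ for this proposal) and the bounds climb toward $\log Z = 1.5$ as $K$ increases, matching both the rebuttal's motivation for $K>1$ and the reviewer's caveat that a tighter bound says less about $Q$ itself.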
Summary: This paper presents a new way to construct the variational distribution of tree topologies based on autoregressive generation with GNNs, which is used in the problem of variational phylogenetic inference. The paper mainly compares the new construction of the variational distribution with subsplit Bayesian network approaches. The experiments show that the proposed approach outperforms SBN in terms of learning the ground-truth tree topologies. Strengths: 1. The paper seems to address an interesting sub-problem of the task, which is how to flexibly generate tree topologies for the variational distribution. I'm not a domain expert in variational phylogenetic inference, but previous approaches usually follow the way of SBNs and this paper proposes a novel and better alternative to SBNs. 2. The proposed approach of autoregressive generation with GNNs looks intuitive. The paper provides comprehensive experiments in the comparison with SBN, which support the claims of the proposed method. 3. The paper is well-written and easy to follow. Weaknesses: 1. As stated in the paper, the main drawback of SBN is that it could not span the entire tree topology space. It seems that there is no analysis on how/why the proposed method is better at doing this in addition to the empirical comparison in the experiments. 2. The main contribution of the paper is in Section 3.1, which is a new parameterisation of $Q(\tau)$. Most of the techniques in Section 3.2 follow Zhang (2023). It might be better to shorten 3.2, as I think it might not be the focus of the paper. 3. Although Table 1 shows that ARTree has better numbers in terms of revealing the ground-truth trees, it seems that ARTree does not improve much on ELBO and ML in Table 2. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors talk about the potential use of the proposed method in computational biology in addition to getting better ELBO or other metrics of modelling the data? such as interpretability.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable questions! We address your comments and questions as below. **Weakness 1**: As stated in the paper, the main drawback of SBN is that it could not span the entire tree topology space. It seems that there is no analysis on how/why the proposed method is better at doing this in addition to the empirical comparison in the experiments. **Response**: Thanks for your question! In the generating process of ARTree, we use the softmax function to parameterize the conditional probability of decisions (where to add the new tip node). Therefore, all possible decisions would have nonzero probabilities. This means ARTree can sample any decision sequence with a nonzero probability. As there is a bijection between decision sequences and the entire tree topology space (Theorem 1), this implies ARTree can span the entire tree topology space. We will make it more clear in our revision. **Weakness 2**: The main contribution of the paper is in Section 3.1, which is a new parameterisation of $Q(\tau)$. Most of the techniques in Section 3.2 follow Zhang (2023). It might be better to shorten 3.2, as I think it might not be the focus of the paper. **Response**: Thanks for the suggestion! We will modify it accordingly in our revision. **Weakness 3**: Although Table 1 shows that ARTree has better numbers in terms of revealing the ground-truth trees, it seems that ARTree does not improve much on ELBO and ML in Table 2. **Response**: Yes, you are right! In fact, the power of ARTree for VBPI is mainly on tree topology approximation, as reflected by the KL results in Table 2. There are two reasons for the minor improvements on lower bounds. (i) According to our experience, the lower bounds in VBPI are more sensitive to the quality of the branch length model $Q_\psi(q|\tau)$ than to the tree topology model $Q_\psi(\tau)$, and ARTree and SBN use the same branch length model in VBPI.
Also, we want to clarify that significantly improving ELBO and LB-10 is difficult, considering that they approach the same marginal likelihood. (ii) The support of ARTree spans the entire tree topology space. This adds to the difficulty of training $Q_\psi(q|\tau)$, which is conditioned on tree topology $\tau$, as discussed in Appendix E. **Question**: Can the authors talk about the potential use of the proposed method in computational biology in addition to getting better ELBO or other metrics of modelling the data? such as interpretability. **Response**: This is an interesting and open question. Two potential uses: (i) ARTree provides an alternative family of distributions over the entire tree topology space with explicit likelihood computation and flexibility. This is itself a useful tool for phylogenetic inference, including tree density estimation and variational posterior approximations, which has a wide range of applications such as genomic epidemiology and conservation genetics. (ii) In ARTree, the learned conditional distributions for the species-attaching operations also carry important information about the relationship between the new species and the species on the current tree topology, and hence can be used to interpret the closeness among these species. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I've read the authors' response and other reviewers' comments. I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for your careful review and valuable questions again!
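The full-support argument in the response to weakness 1 above can be illustrated with a small stand-alone sketch (ours, not the paper's implementation; the `edge_logits` function below is a hypothetical placeholder for the GNN): because each decision is drawn from a softmax, every decision sequence, and hence every topology under the bijection of Theorem 1, receives strictly positive probability, and the sequence probabilities sum to one.

```python
import itertools
import math
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def edge_logits(step, n_edges):
    # placeholder for the GNN: any real-valued scores work, since the
    # softmax maps them to strictly positive probabilities
    return np.sin(np.arange(n_edges) + step)

def sequence_prob(decisions, sizes):
    """Probability of one decision sequence; the k-th added leaf
    chooses among 2k-5 edges of the current tree."""
    p = 1.0
    for step, (d, n_edges) in enumerate(zip(decisions, sizes)):
        p *= softmax(edge_logits(step, n_edges))[d]
    return p

n = 6
sizes = [2 * k - 5 for k in range(4, n + 1)]          # 3, 5, 7 choices
seqs = list(itertools.product(*(range(s) for s in sizes)))
probs = [sequence_prob(s, sizes) for s in seqs]
assert len(seqs) == math.prod(sizes)                  # (2n-5)!! = 105 sequences
print(min(probs) > 0.0, abs(sum(probs) - 1.0) < 1e-9)
```

No matter what logits the network emits, every one of the 105 six-leaf topologies gets nonzero mass, which is exactly why ARTree's support is unconfined while an SBN restricted to observed subsplit patterns is not.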
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback. We have incorporated their suggestions and will revise the paper with the following major changes: **Related works**: We will clarify the distinction between ARTree and two related works - VaiPhy[1] and VCSMC[2] (see a short discussion below). Experimental comparisons will also be added to Table 2. - **A short comparison**. Both VaiPhy[1] and VCSMC[2] have unconfined support over tree topology spaces. VaiPhy employs a novel sequential sampler named SLANTIS, which makes decisions on adding edges in a specific order to sample multifurcating tree topologies. Unlike ARTree, which uses parametrized GNNs, SLANTIS derives decisions based on a simply parameterized weight matrix and maximum spanning trees. Secondly, VCSMC samples tree topologies through subtree merging and resampling following CSMC[3], but employs a parametrized proposal distribution. A more powerful variant called variational nested SMC (VNCSMC) gives better proposals by incorporating future iterations. In contrast, ARTree takes a different approach by employing GNNs in an autoregressive model that builds up the tree topology sequentially, without requiring a resampling step or a look-ahead step. Empirically, we find ARTree surpasses these two methods significantly in terms of marginal likelihood. Table: Marginal likelihood (ML) estimates with one standard deviation in brackets. $\phi$-CSMC is proposed along with VaiPhy in [1] and works on the bifurcating tree topology space, making it comparable with the other methods. 
| Dataset | ARTree | SBN | VCSMC[2] | $\phi$-CSMC[1] |
| ---- | ---- | ---- | ---- | ---- |
| DS1 | -7108.41(0.19) | **-7108.41(0.15)** | -9180.34(170.27) | -7290.36(7.23) |
| DS2 | **-26367.71(0.07)** | -26367.71(0.08) | -28700.7(4892.67) | -30568.49(31.34) |
| DS3 | **-33735.09(0.09)** | **-33735.09(0.09)** | -37211.20(397.97) | -33798.06(6.62) |
| DS4 | **-13329.94(0.17)** | -13329.94(0.20) | -17106.10(362.74) | -13582.24(35.08) |
| DS5 | **-8214.59(0.34)** | -8214.62(0.40) | -9449.65(2578.58) | -8367.51(8.87) |
| DS6 | -6724.37(0.46) | **-6724.37(0.43)** | -9296.66(2046.70) | -7013.83(16.99) |
| DS7 | **-37331.95(0.27)** | -37331.97(0.28) | N/A | N/A |
| DS8 | **-8650.61(0.48)** | -8650.64(0.50) | N/A | -9209.18(18.03) |

[1] Koptagel, Hazal, et al. "VaiPhy: a Variational Inference Based Algorithm for Phylogeny." NeurIPS 2022.\
[2] Moretti, Antonio Khalil, et al. "Variational combinatorial sequential Monte Carlo methods for Bayesian phylogenetic inference." UAI 2021.\
[3] Wang, Liangliang, Alexandre Bouchard-Côté, and Arnaud Doucet. "Bayesian phylogenetic inference using a combinatorial sequential Monte Carlo method." Journal of the American Statistical Association (2015).

**Technical details**: We will provide a more schematic description of the technical details in Section 3.2 and remove redundant statements to present it more clearly to the readers. **Limitations**: We will add a runtime comparison in Appendix E and give some suggestions for reducing the computational cost. **Decomposition process**: We will add a description of how to evaluate the tree topology probability using the decomposition process in Appendix C. **Drawback of SBNs**: We will remove the ambiguous argument about "domain expertise" and emphasize the importance of high-quality pre-sampled trees. We will clarify why SBN faces "the limited parent-child subsplit patterns in the observed samples" in Appendix A. 
We hope our response has adequately addressed the reviewers' questions and concerns, and we look forward to any additional comments.
NeurIPS_2023_submissions_huggingface
2023
HiGen: Hierarchical Graph Generative Networks
Reject
Summary: This work proposes another auto-regressive-based graph generative model similar to GRAN. The authors propose a hierarchical generation scheme to uncoarsen a graph level by level. In each non-leaf level, the abstract graph is weighted in both nodes and edges. A node represents a community, and its weight represents how many edges should be inside the community. An edge is the "connection" between two communities, and its weight represents how many edges should exist between the two communities. The weights of each community are generated through a stick-breaking process, which also automatically decides the number of communities. The structure within each community is generated using an AR model, and the edges between communities are then generated using a GNN. Strengths: 1. The assumption makes sense, and the model decomposition is quite convincing. 2. This method can indeed improve generation efficiency by only auto-regressively generating the diagonal blocks of the adjacency matrix and using a GNN (which has O(M) runtime) to predict the off-block entries. 3. The method is simple and straightforward. Weaknesses: see questions below Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The partition function is heuristic (see experiment), and I think this is very important for training a good model; the authors should elaborate more on how to generalize it to the case where L>2. 2. Optimization details are missing; I'd like to see how the model is optimized and what the data format is for every level. Do you need to train a model for every level? 3. It would be nice to see the runtime analysis since it's one of the motivations mentioned in the introduction. 4. The experiment only shows model performance for the case where L=2; additional experiments should be included. Otherwise, it's just a small modification of GRAN, which will limit the contribution. 
Also, I suggest the authors experiment on even larger graphs and compare the model's performance to [1,2,3]. 5. Since the inter-community edges are generated in parallel, I am concerned about the expressivity of the model due to edge independence [2]. It would be nice to see some discussion on it. Some related works are missing: [1] Rendsburg, Luca, Holger Heidrich, and Ulrike Von Luxburg. "Netgan without gan: From random walks to low-rank approximations." International Conference on Machine Learning. PMLR, 2020. [2] Chanpuriya, Sudhanshu, et al. "On the power of edge independent graph models." Advances in Neural Information Processing Systems 34 (2021): 24418-24429. [3] Haefeli, Kilian Konstantin, et al. "Diffusion Models for Graphs Benefit From Discrete State Spaces." arXiv preprint arXiv:2210.01549 (2022). [4] Chen, Xiaohui, et al. "Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling." arXiv preprint arXiv:2305.04111 (2023). [5] Kong, Lingkai, et al. "Autoregressive Diffusion Model for Graph Generation." (2023). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. one possible limitation is the edge independency of the model. 2. The model has made a strong assumption that the graph should have a community structure, while the experiment datasets are relatively small and may not have such a structure. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
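The coarsening described in this review's summary — a community's node weight counts the edges inside it, an abstract edge's weight counts the edges between two communities — can be sketched in a few lines of pure Python. This is an illustration, not the paper's code; the partition here is hand-specified, whereas HiGen obtains it from the Louvain algorithm.

```python
from collections import Counter

def coarsen(edges, partition):
    """Build the weighted abstract graph one level up.

    `partition` maps node -> community id. A community's node weight
    counts the edges inside it; the weight of an abstract edge (a, b)
    counts the edges running between communities a and b.
    """
    node_w, edge_w = Counter(), Counter()
    for u, v in edges:
        a, b = partition[u], partition[v]
        if a == b:
            node_w[a] += 1
        else:
            edge_w[frozenset((a, b))] += 1
    return dict(node_w), {tuple(sorted(k)): w for k, w in edge_w.items()}

# Two triangles joined by one bridge: communities {0,1,2} and {3,4,5}.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

On this example each community gets node weight 3 (its triangle) and the single abstract edge gets weight 1 (the bridge), so the weights sum to the 7 edges of the final graph — the consistency the summary describes.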
Rebuttal 1: Rebuttal: **Q1, Q4)** The Louvain algorithm, used as a partitioning function, is able to provide coarsened graphs at multiple levels of abstraction, and we spliced out the intermediate levels to achieve HGs of size $L=2$. For the datasets reported in Table 1 of the paper, two levels of abstraction were enough, as they provided clusters of reasonable sizes (refer to the cluster size analysis in the "author rebuttal" section). We also conducted experimental studies on the *3D point cloud* dataset with graphs of up to 5K nodes and with $L=3$ levels. The results outlined in the "author rebuttal" section effectively highlight the model's performance in managing deeper hierarchies for large graphs. **Q2)** We optimize the GNN models for all levels jointly; however, it's worth noting that these models are not shared across communities, inter-community components, parent graphs, or levels. As a result, optimization and training can be conducted independently and in parallel for each level. In our joint implementation, the training can also be performed in parallel across levels, but generation needs to be performed sequentially. For the training, we first sample a batch of $b$ HGs and then, for each of these samples, we randomly sample $s$ subgraphs of the communities at each level. Therefore, together with the single augmented graph for the inter-communities, defined in line 202, we have the data format $\\{ \hat{\mathcal{G}}^{l}, \hat{\mathcal{C}}^{l}\_{1}, …, \hat{\mathcal{C}}^{l}\_{s} \\}$ for level $l$ of an HG. 
Since we randomly sample the subgraphs, we estimate the conditional generative probability for community $\mathcal{C}^{l}\_{i}$ by averaging the loss function over all the sampled subgraphs of that community, multiplied by the size of the cluster: $$ p(\\mathcal{C}\_{i}^l | \\mathcal{G}^{l-1}) \\approxeq n\_{\\mathcal{C}\_{i}} \\cdot \\mathrm{mean} \\left( \\left[ p(u\_t (\\hat{\\mathcal{C}}^{l}\_{j})) \~ \\forall \~ \\hat{\\mathcal{C}}^{l}\_{j} \\in {\\mathcal{C}}^{l}\_{i} \\right] \\right)$$, where $p(u\_t)$ is defined in eq. (6). The loss function for cross-community components is straightforward, as all of them are included. **Q3)** The average sizes of the largest cluster and some other statistics of the datasets are reported in the "author rebuttal" section. **Q5)** In comparison to independent models, ours employs multinomial and mixture models instead of just Bernoulli models. Consequently, the edge probabilities of inter-community components aren't treated as entirely independent. The performance of our model on the tested datasets indicates that this potential issue didn't significantly impact the results. This can also be explained by the fact that, based on [2], edge independence is more crucial when generating denser communities, whereas it's less significant for the sparser inter-communities in our hierarchical approach. To address this challenge, an alternative solution is to generate inter-community components sequentially in an autoregressive (AR) manner. We implemented this approach for the 3D point cloud dataset, as detailed in Appendix D.1. However, this comes at the expense of sacrificing parallelism and some of the acceleration provided by our proposed model. Nevertheless, even with these trade-offs, this approach remains considerably faster and more efficient than GRAN. --- Rebuttal Comment 1.1: Comment: My concerns are mostly addressed by the responses. And I am willing to increase my rating.
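The subsampled training estimator in the Q2 response can be sketched as a short computation, assuming (as training losses usually do) that the per-subgraph terms are log-probabilities; `subgraph_logps` is a hypothetical stand-in for the model's evaluated $p(u_t)$ terms, and the scaling by the cluster size $n_{\mathcal{C}_i}$ corrects for sampling only $s$ of the generation steps.

```python
def community_loss(subgraph_logps, cluster_size):
    """Estimate one community's negative log-likelihood contribution.

    Averages the per-subgraph log-probability terms over the s sampled
    subgraphs and scales by the cluster size n_{C_i}, mirroring
    p(C_i | G^{l-1}) ~= n_{C_i} * mean([...]) from the response.
    """
    s = len(subgraph_logps)
    return -cluster_size * sum(subgraph_logps) / s

def batch_loss(hg_levels):
    """hg_levels: list of levels; each level is a list of
    (subgraph_logps, cluster_size) pairs, one per sampled community."""
    return sum(
        community_loss(lps, n) for level in hg_levels for lps, n in level
    )
```

For example, a cluster of size 4 from which two subgraphs with log-probabilities -1 and -3 were sampled contributes 4 * 2 = 8 to the loss; the cross-community terms would be added without subsampling, as the response notes.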
Summary: The paper proposes HiGen, a hierarchical generative graph model. The model consists of a clustering process (Louvain), followed by a GNN model (GraphGPS) to estimate probabilities. The generative process is separated by communities and bipartite sub-graphs. Strengths: The paper seems original. The proposition of a new model is always an important contribution. Even though the new model is a combination of a clustering process and a GNN, the combination of both ideas is interesting. The theoretical quality of the demonstrations is good. Most of them seem fine, and no errors were observed during review. Parts of the paper are quite clear. Figure 1 really helps to understand the main idea of the paper. However, there is room for improvement. The significance of the paper is high: it seems that this new model is able to reproduce the mean of the distribution quite correctly in comparison to other state-of-the-art methods, as shown in the results. Weaknesses: The state of the art can be improved. The paper mentions "there exists no data-driven generative models specifically designed for generic graphs that can effectively incorporate hierarchical structure.". Neville et al. focused on this type of work, generating several papers related to hierarchical graph models (doi.org/10.1145/3161885, doi.org/10.1145/2939672.2939808, doi.org/10.1007/s10618-018-0566-x). Parts of the paper are closely related to mKPGM (doi.org/10.1145/3161885). In both cases there is a hierarchical structure, both have the idea of a super-node at the higher level, and the sampling process is also based on a multinomial distribution (doi.org/10.1007/s10618-018-0566-x). Please take a look at the sampling process proposed there, because it has similarities to the proposition of this paper, and the authors claimed to sample a network with billions of edges in less than two minutes. The paper must state its main contribution. 
In the beginning, it seems to be the model, but after reading the paper, it seems to be the sampling process. Unfortunately, both of them have different issues. If the main contribution is the model, then the paper should improve the modeling of the main network and be fairly compared in the experiment section against other baselines (not just the mean of the distribution). The main models consider $\ell$ hierarchies, but just two are applied. It is also not clear how the final probabilities are obtained. If the main contribution is the sampling process, there are some issues too. The time complexity of the generative model is claimed to be $O(n_c \log n)$, but this is not demonstrated. The results of the paper are focused on the modeling of networks, not the sampling process. For example, there are no empirical results about the time complexity, and the largest networks have some thousands of nodes, rather than millions. I understand that the paper follows the experimental setup and evaluation metrics of Liao et al. However, this methodology must be stated in the main paper; otherwise, the experiments of the main paper are not reproducible. The results of Table 1 are difficult to read because of the lack of explanations. There are no details on the separation of the data in the main paper. I understand that this is explained in the supplementary material (80% for training and 20% for testing), but it must be considered in the main paper too. Moreover, I do not know if the values are the average over the 20% of the testing graphs or if you just considered it as a single distribution. In the first case, please add the standard deviation, to see if the difference is statistically significant. Section 5 claims: "The results demonstrate that HiGen effectively captures graph statistics". 
Considering that, generally speaking, MMD estimates the distance between the means of two distributions, I suggest you change it to "The results demonstrate that HiGen effectively captures the mean of the graph statistics". Given the use of MMD, you cannot determine if the other parts of the distribution are correctly estimated. The conclusions state that HiGen "enables scaling up graph generative models to large and complex graphs", but this is not demonstrated. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: -What are the similarities and differences between this sampling process and the sampling process proposed at doi.org/10.1007/s10618-018-0566-x? Can you use that sampling process to speed up your generative process? -Why did you consider such small networks? -Can you, at least, empirically demonstrate the time complexity of your model? -How did you compare the final distribution for the MMD? Did you consider the average of each network, or did you make a single distribution considering all networks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No, the authors did not consider the limitations of the proposed model. For suggestions, please check weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **mKPGM papers:** Compared to our proposed model, Kronecker product graph models lack the depth and complexity of deep neural networks and graph neural networks. Consequently, their capacity for modeling complex relationships in graphs is notably restricted. These models mainly concentrate on capturing specific statistical characteristics of graphs, such as degree distribution, rather than the complex structures present in the data. In contrast, HiGen adeptly models both intra-links and cross-links within graphs by employing separate GNN models. Notably, these models consider the parent graph's structure when determining edge distributions. This enables HiGen to capture the hierarchical nature of graphs and the interactions between different levels of the hierarchy. This approach offers a more expressive and flexible framework for graph generation compared to Kronecker product models. **main contribution:** Our proposed approach focuses on generating clusters of interconnected nodes within each community, capturing local relationships effectively. Additionally, it predicts cross-links connecting different communities using a separate model. This strategy allows our model to simultaneously capture both fine-grained local connections and broader global relationships, thereby leveraging the inherent hierarchical structure prevalent in real-world graphs. Furthermore, we extended our experimental analysis to the 3D point cloud dataset, comprising graphs of up to 5K nodes and employing a hierarchical level ($L=3$). The outcomes, detailed in the "author rebuttal" section, emphasize the model's capability to handle deeper hierarchies for larger graphs. *"It is also not clear how the final probabilities are obtained.":* In this work, the probabilities of the edges of the graph are modeled in a hierarchical fashion, for community graphs and inter-community components, according to equations (6) and (8), respectively. 
In contrast to mKPGM models, we used deep GNNs to model the parameters of these high-dimensional probabilities. Besides offering a novel graph generative model, the hierarchical and modular structure offers parallel sampling of the distribution. **MMD metrics:** There seems to be a misunderstanding regarding MMD. The maximum mean discrepancy (MMD) is a distance measure (or discrepancy measure) between two distributions, computed from sets of samples of each. It provides an efficient distance metric between two distributions by using the first Wasserstein distance (earth mover's distance) as the kernel; hence it preserves all of the statistical features of arbitrary distributions, not just the mean of the distribution. You can find a detailed explanation of the MMD for graph statistics in Section 4.3 of [1]. **Experimental Results:** In this work, we followed the evaluation metrics of SOTA models such as GRAN, SPECTRE, DIGRESS and GDSS by reporting the MMD, which measures the distance (discrepancy) between the distributions of the generated and test graph statistics, such as degree, i.e. $p\_{gen}(degree)$ and $p\_{test}(degree)$. So in all of these works, one distribution is assumed for each statistic and the MMD is computed based on the samples of this distribution. The details of the experimental setup and the distinction between training and test sets will be included in the final version of the main paper. **Scalability:** While recent models like SPECTRE, DIGRESS, and GDSS have demonstrated efficiency for graphs comprising several hundred nodes, our work showcases the scalability of the proposed models to accommodate graphs with several thousand nodes. It's important to note that this scalability isn't limited to the specified range, and the proposed model has potential for larger graphs. [1] Jiaxuan You et al. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML, pp. 5694–5703, 2018. 
--- Rebuttal Comment 1.1: Title: Author rebuttal Comment: Thanks for the response, I have read this response and others carefully. I know the way it is evaluated. I am just saying that the paper must try to be complete by itself. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful feedback. During the initial submission, due to the page limit constraints, we focused on incorporating the main discussions and following the experimentation approach of the baseline papers in graph generation, in order to convey the core concepts and contributions of our work. Since we have one extra page in the final version, we will integrate the supplementary analyses, comprehensive clarifications, and new results that were thoroughly discussed during the rebuttal period.
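The rebuttal's point that MMD compares whole distributions, not just their means, can be checked with a minimal sketch. This is not the paper's evaluation code: a Gaussian kernel on scalar samples (e.g. pooled node degrees) stands in for the Wasserstein-based kernels used in the GraphRNN-style metrics, and the biased V-statistic estimate is used for brevity.

```python
import math

def gaussian_k(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between sample sets.

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]; for a characteristic
    kernel it vanishes only when the two distributions agree, so it is
    sensitive to far more than the means.
    """
    kxx = sum(gaussian_k(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gaussian_k(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gaussian_k(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy
```

Two sample sets with identical means but different spreads (e.g. `[1, 1, 1]` vs `[0, 1, 2]`) produce a clearly nonzero MMD^2, which is exactly the distinction the rebuttal draws against "distance between the means".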
Summary: The paper introduces an innovative hierarchical method for graph generation, which employs multiple levels of graph coarsening. This approach begins with the first level, representing the coarsest graph, and progressively expands nodes and edges to form new communities and connections between the newly created nodes. At each level, nodes serve as communities for the subsequent level, and the edge weights, including both inter-community edges and self-loops, dictate the total number of edges within each community in the final graph. Assumed independence among the generation processes of inter- and intra-community edges, conditioned on the graph and edge weights from previous levels, enables parallel execution of the steps, resulting in acceleration of the generation process. Strengths: 1. The paper effectively utilizes hierarchical clustering to enhance the graph generation process, capitalizing on the benefits of this technique. 2. By introducing parallelization in generating distinct clusters at each level, the paper successfully minimizes the number of sequential steps required. 3. The experimental results presented in the paper demonstrate improvements across multiple datasets. 4. The paper is for the most part well-written and easy to follow. Weaknesses: 1. In lines 35-36 the paper mentions that this work is the first hierarchical method for generic graphs. I believe [1] is also a hierarchical method for graph generation. I understand that the methods are significantly different, but it would still be more accurate to highlight the unique aspects of the proposed method and consider including a comparison between the two approaches. 2. The time complexity analysis provided in the paper focuses solely on the sequential steps, neglecting to consider the computational requirements. 
It would be valuable to compare the overall computational workload, particularly since the proposed method utilizes the GraphGPS approach, which has a time complexity of $O(n^2)$, in contrast to conventional GNN methods with a complexity of $O(n+m)$. Including such a comparison would provide a more comprehensive analysis. 3. The paper lacks a study examining the distribution of community sizes during the generation process across different datasets. Addressing this limitation by investigating and reporting the distribution of community sizes would enhance the understanding of the method's behavior and its adaptability to various datasets. 4. The paper uses a more advanced GNN compared to methods like GRAN, raising the question of how much of the observed progress is solely due to the change in the GNN architecture. Conducting an ablation study specifically focused on the GNN architecture used would provide valuable insights into its individual contribution to the overall performance of the method. 5. The evaluation metrics commonly employed for graph generative models have their limitations, as discussed in [1] and [2]. It is important to consider these limitations in the evaluation process. The paper mentions the use of random GNNs as an alternative evaluation method, but this approach is only used in the appendix for a few experiments. I would suggest using this in the main body and comparing all models using this metric. [Additionally/Optionally, there are two more recent approaches, one based on contrastive training and another one based on Ricci curvatures that could be incorporated for evaluation purposes.] [1] Shirzad, H., Hajimirsadeghi, H., Abdi, A. H., & Mori, G. (2022, May). TD-gen: Graph generation using tree decomposition. In International Conference on Artificial Intelligence and Statistics (pp. 5518-5537). PMLR. [2] O'Bray, Leslie, et al. "Evaluation metrics for graph generative models: Problems, pitfalls, and practical solutions." 
arXiv preprint arXiv:2106.01098 (2021). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. From line 253 it appears that the number of levels is fixed to 2 for all datasets; is this correct? What is the average size of the largest cluster for the different datasets used? This information is required for understanding the levels of improvement the model makes. 2. The formulation of Theorem 3.1 is confusing, particularly in relation to the assumption of Multinomial distributions. It seems that the theorem does not fully consider the fact that some nodes are shared during the process of distributing edges among nodes for both intra- and inter-cluster components. To clarify, let's consider the scenario of learning the distribution over d-regular graphs. In the final level, the number of edges associated with a node within its cluster plus the sum of its edges from inter-cluster connections must be fixed. Therefore, we cannot treat them as independent, even if we know the sum of the number of edges inside a cluster and the number of edges between each pair of clusters. It would be beneficial to address this concern and provide further clarification on how the theorem accounts for shared nodes during edge distribution. 3. An analysis of the number of parameters in the model would be valuable information to include in the paper. Given the relatively small number of samples in the datasets used and the utilization of large networks, particularly with each hierarchy level having its own separate GNN comprising 8 layers, the complexity and capacity of the model should be carefully considered. Providing details on the number of parameters, as well as discussing potential implications of model size and dataset size, would enhance the understanding and interpretation of the experimental results. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Considering the assumed independence among the clusters and the cross-edges connecting them, it is evident that there exist certain graph distributions which the model may struggle to learn. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
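The multinomial edge placement that Question 2 above probes can be illustrated with a minimal stand-in sketch. This is not the paper's Theorem 3.1 machinery: the parent level fixes the total edge count of a component, and a multinomial over candidate node pairs decides where those edges land; in HiGen the probabilities `theta` would come from a GNN, whereas here they are supplied directly.

```python
import random
from collections import Counter

def place_edges(weight, candidate_pairs, theta, rng):
    """Distribute `weight` edges over candidate node pairs.

    Sketch of the multinomial step the review discusses: the coarser
    level fixes how many edges a community (or bipartite component)
    contains, and a multinomial with parameters `theta` decides which
    candidate pairs receive them. Pairs may receive multiple counts,
    which would correspond to the weighted intermediate-level graphs.
    """
    picks = rng.choices(candidate_pairs, weights=theta, k=weight)
    return Counter(picks)
```

For instance, placing 5 inter-community edges over the candidate pairs of a 2x2 bipartite component always yields counts summing to 5; the review's d-regular example is precisely a case where such per-component draws, made independently, cannot see the node-degree constraint that couples components.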
Rebuttal 1: Rebuttal: **W1)** Compared to the proposed method, TD-gen is limited to a single level of abstraction with a tree structure, and its graph generation requires $O(nk)$ steps, where $k$ is the width of the tree decomposition; therefore its scalability is limited to medium-size graphs. This comparison will be included in the final version. **W2)** As clarified in Appendix B, the quadratic complexity of GraphGPS can be mitigated by employing linear Transformers like Performer or newer models such as Exphormer, effectively reducing GraphGPS complexity to $O(n+m)$ while maintaining HiGen's scalability. For the large 3D Point Cloud graphs, we utilized Performer, and the results are detailed in the appendix, along with an attached PDF for $L=3$. Furthermore, since Transformers are parallelizable, they can take advantage of GPU acceleration and achieve high speeds when fitting within GPU memory. **W3 and Q1)** The average sizes of the largest cluster are reported in the general "author rebuttal". In the paper, we also compared the MMD of the clustering coefficient, which measures the extent to which nodes in a graph tend to cluster together. **W4)** In the experimental section, we employed the GAT model used by GRAN to derive node features for the augmented sub-graphs $\mathbf{h}\_{\hat{\mathcal{C}}^{l}\_{i,t} } $. This explanation was inadvertently omitted from Appendix B. Furthermore, we customized GraphGPS for application on graphs with augmented bipartite graphs $\mathcal{G}^{l-1}$, incorporating distinct initial edge features to differentiate augmented (candidate) edges from actual edges. GraphGPS was also utilized to acquire node features for the parent graphs. Additionally, for the Enzyme dataset, we conducted experiments where we replaced GAT with GraphGPS as the GNN model for communities. 
Although the results were quite close to those using GAT, these experiments were not included in the analysis, as the primary contribution and focus of this model lies in developing a hierarchical generative framework. **W5)** In this work, we followed the evaluation metrics of SOTA models such as GRAN, SPECTRE, DIGRESS and GDSS and used the structure-based metrics reported by them to compare HiGen's results with theirs. Furthermore, an additional study comparing the performance of HiGen on the Ego dataset using GNN-based metrics is presented in the general "author rebuttal" section. **Q1)** Yes, for these datasets, the number of levels was set to 2. However, the proposed model, in conjunction with the Louvain algorithm used as a partitioning function, offers the potential to extend to larger graphs with greater values of $L$. The experimental outcomes for the *3D point cloud* dataset with $L=3$ are outlined in the "author rebuttal" section. These outcomes effectively highlight the model's performance in managing deeper hierarchical graphs. Additionally, the average sizes of the largest cluster and other pertinent graph statistics are presented in the "author rebuttal" section. **Q2)** Thanks for bringing up this concern. To clarify, our model is based on the assumption that the communities are mutually independent given the parent level. Following the generation of cluster graphs, it is also assumed that the generation of each bipartite (inter-cluster) component can be modeled independently of the rest of the bipartite components (BPs), which enables us to accelerate the final graph generation. Hence, the inter-community generation step does not require independence among the clusters and the cross-edges. 
Therefore, to correct this misinterpretation, we adjust equation (2) as: $$ p(\mathcal{G}^l | \mathcal{G}^{l-1}) \approxeq \prod p(\mathcal{C}\_{i}^l | \mathcal{G}^{l-1}) \times \prod p(\mathcal{B}\_{ij}^l | \mathcal{G}^{l-1}, \\{ \mathcal{C}\_{i} \forall \mathcal{C}\_{i} \in \mathcal{G}^{l} \\})$$ Therefore, the model is not based on independence of the communities and inter-cluster components. In fact, HiGen captures this dependency by formulating the parameters of the multinomial distribution for inter-cluster components in Theorem 3.1 as a function of the already generated communities and the parent graph, $\mathbf{\theta}^l\_{ij} = f (\mathcal{G}^{l-1}, \\{ \mathcal{C}\_{i} \forall \mathcal{C}\_{i} \in \mathcal{G}^{l} \\})$. Given that link predictions for the inter-cluster components occur after the generation of all clusters and rely on them, as demonstrated in figures 1.d and 1.e, an expressive deep NN in HiGen should be able to learn patterns such as d-regularity. **Q3)** The number of parameters is provided in the "author rebuttal" section, illustrating that the proposed model achieves superior performance with fewer parameters compared to GRAN. This emphasizes the efficiency of hierarchically modeling communities and cross-community interactions as distinct entities. This can be explained by the fact that the proposed model needs to learn smaller community sizes compared to GRAN, which in turn enables us to use smaller models and makes training faster. --- Rebuttal Comment 1.1: Comment: Thank you for your dedication in responding! Some of my concerns have been addressed, but some critical ones still remain: 1. Regarding TD-Gen, I agree with your argument about TD-Gen's limited abstraction level. However, the time complexity part has been confusing for me. Isn't it the case that your model in the worst case has to create the whole adjacency matrix, and so it can be $O(n^2)$, which is worse than $O(nk)$? (Always $k<n$) 2. 
Regarding the evaluation metrics, I don't think that following the previous work's method is a must. Shortcomings of older methods have been evident in several works including [1, 2] and TD-Gen. Now that we know these issues, it's important for recent graph generative papers to consider and address them in their work. An argument discussed in TD-Gen is the ability of the models to memorize the train dataset. Based on Table 1 in your rebuttal pdf, considering 4 bytes/parameter, we can see that in many cases the models are larger than the dataset itself. This is not inherently wrong, but an analysis of overfitting should be done in this case. For an extreme-case scenario, assume we have a model that has just stored the train data; then for sampling it simply selects one of the train samples each time and gives it as output. This model will give you near-zero results on MMD distances, as the train and test datasets come from the same distribution; moreover, the time and space complexity of this model will be almost perfect. However, I hope we agree that this is not a useful model, and it has not been the purpose of learning generative models. As a result, we need to make sure that the model has not just memorized the train data. Likelihood could help with such an analysis, but due to the huge number of permutations, likelihood is intractable in many cases. Using more advanced comparisons such as a Precision and Recall analysis may be helpful here. Also, a method from [3] can be used to combine more recent GNN-based evaluation metrics with older ones. 3. Thanks for clarifying the independence assumptions. It makes it much more clear now. I still think there are cases of $d$-regular graphs that HiGen can't handle though. Assume we make $d$-regular graphs like this: We make $k$ cliques of size $d$ and then we connect each node to exactly one node outside of its clique.
Now, for a large enough $d$, a natural algorithm for making denser clusters will most probably put each clique into one cluster. Clusters will be efficiently generated as all nodes inside a cluster are connected to each other. But, for the inter-cluster edges, from the coarsened graph we can only understand how many edges are between two clusters. We cannot possibly know which nodes are going to connect to each other; all nodes are symmetric at this point after cluster generation. Thus, if we generate the inter-cluster parts in parallel, a node from a cluster cannot possibly know whether it has connected to a node from other BPs or not, and so uniformity can't be handled here. To be clear, I don't think this is a huge problem with the work; many works have their limitations. However, I think the limitations should be clearly stated in the paper. I appreciate the authors' detailed responses and the new results they have provided; however, since some of my key concerns remain, I will keep my score. [1] O'Bray, Leslie, et al. "Evaluation metrics for graph generative models: Problems, pitfalls, and practical solutions." arXiv preprint arXiv:2106.01098 (2021). [2] Thompson, Rylee, et al. "On evaluation metrics for graph generative models." arXiv preprint arXiv:2201.09871 (2022). [3] Shirzad, H., Hassani, K., & Sutherland, D. J. (2022). Evaluating graph generative models with contrastively learned features. Advances in Neural Information Processing Systems, 35, 7783-7795. --- Reply to Comment 1.1.1: Title: GNN-based metrics comparison Comment: Thank you for your feedback and for considering our response. In the following I address your concerns: 1) In the worst-case scenario, if no partitioning occurs, our model operates as a hierarchical graph with a single cluster (equivalent to $L=1$). As the model generates one node at a time, it essentially reduces to a GRAN-like process with $O(N)$ generation steps.
Thus, the worst-case number of generation steps is $O(N)$. Furthermore, it's worth noting that a key distinction from TD-Gen is our model's ability to employ different generation models for clusters and inter-clusters, adding to its versatility and performance capabilities. 2) As the proposed framework extends GRAN and uses it as a building block for community generation, we conducted experiments comparing the performance of both models across a range of structure-based and GNN-based metrics. The results are presented in the table below: | Model | Deg. $\downarrow$ | Clus. $\downarrow$ | Orbit $\downarrow$ | Spec. $\downarrow$ | GNN MMD $\downarrow$ | GNN F1 PR $\uparrow$ | GNN F1 DC $\uparrow$ | |-------------------|:-----------------:|:------------------:|:-----------------:|:-------------------:|:-------------------:|:-------------------:|:-------------------:| | *Enzyme* | | | | | | | | | **GRAN** | 8.45e-03 | 2.62e-02 | 2.11e-02 | 3.46e-02 | 0.0663 | 0.950 | 0.832 | | **HiGen-m** | 6.61e-03 | 2.65e-02 | 2.15e-03 | 8.75e-03 | 0.0215 | 0.970 | 0.897 | | **HiGen** | 2.31e-03 | 2.08e-02 | 1.51e-03 | 9.56e-03 | 0.0180 | 0.978 | 0.983 | | *Stochastic block model* | | | | | | | | | **GRAN** | 0.0159 | 0.0518 | 0.0462 | 0.0104 | 0.0653 | 0.977 | 0.86 | | **HiGen-m** | 0.0017 | 0.0503 | 0.0604 | 0.0068 | 0.154 | 0.912 | 0.83 | | **HiGen** | 0.0019 | 0.0498 | 0.0352 | 0.0046 | 0.0432 | 0.986 | 1.07 | | *Ego* | | | | | | | | | **GraphRNN** | 9.55e-3 | 0.094 | 0.048 | 0.025 | 0.0972 | 0.86 | 0.45 | | **GRAN** | 7.65e-3 | 0.066 | 0.043 | 0.026 | 0.0700 | 0.76 | 0.50 | | **HiGen-m** | 0.011 | 0.063 | 0.021 | 0.013 | 0.0420 | 0.87 | 0.68 | | **HiGen** | 1.9e-3 | 0.049 | 0.029 | 0.004 | 0.0520 | 0.88 | 0.69 | The table includes the average of random-GNN-based metrics [1] over 10 random Graph Isomorphism Network (GIN) initializations, including metrics such as MMD with an RBF kernel (GNN MMD), the harmonic mean of improved precision+recall (GNN F1 PR), and the harmonic mean of
density+coverage (GNN F1 DC). Here, we reported the TV distance for the structure-based statistics. Moreover, in the following table (table 3 of the attached pdf + Frechet Distance (FD)), the GNN-based performance metrics (GNN MMD and FD) of HiGen are compared against the baselines reported in [2] for the Ego dataset, where the Gaussian EMD kernel was used for the structure-based statistics. | Model | Deg. | Clus. | Orbit | GNN MMD | FD $\downarrow$ | |----------|---------------------|--------------------|------------------|--------------------|----------------| | **GraphRNN** | 0.0768 | 1.1456 | 0.1087 | 0.6827 | 90.57 | | **GRAN** | 0.5778 | 0.3360 | 0.0406 | 0.2633 | 489.96 | | **GDSS** | 0.8189 | 0.6032 | 0.3315 | 0.4331 | 60.61 | | **DiscDDPM** | 0.4613 | 0.1681 | 0.0633 | 0.1561 | 42.80 | | **DiGress** | 0.0708 | 0.0092 | 0.1205 | 0.0489 | 18.68 | | **EDGE** | 0.0579 | 0.1773 | 0.0519 | 0.0658 | 15.76 | | **HiGen** | 0.0472 | 0.0031 | 0.0387 | 0.0454 | 5.24 | Regarding your concern about model sizes, we have comprehensively addressed it in the official comment titled *On Model sizes*. Despite notably smaller or equal model sizes, HiGen consistently surpasses GRAN's performance across various metrics. [1] Thompson, Rylee, et al. "On evaluation metrics for graph generative models." arXiv preprint arXiv:2201.09871 (2022). [2] Chen, Xiaohui, et al. "Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling." arXiv preprint arXiv:2305.04111 (2023).
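The RBF-kernel MMD underlying the GNN-based metrics above can be illustrated with a minimal sketch: given two sets of graph embeddings (e.g., from a randomly initialized GIN), compute the biased estimate of squared MMD. The function names and the fixed bandwidth `sigma` are illustrative assumptions, not the evaluation code of Thompson et al.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimator of squared Maximum Mean Discrepancy:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = rbf_kernel(X, X, sigma).mean()
    kyy = rbf_kernel(Y, Y, sigma).mean()
    kxy = rbf_kernel(X, Y, sigma).mean()
    return kxx + kyy - 2 * kxy
```

Identical embedding sets give an MMD of zero, and the value grows as the two sets drift apart; in the metric pipeline above, the two sets would be embeddings of generated versus test graphs.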
Summary: The paper introduces a graph generative model that is analogously structured as the inverse process of graph pooling, where the model first splits a single node into a metagraph. This metagraph is further partitioned by utilizing a multinomial scheme, which allows for the division of nodes and edges into intra-community and inter-community connections. The proposed model's performance is evaluated on several benchmarks using various metrics, demonstrating state-of-the-art results. Strengths: 1. The approach of initially generating a graph's skeleton and subsequently refining its details is a novel and intuitively logical motivation for the proposed model. 2. The proposed methods have demonstrated state-of-the-art performance on some widely adopted benchmarks. Weaknesses: 1. Some important technical aspects in the paper may require additional clarification or more detailed elaboration. Here are the major concerns regarding specific aspects: (1) Can the authors please provide more information on the loss function utilized in the model? (2) How is the weight on level 0 determined during model inference? (3) On line 188, the node embedding matrix $\mathbf{h}_{\hat{C}}$ is referenced without being defined. Could the authors please explain how this matrix is generated from the node and edge embeddings of prior levels? (4) Could the authors please elaborate on how the graph neural network (GNN) is utilized throughout the entire process? 2. The utilization of notations in the paper has resulted in a significant amount of confusion. There are two main issues that need to be addressed: (1) Inconsistent notations caused by reusing the same symbols: One notable example is the letter "t" used at lines 187-188, which has multiple interpretations. In $\hat{C}_{i,t}^l$, "t" represents the "t-th" step in the stick-breaking process. In $h_{(t, s)}$, "t" denotes the node that is associated with community "i".
Moreover, when referring to the node matrix size as "$t \times d_h$", it indicates the total number of nodes in community "i". These varying interpretations of the same notation can lead to confusion and should be clearly distinguished or explained consistently throughout the paper. (2) Notations used without being defined: An example is the "r" symbol in Figure 1 (c). Although it is assumed to represent the acronym for "remaining (edges)," its precise definition is not explicitly provided in the paper. To enhance clarity, it would be beneficial to define such notations explicitly or provide a glossary of symbols and their corresponding definitions. 3. In the paper, the specific method for determining the number of mutually exclusive events (i.e., the edges split from the same parent node) when modeling the partition weights using a multinomial distribution is not explicitly mentioned. This aspect requires further clarification or explanation. The paper should provide details on how the number of events is determined, whether it is considered a fixed parameter based on the model's architecture or if it is treated as a latent variable to be inferred during the training process. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Please address my questions in the previous section. 2. At line 188, the paper defines the feature for an edge $(t, s)$ as the difference between node features. This implies that the features for any self-edge would be zero, as the difference between a node's feature and itself would result in a zero value. Regarding Eq. (7), it states that the expected partition weight received by each edge is determined by concatenating the edge feature and the parent node feature, which remains constant for all edges split from the same parent node. Considering these points, if all self-edges have identical features, they are expected to receive the same partition weights. Could the authors please confirm or refute the above analysis? 3.
As a consequence of similar partition weights across the self-edges split from the same parent node, when these self-edges are further partitioned into intra-community connections, it could lead to similar sizes and connection densities across the communities. Does this align with the characteristics of the real-world graphs that the model is intended for? In other words, how well does the proposed model generate graphs whose communities have varying sizes and connection densities? 4. I have some questions regarding the rationale behind introducing Theorem 3.3, especially considering the prior introduction of Lemma 3.2. It seems to me that Theorem 3.3 primarily focuses on consolidating the two levels of split into a single function. I kindly request the authors to provide clarification regarding the significance and necessity of Theorem 3.3 in the overall context of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: 1. Please refer to questions 2 & 3. 2. The model is built upon the assumption that the graph contains underlying communities. While this assumption can aid in generating higher-quality graphs with evident community structures, it may come at the expense of the generation quality for graphs where the community structures are less apparent. It would be intriguing to explore how the quality of generated graphs varies with changes in graph modularity or other community metrics. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1.1)** Binomial and Multinomial distributions belong to the exponential family, and hence their log-likelihoods reduce to Bregman divergences []. The binomial likelihood is a general form of the Bernoulli likelihood, and the multinomial likelihood is a general form of the multinoulli likelihood. These details will be added to the appendix for clarification. **W1.2)** As explained in lines 123-125, we estimate $p(w\_0)$ by computing the empirical distribution (histogram) of $w\_0$ in the training set. Note that $w\_0$ is the number of edges for a graph with binary weights, i.e. $w\_0=m$, and it is the sum of the edge weights for a graph with integer-valued weights. Given that the vector $p$ contains the probabilities of all possible values of $w\_0$, at inference time (sample generation) we sample $w\_0$ from a multinoulli distribution with PMF $=p$. **W1.3 & W1.4)** Node embeddings $\mathbf{h}\_{\hat{\mathcal{C}}^{l}\_{i,t} }$, also called node features in the paper, are learned by GNN models (line 189) applied on the augmented sub-graph at the same level $\hat{\mathcal{C}}^{l}\_{i,t} $, so we can write $\mathbf{h}\_{\hat{\mathcal{C}}^{l}\_{i,t} } = GNN^l\_{com} ( \hat{\mathcal{C}}^{l}\_{i,t} )$ (this equation will be added in the final version for clarification). Note that we assume the node features are functions of the sub-graph at level $l$, not of the node and edge embeddings of prior levels. After obtaining the edge embeddings and the sub-graph embedding (graph-level representation), they are concatenated with the node features of the parent node to enrich the edge and sub-graph embeddings used in eq. (7) for calculating edge probabilities (line 195). This is how the final probabilities of the new edges are estimated based on $\mathbf{h}\_{\hat{\mathcal{C}}^{l}\_{i,t} }$ and its parent level.
For the inter-community (bipartite) components, we obtain node features as $\mathbf{h}\_{\hat{\mathcal{B}}^{l}\_{i,j} } = GNN^l\_{bp} ( \hat{\mathcal{G}}^{l} )$ and $\mathbf{h}\_{\mathcal{G}^{l-1} } = GNN^l (\mathcal{G}^{l-1} ) $, respectively, where $\hat{\mathcal{G}}^{l}$ is defined in line 202. As the main focus of this section is on designing a hierarchical and auto-regressive model for community and inter-community generation, the node feature encoding models are explained after them and in the appendix. **2.1)** Since community $i$ is generated recursively by adding a new node at each time step $t$ to the already generated sub-graph $\hat{\mathcal{C}}^{l}\_{i,t}$, the final value of $t$ equals the total number of nodes in community $i$. Accordingly, we denote the new node added at time step $t$ by $v\_t$. To illustrate, the 4th nodes in Figure 4.d for community generation are added at time step $t=4$, and their corresponding edges to the already generated communities are decided at this step. The figure in the attached pdf for auto-regressive community generation will also help resolve this confusion. **2.2)** As you mentioned, $r$ denotes the remaining weights. Indeed, it is defined in the caption of Figure 1 but with a typo, so the correction will be "... fraction of the remaining weights $r\_m$ is allocated to the $m$-th row ...". This variable is equal to $r\_m = w - \sum \_{i < m} v\_i$ in equation 3.3. **3)** The total number of edges (events) is established by the weight of the corresponding edge in the parent graph, as elaborated in Section 2 and further detailed in the Appendix (lines 464-468). This value serves as the initial value of the remaining weight during community generation, and it decreases at each generation step. Therefore, for community $i$ at level $l$, the remaining weights are calculated as follows: $ r\_0 = w^{l-1}\_{00}, r\_1 = r\_0 - v\_1, r\_2 = r\_1 - v\_2 , \ldots $ where $v\_t$ is sampled from the binomial in eq. (7).
To illustrate, referring to Figures 1.a and 1.c, the generation of the single community at level $l=1$ is associated with a total of 29 edges/events, determined by the parent node of this community. Consequently, the edge probabilities of this community follow a Multinomial distribution. However, we model it in an autoregressive (AR) manner as a sequence of Binomials and Multinomials, as outlined in Theorem 3.3. **Questions 2, 3)** Thank you for raising this analysis. Since the structure of the sub-graph evolves during recursive community generation, the graph-level features are not the same across the steps of community generation. Therefore, to address this issue, we have concatenated the edge features with the graph-level features $pool( \mathbf{h}\_{\hat{\mathcal{C}}^{l}\_{i,t} }) $ (the term that was also used to model the probability of the total weights $v\_t$ in eq. (7)). Consequently, the self-edges' outputs depend primarily on the graph-level features, ensuring that their probabilities are not identical. Moreover, we observed that the model was able to generate samples with heterogeneous communities of varying sizes. **4)** Thank you for your point. In the context of generating graphs in an autoregressive (AR) manner, there are two primary approaches: I) generating a graph by adding one edge at a time (edge AR), exemplified by methods like GraphRNN; II) generating one node and its corresponding group of edges at a time (node AR), as demonstrated by models like GRAN. Lemma 3.2 provides a way to generate edges recursively, making it suitable for modeling a multinomial distribution in edge-AR models such as GraphRNN. However, this approach requires a high number of generation steps, approximately $O(n^2)$, equivalent to completing the adjacency matrix element by element. On the other hand, Theorem 3.3 permits grouping the edges of each node and generating a community in a node-by-node fashion.
This means generating a group of edges corresponding to a row of the adjacency matrix at each step (as depicted in Figure 1.c), resulting in significantly faster generation. For this reason, we adopted Theorem 3.3 to model the probability of edges in each community instead of using Lemma 3.2. --- Rebuttal Comment 1.1: Title: Upscore decision Comment: I appreciate the authors for providing thorough responses to my concerns; I am glad to see that most of them have been addressed. Hence, I am biased towards accepting the paper if these explanations can be properly incorporated in the revision. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable feedback and are grateful for your recognition of our efforts to address your concerns. In our initial submission, due to the page limit constraints, we focused on key aspects and experimental results to effectively convey the core concepts and contributions of our work. However, with the additional space in the final version, we will integrate the supplementary analysis, comprehensive clarifications and new results that directly respond to your concerns, as well as those that emerged and were thoroughly discussed during the rebuttal phase.
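The remaining-weights recursion in the answers above ($r_0 = w$, $r_t = r_{t-1} - v_t$, with each $v_t$ binomial) is an instance of the standard decomposition of a Multinomial into a sequence of conditional Binomials. A minimal sketch of that decomposition follows; the function name and interface are illustrative assumptions, not the authors' implementation of Theorem 3.3.

```python
import numpy as np

def multinomial_via_binomials(w, theta, rng):
    """Sample counts ~ Multinomial(w, theta) one slot at a time,
    tracking the remaining weight r as in the recursion
    r_0 = w, r_t = r_{t-1} - v_t."""
    theta = np.asarray(theta, dtype=float)
    counts = np.zeros(len(theta), dtype=int)
    r, rest = w, 1.0  # remaining weight and remaining probability mass
    for t, p in enumerate(theta[:-1]):
        # Conditional on what is left, slot t gets Binomial(r, p / rest).
        v = rng.binomial(r, min(1.0, p / rest)) if rest > 0 else 0
        counts[t] = v
        r -= v
        rest -= p
    counts[-1] = r  # the last slot absorbs whatever weight remains
    return counts
```

The counts always sum to the parent weight $w$, mirroring how HiGen's child-edge weights must exhaust the weight of the corresponding parent edge.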
Rebuttal 1: Rebuttal: Thank you to the reviewers for their valuable comments and analyses, which have contributed to the clarity of the paper. We have addressed the questions and concerns of each reviewer individually and in the specified order. Your feedback will be helpful in enhancing the quality of our work. ## Statistics of the graph datasets Here we summarize some statistics of the graph datasets: dataset | $max(n)$ | $avg(n)$ | $avg(\|c\|\_{max})$ | $avg(n\_c)$ | $avg(modularity\_{gen})$ | $avg(modularity\_{test})$ ---|---|---|---|---|---|--- Enzyme | 125 | 33 | 9.8 | 4.62 | 0.62 | 0.59 Ego | 399 | 144 | 37.52 | 8.88 | 0.66 | 0.56 Protein | 500 | 258 | 26.05 | 13.62 | 0.8 | 0.77 SBM | 180 | 105 | 31.65 | 3.4 | 0.59 | 0.6 3D point Cloud | 5K | 1.4K | 97.67 | 18.67 | 0.88 | 0.85 where $\|c\|\_{max}$ denotes the maximum cluster size and $n\_c$ is the number of clusters in each graph. Pdf: /pdf/ee37fa56e2079ca2ebd49e723b9291bbbcaa4e52.pdf
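The modularity statistics above can be computed from any partition of a graph (e.g., one produced by the Louvain algorithm mentioned earlier). A minimal numpy sketch of Newman modularity follows; the function name and adjacency-matrix interface are illustrative, not the authors' pipeline.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph.
    A: symmetric adjacency matrix; labels: community id per node.
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j]."""
    k = A.sum(axis=1)      # node degrees (or strengths, if weighted)
    two_m = k.sum()        # 2 * number of edges for a 0/1 matrix
    same = np.equal.outer(labels, labels)  # same-community indicator
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

As a sanity check, two disjoint triangles partitioned into their natural two communities give the well-known value $Q = 0.5$, while putting all nodes in one community gives $Q = 0$.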
NeurIPS_2023_submissions_huggingface
2023
Bayesian Risk-Averse Q-Learning with Streaming Observations
Accept (poster)
Summary: The paper develops a Bayesian risk-averse Q-learning algorithm to tackle the setting of Bayesian risk MDP, which uses a Bayesian posterior to estimate the transition model and imposes a risk functional to account for the model uncertainty. The claim is that the proposed algorithm learns a "risk-averse yet optimal policy", which has a theoretical guarantee of strong convergence. Strengths: The paper proposes an interesting formalization of an algorithm for Bayesian risks in MDPs. The paper is scientifically sound and provides a few interesting results, among others with respect to the convergence (to the "optimal") in the context of infinite data. The paper explains why Monte-Carlo estimators are useful in practice and provides a few interesting theoretical properties. Numerical experiments provide illustrations of the theoretical analysis in the context of two relevant (small-scale) MDPs. Weaknesses: Even though related work is overall well-discussed, the paper could be more clear about the novelty of the different parts (e.g. Theorem 2.2 and Theorem 2.3 seem relatively generic, and even though I'm not an expert in the BRMDP setting, I believe close theorems exist in the literature). The notations and overall formalization might benefit from a few (minor) improvements to improve the readability (see additional comments and questions). Additional comments: - In Equation 1, $d_i$ and $p_i$ do not seem to be formally defined (even though we can guess what they refer to). - In Equation 1, $\rho$ is used with a subscript that depends on a sampling from a Dirichlet posterior, but $\rho$ was introduced on line 114 without any subscript (the meaning of the subscript might not be fully obvious). - line 97: $r$ is not defined (even though we can guess what it refers to). - line 108: (s,a) is not in math mode (italic). - line 109: "??" instead of a reference to the appendix.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Main question: - Theorems are mostly provided as fully original. What are the closest related theorems from the literature? - The inventory management problem is considered in two settings as can be read in the supplementary material. The second one is described as "(...) the demand depends on the current inventory level s. (...) we will consider the case where observations are insufficient to estimate the transition probability for every state-action pair". What is actually meant by that? And also, why are there two settings described in the supplementary material but only one in the main paper? Additional questions: - The abstract mentions the following: "The proposed algorithm learns a risk-averse yet optimal policy that depends on the availability of real-world observations." This seems unclear from the abstract because a risk averse policy will in general need make a tradeoff with the best expected return. Do you mean "risk averse policy that converges to the optimal one in the context of unlimited data"? - line 117, $\xi$ is defined as a probability distribution over $\mathbb R^n$, but is described as a "random vector taking values on $\mathbb R^n$". Can you clarify? - Line 123: Could there be some intuitions for the different parts of assumption 2.1? In particular Line 124-128 gives lightweight information about how it is slightly similar to the notion of coherent risk measure but that notion and the difference with it are not detailed. Why is sub additivity not included, why is 2.1.3 important? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations are provided. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comment. For the first main question, our presented theorems can be classified into two groups. Theorems 2.2 and 2.3 characterize the properties of the BRMDP. The BRMDP in this paper is different from previous risk-sensitive MDPs in that the risk functional is taken with respect to the posterior distribution to account for epistemic uncertainty, while most previous works on risk-sensitive MDPs impose the risk functional on the known transition probability to account for aleatoric uncertainty. The BRMDP was first proposed in [1], where they consider the finite-horizon MDP whose optimal policy can be solved using dynamic programming, while we consider the infinite-horizon and discounted MDP. No existing theorems can be applied here due to the different formulations. But like many works on (robust) infinite-horizon and discounted MDPs, we proved Theorem 2.2 in the standard way of showing the Bellman operator is a contraction mapping. For Theorem 2.3, we have not seen previous works study such a "convergence" property. As we mentioned above, previous works on risk-sensitive MDPs mainly deal with aleatoric uncertainty, and their value functions do not converge to that of the original MDP. The proof of Theorem 2.3 relies on the contraction property of the Bellman operator as provided in Theorem 2.2 and some statistical properties of the risk measures VaR and CVaR, which are well studied in many previous works. We agree with the reviewer that the proof of these two theorems can be regarded as an extension of previous work on robust MDPs to BRMDPs, with some effort dealing with risk functionals. The remaining theorems guarantee the convergence of the proposed algorithm, which follows the framework of stochastic approximation as in many other works on Q-learning. However, unlike previous work on either robust or non-robust Q-learning, where they can obtain an unbiased estimator of the Bellman operator, we cannot do so, as discussed in Section 3.
Instead, we need to show a uniform convergence of our Monte Carlo estimator of the Bellman operator in Theorem 4.3, which is the most challenging and novel theoretical result of the paper. The proof of Theorem 4.3 is completely new and non-trivial. For the second main question, the sentence "the demand depends on the current inventory level" is misplaced and should be deleted. In both settings, the demand does not depend on the inventory level. We greatly thank the reviewer for pointing this out, and we will correct this in the paper. In addition, the two settings refer to the results in Figures 4 and 5, respectively. In Figure 4, the posterior is updated at the beginning of each stage with some newly arrived data. In Figure 5, the posterior is only estimated once at the beginning of the first stage. This can be considered pure offline Q-learning, which is the same setting as the two distributionally robust (DR) Q-learning baselines in the comparison. Our purpose in showing the results in Figures 2-4 is to illustrate the advantage of utilizing streaming real-world data, as we can reduce the epistemic uncertainty. In Figure 5, when there is no source of streaming data, we want to illustrate that our Bayesian risk-averse (BR) policy possesses robustness like the other two DR policies. A DR policy has the best worst-case performance over a set of potential transition models. Here we list the performance (value function) of the different policies under different transition models. The two DR policies are the most robust, as in the worst case shown in Figure 5 (Poisson parameter equal to 2) they obtain the largest value function. Our BR policies with VaR and CVaR fall between the risk-neutral policy and the DR policies, showing that the risk measure offers a more flexible choice of risk attitude between the worst case and the risk-neutral case. For the first additional question, the reviewer is correct.
The risk-averse policy converges to the optimal policy as more data become available, since the epistemic uncertainty is reduced to 0. We thank the reviewer for pointing this out, and we will clarify this in the paper. For the second additional question, we thank the reviewer for pointing out this notation issue. We will correct it by deleting the distribution notation. For the third additional question, the only reason we replace sub-additivity with Assumption 2.1.3 is to make the assumption general enough to include the risk measure VaR, which is widely used but does not satisfy sub-additivity. In addition, we cannot simply delete the sub-additivity assumption without adding Assumption 2.1.3, because Assumption 2.1.3 is necessary to guarantee that the Bellman operator is a contraction mapping in the proof of Theorem 2.2. [1] Lin, Yifan, Yuxuan Ren, and Enlu Zhou. "Bayesian Risk Markov Decision Processes." Advances in Neural Information Processing Systems 35 (2022): 17430-17442. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: Thanks for the clarifications, I keep my score of 7 unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the very early response as well as the previous comments that help improve the paper.
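The Monte Carlo estimator discussed in this rebuttal (drawing transition vectors from the Dirichlet posterior and applying a risk functional such as CVaR to the resulting Bellman backups) can be sketched in a simplified one-step form. The function names, the scalar reward, and the plain CVaR-over-samples form are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def cvar(samples, alpha):
    """CVaR_alpha of the lower tail: mean of the worst alpha-fraction.
    Lower backup values are 'worse' here, so we average the lowest tail."""
    q = np.quantile(samples, alpha)   # VaR at level alpha
    return samples[samples <= q].mean()

def mc_risk_bellman(counts_sa, r_sa, V, gamma, alpha, m, rng):
    """Monte Carlo estimate of a risk-averse Bellman backup for one (s, a):
    draw m transition vectors p_i ~ Dirichlet(posterior counts), compute
    the backup r + gamma * p_i @ V for each, then take CVaR over them."""
    P = rng.dirichlet(counts_sa, size=m)   # m i.i.d. posterior samples
    backups = r_sa + gamma * P @ V
    return cvar(backups, alpha)
```

As the posterior concentrates (large counts from streaming data), the sampled backups cluster around the risk-neutral backup, which mirrors the rebuttal's point that the risk-averse value converges to the original one as epistemic uncertainty vanishes.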
Summary: This paper extends previous work on Bayesian Risk-averse MDPs (BRMDPs), an informed alternative to an ambiguity set, to the infinite-horizon setting. In doing so, the authors first present a nested BRMDP formulation for the state value function. Then, they show that the difference between the optimal value functions for the BRMDP and the true MDP is bounded. Afterwards, the authors present a multi-stage risk-averse Q-learning algorithm with periodic posterior updates and a Monte Carlo estimator for the proposed risk-averse Bellman operators. Simple simulations are presented to verify the improved performance as well as the lower variance of the proposed infinite-horizon BRMDP formulation. It should be noted that the Bayesian posterior and risk functionals are well-defined and that the state and action spaces discussed in the paper are both finite. The work does hold merit, and the findings could be communicated to a larger audience through a conference like NeurIPS. Strengths: The paper is generally well-written: 1. The infinite-horizon BRMDP formulation is novel and the accompanying recursive Bellman equations follow nicely. 2. Detailed proofs of the distance bound for the state value function and the convergence analysis are provided. 3. Experimental results are discerned in a manner that relates directly to claims of risk aversion (smaller variances) and performance gains. Weaknesses: 1. Apart from the assumption of the availability of a behavioral policy to generate real-world data, the rules for updating the sample sizes appear to be a heuristic at best. (ref. Algorithm 1) 2. The Monte Carlo estimators for the Bellman operators, in addition to generating real-world samples to update the Bayesian posterior, will make the algorithm extremely slow. 3. The state and action spaces under consideration appear to be finite, thus limiting the evaluation to less demanding experimental setups. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1.
In line 180, page 5, section 2.5, it is stated that a batch n(t) of observations is available. What is the assumption for the smallest batch size? 2. What is the reasoning behind using the Monte Carlo estimator? 3. Why is the benchmark Q-Learning on the true environment not provided in the experimental results? 4. In line 201, page 6, section 3.1, what does i.i.d sampling of probability distributions p_i mean? 5. What is the computational overhead of the proposed methodology, and how well will it scale? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed one limitation of their work, i.e., that the behavior policy which generates the real-world data is assumed to be given. In terms of societal impact, there would be no potential negative impact of this work according to the considered ethical criterion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments. For the first question, the proposed algorithm works for any batch size. For example, we can update the posterior once new data are available, in which case $n(t) \ge 1$. However, it is of future interest to control the number of Q-learning steps and the real-data batch size to improve sample efficiency. For the second question, the Monte Carlo estimator is designed for estimating the Bellman operator, which depends on the posterior of the transition model but not the real transition model. The real-world samples are only used for updating the posterior belief about the transition model to reduce the epistemic uncertainty. Once we update the posterior at the beginning of each stage, we turn to solving the BRMDP, the risk-averse problem, instead of focusing on the original MDP. Our framework can be regarded as episodic offline RL, where within each episode (stage) we do not have sources of real data but only a fixed set of data that are used to construct the posterior. For the third question, the Q-learning algorithm only uses real-world data. In our experiment setting, the amount of real-world data $n(t) = 5$ is much smaller than the computing budget of Q-learning updates $|\mathcal{S}|m(t) = 50$. As a result, model-free Q-learning converges quite slowly as we only have very little real-world data. In fact, the motivation for our proposed risk-averse formulation is to deal with the epistemic uncertainty caused by the (partial) lack of data, which can be attributed to either highly costly real data or safety concerns. For the fourth question, recall that $\phi_{s,a}$ denotes the Dirichlet posterior distribution on the unknown transition probability $\mathcal{P}^c_{s,a} \in \mathbf{R}^{|\mathcal{S}|}$. Each $p_i$ is a $|\mathcal{S}|$-dimensional vector that represents a transition probability. 
For the fifth question, the computational overhead is mainly in generating samples from the posterior distribution to estimate the Bellman operator for each state-action pair. We restrict the choice of risk functional to CVaR, which is always risk-averse, to roughly compute the computational complexity. For each state-action pair, generating a $|\mathcal{S}|$-dimensional random vector takes $O(|\mathcal{S}|)$ time. The number of samples needed to estimate the Bellman operator depends on the posterior update according to the proposed algorithm. If a certain state-action pair is never observed, then estimating the Bellman operator can be very costly when the stage is large. Although in the algorithm we allow this sample size to go to infinity to prove almost sure convergence (which requires the bias term to converge to 0), in practice we can often use a fixed sample size $\bar{N}$, which can be computed using a concentration bound for CVaR [1] to control the bias with some confidence level. Then, in each stage (the inner loop in the algorithm), we need to generate $m(t)\bar{N}|\mathcal{S}||\mathcal{A}|$ samples, which takes $O(m(t)\bar{N}|\mathcal{S}|^2|\mathcal{A}|)$ time. Assume $m(t) = m$ for simplicity. Since the Q-function given by the algorithm is always bounded by $\frac{\bar{R}}{1-\gamma}$, which does not depend on $|\mathcal{S}|$, the sample size $\bar{N}$ does not depend on $|\mathcal{S}|$ by [1]. Hence the computational overhead scales as $O(|\mathcal{S}|^2)$ in terms of the size of the state space. When the state space is large, function approximation of the Q-function is of interest to improve computational efficiency. [1] Thomas, P., and Learned-Miller, E. Concentration Inequalities for Conditional Value at Risk. ICML (2019). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for taking the time and providing detailed and satisfactory responses to my comments. 
Please try to incorporate as many details as possible in a future revised version of the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. We appreciate the comments and will definitely try our best to incorporate more details to make the paper more specific.
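As a rough sketch of the per-pair Monte Carlo CVaR Bellman estimate discussed in the rebuttal above (Dirichlet posterior sampling; all names, shapes, and the reward depending only on $(s,a)$ are our own simplifying assumptions, not the authors' code):

```python
import numpy as np

def cvar_bellman(Q, R, dirichlet_params, s, a, gamma=0.9, alpha=0.1, N=500, rng=None):
    # Monte Carlo estimate of the risk-averse Bellman operator T^phi Q(s, a):
    # CVaR_alpha over the Dirichlet posterior of the transition row P_{s,a}.
    rng = np.random.default_rng(0) if rng is None else rng
    P = rng.dirichlet(dirichlet_params[s, a], size=N)   # N posterior samples of P_{s,a}
    targets = R[s, a] + gamma * Q.max(axis=1)           # R(s, a) + gamma * max_b Q(s', b)
    vals = P @ targets                                  # f(Q | p, s, a) for each sample p
    k = max(1, int(np.ceil(alpha * N)))
    return np.sort(vals)[:k].mean()                     # mean of the worst alpha-fraction
```

With a nearly deterministic posterior the estimate collapses to the ordinary Bellman backup, while a diffuse posterior pulls the estimate below the posterior-mean value, which is the pessimism the rebuttal describes.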
Summary: The paper adopts the Bayesian risk MDP (BRMDP) formulation to train a reinforcement learning agent to be robust against model uncertainty. The infinite-horizon discounted value function of the BRMDP is defined by a nested formula, and properties of the value function are derived with VaR and CVaR as the risk functional. Furthermore, Q-learning algorithms for BRMDP are proposed using finite-sample estimators for the Bellman operator, and they are shown to converge to the optimal Q-function under some typical assumptions. Numerical experiments for the proposed risk-averse algorithm are shown and compared with its risk-neutral counterpart and other robust RL algorithms. Strengths: - The paper defines the infinite-horizon discounted value for BRMDP, and Theorem 2.3 provides bounds on the difference between the value function of the BRMDP and the value function of the true MDP. It shows that the risk-averse version will converge to the true value function as the number of observed transitions grows. - Q-learning algorithms for BRMDP are proposed using finite-sample estimators for the Bellman operator with either VaR or CVaR as the risk functional. The Q-learning algorithms are shown to converge to the optimal risk-averse Q-function in a data-conditional sense, and this further implies convergence to the true optimal Q-function given an infinite amount of observations. - Numerical experiments for the proposed risk-averse algorithm are shown, and it outperforms two existing robust Q-learning algorithms. Weaknesses: - The BRMDP formulation is motivated to provide robust policies, but there is no discussion of what "robustness" means in the context of this paper. The only place some kind of robustness measure is mentioned is in the numerical experiments, where the variations of performance are brought up in the discussion. But there is no proper definition for these variations and there is only very little discussion. 
- Theorem 2.3 is claimed to show the trade-off between robustness and conservativeness, but it is hard to talk about robustness without any definition of robustness. In some sense, the upper bound has the risk level parameter $\alpha$, which may be viewed as a kind of robustness level, but no analysis is provided on how the risk level $\alpha$ would affect the behavior of the risk-averse policy and how $\alpha$ would improve the robustness of the performance in some sense. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can we define the robustness using some metric like performance variations? How does the risk-averse policy compare with other policies in terms of some robustness metrics? - It is shown in Theorem 2.2 that BRMDP has a unique optimal value function, but it's not clear whether the random data-conditional optimal Q-function in Definition 4.1 exists. Do we have existence for the data-conditional optimal Q-function following similar arguments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are adequately addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments. For the first question, we believe the reviewer's concern is whether the robustness of the BRMDP can be shown in a more quantifiable and interpretable way. For example, in the literature on distributionally robust RL such as [1,2], the optimal robust policy is defined as the policy that has the best worst-case performance among a set of potential RL environments belonging to some ambiguity set constructed using a distributional metric. In the formulation of the BRMDP, we also consider a set of potential environments, but with the ambiguity set being the whole simplex space, that is, all possible transition probabilities. In addition, each possible transition probability is assigned a posterior density, and we do not consider the worst-case performance measure but some risk measure. The relation between distributionally robust MDPs and risk measures has been studied in [1,2]. In [1], the authors proved that when the ambiguity set satisfies some conditions, minimizing the cost of a distributionally robust MDP coincides with a risk minimization problem for some risk measure. The equivalence between distributionally robust MDPs and risk-sensitive MDPs is later shown in [2]. However, the mapping between distributionally robust MDPs and risk-sensitive MDPs cannot be explicitly characterized, and the proposed BRMDP involves a nested formulation and a risk functional over the posterior distribution, all of which make it difficult to define a metric such as performance variation. Nonetheless, we believe it is helpful to characterize how the value function of the BRMDP, $V^{\phi,\pi}$, differs from the value function of the original true MDP, $V^{c,\pi}$. While Theorem 2.6 serves this purpose, it is not intuitive enough. 
Our recent ongoing work shows that, for a given deterministic policy $\pi$, number of transition observations $O_{s,a}$ for state-action pair $(s,a)$, and total number of observations $O$, if $\lim_{O \rightarrow \infty} \frac{O_{s,\pi(s)}}{O} = \bar{o}_s$, then $V^{\phi,\pi}$ can be expressed as $$ V^{\phi,\pi} = V^{c,\pi} - \frac{1}{\sqrt{O}} C + o_p( \frac{1}{\sqrt{O}}), (2) $$ where $C = (I - \gamma \mathcal{P}^c_\pi)^{-1}\gamma \bar{o} \mu^\pi$ is a constant, $ \mu^\pi(s) = \frac{-\sigma^\pi_s}{\alpha} \psi(\Psi^{-1}(\alpha))$, $(\sigma^\pi_s)^2 = (V^{c,\pi})^\top \mathrm{diag}(i_{s'}^\pi)(V^{c,\pi})$, $ i_{s'}^\pi = (\mathcal{P}^c_{s,\pi(s)}(s'))^{-1}$, $\psi, \Psi$ represent the pdf and cdf of the standard normal distribution, and $o_p$ represents convergence in probability. Equation (2) indicates that maximizing the value function of the BRMDP is equivalent to maximizing the original value function minus a positive "bias" term depending on the risk level $\alpha$ and a problem-dependent variance term $\sigma_s^2$. Notice $ \frac{\psi(\Psi^{-1}(\alpha))}{\alpha} $ is decreasing in $\alpha$ and no larger than 1. A more risk-averse attitude (smaller $\alpha$) results in a larger bias term. Furthermore, this bias term diminishes at a rate of $\frac{1}{\sqrt{O}}$: as more data are collected, we are less pessimistic. For the second question, we thank the reviewer for pointing this issue out. The answer is yes. The data-conditional optimal Q-function is defined for each sample trajectory $\omega$ (which contains all the randomness). For each state-action pair $(s,a)$, the posterior $\phi^t_{s,a}$ will either remain unchanged after some period $\tau$ or concentrate on the true transition probability $P^c_{s,a}$ with probability 1. 
If $\phi^t_{s,a} = \phi^\tau_{s,a}$ for all $t>\tau$, then $\mathcal{T}^{\phi^\omega} Q(s,a) = \mathcal{T}^{\phi_\tau} Q(s,a)$; otherwise we can define $$\mathcal{T}^{\phi^\omega} Q(s,a) := \mathcal{T}^{c} Q(s,a) = \mathbf{E}_{P^c_{s,a}} [R(s,a,s') + \gamma \max_b Q(s',b)].$$ Then, $Q^{\omega,*}$ is the unique fixed point of $\mathcal{T}^{\phi^\omega}$, whose existence can be guaranteed with the same proof as Theorem 2.2. We will clarify this in the paper. [1] Bäuerle, Nicole, and Alexander Glauner. "Distributionally robust Markov decision processes and their connection to risk measures." Mathematics of Operations Research 47.3 (2022): 1757-1780. [2] Zhang, Runyu, Yang Hu, and Na Li. "Regularized Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity." arXiv preprint arXiv:2306.11626 (2023). --- Rebuttal Comment 1.1: Title: Rebuttal reminder Comment: Dear reviewer, We are grateful for the opportunity to address your valuable comments and concerns and would greatly appreciate your feedback in either the discussion or the scores. If you have further questions, please let us know so we can answer them before the end of the discussion period on August 21, 1pm EDT. Best regards, Authors --- Rebuttal Comment 1.2: Comment: I appreciate the authors' detailed responses. I think this paper has good contents, but as mentioned in the response, it is difficult to define metric-like performance variations with the BRMDP formulation. The lack of a robustness metric leaves a disconnection between the theoretical results and the numerical experiments. Thus I am inclined to keep my weak accept recommendation. --- Reply to Comment 1.2.1: Comment: The reviewer's concern is reasonable. We sincerely appreciate and understand your feedback.
Summary: The authors propose a novel multi-stage Bayesian risk-averse Q-learning algorithm to learn the optimal policy with streaming data. A central difference from existing methods is that they attach a risk functional to the future reward at each stage (nested), rather than just once on the total reward. To arrive at unbiased estimators for the Bellman operators, the authors resort to Monte Carlo. The convergence of the Q-learning algorithm is theoretically shown. The authors include an empirical investigation of their method on two toy tasks and show that their method is more flexible than some other worst-case criteria. Strengths: The authors propose a technically rigorous addition to the BRMDP literature. Their empirical results clearly show the utility of per-step risk functionals, and their extensive theoretical results include a number of - technically involved - contributions. The centrepiece here is certainly the proof of Theorem 4.3, i.e. the uniform convergence of the Bellman operator estimator. The paper is well-written and structured, and the resulting method has the potential to generate impact on the safe RL community. Weaknesses: While the paper convinces through its soundness and wealth of theoretical contributions, the empirical evaluation could feature more (and more complex) environments. A fundamental weakness seems to be limitations in applying the authors' method to high-dimensional data streams. For example, one weakness is that the authors do not extend their results to Q-learning with non-linear function approximation. I believe this could be done fairly easily, and would lay foundations for their method to be used on high-dimensional data streams. Another weakness to address when moving to high-dimensional data streams seems to be the relative sample-inefficiency of MC Bellman operator estimators (an empirical study of how efficient their proposed estimator is for high-dimensional data streams would be insightful). 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My questions concern possible extensions of this work, which I do not deem required for acceptance. * Do you think that a more sample-efficient way (as MC is unbiased but generally not efficient) of arriving at unbiased Bellman operator estimators could be attained, e.g. using variational inference? This seems to be required for problems featuring high-dimensional data streams. * The authors note that an extension to active sampling settings would be interesting. This would seem to require "deep exploration", necessitating a more fundamental Bayesian RL approach [1]. Do you see an easier way to make progress in this direction? [1] Bayesian Bellman Operators, Mattie Fellows et al, NeurIPS 2021 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations of their work, although I would have liked to see a discussion of how their method might scale to high-dimensional data streams. I do not see any negative societal impact arising from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's careful review. For the first question, we agree that it is worth deriving a more sample-efficient Bellman estimator and thank the reviewer for pointing this out. One way to do this is to use a gradient-based method to update the Bellman estimator as in [1], where the authors have a distributionally robust Bellman operator (DRBO) and rewrote it as a stochastic convex optimization problem. This enables them to update their DRBO estimator by updating the decision variable given one sample of reward and state transition each time. In our BRMDP, if we restrict the risk functional to CVaR, we can rewrite the Bellman estimator as $$\mathcal{T}^\phi Q(s,a) = CVaR_\alpha^{\phi_{s,a}} (f(Q|p,s,a)) = \sup_{\zeta} \{ \zeta - \frac{1}{\alpha}\mathbf{E}_{p\sim \phi} [(\zeta - f(Q|p,s,a) )^+] \}, (1) $$ where $f(Q|p,s,a) = \sum_{s'\in\mathcal{S}} p(s')(R(s,a,s') + \max_{b\in \mathcal{A}}Q_{s',b}) $. Notice that if we know the solution $\zeta^*$ to (1), then we can obtain an unbiased estimator by simply drawing one sample $p$ from $\phi_{s,a}$. The right-hand side of the last equation in (1) is a convex optimization problem and can be solved using stochastic gradient descent (SGD). However, solving (1) to optimality can be sample-inefficient. To improve efficiency, we do not solve (1) to optimality but only conduct one step of SGD to update the current estimate of $\zeta^*$ and use it to further estimate the Bellman operator. Such an estimator is biased since we do not solve the problem to optimality. Nonetheless, we can still expect convergence of the algorithm if the estimates of both $Q$ and $\zeta$ converge at appropriate rates (by setting proper learning rates). This is beyond the scope of this paper but is of interest for future research. 
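A minimal sketch of the stochastic-subgradient idea for Eq. (1), under our own simplifying assumptions (scalar $\zeta$, a generic sampler standing in for $f(Q|p,s,a)$; illustrative only, not the authors' implementation):

```python
import numpy as np

def cvar_dual_sgd(sample_f, alpha=0.1, steps=20000, lr=0.2, seed=0):
    # Rockafellar-Uryasev dual: CVaR_alpha(f) = sup_zeta { zeta - E[(zeta - f)^+] / alpha }.
    # One stochastic (sub)gradient ascent step on zeta per sample of f.
    rng = np.random.default_rng(seed)
    zeta = 0.0
    for t in range(steps):
        f = sample_f(rng)
        grad = 1.0 - (zeta > f) / alpha          # subgradient of the dual objective in zeta
        zeta += lr / np.sqrt(t + 1.0) * grad     # decaying step size
    # Plug the final zeta back in with a fresh batch to estimate CVaR itself.
    fs = np.array([sample_f(rng) for _ in range(100000)])
    return zeta, zeta - np.maximum(zeta - fs, 0.0).mean() / alpha

zeta, cvar = cvar_dual_sgd(lambda rng: rng.uniform())   # f ~ U(0, 1)
# At optimality, zeta* is the alpha-quantile (here 0.1) and CVaR_0.1 is
# the mean of the worst 10% of outcomes (here 0.05).
```

In the one-step variant the rebuttal describes, the loop body would run once per Q-learning iteration, interleaving the $\zeta$ update with the Q update instead of iterating to convergence.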
Also, we agree with the reviewer that variational inference can be used for complicated high-dimensional data streams to improve sample efficiency in both the update of the posterior distribution and the estimation of the Bellman operator, which is also of future interest. For the second question, the reviewer is correct that the extension to active sampling settings requires dealing with "exploration" and "exploitation". Both our paper and [2] given by the reviewer take the off-policy approach, assuming the real data are given under an arbitrary behavior policy. By choosing a proper behavior policy, both [2] and our approach can be easily extended to the active sampling setting. One possible way of doing so is to add an exploration bonus $Bonus(s,a)$ such as an upper confidence bound. To be more specific, with the current estimate of the Q-function, in state $s$ we can take the action $a^*$ that maximizes $Q(s,a) + Bonus(s,a)$ to collect data samples for the state-action pair $(s,a^*)$. [1] Liang, Zhipeng, et al. "Single-Trajectory Distributionally Robust Reinforcement Learning." arXiv preprint arXiv:2301.11721 (2023). [2] Fellows, Mattie, et al. "Bayesian Bellman Operators." NeurIPS (2021). --- Rebuttal Comment 1.1: Title: Thank you for your response, I will keep advocating for acceptance. Comment: Thanks for your response. My score remains unchanged. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reply.
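A tiny, hypothetical sketch of the bonus-driven action selection mentioned above (the names and the particular UCB-style bonus form are our own choices):

```python
import numpy as np

def ucb_action(Q, counts, s, c=1.0):
    # Choose a* = argmax_a Q(s, a) + Bonus(s, a), with a UCB-style bonus
    # that shrinks as the visit count of (s, a) grows.
    t = counts[s].sum() + 1
    bonus = c * np.sqrt(np.log(t) / np.maximum(counts[s], 1))
    return int(np.argmax(Q[s] + bonus))
```

With equal Q-values the less-visited action wins, so the behavior policy keeps collecting data for under-observed state-action pairs.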
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Errors-in-variables Fr\'echet Regression with Low-rank Covariate Approximation
Accept (poster)
Summary: The paper attempts to extend regression models by combining errors-in-variables in high-dimensional predictors with responses in a metric space. The main method is principal component regression, whereby principal components are formed in the predictor space and serve as dimension-reduced predictors. It ties into existing approaches for global object regression and is a valid attempt at a very challenging problem, but unfortunately none of the problems is solved: the predictors are not high-dimensional as their dimension is fixed; the errors-in-variables are not mitigated: while a bound is provided where the errors appear, the errors-in-variables model is not shown to allow for consistent estimation, and why principal components would be a reasonable approach to error mitigation is not elucidated; tuning parameter choice is not investigated. Strengths: This paper addresses two difficult topics, errors-in-variables and Fréchet regression. Errors-in-variables is a major problem in regression settings and is an important issue for many applications. While much research has been done on errors-in-variables in standard regression settings, it remains a challenging issue, and especially so when the responses are in metric spaces. Such new types of responses are also clearly relevant for modern data. An approach through principal component regression would be of interest if it leads to consistent estimation and is implemented in a data-adaptive way. So the topic of this paper is timely from several perspectives. Weaknesses: (1) The background is not well balanced. It emphasizes Fréchet regression while it is very superficial about errors-in-variables. Since principal component regression is used to address errors-in-variables, it is then of interest to report how PCR mitigates/addresses or is expected to mitigate errors-in-variables even in the standard regression setting. 
(2) The proposed method does not lead to convergence for errors-in-variables according to (16), but this is what is needed for a method to succeed in this setting. (3) There is no data-adaptive choice (along with theory justifying it) for the tuning parameter $\lambda$; however, this parameter may be a crucial feature for principal component regression implementations as it determines the number of components to be included as predictors. It is not clear how it should be chosen to mitigate errors-in-variables or when responses are in a metric space. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is the bound in (16) sharp? High-dimensional predictors are mentioned but the dimension p is fixed throughout. Can you handle the case of large p where p would increase with the sample size? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the previous comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We value the insightful feedback and comments received. In response, we have prepared a comprehensive Author Rebuttal report and individual rebuttals to address the highlighted concerns and questions. Our response is structured to first address the highlighted weaknesses and then provide detailed point-by-point responses to the specific questions. ### **Weaknesses** #### 1\. Background of study: In this paper, we tackle EIV problems within the Fr\'echet regression framework. While previous works have explored regression analysis in non-Euclidean metric spaces, addressing EIV issues in this context remains uncharted. This study aims to fill this gap, responding to real-world situations where latent measurement errors impact observed covariates. We adopt PCR as a concrete, practical solution to EIV models in non-Euclidean regression, driven by two compelling considerations. Firstly, the prevalence of (approximate) low-rank structures in real-world datasets enhances the practical relevance of our approach. Secondly, we intentionally opt for an approach with minimal assumptions regarding covariate errors to ensure broad applicability. Notably, PCR aligns well with these considerations, due to its inherent utilization of effective low-rank structures and its demonstrated capacity to alleviate covariate errors in the standard regression setting [2]. This has prompted our anticipation that PCR can similarly address EIV challenges in non-Euclidean settings. While there is an extensive EIV literature, conventional techniques often rely on knowledge of error distributions or require prior information to eliminate measurement errors. In response to the reviewer's insightful comment, we focus on "mitigating" (possibly non-stochastic) covariate perturbations by retaining only the essential assumptions necessary for Fr\'echet regression analysis, instead of attempting to completely "denoise" EIV effects, which may depend on impractical information. 
Importantly, PCR's advantage lies in its error-mitigation capabilities without necessitating prior knowledge of error distributions. Our method demonstrates effectiveness with theoretical guarantees and practical applicability. We value the reviewer's input and acknowledge that these nuances may not have been fully conveyed in our initial presentation. In particular, we recognize the need to elaborate on how/why PCR is expected to mitigate/address EIV challenges. To address this, we plan to enrich the "Errors-in-variables regression" and "Principal component regression" paragraphs in the Related Work, providing a more comprehensive overview of the literature and our motivations in the upcoming camera-ready version after acceptance. #### 2\. Convergence/mitigation of EIV implied by Eq.(16): Here we reiterate our response in the Author Rebuttal report. Despite its generality, this upper bound highlights the effective error mitigation in specific scenarios. Consider the following: 1. **Well-balanced, effectively low-rank covariates:** Suppose that $X\in R^{n\times p}$ satisfies (1) $|X_{ij}|=\Omega(1)$ for all $i,j$, and (2) $\sigma_1(X)\asymp\sigma_r(X)\gg\sigma_{r+1}(X)\asymp\sigma_{n\wedge p}(X) = O(1)$, where $r\ll n\wedge p$ denotes the effective rank of $X$. Then we have $\sigma_1(X)^2\asymp\sigma_r(X)^2\asymp\|X\|_F^2/r\gtrsim np/r$. 2. **Independent sub-Gaussian noise:** Next, suppose that $Z=X+E$, where $E$ is a random matrix with independent sub-Gaussian rows. Then $\|Z-X\|\lesssim\sqrt{n}+\sqrt{p}$ with high probability due to a concentration inequality. In the random design scenario where $X$ and $x$ have i.i.d. rows drawn from the same distribution, $\|x-\mu_{D_n}\|_{\Sigma}\approx 1$ with high probability. As a result, the upper bound in Eq.(16) is bounded by $\sqrt{r/p}+\sqrt{r/n}$, which diminishes to 0 when $r\ll n\wedge p$. #### 3\. 
Principled choice of $\lambda$: In our numerical study, we utilized a uniform search grid for $\lambda$ selection, opting for the value that minimizes the MSPE. Similarly, in practical scenarios, cross-validation can be employed to identify the optimal $\lambda$. Regarding the concern about the lack of a data-adaptive principle (and theoretical justification), it's worth noting that the effective estimation of $X$ from $Z$ drives the error-mitigating capability, irrespective of $\mathcal{Y}$'s metric, under Assumptions (C0)-(C2). The low-rank estimation of $X$ via SVT (Eq. 6) is closely connected to nuclear-norm-regularized low-rank matrix estimation (hard SVT vs. soft SVT). Based on these observations, we conjecture that the well-studied principles for selecting $\lambda$ in low-rank matrix estimation could be extended with minor adjustments. ### **Questions** #### 1\. Sharpness of the bound in Eq.(16): The error bound in Eq.(16) is sharp (up to a multiplicative constant) due to the existence of a worst-case noise instance ($E=Z-X$) that achieves equality. This sharpness is inherited from the tightness of the subspace perturbation bound (Davis-Kahan/Wedin). While it might be feasible to achieve a more potent error-mitigating bound by adopting a stricter model assumption, we have refrained from doing so to maintain generality. Thus, Eq. (16) remains sharp within our framework. #### 2\. Handling the high-dimensional case ($p>n$): Our approach, involving low-rank matrix estimation via SVT, effectively addresses the high-dimensional case where $p$ grows with $n$, under the condition that the (effective) rank of $X$ remains appreciably smaller than $n\wedge p$, ensuring control over $\|Z-X\|/\sigma_r(X)$ as seen in Eq. (16). Furthermore, we have supplemented our response with additional numerical simulation results in the Author Rebuttal report. These results underscore the superior performance of our proposed method compared to naive Fr\'echet regression in EIV settings. 
Our method consistently achieves better predictive accuracy across diverse scenarios, including the cases with $p > n$. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have just reviewed the reference paper [2] regarding the role of the low-rank covariate matrix and the significance of large p in their work. Essentially, the findings presented in [2] support the validity of the principal component regression approach in your research. I am content with your explanations and have adjusted the score to 5 (borderline accept), as the results are accurate; however, the novelty beyond what is presented in [2] appears to be limited. --- Reply to Comment 1.1.1: Comment: Thank you once again for your thoughtful feedback on our research paper and for revisiting the reference paper [2] regarding the role of the low-rank covariate matrix and the significance of large $p$. We also appreciate your diligence in re-evaluating the nuances of our contribution, and we are pleased to learn that our explanations have met your satisfaction. Nonetheless, we wish to take this opportunity to address your concerns about the perceived limited novelty beyond [2]. In this regard, we aim to elucidate the unique aspects of our work, highlighting its novelty and significance. 1. **Addressing Errors-in-Variables Regression with Non-Euclidean Response Variables:** While our approach shares some similarities with Agarwal et al.'s [2] strategy for addressing errors-in-variables (EIV), it goes beyond these similarities by tackling the unique challenges posed by non-Euclidean regression settings. Unlike the standard linear regression scenario considered by Agarwal et al. [2], where a hypothesis relation $f: x \mapsto y$ can be explicitly represented as a regression coefficient vector $\beta \in R^p$, the distinctive nature of non-Euclidean response variables makes such a representation unattainable. 
The absence of an explicit form of the solution -- e.g., the global Fr\'echet regression function in Eq. (3) of the original submission -- profoundly limits the mathematical tools available for analyzing the theoretical properties of the Fr\'echet regression function. To the best of our knowledge, the analysis of Fr\'echet regression is only available through generalized $M$-estimation theory, and EIV problems have not been addressed in the recent literature, including pioneering works such as [38], [41], and [46]. By effectively integrating Fr\'echet regression with covariate data cleansing (via SVT), our method adeptly handles a broad spectrum of non-Euclidean regression scenarios, subject to the specified regularity assumptions (C0)-(C2). This wide-ranging applicability stands as a notable and innovative advancement beyond the scope of [2]. 2. **Advancements in Theoretical Proof and Bounds:** While our proof of Theorem 3 shares common arguments and mathematical tools with [2], we emphasize that our proof enhances clarity and precision. Notably, our analysis employs a cleaner and more straightforward argument, evident in the comparison between the proof of our Theorem 3 and that of Agarwal et al.'s Theorems 3.1 \& 3.2. Moreover, our approach eliminates dependency on exogenous factors, e.g., $\| \beta^* \|_1$ as seen in Eq. (5) of [2], providing a more intuitive understanding of prediction error control as shown in Eq. (16) of the original submission. Considering these elucidations, we believe that our research makes a substantial contribution towards addressing challenges in errors-in-variables regression with non-Euclidean response variables. We appreciate your recognition of the accuracy and validity of our results. We understand your concerns about the novelty beyond [2], and we would like to assure you that the universal applicability of our method, coupled with the advancements in theoretical proofs, substantiates its significance.
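For readers less familiar with Fréchet regression, the "no explicit form" point above can be made concrete: only in the Euclidean special case does the global Fréchet regression function admit a closed form. The sketch below is our own illustration (not code from the paper); it assumes the standard global Fréchet weight function $s_i(x) = 1 + (X_i-\bar{X})^\top \hat\Sigma^{-1}(x-\bar{X})$ and checks that, for scalar responses under the squared Euclidean metric, the weighted Fréchet mean coincides with the ordinary least-squares prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
Y = 0.7 + X @ beta + 0.1 * rng.normal(size=n)

Xbar = X.mean(axis=0)
Sigma = (X - Xbar).T @ (X - Xbar) / n        # 1/n sample covariance
Sinv = np.linalg.inv(Sigma)

def frechet_predict(x):
    """Global Frechet prediction: weighted Frechet mean of the Y_i."""
    s = 1.0 + (X - Xbar) @ Sinv @ (x - Xbar)  # weight s_i(x); sums to n
    return (s * Y).mean()                     # argmin_y of sum_i s_i |Y_i - y|^2

# Sanity check against ordinary least squares with an intercept
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
x0 = rng.normal(size=p)
ols_pred = coef[0] + x0 @ coef[1:]
print(abs(frechet_predict(x0) - ols_pred))   # agreement up to floating point
```

For a genuinely non-Euclidean metric (e.g., the Wasserstein distance), the same weighted objective must instead be minimized numerically for each $x$, which is exactly why no closed-form analysis is available.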
Summary: This paper discusses a new method for Fréchet regression that relates a non-Euclidean response variable to multivariate predictors. The proposed method leverages the low-rank structure of the covariate matrix to improve the efficiency and accuracy of the estimator, particularly in high-dimensional and errors-in-variables regression settings. The authors provide a theoretical analysis of the estimator's properties and demonstrate its superior performance through numerical experiments. Strengths: 1. The authors propose the regularized global Fréchet regression framework that can effectively utilize the low-rank structure inherent in the covariate matrix. 2. The proposed method is straightforward to implement and works well in the errors-in-variables regression setting. 3. The authors have provided both theoretical analysis and numerical experiments to demonstrate the effectiveness of the proposed method. Weaknesses: 1. The paper looks interesting, but it does not appear to be significantly advancing the state of the art in the field, e.g., [38, 41]. 2. In Theorem 2, it is not clear how $\lambda$ affects the convergence rate in general. 3. No discussions regarding the performance of the three algorithms are given after Table 1 in Section 5. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In the numerical experiments, does the method still work for the high-dimensional case when $p>n$? 2. How to choose the regularization parameter $\lambda$ in practice? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable feedback and comments on our work. With gratitude for the positive evaluation, we are committed to further elucidating our contributions by addressing the concerns and questions you have raised. To facilitate this process, our response is structured to first address highlighted weaknesses, followed by comprehensive point-by-point responses to your specific questions. ### **Weaknesses** #### 1\. Lack of significant advances: We acknowledge the pioneering nature of [38] and [41] in the realm of Fr\'echet regression, as highlighted in our Introduction section. Yet, it is crucial to underscore that our paper's primary focus is on addressing Errors-in-Variables (EIV) challenges within the Fr\'echet regression framework. While there have been commendable strides in statistical and machine learning literature, including notable works such as [10], [38], [41], [46], and [54], that delve into regression analysis for response variables within general metric spaces, we are not aware of any prior investigations into addressing EIV problems in the non-Euclidean regression setting. Our work bridges this gap adeptly, addressing the EIV challenge in the non-Euclidean regression context, relevant to real-world scenarios with latent measurement errors impacting observed covariates. Our paper also offers substantial theoretical advancements. An inherent challenge we tackled involves analyzing the asymptotic behavior of the $M$-estimator in the global Fr\'echet regression model when coupled with the singular value thresholding (SVT) technique. This task is made considerably intricate due to the absence of an explicit form for the global Fr\'echet regression function in generic metric spaces, where it can only be identified pointwise. Thus, our endeavor necessitates a detailed examination of risk differences involving weight function perturbations, influenced by the threshold $\lambda$ and covariate errors. 
As such, deriving the bias-variance decomposition in Theorem 2 is not straightforward; intricate technical development, exemplified by Lemmas 2 and 3, was essential. Additionally, the derivation of Theorem 3 diverges from conventional PCR analyses in [2] for similar reasons, highlighting our unique approach. While rooted in some proof techniques from [41] and [2], our work employs more sophisticated arguments to address intricate nuances and challenges. Considering these factors, we firmly assert that our work significantly advances this field. #### 2\. Effects of $\lambda$ in Theorem 2: As $\lambda$ increases, the bias (reflected in $b_{\lambda}$) grows while the variance diminishes. Our analysis captures the bias trend, although the variance aspect may be slightly unclear due to the asymptotic nature of our analysis. Refining our analysis into the finite-sample properties in future research could provide further insights. #### 3\. Missing discussions on numerical experiments: A thorough analysis of the three algorithms' performance is available in Appendix E. We've incorporated technical details like simulation settings, implementation specifics, evaluation metrics, and further result discussions in Appendix E due to page limitations. We acknowledge the reviewer's recommendation to succinctly describe these aspects in the Experiments section for clarity. In the camera-ready version post-acceptance, we will provide clear references to facilitate easy access to the technical details. ### **Questions** #### 1\. Numerical experiments for high-dimensional cases: We've performed extra simulations to reaffirm the consistency of our numerical results with those in the original manuscript. Specifically, we have extended our investigation to include nine additional combinations, with $n\in\{100, 200, 400\}$ and $p\in\{150, 300, 600\}$, as presented in Table R.1 attached to the Author Rebuttal. 
These encompass high-dimensional scenarios, aiding readers in extrapolating numerical trends to even larger-scale experiments. For comprehensive information, please refer to our responses in the Author Rebuttal report. #### 2\. Choice of $\lambda$ in practice: In our numerical analysis, we utilized an evenly spaced search grid in the range $(0, \lambda_1 \sqrt{p/n})$, taking into account the maximum eigenvalue of the covariance matrix ($\lambda_1$), model complexity ($p$), and sample size ($n$). After evaluating mean squared prediction errors (MSPE) across this grid, we selected the parameter value that minimized MSPE. In practical contexts, cross-validation could be employed to estimate MSPE, and it is viable to use the sample covariance matrix of noisy covariates to approximate $\lambda_1$. Importantly, it is noteworthy that certain settings within the penalized regression literature provide theoretically-guided principles for selecting $\lambda$.
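The grid-search procedure described above can be sketched as follows. This is our own illustrative reconstruction, not the authors' code: the variable names, the validation split, and the exact grid scaling (we use a fraction of the top singular value of $Z$ rather than $\lambda_1\sqrt{p/n}$) are our assumptions. The idea is hard singular value thresholding of the noisy covariates $Z$, a regression fit on the denoised matrix, and selection of $\lambda$ from an evenly spaced grid by minimizing held-out MSPE.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 120, 40, 3

# Low-rank signal X; observed covariates Z = X + E (errors in variables)
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))
Z = X + 0.5 * rng.normal(size=(n, p))
beta = rng.normal(size=p)
Y = X @ beta + 0.1 * rng.normal(size=n)

def hard_svt(Z, lam):
    """Keep only singular values >= lam (hard singular value thresholding)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    keep = s >= lam
    return (U[:, keep] * s[keep]) @ Vt[keep]

train, test = np.arange(0, 80), np.arange(80, n)
smax = np.linalg.svd(Z, compute_uv=False)[0]
grid = np.linspace(1e-6, 0.9 * smax, 100)   # evenly spaced grid, as in the rebuttal

mspes = []
for lam in grid:
    Xhat = hard_svt(Z, lam)                 # denoise covariates at threshold lam
    coef, *_ = np.linalg.lstsq(Xhat[train], Y[train], rcond=None)
    mspes.append(np.mean((Xhat[test] @ coef - Y[test]) ** 2))

best = grid[int(np.argmin(mspes))]
print(f"selected lambda = {best:.3f}, MSPE = {min(mspes):.4f}")
```

The first grid point ($\lambda \approx 0$) keeps all singular values and so corresponds to the naive EIV fit; by construction the selected $\lambda$ can only match or improve on it.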
Summary: This paper proposes a new method in Frechet regression of non-Euclidean response variables, with a particular focus on high-dimensional, errors-in-variables regression. The idea is to combine the original Frechet regression with Principal Component Regression (PCR). In this way, the low-rank structure in the matrix of (Euclidean) covariates is utilized by extracting its principal components via low-rank matrix approximation. Theoretical analysis of the consistency, convergence, etc. of the proposed method has been provided. Numerical experiments on simulated datasets have also been presented. Strengths: (1) The proposed method tackles limitations of Frechet estimation such as reliance on ideal scenarios with abundant and noiseless covariate data; (2) The method utilizes the low-rank structure so that it can be applied to high-dimensional settings; (3) both theoretical analysis and empirical results are demonstrated. Weaknesses: (1) the paper underlines some terms and references, which is rarely seen in papers; (2) line 98: after "Frechet regression.", there is an additional phrase "in this study." Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: no limitations provided Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback and positive evaluation. In response, we have prepared both a comprehensive Author Rebuttal report and an individual rebuttal to address the highlighted weaknesses and questions. We believe these efforts will strengthen our contributions while effectively addressing the concerns raised. ### **Regarding your specific points** #### 1\. Atypical underlines: Thank you for bringing this to our attention. We will ensure the removal of the underlines in the revision. #### 2\. Typo in Line 98: We are grateful for your pointing out the typo. During the preparation of the camera-ready revision upon acceptance, we will meticulously review the manuscript and make necessary corrections, including those highlighted by the reviewer. --- Rebuttal Comment 1.1: Title: keep the score as is Comment: Thanks for the rebuttal and I will keep my score as is.
Summary: The proposed method leverages the low-rank structure inherent in the covariate matrix to improve efficiency and accuracy. It combines global Fréchet regression with principal component regression to enable more effective modeling and estimation, especially in high-dimensional and errors-in-variables regression settings. The paper provides a theoretical analysis of the proposed estimator's properties in large samples, including bias, variance, and variations due to measurement errors. Empirical experiments support the theoretical findings, demonstrating the superior performance of the approach. Strengths: Strengths of this paper include: The paper introduces a new framework, called regularized (global) Fréchet regression, which combines Fréchet regression and principal component regression. This framework effectively utilizes the low-rank structure in the covariate matrix by extracting principal components through low-rank matrix approximation. The paper provides a thorough theoretical analysis with three main theorems. Firstly, it proves the consistency of the proposed estimator for the true global Fréchet regression model. Secondly, it investigates the convergence rate of the estimator's bias and variance. Lastly, it derives an upper bound for the distance between estimates obtained with error-free covariates and those with errors-in-variables covariates. These results establish the effectiveness of the proposed framework in addressing model mis-specification and achieving more efficient model estimation. Numerical experiments conducted on simulated datasets validate the theoretical findings. The results demonstrate that the proposed method provides more accurate estimates of regression parameters, particularly in high-dimensional settings. The experiments highlight the importance of incorporating the low-rank structure of covariates in Fréchet regression and provide empirical evidence that aligns with the theoretical analysis. 
Weaknesses: The authors should have performed experiments in diverse settings using multiple distance metrics. It would have been interesting to see the relation of MSPE against threshold in different settings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the difference between MSE and MSPE, does MSPE stand for Mean Square Prediction Error? The abbreviations in the experiments section have not been explained properly like REV, EIV, SVT, etc. Line 147 can be rewritten as " .. conditional distribution of Y given X = x is normally distributed .. " Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback on our work. With gratitude for the positive evaluation, we are dedicated to clarifying and enhancing our contributions by addressing the raised concerns and questions. To facilitate this process, our response is organized to first tackle highlighted weaknesses, followed by a detailed response to the question. ### **Weaknesses** #### 1\. Experiments in more diverse settings: We value the reviewer's suggestion and have extended our numerical investigation to encompass a wider range of scenarios. This includes diverse problem parameters and the introduction of non-Gaussian noise (Laplacian) conditions for the Wasserstein space example. Furthermore, we have examined the standard linear regression model for Euclidean responses, along with variations involving different metrics ($\ell_1$ and $\ell_{\infty}$) within the response space $\mathcal{Y}=R^d$. We present a summary of these findings in Table R.1 and Figure R.1, included in the attachment to our Author Rebuttal. Specifically, Figure R.1 demonstrates consistent trends in mean squared prediction error (MSPE) across all metrics, resembling those observed in Figure 2 of our original submission. Notably, our proposed method (represented in blue) consistently outperforms the naive EIV approach (depicted in red) across all three metrics. It is important to highlight that while the $\ell_2$ metric enables an explicit linear regression model form, such an explicit form is not available for the $\ell_1$ or $\ell_{\infty}$ metrics, just as it is not for Fr\'echet regression in the Wasserstein space. ### **Questions** #### 1\. Acronyms / paraphrasing of Line 147: We acknowledge the reviewer's accurate observation regarding the MSPE acronym. The formal definition of Mean Squared Prediction Error can be found in Appendix E (Line 822). 
Due to page constraints, we deferred the definitions of evaluation metrics and acronyms, including Mean Squared Error (MSE) and Mean Squared Prediction Error (MSPE), to Appendix E of the original manuscript. Other technical details, such as simulation settings, implementation specifics, and evaluation metrics, are also provided in Appendix E. We appreciate the reviewer's suggestion and recognize the value of providing concise explanations of these terms within the Experiments section to enhance readers' comprehension. During the revision process for the camera-ready version upon acceptance, we will incorporate these explanations either in the caption of Table 1 or within the main text. Moreover, we will revise line 147, as per the suggestion, for improved clarity. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal and clarifications. I will keep the positive score.
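To make the MSE/MSPE distinction discussed above concrete for readers, here is a minimal sketch (our illustration, not the authors' evaluation code): MSE measures in-sample fitting error on the training data, while MSPE measures squared error on new, held-out observations.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 50
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)            # training data
x_new = rng.normal(size=m)
y_new = 2 * x_new + rng.normal(size=m)    # fresh data, never used for fitting

b = np.polyfit(x, y, 1)                   # simple linear fit
mse = np.mean((np.polyval(b, x) - y) ** 2)          # in-sample: Mean Squared Error
mspe = np.mean((np.polyval(b, x_new) - y_new) ** 2)  # out-of-sample: Mean Squared Prediction Error
print(f"MSE = {mse:.3f}, MSPE = {mspe:.3f}")
```

With the unit-variance noise assumed here, both quantities hover near 1; in general the MSPE also reflects estimation error in the fitted coefficients, which is why it is the natural criterion for selecting tuning parameters such as $\lambda$.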
Rebuttal 1: Rebuttal: We appreciate the valuable feedback from the reviewers and their constructive comments. Here, we address common themes raised by the reviewers, providing clarity on our methodological and theoretical contributions. We also present supplementary numerical results to reinforce our findings. Detailed responses to each reviewer's comments can be found in dedicated individual rebuttals. ### **Summary of contributions** We address the errors-in-variables (EIV) regression problem in a non-Euclidean response context through a composite approach, combining Fr\'echet regression (for non-Euclidean responses) and principal component regression (for covariate error mitigation). Unlike conventional EIV literature that relies on distributional knowledge for covariate error ($\varepsilon_i$ in Eq.(5)), we avoid such assumptions, distinguishing our approach. Rather than aiming for complete elimination of covariate noise -- a challenging task without stringent distributional assumptions -- we focus on "mitigating" errors. This is achieved by estimating the design matrix through low-rank matrix estimation, particularly via principal component regression (PCR), which approximates the design matrix and alleviates errors. A similar approach was explored in standard linear regression setting [2]. The paper underscores efficient use of the design matrix's low-rank structure (covariates), even in arbitrary non-Euclidean metric spaces for the response variable. Our "regularized" Fr\'echet regression estimator's superiority is supported by three theorems (consistency, convergence rate, and error reduction in variables) and also corroborated by numerical experiments. ### **More background & Rationale for PCR** A vast literature addresses EIV models. In this work, we adopt PCR for a concrete, practical solution to EIV models in non-Euclidean regression, driven by two compelling reasons. 
Firstly, the prevalence of (approximate) low-rank structures in real-world datasets renders our approach practically relevant. Secondly, we purposefully choose an approach with minimal assumptions concerning covariate errors for broad applicability. PCR aligns with these considerations, leveraging inherent low-rank structures effectively and demonstrating error-mitigating capabilities [2]. Notably, PCR stands apart from conventional EIV techniques by not requiring a priori knowledge of measurement error distributions. Moreover, PCR has an extensive presence in the high-dimensional statistics and dimensionality reduction literature. We appreciate Reviewer 1 (Jnac) and Reviewer 5 (urUC) for their invaluable insights. Acknowledging that these nuances may not have been fully elucidated in our initial presentation, we plan to augment the "Errors-in-variables-regression" paragraph within the Related Work section in the upcoming camera-ready version after acceptance. This revision will offer a more comprehensive overview of high-dimensional and robust modeling, clarifying our motivations and methodology. ### **Implications of Theorem 3** We refrain from imposing distributional assumptions on $\varepsilon=Z-X$, ensuring the broad applicability of inequality Eq.(16) across diverse scenarios. Also, this bound in Eq.(16) is sharp, exemplified by a worst-case noise instance that attains equality (up to a multiplicative constant). Despite its generality, this upper bound highlights the effective error mitigation in specific scenarios. Consider the following: 1. *Well-balanced, effectively low-rank covariates:* Suppose that $X\in R^{n\times p}$ satisfies (1) $|X_{ij}|=\Omega(1)$ for all $i,j$, and (2) $\sigma_1(X)\asymp\sigma_r(X)\gg\sigma_{r+1}(X)\asymp\sigma_{n\wedge p}(X) = O(1)$, where $r\ll n\wedge p$ denotes the effective rank of $X$. Then we have $\sigma_1(X)^2\asymp\sigma_r(X)^2\asymp\|X\|_F^2/r\gtrsim np/r$. 2. 
*Independent sub-gaussian covariate noise:* Next, suppose that $Z=X+E$ where $E$ is a random matrix with independent sub-Gaussian rows. Then $\|Z-X\|\lesssim\sqrt{n}+\sqrt{p}$ with high probability due to a standard concentration inequality. In the random design scenario where $X$ and $x$ have i.i.d. rows drawn from the same distribution, $\|x-\mu_{D_n}\|_{\Sigma}\approx 1$ with high probability. As a result, the upper bound in Eq.(16) is bounded by $\sqrt{r/p}+\sqrt{r/n}$, which diminishes to 0 when $r\ll n\wedge p$. ### **Additional numerical experiments** Our extended numerical investigation encompasses non-Gaussian (Laplacian) covariate noise and includes larger-scale experiments, including $p>n$ cases, enabling straightforward extrapolation to larger scenarios. The summarized results in the attachment (Table R.1) affirm the superiority of our proposed method (SVT), evident from consistently lower mean squared prediction errors (MSPE). This added experiment further underscores SVT's ability to effectively address EIV challenges, enhancing prediction accuracy for EIV Fr\'echet regression, even without prior knowledge of measurement errors. Significantly, SVT even surpasses error-free-covariate Fr\'echet regression (REF) in MSPE within this experiment. Standard Fr\'echet regression generally falters in high-dimensional settings due to issues like non-invertibility of the sample covariance matrix (when $n<p$), or high covariate correlations. Our background study (not detailed here) identified the ill-posed nature of the M-estimation for REF, leading to instability and escalated mean squared errors (MSE). Hence, Table R.1 presents REF's finite-sample performance achieved through the pseudo-inverse of the sample covariance matrix. Nevertheless, SVT proves a more dependable predictive model than even this stabilized version of REF. 
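The sub-Gaussian operator-norm bound quoted above, $\|Z-X\|\lesssim\sqrt{n}+\sqrt{p}$, is easy to verify numerically. The following sketch is our own illustration (Gaussian noise as a special case of sub-Gaussian): it compares the spectral norm of a noise matrix against $\sqrt{n}+\sqrt{p}$ for a few shapes, including a $p>n$ case.

```python
import numpy as np

rng = np.random.default_rng(2)
ratios = []
for n, p in [(200, 100), (400, 400), (100, 600)]:
    E = rng.normal(size=(n, p))           # i.i.d. standard Gaussian noise E = Z - X
    op_norm = np.linalg.norm(E, ord=2)    # spectral norm ||E||
    bound = np.sqrt(n) + np.sqrt(p)       # expected-order bound for Gaussian matrices
    ratios.append(op_norm / bound)
    print(f"n={n:4d}, p={p:4d}: ||E|| = {op_norm:6.1f}, sqrt(n)+sqrt(p) = {bound:6.1f}")
```

In each case the ratio sits just below 1, consistent with the claim that the noise contribution in Eq. (16) stays controlled even when $p$ exceeds $n$, provided the signal singular values grow like $\sqrt{np/r}$.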
Furthermore, we conducted experiments in linear regression settings ($\cal{Y}=R^d$) using three metrics: $\ell_1$, $\ell_2$, and $\ell_{\infty}$. Similar trends in MSPE, observed in Figure 2 of the original submission, persist across all these metrics; see Figure R.1 attached. Pdf: /pdf/08e89ff60dacce54add47d55f493de8c19749789.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The submission 5822 makes a step forward in exploring the Fréchet regression, which is a significant approach for non-Euclidean response variables. Compared to existing work, this research has specific focuses on 1) high-dimensional, 2) errors-in-variables settings, and designed a novel framework that combines the Fréchet regression and the principal component regression. The analysis in the paper 1) proved the consistency of the proposed estimator, 2) investigated the convergence rate, and 3) derived an upper bound for the distance between the estimates from error-free and errors-in-variables covariates. Some numerical simulations on synthetic datasets are provided. Strengths: - this work is well-written and provides results that I believe will be of interest to the community. The idea to design such a regression scheme is novel to me. In particular, the framework and main theorems described in Sec. 3 and Sec. 4 are convincing and well presented. The literature (from what I know) is globally well discussed. - claims and mathematical derivations seem to be sound and correct, and it clearly states its contributions, notation and results. - also the R implementation is offered in supplement, which allows for reproducing the key results using the README file Weaknesses: - I understand this paper is a theory-based work. However, I still think the numerical simulations are too weak and might not be convincing. - the scale of the experiment is too small. The authors are suggested to consider larger $p$ and $n$, as well as more concrete examples/models as mentioned in Sec. 4.1 - the synthesized data ($p<n$) in implementation is *not* high-dimensional, which is the main setting of this work - lack of experiments on real-world benchmarks, e.g., UCI, libsvm datasets - maybe the motivations of this research should be reconsidered. - why the authors want to combine the *Fréchet regression* and *errors-in-variables setting*, along with the specific *PCR*? 
The authors only mentioned some advances in PCR research in line 96 and expressed their inspiration from this. In fact, there is a whole lot of literature on high-dimensional statistical learning and robust modeling. Why not borrow relevant ideas? - compared to the existing work of Fréchet regression, what are the core challenges (difficulties) of this work, and did the authors draw on some existing frameworks / proof techniques from them? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - in experiments, the Gaussian noise added to covariates may be too ideal. Although this assumption is common, could you provide results for other forms of noise (for example, some work on adversarial perturbations)? - how would the proposed theorems extend to concrete models/objectives? providing a practical guide is better for helping readers understand the contribution of this paper - line 826: ''$\Lambda$ is a fine grid on ...'' adding more details about how to define the searching grid and the final chosen value of $\lambda$ are appreciated - some lines have small typos, e.g., in line 523, it should be Proof of Proposition **4** I'm not an expert in non-Euclidean regression analysis and would be happy to revise the rating if I missed some key aspects Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - authors did not discuss in detail the limitations of the proposed regression framework (while they said "Yes" in checklist), but given that this work is based on theoretical analysis, I believe that all Assumptions in main body could clarify the technical limitations - there's no need to discuss potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We value the insightful comments and feedback provided. Our response is organized to address the highlighted weaknesses first, followed by comprehensive point-by-point responses to the specific questions. ### **Weaknesses** #### 1\. Numerical simulations: *(A) Scale of experiments, high-dimensional settings, and more concrete examples:* We expanded simulations, incorporating results in the Author Rebuttal report; we added nine configurations in Table R.1, encompassing various dimensions ($n\in\{100, 200, 400\}$ and $p\in\{150, 300, 600\}$) for easy extrapolation to larger-scale experiments. Additionally, we extended investigations, analyzing the proposed method's prediction performance in standard regression analysis with Euclidean responses (Example 1 in Sect. 4.1) and other metrics ($\ell_1$ and $\ell_{\infty}$) in $R^d$. Consistent MSPE trends were observed, akin to Figure 2 in the original submission; see Figure R.1 in the Author Rebuttal report. Lastly, our focus on the Wasserstein space (Example 2 in Sect. 4.1) as the primary simulation instance in the original submission aligns with growing interest in machine learning for random objects, enhancing insight into errors-in-variables challenges within related work. *(B) Absence of real-world benchmarks:* As the reviewer mentioned, this paper primarily focuses on establishing theoretical guarantees for employing singular value thresholding (SVT) in errors-in-variables Fr\'echet regression analysis for metric-space-valued responses, with a comprehensive analysis. While real-world benchmark experiments are absent, our mathematically rigorous analysis and numerical experiments on synthesized data consistently demonstrate SVT's superior finite-sample performance over the naive EIV estimator, even without leveraging prior knowledge of measurement error distributions. 
We acknowledge the reviewer's point and concur that investigating SVT's performance on real-world datasets offers exciting prospects for future research. Additionally, the inherent flexibility of our two-step EIV Fr\'echet regression framework (allowing covariate cleansing through means other than SVT) suggests potential for practical improvements, making these avenues enticing for further exploration. #### 2\. Reconsidering the motivations: *(A) Why PCR?* Indeed, there exists a diverse body of literature in high-dimensional learning and robust regression modeling, as acknowledged by the reviewer and elaborated upon in the Author Rebuttal. However, much of this literature assumes response spaces to be vector spaces or endowed with inner product structures. Additionally, prior statistical analyses of EIV problems often assume known or estimable noise distributions. In this paper, we propose a two-step approach involving covariate cleansing followed by Fréchet regression. Specifically, we opt for PCR/SVT for the covariate cleansing step, among other methods, to address these challenges in scenarios where distributional knowledge is absent. We appreciate the reviewer's suggestion of exploring other ideas in the literature and agree that this could offer promising avenues for future research. *(B) Core challenges:* One of the core challenges in our work was analyzing the asymptotic behavior of the $M$-estimator in the global Fr\'echet regression model combined with SVT. Unlike settings with an algebraic structure on the response space, the global Fr\'echet regression model in generic metric spaces lacks an explicit form and is defined only pointwise. Thus, controlling the risk difference involves scrutinizing perturbations of weight functions, influenced by the threshold $\lambda$ and covariate errors. 
Consequently, obtaining the bias-variance decomposition in Theorem 2 is not straightforward from existing works; it necessitated intricate technical development, as exemplified by Lemmas 2 and 3. The derivation of Theorem 3 also departs from conventional PCR analyses in [2] due to similar reasons. While drawing on some proof techniques in [41] and [2], we employed more sophisticated arguments to address these unique intricacies and challenges within our framework. ### **Questions** #### 1\. Non-Gaussian noise: Please refer to the Author Rebuttal and its attachment for extra experimental outcomes using Laplacian noise. It's important to note that our proposed method isn't explicitly designed for active robustness, yet it can withstand (sparse) adversarial noise if $\|Z-X\|$ is significantly smaller than the signal singular value, as indicated by Eq. (16). #### 2\. Illustration of Theorems with concrete models: In our paper, we used a basic linear regression model with scalar responses to offer an accessible illustration for readers unfamiliar with metric-space-valued responses. For instance, consider the linear regression $Y = \alpha + \beta X + \eta$, where $X$ is confined to a compact interval in $\mathbb{R}$ (page 4). Then, Theorem 2 demonstrates that bias asymptotically relies on eigenvalues below threshold $\lambda$, while variance follows a $\sqrt{n}$ rate. This extends standard principal component regression to metric-space-valued regression. Remark 1 on page 7 discusses the application of Theorem 2 in Examples 1, 2, and 3, and Theorem 3's implications are highlighted in the Author Rebuttal report. #### 3\. Details for the searching grid: We used a 100-point evenly spaced search grid in the interval $(0, \lambda_1 \sqrt{p/n})$, where $\lambda_1$ is the maximum eigenvalue of the covariance matrix. MSPE was calculated over this grid, and the value minimizing MSPE was selected. For instance, we obtained $\lambda=0.3584$ for $n=100$, $p=50$ in Figure 2. 
Due to page limits, full technical details, simulation settings, implementation, metrics, and result discussions are deferred to Appendix E. However, we recognize the value of summarizing these details in the Experiments section for reader clarity. Upon revision, we'll ensure a clear pointer to technical details. --- Rebuttal Comment 1.1: Comment: Thanks for your response to each question and hope to see these clarifications in the final version. I have improved my rating to 6.
An Optimal Structured Zeroth-order Algorithm for Non-smooth Optimization
Accept (poster)
Summary: This paper presents a study on a structured finite-difference algorithm for non-smooth black-box optimization. The authors successfully demonstrate that their finite-difference surrogate serves as an unbiased estimator of the gradient of the smoothing approximation of the target function. The proposed O-ZD method's convergence analysis is established under different assumptions. Strengths: Strengths And Weaknesses: Overall, the paper is well-structured and provides valuable insights into the structured finite-difference method for non-smooth black-box optimization. The authors establish the optimal complexity in the non-smooth convex case and demonstrate convergence rates in the non-smooth nonconvex and smooth settings. However, it is unclear to me what the exact theoretical complexity improvement of this method is compared to state-of-the-art methods in different scenarios. Additionally, the objective function used in the numerical experiments appears to be relatively simple. Furthermore, I kindly request the authors to address the following questions for better understanding: 1. Could the authors please provide the exact formula of $C$ in Corollary 1? It seems that $C$ should be somehow related to $\theta$. 2. In line 205, the authors claim that the complexity in terms of the number of iterations is better than [15] and [37]. However, I believe it would be more appropriate to compare the number of function evaluations, since the iterations of O-ZD are more computationally expensive. 3. I suggest the authors include a table explicitly listing the function-evaluation complexity of this work and the related works, rather than just providing a discussion for each case separately. Weaknesses: See above Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her observations and questions, which we answer here. ## Weakness In the literature, different works (empirically) showed that imposing a structure on the direction matrices lets zeroth-order methods obtain better performance. However, no non-smooth analysis was provided. The main goal of this work was to provide an analysis of a structured zeroth-order method in the non-smooth setting. Indeed, one of our main results is the Smoothing Lemma for direction matrices in $\mathcal{O}(d)$. The main advantage w.r.t. the state of the art is in the variance of the gradient estimator (as indicated in lines 210 to 212). Moreover, we agree with the reviewer that the objective functions considered in the numerical experiments are relatively simple. Indeed, the main contributions of this work are theoretical and the experimental part is included just to confirm the theoretical results. We will extend the experimental part by repeating the experiments on different functions (some of these experiments are included in the global response). ## Questions * Q1: We provided the explicit formulation of the constant $C$ of Corollary $1$ (point $(i)$) in the proof (in Appendix B.2). We will write it in the main paper. * Q2 and Q3: In line 205 we discuss the result obtained for non-smooth convex functions. In [30] and [15, Theorem 2], the authors consider estimators built using a single direction, and thus the complexity in terms of the number of iterations coincides with the complexity in terms of function evaluations (i.e. we can obtain the latter by multiplying the former by $2$, since a single iteration costs $2$ function evaluations). In [30] (more precisely in Theorem 6), to derive the complexity, the authors consider a constant step-size scheme (see [30] eq. 46) and they obtain a complexity of $\mathcal{O}(d^2 \varepsilon^{-2})$. 
For our algorithm, the constant step-size scheme is considered in Corollary $1$ point $(iii)$. The complexity in terms of function evaluations is $\mathcal{O}(d \varepsilon^{-2})$ (since every iteration costs $2\ell$ function evaluations, we can compute the complexity in terms of function evaluations by multiplying the complexity in terms of the number of iterations by $2\ell$). In [15, Theorem 2], the authors consider a stepsize sequence $\alpha_k$ that satisfies the Robbins-Monro conditions. More precisely, they consider a stepsize sequence $\alpha_k$ of the form $\alpha (1/\sqrt{k})$ with $\alpha$ constant (note that this choice does not satisfy the Robbins-Monro conditions; however, the choice $1/(k^{1/2 + \delta})$ satisfies them with $\delta$ arbitrarily close to $0$). Using their double smoothing scheme they obtain a complexity in terms of function evaluations of the order of $\mathcal{O}(d \log d \, \varepsilon^{-2})$ (this can be obtained by upper-bounding eq. $18$ by $\varepsilon$ and solving the inequality for $k$, i.e. searching for $k$ such that the right-hand side of eq. $18$ is smaller than $\varepsilon$). To make a fair comparison, we consider the choice of parameters proposed in Corollary 1 point $(i)$, i.e. $\alpha_k = \alpha k^{-\theta}$. As indicated in the discussion, in order to obtain the optimal dependence on the dimension we need to include $\sqrt{\ell/d}$ in the stepsize, e.g. by taking $\alpha = \sqrt{\ell/d}$ (lines 204-205). Again, the complexity in terms of function evaluations is $\mathcal{O}(d \varepsilon^{-2})$. We will include an Appendix "Expanded Discussion" in which we extend the discussions below the corollaries, also including a table with the complexities in terms of the number of function evaluations of this work and the related works. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I have raised my score.
Summary: This paper analyzes the convergence rate of structured zeroth-order optimization, whose descent directions at each iteration are chosen via a random matrix from the orthogonal group. It applies to the most general non-smooth setting, and the paper gives the convergence rate in all specific settings of interest. Strengths: This paper analyzes an important optimization method, and provides concrete mathematical analysis to prove the theories. The result is novel and comprehensive. This paper has a very detailed discussion of many settings, each of which has the convergence rate in its case. Weaknesses: No major weaknesses, questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Eq(2), is there a motivation why you want to fix $\ell$ and vary $G$? Is $h$ fixed or random? In Algo. 1, I think it means sample $G_k$ i.i.d. from "a uniform distribution" on $O(d)$, is that correct? What structure do the Algo's chosen directions satisfy? For example, the paper mentions "orthogonality [30,40]" as a type of structure, but are the directions in Algo. 1 orthogonal? What is the difference between 1) "sampling $G_k$ randomly" and 2) zeroth-order GD when you just randomly choose a point nearby and estimate $f(x+dx) - f(x-dx)$? With this part clarified, I would be willing to increase the score. There is another type of zeroth-order method - although not always applicable in practice, it is worth mentioning. When the objective function is analytic, one can use complex numbers to make the estimation variance smaller by estimating the gradient by $$ \mathrm{Im}( f(x+yi) - f(x-yi) ) / 2y $$ where $y$ is small. For example, if $f(x) = x^3$, then $$ \mathrm{Im}( f(x+yi) - f(x-yi) ) / 2y = 3x^2 + O(y^2)$$ $$ ( f(x+y) - f(x-y) ) / 2y = 3x^2 + O(y^2)$$ Both truncation errors are $O(y^2)$, but with complex numbers there is no subtraction of nearly equal real numbers, so the noise or variance on the real part can be ignored, with only high-order noise kept. ====== Raised to 7. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
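The complex-step trick mentioned at the end of the review above is easy to check numerically. A minimal demo (ours, not part of the paper), assuming an analytic objective:

```python
def f(x):
    return x ** 3

x, y = 2.0, 1e-8

# Complex-step derivative: Im(f(x + iy))/y, algebraically equal to
# Im(f(x+iy) - f(x-iy))/(2y) for analytic f with real x. There is no
# subtraction of nearly equal real numbers, so no catastrophic cancellation.
complex_step = f(x + 1j * y).imag / y

# Real central difference: O(y^2) truncation error in exact arithmetic, but
# the subtraction loses precision in floating point when y is very small.
central_diff = (f(x + y) - f(x - y)) / (2 * y)

# The exact derivative of x**3 at x = 2 is 12.
```

With `y = 1e-8` the complex-step estimate is accurate to roughly machine precision, while the real central difference carries cancellation error several orders of magnitude larger.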
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments, and we answer his/her questions here. ## Question **Eq. 2 and gradient estimator.** In eq. 2, we introduce our gradient estimator. The parameter $h$ controls the smoothness of the smoothed target (see Proposition 1) and, in the smooth setting, the quality of the estimator (see Lemma 4). It is not random but fixed (and the results depend on the choice of this parameter). Specifically, in Eq. 2 $h$ is fixed. In Algorithm 1 we consider a sequence of $h_k$ where $k$ is the iteration counter, and for every $k=0, 1, 2, \cdots$, we compute \begin{equation} g_k(x_k) := \frac{d}{\ell} \sum\limits_{i = 1}^\ell \frac{f(x_k + h_k G_k e_i) - f(x_k - h_k G_k e_i)}{2h_k} G_k e_i. \end{equation} According to the theoretical results, the best choice of $\ell$ is $\ell = d$ because it reduces or removes (in the non-smooth setting) the dependence on the dimension in the variance upper bound (see Lemma 4 in the Appendix). However, a sequence of $\ell$ can be considered, and it can be useful in practical scenarios where a budget of function evaluations is provided (such a budget can be "derived" by considering the time cost of a single function evaluation and how much time we want to spend to solve the optimization problem). **$G_k$ sampling.** In Algorithm 1, $G_k$ is i.i.d. uniformly sampled from $O(d)$. **Structured directions and differences with random directions.** Directions in Algorithm 1 are orthogonal in the sense that, since $G_k \in \mathcal{O}(d)$, we have $G_k^\intercal G_k = G_k G_k^\intercal = I$. Of course, since $\ell \leq d$, we have to truncate it. The main difference between using orthogonal directions and random directions is in the variance of the estimator obtained, as we indicated in lines 210-212. Moreover, in previous works (see e.g. 
[1, 2, 3, 4]), authors showed that gradient estimators built with structured directions (in particular orthogonal directions) provide better performance than the ones built using random directions. In particular, in [2] the authors empirically show that to obtain a gradient accuracy comparable to methods that use orthogonal directions, other methods (e.g. random Gaussian or spherical) can require significantly more samples. Informally, such an improvement can be justified by noting that structured directions provide a better local exploration of the space than non-structured ones, reducing the probability of generating bad directions. **Other zeroth-order methods.** We thank the reviewer for indicating these zeroth-order methods. We will include some references in the "Related Work" section. **References** 1. K. Choromanski, M. Rowland, V. Sindhwani, R. Turner, and A. Weller. Structured evolution with compact architectures for scalable policy optimization. 2. A. S. Berahas, L. Cao, K. Choromanski, and K. Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. 3. M. Rando, C. Molinari, S. Villa, and L. Rosasco. Stochastic zeroth order descent with structured directions. 4. D. Kozak, C. Molinari, L. Rosasco, L. Tenorio, and S. Villa. Zeroth order optimization with orthogonal random directions. --- Rebuttal Comment 1.1: Title: Thanks for the comment Comment: The comment mostly makes sense to me. About the "structure" part, I see that it means that in each batch, since the $e_i$'s are orthogonal, the directions "in each batch" are orthogonal. Could the authors point out how much faster it is compared to just sampling random vectors (given Reviewer KbSU saying that sampling $G$ might have higher oracle complexity)? Raised score to 6 for now, will make it 7 if the authors give a convincing answer to this and to Reviewer KbSU's question. 
About fixed $\ell$, it would be great to find an optimal rate with the best strategy when "a sequence of $\ell$ can be considered". I think the theory is self-consistent and no more experiments are necessary, in opposition to Reviewer VJ1k's comment. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for his/her response, and we answer his/her question here. Note that in the literature, different methods to generate orthogonal matrices have been proposed - see e.g. [1,2,3,4,5,6,7,8,9]. ***Direction matrix as Householder reflection.*** An efficient method to generate an orthogonal matrix is the following: at iteration $k \in \mathbb{N}$, we generate the direction matrix $G_k$ as a single random Householder reflection, i.e. \begin{equation} G_k := I - 2 v_k v_k^\intercal \end{equation} where $I$ is the identity matrix in $\mathbb{R}^{d \times d}$ and $v_k$ is a vector uniformly sampled from the sphere (i.e. $v_k \in \mathbb{S}^{d-1}$). Note that the cost of this method consists of two parts: - Generation of $v_k$: generating a Gaussian vector and normalizing it. - The outer product $v_k v_k^\intercal$, for which modern implementations exploit parallelization/vectorization. The identity matrix can be generated and stored offline. Since it is very sparse, it can be stored using a sparse format (e.g. COO [10]). In this way, we can save resources in high-dimensional settings. The computation of the gradient approximation $g_k$ at iteration $k \in \mathbb{N}$ is \begin{equation*} g_k(x_k) := \frac{d}{\ell} \sum\limits_{i = 1}^\ell \frac{f(x_k + h_k G_k e_i) - f(x_k - h_k G_k e_i) }{2h_k}G_k e_i. \end{equation*} Thus, it "uses" only the first $\ell$ columns of $G_k$. Therefore, considering $\ell$ constant (as we proposed), we can store offline a (truncated) identity $I_{d, \ell}$ and compute the outer product truncating $v_k^\intercal$. 
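The Householder construction described above is cheap to sanity-check. A small sketch (our illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
v = rng.standard_normal(d)
v /= np.linalg.norm(v)                  # v uniform on the unit sphere S^{d-1}
G = np.eye(d) - 2.0 * np.outer(v, v)    # single random Householder reflection

# G is orthogonal (G G^T = I) and a reflection (det G = -1), hence G is in O(d);
# the estimator only ever uses its first l columns.
```

The only per-iteration work is one Gaussian sample, one normalization, and one outer product, which matches the cost breakdown in the reply.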
We report the time cost of generating a set of directions using this procedure and random (Gaussian and spherical) directions in the case $\ell = d$ (i.e. the most time-expensive setting). Mean and standard deviation are computed over 500 repetitions.

| $d$ | Random Gaussian | Random Spherical | Householder |
| --- | --- | --- | --- |
| 2 | 9.27e-7 $\pm$ 7.96e-7 | 5.49e-6 $\pm$ 2.05e-6 | 9.32e-6 $\pm$ 3.43e-6 |
| 4 | 1.30e-6 $\pm$ 7.21e-7 | 6.56e-6 $\pm$ 2.63e-6 | 1.12e-5 $\pm$ 5.79e-6 |
| 8 | 2.18e-6 $\pm$ 6.06e-7 | 8.01e-6 $\pm$ 5.32e-6 | 1.11e-5 $\pm$ 5.20e-6 |
| 16 | 5.69e-6 $\pm$ 1.61e-6 | 1.15e-5 $\pm$ 4.10e-6 | 1.18e-5 $\pm$ 7.20e-6 |
| 32 | 1.78e-5 $\pm$ 6.42e-6 | 2.49e-5 $\pm$ 1.33e-5 | 1.16e-5 $\pm$ 7.25e-6 |
| 64 | 6.58e-5 $\pm$ 7.03e-6 | 7.74e-5 $\pm$ 1.95e-5 | 1.62e-5 $\pm$ 3.79e-6 |
| 128 | 2.73e-4 $\pm$ 2.37e-5 | 2.98e-4 $\pm$ 2.45e-5 | 3.32e-5 $\pm$ 4.02e-6 |
| 256 | 1.26e-3 $\pm$ 2.79e-5 | 1.36e-3 $\pm$ 2.90e-5 | 1.20e-4 $\pm$ 1.04e-4 |
| 512 | 5.50e-3 $\pm$ 1.63e-4 | 5.91e-3 $\pm$ 1.22e-4 | 1.22e-3 $\pm$ 3.82e-4 |
| 1024 | 2.16e-2 $\pm$ 6.92e-4 | 2.41e-2 $\pm$ 7.35e-4 | 4.83e-3 $\pm$ 2.26e-3 |
| 2048 | 8.92e-2 $\pm$ 8.19e-2 | 1.04e-1 $\pm$ 1.03e-1 | 2.40e-2 $\pm$ 3.87e-2 |

The resources of the machine used to perform this experiment are described in Appendix C. Note that our procedure is more expensive than random directions only in low-dimensional settings (i.e. for $d \leq 16$). However, for higher-dimensional cases, it scales better in time (i.e. it is cheaper than random directions). Moreover, note that the highest cost in this procedure is the outer product, which can be computed efficiently on GPU (and thus the time cost can be reduced by exploiting it). We will extend Appendix D to include this table and other details. Moreover, in order to complete the answer to your question (and Reviewer KbSU's fourth question), we have to compute and compare the performance in function values using this algorithm instead of random directions. 
To do that, we repeated the numerical experiments plotting the computational time on the x-axis, and reported the results in the global response (see Figure 2), as requested by Reviewer KbSU (the choice of the parameters is described in Section 4 and Appendix C). As we can observe, orthogonal directions still provide better performance than random directions. Such results confirm the empirical observations of [11]. **References** 1. A. Genz. Methods for generating random orthogonal matrices. 2. F. Mezzadri. How to generate random matrices from the classical compact groups. 3. K. Choromanski, M. Rowland, W. Chen, and A. Weller. Unifying orthogonal monte carlo methods. 4. A. Hedayat and W. D. Wallis. Hadamard matrices and their applications. 5. Å. Björck. Numerics of Gram-Schmidt orthogonalization. 6. T. W. Anderson, I. Olkin, and L. G. Underhill. Generation of random orthogonal matrices. 7. A. Barvinok. Approximating orthogonal matrices by permutation matrices. 8. C. Rusu and L. Rosasco. Fast approximation of orthogonal matrices and application to PCA. 9. C. Boutsidis and A. Gittens. Improved matrix algorithms via the subsampled randomized Hadamard transform. 10. P. Virtanen et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. 11. A. Berahas, L. Cao, K. Choromanski and K. Scheinberg. A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization.
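Putting the pieces of this thread together, the structured estimator itself can be sketched as follows. This is a hedged reconstruction of the estimator in Eq. 2 as quoted in the discussion, not the released code; the QR-based sampling of an orthogonal matrix is one common recipe, used here for illustration. On a smooth quadratic with $\ell = d$ it recovers the gradient up to finite-difference error:

```python
import numpy as np

def structured_grad(f, x, G, h, l):
    """Structured central-difference estimator: (d/l) * sum_i df_i * (G e_i)."""
    d = x.size
    g = np.zeros(d)
    for i in range(l):
        u = G[:, i]                                     # column i of G, i.e. G e_i
        g += (f(x + h * u) - f(x - h * u)) / (2 * h) * u
    return (d / l) * g

rng = np.random.default_rng(1)
d = 5
# Draw an orthogonal matrix via QR of a Gaussian matrix (illustrative recipe).
G, _ = np.linalg.qr(rng.standard_normal((d, d)))
x = rng.standard_normal(d)
g = structured_grad(lambda z: 0.5 * z @ z, x, G, h=1e-6, l=d)  # true gradient is x
```

With $\ell = d$ the columns of $G$ form a full orthonormal basis, so summing the directional central differences reconstructs the gradient exactly on a quadratic, which is one way to see why the variance does not blow up with the dimension.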
Summary: This paper proposed a structured zeroth-order estimator for non-smooth optimization. The proposed algorithm using this estimator achieves the optimal convergence rate for non-smooth convex optimization, and also achieves a convergence rate in terms of Goldstein stationarity for non-smooth non-convex optimization. Numerical experiments are provided to show the efficiency of the proposed algorithm. Strengths: The theory is well-rounded with both theoretical and numerical evidence. The proposed structured zeroth-order estimator is the first one for non-smooth optimization. The use of Goldstein stationarity is pretty novel and an interesting direction to further explore. Weaknesses: (Please respond to the Questions section directly) The design of the zeroth-order estimator may cost more time, due to the fact that it requires sampling orthogonal matrices; the problem studied doesn't cover stochastic situations; the numerical experiments are not adequate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major: 1. Could the authors talk about possible applications of the proposed method? For example, one could argue that zeroth-order smooth optimization can be applied to black-box attacks on neural nets. But I'm not aware of applications of the non-smooth situation. 2. The design of the zeroth-order estimator may cost more time, due to the fact that it requires sampling orthogonal matrices (either using QR or other matrix operations). More specifically, I'm wondering how the proposed method behaves compared to simply sampling $\ell$ vectors (with/without replacement) from the canonical orthogonal basis. 3. Regarding the numerical experiments: the choice of $\ell$ seems to be arbitrary. One could imagine that with a larger $\ell$, the per-iteration convergence would be faster, but it would cost more time to construct the estimator. Can the authors give some insight on how to choose $\ell$ in an efficient manner? 
A related question is whether the convergence and numerical behavior would be better if we varied $\ell$ in each iteration. 4. For the numerical experiments: I understand that due to the page limit the authors moved a lot of details into the appendix, but it should be necessary to at least include the problem on which the experiments are conducted. Also, since all the plots in the numerical experiments use "function evaluations" as the x-axis, I'm wondering what the plots would look like if we used CPU time, since the proposed method could consume more time to construct the orthogonal matrix. 5. This is a personal comment and the authors may consider this for future works. The proposed method and analysis are all for deterministic optimization, whereas modern machine learning is about stochastic optimization. It would be interesting to see the convergence behavior of the proposed method for stochastic optimization problems, where the smooth stochastic case has been studied [1]. For zeroth-order non-smooth stochastic optimization, Goldstein stationarity seems to be necessary. Minor: 1. Please add some references in the abstract, e.g. line 7 "Recently,... improve performance". References: [1] Balasubramanian, Krishnakumar, and Saeed Ghadimi. "Zeroth-order nonconvex stochastic optimization: Handling constraints, high dimensionality, and saddle points." Foundations of Computational Mathematics (2022): 1-42. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is well stated in the weakness and question sections. I'm not aware of any potential negative social impact of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her comments and suggestions, and we answer his/her questions here. ## Questions 1. An example of an application of our algorithm that we are working on is gain tuning in robotics. 2. Coordinate directions (i.e. random canonical bases) also provide good results. However, if the target function is "sparse" in the sense that the intrinsic dimensionality is smaller than the ambient dimension (e.g. only some dimensions are relevant), sampling random canonical bases may provide no improvement, while with random rotations we should be able to cover such settings. 3. In practice, the choice of $\ell$ depends on the function evaluation budget, i.e. the maximum number of function evaluations that we can spend to solve the optimization problem. Such a budget depends on the time cost of a single function evaluation and on the amount of time we want to spend on the problem. According to the theory, it is better to perform fewer iterations with large $\ell$ than many iterations with small $\ell$. Thus, a possible criterion is to choose $\ell$ as large as possible (according to time constraints and computational resources). We thank the reviewer for the interesting idea of varying the number of directions $\ell$ per iteration. To the best of our knowledge, there is no finite-difference method that analyzes such a setting, and it can be an interesting research direction. 4. We agree with the reviewer that it could be interesting to plot CPU time vs function values. We repeated the experiments and we will include such plots either in the Appendix or in the Numerical Experiments section. We included such plots in the global answer (i.e. the pdf file) - see Figure 2. 
In that case, the orthogonal matrix is generated as a single Householder reflector (see Appendix D); in this way, we can precompute the truncated identity $I \in \mathbb{R}^{d \times \ell}$, and at every iteration we just have to generate a vector $v \in \mathbb{S}^{d - 1}$ and compute the outer product. The parameters are chosen as indicated in the paper (see Appendix C). Note that, in order to make the comparison fair, the number of iterations performed by methods with multiple directions is smaller than the number of iterations performed by single-direction methods (more precisely, given a budget of $T = 1000$ function evaluations, the number of iterations performed is $T/(2\ell)$). According to these experiments, we can still observe an advantage in using orthogonal directions. Moreover, we want to underline that in the literature there are different (and faster) methods to generate orthogonal matrices (see references in Appendix D). We also want to underline that, in order to fully understand these properties, an exhaustive empirical analysis should be performed, which is out of the scope of this work. However, note that previous works confirm that structured methods provide better practical performance (see e.g. [1]). 5. We thank the reviewer for the suggestion and we confirm that a research direction we are considering consists in extending these results to the stochastic setting (considering different noise models). In this work, we wanted to introduce the first analysis of non-smooth structured zeroth-order methods, and in particular the Smoothing Lemma for structured directions, providing the basis for analyzing structured zeroth-order methods in the non-smooth setting. **Minor:** we will add some references. **Reference** 1. A. S. Berahas, L. Cao, K. Choromanski, and K. Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. 
Foundations of Computational Mathematics, 22(2):507–560, Apr 2022 --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgement Comment: Thank you for your detailed responses to my comments and questions. I believe that paper is useful contribution to the field and would like to keep my score.
Summary: Zeroth-order optimization is the sub-field of optimization concerned with solving $$ x^{\star} \in \operatorname*{argmin}_{x \in \mathbb{R}^d} f(x)$$ _without_ using gradient information. This paper studies the most general case, where $f$ is assumed neither smooth nor convex. They propose a structured approach to the standard finite-difference gradient approximation, which means that the sampling directions are taken to be orthogonal. The main contributions of this paper are theoretical; it provides a fine-grained analysis of the convergence rate of the proposed algorithm under a variety of assumptions. The theoretical claims are supported by two numerical experiments. Strengths: - The first 4 pages are easy to read, and position the problem under consideration (orthogonal sample directions for non-smooth optimization) nicely with respect to prior work. - This paper is definitely a technical advancement on prior work. - The mathematical technique displayed in this paper is impressive. As an example, I enjoyed the use of the Goldstein subdifferential. The correct notion of "approximate stationarity" in this setting is quite subtle but you handle it very well. - In Corollaries 1, 2, and 4 I appreciated that you provided convergence guarantees in both the constant step-size and decreasing step-size settings. Both are useful. Weaknesses: - Overall I found this paper to be a little too heavy on theory and a little too light on experiments. I would suggest relegating some of the parameter configurations of Corollaries 2, 4, and 5 to an appendix and adding an additional experiment (see next point). - I'd strongly suggest adding more experiments; in my opinion the two simple functions you tested do not provide enough data to draw any conclusions. It would be particularly interesting to see more non-smooth experiments. 
Minor points: - In line 235 "the ball centered in 0" should be "the ball centered at x" - In line 239 you write $\partial f_h(x_I)$ but I think you mean $\partial_h f(x_I)$. - Why is there a staircase pattern in the purple graph in Figure 1? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the related work section, you state that your method achieves the optimal dependence on dimension and then cite [1]. But this paper deals with stochastic zeroth-order optimization with a particular "two-point" query model, whereas in your paper you consider the non-stochastic/noise-free setting. I did look at [1] but could not find a clear statement of the optimal dependence on dimension in the noise-free setting (I believe it is $\mathcal{O}(d)$). Could you make this a bit clearer in your exposition? Also, what is the authoritative reference for the optimal $d$-dependence for noise-free DFO? [1] J. C. Duchi, M. I. Jordan, M. J. Wainwright, and A. Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her observations and comments, and we answer his/her questions here. ## Weakness The main goal of this article is to provide the first analysis of a structured zeroth-order algorithm in the non-smooth setting, providing the mathematical tools (i.e. the Smoothing Lemma) required to analyze its convergence, and hopefully future extensions (e.g. the stochastic setting). The purpose of the numerical experiments section is to empirically show the properties indicated by the theory. An exhaustive empirical comparison is out of the scope of this work, but we are considering it as a research direction. We performed other experiments that we will include in the paper (some of them are included in the global answer - see Figure 1). However, we want to underline that, in order to understand the practical behavior of the algorithm, an exhaustive empirical analysis should be performed taking into account also the other parameters (e.g. the stepsize choice or the discretization choice), and this is out of the scope of this work. **Minor:** in Figure 1 we analyze the impact of the number of directions $\ell$. In particular, we plotted the number of function evaluations on the x-axis. Now, since computing the gradient estimator requires $2\ell$ function evaluations, we repeated the target function values $2\ell$ times (this is explained in lines 323-324), which produces the staircase pattern. ## Questions The reference is the same; note that by replacing the stochastic oracle with a noise-free oracle, the proofs (of the propositions) follow the same lines. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks to the authors for their response! I have no further questions.
Rebuttal 1: Rebuttal: Several reviewers have requested more experiments, including non-smooth functions and a comparison in terms of CPU time. We include some of these in the attached pdf. We ran these experiments $20$ times and provide the mean and standard deviation of the results. Pdf: /pdf/e5f8f1babccc42743ab8744a965bd1746d537796.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces and analyzes a structured finite-difference algorithm for non-smooth optimization problems. The algorithm is built on a smooth approximation of the non-smooth loss function and a structured finite-difference approximation. The convergence of the proposed algorithm in the non-smooth convex, non-smooth non-convex, smooth convex and smooth non-convex cases is studied. Strengths: 1. A simple algorithm using a structured finite-difference approximation of the gradient is proposed. 2. The convergence behavior of the proposed algorithm in four cases is studied. Weaknesses: I think the relation between the convergence of the proposed method and the number of directions chosen is not clearly analyzed. The error in Theorem 1 is only linear in $\ell$, while the numerical experiments suggest $\ell$ has an impact on the convergence rate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Theorem 1 and Corollary 1, the error is linear in $\ell$. The convergence rate seems to depend only on $\theta$. Does $\ell$ affect the convergence rate? 2. From the theories in this paper, setting $\theta$ towards 1/2 provides a better convergence rate. But what is the harm in doing that? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for his/her useful comments. We answer his/her questions below. ## Questions **Dependence on $\ell$ in the rate:** Yes, the number of directions $\ell$ affects the convergence rate, but only through the constants. Indeed, Theorem 1 states that \begin{equation} \mathbb{E}[f(\bar{x}_k) - \min f] \leq S_k / A_k = \frac{1}{\sum_{i=0}^k \alpha_i} \Big( \frac{\| x_0 - x^* \|^2}{2} + c\frac{d L_0^2}{\ell} \sum_{i=0}^k \alpha_i^2 + L_0 \sum_{i=0}^k \alpha_i h_i \Big). \end{equation} Note that the impact of the dimension on the rate depends on the choice of the stepsize $\alpha_k$ and the discretization parameter $h_k$. Since the second term on the right-hand side of the inequality depends on $\frac{d}{\ell}$, in order to reduce the impact of the dimension on the rate and obtain the optimal complexity, we need to include a factor $\sqrt{\frac{\ell}{d}}$ in the stepsize $\alpha_k$. The same observation holds for Corollary 1 (specifically in points $(i)$ and $(ii)$). We will include these observations in the discussion paragraph below Corollary 1. **Choice of $\theta$ in the stepsize:** As for classic methods like subgradient descent and stochastic gradient descent, choosing $\theta$ towards $1/2$ is the best choice we can provide. Indeed, note that the second term on the right-hand side of the inequality in Theorem 1 (see the inequality above) is an error term which does not depend on the discretization sequence $h_k$, and thus it is bounded if and only if $\alpha_k^2 \in \ell^1$. Thus, ideally, we would choose a sequence that decreases as fast as possible. However, if $\alpha_k$ goes to $0$ too fast, we might have no convergence; for this reason we need to take $\alpha_k \not\in \ell^{1}$ and $\alpha_k^2 \in \ell^{1}$. The result is not surprising and is in line with the classical results obtained by studying algorithms with errors or based on subgradients.
## Limitations Limitations of the algorithm are discussed in Appendix E. We will add a reference in the main paper.
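The interplay of $\ell$, $\theta$, and $h_k$ discussed in the rebuttal above can be illustrated with a minimal zeroth-order sketch. This is an illustrative assumption of the setup (random orthonormal directions, forward differences, hypothetical function names), not the paper's exact construction:

```python
import numpy as np

def fd_gradient_estimate(f, x, P, h):
    """Finite-difference gradient surrogate along the ell rows of P:
    (d / ell) * sum_i [(f(x + h p_i) - f(x)) / h] * p_i."""
    d, ell = x.size, P.shape[0]
    fx = f(x)
    g = np.zeros_like(x)
    for p in P:
        g += (f(x + h * p) - fx) / h * p
    return (d / ell) * g

def zeroth_order_sgd(f, x0, steps=300, ell=2, theta=0.5, alpha0=0.1, h0=1e-3, seed=0):
    """SGD with stepsize alpha_k = alpha0 / (k+1)^theta and a shrinking
    discretization h_k, mirroring the roles of theta and h_k in Theorem 1."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    for k in range(steps):
        # ell random orthonormal directions via QR of a Gaussian matrix
        Q, _ = np.linalg.qr(rng.standard_normal((d, ell)))
        alpha = alpha0 / (k + 1) ** theta
        h = h0 / (k + 1)
        x = x - alpha * fd_gradient_estimate(f, x, Q.T, h)
    return x

# Smooth convex toy problem: f(x) = ||x||^2, minimized at the origin.
x_final = zeroth_order_sgd(lambda v: v @ v, np.ones(4))
```

Larger $\ell$ reduces the variance of the estimate (the $d/\ell$ error constant) without changing the $k^{-\theta}$ schedule, matching the rebuttal's point that $\ell$ enters the rate only through the constants.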
GmGM: a fast Gaussian graphical model for multi-modal data
Reject
Summary: This paper introduces the Gaussian multi-Graphical Model (GmGM) as a novel method to construct sparse graph representations of matrix- and tensor-variate data. It simultaneously learns the representation across several tensors that share axes. The authors demonstrate that GmGM outperforms previous methods in terms of speed when applied to matrix data. Strengths: 1. GmGM extends the application of Gaussian Graphical Models to multi-tensor datasets, presenting a novel approach in the field. 2. GmGM exhibits significantly improved speed compared to previous methods when dealing with matrix data. 3. The results of GmGM on five real datasets are well explained. 4. In particular, I appreciate the comprehensive discussion provided in this paper. The authors present cases where the results are excellent, as well as cases where the results are not as impressive, such as the performance on higher-order tensor data (Fig. 4b) and the E-MTAB-2805 dataset (Fig. 6a). This in-depth analysis helps readers gain a better understanding of the method and be aware of the situations in which it should be employed. Weaknesses: One major concern I have relates to the evaluation. Although the authors present many intriguing findings on the datasets, it would be beneficial to include some more quantitative analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors show the results on the COIL-20 dataset and provide quantitative comparisons with baselines in terms of both efficiency and accuracy? 2. The two multi-omics datasets, LifeLines-DEEP and 10x, are analyzed from different perspectives. It would be advantageous if the authors could also present UMAP consistency analysis results for the LifeLines-DEEP dataset. Additionally, conducting quantitative comparisons with baselines on the 10x dataset would also be informative for readers. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. It is well discussed in the study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! - “Could the authors show the results on the COIL-20 dataset and provide quantitative comparisons with baselines in terms of both efficiency and accuracy?” We are very happy to provide quantitative results for all experiments, which we have run on the same computer as in our paper (Ubuntu 20.04 with an Intel Core i7 processor and 8GB RAM).

COIL-20 Duck performance in terms of row/col/frame recovery accuracies and runtime:
- GmGM: 80%/91%/99% in 0.14 seconds
- TeraLasso: 80%/91%/99% in 1.99 seconds

We had not measured runtime for TeraLasso on this dataset before now. On synthetic data, as reported in the paper, our model improved efficiency compared to TeraLasso in the 3-axis tensor case, but not by this much. This was a pleasant surprise! We suspect this is because real data tends to require more iterations to converge, in which case the computation of Gram matrices no longer dominates the runtime for 3-axis data. Rather, the speed of each iteration dominates, and our algorithm has much faster iterations than TeraLasso because it avoids an eigendecomposition each time. If you had not suggested this comparison, we would not have noticed this – thank you! - “[…] conducting quantitative comparisons with baselines on the 10x dataset would also be informative for readers […]” Quantitative analysis on the 10x dataset is harder, as there is no ground truth (cell labels); this prevents us from performing quantitative analysis such as measuring assortativity (as in the LifeLines-DEEP experiment), and we cannot report performance in terms of recovery accuracy. However, we can compare our algorithm's runtime with that of TeraLasso, as below. We did not show the EiGLasso runtime as it was too slow.
- GmGM: 94.39 seconds
- TeraLasso: 3752.57 seconds

For comparison, we created a UMAP consistency plot for both GmGM and TeraLasso, in the global response (Figure R1). 
- “It would be advantageous if the authors could also present UMAP consistency analysis results for the LifeLines-DEEP dataset.” For this dataset, we kept the top 1200 largest-weighted edges of the estimated graph in accordance with the paper we used as a baseline ("A zero inflated log-normal model for inference of sparse microbial association networks" by Prost et al.), whose model is called 'ZiLN'. Note that ZiLN can only learn the species graph; it makes an independence assumption for the genes. Our model and TeraLasso make no independence assumption and simultaneously learn multiple graphs. We have given two examples of our algorithm, one in which we only consider metagenomics (as in ZiLN and TeraLasso), and one in which we consider multiple modalities (metagenomics and metabolomics) simultaneously. ZiLN and TeraLasso are not able to consider multiple modalities; in that case, only GmGM could be run.
- ZiLN: 3.2 seconds (learns only the species graph)
- GmGM: 2.59 seconds (learns the species and people graphs)
- GmGM: 22.18 seconds (learns the species, people, and metabolomics graphs)
- TeraLasso: 1299.33 seconds (learns the species and people graphs)

From a speed perspective, we greatly outperform prior multi-axis work (TeraLasso) and compare favorably to single-axis work, especially on a per-graph basis:
- ZiLN (species): 3.2 seconds per graph
- GmGM (species, people): 1.30 seconds per graph
- GmGM (species, people, metabolomics): 7.39 seconds per graph
- TeraLasso (species, people): 649.67 seconds per graph

It is quite encouraging that we have managed to outperform a single-axis method in one scenario, given that single-axis methods can take advantage of stronger assumptions to make simplifications. The UMAP consistency plot is given in the global response document (Figure R2); we can see that the clusters we find also happen to correspond to distinct regions in UMAP-space. 
--- Rebuttal Comment 1.1: Comment: Thanks for these responses, which effectively address my concerns regarding the quantitative evaluation.
Summary: The authors propose the Gaussian multi-Graphical Model (GmGM), a novel approach to constructing sparse graph representations of matrix- and tensor-variate data. It stands out from previous models by simultaneously learning representations across multiple tensors that share axes, a feature crucial for analyzing multimodal datasets, particularly in multi-omics scenarios. The GmGM algorithm utilizes a single eigendecomposition per axis, which results in a significant speedup over previous models. This efficiency enables the application of the methodology to large multi-modal datasets, such as single-cell multi-omics data, a task that was challenging for previous approaches. Strengths: 1. Fair and Interesting Motivation: The paper's motivation, modeling multi-tensor data with shared axes, is rooted in the real-world need to handle multi-omics scenarios, which often involve multi-tensor data with shared axes. GmGM is introduced as a solution, addressing a significant gap in existing data analysis methodologies. 2. Reasonable Solution and Impressive Improvements in Efficiency: The GmGM model stands out for its impressive efficiency improvements, achieved by using the Kronecker-sum (KS) decomposition of the precision matrix and reducing the problem to an eigendecomposition over each axis. This approach results in a substantial speedup over previous models, enabling the handling of large multi-modal datasets. This efficiency, coupled with the model's ability to maintain state-of-the-art performance, underscores the strength of the paper. Weaknesses: 1. **Limited Technical Contribution** While the problem setting proposed in the paper is reasonable, the algorithm's strict assumptions about data integrity (no missing data) and quality (no noise) somewhat limit its potential for broader application. 
The authors are encouraged to consider relaxing these assumptions or proposing strategies to handle missing data and noise, which are common issues in real-world datasets. Addressing these issues could significantly enhance the model's practical utility and broaden its applicability. 2. **Improvements Needed in Representation and Flow** The paper could benefit from substantial improvements in its representation and flow. The omission of important concepts and content significantly hinders reader comprehension. Some sentences appear casual and can lead to confusion. The overall logical flow of the paper is not clear, making it difficult to follow. This is particularly evident in the following areas: - Concepts such as the Kronecker product and Gram matrix are not clearly introduced. - Many notations and their subscripts and superscripts in the algorithm table are not clearly defined. - The task setting and metric definition in the experimental section are vague, reducing the persuasiveness of the validation part. Overall, the authors are encouraged to make a concerted effort to reorganize and polish the paper's presentation, improve the flow, and highlight the key points of the work and problem. This could significantly enhance the readability and impact of the paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness parts Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weakness parts Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! - “[…] strict assumptions about data integrity (no missing data) […] limit […] broader application […]” This is a very fair point. To address the problem of missing data, it is helpful to split it into two cases: 1) Elements of one or more of the input tensors are missing. As a strategy to handle this kind of missing data, we would propose complete-case analysis, or imputation. Our aim in this paper is to take an already existing class of algorithms and make it practically usable - before, none of these algorithms could be used on anything but the smallest datasets. This is why we did not spend too much time addressing concerns about missing data, which were also not addressed by the aforementioned models. Embedding the imputation of missing data into our approach would be a very powerful addition to our software implementation. 2) Two tensors share an axis, but do not contain exactly the same elements. This type of missing data has been addressed and discussed in our paper (see Section 4, “Limitations”). - “[…] strict assumptions about […] quality (no noise) […] limit […] broader application […]” Finally, you mention the case in which additional noise is added. This is an interesting case, which we had not considered before. We did not perform robustness-against-noise tests on simulated data, but our performance on real data suggests that our methodology is effective even when not explicitly modelling the noise. We will add an experiment to the paper in which we explore the addition of different levels of noise and its effect on graph recovery. This has not been considered in the literature we are exploring (multi-axis models). In the single-axis case, the paper "A Nonconvex Variational Approach for Robust Graphical Lasso" by Benfenati et al. proposes handling this by including an additional regularization term and a small modification to the loss function. 
At first glance, their approach seems amenable to the treatment given in this paper (specifically, an analog of Theorem 1 may hold), but this would be left to future work. - “Concepts such as the Kronecker product and Gram matrix are not clearly introduced […] Many notations and their subscripts and superscripts in the algorithm table are not clearly defined.” We were too trigger-happy with offloading information into the supplementary material, and our paper's flow and cohesion have suffered because of it. In the final version, we will address this. (Concretely, we will include a small section at the start introducing more of the notation, concepts, and technical terms - with examples, when sensible.) - “The task setting and metric definition in the experimental section are vague” The task settings were mainly linked to a task’s use in prior work for the sake of comparison, and to the availability of ground truth for performance evaluation. Not all datasets were amenable to quantitative analysis, as ground truth graphs are often unknown. In the paper, we will clarify our rationale behind each task we performed, and the metric chosen for that task, as explained in the following: COIL-20 Duck Video: A limited version of this analysis, done on a heavily down-sampled and flattened version of the video, was performed in the original BiGLasso paper as a proof-of-concept, without quantitative analysis. We chose the metric of row/column/frame recovery percent as it seemed the most natural way to measure our algorithm’s capability to reconstruct the video. LifeLines-DEEP: The ZiLN paper performed the same analysis with the same metric, which we have repeated in the paper. Mouse Embryo Stem Cells: The scBiGLasso paper also considered this dataset. Validation in this case is more complicated, as the underlying biology does not have an obvious interpretation. 
Thus, we opted to explore how well our algorithm could separate the three cell stages, rather than repeating the analysis in the scBiGLasso paper. Heartbeat Videos: This was not considered in prior work. We chose it in the hope that it would prove to be a similar but more complex version of the COIL-20 analysis, due to the periodic nature of a heartbeat. We show that our algorithm can capture this periodic nature quantitatively through the prediction of future heartbeats. 10x Genomics: This dataset is the largest that we have considered – prior work was not able to run on such a large dataset in a reasonable amount of time, as shown below.
- GmGM: 94.39 seconds
- TeraLasso: 3752.57 seconds

As there was no ground truth available, we could not perform quantitative analysis on the 10x dataset, and thus we compared the similarity of clusters found on our graph to structure found using a well-known nonlinear transformation technique, UMAP. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' hard work on the response. It somewhat addresses my concerns, but I still think the work could be further polished, so I will not change my score for now. --- Reply to Comment 1.1.1: Comment: No worries! Which areas of the paper would you recommend need the most polishing?
Summary: This paper proposes the Gaussian multi-Graphical Model, a novel method extending the use of Gaussian Graphical Models to multi-tensor datasets. It generalizes Gaussian graphical models to the common scenario of multi-tensor datasets. For the single-tensor case, the proposed algorithm is faster than prior work while still preserving state-of-the-art performance. Strengths: The paper considers an interesting and still challenging topic, extending conventional Gaussian graphical models (GGMs) to complex systems like multi-modal data models. The paper is generally well written and the problem is clearly defined. Indeed, the theoretical parts that extend the GGM to multi-tensor datasets are of proper quality. The algorithm is significantly faster on lower-order tensor data (reported for the synthetic datasets) and its efficacy is slightly better on the real-world datasets. Weaknesses: - Some parts of the paper should be checked again. For instance, line 52 starts to explain the computational costs of the state-of-the-art methods, but the parameters n and p have not been defined before. It seems the paper uses the parameters defined in the main reference paper (Kalaitzis et al., 2013), where n and p are the numbers of observations and features, respectively. Indeed, the computational costs of the other baselines need clarification. For instance, O(n^2 * p^2) in BiGLasso represents the number of non-zeros in the Kronecker-sum (KS) structure. It would be better if the authors considered the full cost of the algorithm for both the proposed method and the available baselines. - The paper models each tensor as being drawn independently from a Kronecker-sum normal distribution. It makes sense that this assumption reduces the computational cost, at least on small-order datasets. However, the paper does not describe how this strong assumption still preserves state-of-the-art performance. 
- As has been reported in the paper, the proposed solution cannot improve the complexity on higher-order tensor datasets (Fig. 4b). Indeed, it does not significantly outperform the other baselines (Fig. 5a). By decreasing the sparsity, the performance of the model suffers, and it seems to work properly only on highly sparse graphs (Fig. 7). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - See the Weaknesses part. - A question about the scope of the proposed model: it has been designed for multi-modal datasets. Another problem with a similar structure is distributed learning, where the entire dataset is divided into several partitions and each partition provides local inference. The partitions are clusters of multi-dimensional datasets, and the features are the same for all clusters. Can the proposed method be used for estimating the conditional dependencies between the features, and also the dependencies between local partitions? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitation of the work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! - “[…] line 52 […] The parameters n and p have not been defined before. […]” The reviewer is correct; we used the same notation as in Kalaitzis et al; in the final version, we will clarify this. We will use the following notation instead: Let $d_i$ be the size of the ith axis (so a 50 by 60 matrix would have $d_1 = 50, d_2 = 60$). - “$O(n^2p^2)$ in BIGLasso represents the number of non-zeros in the Kronecker-sum (KS) structure” In general, the number of non-zeros when using a Kronecker Sum structure is $O(d_1d_2(d_1 + d_2))$. $O(d_1^2d_2^2)$ refers to the number of non-zeros if we were to use a Kronecker Product structure. - “the computational costs […] of the algorithm for the proposed method and available baselines […]” For space complexity, all algorithms (ours and prior work) except BiGLasso achieve an optimal $O(\sum_i d_i^2)$. BiGLasso does not report space complexity, but by looking at the implementation we know it must be at least $O(d_1^2d_2 + d_2^2d_1)$. In the following, we give the computational complexities for each algorithm we compared against. To keep the notation simple, we assume all axes are the same size ($d = d_1 = d_2 = ... = d_K$), where $K$ is the number of axes. Note that BiGLasso and EiGLasso only work on matrix data ($K=2$). • BiGLasso: $O(Kd^4)$ per iteration (specifically, $O(Kd)$ Lasso regressions per iteration). • EiGLasso: Does not explicitly state in the paper, but can be seen to be $O(Kd^3)$ per iteration due to the use of eigendecompositions. • TeraLasso: Does not explicitly state in the paper, but can be seen to be $O(Kd^3 + d^K)$ per iteration due to the use of eigendecompositions and projecting data onto the space of Kronecker-sum-decomposable matrices. Computation of Gram matrices at the start is $O(Kd^{K+1})$. • GmGM (our work): $O(Kd^{K+1})$ overall due to computing the Gram matrices and eigendecompositions at the start. 
Per iteration, however, it is $O(d^K)$ due to projecting data onto the space of Kronecker-sum-decomposable matrices. - “[…] it does not describe how this strong assumption still preserves state-of-the-art performance […]” For unimodal data, we are making the same assumption (Kronecker sum) as prior multi-axis work (BiGLasso, scBiGLasso, EiGLasso, TeraLasso), and a weaker assumption than prior single-axis work (Graphical Lasso, which assumes full independence). We are the first to consider multiple modalities for multi-axis data, so there is no prior work to compare against in this case. - “[…] the proposed solution can not improve the complexity of higher-order tensor data sets […]” The most common type of tensor data is 2-axis (matrix) data, on which we achieve a very substantial speedup (i.e. an order of magnitude) compared to prior work. While there is not as dramatic a speedup for 3-axis data, we are still faster than prior work. The main barrier is the Gram matrix computation, which is effectively a preprocessing step that all these algorithms have to do. Furthermore, despite the similar performance on synthetic data, we did find that our algorithm was much faster than prior work on real-world 3-axis data as well, as reported below.

COIL-20 Duck performance (row/col/frame accuracies):
- GmGM: 80%/91%/99% in 0.14 seconds
- TeraLasso: 80%/91%/99% in 1.99 seconds

We suspect this is because real data requires more iterations to converge, and hence the Gram matrix computation no longer dominates as it did in our synthetic data experiments. - “the model […] works properly only on high sparse graphs (Fig 7)” Yes, this is true; sparsity is very important. A sparse precision matrix defines a Gaussian Markov random field, which is conventionally represented by a weighted, undirected graph (“Graphical models”, Lauritzen, 1996) – our algorithm, and prior work, fits in this framework. 
The assumptions we and prior work make (Kronecker sum distribution) and the type of graph we learn (conditional dependencies) both encourage sparsity, so we will perform better when this assumption is met. This is true for all models considered in the paper. - Questions (distributed learning) This is a very interesting question! We have not tested the model in this situation, but from what you describe it seems that the model fits the scenario well. The iterative component of the algorithm does require the eigenvalues of the Gram matrices from each partition, but the rest of the algorithm can be performed locally wherever the partition is stored. Thus it should still be applicable in situations in which data is being kept in separate centers for privacy/data protection reasons. The downside is that this will only find connections within each local partition, not between elements in different partitions. Depending on the scenario, this may or may not be acceptable. The model would still do a good job of finding connections between features. (The output would be one graph of connections between features, learned globally, and then for each partition one graph of connections between samples in that partition.) As an example, suppose a hospital in Belgium and a hospital in Sri Lanka both collect the same healthcare information on patients. It is unlikely for there to be connections between a patient in Belgium and a patient in Sri Lanka, so this model would be reasonable. However, if instead the data was all collected in one town in Belgium, being partitioned after the fact, then the assumptions the model makes could be too strong, as there could plausibly be conditional dependencies between patients in different partitions. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response and for addressing the concerns (one hint: graphical Lasso generally assumes a joint Gaussian distribution between features and not full independence. 
It estimates the inverse of the full covariance matrix). I keep my initial score because, despite the shortcomings in some parts, the paper still has proper potential. --- Reply to Comment 1.1.1: Comment: Thank you for the kind words. We did wish to clear up one potential misunderstanding: Graphical Lasso does indeed assume a joint Gaussian for the features and estimates the precision matrix, as you say - but it also assumes independence for the samples. (To fact-check this claim, one can refer to Section 2.1 of "Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data" by Banerjee et al. The original GLasso paper, "Sparse inverse covariance estimation with the graphical lasso" by Friedman et al., follows Banerjee et al. in their choice of distribution.) Unlike GLasso, multi-axis methods do not make this independence assumption for the samples - instead we replace it with the weaker assumption that the features and samples both have dependencies, which interact through the Kronecker sum.
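The Kronecker-sum facts used throughout this thread can be checked numerically. The sketch below (illustrative, not the authors' code) verifies that $\Psi_1 \oplus \Psi_2$ has at most $d_1 d_2 (d_1 + d_2)$ nonzeros, far fewer than the $d_1^2 d_2^2$ of a Kronecker product of dense factors, and that its eigenvalues are pairwise sums of the per-axis eigenvalues, which is why a single eigendecomposition per axis suffices:

```python
import numpy as np

def kronecker_sum(A, B):
    """Psi_1 (+) Psi_2 = Psi_1 kron I + I kron Psi_2."""
    d1, d2 = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(d2)) + np.kron(np.eye(d1), B)

rng = np.random.default_rng(0)
d1, d2 = 5, 7
A = rng.standard_normal((d1, d1))
A = (A + A.T) / 2  # symmetric per-axis factor
B = rng.standard_normal((d2, d2))
B = (B + B.T) / 2
K = kronecker_sum(A, B)

# Sparsity: off-diagonal blocks are A[i, j] * I, so the number of
# nonzeros is bounded by d1*d2*(d1 + d2).
nnz = np.count_nonzero(K)

# Spectrum: eig(A (+) B) = {lambda_i + mu_j}, recoverable from the two
# small per-axis eigendecompositions.
per_axis = np.add.outer(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)).ravel()
```

This per-axis spectral structure is exactly what lets GmGM-style methods avoid forming or factorizing the full $d_1 d_2 \times d_1 d_2$ precision matrix.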
Rebuttal 1: Rebuttal: This is the global rebuttal; individual reviewer rebuttals have been submitted separately in accordance with the directions given to the authors. The global rebuttal consists of an attached pdf containing only figures and captions. Pdf: /pdf/79bf094a2d826a3c4bfca0b78a06c08b826cefa6.pdf
Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent
Accept (oral)
Summary: I thank the authors for supplying the additional experiments and making the comparisons. It appears the technique is a novel and competitive method that can perform very well even for difficult datasets in the large-scale regime. Below is the initial review, unchanged. ------------------------------------------------------------------- The authors propose to sample from the posterior of a GP via optimisation using SGD. The authors rephrase sampling as a quadratic optimisation problem that allows an efficient approximation of the gradient using random kernel features. Then, SGD is used to find the solution. Theoretical derivations show that the proposed algorithm gives good variance estimates in densely sampled areas as well as areas far outside the sampled regions. Further, the algorithm performs better than CG and SVGP in certain settings. Strengths: The article provides a relatively complete package with an understandable derivation, good theoretical results and experimental evaluations. The observation of the good performance of SGD in densely sampled regions is very interesting. Weaknesses: The article omits subset of data (SoD) methods completely, both in the related work and the experimental section. [1] introduces them as a category, [2] establishes them empirically as a competitive method in sampling, and [3] investigates the size of the subsampling dataset. [1] A Unifying View of Sparse Approximate Gaussian Process Regression, Quiñonero-Candela and Rasmussen (2005) [2] A Framework for Evaluating Approximation Methods for Gaussian Process Regression, Chalupka and Murray and Williams (2013) [3] Adaptive Cholesky Gaussian Processes, Bartels et al. (2023) Subsampling methods perform similarly to the proposed method, as they work well in densely sampled regions and also in regions far away from the data distribution, with the only region of large error being the sparsely sampled tails. This makes comparison mandatory. 
The comparison with SVGP is unfair due to the small number of inducing points (1024) and a bad optimisation algorithm for them (ADAM). It seems the authors did not tune the competing methods well, making the resulting baselines weak. Theoretically, the authors omit the number of SGD steps, which might become very large, from their complexity evaluations. Further, the authors omit the standard work on GPs, while referring to the standard notation introduced by it: [4] Gaussian Processes for Machine Learning, Carl Edward Rasmussen and Christopher K. I. Williams (2006) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Question: Can you provide plots including a parameter study of SVGP with a stronger optimizer? Suggestion: I would suggest introducing SoD methods in the related work, but also comparing them to the proposed method in the experimental section. I would suggest comparing them empirically to the proposed method by giving them a dataset size whose computational budget is the same as the proposed method's, to allow a fair comparison on wall-clock time. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
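A subset-of-data baseline of the kind described in [1]-[3] above amounts to fitting an exact GP on a random subsample and predicting with it. A minimal sketch follows; the kernel, subset fraction, and synthetic dataset are illustrative assumptions, not the setup used in any of the cited papers:

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    """Squared-exponential kernel matrix."""
    sqdist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sqdist / lengthscale**2)

def sod_gp_mean(X, y, X_test, frac=0.2, noise_var=0.1, seed=0):
    """Exact GP posterior mean trained on a random subset of the data."""
    rng = np.random.default_rng(seed)
    m = max(1, int(frac * len(X)))
    idx = rng.choice(len(X), size=m, replace=False)
    Xs, ys = X[idx], y[idx]
    K = rbf(Xs, Xs) + noise_var * np.eye(m)
    alpha = np.linalg.solve(K, ys)  # O(m^3) instead of O(n^3)
    return rbf(X_test, Xs) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(500)
X_test = np.linspace(-3, 3, 50)[:, None]
pred = sod_gp_mean(X, y, X_test)
rmse = np.sqrt(np.mean((pred - np.sin(X_test[:, 0])) ** 2))
```

When observations are largely redundant, as in this densely sampled toy problem, a modest subset already recovers the underlying function well, which is the behavior the reviewer argues makes SoD a mandatory comparison.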
Rebuttal 1: Rebuttal: Thank you very much for your review! Below we address the key points: **Comparison with Subset of Data Methods** Thank you for bringing SoD methods to our attention. *We ran SoD on our regression datasets.* We randomly select subsets from the training data and build exact GP models with these points. We use the complete datasets for data normalization. We provide mean results and std. err. across dataset splits and subset seeds. We consider 5% and 10% subsets for the small and medium datasets. For the larger datasets, we use 25k and 50k (the largest possible on an 80GB A100 GPU) subsets. These represent roughly 5% and 10% of 3droad, song and buzz, and 1.25% and 2.5% of houseelectric. Due to the character limit, we do not reproduce the numbers from our paper here, but we bold the best results across this table and the paper. **SoD Small & Medium Datasets** | | SoD | pol | elevators | bike | protein | keggdir | |:----:|:----:|:----------------:|:---------------:|:---------------:|:----------------:|:---------------:| | | | N = 15000 | N = 16599 | N = 17379 | N = 45730 | N = 48827 | | RMSE | 5% | 0.20 ± 0.01 | 0.44 ± 0.00 | 0.18 ± 0.01 | 0.71 ± 0.01 | 0.11 ± 0.00 | | | 10% | 0.16 ± 0.00 | 0.41 ± 0.01 | 0.13 ± 0.00 | 0.66 ± 0.00 | 0.10 ± 0.00 | | NLL | 5% | -0.43 ± 0.02 | 0.57 ± 0.01 | -0.61 ± 0.09 | 0.99 ± 0.01 | -0.76 ± 0.10 | | | 10% | -0.66 ± 0.02 | 0.51 ± 0.01 | -1.28 ± 0.07 | 0.91 ± 0.01 | -0.82 ± 0.09 | **SoD Large Datasets** | | SoD | 3droad | song | buzz | houseelec | |:----:|:----:|:----------------:|:---------------:|:---------------:|:----------------:| | | | N = 434874 | N = 515345 | N = 583250 | N = 2049280 | | RMSE | 25k | 0.23 ± 0.00 | 0.82 ± 0.00 | 0.33 ± 0.00 | 0.06 ± 0.01 | | | 50k | 0.16 ± 0.00 | 0.81 ± 0.00 | **0.32 ± 0.00** | **0.06 ± 0.00** | | NLL | 25k | -0.42 ± 0.01 | 1.22 ± 0.00 | 0.30 ± 0.06 | -0.53 ± 0.33 | | | 50k | **-0.78 ± 0.01** | **1.20 ± 0.00** | **0.26 ± 0.06** | -0.49 ± 0.39 | In terms of mean prediction, SoD does not perform
best on any small or medium dataset. We also tried higher percentages: the results are similar. On the large datasets, the 50k subset performs best on buzz, where SVGP previously was the best method, and on NLL for 3droad and buzz. Note that an exact GP on 50k points requires 40 GB of GPU memory, making it inaccessible to most practitioners. In terms of error bar geometry, SoD performance is heavily dataset dependent: it performs well when the observations are largely redundant. **We graphically illustrate this with Figure 1(r) in the rebuttal PDF attached to the summary post**. To conclude, we would like to emphasize that our contribution is presenting a **novel approach for scaling up Gaussian processes**, quite dissimilar from existing ones. This is valuable because it can unlock new avenues for GP research, not because our technique outperforms all baselines across all tasks, which it does not; *this is not one of our claims*, as Table 1 (in paper) clearly shows instances where SGD is outperformed by CG and SVGP. **SVGP hyperparameters and optimization algorithm (Adam)** * **Hyperparameters.** To facilitate comparisons with prior work, **our paper used the same SVGP and CG hyperparameters as Wang et al. (2019)** (a very well-cited NeurIPS 2019 paper), which studies conjugate-gradient-based Gaussian processes, and whose techniques are now used in GPyTorch. We have also trained SVGP with $M=4096$ inducing points on our 4 largest datasets (we will include the rest in the camera ready), which is rather expensive because 10k optimization steps are needed for convergence and each step's cost is cubic in the number of inducing points, $\mathcal{O}(M^3)$.
| SVGP 4096 | 3droad | song | buzz | houseelec | |:----:|:---------------:|:---------------:|:---------------:|:----------------:| | RMSE | 0.49 ± 0.01 | 0.81 ± 0.00 | **0.33 ± 0.00** | 0.11 ± 0.01 | | NLL | 0.67 ± 0.02 | 1.22 ± 0.00 | **0.25 ± 0.04** | -0.90 ± 0.10 | Although increasing the number of inducing points improves SVGP's performance on all large datasets, SVGP 4096 only performs best on buzz, where the 1024-point version was already best. * **Adam.** Our understanding is that Adam is currently the best optimizer for SVGP. It is the default in GPyTorch, GPflow, and GPJax. We suspect the reviewer may be thinking of *full-batch* methods such as L-BFGS, which **cannot be used with stochastic approximation (i.e. minibatching)**. These optimizers are SoTA for full-batch inducing point approaches such as that of Titsias (2009), but are incompatible with SVGP (namely, *stochastic* variational GPs), which is minibatch based. **SGD steps in the complexity analysis** This is a good question! Our work's key finding is that *SGD does not need to be run to convergence to produce good empirical performance*. On this basis, it makes sense to view the number of SGD steps as a hyperparameter. This is reflected in our paper's language; **we make no claims about the complexity of SGD inference, only about the cost of a single optimization step** - e.g. in line 106: “(7) [our unbiased estimator] presents $\mathcal{O}(N)$ complexity, in contrast with the $\mathcal{O}(N^2)$ complexity of one CG step.” **Empirically, the number of steps needed to obtain a given performance level is roughly independent of the dataset size**. We use 100k SGD steps across all experiments, except the Bayesian optimization experiment, where we vary this parameter as part of the experiment. We find not only that this is sufficient to obtain reasonable results in all cases but also that the convergence plots (Figures 3, 5, 8) present a similar shape for numbers of observations ranging from 10k to 2M.
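To make the per-step cost comparison above concrete, here is a minimal numpy sketch of a minibatch gradient estimator for the representer weights. It assumes the quadratic objective $L(\alpha) = \frac{1}{2\sigma^2}\lVert y - K\alpha\rVert^2 + \frac{1}{2}\alpha^\top K\alpha$, whose minimizer is the usual $(K + \sigma^2 I)^{-1} y$; all names, the kernel, and the data are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel matrix for 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
N, sigma2 = 40, 0.1
x = rng.uniform(-2, 2, N)
y = np.sin(3 * x) + np.sqrt(sigma2) * rng.normal(size=N)
K = rbf(x, x)
# exact representer weights: the minimizer of the objective below
alpha_star = np.linalg.solve(K + sigma2 * np.eye(N), y)

def full_grad(alpha):
    # O(N^2) gradient of L(a) = ||y - K a||^2 / (2 s2) + a^T K a / 2
    return K @ ((K @ alpha - y) / sigma2 + alpha)

def stoch_grad(alpha, idx):
    # Unbiased estimate of the data-fit term from a minibatch of rows:
    # O(N * |idx|) instead of O(N^2). The K @ alpha regularizer term is
    # kept exact in this sketch; the rebuttal's approach replaces it with
    # a random-feature estimate so that each step stays cheap.
    rows = K[idx]
    fit = (N / len(idx)) * rows.T @ ((rows @ alpha - y[idx]) / sigma2)
    return fit + K @ alpha
```

Averaging `stoch_grad` over a partition of the data recovers `full_grad` exactly, which is the unbiasedness property SGD relies on.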
**We have added a citation to Rasmussen and Williams (2006).**
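The subset-of-data baseline described at the top of this rebuttal (an exact GP fit to a random subset of the training data) can be sketched in a few lines of numpy; the kernel, noise level, and function names here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel matrix for 1-D inputs (prior variance 1)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def sod_predict(x, y, xq, frac, sigma2=0.1, rng=None):
    # Exact GP posterior mean and variance computed on a random subset
    # containing a fraction `frac` of the training data.
    if rng is None:
        rng = np.random.default_rng(0)
    m = max(1, int(frac * len(x)))
    idx = rng.choice(len(x), size=m, replace=False)
    xs, ys = x[idx], y[idx]
    Kss = rbf(xs, xs) + sigma2 * np.eye(m)      # m x m: cubic only in m
    Kqs = rbf(xq, xs)
    mean = Kqs @ np.linalg.solve(Kss, ys)
    var = 1.0 - np.einsum("ij,ji->i", Kqs, np.linalg.solve(Kss, Kqs.T))
    return mean, var
```

With `frac=1.0` this reduces to the exact GP on the full data, which makes the sketch easy to sanity-check.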
Summary: The paper presents a novel approach for sampling from GP posteriors based on SGD, which bypasses the need to solve the typical linear system of equations that is prevalent in both the exact variant (cubic in the number of query points) and the pathwise approximation (cubic in the number of data points). The SGD approximation is provided both for exact GPs and for inducing point approximations. Moreover, the approximation quality is investigated for three regions of varying data density, and explanations for the (occasionally poor) approximation quality are given. Strengths: - __Novel, interesting idea:__ The use of SGD for atypical objectives is interesting, and the analysis of the approach is detailed from both a theoretical and empirical perspective. - __Clearly addressed limitations:__ The SGD approach is _not_ a silver bullet, and the authors make this clear by highlighting the approximation quality in data-dense regions (good), faraway regions (good), and interpolation regions (not as good). - __Informative, well-designed figures:__ The various figures are not only visually appealing, but informative as well. Figures 1 and 4 in particular highlight the strengths and shortcomings of the method nicely, and seamlessly add intuition as to why that is. - __Diverse Experiments:__ Experiments from both large-scale GP regression and BO are included, which demonstrates that the method is applicable and potent in both domains. - __Clear writing:__ The paper is consistently well-formulated, pedagogical and, as far as I could tell, correct. Moreover, I believe that there has been substantial effort to provide the reader with additional intuition for why the approach is effective. Weaknesses: I struggle to find weaknesses with the paper, but remain unconvinced of its potential impact due to the relatively small niche (sampling for GPs in the large data regime) that is addressed.
However, I am not overly confident in this assessment, and invite the authors to challenge my opinion on this topic. For example, do the authors see opportunities for impactful follow-up work which spans other areas of ML? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - __Comparison to Pathwise Sampling:__ Does the proposed method hold any advantages over pathwise sampling in a low-to-moderate data regime in terms of accuracy or complexity? At which point (in #data points) does SGD start becoming beneficial? - __Computing the predictive uncertainty:__ Perhaps a trivial question, but how is the predictive uncertainty computed when one can only access the posterior mean and samples from the posterior (and not the posterior variance) in Section 4.1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time in reading our work! We are thrilled that you found our writing and plots “clear” and “informative” and agreed that our paper “clearly addressed limitations”. We now discuss weaknesses and respond to your questions: **Significance:** > "I ... remain unconvinced on its potential impact due to the relatively small niche (sampling for GPs in large data regime) that is addressed." * **Our main motivating setting is large-scale Bayesian optimization**. Sampling from large-data GPs with fixed hyperparameters is a key component of large-scale Bayesian optimization, particularly in industrial settings. Bayesian optimization (whether under this name, or that of GP bandit algorithms) is a strong approach for the optimization of black-box systems. In particular, sampling from GPs is a core element of the Thompson sampling algorithm. Historically, due to the cost of fitting GPs, Bayesian optimization was limited to small (perhaps even toy) systems; however, work undertaken over the last 5-10 years on scaling inference in GPs, of which our method is a part, now allows for its use on an industrial scale (e.g., optimizing stock levels and recommendations at Amazon). The Thompson-sampling-based approach to Bayesian optimization is particularly well suited to parallelization and asynchronous processing, and thus combines well with such large-scale systems. We believe our approach to sampling is particularly well-suited for Thompson sampling and easy to use due to its robustness to ill-conditioning. Thus, it has the potential to be adopted by industry users at the usual big-name tech companies (which all provide online recommendations to users and tackle other similar bandit problems, well-suited for Bayesian optimization).
* More generally, there is growing interest in applying GPs to **spatiotemporal modeling** (Howes et al., PLOS Global Public Health 2023, "Spatio-temporal estimates of HIV risk group proportions for adolescent girls and young women across 13 priority countries in sub-Saharan Africa"), applications in the **physical and natural sciences** (Gómez-Bombarelli et al., ACS Central Science 2018, "Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules") and **climate modeling** (Thompson et al., Environmental Data Science 2022, "A dependent multimodel approach to climate prediction with Gaussian processes"). Here datasets tend to be large, and as shown by Foster et al. (2009, JMLR, "Stable and Efficient Gaussian Process Calculations") and Terenin et al. (2023, arXiv:2210.07893, "Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees"), **ill-conditioned systems appear in almost all moderate-to-large-scale GP problems**. Regularization, for instance through the choice of inducing points, helps, but at the cost of bias and performance. Our work suggests an orthogonal way to handle instability: design algorithms that tolerate ill-conditioning well. **Questions:** 1. **Comparison to Pathwise Sampling:** > "Does the proposed method hold any advantages over pathwise sampling in a low-to-moderate data regime in terms of accuracy or complexity? At which point (in #data points) does SGD start becoming beneficial?" * This is a great question! Since our approach is an approximation to efficient sampling (namely, pathwise conditioning with no approximations except for the prior term), we expect it to perform worse whenever solving the involved linear systems exactly is tractable, for instance in the low-data, well-conditioned regime. From our experiments (Table 1), we estimate that the transition where SGD may start to become better than CG occurs in the 50k-100k datapoint range.
Where in that range depends on kernel-matrix conditioning. For very poorly conditioned kernel matrices, SGD can perform better with only a couple of tens of thousands of points. 2. **Computing the predictive uncertainty:** > "How is the predictive uncertainty computed when one can only access the posterior mean and samples from the posterior (and not the posterior variance) in Section 4.1?" * For each test point, we estimate the scalar predictive variance from 64 zero-mean posterior samples $f_i(x)$ as $\frac{1}{64}\sum_{i=1}^{64} f_i(x)^2$. We do this for all methods under consideration. This is tucked away in the second paragraph of Section 4.1 - thank you for pointing this out; we will look into making this point easier to find. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thanks to the authors for addressing my questions. I am very much aware of the BO implications, but I value the references to other use-cases, which point to the potentially substantial impact of the work. This paper was a pleasure to read. Once again, I greatly appreciated the clearly addressed limitations, which should not go unnoted. I have increased my score to a 9.
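The Monte Carlo variance estimator described in the reply above, the mean of squares of zero-mean posterior samples at each test point, can be sketched in numpy; the draws below are synthetic, for illustration only.

```python
import numpy as np

def predictive_variance(zero_mean_samples):
    # zero_mean_samples: (n_samples, n_test) array of draws f_i(x) from the
    # zero-mean posterior; the variance estimate at each test point is the
    # mean of squares, 1/n * sum_i f_i(x)^2.
    s = np.asarray(zero_mean_samples)
    return (s ** 2).mean(axis=0)

# e.g. 64 draws at 3 test points from a known zero-mean distribution
rng = np.random.default_rng(0)
true_sd = np.array([0.5, 1.0, 2.0])
draws = rng.normal(size=(64, 3)) * true_sd
var_hat = predictive_variance(draws)  # noisy but unbiased estimate of true_sd**2
```

Because the samples have zero mean by construction, no mean subtraction is needed, which is why the estimator is simply the average of squares.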
Summary: This work proposes SGD GP, a method based on stochastic gradient descent for efficiently computing GP posterior samples given fixed hyperparameters. The method relies on the pathwise conditioning GP posterior formulation and the random Fourier features (RFF) approximation. The key idea is to express the GP posterior quantities as solutions to quadratic optimization problems whose objective is a sum over data points, so that SGD can be applied. The paper shows that SGD GP produces accurate predictions. SGD GP can converge slowly or converge to a sub-optimum; however, such non-convergence behaviors only occur in regions close to the data boundary. SGD GP performs comparably to SVGP and conjugate gradients (CG) in most settings, and can outperform them in large-scale systems or ill-conditioned problems. Strengths: - The paper is overall well-written and easy to follow. - The proposed method is novel and sound. It is shown to provide better predictive performance given the same inference time compared to SVGP and CG. What I find most compelling is that SGD GP seems to be a stronger alternative in large-scale or ill-conditioned systems. - I also find the spectral analysis of SGD convergence in three different regions to be very insightful. Weaknesses: - The major weakness is that the setting the paper considers is quite limited, namely posterior inference given fixed, learned hyperparameters. I would appreciate it if the authors could elaborate on the significance of this setting and why the proposed methodology is particularly important. For example, how often do ill-conditioned systems arise in practice, and how common is the case that hyperparameters are known in advance? It also seems like the method only applies to an isotropic Gaussian likelihood (see the question section below). - The fact that SGD converges slowly in the extrapolation region is a bit concerning.
Especially in applications like Bayesian optimization, this is the region of high interest for exploration. Also, in settings where there are distribution shifts, the under-calibrated uncertainty in this region can be an issue. In general, I found the "benign non-convergence" argument for the extrapolation region not convincing. I would appreciate it if the authors could elaborate on this issue, and whether there is any potential way to alleviate the non-convergence property. - The SGD convergence analysis is insightful. But I would appreciate a more formal mathematical characterization of the three regions. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Related to the first point above, can the method be adapted to broader settings where the hyperparameters need to be learned, or the likelihood is not Gaussian? If not, what is the limiting factor there? - Regarding the experimental evaluation in Sec 4.1, are predictive RMSE and NLL good metrics for evaluation? If I understand correctly, SGD and CG are based on the same set of learned hyperparameters (and SVGP is based on a separately learned variational model). The core goal of the comparison is whether SGD recovers a more faithful inference approximation than the CG baseline. So I thought the more valuable metric would be a distance against the "exact" inference result (e.g. CG with the maximum number of iterations and high numeric precision). So I am not sure how to interpret Table 1. - In Fig 3 and Fig 5, do you have a sense of why CG errors first go up and then go down? In Fig 5, last panel (houseelectric), would you expect the CG error to match SGD performance eventually if run for more time? In both figures, I think it would be helpful to provide an exact baseline (e.g. CG run to reach a tolerance like 1e-3). Minor comments. - A few notations and figures are not very clear. E.g. Fig 4.
top left panel, indicate the size of the error bands (blue shaded area) and the dotted black line; top right corner, indicate the black dots (observations). In Proposition 1, define $G$-sub-Gaussian. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitation discussion is missing. I don't think the paper would have negative societal impact. My main concern on technical limitations is weakness point 1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wanted to start by thanking you for your time in reading our work and providing very helpful comments! We are thrilled you found our work “well-written and easy to follow” and agreed that our results are interesting because they provide a “stronger alternative in large-scale or ill-conditioned systems”, which, as we will argue below, include most systems of sufficient scale. ---- Below we address the weaknesses and questions: 1. **Significance of setting** * *Significance*: You are correct that our proposed approach cannot be used to tune hyperparameters and requires a Gaussian (*but not necessarily isotropic*) likelihood. In our view, presenting a new, scalable way to perform posterior inference in GPs is a valuable contribution on its own, which can pave the path for future work on hyperparameter selection and non-conjugate inference. * *The fixed hyperparameter setting* occurs in large-scale Bayesian optimization, particularly in industrial settings. Here, hyperparameters are often selected using historical (offline) data, and no updates are made online thereafter. Combining online hyperparameter updates and closed-loop systems is very challenging in practice, and rigorous theory of these updates is scarce. * *Ill-conditioned systems* appear in almost all moderate-to-large-scale GP problems. See Foster et al. (2009, JMLR), "Stable and Efficient Gaussian Process Calculations", or Terenin et al. (2023, arXiv:2210.07893), "Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees". Regularization, for instance through the choice of inducing points, helps, but at the cost of bias and performance. Our work suggests an orthogonal way to handle instability: design algorithms that tolerate ill-conditioning well. 2. **Benign non-convergence** * > “That SGD converges slowly in extrapolation region is a bit concerning. Especially in applications like Bayesian optimization...high interest for exploration."
* This is a very good comment; it illustrates why we find this work so exciting: a priori, we expected the same thing. Our empirical results instead showed that SGD can achieve strong performance in spite of non-convergence. In particular, SGD produces error bars in the extrapolation region which are closer to the prior than the true GP's, and thus SGD overestimates uncertainty here. This may cause over-exploration, making convergence somewhat slower. We consider this "benign" compared to underestimating uncertainty, which may cause catastrophic failure in Bayesian optimization (convergence to a local optimum). * The distribution-shift setting is hard to make reasoned arguments about; it is a rather ill-posed problem, and there is no guarantee that the exact Bayesian model will perform best in such a setting. 3. **Formal characterization of region-specific error.** A full mathematical characterization of the part of state space where non-convergence occurs is what we initially aimed for; however, it proved too difficult. Such a characterization would require one to understand where in space the eigenfunctions corresponding to intermediate eigenvalues are supported, which is non-trivial because it is a non-asymptotic question. We believe the amount of work required for this would warrant a separate submission. Like the reviewer, we think the current analysis is insightful. 4. **The suggested extensions** are good ideas! *Non-conjugate inference* can be immediately achieved via the Laplace approximation, as in Antorán et al. (2023), "Sampling-based inference for large linear models, with application to linearized Laplace". *Hyperparameter optimization* would require bi-level optimization: an outer loop for the hyperparameters and/or variational parameters, along with an inner loop for the linear systems. Since we cannot expect the inner loop to converge, one would need to study how to ensure that the outer loop behaves well even if the inner loop is not at the optimum. 5.
**Questions on experiments** * You are correct: **SVGP shares model hyperparameters with CG and SGD**, and also has some variational parameters where applicable. * We would expect **CG run for sufficiently long** to outperform all alternatives on all data sets. However, for houseelectric (2M+ points), this might take weeks. We are unable to commit that much compute. * **We do provide "RMSE to exact GP"** on our four small datasets in figures 3 and 8. In Table 1 we use test RMSE and NLL since for datasets with more than 50k observations (the focus of our work), we cannot do exact GP inference. In Table 1, for the four smallest datasets, CG converges to $10^{-2}$ tolerance and thus can be thought of as an **exact GP baseline**. 6. **Question: CG non-monotonicity**. This is a good question which we also asked ourselves for some time. Our best explanation: CG converges monotonically in the RKHS norm induced by the chosen kernel. For the Matérn kernel, this RKHS norm is (effectively, i.e. with certain parameter choices and up to norm equivalence) a weighted sum of the $L^2$ norms of the first $m$ derivatives plus the $L^2$ norm of the function itself. CG is thus trading off minimizing the norm of the $m$ derivatives of the function at the expense of the 0th order term: the $L^2$ error in the fit. The derivative norms being minimized first yields the divergence when looking at just $L^2$ error. Of course, since the RKHS norm is a sum of said $m+1$ non-negative terms, minimizing it will eventually force the 0th order term (error in $L^2$ norm) to go to zero too. 7. **Other**. Thanks for the suggestions! We have reworked Fig 4 and Proposition 1 to address your comments and add all required definitions. ---- With these responses in mind, we gently and politely request that you please consider increasing your score towards firm acceptance. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their thoughtful response. 
Most of my questions are addressed. However, my concern about the potential impact of this work is not fully resolved. In the authors' response to Reviewer w7c5, two promising applications are (1) large-scale BO (with parallel Thompson sampling), and (2) large-scale spatio-temporal modeling. The paper investigates the first application in a synthetic setting (correct me if I am wrong). While the results seem encouraging, it would be more convincing to conduct the experiments on benchmark BO datasets (ideally also ill-conditioned ones, to demonstrate its advantage). Is there a reason that the authors did not choose to do so? For (2), since the proposed method can only perform posterior sampling given fixed hyperparameters instead of _learning_ the hyperparameters, how can it be useful for spatio-temporal modeling? Is there a scenario where large-scale posterior sampling is of interest in that domain? Again, I wanted to say that I believe the proposed method could potentially shine in many applications. However, I feel the paper and the authors' response haven't really demonstrated its practical applicability. I would be happy to raise my score once this concern is addressed. --- Reply to Comment 1.1.1: Title: Real applicability: BO over molecular properties with Tanimoto kernel and more Comment: Thank you very much for your reply! We go on to describe our ongoing work on the application of SGD GPs to molecular property prediction, as well as how the method can be applied more generally, for instance to spatiotemporal modeling. ---- 1. We are currently working on **applying SGD inference to molecular binding energy prediction** using a dataset of 250k molecules introduced by [1]. We are using the Tanimoto kernel for graphs, which admits random features ([2]). In particular, we are searching for molecules which have a high probability of binding to proteins of interest using Bayesian optimization.
* The Tanimoto kernel only has 1 hyperparameter, the marginal kernel variance. For this task, the **authors of [1] provide an optimized kernel hyperparameter** value, which they used in their experiments. Additionally, the **authors of [3] show how marginal kernel variances can be learnt using only GP posterior samples.** Thus, our SGD-based inference can be directly applied to this setting for learning the Tanimoto kernel's hyperparameter. * Although there is not enough time to conclude these experiments before the end of the discussion period (the 21st), we would be happy to include them in the camera-ready version of the paper. 2. **We think that GP inference methods can be useful even without hyperparameter learning.** A simple but general and effective approach to selecting hyperparameters is to **maximize the marginal likelihood on clustered subsets of the data**, as we do in our paper (see Appendix A.1). This yields results competitive with, and on some datasets better than, the hyperparameters learnt via conjugate gradients in [4]. This approach is particularly well-suited to length scale hyperparameters, which are of key importance in spatiotemporal modeling. 3. Next, **ill-conditioning appears consistently for large enough datasets or when the kernel distance between observations is small** (in fact, *often provably so* - see Section 2.3 of [5]), so one does not need to search particularly hard to find examples. The latter is bound to occur in Bayesian optimization, since methods often explore near previously-found well-performing locations. 4. Finally, there is strong precedent in the Gaussian process literature where (a) **a novel method with significant advantages but important limitations was introduced**, and (b) **the limitations were addressed through follow-up work**.
* For example, Titsias [6] introduced the variational-inference-based view of sparse Gaussian processes, developing a novel formalism for inducing points, whose complexity is $O(NM^2)$ - larger than, for instance, that of certain subset-of-data methods. Then, Hensman et al. [7] reduced this to $O(M^3)$ - a major improvement when $N$ is in the millions - by applying stochastic optimization to the variational inference objective. Achieving this improvement was only possible because the variational viewpoint had been developed previously. * Mirroring this example, **we expect that follow-up work, using for instance bilevel optimization techniques, can address limitations around hyperparameter learning** (for examples of such techniques in a neural network context, see [8,9]). This would start from the ideas we developed, but would likely introduce enough additional theoretical and methodological contributions, as well as experimental evaluation specific to hyperparameter optimization, to constitute another paper. [1] *DOCKSTRING: Easy Molecular Docking Yields Better Benchmarks for Ligand Design.* Miguel García-Ortegón, Gregor N. C. Simm, Austin J. Tripp, José Miguel Hernández-Lobato, Andreas Bender, and Sergio Bacallado [2] *Tanimoto Random Features for Scalable Molecular Machine Learning*. Austin Tripp, Sergio Bacallado, Sukriti Singh, José Miguel Hernández-Lobato [3] *Sampling-based inference for large linear models, with application to linearised Laplace.* Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato [4] *Exact Gaussian Processes on a Million Data Points.* Ke Alexander Wang, Geoff Pleiss, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger, Andrew Gordon Wilson [5] *Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees*. Alexander Terenin, David R.
Burt, Artem Artemev, Seth Flaxman, Mark van der Wilk, Carl Edward Rasmussen, Hong Ge [6] *Variational learning of inducing variables in sparse Gaussian processes*. Michalis Titsias [7] *Gaussian processes for big data*. James Hensman, Nicolò Fusi, Neil Lawrence. [8] *Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation*. Ross M. Clarke, Elre T. Oldewage, José Miguel Hernández-Lobato [9] *Generalized Inner Loop Meta-Learning*. Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala
Summary: This paper introduces a method for quickly approximating a Gaussian process posterior when the data size is large. Exact computation complexity would be cubic in the data size, while this method is linear. It originates from the idea of pathwise conditioning of Gaussian processes, where the law of the Gaussian process posterior is expressed in terms of the law of the Gaussian process prior. Decomposition into eigenfunctions gives a way of approximating the Gaussian process posterior, so posterior inference transforms into appropriately choosing the coefficients of the decomposition. Objectives quadratic in the coefficients are formed, which can be optimized with SGD for the sake of lower computational complexity compared to conjugate gradient methods. Ideas based on inducing points are also discussed for reducing the data size needed. An intuitive discussion of errors in different regions is accompanied by figure illustrations as well as some theoretical results. Adequate numerical experiments are presented to support the method. Strengths: 1. The paper has clear descriptions and is well written. 2. The ideas are mostly original and combine the advantages of multiple methods. 3. Many figure illustrations are present, making the ideas easily understood. 4. Supportive numerical experiments are conducted. 5. The problem of reducing the computational complexity of Gaussian process posteriors is itself very important and meaningful. Weaknesses: It would be nice to discuss why the Fourier basis is used for the eigendecomposition and how it performs compared to other bases such as wavelets. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: To control the approximation error below a constant threshold, how does the number of basis functions L (number of components) needed scale with the data size and dimension? Either a theoretical or a numerical result could be interesting. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
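The summary's core computational idea, posterior inference as optimization of a quadratic objective in the decomposition coefficients, can be illustrated with a small self-contained sketch (ours, not the authors' code; the kernel, data, and step size are illustrative, and plain gradient descent stands in for SGD):

```python
import numpy as np

def rbf_kernel(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
noise = 0.25                              # illustrative noise variance

# The posterior mean is k(., X) @ alpha with alpha = (K + noise*I)^{-1} y.
# Instead of the O(n^3) solve, minimize the quadratic objective
#   L(alpha) = 0.5 * alpha' A alpha - alpha' y,   A = K + noise*I,
# whose unique minimizer is the same alpha.
K = rbf_kernel(x, x)
A = K + noise * np.eye(n)

alpha = np.zeros(n)
lr = 1.0 / np.linalg.norm(A, 2)           # step size at 1/lambda_max: stable
for _ in range(5000):
    alpha -= lr * (A @ alpha - y)         # full gradient; SGD would subsample

exact = np.linalg.solve(A, y)
print(np.max(np.abs(alpha - exact)))      # tiny: gradient descent matches the solve
```

Each gradient step costs a matrix-vector product rather than a factorization, which is the linear-in-data-size regime the summary refers to once kernel structure or subsampling is exploited.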
Rebuttal 1: Rebuttal: Thank you very much for your review! We are delighted that you found our descriptions “clear” and our paper “well-written” - thank you for these comments! The reviewer suggests:

> “It would be nice to discuss why the Fourier basis are used for eigen decomposition and how it performs compared to other basis such as wavelets.”

and asks:

> “To control the approximation error below a constant threshold, how does the number of basis L (number of components) needed scales with the data size and dimension? Either theoretical or numerical result could be interesting.”

Thank you very much for these two closely related questions! Because the terms “Fourier basis” and “spectral basis” are used in a non-synonymous way in our work, these questions can be interpreted in two ways: either (1) referring to the number of Fourier features, or (2) to the number of spectral basis functions along which we examine the convergence of SGD. Both questions are interesting, potentially to other referees as well, so we will answer each of them. 1. **Fourier features.** In our work, Fourier features are used (a) to approximate the prior samples needed for pathwise conditioning, and (b) to approximate the regularization term $\|\alpha\|_K$ which appears in the quadratic objective used by SGD. Neither requires Fourier features explicitly: any finite basis function approximation of the prior kernel will work. None of our techniques are limited to Fourier features - we use them because they are convenient and work well for stationary kernels. For other kernels, including potentially non-stationary kernels, other bases such as wavelets or random hashes (Tripp et al. 2023, "Tanimoto Random Features for Scalable Molecular Machine Learning") could be used instead of Fourier features. This can be especially interesting for non-stationary kernels and kernels on boundary-constrained domains. 
* **On random (Fourier) feature approximation error**: We only use random features to approximate quantities that do not depend on the targets: (a) prior function samples and (b) norms in the metric induced by the kernel matrix, $\|\alpha\|_K$. In (a), the approximation error is also independent of the number of observations. We found this to be the case empirically in (b) as well.
* Indeed, we use the same number of random features across all experiments: 2000 for prior sampling and 100 for kernel matrix norms.
* Crucially, we do *not* approximate any conditional distributions (i.e. matrix inverses) using random features; we use SGD for this. As the reviewer suggests, approximating conditionals with random features requires a number of features that increases with the number of observations and is prone to variance starvation. See Sutherland and Schneider (2015), "On the Error of Random Fourier Features" and Wilson et al. (2021), "Pathwise Conditioning of Gaussian Processes" for details, including explicit error analysis.

2. **Spectral basis functions.** One can ask how many spectral basis functions one needs to look at in the convergence bound to ensure good performance. Recall that these are defined as $u^{(i)}(\cdot) = \sum_{j=1}^N U_{ji} / \sqrt{\lambda_i} \, k(x_j, \cdot)$, where $U$ is the matrix of eigenvectors in the eigendecomposition of $K_{xx}$ and $\lambda_i$ are the respective eigenvalues. We chose the term “spectral” for these basis functions since they arise from the kernel matrix’s eigendecomposition. They define a “kernel-specific” basis that in some intuitive sense resembles the Fourier basis, but is different because it is data-dependent. As revealed by Proposition 1, the question of how many such basis functions are needed to ensure good performance is central to the performance of SGD in GPs, because it determines how many iterations are needed to ensure convergence. 
We do not address this question from a theoretical standpoint, because it is very technically difficult - please see our response to Referee Hc1f part (3) for more on this. --- Rebuttal Comment 1.1: Comment: Thank you for the further explanations and responses to several questions. I am pretty satisfied with them and will keep the score as is.
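The spectral basis functions described in point 2 can be made concrete with a short numpy sketch (ours, purely illustrative; the kernel, lengthscale, and data are made up). It builds $u^{(i)}$ from the eigendecomposition of $K_{xx}$ and checks their orthonormality in the RKHS:

```python
import numpy as np

def k(a, b, ell=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 30))
K = k(x, x)

# Eigendecomposition K_xx = U diag(lam) U', reordered so eigenvalues descend
lam, U = np.linalg.eigh(K)
lam, U = lam[::-1], U[:, ::-1]

def u(i, z):
    # spectral basis function u^{(i)}(z) = sum_j U_{ji} / sqrt(lam_i) * k(x_j, z)
    return (k(z, x) @ U[:, i]) / np.sqrt(lam[i])

# At the data points, u^{(i)}(x) = sqrt(lam_i) * U[:, i], and the leading
# functions are orthonormal in the RKHS:
#   <u^{(i)}, u^{(l)}>_k = (U' K U)_{il} / sqrt(lam_i * lam_l) = delta_{il}
m = 8                                     # stick to well-conditioned leading modes
G = (U[:, :m].T @ K @ U[:, :m]) / np.sqrt(np.outer(lam[:m], lam[:m]))
print(np.max(np.abs(G - np.eye(m))))      # ~ 0
```

Restricting the check to the leading modes avoids dividing by the numerically tiny trailing eigenvalues of a smooth kernel matrix.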
Rebuttal 1: Rebuttal: We thank all reviewers for the time taken to read our paper and for their insightful and helpful comments. We are pleased the reviewers unanimously found our paper to be well-written, novel, and interesting. ---- The two most pressing concerns come from: * Reviewers Hc1f and w7c5 ask about the **significance of the setting covered**. The technique presented is immediately useful for **large-scale Bayesian optimization**. This problem appears most often in industrial settings and is the focus of our paper's second experiment. Our work also presents a novel approach to deal with **ill-conditioning in GP kernel matrices**. Ill-conditioned systems appear in almost all moderate-to-large-scale GP problems. Further details on both of these points are given in the rebuttals for Reviewers Hc1f and w7c5. ---- * Reviewer xja3 is concerned about our **lack of comparison against Subset of Data (SoD) inference and our choice of SVGP hyperparameters**. We have run **additional regression experiments using SoD methods**. We further illustrate the strengths and weaknesses of SoD in **Figure 1(r) in the attached PDF**: SoD performs strongly only when the data is very redundant. We have also **run the SVGP baseline with the number of inducing points increased to 4096.** Quantitative results are provided in the individual response to xja3. Pdf: /pdf/143568894ff2d0f9c3b9fc699191f2db3b331acf.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Enhancing Sharpness-Aware Optimization Through Variance Suppression
Accept (poster)
Summary: This paper proposes a method, which applies an EMA to minibatch gradients, to better solve the inner maximization problem in SAM and obtain higher test accuracy. Strengths: 1. The proposed method is easy to implement. 2. The theoretical analysis is sufficient. Weaknesses: 1. The core of this paper is to alleviate the gradient noise when solving the inner maximization problem of SAM. To achieve this goal, it proposes to apply an EMA to the minibatch gradient and gives a theoretical proof of the variance suppression. Although the experimental results show the improvement of VaSSO, I am somewhat confused about why gradient noise harms the generalization performance of SAM. Section 4.1 of SAM [1] claims that a smaller batch size tends to yield models with better generalization ability, which conflicts with the core idea of this paper. 2. Following 1, the statements on Lines 47-48 and Line 50 need more support or evidence. And I think the introduction takes too much space to describe related works. Instead, the authors should give some experimental results to support the statements on Lines 47-50. [1] Foret P, Kleiner A, Mobahi H, et al. Sharpness-aware minimization for efficiently improving generalization[J]. arXiv preprint arXiv:2010.01412, 2020. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: no question Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: no limitation Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for reviewing this submission. Responses to the issues raised are provided next. **W1.** Most existing works on m-sharpness only test SAM itself, not SAM variants. Hence, m-sharpness does not necessarily generalize to our setting, simply because we introduce bias in $d_t$. Please see more experiments and explanation in the general response, where we numerically confirm that i) m-sharpness depends on the choice of neural network; ii) in the presence of bias, m-sharpness may not even hold; and, iii) m-sharpness heavily depends on the specific means of updating SAM, which may not hold for different choices of $\epsilon_t$. **W2.** We will update our manuscript based on these responses. --- Rebuttal Comment 1.1: Title: Follow up Comment: Dear reviewer 988S, we hope that our responses have addressed your concerns. Could you please let us know if further clarification is needed?
Summary: This paper proposes to improve upon sharpness-aware minimization (SAM) with variance-reduced inner perturbation steps. Theoretical analysis demonstrates that the proposed method achieves a similar convergence rate to the original SAM. Empirical results on different applications (image classification and neural machine translation) demonstrate the effectiveness of the proposed method. In addition, the proposed method also endows SAM with robustness against large label noise. Strengths: - The proposed method is clear and easy to understand - Theoretical results are sound - Empirical results demonstrate the superiority of the proposed method Weaknesses: - Some claims are not fully supported - Some baseline methods are missing Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - I am a bit puzzled by the claim that the proposed method can be easily integrated with computationally efficient variants of SAM, e.g., (Liu et al, 2022; Zhao et al., 2022b) as well as a missing reference (Jiang et al., 2023). Compared with SAM, which only needs to compute stochastic gradients, VaSSO has to keep track of d_t, as in (4a). I am not sure if the update is still possible if we only perform the SAM step periodically (Liu et al, 2022) or randomly (Zhao et al., 2022b). The authors need to elaborate more on that. - If such integration is not possible, the proposed method may suffer from a larger computational cost than these variants. Although it can be regarded as the cost of better final performance, the authors may still need to make this limitation clear. - Despite these computation-efficient versions, the authors claim that some other existing works on SAM are also orthogonal to their work and can be easily integrated. While I do not find critical problems with these works, the authors may need to add some experiments that integrate the proposed method with these methods, and see if such integration achieves any improvements. 
Nevertheless, such experiments are missing in the current version. - It also surprises me that the experiments contain so few baseline methods. For example, why is Fisher SAM (Kim et al., 2022) not compared in the experiments? It is confusing to compare with ASAM but not Fisher SAM. References: Weisen Jiang, Hansi Yang, Yu Zhang, James Kwok. An adaptive policy to employ sharpness-aware minimization. ICLR 2023 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the Weaknesses and Questions parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time spent on our submission. We hope your concerns can be addressed after reading our responses. **Q1 & Q2.** Since updating $d_t$ in (4a) only needs the gradient on the original model, i.e., $g_t(x_t)$, which is computed every iteration, our work can be adopted jointly with the computationally efficient variants (Liu et al, 2022; Zhao et al., 2022b). For example, we combine our VaSSO with (Zhao et al., 2022b) in the pseudocode below.

- For $t = 0 \ldots T$
  - Draw a Bernoulli random variable $B_t$ ($B_t = 0$ with probability $p$)
  - Calculate $g_t(x_t)$
  - Update $d_t = (1 - \theta) d_{t-1} + \theta g_t(x_t)$
  - If $B_t = 1$, update with SGD, i.e., $x_{t+1} = x_t - \eta_t g_t(x_t)$
  - If $B_t = 0$, update with VaSSO, i.e., $\epsilon_t = \rho d_t / \| d_t \|$ and $x_{t+1} = x_t - \eta_t g_t(x_t + \epsilon_t)$
- EndFor

We also include some numerical results with $p=0.3$ on CIFAR10 to demonstrate the efficiency of VaSSO in this case.

| | (Zhao et al., 2022b) | VaSSO + (Zhao et al., 2022b) |
| --- | --- | --- |
| ResNet18 | 96.37 $\pm$ 0.13 | 96.50 $\pm$ 0.16 |

**Q3.** We test VaSSO-aided ASAM on CIFAR10, and the results show that VaSSO helps ASAM.

| | ASAM | VaSSO+ASAM |
| --- | --- | --- |
| ResNet18 | 96.33 $\pm$ 0.09 | 96.52 $\pm$ 0.12 |
| WRN-28-10 | 97.15 $\pm$ 0.05 | 97.46 $\pm$ 0.08 |

**Q4.** The numerical tests for FisherSAM on CIFAR10 are shown below. FisherSAM performs slightly worse than VaSSO.

| | FisherSAM | VaSSO |
| --- | --- | --- |
| ResNet18 | 96.73 $\pm$ 0.03 | 96.77 $\pm$ 0.09 |
| WRN-28-10 | 97.46 $\pm$ 0.18 | 97.54 $\pm$ 0.12 |
| PyramidNet110 | 97.84 $\pm$ 0.11 | 97.93 $\pm$ 0.08 |

--- Rebuttal Comment 1.1: Title: Follow up Comment: Dear reviewer 65jc, we hope that our responses have addressed your concerns. Could you please let us know if further clarification is needed?
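A minimal runnable sketch of the randomized VaSSO/SGD loop described in the rebuttal above (ours, not the authors' implementation), with a synthetic noisy-gradient oracle on a toy quadratic standing in for minibatch gradients; the values of $\theta$, $\rho$, $\eta$, and $p$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noisy-gradient oracle standing in for minibatch gradients of f(x) = 0.5 ||x||^2
def stochastic_grad(x):
    return x + 0.1 * rng.standard_normal(x.shape)

x = rng.standard_normal(10)
d = np.zeros(10)                         # EMA of stochastic gradients, the d_t of (4a)
theta, rho, eta, p = 0.1, 0.05, 0.1, 0.3

for _ in range(500):
    g = stochastic_grad(x)
    d = (1 - theta) * d + theta * g      # (4a): variance-suppressed direction
    if rng.random() < p:                 # with probability p (B_t = 0): VaSSO step
        eps = rho * d / (np.linalg.norm(d) + 1e-12)
        x = x - eta * stochastic_grad(x + eps)
    else:                                # otherwise (B_t = 1): cheap SGD step
        x = x - eta * g

print(np.linalg.norm(x))                 # small: the iterate settles near the minimum
```

The EMA update runs every iteration because $g_t(x_t)$ is computed anyway; the second gradient evaluation (at the perturbed point) is only paid on the randomly chosen VaSSO iterations, which is the claimed source of the computational saving.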
Summary: This paper proposes a variance suppression approach for SAM in order to account for the sensitivity of the stochastic gradients used in SAM's inner maximization. The proposed method is shown to provably reduce the MSE of gradient estimation. Some experimental results on benchmark datasets are provided. Strengths: Paper strengths: 1. great motivation for variance suppression 2. simple algorithm modification to standard SAM 3. theoretical results for VaSSO in Theorem 2 and Corollary 1 Weaknesses: Paper weaknesses: 1. The authors claim in section 3.2 that VaSSO can boost the performance of other SAM family approaches, but this is not shown in the experimental results 2. gains are marginal for larger datasets, see Table 3 3. the authors provide convergence rates for VaSSO in Corollary 1 but it's not clear that these are sharper than the SAM rates in Theorem 1 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions to address in rebuttal: 1. does VaSSO translate to gains in domain generalization performance? e.g. the WILDS benchmark [1] 2. can the authors cite more Frank-Wolfe papers in connection with min-max problems, e.g. [2], [3] 3. how do the convergence rates in Corollary 1 relate to Frank-Wolfe optimization convergence rates? Convergence rates for FW algorithms and various variants have been studied in the literature [4] 4. what are the limitations of VaSSO? The authors should include a discussion of limitations 5. can the authors add more SAM baselines to the list? Also, ASAM combined with VaSSO is not included, and would be good to include for comparisons I am willing to increase my score if the authors address most of these concerns. 
[1] Koh et al, WILDS: A Benchmark of in-the-Wild Distribution Shifts, https://arxiv.org/abs/2012.07421 [2] Tsiligkaridis et al, Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training, CVPR 2022, https://arxiv.org/abs/2012.12368 [3] Gidel et al, Frank-Wolfe Algorithms for Saddle Point Problems, https://arxiv.org/pdf/1610.07797.pdf [4] Huang et al, Accelerated Stochastic Gradient-free and Projection-free Methods, ICML 2020 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time devoted to this work. We find the comments helpful for improving the quality of our work, and we are happy to modify our manuscript accordingly. **W1.** We have combined VaSSO with ASAM, and here are our results for CIFAR10. It can be seen that VaSSO+ASAM outperforms ASAM.

| | ASAM | VaSSO+ASAM |
| --- | --- | --- |
| ResNet18 | 96.33 $\pm$ 0.09 | 96.52 $\pm$ 0.12 |
| WRN-28-10 | 97.15 $\pm$ 0.05 | 97.46 $\pm$ 0.08 |

**W2.** We respectfully disagree with this assessment. The accuracy is not easy to improve when using ResNet50 on ImageNet. In addition, SAM improves over SGD by 0.54, and VaSSO further improves over SAM by 0.28. This is a 52% extra improvement relative to SAM's merit over SGD, which is not marginal. **W3.** The convergence rates are the same. **Q1.** Thanks for pointing out another possible domain for VaSSO. While domain adaptation is not our central theme, it will be of interest to investigate a possible extension of VaSSO to this end. In fact, we have noticed that other sharpness-aware optimization approaches have been applied to domain adaptation; see e.g., [5]. We have this direction in our future research agenda, and look forward to leveraging the problem structure of domain adaptation for further improvement on top of VaSSO. [5] P. Wang, et al. "Sharpness-aware gradient matching for domain generalization." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3769-3778. 2023. **Q2.** Thanks for pointing out the missing references. We will include them in the updated manuscript. **Q3.** Our rate is not directly comparable to [4]. This is because the inner maximization of SAM/VaSSO changes over iterations, i.e., $\max_{\|\epsilon\| \leq \rho} f(x_t + \epsilon)$; and SAM/VaSSO only uses 1-step FW for solving its inner maximization problem per iteration; see more in Appendix 1. 
These differences make it difficult to compare our rates with [4]. However, it may be possible to apply the methods in [4] to our framework for better bounds. We will discuss these issues when outlining our future work in the revised paper. **Q4.** The limitation of VaSSO is that it has to compute the gradient twice per iteration, which we have discussed in lines 329-331. We have also included potential solutions in the same paragraph, with numerical results shown in our responses to Q1 and Q2 of Reviewer 65jc. **Q5.** The combination of VaSSO and ASAM is shown earlier in our response to W1. For other SAM baselines, we adopt FisherSAM (Kim et al., 2022). The test accuracy on CIFAR10 is shown below, where VaSSO also has numerical merits over FisherSAM.

| | FisherSAM | VaSSO |
| --- | --- | --- |
| ResNet18 | 96.72 $\pm$ 0.03 | 96.77 $\pm$ 0.09 |
| WRN-28-10 | 97.46 $\pm$ 0.18 | 97.54 $\pm$ 0.12 |
| PyramidNet110 | 97.84 $\pm$ 0.11 | 97.93 $\pm$ 0.08 |

We hope our responses address your concerns. Let us know if there are further comments and we are happy to discuss. --- Rebuttal Comment 1.1: Title: Follow up Comment: Dear reviewer BP9H, we hope that our responses have addressed your concerns. Could you please let us know if further clarification is needed?
Summary: **I have read the authors' rebuttal, see reply below.** This work proposes changing SAM so that the weights are perturbed by a moving average of the gradients across training iterations, rather than by the gradient of the current minibatch. The authors provide upper bounds indicating that the moving average may better approximate the inner maximization with respect to the loss over the entire dataset, which is difficult for naive SAM due to minibatch noise. Empirically, SAM with moving-average gradients (which they call VaSSO) performs better on CIFAR10 and CIFAR100 by around $0.1$-$0.5$%, and the improvements are more apparent in the presence of heavy label noise, with gains of up to +10% under 75% label noise. Empirical experiments show that VaSSO minimizes the maximum eigenvalue of the Hessian better than SAM. Strengths: **Significance**: The proposed algorithm VaSSO is a very simple change that achieves flatter minima and better generalization than SAM. Although the two effects may not necessarily be tied in a causal manner, its effectiveness at achieving both beyond SAM suggests that it may be a useful optimizer in practice and for further study in future works. **Quality**: The paper is clearly written and organized. Weaknesses: **1. Lack of important literature review**: The authors motivate VaSSO by identifying problems with SAM's ability to optimize the original objective _due to the minibatch noise_. The moving average is hypothesized to better approximate the full-batch gradient. The authors do not address the fact that previous analyses of SAM have unanimously observed that SAM actually **only yields generalization improvements when paired with minibatch noise**. In particular, Andruschenko et al (https://arxiv.org/abs/2206.06232) showed that n-SAM, which utilizes the full-batch gradient directly for the perturbation step, actually observed no generalization gain. 
This is supported by experiments on m-sharpness in the original SAM paper (https://arxiv.org/abs/2010.01412), which also showed that better approximating the sharpest direction by taking the second-order approximation of the loss instead of the first-order approximation led to worse performance. Wen et al (https://arxiv.org/abs/2211.05729) showed that minibatch noise is important for minimizing the trace of the Hessian instead of the max eigenvalue (though that work does not make any claim about which measure of sharpness is better correlated with generalization). To summarize, the authors pose the problem as SAM failing to achieve the full generalization benefits because it does not minimize its intended objective optimally, but the previous literature indicates that this suboptimality is actually what allows SAM to achieve better generalization. The authors do not mention the conclusions made in previous works, but it seems important to address this conflict. **2. Marginal improvements, and inconsistent numbers in comparison to Foret et al.**: Without label noise, VaSSO's improvements range between 0.2-0.5% for CIFAR10 and CIFAR100, and it is not clear whether further hyperparameter tuning could close this gap. In particular, the authors train WideResNet-28-10 on CIFAR10 with cutout data augmentation, which was also conducted by Foret et al, where they report an error of 2.3% while the authors report 2.7%, and 2.5% for VaSSO. This slight improvement may potentially come from the authors only optimizing the rho hyperparameter, and not m-sharpness (the other hyperparameter mentioned in Foret et al., where the minibatch is sharded first). For label noise, there is a much more significant boost in test accuracy, but it is unclear whether the authors are reporting the peak early-stopping accuracy or the final accuracy. The former is more important for label noise, and the difference between VaSSO and SAM may be coming from reporting the latter. 
Also, Foret et al reported better numbers for SAM on CIFAR10 with similar amounts of label noise, although they were using ResNet34 instead of ResNet18. **3. The upper bounds are too loose, and comparing upper bounds does not lead to any meaningful comparison.** The upper bound derived in Theorem 2 requires very approximate intermediate steps (a very loose upper bound in Line 181), and is a function of $T$ rather than $t$. The bound implies that if you train for longer, the bound on the MSE between the moving average and the full-batch gradient _at every intermediate step_ improves, which doesn't seem right. More importantly, any meaningful comparison between the MSE for SAM and the MSE for VaSSO should compare the upper bound of VaSSO to the _lower bound_ for SAM. Comparing two upper bounds isn't very meaningful. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the time devoted to this review. The issues raised are addressed one by one next. **W1.** It is prudent to clarify that *m-sharpness has only been tested for SAM, and may not necessarily generalize to other SAM variants*. One of the key differences from m-sharpness in SAM is that VaSSO introduces **bias** in $d_t$. As shown in our **general response**, VaSSO works in a regime where m-sharpness does not necessarily hold. Thus, the role of noise in SAM does not imply the same for VaSSO. In addition, there are some other concerns that we hope to settle in our response.

> Andruschenko et al .. n-SAM .. actually observed no gain

- VaSSO does not reduce to SAM even when using the full gradient to find $\epsilon_t$; hence, the results of the aforementioned citation may not generalize to VaSSO. Moreover, n-SAM does not account for the normalization step in SAM. This can make a great difference, especially when the stochastic gradient noise is pronounced. In addition, the implementation in this citation does not fully match the claim, as n-SAM is implemented by performing the "ascent step on a different batch compared to the descent step," as shown in Appendix D of the aforementioned citation.
- The experiment in this citation only observes that SAM has an accuracy of 95.8 using ResNet18 on CIFAR10, which is lower than our implementation of SGD (96.25) or SAM (96.58). This is perhaps because a piecewise linear learning rate is adopted in this citation, or because no data augmentation (such as cutout) has been applied. Therefore, the results in this citation may not necessarily generalize to our setting, since *cutout* might have markedly changed the loss curvature.

> pose the problem as SAM failing to achieve the full benefits because it does not minimize its intended objective optimally

There may have been a misunderstanding. We do not claim that solving the inner maximization optimally is helpful. 
We only point out that the approximation in the current SAM derivation is not perfect, and this is a chance to make SAM stronger. In fact, even if the inner maximization uses the full gradient, it is unlikely to solve the inner maximization optimally in a single step. **W2.**

> Marginal improvements

We respectfully disagree with this assessment. Most of our numerical results show a clear improvement over SAM, especially those in Tables 3 and 6.

> inconsistent numbers in comparison to Foret et al

There are several reasons for the difference on WRN. The first is that [Foret et al] uses JAX, while we work with PyTorch. The second reason is that [Foret et al] works with 8 GPUs, which allows calculation of 8 adversarial models $\epsilon_t^i$ for $i \in \{1, 2, \ldots, 8\}$ and computes $g_t^i(x_t + \epsilon_t^i)$ separately on each GPU. Unfortunately, this approach requires each GPU to backpropagate twice (16 backpropagations in the 8-GPU setup), which is unaffordable in our single-GPU setting.

> peak early-stopping accuracy or the final accuracy

As suggested, we report the peak early-stopping accuracy when the noise level is 75%. SAM achieves 76.02, while VaSSO exhibits an accuracy of 83.63, which is still a considerable improvement.

> Foret et al reported better numbers for SAM on CIFAR10 although they were using ResNet34 instead of ResNet18.

The difference is that [Foret et al] trains a **ResNet32**, which has only 0.46M parameters (there is a **typo** in the review: it is not ResNet34). We are training a ResNet18, which is an 11M-parameter model. Our model is more than 20x larger than that in [Foret et al]; hence, the difference is reasonable. **W3.**

> Theorem 2 depends on T but not t.

This $T$ dependence comes from the choice of $\rho = O(1/\sqrt{T})$ and $\eta = O(1/\sqrt{T})$. This is mainly to simplify the proof. However, extending the choice of hyperparameters to $\rho_t = O(1/\sqrt{t})$ and $\eta_t = O(1/\sqrt{t})$ is rather straightforward based on standard optimization techniques. 
For such a parameter choice, Theorem 2 can be readily modified to exhibit a dependence of order $O(1/\sqrt{t})$.

> Comparing upper bounds does not lead to any meaningful comparison

Unfortunately, the reviewer *missed the fact that VaSSO is compared with a lower bound*. Recall that a lower bound means that there exists an instance such that $\mathbb{E}[\| g_t(x_t) - \nabla f(x_t) \|^2] = \sigma^2$. The simple 1-d example provided next shows that this is indeed a lower bound. Let $f(x, \xi) = h(x) + \xi x$, where $h(x)$ is a deterministic loss function and $\xi$ is a Gaussian random variable with zero mean and variance $\sigma^2$. For such a loss function, it can be readily verified that $\mathbb{E}[\| g_t(x_t) - \nabla f(x_t) \|^2] = \sigma^2$. Hence, our bound on SAM is indeed a lower bound, and the comparison is certainly meaningful. --- Rebuttal Comment 1.1: Title: Follow up Comment: Dear reviewer zZon, we hope that our responses have addressed your concerns. Could you please let us know if further clarification is needed?
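The 1-d lower-bound instance from the rebuttal above is easy to check by Monte Carlo; a small sketch (ours, with an illustrative choice of $h$ and $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)

# The rebuttal's 1-d instance: f(x, xi) = h(x) + xi * x with xi ~ N(0, sigma^2).
# Its stochastic gradient is g(x) = h'(x) + xi, while the full gradient is h'(x),
# so E[(g(x) - f'(x))^2] = E[xi^2] = sigma^2 exactly -- the claimed lower bound.
sigma = 0.7
h_prime = lambda z: 2 * z               # h(x) = x^2 here, but any deterministic h works

x = 1.3
xi = sigma * rng.standard_normal(1_000_000)
g = h_prime(x) + xi                     # one stochastic gradient per sampled xi
mse = np.mean((g - h_prime(x)) ** 2)
print(mse, sigma**2)                    # both ~ 0.49
```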
Rebuttal 1: Rebuttal: **General response to Reviewers zZon and 988S**: Our results do not conflict with existing works. Specifically, m-sharpness does not conflict with the contribution of this submission for three reasons:

- the behavior of m-sharpness may depend on the adopted dataset and neural network; see Experiment 1.
- m-sharpness may not hold when the gradient estimator is biased, which is the case for VaSSO; see Experiment 2.
- m-sharpness depends strongly on the specific SAM update, and its generalizability to other approaches is not well supported; see Experiment 2.

The reasons above explain why our results do not conflict with m-sharpness, simply because the present submission deals with a regime in which m-sharpness may not hold. We confirm this with the following experiments.

**Experiment 1**: m-sharpness on transformers. We conduct experiments on a transformer following the settings in Section 4.2. We use a fixed batch size of $B$ and vary the choice of $m$ in $\{B/2, B/4\}$. For each $m$, we tune $\rho$ from $\{0.025, 0.05, 0.1, 0.2, 0.5\}$. The best BLEU scores are reported below (larger is better).

| | SAM | B/2-SAM | B/4-SAM |
| --- | --- | --- | --- |
| BLEU | 34.75 $\pm$ 0.04 | 34.73 $\pm$ 0.02 | 34.69 $\pm$ 0.04 |

It can be seen that a smaller $m$ actually worsens the test performance, suggesting that m-sharpness depends on the adopted neural networks and datasets.

**Experiment 2**: m-sharpness in the presence of bias. A critical difference between m-sharpness in SAM and VaSSO is that the latter uses a biased gradient estimator $d_t$ to find $\epsilon_t$. To test the impact of bias on m-sharpness, consider the following experiment. First, fix a bias vector $\beta$ with the same size as the gradients, with each entry sampled from ${\cal N}(0.01, 1)$. Next, normalize $\beta$ to ensure $\|\beta\| = 0.1$. 
We use $\beta$ as the bias when finding the adversarial model, i.e., $\epsilon_t = \rho (g_t(x_t) + \beta) / \| g_t(x_t) + \beta \|$. We find that when bias is present, m-sharpness may not hold for ResNet18 on CIFAR10. Moreover, a decreasing trend in test accuracy is observed in this case, which illustrates how helpful it is to reduce the gradient variance.

| | m=128 | m=64 | m=32 | m=16 |
| --- | --- | --- | --- | --- |
| Accuracy | 96.33 $\pm$ 0.04 | 96.26 $\pm$ 0.07 | 96.26 $\pm$ 0.10 | 96.18 $\pm$ 0.13 |

Experiment 2 also demonstrates that m-sharpness is highly related to the specific way SAM is updated: changing the way one finds $\epsilon_t$ can reverse the m-sharpness behavior.

In sum, our Experiment 1 shows that m-sharpness depends on the neural network SAM is applied to. The exact reasoning behind m-sharpness is unclear, and it is possible that m-sharpness is caused by the particular optimization step used. In fact, most of the existing works on m-sharpness only test SAM, and do not consider alternative choices for the inner maximization.

Lastly, it is important to stress the reason for not studying m-sharpness directly. We find that the m-sharpness formulation, e.g., eq. (3) in Andriushchenko et al. 2022, *may be ill-posed mathematically due to the lack of a clear definition of how the dataset ${\cal S}$ is partitioned*. Using their notation, suppose for instance that the loss function is $l_i(x) = a_i x^2 + b_i x$, where $(a_i, b_i)$ are data points. Consider a dataset with 4 samples: $(a_1=0, b_1=1)$; $(a_2=0, b_2=-1)$; $(a_3=-1, b_3=0)$; and $(a_4=1, b_4=0)$. Let us consider 2-sharpness below under different partitions of the dataset.

- If the data partition is (1,2) and (3,4), the objective of 2-sharpness, i.e., (3) in Andriushchenko et al. 2022, becomes $\min_w \sum \max_{\|\delta\| < \rho} 0 = 0$.
- If the data partition is (1,3) and (2,4), the objective is $\min_w \sum_{i=1}^2 \max_{||\delta|| < \rho} f_i(w,\delta)$, where $f_1$ is the loss on partition (1,3), i.e., $f_1(w,\delta) = -(w+\delta)^2 + (w+\delta)$; and $f_2(w,\delta) = (w + \delta)^2 - (w + \delta)$ is the loss on partition (2,4). Clearly, the loss functions are not the same when the data partition varies.
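The partition dependence in this example can also be checked numerically. A minimal sketch (the grid approximation of the inner max and the choices $\rho = 0.5$, $w = 1$ are illustrative, not from the submission):

```python
import numpy as np

# Per-sample loss from the example: l_i(x) = a_i * x**2 + b_i * x
data = [(0, 1), (0, -1), (-1, 0), (1, 0)]

def two_sharpness(w, groups, rho=0.5):
    """2-sharpness-style objective: sum over data groups of
    max_{|delta| <= rho} of the group loss (max taken on a grid)."""
    deltas = np.linspace(-rho, rho, 201)
    total = 0.0
    for idx in groups:
        def group_loss(u):
            return sum(a * u**2 + b * u for (a, b) in (data[i] for i in idx))
        total += max(group_loss(w + d) for d in deltas)
    return total

w = 1.0
obj_a = two_sharpness(w, [(0, 1), (2, 3)])  # partition (1,2) and (3,4): losses cancel
obj_b = two_sharpness(w, [(0, 2), (1, 3)])  # partition (1,3) and (2,4)
print(obj_a, obj_b)  # 0.0 vs. a strictly positive value
```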
NeurIPS_2023_submissions_huggingface
2023
Optimistic Exploration in Reinforcement Learning Using Symbolic Model Estimates
Accept (poster)
Summary: This paper proposes a new method for optimistic exploration in sparse-reward settings for reinforcement learning problems. The core of this method is learning optimistic symbolic approximations of the underlying world model. Such optimistic symbolic models can then be used with a diverse planner to generate plans for exploration, which can ultimately facilitate learning. The proposed method is tested in four benchmark domains with grid-world-type environments. The proposed method is compared with three baseline methods, including two exploration methods and one hierarchical RL method from prior work. With the same computational time limit, the proposed method is shown to outperform the baselines in the number of problems solved (goal reached). The proposed method is also shown to be able to effectively leverage human input through lifted representations in the symbolic model.

Strengths:

- The proposed method of learning optimistic symbolic models and utilizing them for exploration is technically interesting and novel.
- The paper is well-written and easy to follow.
- Exploration in RL is a long-standing and important problem. Leveraging symbolic methods in RL is a promising direction. This paper makes an important step in this direction.
- The experimental setting is sound: the proposed method is evaluated on four traditional benchmark domains and compared against three existing methods from prior work.

Weaknesses: For the experimental results, the authors state that both R-max and SMDP timed out in all test instances under the time limit set in the paper, and therefore no results on these two baselines were reported. It would be helpful to report at least some results on these two baselines; otherwise it is not clear how the proposed method compares with these two baselines in different aspects.
As an example, the author could set a very high time limit such that R-max and SMDP can finish running, then report the number of test cases solved as well as the average computational time used, and compare the proposed method with these two baselines in these statistics. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Given a higher time limit, how does the proposed method compare with R-max and SMDP baselines? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: I did not see a discussion on the limitations. It might be helpful to discuss what is assumed to be provided for the symbolic model for the proposed method to work, and what can we do if these elements are not provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their suggestions. Since most of the baseline methods considered, including R-max and epsilon-greedy, visit all states in the limit, they will eventually find a goal state after a number of iterations. For the planning tasks considered here, this number, however, can (and in many cases will) be prohibitively high. Having said that, we will make sure to update the paper with results for higher time limits. Limitations: We have included a discussion of the specific assumptions made in Section 5 of the pdf included as part of the appendix.
Summary: This paper studies reinforcement learning in deterministic MDPs in which a symbolic representation of the state space is assumed to exist. The method proposed in this paper learns a symbolic model of the environment, and it uses it to speed up the reinforcement learning process. The algorithm starts with an "optimistic" model in which all actions lead to the desirable goal state. The role of the algorithm is to refine the actions to make them more realistic by discovering the preconditions and the effects of actions. A few mathematical assertions are provided in the paper, and the empirical results compare the new method with the Q-learning algorithm.

Strengths:

* The paper studies an important problem of enhancing reinforcement learning through symbolic models from symbolic PPDDL-like planning
* The contributions of the paper that are related to symbolic planning are strong
* The fact that the paper uses lifted inference is a great plus
* The paper presents both empirical results and explains the method using theoretical arguments

Weaknesses:

* As said above, the contribution to the symbolic planning literature is clear and seems strong, but the link with reinforcement learning is unclear. For example, it is not clear to me how exactly the model being learned by Algorithm 1 interacts with Q-learning. A pseudo-code of the complete Q-learning algorithm with the new components would be very useful. Lines 285-294 are not sufficient to explain this integration.
* Since the MDP is deterministic, there is no need for a full RL policy, since the solution can be a single trajectory. This means that Q-learning may not be required. Perhaps it would be more appropriate to integrate and compare this method with LRTDP, which would have convenient properties in deterministic MDPs.
* The authors were careful to acknowledge in the introduction that their models are optimistic with respect to the underlying transition function.
But the authors should also note that the word "optimistic" has a very special meaning in RL and in heuristic search or symbolic planning in general. Optimistic is often used as a synonym for admissible, and unfortunately the symbolic models studied in this paper are neither optimistic nor admissible with respect to the expected rewards optimized by Q-learning. Note that one may have a very low-reward action that leads to the goal in 1 step, but such an action should never be chosen when a longer trajectory can reach the goal at a much higher reward. The current approach will miss that. For that reason, I feel that the use of the word "optimistic" may confuse readers in the future. Perhaps the authors could think about this. The authors should note that the R-max algorithm that the authors cited led to many other ways of using optimism in RL, e.g. https://icml.cc/Conferences/2010/papers/546.pdf or https://www.ifaamas.org/Proceedings/aamas2012/papers/1C_1.pdf or https://www.ifaamas.org/Proceedings/aamas2010/pdf/01%20Full%20Papers/06_06_FP_0313.pdf. The use of a diverse planner is good, but the point of the methods mentioned above is that they can do much better than a diverse planner, which can go in a wrong direction and explore useless states.

* It seems to me that the authors could survey the literature on symbolic planning in deterministic domains where the PDDL operators are learned from data. I am sure that there must exist papers in which this problem was studied. I think that some literature review on this would be useful in this paper.

Technical Quality: 2 fair Clarity: 2 fair

Questions for Authors:

* What is the difference between online and offline RL in lines 24-25?
* The sentence in line 81 criticizes [22], but the point made in that sentence is not clear to me.
* Def. 1 is confusing to me. If the action sequence has k actions in the full symbolic model, the optimistic sequence may have only one action, which would lead to the goal straight away.
I am not sure why both sequences have the same length in the definition. Also, the definition itself is cryptic to me, and only the subsequent paragraph managed to explain the idea of this particular optimism to me. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair

Limitations:

* The discount factor gamma should be part of the MDP definition. This is part of the environment, not the algorithm. This would also make the paper clearer. For a moment, I thought that the discount factor was 1, but then, given only the goal reward, the length of the trajectory would not matter. But line 104 indicates that gamma is indeed 1, because "any" trajectory is seen as optimal. There are some conflicting statements in the paper.
* There are a few small typos and grammatical errors in various places, e.g., the exponent in line 163, or PDDL written as pddl on p. 7.
* Line 420: how were the Q-values initialized?
* Appendix mentions Fig. 5, but the figure is missing.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments, and we will make sure to include pseudocode for the updated Q-learning algorithm. We will make sure to include a more detailed discussion of the existing work in the space of learning symbolic models. We would also like to point out that the supplementary file includes a comparison to a popular symbolic model learning algorithm.

Weaknesses:

Contributions to RL - Insofar as planning is the problem of using fully defined models to generate a course of action, and RL is the problem of learning the best course of action from experience, we are solving an RL problem. Additionally, we show theoretically that assuming a symbolic structure provides a number of advantages. We empirically show that our method outperforms a state-of-the-art hierarchical reinforcement learning approach on popular RL benchmarks.

LRTDP -- The LRTDP algorithm is intended for non-deterministic domains. On deterministic domains, it will perform many unnecessary computations. We are not aware of any evidence that this particular algorithm would work better than the popular Q-learning. The more efficient planning algorithms that do not require action models would be Greedy Best-First Search (GBFS) with weak evaluators that do not require the action model, like the goal-count heuristic, or Best-First Width Search. However, these algorithms can benefit greatly from knowledge of action models, and that is what most symbolic planners are doing, including the diverse planners used here. It is a very well-known fact within the planning community that these search algorithms perform much better when equipped with evaluators obtained from action models, and therefore we did not feel the need to perform such an experiment.

Optimism -- Please note that in this paper, we are trying to formalize and learn an optimistic representation of the underlying model.
As discussed in Definition 1, we define this optimism in terms of the number of traces allowed (i.e., traces with non-zero probability in the model). In the paper, we were careful to frame all our claims of optimality in terms of learning an optimal representation, and not necessarily in terms of the optimal policy identified on top of the learned model. We will make sure to update the paper to clarify any such confusion. Further enhancements to the method: We really appreciate the links to the works that extend R-max. Given the fact that the nature of the optimistic estimation process is drastically different from R-max, it is unclear how we could map the intuitions from these works to our current approach. However, we agree this would make for an interesting next step for our work. Questions: Offline vs. online RL in the context of symbolic model learning: The methods we referred to as belonging to offline RL assume access to a set of traces. On the other hand, the method we employ directly interacts with the environment (or a simulator). Definition 1 - Please note that Definition 1 doesn't limit itself to just the optimal sequence possible under the model, but rather considers all sequences with a non-zero probability. As such, the set of all traces (i.e., state-action sequences with non-zero probability) under the learned approximation is guaranteed to be a superset of the traces possible under the original model. We will work on updating the definition to make it easier to understand. Automatic synthesis of symbols: Note that we are building models on top of symbols (predicates, objects, actions, etc.) that are provided by the user. As such, one would expect the user to be able to make direct sense of the learned model descriptions, or at the very least, we could expect these models to be used as inputs to existing explanation generation methods for symbolic models.
However, with methods that directly synthesize the symbols as well, there is no guarantee that people can make sense of the learned model descriptions. Discount Factor - Thank you for pointing out this typo; we will make sure to include the discount factor in the model definition. We would like to note that our method can support a discount factor of 1. Given our focus on deterministic models, any policy that only generates finite paths to the goal will be considered optimal. Note that any policy that doesn't support a path to a goal state or includes a loop will not be optimal (as the loop will never terminate). We will update the discussion around line 104 to clarify that this is only true when the discount factor is 1. Q-Value Initialization: They were initialized to zero.

---

Rebuttal Comment 1.1:

Title: Your answers read

Comment: Your answers clarified a few things to me, and the other reviews are helpful to see your work in a better light. As a result, I will increase my score slightly.

---

Reply to Comment 1.1.1:

Title: Response to Reviewer

Comment: We thank the reviewer for taking the time to go over our response and update the score. If you have any other questions about our method, evaluation, or the significance of the contributions, we would be more than happy to provide additional details.
Summary: This paper follows a long line of recent work on trying to combine RL with symbolic reasoning. The main motivation for doing so in this paper is to improve exploration on sparse-reward, goal-oriented tasks, similar to the "taskable RL" setting of Illanes et al. A major limitation of existing work is the need for a human expert to provide a symbolic model upfront. This paper aims to relax that assumption by removing the need to specify the transition logic (in RL parlance). The human still needs to specify how the states are represented symbolically, but an algorithm takes care of learning the model. In a nutshell, the way the algorithm works is by starting with the most permissive model possible, then iteratively asking the planner for a solution, attempting to execute it, and updating the model based on the sampled experience. The preconditions, additions and deletions are updated according to the least restrictive explanation of what just occurred, ensuring that the model remains "optimistic", i.e., any possible trace in the MDP remains possible under the planning model. This guarantees that the algorithm will eventually find a solution, provided one exists. The evaluation shows that the approach learns quickly relative to some standard RL exploration strategies, and also shows that the approach can effectively bootstrap lifted representations learned from previous task instances when tackling new tasks. Strengths: Overall, I thought this was a very interesting paper, based on an intuitive idea. The approach seems more similar to how humans tackle long-term, sparse-reward tasks compared to the standard RL approach of intrinsic rewards. Generally, we humans don't possess a complete model of the environment, so we come up with a *plausible* plan, try it out, then update our models based on what happened.
It always amazes me how complicated ideas like this sound when expressed in planning logic (I'm more of an RL person), but surprisingly I was able to follow most of the core technical details. Largely this is because the paper is very well-written -- unlike a lot of papers I've reviewed recently, this one is actually self-contained, with the authors taking great care to explain the preliminaries and problem setting properly. This was much appreciated as someone who needs to be reminded of all the planning lingo. The coverage of related work is good (the authors come across as being very knowledgeable across many subfields), and as far as I know the approach is original. The experiments are mostly compelling (with some caveats noted below), and I thought the curriculum learning experiment (Section 5.3) was particularly cool! Weaknesses: The main weakness of the paper is one that's already acknowledged by the authors; namely, the assumption that the environment is deterministic. This limits the applicability of the method quite a lot, and I'm not sold from the brief explanation in Section 6 that the method would work well if directly applied to stochastic environments. Moreover, assuming determinism greatly simplifies some aspects of the decision-making problem. This is partly discussed on page 3, but there are a lot of things that one can do to speed up learning that aren't mentioned. For example, rather than applying Q-learning with 1-step updates, you can calculate all possible n-step return estimates and back up from the largest one. Also, until a reward is found, all action-values will be tied at 0, so an e-greedy policy is actually just a uniform random policy until that point (assuming random tiebreaking), which seems woefully inefficient if you know that the environment is deterministic.
It's trivial to learn the transition function in the deterministic setting, so you could adopt a simple strategy like maintaining a list of visited states with untried actions, and actively work towards ticking off that list. I'm not super familiar with R-MAX, but I understand that it's designed for stochastic environments, which also puts it at a disadvantage assumption-wise. I'm not suggesting that the authors derive a brand-new RL algorithm that perfectly exploits determinism just to baseline against (although I'd be very surprised if no-one has tackled this problem before), but I'd be more convinced by the experiments if the baselines were at least slightly optimised for this setting. My only other (minor) criticism of the paper is that it clearly ran into issues with the NeurIPS page limit. Much of the writing has been packed into walls of text to address this, and Figure 1(b) has been compressed to the point where it's very hard to read. While there's little you can do about this (I can't think of anything major that's cuttable), I have to admit I groaned when I opened the paper and saw the formatting. I'd definitely recommend releasing a nicer, less compressed version of the paper as a supplemental sometime in the future, since I think the current format will put off some readers. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could you please go into a bit more detail about what the main challenges would be in extending your approach to stochastic environments? For example, do you envisage learning transition probabilities and using a probabilistic planner, or would you consider something like FOND planning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: As mentioned in my comments above, I think the paper could do a better job of acknowledging the drawbacks of assuming a deterministic environment. This is very downplayed at the moment. It's promised early on that "In Section 6, we will see how we can also apply our methods directly in settings with stochastic dynamics", but then all that's actually given is a one-liner. I don't see any potential ethical concerns with this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
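For illustration, the determinism-aware baseline sketched in the Weaknesses above (maintaining a list of visited states with untried actions and actively ticking it off) could look like the following; the helper names and the toy chain environment are made up for this example, not from the paper:

```python
from collections import deque

def exhaustive_deterministic_search(start, actions, step, is_goal):
    """Systematic exploration of a deterministic environment: keep a
    frontier of (state, plan) pairs and tick off untried actions,
    instead of relying on uniform-random e-greedy. Assumes hashable
    states and a known deterministic `step` oracle (the simulator)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for a in actions(state):
            nxt = step(state, a)            # deterministic transition
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None                             # goal unreachable

# Toy demo: a chain of 6 states where +1/-1 move right/left (clipped).
plan = exhaustive_deterministic_search(
    start=0,
    actions=lambda s: (1, -1),
    step=lambda s, a: min(max(s + a, 0), 5),
    is_goal=lambda s: s == 5,
)
print(plan)  # [1, 1, 1, 1, 1]
```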
Rebuttal 1: Rebuttal: We thank the reviewer for their comments; we will make sure to expand our discussion of how the method could be applied in stochastic settings.

Weakness/Questions:

Ease of learning models in deterministic settings: We would like to point out that a brute-force method will not scale well in the domains we consider, given the sheer state-space sizes involved. A good demonstration of this fact is the symbolic model learning baseline provided in the main pdf of the appendix (Section 3), which relies on random walks to generate different traces.

Extending the method to stochastic settings: Please note that the central notions of the optimism considered in the paper carry over to stochastic settings. We can still start with a model with empty preconditions and effect lists that contain all possible add effects. The challenge then is, of course, how one refines the effects of this model when multiple effects are possible. As alluded to in the paper, the conceptually most straightforward approach would be to treat each qualitatively different outcome one comes across as a new action and introduce it into the model. One can't remove the old action copies directly, as one is unsure whether there might still be a yet-undiscovered outcome of that action which may produce that effect. However, our internal action prioritization system can be updated to ensure that the planner will not try to rely on those outcomes after a certain number of trials have passed. This would, in theory, correspond to effectively learning a determinization of the original model. Of course, one could combine the outcomes to form a FOND-like representation of the model, and we could even associate probability estimates with each observed outcome.

---

Rebuttal Comment 1.1:

Comment: Thanks for your response. I'm just confirming that I've read the other reviews and rebuttals, and I still think the paper should be accepted.
While the paper is a little bit dense, the subject matter is very technical and makes this somewhat unavoidable. Reviewer 6i1K also has some concerns about the deterministic MDP assumption, but for me these aren't enough to sink the paper. It's very interesting work in a novel direction, and I think it's fair enough to leave this weakness for future work, since there's already a lot of contribution here. I'm not concerned by reviewer nfft's review, because it seems like they just didn't "get" the work, whereas I think many readers will. (Even the other negative reviewer, 6i1K, seemed to understand the core idea pretty well and could see some clear strengths.) --- Reply to Comment 1.1.1: Title: Reply to official comment Comment: Thank you for the vote of confidence. We appreciate the time invested in reading our response and other reviews, and we especially appreciate your support. We believe that an opportunity to present our paper at a general machine learning conference of such a caliber would allow us to highlight the advantages of using the tools and ideas developed originally in the context of symbolic planning. If you believe that the paper deserves a higher score, we would be grateful if you upgraded your evaluation.
Summary: The authors present a framework for learning a symbolic representation of an underlying MDP that leverages key concepts from the planning community. Specifically, they use the PDDL formalism to represent a planning problem that is acquired through agent interactions with an MDP. The authors show that this approach can lead to a higher success rate in the underlying planning problems than simply using standard Q-learning.

Strengths:

- The results show that the proposed method is able to learn a model that can be leveraged by the planning agent. The authors show that they are able to learn a symbolic representation of the underlying MDP from the agent's experience.

Weaknesses:

- The main weakness of this paper is its presentation and writing style. I found the text really confusing, as it seems like the text is not self-contained. For example, line 155 "denote grounded instance by replacing the parameter with an object list": what objects? Where is the list coming from? In line 178, "... learn a binary classifier that tests whether a ground truth predicate may be true in a given MDP state...": what do you mean it is learned by collecting positive and negative examples for each ground concept? Algorithm 1: what are the DiversePlanner, UpdateModel, and PruneModel functions?
- It is not clear where the exploration is taking place. The title would lead me to think that it would be a central part of the paper, but in the main text, it appears to be just a comment on the use of the planners.

Technical Quality: 2 fair Clarity: 1 poor

Questions for Authors:

- As a suggestion, it would have been extremely helpful to have a working example throughout the text or some visualization to explain concretely what the method is doing. What exactly is being collected to learn the binary classifier? What are the objects replacing the parameters?
- What was the actual planner used in this evaluation? The text mentions "diverse planners" several times, but what was actually used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: No clear mention of limitations, no concerns on societal issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments; we will work on having a running example that better illustrates the various points made in the paper.

Questions:

Binary classifier: We are using handcrafted classifiers in the experiments. As such, we didn't need to collect positive or negative examples. To clarify, in this work, we assume the predicates to be provided (either hand-crafted or learned separately); our methods are completely agnostic of how the predicates are obtained. The specific passage the reviewer mentioned points to the fact that earlier works have shown that such predicates can automatically be learned from examples.

Diverse Planner: We used the FI-diverse-agl planner provided as part of the ForbidIterative planner. The details are provided (along with other implementation details) in Section 2 of the main file in the supplementary folder.

Weaknesses:

Object: The notion of objects, as used in this paper, corresponds to a concept used within relational representations. This representation scheme makes the ontological commitment that the world can be represented as a set of objects and relationships between the objects. In theory, an object could be anything. However, in practice, they usually correspond to what people would consider to be objects. For example, in the case of the blocksworld domain described in the paper, the objects would be the various blocks that will be stacked on top of one another. The notation for objects is presented in lines 127-128.

Predicates: Please see the answer to question 1.

UpdateModel and PruneModel: These processes are described in Sections 4.1 (lines 242-255) and 4.2 (lines 267-280) of the paper, respectively.
Exploration: As with the case of Rmax, exploration here is automatically performed as part of the planning process. In our case, as the model estimate is refined through interaction, the planner automatically comes up with different paths it thinks will reach the goal. These plans are then tested on the simulator and then used to refine the model. The refined model then provides new plans. We provide a high-level discussion of the overall approach in lines 192 to 207.
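To make the plan-execute-refine loop described here concrete, below is a toy tabular sketch of the same idea: the model optimistically assumes every action may lead to every state (including the goal) until an interaction refutes it. A table-based model stands in for the paper's symbolic machinery; all names and the chain environment are illustrative, not the authors' implementation:

```python
from collections import deque
from itertools import product

def optimistic_explore(states, actions, true_step, start, goal, max_iters=100):
    """Plan with the current optimistic model, execute in the environment,
    refine the model where the plan's assumptions were refuted, and replan."""
    # optimistic model: any action might lead anywhere, including the goal
    model = {(s, a): set(states) for s, a in product(states, actions)}

    def plan(model):
        # BFS over the current optimistic model
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            s, p = frontier.popleft()
            if s == goal:
                return p
            for a in actions:
                for nxt in model[(s, a)]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, p + [a]))
        return None

    for _ in range(max_iters):
        p = plan(model)
        if p is None:
            return None                    # goal unreachable even optimistically
        s, refined = start, False
        for a in p:                        # execute the plan in the environment
            nxt = true_step(s, a)
            if model[(s, a)] != {nxt}:
                model[(s, a)] = {nxt}      # deterministic: exactly one outcome
                refined = True
            s = nxt
            if refined:
                break                      # model changed; replan
        if not refined and s == goal:
            return p                       # plan verified against the environment
    return None

# Toy demo: a 4-state chain where "fwd"/"back" move right/left (clipped).
def chain_step(s, a):
    return min(s + 1, 3) if a == "fwd" else max(s - 1, 0)

plan_found = optimistic_explore(range(4), ["fwd", "back"], chain_step, start=0, goal=3)
print(plan_found)  # a plan that reaches the goal, e.g. ['fwd', 'fwd', 'fwd']
```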
NeurIPS_2023_submissions_huggingface
2023
DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation
Accept (poster)
Summary: This paper proposes DiT-3D, an extension of DiT to point cloud data to achieve point cloud generation. With the design of patch embedding and 3D window attention, the proposed model is able to reduce the computation cost. Also, the proposed model is able to directly leverage a pre-trained 2D DiT model by fixing most layers and only fine-tuning several layers, which can reduce the required training time. The results quantitatively verify the effectiveness of the proposed method.

Strengths:

1. The proposed method outperforms existing SOTAs on the unconditional point cloud generation task on different evaluation metrics.
2. The paper is easy to follow and understand.

Weaknesses:

1. The proposed DiT-3D looks like the original DiT with several modifications (simply changing existing 2D techniques into 3D versions). Specifically, it seems that an ECCV 2022 paper [1] has already addressed a similar design of window attention; this paper is just using such a technique on a different task. Therefore, I think the novelty is limited, and more insights should be explained.
2. Previous point cloud generation works such as PVD and LION conducted experiments on multiple conditional generation tasks to verify their design. However, this paper only shows the results of unconditional generation. I believe more experiments, such as point cloud completion or 2D-image-to-3D-point-cloud generation, should be included.
3. The qualitative results shown in the main paper only contain examples generated by DiT-3D, and the supplementary material only additionally contains results from PVD, DPM, and SetVAE. As the two methods that are quantitatively most comparable to DiT-3D, visualizations from LION and MeshDiffusion are not shown in this paper. So it is not clear how DiT-3D outperforms these methods.

A minor issue: the computation resources should be detailed in this paper.
[1] SWFormer: Sparse Window Transformer for 3D Object Detection in Point Clouds (ECCV 2022) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weakness part Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 7dkY, Thank you for the detailed review. We will address your concerns below. > Differences from SWFormer (ECCV'2022) for 3D Object Detection. Although SWFormer used a shifted sparse window operator in each transformer block, our DiT-3D has conceptual differences from their design and implementation. For each SWFormer block, they first applied multi-head self-attention (MSA) N times on all valid voxels within the same window, then performed a shifted sparse window partition to re-generate the sparse windows, and processed the shifted windows with another M self-attention layers. However, our DiT-3D simply reduces the complexity of the self-attention operator in Equation (2) from $O(L^2)$ to $O(L^2/R^3)$ using a single attention layer for each block. Furthermore, as we discussed in Sec 3.4 in the main paper, we introduced multiple different and efficient designs for 3D shape generation compared with DiT on 2D image generation. We are the first to propose efficient 3D window attention in the transformer blocks for reducing the complexity of the self-attention operator in DiT. We proposed to add a devoxelization operator to the final output of the last linear layer of DiT to produce the noise prediction in point cloud space. Our contribution is not just proposing a new transformer for 3D point cloud generation, but also investigating the properties of a plain diffusion transformer on 3D point cloud generation. We demonstrate that the representations learned on ImageNet have a very positive impact on 3D generation, despite the significant domain gap between 2D images and 3D point clouds. Meanwhile, given a pre-trained DiT-3D model on source classes, we can use the parameter-efficient fine-tuning approach to extend its applicability to new categories. > Experiments on point cloud completion. Thanks for the suggestion! 
To verify the effectiveness of our DiT-3D on conditional generation, we conducted point cloud completion experiments on ShapeNet and compared with PVD and LION in the table below. Here, we applied Chamfer Distance and Earth Mover's Distance to evaluate the reconstruction results. As can be seen, our method achieves the best performance on all categories (Airplane, Chair, Car).

| Method | Airplane-CD (↓) | Airplane-EMD (↓) | Chair-CD (↓) | Chair-EMD (↓) | Car-CD (↓) | Car-EMD (↓) |
|--------|:------:|:-------:|:------:|:-------:|:------:|:-------:|
| PVD | 0.4415 | 1.030 | 3.211 | 2.939 | 1.774 | 2.146 |
| LION | 0.4035 | 0.9732 | 2.725 | 2.863 | 1.405 | 1.982 |
| DiT-3D (ours) | **0.3521** | **0.9235** | **2.216** | **2.385** | **1.126** | **1.513** |

> Clarification on qualitative and quantitative comparisons. We provided multiple qualitative comparisons with previous baselines (especially PVD, a U-Net based diffusion model on voxelized point clouds) in Figure 2 in the supplementary, and our DiT-3D generates high-fidelity and diverse point clouds of 3D shapes for each category. For the quantitative comparison with LION and MeshDiffusion, our DiT-3D outperforms them significantly on all metrics in terms of fidelity and diversity, as compared in Table 1 in the main paper. Due to rebuttal time constraints, we will add more visualizations of LION and MeshDiffusion to the supplementary. > Computational resources. We used 8 NVIDIA V100-32GB GPUs for the experiments. --- Rebuttal Comment 1.1: Title: Following comment Comment: Thank the authors for the reply. Although I still think the novelty is limited, I appreciate the effort of conducting new experiments. Therefore, I will raise my rating to 4. --- Reply to Comment 1.1.1: Title: Response to following comment Comment: Dear Reviewer 7dkY, Thank you for your prompt response and for raising your rating. 
We appreciate your feedback and would like to address your concerns regarding the novelty of our work and provide further clarifications. While it is true that the technique of window attention has been previously explored in the domain of 2D methods, it is essential to highlight the distinctive implementation and design aspects of our proposed DiT-3D, which differentiate it from the shifted sparse window operator in SWFormer [1]. Furthermore, it is important to note that our paper's novelty surpasses the sole design of an efficient window attention mechanism within a 3D diffusion transformer. Our work introduces a comprehensive framework that leverages a plain diffusion transformer to achieve state-of-the-art performance in 3D point cloud generation. To address your concerns, we would like to emphasize the following key distinctions between our DiT-3D and SWFormer [1]: 1. **Input Representation**: Unlike SWFormer [1], which exclusively operates on 2D voxel inputs, our DiT-3D operates on 3D voxel patch embeddings. This distinction necessitates the integration of 3D positional embeddings into our approach. 2. **Nearest Neighbor Aggregation**: SWFormer [1] employs nearest neighbor aggregation, where each striding window selects the nearest neighbor non-empty voxel feature from the center. In contrast, our DiT-3D reshapes and maps the input voxel feature tokens, resulting in reduced length. Subsequently, global attention is applied, and the aggregated features are unpartitioned to restore the original input tokens. 3. **Shifting**: SWFormer [1] utilizes a shifted sparse window partitioning strategy to propagate information. Conversely, our DiT-3D employs a straightforward non-overlapping window attention mechanism without any shifting scheme. 4. **Buckets**: SWFormer [1] employs bucketing to group Bird's Eye View (BEV) voxels into non-overlapping windows and pads the sequence length to a fixed size. 
In contrast, our DiT-3D does not utilize any bucketing strategy within its window attention mechanism. 5. **Hierarchical Transformer Architecture**: SWFormer [1] primarily relies on hierarchical sparse window transformer blocks, whereas our approach utilizes global window attention blocks to aggregate input voxel features. The hierarchical structure employed in SWFormer [1] significantly differs from the architecture adopted in our work. We hope this clarification strengthens the understanding of the unique contributions and novelty of our work. Thank you again for your valuable feedback, and we look forward to addressing any further concerns you may have.
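The non-overlapping 3D window attention debated in this thread can be illustrated with plain index arithmetic. The sketch below is a minimal illustration, not the authors' implementation; the function names and the flat z-y-x token ordering are assumptions made for the example.

```python
def window_partition(tokens, n, r):
    """Partition a flat list of n**3 voxel tokens (z-y-x order) into
    non-overlapping 3D windows of r**3 tokens each.

    tokens: list of length n**3; n must be divisible by r.
    Returns (n // r)**3 windows, each a list of r**3 tokens; self-attention
    would then run independently inside each window.
    """
    assert n % r == 0
    m = n // r  # windows per side
    windows = []
    for wz in range(m):
        for wy in range(m):
            for wx in range(m):
                win = []
                for dz in range(r):
                    for dy in range(r):
                        for dx in range(r):
                            z, y, x = wz * r + dz, wy * r + dy, wx * r + dx
                            win.append(tokens[(z * n + y) * n + x])
                windows.append(win)
    return windows


def window_unpartition(windows, n, r):
    """Inverse of window_partition: restore the original flat token order."""
    m = n // r
    tokens = [None] * (n ** 3)
    i = 0
    for wz in range(m):
        for wy in range(m):
            for wx in range(m):
                win = windows[i]
                i += 1
                j = 0
                for dz in range(r):
                    for dy in range(r):
                        for dx in range(r):
                            z, y, x = wz * r + dz, wy * r + dy, wx * r + dx
                            tokens[(z * n + y) * n + x] = win[j]
                            j += 1
    return tokens
```

With n = 4 and r = 2, the L = 64 tokens split into 8 windows of 8 tokens each, so the per-layer attention cost drops from 64² pairwise scores to 8 · 8² = 512, consistent with the O(L²) → O(L²/R³) reduction discussed above.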
Summary: This paper tackles the task of 3D generation. Inspired by the recent progress of utilizing transformers in 2D image generation with diffusion processes (DiT [1]), this paper proposes to replace the common U-Net in 3D diffusion models with a plain transformer. To adapt the DiT to the 3D scenario, authors make several modifications to the vanilla 2D DiT. Experiments on ShapeNet demonstrate improvements over baselines. Strengths: This paper tackles an important task of 3D generations that is important for many downstream tasks. The paper is generally well-written and easy to follow. Experiments are thorough and convincing. Weaknesses: My main concerns are about the incremental development of the proposed DiT-3D based on DiT [1]. Specifically, a. **Diffusion on voxelized point cloud** has been studied in [12]; b. **3D positional embeddings** is a natural/must-have modification from DiT's 2D positional encodings; c. **3D window attentions** may not be necessary. Actually, I am quite confused why we need the following procedure: point cloud -> voxel (Sec. 3.2 Voxelized point clouds) -> patches in voxels (Sec. 3.2 patch embeddings) -> reshape patches in voxels into 3D window (Sec. 3.2 3D window). I think these hierarchical steps essentially just change the **actual voxel size**. Then why don't we just have a voxel with a resolution of $(p \cdot R)^3$ at the very beginning? Here $p$ is the patch size the author used in "3D Positional and Patch Embeddings"(Sec. 3.2) and $R$ is the number of patches for "3D window" (Sec. 3.2). And this does not prevent authors from applying 3D convolution to exchange information (L179). If the above abstraction/simplification is correct, it seems like the major modifications to 3D are just changing from image patches to voxels. d. 
Authors state > These results indicate that our DiT-3D can support flexible transferability on modality and domain, which is different from previous 3D generation methods [12, 13] based on U-Net as the backbone of DDPMs (L325-327). Can the authors explain whether there are some experiments to support this statement? As in Tab. 3, we only have results from DiT-3D without any baselines. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Authors state > 3D shape generation is a challenging and important problem that seeks to synthesize high-fidelity point clouds ... (L27) I do not think this is a proper statement as there are many 3D representations besides point clouds, e.g., mesh [15] or implicit ones [A, B]. 2. L190 "Q, K, V have the same dimensions" where the notation "V" is the same as the voxel resolution in L176. Please modify it to make readers be able to distinguish the two. 3. L226 "It’s worth noting that we initialize $\gamma$ to 1, which is then multiplied with the frozen layers": $\gamma$ is not explained. 4. L266 "trained on point clouds (l-GAN) and latent variables (l-GAN)": duplicated 1-GAN. [A] 3D Neural Field Generation using Triplane Diffusion. CVPR 2023 [B] SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation. CVPR 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No limitations are provided by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer trpZ, Thank you for the detailed review. We will address your concerns below. > Diffusion on voxelized point clouds has been studied in PVD. We agree that diffusion on voxelized point clouds has been used in PVD, but we are the first to use a plain diffusion on voxelized point clouds. Meanwhile, directly applying the transformer to denoise the point-voxel features does not work. Therefore, we designed a devoxelization layer at the end to denoise the original point clouds. > 3D positional embeddings. Although 3D positional embeddings are a natural modification of 2D positional embeddings, we are the first to leverage 3D positional embeddings in a plain diffusion transformer for 3D point cloud generation. Furthermore, in Table 2, we validated the necessity of 3D positional embeddings in our diffusion transformer to generate meaningful 3D point clouds. > Clarification on 3D window attentions. The 3D window attention designed in the diffusion transformer blocks propagates patch-level point-voxel features with efficient memory usage, and we do not change the actual voxel size. If we used a voxel resolution of (p·R)^3 at the very beginning, we would lose much of the semantics of the original input and perform worse at denoising the original point clouds. Meanwhile, we do not want to involve 3D convolution in our transformer, as our target is to propose a plain diffusion transformer without convolution layers for 3D point cloud generation. To verify the advantage of 3D window attention over 3D convolution, we replaced the 3D window attention in our DiT-3D with 3D convolution at a voxel size of (16)^3 and compared the generation results on the Airplane category in the table below. Our method with 3D window attention achieves the best results on both metrics of quality and diversity. 
| Method | 1-NNA-CD (↓) | COV-CD (↑) |
| ---- | :----: | :----: |
| 3D convolution | 71.57 | 42.23 |
| 3D window attention | **62.35** | **53.16** |

> Comparison regarding flexible transferability on modality and domain. Previous methods mainly used diverse U-Net architectures as the diffusion model for the denoising process, and they do not support parameter-efficient fine-tuning from 2D modalities. However, our DiT-3D uses a plain transformer-block architecture for diffusion, and we can efficiently transfer the weights of DiT-2D pre-trained on ImageNet to our DiT-3D transformer blocks. In Table 3, we demonstrate the effectiveness of our DiT-3D in supporting modality transferability of the proposed approach from 2D ImageNet pre-trained weights to 3D generation with parameter-efficient fine-tuning. In addition, by training only 0.09 MB of model parameters from the source class to the target class, our DiT-3D can achieve comparable quality and diversity in terms of all metrics. > Clarification on the proper statement about 3D shape generation. Thanks for pointing this out. We will correct it by making it clearer that 3D point cloud generation is a challenging and important problem that seeks to synthesize high-fidelity point clouds using generative models. We will also add the provided citations for discussion of generation based on other 3D representations. > Notation "V" as the voxel resolution. We have replaced the notation of the voxel resolution $V$ with $v$. > $\gamma$ is not explained. $\gamma$ refers to the learnable scale factors in the transformer blocks of the diffusion model. We will add this clarification to the revision. > Duplicated l-GAN. We have fixed it. The first one is r-GAN. > No limitations. We provided the discussion on limitations in L48-52 in the supplementary. Our plain diffusion transformer on point clouds achieves improved performance in generating high-fidelity and diverse 3D shapes. 
However, we have not explored the potential of other 3D modalities or text-to-3D generation. We plan to leave this for future work. --- Rebuttal Comment 1.1: Title: Additional response to the reviewer Comment: Dear Reviewer trpZ, Thank you for your detailed review and the valuable feedback. We have carefully addressed each of your concerns and provided clarifications in our previous response. We would like to kindly request your response to the provided explanations and revisions. We appreciate your thorough evaluation of our work, and your feedback will greatly contribute to the improvement of our manuscript. Thank you for your continued engagement and support. --- Rebuttal Comment 1.2: Title: Rebuttal Reply Comment: First, I appreciate the authors' time and effort in addressing my concerns. Regarding comparing to PVD: I am not sure whether I fully get the authors' argument. Can the authors clarify what is meant by "we are the first to use a plain diffusion on voxelized point clouds"? I think PVD is also a plain diffusion process. Regarding 3D positional encoding: I still think the novelty is quite limited for claiming 3D positional encoding as a major contribution (L73), but I appreciate the authors' effort in showing that this is an important factor for achieving high-quality results (Tab. 2). Regarding 3D window attention, I think my concerns are resolved. However, I am still confused about the process of the series of voxelization and patchification described in Sec. 3.2. Essentially, point clouds first get voxelized into $V^3$. Then, without going through any networks, this gets reshaped into $(V/p)^3$ and fed into the first network (L179). I think essentially, we only have a voxel with a resolution of $(V/p)^3$. In essence, assume the original voxel size is $u$. Why not directly have voxelization for voxel size $u \cdot p$ at the very beginning? Meanwhile, after reading the other reviews and the authors' responses, I think this work provides some benefits to the community. 
However, I am still quite concerned about the limited novelty of the techniques used in the paper. Based on this, I raised my score to 5. --- Reply to Comment 1.2.1: Title: Response to the rebuttal reply Comment: Dear Reviewer trpZ, Thank you for your continued feedback and for raising your score to 5. We appreciate your time and effort in reviewing our paper. We will address your remaining concerns below. Regarding the claim of being the first to use plain diffusion on voxelized point clouds, we apologize for any confusion caused. We acknowledge that diffusion on voxelized point clouds has been studied in the context of PVD. Our statement was intended to highlight that we are the first to use a plain diffusion transformer directly on voxelized point clouds. We will revise our statement to clarify this point in the final version of the paper. We appreciate your understanding of the importance of 3D positional encoding for achieving high-quality results, as demonstrated in Table 2. We will revise the paper to better reflect the significance of this factor and to avoid any overemphasis on its novelty. Regarding the process of voxelization and patchification described in Section 3.2, we apologize for the confusion caused by our explanation. We agree with your point that the resolution of the voxel grid after the reshaping operation becomes $(V/p)^3$. To clarify, the purpose of the reshaping operation is to divide the original voxel grid into $(V/p)^3$ non-overlapping patches. These patches are then processed by the network in a patch-wise manner to facilitate memory efficiency and computational feasibility. We chose this approach to avoid losing important semantics from the original input and to enable a plain diffusion transformer without convolution layers for 3D point cloud generation. However, we understand your suggestion of voxelizing with a resolution of $u\cdot p$ at the beginning. 
While this could be an alternative approach, it would result in a voxel size that is p times larger than the original voxel size $u$. This would significantly reduce the level of detail and potentially affect the performance of denoising the original point clouds. We will make sure to clarify this explanation in the revised manuscript to provide a clearer understanding of our approach. We appreciate your concerns and the constructive feedback you have provided throughout the review process. We will carefully consider all your suggestions and incorporate them into the final version of the paper to improve the clarity, novelty, and overall quality of our work. Thank you once again for your time and valuable input.
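The point-to-voxel-to-patch pipeline debated in this exchange can be sketched numerically. This is a minimal illustration under simplifying assumptions (binary occupancy in a unit cube, flat z-y-x ordering); the function names are invented for the example and are not the paper's code.

```python
def voxelize(points, v):
    """Map unit-cube points (x, y, z) in [0, 1]^3 to a v**3 binary
    occupancy grid, returned as a flat list in z-y-x order."""
    grid = [0.0] * (v ** 3)
    for x, y, z in points:
        i = min(int(x * v), v - 1)
        j = min(int(y * v), v - 1)
        k = min(int(z * v), v - 1)
        grid[(k * v + j) * v + i] = 1.0
    return grid


def num_patch_tokens(v, p):
    """Token count after patchifying a v**3 voxel grid with patch size p:
    the transformer sees (v // p)**3 patch tokens, not v**3 voxels."""
    assert v % p == 0
    return (v // p) ** 3
```

For example, a grid of V = 32 with patch size p = 4 yields (32/4)³ = 512 tokens, which is the resolution-collapse point the reviewer raises: after patchification, only the $(V/p)^3$ token grid is exposed to attention.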
Summary: This paper proposes to adapt the Diffusion Transformer [1] to the class-conditional 3D point cloud generation task. In order to achieve this, the authors propose (1) to apply diffusion/infusion directly to point clouds rather than work in a latent space; (2) to transform input point clouds into voxel grids so transformers can be applied to tokens extracted from 3D grids in a straightforward way; (3) to reduce the complexity, voxel features are processed in patches to produce patch tokens, and self-attention in the transformer is modified to aggregate tokens in a window of a predefined size; (4) final voxel features are devoxelized back into points for per-point noise prediction in the infusion process. Experiments show promising results in class-specific point cloud generation on the ShapeNet dataset, various ablation studies demonstrating the importance of different components, and some additional transferability studies that use selective fine-tuning to adapt models pretrained on one data modality/domain to another. Strengths: The main strength of the paper lies in experimental results beating the state of the art with a novel approach, not based on prior 3D point cloud generation works. Although it is an adaptation of Diffusion Transformers working with images to 3D point clouds, such a transition from one data type to another is not trivial, so it is remarkable that the authors made it work in this setting. Weaknesses: In my opinion the main weakness of the paper is the quality of the presentation. The text can still be polished to improve readability and correct some mistakes. The authors claim across the paper (e.g. L120) that they apply diffusion to the voxelized point clouds, and I think it is misleading. In fact the diffusion is applied to regular point clouds; it is the infusion network that is designed to operate on voxelized 3D features, but as far as I understand, the denoising is applied on a per-point basis. 
Clarity of explanation can also be improved, since some details are missing (see questions). The authors show a lot of ablation studies, but comparisons to external approaches are limited to the main single-class generation experiment. Qualitative comparisons are moved to the supplementary materials, while it is better to show them in the main paper (the quality of these comparisons could be improved by decreasing point sizes and sampling more points per shape, so some finer details could be examined). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In principle, point-to-voxel and backwards transitions do not necessarily preserve all the points, since if the resolution is low, several points will collapse into a single voxel and will not be recovered during devoxelization. How is this avoided in this work? 2. Since evaluations in this work only include single-class data setups, the expressivity of the proposed model is left underexamined. LION included some experiments showing that their approach is capable of generating realistic point clouds when pretrained on multi-class data. It would be interesting to compare how your class-conditioned model will perform in that setting. 3. What is a «plain transformer»? How is this «plain» property characterized? I think it is better to drop this adjective or change it and be more specific. Minor comments: * The best possible value of the 1-NNA metric is 50%, since it means that the nearest neighbour classifier is incapable of distinguishing true from generated samples. The text and tables should be modified accordingly. * L14: «high computation» -> high computational costs * L119, L340: «operate» -> perform or implement * L234: sentence is broken Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not provide any statements about such limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer QDvm, Thank you for appreciating our approach. We address your comments below. > Clarification on the diffusion for the voxelized point clouds. Thanks for pointing this out. We will correct the claim by making it clearer that our diffusion is applied to regular point clouds; we then designed the infusion network to extract point-voxel features and applied a devoxelization layer at the end to denoise the original point clouds. > Comparison with LION on generation over more categories. Thanks for the suggestion. Beyond the single-class comparison, we trained our DiT-3D on the large-scale ShapeNet-55 dataset with 55 diverse classes covering vehicles, furniture, and daily necessities. We compare the newly trained model with the state-of-the-art point cloud generation model, LION, on Mug and Bottle generation in the table below. Our method achieves the best results in terms of all metrics. We will also decrease the point size and move the qualitative comparisons to the main paper.

| Method | Mug 1-NNA-CD (↓) | Mug COV-CD (↑) | Bottle 1-NNA-CD (↓) | Bottle COV-CD (↑) |
| ---- | :----: | :----: | :----: | :----: |
| LION | 70.45 | 31.82 | 61.63 | 39.53 |
| DiT-3D (ours) | **57.39** | **45.26** | **53.26** | **51.28** |

> How do we avoid points collapsing into a single voxel? This is a good question! In this work, we applied trilinear interpolation to transform the voxels into points, which guarantees that the features mapped to each point are distinct. If we assigned the feature of a grid cell to all points that fall into it using nearest-neighbor interpolation, the points in the same voxel grid cell would always share the same features. > Evaluation on multi-class settings. Please see our second response above. > Plain transformer. The plain transformer refers to a transformer-based architecture that does not use any U-Net architecture for diffusion. 
In this work, we explore a plain diffusion transformer on voxelized point clouds, instead of the U-Net architectures used for denoising in PVD and LION. We will add this clarification to the main paper. > Minor comments. Thanks for spotting these. We will fix them accordingly in the revision. > No limitations. We provided the discussion on limitations in L48-52 in the supplementary. Our plain diffusion transformer on point clouds achieves improved performance in generating high-fidelity and diverse 3D shapes; however, we have not explored the potential of other 3D modalities or text-to-3D generation. We plan to leave this for future work. --- Rebuttal 2: Title: Rebuttal reply Comment: First of all, I'd like to thank the authors for the provided clarifications and additional experiments. After reading all reviews, individual rebuttals, and author replies I still stand by my positive evaluation. Even if the conceptual novelty is not necessarily striking, this work provides an effective non-trivial adaptation of prior approaches to a novel (for that type of approaches) data type and achieves noticeable improvements over the recent state of the art in multiple applications. At the same time, I want to point out that a lot of these experiments were provided during the rebuttal period, so I strongly encourage the authors to continue improving the paper by incorporating the additional experiments in multi-category setups and different applications provided in other replies and by overall polishing of the paper. --- Rebuttal Comment 2.1: Title: Additional response to the reviewer Comment: Dear Reviewer QDvm, We sincerely appreciate your positive evaluation of our work and your acknowledgment of the non-trivial adaptation and noticeable improvements we achieved in multiple applications. We are grateful for your careful consideration of the reviews, individual rebuttals, and author replies. 
We take your suggestion to heart and assure you that we are committed to further enhancing our paper. We recognize the value of incorporating the additional experiments, particularly in multi-category setups and different applications, as suggested in our replies. We will diligently work on incorporating these experiments to provide a more comprehensive evaluation of our proposed method. Furthermore, we acknowledge your recommendation to polish the paper overall, and we will dedicate the necessary effort to ensure its clarity, coherence, and academic rigor. We are committed to presenting our research in the best possible manner and providing readers with a clear understanding of the contributions and implications of our work. Thank you for your valuable feedback and continued support. We greatly appreciate your guidance, and we will strive to make the necessary improvements as we move forward with the revision process.
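The trilinear devoxelization answer given in this thread (distinct per-point features, unlike a nearest-neighbor lookup) can be sketched for a single scalar feature channel. This is a minimal sketch under illustrative assumptions (scalar values stored at the corners of a unit-cube grid, flat z-y-x ordering); it is not the authors' implementation.

```python
def trilinear_sample(grid, v, point):
    """Sample a scalar feature at a continuous point in [0, 1]^3 by
    trilinear interpolation over a v**3 grid of corner values
    (flat list, z-y-x order). Two distinct points inside the same cell
    receive distinct features, unlike nearest-neighbor assignment."""
    x, y, z = (c * (v - 1) for c in point)
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, v - 1)
    y1 = min(y0 + 1, v - 1)
    z1 = min(z0 + 1, v - 1)
    fx, fy, fz = x - x0, y - y0, z - z0

    def g(i, j, k):
        return grid[(k * v + j) * v + i]

    # Interpolate along x on the four edges, then along y, then along z.
    c00 = g(x0, y0, z0) * (1 - fx) + g(x1, y0, z0) * fx
    c10 = g(x0, y1, z0) * (1 - fx) + g(x1, y1, z0) * fx
    c01 = g(x0, y0, z1) * (1 - fx) + g(x1, y0, z1) * fx
    c11 = g(x0, y1, z1) * (1 - fx) + g(x1, y1, z1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

With a 2³ grid whose corner values grow along x, points at different x inside the single cell get smoothly varying features, which is exactly why co-located points are not forced to share one voxel feature.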
Summary: This paper proposes a Diffusion Transformer for 3D shape generation (point cloud), named DiT-3D, which conducts the denoising process on voxelized point clouds. Technically, it introduces 3D positional and patch embeddings, as well as 3D window attention. The main experiments are done on ShapeNet. In addition, the authors empirically show that the pre-trained DiT-2D checkpoint on ImageNet can significantly improve DiT-3D on ShapeNet. Strengths: - The paper is clearly written and easy to follow. - The authors extend the 2D window attention operator to 3D. - The authors empirically show the benefit of leveraging a 2D transformer pre-trained on natural images for 3D generation. Weaknesses: 1. It seems that an important baseline or reference is missing: "Point-E: A System for Generating 3D Point Clouds from Complex Prompts". In L172-L173, the authors claim that "We tried to train the diffusion transformer on point coordinates, but it did not work since point clouds are sparsely distributed in the 3D embedding space". However, given Point-E, it sounds not that convincing. Can the authors explain why Point-E is not mentioned or compared in the paper? 2. It is hard to tell whether the generated shape is of high fidelity if the number of points is only 2048 (L245). It is more convincing if the number of points is larger than 4096 (Point-E) or 16384 (usually used in high-fidelity point completion). In addition, it will be visually better if the authors can present the 3D shapes in the format of mesh (like Point-E, MeshDiffusion) or colored point clouds. Currently, it is hard to tell whether the point cloud is of high fidelity. Visually, GET3D and Point-E look better than this work. 3. Only ShapeNet is used. Currently, there are more and more 3D datasets. It will be better if the authors can show results on more categories of ShapeNet, ABO, or (a subset of) Objaverse. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Since the evaluation metric is based on Chamfer Distance or Earth Mover’s Distance, I assume that the authors sample a fixed number of points from the GT mesh (and the predicted mesh if the baseline, e.g., GET3D, is mesh-based), which is 2048. I am not sure whether the metric might favor the proposed method, especially when the number of points is small, as details can be missed under such a condition. The authors can try to use a larger number of points to compare baselines with the proposed method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation is not adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer zPmE, Thank you for appreciating our approach. We address your comments below. > Why Point-E is not mentioned or compared in the paper. Thanks for pointing this out. The reason why Point-E [A] is not mentioned in the initial manuscript is that our main focus is exploring a plain diffusion transformer for point cloud generation, instead of the text-to-point-cloud generation in Point-E. Compared to Point-E, our proposed DiT-3D has conceptual differences. The transformer in Point-E is conditioned on CLIP features from a synthetic rendered view generated by a fine-tuned GLIDE [B] model from a text prompt, while ours is simply conditioned on a class. They do not use positional embeddings for the input, whereas we need to apply 3D positional embeddings to the voxelized point clouds to maintain the voxel structure locality. We will add this discussion to the revision. > Evaluation with more than 2,048 points. Thanks for the suggestion. We initially followed the commonly used generation setting in PVD and LION to sample 2,048 points from each point cloud in the ShapeNet benchmark for evaluation. Meanwhile, the ShapeNet benchmark does not contain colored point clouds. To further demonstrate the effectiveness of our proposed DiT-3D on high-fidelity point cloud generation, we resampled 4,096 points and compared our method with PVD and LION in the table below. Our DiT-3D achieves the best results in terms of all metrics.

| Method | 1-NNA CD ($\downarrow$) | COV CD ($\uparrow$) |
| ---- | :----: | :----: |
| PVD | 62.76 | 37.25 |
| LION | 57.86 | 52.18 |
| DiT-3D (ours) | **51.19** | **57.39** |

> More categories of ShapeNet. To validate the generalizability to more categories, we trained our DiT-3D on the large-scale ShapeNet-55 dataset with 55 diverse classes covering vehicles, furniture, and daily necessities. 
We compare the newly trained model with the state-of-the-art point cloud generation model, LION, on Mug and Bottle generation in the Table below. Our method achieves the best results in terms of all metrics. | Method | Mug 1-NNA-CD (↓) | Mug COV-CD (↑) | Bottle 1-NNA-CD (↓) | Bottle COV-CD (↑) | | ---- | :----: | :----: | :----: | :----: | | LION | 70.45 | 31.82 | 61.63 | 39.53 | | DiT-3D (ours) | **57.39** | **45.26** | **53.26** | **51.28** | > Comparisons on a larger number of points. Please see the second response to the reviewer. > Limitations. Although our plain diffusion transformer on point clouds achieves improved performance in generating high-fidelity and diverse 3D shapes, we have not explored the potential of other 3D modalities or text-to-3D generation. We plan to leave this for future work. **References** [A] Nichol, et al. "Point-E: A System for Generating 3D Point Clouds from Complex Prompts", arXiv preprint arXiv:2212.08751 (2022). [B] Nichol, et al. "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models", arXiv preprint arXiv:2112.10741 (2022). --- Rebuttal Comment 1.1: Comment: Thank the authors for the extra results. - My concern about more categories has been resolved. - My concern about the number of points is partially resolved, as only point-based generation methods are compared. I assume that "high-fidelity" in the paper means the quality is better than other point-based generation methods, instead of actually containing many details. The authors can include more visualization, e.g., a comparison with other methods, especially mesh-based methods. - It seems that Point-E can still be compared, as the class can also be used as a text prompt. I think text-to-point-cloud is a superset of the topic studied in this paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer Comment Comment: Dear Reviewer zPmE, We sincerely appreciate your feedback and the opportunity to address your concerns.
We are grateful for your understanding of the term "high-fidelity" in our paper, which indeed refers to the superior quality of point clouds generated by our DiT-3D model, as demonstrated in our illustrations. In response to your suggestion, we will include a comparison of our method with other approaches, particularly mesh-based methods, in the revised version of the paper. By providing visualizations and performance evaluations, we aim to offer a comprehensive analysis that encompasses a broader range of generation techniques. Furthermore, we acknowledge your point about the potential comparison with Point-E. We have conducted additional experiments comparing our DiT-3D model with Point-E, wherein we utilized the class as a text prompt. The training was performed on the large-scale ShapeNet-55 dataset, which comprises 55 diverse classes encompassing vehicles, furniture, and daily necessities. Specifically, we evaluated the performance of Mug and Bottle generation, and the results are presented in the Table below. Our method consistently outperforms Point-E across all metrics, highlighting the superiority of our approach. | Method | Mug 1-NNA-CD (↓) | Mug COV-CD (↑) | Bottle 1-NNA-CD (↓) | Bottle COV-CD (↑) | |------------------|:------:|:-------:|:------:|:-------:| | Point-E | 65.73 | 36.78 | 58.16 | 43.72 | | DiT-3D (ours) | **57.39** | **45.26** | **53.26** | **51.28** | We sincerely appreciate your valuable feedback, which has contributed to improving the comprehensiveness and rigor of our study. Thank you again for your valuable feedback, and we look forward to addressing any further concerns you may have.
Rebuttal 1: Rebuttal: Dear all reviewers, We thank each of you for generously dedicating your valuable time and expertise to reviewing our work. We acknowledge and sincerely appreciate the insightful comments and critiques provided by all the reviewers. In response to your invaluable feedback, we have made significant revisions to our manuscript, aiming to address each of your concerns comprehensively and rigorously. Reviewer trpZ and Reviewer 7dkY, we kindly request that you reconsider your decision, given that we have taken utmost care to thoroughly address the main comments raised in your reviews. Once again, we express our sincere appreciation for your valuable contributions to the review process. Your expertise and guidance have been invaluable in improving the quality of our work. We remain committed to continuous discussion and eagerly await your final decision.
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper introduces DiT-3D, a groundbreaking diffusion transformer for 3D shape generation. It addresses the limitations of previous 3D diffusion methods that mainly relied on the U-Net architecture. DiT-3D leverages the power of Transformers to perform the denoising process directly on voxelized point clouds, resulting in superior scalability and high-quality generations. The authors incorporate 3D positional and patch embeddings to aggregate input from voxelized point clouds and mitigate the computational cost of self-attention by employing 3D window attention in Transformer blocks. The proposed DiT-3D achieves state-of-the-art performance on the ShapeNet dataset, showcasing its ability to generate diverse and high-fidelity 3D point clouds. Strengths: 1. DiT-3D achieves state-of-the-art performance in single-category point cloud generation, demonstrating its effectiveness in generating high-quality 3D shapes. 2. The utilization of pre-trained DiT-2D checkpoints from ImageNet to improve DiT-3D on ShapeNet showcases the transferability of 2D diffusion models to the 3D domain, which is an interesting and promising approach. 3. The model is concise, and the paper is well-written, accompanied by clear and visually appealing illustrations. Weaknesses: 1. The authors only conducted unconditional generation experiments for single-category and three-category cases, limiting the demonstrated range of applications. 2. Although the window attention technique is employed to mitigate computational costs, there are concerns regarding the generation speed when operating on 32 * 32 * 32 (or even 64 * 64 * 64) voxel grids. 3. Most DiT-3D models, except for DiT-3D-S, have parameters exceeding 100 million, considerably higher than other existing 3D generation methods. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Have the authors considered exploring diffusion in the latent space? This may potentially enhance the overall performance and inference speed. 2.
It would be more valuable to investigate multi-category generation, such as training on entire ShapeNet-13, ShapeNet-55, or even larger datasets like Objaverse [Deitke et al., 2023]. Previous work [Sanghi et al., 2022, 2023] suggests that voxel representation could aid in generalization to some extent. [Deitke et al., 2023] Objaverse: A universe of annotated 3d objects. In CVPR. [Sanghi et al., 2022] CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation. In CVPR. [Sanghi et al., 2023] CLIP-Sculptor: Zero-Shot Generation of High-Fidelity and Diverse Shapes from Natural Language. In CVPR. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N.A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
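The points above about voxel grids and window attention hinge on how a point cloud is voxelized; a minimal sketch (an illustrative NumPy occupancy voxelization, not DiT-3D's actual implementation) is:

```python
import numpy as np

def voxelize(points, resolution=32):
    """Scatter an (N, 3) point cloud in [-1, 1]^3 onto a dense occupancy grid."""
    # Map coordinates from [-1, 1] to integer voxel indices in [0, resolution - 1].
    idx = np.round((points + 1.0) / 2.0 * (resolution - 1)).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```

A 64x64x64 grid has 8x as many voxels as 32x32x32, which is why full self-attention over voxel patches becomes expensive at higher resolutions and window attention is needed to keep costs manageable.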
Rebuttal 1: Rebuttal: Dear Reviewer RjmT, Thank you for appreciating our approach. We will address your comments below. > Only single-category and three-category cases. Thanks for pointing this out. To validate the generalizability to more categories, we train our DiT-3D on the large-scale ShapeNet-55 dataset with 55 diverse classes covering vehicles, furniture, and daily necessities. We compare the newly trained model with the single-category and three-category cases on Chair generation in the Table below. Our model trained on all 55 categories achieves competitive results in generating high-fidelity point clouds and the best performance on diversity. | Train Class | Test Class | 1-NNA CD ($\downarrow$) | COV CD ($\uparrow$) | | ---- | :----: | :----: | :----: | | Chair | Chair | **51.99** | 54.76 | | Chair, Car, Airplane | Chair | 53.35 | 52.81 | | All 55 classes | Chair | 52.68 | **57.87** | > Concerns regarding the generation speed. This is a good suggestion! To address this concern, we tested the Chair generation speed of our DiT-3D in the Table below, measured on a single V100-32GB GPU with a batch size of 1. When the voxel size is larger, our method with efficient window attention achieves better generation results while maintaining similar inference times. | Voxel Size | 1-NNA CD ($\downarrow$) | COV CD ($\uparrow$) | Inference Time ($\downarrow$) | | ---- | :----: | :----: | :----: | | 32x32x32 | 51.99 | 54.76 | **2.5s** | | 64x64x64 | **50.32** | **55.45** | 3.3s | > Higher parameter counts than other existing 3D generation methods. Sorry for causing the confusion. To clarify this, we compared our DiT-3D-S with PVD and LION on the parameters and performance of Airplane generation in the Table below. Compared to PVD, DiT-3D-S has comparable parameters but achieves significantly better generation performance. Compared to LION, the recent state-of-the-art method, our DiT-3D-S with roughly one-third of the parameters achieves much better performance.
| Method | Params | 1-NNA CD ($\downarrow$) | COV CD ($\uparrow$) | | ---- | :----: | :----: | :----: | | PVD | **27.65M** | 73.82 | 48.88 | | LION | 110M | 67.41 | 47.16 | | DiT-3D-S | 32.81M | **62.35** | **53.16** | > Diffusion in the latent space. This is a good suggestion! While it is possible to extend our diffusion transformer to the latent space, it would require pre-training a strong 3D encoder-decoder on this data. Meanwhile, our DiT-3D has already achieved strong overall performance while maintaining comparable generation speed. We will leave this for future work. > Multi-category generation on ShapeNet-55. Please see the first response to the reviewer. --- Rebuttal Comment 1.1: Comment: I am grateful for the explanations provided by the authors, which address my concerns to some extent. I'll keep my positive rating. Title: Response to the authors --- Reply to Comment 1.1.1: Title: Response to the reviewer Comment: Dear Reviewer RjmT, We express our sincere gratitude for the valuable feedback you have provided on our work. Your insightful comments and suggestions have been instrumental in enhancing the quality and clarity of our research.
null
null
null
null
null
null
SpecTr: Fast Speculative Decoding via Optimal Transport
Accept (poster)
Summary: The paper proposes SpecTr, a new speculative decoding framework for efficient Transformer and LLM decoding. SpecTr uses a combination of a small, fast model for generating draft samples and a larger, more accurate model for scoring and validating them in parallel. Compared to the prior works, the paper presents a formulation of the speculative decoding process through the lens of optimal transport theory and demonstrates the potential for further acceleration by generating multiple drafts in parallel along the batch axis. The authors offer an approximate solution to efficiently solve the optimal transport problem at scale, resulting in significant latency improvements compared to baseline methods and previous speculative decoding approaches. Strengths: The paper presents theoretical justifications for the methods that it proposes. Furthermore, it is the first attempt to incorporate parallelization along batch into the speculative sampling framework for better efficiency. Weaknesses: 1. While the paper proposes theoretical justifications for the proposed method, it rather lacks proper evaluation. Since accelerating decoding processes is a very practical area, the authors should provide a more thorough analysis of the end-to-end latency and text generation performance. In particular: * (a) The paper lacks an evaluation of the impact of the proposed method on text generation performance. Tables 2 and 4 in the paper only focus on latency, and there is no comparison in terms of text generation quality (e.g. measured by metrics like BLEU scores, etc.) compared to the baseline or prior speculative methods. Latency numbers without these performance metrics make it difficult to assess the effectiveness of the proposed method. * (b) The number of decoded tokens per serial call (in Table 2) is not directly indicative of the actual latency and does not necessarily represent the actual speedup.
That is, it serves as a proxy measurement that may or may not yield the same degree of latency improvement. This is because there are factors such as framework overheads [1], hardware utilization, and others that can potentially impact the latency. For instance, it is difficult to state if processing 3 tokens (as in Table 2) in a single call is truly 3 times more latency efficient than processing them sequentially in 3 separate calls. Therefore, the claimed 3X speedup, and the additional 1.36X improvement over speculative decoding mentioned in the abstract, can be misleading. * (c) It is unclear whether the proposed method introduces any additional run-time overhead compared to the prior speculative sampling method. If there is indeed extra overhead involved, it could potentially reduce the gap between the two methods, making the improvements presented in Table 2 less significant (thereby making the 1.36X improvement over the prior method misleading as well). * (d) The latency overhead of running smaller models concurrently remains uncertain, since Table 2 only presents the efficiency of running the large model. For instance, a longer sequence length (L) can decrease runtime costs for the large model by increasing the number of tokens decoded per call as stated in the table. However, at the same time, it also increases the small model's cost as more tokens are generated but rejected/discarded. The paper lacks discussions around this point. While the authors presented a latency overhead of 10-15%, it is not clear under which particular setting it was measured and how the hyperparameters (K and L) affect this value. * (e) Taken together, the authors should provide the end-to-end inference latency and the text-generation quality/performance measure to make the evaluation convincing. 2. Including a concluding paragraph in the paper would enhance its professionalism. The authors should at least wrap up the paper with a one-paragraph summary.
[1] The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment, https://arxiv.org/pdf/2302.06117.pdf Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see the weakness section Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for agreeing on the novelty of the algorithm and the theoretical justification. Below we address the concerns on the evaluation of our method. #### ***[Quality of the final outputs.] (response to 1(a))*** (Identical to the same comment in the global response, provided here for completeness.) We would like to clarify that one of the biggest advantages of our proposed acceleration methods is that there is **provably** no drop in performance because our algorithm guarantees that the final outcome is a statistical draw from the large model. This holds for both the optimal solution from exactly solving the linear program, and the approximate solution $k$-seq. The “approximation” part in $k$-seq comes from the non-optimal acceptance probability, which leads to fewer decoded tokens per serial call and a hit in latency compared to the optimal solution. More precisely, SpecTr guarantees that the final output sequence follows the exact same distribution as the output sequence from the large model as long as the token-level algorithm is a valid coupling between $p^{\otimes k}$ and $q$ where $p,q$ are the conditional distributions on the next token from the small and large model respectively. Hence all metrics such as BLEU, avg. sentence likelihood, will be *neutral*. The formal statement is stated in Theorem 3 of the submission. The guarantee is the same as what is claimed in previous speculative decoding methods. We will add more discussion in the revised version and improve the theorem statement to make this fact clearer. #### ***[End-to-end latency improvement.] (response to 1(b-e))*** (Mostly the same as the comment in the global response, provided here for completeness.) We agree that all factors mentioned by the reviewer would affect the effectiveness of the proposed approach in practical systems, and it is important to implement the method and report end-to-end latency comparisons including the delays caused by system overheads.
To further demonstrate the effectiveness of our proposed approach, we conduct experiments on the state-of-the-art PALM-2 models [1] with PALM-2-Gecko and PALM-2-Bison (where Bison is the larger model) as the small model and large model, respectively. We report end-to-end (wall clock) latency comparisons between regular decoding, speculative decoding, and SpecTr. This includes the time to draw drafts from the small model in parallel, the time to verify the drafts with the large model, latency introduced by running the sequential rejection algorithm, and other system overheads. See the attached PDF file in the global response for detailed numbers. While we do see a smaller wall clock speed-up compared to the number of decoded tokens per serial call, our proposed method still achieves significant improvement in wall clock latency with respect to baseline decoding and speculative decoding. When $K = 8$ and $L = 8$, our relative wall clock speed-up over baseline is 2.13x (in contrast to 1.56x for speculative decoding over baseline), a further 1.37x improvement over speculative decoding. We will add additional experimental results in the final version. [1] Google AI. Introducing PaLM 2, 2023. https://blog.google/technology/ai/google-palm-2-ai-large-language-model/.
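The token-level coupling that underlies the neutrality guarantee above can be sketched minimally. The following is an illustrative single-draft version of the standard speculative-sampling accept/reject step, not the authors' $k$-seq implementation: a draft token from the small model is accepted with probability min(1, q(x)/p(x)), and otherwise a token is resampled from the residual distribution, so the output is an exact draw from q.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_token(p, q):
    """One token-level step of single-draft speculative sampling.

    p, q: next-token distributions of the small (draft) and large models.
    Returns a token distributed exactly according to q (maximal coupling).
    """
    x = rng.choice(len(p), p=p)               # draft token from the small model
    if rng.random() < min(1.0, q[x] / p[x]):  # accept with prob min(1, q(x)/p(x))
        return x
    residual = np.maximum(q - p, 0.0)         # otherwise resample from (q - p)^+
    return rng.choice(len(q), p=residual / residual.sum())
```

The accepted mass at each token x is min(p(x), q(x)) and the residual resampling contributes (q(x) - p(x))^+, which sums to exactly q(x); this is why downstream quality metrics are neutral by construction.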
Summary: The paper studies speculative decoding, a technique to increase inference efficiency in a large autoregressive model by sampling a set of candidate tokens (a *draft*) from a smaller (thus, faster) model, which is then scored according to the conditional distribution of the original larger model. The scores of the larger model are obtained in parallel, thus potentially resulting in a significant speedup depending on how close the smaller model's conditional distribution is to the larger model's. The main contributions are twofold. First, the authors cast the speculative decoding problem as a maximal coupling problem in optimal transport. Then, they propose to use multiple drafts (i.e. multiple sets of tokens completing the context), which is solvable in exponential time in the number of drafts. Thus, a novel algorithm is proposed that runs in time linear in the number of drafts and provably solves the problem up to a $(1-1/e)$ factor of the optimal acceptance probability. Strengths: 1. To my knowledge, translating the speculative decoding problem into the context of optimal transport is not very surprising but novel. Furthermore, by connecting the problem to the field of optimal transport, the paper opens new possibilities for research, due to the maturity of the theory of optimal transport, which the authors actually use to derive an extension of the original speculative decoding algorithm to allow for multiple drafts and discuss its feasibility in terms of algorithmic complexity. 2. The proposed algorithm to approximately compute the transport plan (Algorithm 2) nicely follows the problem casting performed earlier and is backed by a sound theory (Theorem 2) describing how the solution (in terms of acceptance probability) compares to the intractable case of exact solution in exponential time. I have quickly skimmed through the appendix and although not an expert in OT, the theory seems correct. Weaknesses: 1.
The idea of extending the speculative decoding algorithm in [15] to multiple drafts is simple and elegant, but sometimes the authors add some unnecessary details. For instance, it seems trivial and very intuitive that by having multiple drafts, the acceptance probability increases with the number of drafts (hence Lemma 1 seems unnecessary). 2. The authors test their proposed method solely on LM1B. I would have liked to see a larger set of experiments, covering at least a more significant subset of those performed in [1]. 3. Algorithm 1 is wrong, as it always returns at line 7. I guess the authors want it to return at line 5 as well. Also, some quantities are not defined yet when Alg.1 is introduced (such as $\mathcal{X}$), which complicates readability a bit. Minor: 4. In Figures 1,2,3 the size of ticks, labels, and lines should be increased. 5. Typos: line 251: "Gvien". Line 217-218 the sentence "To control the probability of accepting an $x \in \Omega$ with probability larger than $q(x)$." does not seem to be grammatically correct. Line 44 "a several contexts". Line 120: "an discrete" [1] Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. arXiv preprint arXiv:2211.17192, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors address the limitations. I do not see any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments, for pointing out the typos, and for suggesting ways to improve the paper. We will incorporate them in the final version. Please see inline replies below. #### ***[Lemma 1 seems unnecessary]*** We agree that the monotonicity part of Lemma 1 follows from definitions, but the consistency part needs proof. We will clarify this in the subsequent version. #### ***[Larger set of experiments]*** For the rebuttal, we have evaluated our method on state-of-the-art PALM-2 models and report end-to-end latency improvements. Please see details in the global response. We will add additional experimental results in the final version. #### ***[Algorithm 1 always returns at Line 7]*** Thank you for pointing out the typo. As you mention, Line 5 of the algorithm should say Return Y = X and accept = True. We will add the other definitions in the Algorithm. #### ***[Typos & suggestions on font size]*** Thanks for these suggestions. We will incorporate them.
Summary: This paper presents speculative decoding, wherein a smaller language model is used to approximately sample from a large one, akin to a type of accept/reject MCMC sampler. The idea is that it is slow to sequentially sample from a large model of interest, but joint probabilities can be computed in parallel across the time dimension. At the same time, we could have access to many parallel copies of a smaller not-as-good model, and it's quicker to generate e.g. 100 tokens from the small model than to sample from the large one. We generate e.g. 100 tokens for K such smaller models, and accept one of the K sequences with some probability. The paper is mostly a theory paper that gives bounds on the transport cost, between the large model distribution and the sampling distribution resulting from the accept/reject sampling composed with proposing from the smaller models. On the plus side, the paper is a theory paper that I think has some nice implications for practitioners. I would love to see some more empirical work exploring this. The quality of the math / theory / bounds is high and the authors did a nice job breaking down the OT parts for the typical ML language modeling reader. On the minus side, while a light set of experiments is totally fine for a theory paper, I believe most readers of the paper would benefit more if there were also some initial results of the actual sample quality from the presented algorithm. There seem to be results on latency and accept/reject probability (and there's some theory connecting accept/reject probability to closeness of sampled distribution to desired large model distribution), but there seem to be no direct results evaluating the samples resulting from the new algorithm. In short, I tentatively accept but think the minus needs to be addressed and will change the score if there is no discussion or clarification on this. Note that light evaluation is totally fine since the theory is good. 
And it is even okay if the quality of the samples according to any of the metrics is possibly below competitive, since there is always room to improve these algorithms once they are established. Strengths: - novel algorithm for quicker sampling making use of smaller models and MCMC-like concepts to sample from a larger model - nice framing in terms of optimal transport that I think the typical ML + LM reader could take something positive away from - rigorous characterization theoretically of the algorithm in terms of transport cost bounds, accept/reject rates, etc. Weaknesses: - Table mismatch: e.g. referenced table 8 in main text but no table 8 in main text or appendix - Most noticeably, there seem to be no direct reports of quality of samples from the resulting approximate algorithm. There are tables for latency and for accept/reject rates (the latter is in appendix but could probably go in main text). And there is theory characterizing that the transport cost will not be too bad, etc. But, there are no direct metrics quantifying the original large model samples vs the proposed algorithm's samples (those produced by using the smaller models' sampling and larger models' scoring). Seems like any of the basic LM metrics would be warranted here. Glad to be corrected if I have misunderstood. Minor: - Small typo "Gvien" in source line 251 of the PDF. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Strengths/Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments on the novelty of the algorithm and soundness of the theory. Please see replies to specific questions below. #### ***[Quality of the samples]*** (Identical to the same comment in the global response, provided here for completeness.) We would like to clarify that one of the biggest advantages of our proposed acceleration methods is that there is **provably** no drop in performance because our algorithm guarantees that the final outcome is a statistical draw from the large model. This holds for both the optimal solution from exactly solving the linear program, and the approximate solution $k$-seq. The “approximation” part in $k$-seq comes from the non-optimal acceptance probability, which leads to fewer decoded tokens per serial call and a hit in latency compared to the optimal solution. More precisely, SpecTr guarantees that the final output sequence follows the exact same distribution as the output sequence from the large model as long as the token-level algorithm is a valid coupling between $p^{\otimes k}$ and $q$ where $p,q$ are the conditional distributions on the next token from the small and large model respectively. Hence all metrics such as BLEU, avg. sentence likelihood, will be *neutral*. The formal statement is stated in Theorem 3 of the submission. The guarantee is the same as what is claimed in previous speculative decoding methods. We will add more discussion in the revised version and improve the theorem statement to make this fact clearer. #### ***[Scale of the empirical evaluation]*** We agree that performing a more complete set of experimental evaluations would provide more evidence on the effectiveness of the method. For the rebuttal, we also evaluate our proposed method on the state-of-the-art PALM-2 models and report end-to-end latency improvements with respect to baseline autoregressive decoding and speculative decoding.
Please see details in the global response and Table 1 in the attached PDF. We will add additional experimental results in the final version. #### ***[Table 8]*** Thanks for catching this! Table 8 was a LaTeX error on our part. The experimental section refers to Tables 1, 3 (wall clock for large and small models) and Table 2 (Results on LM1B dataset).
Summary: This paper proposes a novel and efficient decoding algorithm for autoregressive large language models, SpecTr, which is an extension of speculative decoding. Given a large model M_b, it can only output one word at a time when it decodes; however, each serial call is quite expensive. The previous method, speculative decoding, can alleviate the slow decoding problem: it uses a much smaller model M_s to decode a single segment of length L and lets M_b calculate its likelihood in parallel. The optimal transport plan between the distributions of the two models is used to determine whether to accept a certain part of the segment, which ensures the samples have the same distribution as M_b. This approach has the advantage of reducing the number of serial calls to M_b, improving decoding efficiency. However, if the acceptance probability is too low, the reduction may not be significant. The SpecTr algorithm utilizes K i.i.d. segments sampled by M_s to enhance the acceptance probability. Concretely, the previous optimal transport problem is modified to transport the K-product distribution of M_s to the distribution of M_b. Since solving the new problem has a cost exponential in K, the paper introduces the k-sequential selection algorithm, which achieves a 1-1/e approximation ratio at an appropriate cost. In numerical experiments, using an M_b model with 97M parameters and an M_s model with 6M parameters tested on LM1B, speculative decoding decodes an average of 2.3 tokens per serial run of M_b, while the proposed method can decode 3 tokens, showing a significant improvement. Strengths: The paper is well-written and self-contained, making it accessible even to someone like me who is not familiar with the related work on language model decoding. The proposed algorithm, SpecTr, addresses the practical need for parallelization of large models in real-world scenarios.
Moreover, it is mathematically concise, intuitive, and insightful. The paper also provides a basic theoretical analysis of the properties of the algorithm. The numerical experiments show a significant improvement from their method. I think this is a good submission. Weaknesses: The numerical experiments on real datasets seem insufficient, as they only compare with speculative decoding when K=1, which may not be fair: using larger K would require more computational resources. It would be worth considering if there is a more equitable way to compare the methods, e.g., extending speculative decoding to use more resources in some crude way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The first line of Algorithm 1: Input: ... "X~iid p" is confusing, since you have only one sample here, "iid" is unnecessary. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
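To make the K=1 baseline discussed in this review concrete, here is a minimal token-level sketch of speculative-decoding acceptance (our own illustration; the function name `accept_draft_token` and the code are assumptions, not the paper's implementation):

```python
import random

def accept_draft_token(x, p_small, p_large, rng):
    """Token-level speculative sampling step (K=1 baseline sketch).

    x was sampled from the small model's next-token distribution p_small.
    Accept it with probability min(1, p_large[x] / p_small[x]); on rejection,
    resample from the residual distribution max(p_large - p_small, 0),
    renormalized.  This coupling guarantees the returned token is an exact
    draw from the large model's distribution p_large.
    """
    if rng.random() < min(1.0, p_large[x] / p_small[x]):
        return x
    residual = [max(b - s, 0.0) for s, b in zip(p_small, p_large)]
    return rng.choices(range(len(p_large)), weights=residual)[0]
```

SpecTr generalizes this step: instead of a single draft token per position, the large model is offered K i.i.d. drafts, and a valid coupling between the K-fold product of the small-model distribution and the large-model distribution is used, which raises the acceptance probability while preserving the same exactness guarantee.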
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments on the novelty of the algorithm and theory, and acknowledging that SpecTr is mathematically concise, intuitive, and insightful. #### ***[Fair comparison with speculative decoding]*** We would like to point out that one of the main contributions of our work is to relate speculative decoding to the theory of optimal transport, which allows for the use of $k$ draws from the small model. We are unaware of other intuitive baselines on how to use more resources to speed up speculative decoding due to the subtlety involved in ensuring that the final sequence is still a *valid statistical draw from the large model*, which is our main contribution in this paper. We would be happy to conduct comparisons if we are missing some baseline extensions that you may have in mind. #### ***[Typo]*** Thanks for catching this; we will fix it.
Rebuttal 1: Rebuttal: We thank all reviewers for their detailed reading and encouraging comments about our submission. We are delighted to read the reviewers’ acknowledgement of the novelty of the algorithm (reviewers TBLD, hEU2, jweu, DdMT) and the soundness of the theory (reviewers hEU2, jweu). We will incorporate their suggestions to improve the presentation in future revisions of the paper. Below we address a few common concerns raised by the reviewers. Each reviewer's individual questions will be answered in separate responses. #### ***[End-to-end latency improvement.]*** We agree that it is important to implement the method and report end-to-end latency comparisons including the delays caused by system overheads. To further demonstrate the effectiveness of our proposed approach, we conduct experiments on the state-of-the-art PALM-2 models [1] with PALM-2-Gecko and PALM-2-Bison (where Bison is a larger model) as the small model and large model, respectively. We report end-to-end (wall clock) latency comparisons between regular decoding, speculative decoding, and SpecTr. This includes the time to draw drafts from the small model (in parallel), the time to verify the drafts with the large model, latency introduced by running the sequential rejection algorithm, and other system overheads as suggested by reviewer DdMT. See the attached PDF file for detailed numbers. While we do see a smaller wall clock speed-up compared to the number of decoded tokens per serial call, our proposed method still achieves significant improvement in wall clock latency with respect to baseline decoding and speculative decoding. When $K=8$ and $L=8$, our relative wall clock speed-up over baseline is 2.13x (in contrast to 1.56x for speculative decoding over baseline), a further 1.37x improvement over speculative decoding. We will add additional experimental results in the final version. 
#### ***[Quality of the final outputs.]*** We would like to clarify that one of the biggest advantages of our proposed acceleration methods is that there is **provably** no drop in performance because our algorithm guarantees that the final outcome is a statistical draw from the large model. This holds for both the optimal solution from exactly solving the linear program, and the approximate solution $k$-seq. The “approximation” part in $k$-seq comes from the non-optimal acceptance probability, which leads to fewer decoded tokens per serial call and a hit in latency compared to the optimal solution. More precisely, SpecTr guarantees that the final output sequence follows the exact same distribution as the output sequence from the large model as long as the token-level algorithm is a valid coupling between $p^{\otimes k}$ and $q$, where $p,q$ are the conditional distributions on the next token from the small and large model respectively. Hence all metrics, such as BLEU and average sentence likelihood, will be *neutral*. The formal statement is stated in Theorem 3 of the submission. The guarantee is the same as what is claimed in previous speculative decoding methods. We will add more discussion in the revised version and improve the theorem statement to make this fact clearer. [1] Google AI. Introducing PaLM 2, 2023. https://blog.google/technology/ai/google-palm-2-ai-large-language-model/. Pdf: /pdf/a93e57b0b2d653ca1670066616837b2e119ba0bb.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation
Accept (poster)
Summary: This paper presents a new mechanism to train efficient panoptic segmentation frameworks, which adds relaxation to mask predictions and class predictions for panoptic segmentation. In experiments, the authors demonstrated that the relaxation techniques can consistently improve panoptic segmentation frameworks. Strengths: 1. The results are impressive; in particular, with an R50 backbone, ReMax-M achieves 49.1 PQ and 51.9 FPS on the COCO dataset. 2. This paper is well-written and the motivation is clear. Weaknesses: 1. The generalization of the proposed training mechanism should be demonstrated. 2. Some experimental results are unconvincing. Please see questions and limitations for details. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Why can the loss function with semantic masks be a relaxation that helps training? Training with semantic masks has been demonstrated to improve instance segmentation in many frameworks such as CondInst. Therefore, the main contribution and novelty of the proposed ReMask should be carefully discussed. 2. In Table 6, the results of MaskDINO and ReMaX should be tested with the same GPU. When the GPU is V100, ReMax achieves 16.3 FPS (from Table 1), but MaskDINO achieves 16.8 FPS. Typo: 1. Line 71: "... like YOSO [26] and MaskConver [26] ... " Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: 1. The contributions of this paper may be limited. For the segmentation framework, this paper directly uses kMaX-DeepLab. For the ReMask technique, it has been demonstrated in instance segmentation. For the ReClass technique, the effect of applying it or not is not shown in experiments. 2. 
The generalization of the proposed method should be demonstrated based on more frameworks such as Mask2Former, YOSO, and Panoptic DeepLab, not only kMax-Deeplab. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the insightful feedback. We hope that the subsequent response will address the concerns voiced in the review. We thank the reviewer for pointing out the typo, which we will fix in the final revision.

| *w/* semantic masking? | *w/* $\mathcal{L}\_{sem}$? | *w/* ReClass? | Iterations | PQ |
| :--------------------: | :------------------------: | :-: | :-: | :-: |
| | | | 50K | 50.4 |
| | | | 150K | 53.0 |
| | &#x2611; | | 50K | 51.3 |
| | &#x2611; | | 150K | 53.0 |
| &#x2611; | &#x2611; | | 50K | 51.7 |
| &#x2611; | &#x2611; | &#x2611; | 50K | 52.4 |
| &#x2611; | &#x2611; | &#x2611; | 150K | 54.0 |

*Q1. Training with semantic masks has been demonstrated to improve instance segmentation in many frameworks such as CondInst. Therefore, the main contribution and novelty of the proposed ReMask should be carefully discussed.* We appreciate the insightful suggestion. We agree that incorporating a semantic loss, as in CondInst [A1] and YOLACT [A2], can indeed enhance the performance of instance segmentation. This is also evident from our experiments, as demonstrated in the table above. However, we respectfully point out that our approach differs from the application of the loss in [A1] and [A2] in the following aspects: 1. **The key of ReMask is semantic masking instead of purely semantic loss.** As demonstrated in the above table, the direct application of semantic loss, without semantic masking, does not result in any improvement for long-schedule training (i.e., 150K iterations). It's only when semantic masking is implemented that the network tends toward better convergence, a central aspect of our methodology. While the exclusive use of semantic loss may expedite the initial stages of the training process (e.g., 50K iterations), it fails to improve the ultimate convergence quality. 2. 
**ReMask is applied along with mask transformers while CondInst and YOLACT are not.** We note that our method is designed for mask transformers, while CondInst and YOLACT are conventional segmentation frameworks. 3. The meaning of relaxation is two-fold: (1) We posit that, compared to panoptic segmentation, semantic segmentation presents a less challenging task. As such, employing semantic prediction to fine-tune the results of panoptic segmentation can be viewed as a form of relaxation strategy. (2) The ReClass process alters the initial one-hot label into a softer tensor, thereby easing the intensity of the strict supervision, and can be regarded as a form of relaxation. We will add the above discussion in the revised paper and carefully clarify the difference between our work and the related papers. --- *Q2. In Table 6, the results of MaskDINO and ReMaX should be tested with the same GPU. When the GPU is V100, ReMax achieves 16.3 FPS (from Table 1), but MaskDINO achieves 16.8 FPS.* We kindly note that 16.8 FPS (typo, should be 14.8 in the original paper) of Mask DINO is evaluated on **`A100`**, while our 16.3 FPS is evaluated on **`V100`** and a higher resolution, 1281x1281. Here we report the detailed FPS below:

| Method | GPU | Resolution | FPS |
| :----: | :----: | :-: | :-: |
| Mask DINO | A100 | Not reported | 14.8 |
| Mask DINO | V100 | 1200x800 $^\dagger$ | 10.9 |
| Ours | V100 | 1200x800 | 26.3 |
| Ours | V100 | 1281x1281 | 16.3 |

For a fair comparison, as shown in the table above, ours is about 2x faster than the Mask DINO model at 1200x800 resolution on V100. We will revise the manuscript accordingly to further clarify the confusion. $^\dagger$ We recompute the FPS of Mask DINO on our own device. --- *L1. For the ReMask technique, it has been demonstrated in instance segmentation. 
For the ReClass technique, the effect of applying it or not is not shown in experiments.* We kindly note that in Table 4 of the manuscript, we have reported that the application of ReClass results in a **`0.7`** PQ increase when comparing the second column ($\eta=0$) with the fifth column ($\eta=0.1$). For distinctions between ReMask and prior methods [A1] and [A2], please refer to the previous response. --- *L2. The generalization of the proposed method should be demonstrated based on more frameworks such as Mask2Former, YOSO, and Panoptic DeepLab, not only kMax-Deeplab.* This is a good suggestion. We have re-implemented ReMaX for Mask2former in PyTorch and report the results in the table below.

| Method | Epochs | PQ |
| :----: | :----: | :-:|
| Mask2former | 24 | 48.36 |
| Mask2former + ReMaX | 24 | 50.24 |

Due to the time limit, we did not reproduce the originally reported Mask2Former results by fully exploring all the hyper-parameters. However, the table above shows that, based on the same Mask2Former baseline, ReMaX boosts its accuracy. This demonstrates that ReMaX is also effective for other segmentation frameworks like Mask2former. We will add such results on more baseline models like Mask2Former and YOSO to the final paper. --- ### Reference [A1]: Tian Z, Shen C, Chen H. Conditional convolutions for instance segmentation. ECCV 2020. [A2]: Bolya D, Zhou C, Xiao F, et al. Yolact: Real-time instance segmentation. ICCV 2019. --- Rebuttal Comment 1.1: Comment: Thanks for the feedback. The response has solved my concerns. I will raise my score. --- Reply to Comment 1.1.1: Title: Thank you for the kind feedback Comment: We are happy to see the above feedback solved the concerns of the reviewer. We thank the reviewer for all the constructive comments.
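To make the semantic masking at the heart of ReMask concrete, here is a minimal illustrative sketch (the function name `remask`, the argument names, and the exact gating rule are our assumptions, not the authors' implementation): each query's per-pixel mask logits are gated by the per-pixel semantic probability of the class that query was recognized as, suppressing responses in regions whose semantics disagree with the query.

```python
import math

def remask(x_pan, x_sem, class_of_query):
    """Illustrative ReMask-style semantic masking (a sketch, not the paper's code).

    x_pan: per-pixel lists of N_Q panoptic mask logits (HW x N_Q).
    x_sem: per-pixel lists of N_C semantic logits (HW x N_C).
    class_of_query: predicted class index for each of the N_Q queries.
    Each query's mask logit at a pixel is multiplied by the semantic softmax
    probability of that query's class at the same pixel, damping extreme
    false positives.  At inference, the gate is dropped (identity mapping).
    """
    out = []
    for pan_row, sem_row in zip(x_pan, x_sem):
        mx = max(sem_row)                          # stabilized softmax
        exps = [math.exp(v - mx) for v in sem_row]
        z = sum(exps)
        sem_prob = [e / z for e in exps]
        out.append([logit * sem_prob[class_of_query[q]]
                    for q, logit in enumerate(pan_row)])
    return out
```

Because the gate lies in (0, 1), a query's activation can only shrink where the semantic head disagrees with its class, which is one way to read the "suppress extreme false-positive losses" behavior discussed in this rebuttal.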
Summary: This paper presents a relaxation technique for training Efficient Panoptic Segmentation models called ReMaX. Based on the observation of much higher false positive penalisation in training panoptic segmentation models, it introduces two relaxation designs, ReMask and ReClass. Results are reported on COCO, ADE20K and Cityscapes datasets. Strengths: 1. The motivation of this paper is clear, and the finding is interesting. 2. The proposed method is reasonable and fits well with the motivation. 3. Using soft semantic segmentation prediction for relaxing the training of mask transformers makes sense to me. 4. Consistent improvements are obtained with the proposed method, and ablations are thorough. According to the experiments, the proposed method can work well with multiple existing approaches. Weaknesses: 1. The design seems coupled with mask transformers. It may not be generalised to all efficient panoptic segmentation models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: About the ReClass operation. What if the overlapped objects belong to the same category? For instance, two 'persons' in Fig. 3. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Error bars are not reported. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer for the recognition of our paper. *W1. The design seems coupled with mask transformers. It may not be generalised to all efficient panoptic segmentation models.* We thank the reviewer for the suggestion and agree that it is interesting to further explore the effectiveness of our method with other segmentation frameworks. In principle, ReMaX has the potential to be generalized to other non-transformer-based panoptic segmentation frameworks with a panoptic mask representation $m\_{pan}$ and a semantic mask representation $m\_{sem}$. But it is beyond the scope of this work and rebuttal. We currently mainly explore our method in kMaX-DeepLab (most recent state-of-the-art), and have also quickly experimented with Mask2Former in this rebuttal. We will further validate the generalizability and effectiveness of our method in future work. --- *Q1. About the ReClass operation. What if the overlapped objects belong to the same category? For instance, two 'persons' in Fig. 3.* This is a good question. Since different instances with the same category belong to the same semantic mask, the class labels for their instance masks **will not change**. --- *L1. Error bar not provided.* Thanks for the suggestion. We will add it to the revised paper. --- Rebuttal Comment 1.1: Comment: Thanks for the feedback. The proposed method is technically sound, but the impact may be limited by only exploring the method based on one panoptic segmentation framework (kMaX-DeepLab). In the rebuttal, the authors provide an analysis of the possibilities of applying the techniques to other frameworks and mention a quick trial on Mask2Former. However, I don't find the results in their rebuttal.
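The ReClass behavior discussed in Q1 above (labels unchanged when overlapping instances share a category) can be sketched as follows. This is our own illustrative reading of ReClass, with masks represented as sets of pixel indices and a hypothetical softening weight `eta`; it is not the paper's exact formula.

```python
def reclass_targets(gt_masks, gt_classes, num_classes, eta=0.1):
    """Illustrative ReClass-style label softening (a sketch, not the paper's code).

    gt_masks: list of ground-truth masks, each a set of pixel indices.
    gt_classes: class index of each ground-truth mask.
    For each mask, the one-hot class target is blended with the class
    distribution of all ground-truth masks overlapping its region, weighted
    by overlap area; eta controls the amount of softening.
    """
    targets = []
    for i, m in enumerate(gt_masks):
        onehot = [0.0] * num_classes
        onehot[gt_classes[i]] = 1.0
        mix = [0.0] * num_classes
        total = 0
        for j, mj in enumerate(gt_masks):
            ov = len(m & mj)          # overlap area (self-overlap included)
            mix[gt_classes[j]] += ov
            total += ov
        if total:
            mix = [v / total for v in mix]
        targets.append([(1 - eta) * o + eta * x for o, x in zip(onehot, mix)])
    return targets
```

Note that if all overlapping masks share one category, the mixture collapses back to the original one-hot vector, consistent with the Q1 answer that class labels for same-category instances **will not change**.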
Summary: This paper shows that the existing SOTA panoptic segmentation methods have an unbalanced loss (an excessively large false-positive loss due to the use of the sigmoid function). The authors designed two relaxation mechanisms to relax the supervision at the mask and class levels, thereby improving training efficiency and accuracy. Strengths: 1. This paper reveals the problem of excessive false-positive loss caused by the use of the sigmoid function in current transformer-based panoptic segmentation models, and shows that false-positive loss is also helpful for training and that proper relaxation of constraints can improve training efficiency, which is instructive for the community. 2. Both relaxation designs in the paper (ReMask and ReClass) are interesting and effective. 3. The proposed ReMaX is excellent in training efficiency, inference efficiency, and accuracy. Weaknesses: 1. On the one hand, excessive false-positive loss affects training efficiency, but on the other hand, false-positive loss also benefits the final result. The authors provide only an empirical solution, lacking more in-depth discussion. I wonder about the qualitative analysis of the impact of different scales of false-positive loss on the final result. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: see weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer for the recognition of our paper. *Q. Qualitative analysis of the impact of different scales of false-positive loss on the final result.* Well spotted! We experimented with various methods to adjust the scales of the FP/FN losses. Please refer to the table below for these results, where the reported result for FP loss scaling uses the best scaling factor we tried. Overall, as summarized in lines 39-44, we observed that uniformly scaling the magnitude of the false-positive losses for all examples doesn't enhance performance. The crux of ReMaX's effectiveness lies in its ability to dynamically filter out extreme losses, drawing parallels with gradient clipping.

| Loss-scale Method | PQ |
| :---------------: | :- |
| baseline | 50.4 |
| w/ FP loss scale $\downarrow$ | 50.4 |
| w/ FN loss scale $\uparrow$ | 50.9 |
| w/ Grad-clip | 51.2 |
| w/ ReMask | 51.7 |
| w/ ReMask+ReClass | 52.4 |

As the table shows, holistically scaling down the false-positive loss doesn't yield a performance boost: we tested various scaling factors, yet none contributed to better performance.
Summary: The manuscript presents two novel heuristics for training efficient panoptic models based on mask-level recognition and pixel-to-mask assignment. The first heuristic affects pixel-to-mask assignment and is referred to as ReMask. ReMask has been designed to balance the overwhelming contribution of false positive mask assignments to the mask-assignment loss by leveraging an independent semantic prediction head. In particular, the authors suppress the pixel assignment towards masks that get recognized into classes that are inconsistent with local semantic predictions. However, Table 7 suggests that most of the improvement does not stem from training relaxation and that the benefits may be caused by enhanced locality of the recognition process through Lsem. The second heuristic affects mask-level recognition and is referred to as ReClass. ReClass changes the classification targets of the predicted masks from one-hot winner-takes-all to mixtures of one-hot assignments of all incident ground-truth masks. It appears that the authors conjecture that ReClass contributes to the convergence speed by reducing the penalty of inaccurate masks during early training. Strengths: S1. Panoptic segmentation is an important computer vision task with many applications. S2. State-of-the-art performance among approaches based on mid-range backbones (RN50, MNV3). S3. Ablations and validations suggest that the proposed heuristics contribute significant performance improvements. S4. The proposed heuristics can be removed during inference; this results in competitive inference speeds. Weaknesses: W1. the manuscript requires non-linear reading effort: * lines 161-181 start to make sense only after reading the equations * equations for x_pan and x_sem are missing (they should start from shared features) * d_pan, d_sem, N_Q and N_C should be defined before use. W2. a bird's-eye view figure is missing (example: Fig.2 in [10]). W3. 
many small details: * l161: it appears that x_pan should be HWxd_pan? * l148: post-processing is unclear. Suggestions S1. it may be interesting to mention the following related work: * Fully Convolutional Networks for Panoptic Segmentation with Point-Based Supervision. TPAMI 2023. * Panoptic SwiftNet: Pyramidal Fusion for Real-Time Panoptic Segmentation. Remote Sensing. 2023. * Panoptic, Instance and Semantic Relations: A Relational Context Encoder to Enhance Panoptic Segmentation. CVPR 2022. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Q1. Can you disentangle the relative contribution of loss relaxation and local enhancement (cf Table 7)? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: It would be interesting to discuss whether there is any benefit in conjunction with weaker (RN-18) and stronger (SWIN, ConvNext) backbones. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We will improve the writing of the final paper based on all reviewers' feedback. In the meantime, we hope the answers below can help improve the readability of the paper. *W1. (1) Lines 161-181 require non-linear reading effort.* We guess this might be due to the reviewer finding it hard to read the paper while referring to Figure 2, since it is split across two pages. To make the paper easier to read, we will put the text of L161-181 and Figure 2 together on the same page. *W1. (2) Equations for $x\_{pan}$ and $x\_{sem}$ are missing.* Thanks for the suggestion. $x\_{pan}$ and $x\_{sem}$ indeed start from the shared features. To make it clear, we plan to add a bird's-eye view figure which will better illustrate how $x\_{pan}$ and $x\_{sem}$ in Figure 2 are related to the overall architecture, e.g. kMaX-Deeplab or MaskFormer. *W1. (3) d_pan, d_sem, N_Q and N_C should be defined before use.* We kindly note that we did not define $d_{pan}$; instead we use $N_Q$, since it represents the number of queries for the transformer decoder. Finally, we thank the reviewer for the suggestion. We will move all the notations currently provided in L166-169 to an earlier part of the paper, closer to where they are first used, i.e. line 164. --- *W2. A bird's-eye view figure is missing (example: Fig.2 in [10]).* Thank you for the thoughtful suggestion. Due to the original space constraint, we didn't include a bird's-eye view figure. We plan to add one that will better illustrate how $x\_{sem}$ and $x\_{pan}$ in Figure 2 are related to the overall architecture, e.g. kMaX-Deeplab or MaskFormer. Meanwhile, we would like to point out that Figure 2 showcases the ReMask process, while Figure 3 details ReClass. They are presented in separate figures because ReClass is solely related to the loss and does not alter the architecture. 
Regarding the entire process, it completely follows kMaX-DeepLab. We recognize that this may be challenging for those unfamiliar with mask transformers. --- *W3. (1) It appears that $x\_{pan}$ should be HWx $d\_{pan}$?* We thank the reviewer for the question. We kindly note that we have never used (or defined) $d\_{pan}$ in our manuscript, and $x\_{pan} \in \mathbb{R}^{HW\times N\_Q}$ is defined in L161. This is due to the structure of mask transformers, where the number of panoptic masks is defined by the number of queries $N\_Q$. We have also defined another term $d\_{q}$ in L167-168. We hope this clarifies the confusion. *W3. (2) Post-processing is unclear.* Thanks for the question. We completely followed the post-processing in kMaX-DeepLab [64] without any change. We will mention this technical detail in the final paper and make it clear to readers. --- *S1. Related work.* Thanks for the additional references. We will add and discuss all three papers in the revised paper. --- *Q1. Can you disentangle the relative contribution of loss relaxation and local enhancement (cf Table 7)?* This is a good question! We added another ablation study that removes the semantic masking (the concrete grey arrow right under "stop grad" in Figure 2). This keeps the semantic loss $\mathcal{L}\_{sem}$ for semantic relaxation but removes the local enhancement (semantic masking). The result is shown below and will be added to the final paper.

| *w/* semantic masking?$^{[a]}$ | *w/* $\mathcal{L}\_{sem}$?$^{[b]}$ | *w/* ReClass? | PQ |
| :--------------------: | :------------------------: | :-: | :- |
| | | | 50.4 |
| | &#x2611; | | 51.3 |
| &#x2611; | &#x2611; | | 51.7 |
| &#x2611; | &#x2611; | &#x2611; | 52.4 |

The table above shows that the semantic relaxation leads to a 0.9 PQ increase, while semantic masking leads to an additional 0.4 PQ gain. 
The semantic masking can be linked to local enhancement as it would suppress the extreme false-positive predictions via a simple masking operation. --- *Q2. Results on R18 and stronger backbones (Swin/ConvNext)* We thank the reviewer for the suggestion, and kindly remind the reviewer that we have evaluated our method on weaker backbones that are even smaller than ResNet-18 (MobileNetV3-Small and MobileNetV3-Large in Table 1 and 8). We also reported the result for the ConvNeXt model in Table A2 of the supplementary material. --- $^{[a]}$: local enhancement $^{[b]}$: semantic loss relaxation --- Rebuttal Comment 1.1: Comment: Thank you for the feedback, the proposed changes improve the manuscript. My d_pan indeed corresponds to N_q. For some reason that was not completely clear to me while reading the manuscript. --- Reply to Comment 1.1.1: Title: Thank you for your kind feedback Comment: We thank the reviewer for the recognition of our efforts. We will revise the paper accordingly and try our best to make it more readable for the readers.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a novel mechanism called ReMaX to enhance the training of mask transformers for efficient panoptic segmentation, making it more accessible and practical. The authors observe that the high complexity of panoptic segmentation training objectives often results in imbalanced loss, leading to challenges in training end-to-end mask-transformer based architectures, particularly for efficient models. In response to this challenge, the authors propose ReMaX, which incorporates relaxation techniques for mask predictions and class predictions during training for panoptic segmentation. Through these simple relaxation strategies, the model consistently improves its performance without incurring any additional computational cost during inference. The effectiveness of ReMaX is demonstrated by integrating it into efficient backbones like MobileNetV3-Small. The proposed method achieves a new state-of-the-art record for efficient panoptic segmentation on benchmark datasets such as COCO, ADE20K, and Cityscapes. The results showcase the significance of ReMaX in improving the performance of mask transformers and its potential for advancing the field of efficient panoptic segmentation. Strengths: By applying such simple techniques for relaxation to the state-of-the-art kMaX-DeepLab, ReMaX can train the network stably without any gradient-clipping operation under a learning rate that is over 10× greater than the baseline. Experimental results have shown that the proposed method both boosts the training speed by 3× and also leads to much better results for panoptic segmentation. ReMaX sets a new state-of-the-art record for efficient panoptic segmentation. Weaknesses: (1) Do you have the experimental results on the test sets for COCO, CityScapes and ADE20K? (2) How do you balance Lpan and Lsem? Are there any weights for these two losses? (3) In Table 2, what is the result of using softmax as activation and with grad-clip? 
(4) In Table 5, I'm wondering why removing the auxiliary semantic head will not lead to performance drop when using identity mapping. (5) In Table 7, why does PQ drop a lot when you use ground-truth semantic masks for m_sem? What is the result of using ground-truth semantic masks with the stop-gradient operation? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I'm positive about this paper. However I still have some concerns in the Weaknesses. I'll make the final decision after I see the response from the authors for those questions in Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I cannot find any limitations or potential negative societal impact which the authors list. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's valuable feedback. We hope that the subsequent response will clarify the issues highlighted and contribute to an improved rating. *Q1. COCO/CityScapes/ADE20K test set* Thank you for the suggestion. We intend to include test set results for these datasets in the finalized version. Regrettably, due to the shortage of training resources (GPUs) and time constraints, we are unable to present these results in the current rebuttal. --- *Q2. Loss weights on $\mathcal{L}\_{pan}$ and $\mathcal{L}\_{sem}$.* We have mentioned them in L246-247 and L307-308, respectively. We kindly note that the panoptic loss $\mathcal{L}_{pan}$ consists of multiple losses, including PQ-style losses and a mask-id loss. We suppose the current inconsistency in loss names may confuse readers; therefore, we will revise L246-247 to: > The loss weight for $\mathcal{L}\_{sem}$ is 0.5 and that for $\mathcal{L}\_{pan}$ is set as the same with kMaX-DeepLab [64]. Then we will revise L307-308 to: > The weights for the PQ-style loss (part of $\mathcal{L}\_{pan}$), auxiliary semantic loss ($\mathcal{L}_{sem}$), mask-id cross-entropy loss (part of $\mathcal{L}\_{pan}$), and instance discrimination loss are set to 3.0, 1.0, 0.3 and 1.0. --- *Q3. In Table 2, what is the result of using softmax as activation and with grad-clip?* Softmax tends to produce fewer false positives than Sigmoid because each pixel is limited to a single positive prediction, as outlined in lines L24-26 of the manuscript. By contrast, Sigmoid permits each pixel to correlate with several mask predictions, which can result in a highly unbalanced loss during training. When applying Softmax with grad-clip, the network converged too slowly (10+ points lower than the baseline) because of insufficient positive gradients, so we did not report it in Table 2. --- *Q4. 
In Table 5, I'm wondering why removing the auxiliary semantic head will not lead to a performance drop when using identity mapping.* We employed identity mapping to preserve the initial prediction, with the semantic branch serving solely as a relaxation mechanism. The overarching goal for panoptic segmentation remains unaltered. It is important to highlight that only the **`first four stages`** utilize ReMask, not all mask decoders. Eliminating the semantic branch might influence the intermediate predictions, but it won't have a direct impact on the final prediction. This suggests that variations in intermediate predictions might not substantially influence the final performance. --- *Q5. In Table 7, why does PQ drop a lot when you use ground-truth semantic masks for $m\_{sem}$? What is the result of using ground-truth semantic masks with the stop-gradient operation?* This is a good question. To clarify, using the ground-truth semantic masks (gt-masks) means that: 1. There is no semantic loss during training, which does not provide any relaxation for the training objective, as semantic segmentation is a sub-task of panoptic segmentation. 2. All false-positive losses outside the gt-masks would be eliminated, indicating that the false-positive losses are still important; removing most of them leads the network to converge to a sub-optimal solution. 3. ReMask here helps eliminate the **`extreme`** false-positive losses to prevent the loss distribution from becoming unbalanced (Figure A2 in the supplementary material). We would like to clarify that when using ground-truth masks for semantic masking, there are no parameters in the semantic branch; therefore, there is no gradient to be stopped in this scenario. --- Rebuttal Comment 1.1: Title: Increase my rating to weak accept Comment: Thanks for the authors' response. It has addressed all of my concerns and I'll raise my rating to weak accept. Thank you.
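To make the Q3 point above concrete — per-pixel softmax forces the candidate masks to compete, so at most one can dominate, while sigmoid scores each mask independently, so several can simultaneously be "positive" and inflate the false-positive loss — here is a minimal, self-contained sketch with hypothetical logits (not the authors' code):

```python
import math

def softmax(logits):
    # one pixel's logits over the candidate masks -> probabilities summing to 1
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    # independent per-mask score in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical logits of one pixel for 3 candidate mask predictions
logits = [2.0, 1.5, -3.0]

soft = softmax(logits)
sig = [sigmoid(x) for x in logits]

# softmax: probabilities compete, so at most one mask can exceed 0.5
assert sum(p > 0.5 for p in soft) <= 1
# sigmoid: each mask is scored independently, so two masks are "positive"
# at once here, the source of the unbalanced loss described in Q3
assert sum(p > 0.5 for p in sig) == 2
```

The assertions only illustrate the qualitative behavior the rebuttal describes; the actual per-pixel mask losses in the paper are more involved.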
Learning to Parameterize Visual Attributes for Open-set Fine-grained Retrieval
Accept (poster)
Summary: The work proposes a new approach for open-set fine-grained retrieval. Their main contribution is the new state-of-the-art method termed VAPNet, which exploits objects’ attributes to discriminate between known and unknown objects. Due to the absence of attribute annotations, their training procedure includes a pipeline to extract and refine attributes in an unsupervised fashion. Strengths: 1. The authors’ idea to exploit visual attributes to discriminate between known and unknown classes is novel for the task. 2. The approach to learn such attributes in an unsupervised fashion and in the open-set scenario is quite interesting. 3. The quality of the manuscript is good. 4. Their evaluation protocol makes sense and their method achieves the best results across three datasets when compared with related works. Weaknesses: 1. Section 3 is hard to follow, mainly the architectural choices. There are many components in the method. The authors could refer to Figure (2) whenever introducing some components to guide the reader’s understanding. 2. The differences in the pipeline between training and evaluation are not stated explicitly. This could help in understanding the overall method better. 3. Figure (2) could be improved to better convey the method (the current image is too crowded). From the text and the image it may be hard to understand whether the network represented in the retrieval module and the attribute exploration modules are the same or not. The retrieval embeddings returned by the pooling operations seem superfluous in the image. It may be best to either disentangle the components or increase the abstraction of some components. Moreover, the caption is too short and conveys no information. Ideally, the figure should be able to convey all that is required to understand the method without additional context. 4. This is a minor suggestion. Some section names are misleading or could be improved. 
For instance, many titles contain the word “attribute” to the point that its presence means little. Moreover, the title for Section 3.4 is misleading and it may be best to rename it. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: please refer to the Weaknesses section Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. No limitation was reported. It may be compute-intensive to extract multiple patches from the same image or to run the overall training pipeline. Moreover, it could be that the evaluation process is slower due to the additional components introduced. It may be important to clearly explain any limitations of the work. 2. The same applies to the broader impact; it should be stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! Below, we discuss each of the reviewer's concerns, and explain how we plan to address them in the revised version of our manuscript. **W1**: Thank you for your valuable feedback. We appreciate your suggestion to refer to Figure (2) whenever introducing components in order to guide the reader's understanding. We agree that this will enhance the coherence of our narrative and make it easier for readers to follow along. We will make sure to incorporate these references in the revised version of our paper. Thank you once again for your insightful comments, which will help us improve the clarity and readability of our work. **W2**: Thank you for your insightful feedback. In the revised version, we will provide a more detailed explanation of the differences in the pipeline between training and evaluation. Specifically, our VAPNet framework comprises a backbone network, an attribute exploration module, and an attribute parameterization module. During the training phase, all these components are utilized to explore and parameterize visual attributes. However, during the evaluation phase, we only employ the backbone network to extract retrieval embeddings. By clearly highlighting these differences, we hope to enhance the overall understanding of our proposed method. **W3**: We sincerely thank the reviewer for the insightful feedback on Figure 2. Your suggestions are valuable, and we recognize the need for refining the figure to better represent our method. Aligning with your advice, we will present an improved version of Figure 2, which will hopefully address these concerns more effectively. **W4**: Thank you for bringing up the issue of section names. To enhance the clarity of our method description, we have decided to remove sub-titles such as attribute exploration, attribute sampling, and attribute parameterization constraint from the method section. 
This modification allows for a more streamlined and concise presentation of our approach. Additionally, we agree that the title for Section 3.4 is misleading and will rename it as "Loss Function" to better reflect its content. Thank you again for your valuable suggestion. **L1**: Thank you for your insightful comments. We appreciate your suggestions regarding the limitations of our work. As you correctly pointed out, the extraction of multiple patches during training can indeed introduce additional computational overhead. This approach is necessary to capture diverse visual attributes, which are then used as supervisory signals for fine-tuning the backbone network. While this does require computational resources during training, we believe that the benefits outweigh the costs, especially considering the expense of labeling attribute annotations. During the evaluation phase, it is important to note that our VAPNet does not utilize the designed components or extract multiple patches. As a result, the evaluation process is not slower than the baseline. To provide a comprehensive analysis of complexity, we have reviewed previous test logs and compared retrieval embedding extraction times and model parameters. The results are summarized in the table below:

Method | Parameters | Time | Recall@1
-------|------------|------|---------
Baseline | 23.50M | 21.7ms | 69.5%
Our VAPNet | 24.55M | 21.7ms | 76.2%

As observed, our proposed VAPNet with AAM and APM modules achieves a performance improvement of 6.7% while only adding an additional 1.05M parameters compared to the baseline. Furthermore, since the AAM and APM modules are only utilized during training, the retrieval embedding extraction time during testing remains the same as that of the baseline. Therefore, the increase in algorithm complexity is minimal and is considered acceptable. **L2**: We appreciate your feedback regarding the broader impact of our work. 
By introducing VAPNet, we aim to extract visual attributes from seen classes without relying on attribute annotations to differentiate unseen classes. This innovation has the potential to greatly impact open-domain tasks. In particular, annotating a large number of attributes for unseen categories in open-domain tasks can be a costly and time-consuming endeavor. By enabling the model to automatically capture knowledge about unseen classes, our approach reduces the reliance on attribute annotations, resulting in decreased manual labeling costs. Furthermore, our approach exhibits improved adaptability to data from domains resembling the training set, such as natural images or medical images. This heightened adaptability contributes to stronger generalization capabilities, allowing the model to perform well in real-world scenarios. Ultimately, our solution has the potential to propel the advancement of open-domain tasks and facilitate their practical applications. Thank you for bringing this to our attention. --- Rebuttal 2: Title: Thanks for the response Comment: The response clarified my comments, and I appreciate the authors' efforts in responding other reviewers too. In general, I think the method is interesting and comparison is convincing, please try to also make a clear presentation of it in your final version. --- Rebuttal Comment 2.1: Title: Response to Reviewer Xapw Comment: Thank you for your comments. We appreciate your positive evaluation of our response and the overall review of our paper. We have taken note of your suggestions and will make an effort to improve the presentation of our work in our final version. Once again, we would like to express our gratitude for your valuable feedback and for contributing to the improvement of our manuscript.
Summary: This paper proposes VAPNet to learn visual attributes from known fine-grained categories and parameterize them into final open-set retrieval. To learn visual attributes without attribute annotation, VAPNet explicitly attains some semantics with rich details by making use of local image patches and distills the visual attributes from these discovered semantics. Then, it incorporates the online refinement of these visual attributes into the training process to iteratively improve them and simultaneously regards these attributes as supervisory signals to tune the retrieval models. Experimental results on open-set fine-grained retrieval benchmarks demonstrate improved performance compared to existing methods. However, I have some concerns about the motivation and the methodology employed. Please refer to the detailed comments below. Strengths: - It is commendable to introduce visual attributes to open-set fine-grained retrieval. - This paper is well-written and easy to follow. Weaknesses: - The authors do not explain the necessity of introducing visual attributes into open-set fine-grained retrieval. - In line 149, the attribute exploration module attains semantic clues of the input object by randomly cropping local patches from the input image, but it is hard to guarantee complete coverage of all discriminative local regions of the input image by random cropping. - In addition to local attributes, visual attributes can also include global attributes, such as shape. The design of the attribute exploration module only focuses on local visual attributes while ignoring global visual attributes. - Artificially defined attributes correspond to different visual features and can be decoupled from each other. However, the proposed method does not further constrain the relationships between visual attributes (i.e., Eq. (6), the attribute parameterization constraint Lc), and the extracted visual attributes cannot be guaranteed to have these characteristics. 
It seems more likely to enhance the local representation ability of the model. - There are some errors in the details of the paper, such as Eq. (6) in line 322, which actually corresponds to Eq. (5). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - The correspondence between local features and different patches of global features in line 191 is not clearly explained. - The global view in Figure 3 (top row (b)(d)(f)) shows that the model only pays attention to some local parts, which is quite different from the VAPNet shown in Figure 4 (top row (c)(f)), which focuses on more local parts. - Does the randomly cropped size have an effect on the model, and what is the relationship between the cropped size, the patch number M, and the visual attribute dimension k? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: Thank you for your valuable feedback. In open-set fine-grained retrieval, the model is required to learn embeddings from the seen classes and then be capable of utilizing the learned knowledge to distinguish the unseen classes. Existing approaches commonly employ class supervisory signals to guide the model in capturing discriminative details that are useful for identifying the seen classes. However, these approaches tend to focus solely on capturing discriminative concepts of known categories from unknown instances, which can make it challenging to identify unknown categories. A noteworthy observation in the context of fine-grained objects is that visually similar objects from different subcategories often share common visual attributes. Therefore, an unknown instance can be described comprehensively using a range of visual attributes, which can be discovered across multiple known categories. Leveraging this insight, we can represent the unknown classes using visual attributes that have been discovered from the seen instances. By combining these attributes, we can effectively capture the subtle differences between the unseen classes, thus mitigating the challenges associated with open-set scenarios. **W2**: Thanks for your comment. Random cropping cannot always guarantee that parts smaller than the patch size remain complete: such parts still have a chance of being split by a crop boundary. However, this should not be a concern for model training, since patches are randomly re-cropped at every iteration (random cropping is a standard data augmentation strategy), so the patches differ from those of previous iterations. Small discriminative parts that are split at one iteration will not always be split in other iterations. 
This variability in patches brings an additional advantage when dealing with occluded visual attributes. It improves the generalization ability of our model, allowing it to better handle occlusions and enhancing its overall performance. **W3**: VAPNet successfully captures global attributes with a high probability. We achieve this by integrating large-scale patches that cover 1/4 of the original image within our attribute exploration module. When small objects lie within these patches, VAPNet can directly capture their global attributes. However, it is important to note that these attributes serve as supervisory signals rather than object descriptors. Consequently, VAPNet transforms these global attributes into parameters. This allows us to effectively capture the global attributes of other large-scale objects using the parameters fine-tuned by the global attributes produced by those small objects. **W4**: This is due to the lack of attribute annotations, which makes it challenging to establish explicit relationships between attributes and guarantee specific characteristics for attributes. However, these limitations do not hinder the exploration of visual attributes for describing unknown classes. VAPNet is designed to capture diverse visual attributes and convert them into parameters for retrieval models. When provided with a local view, it can translate it into a set of attributes. Although this attribute set is not directly used for object description, it serves as supervisory signals to fine-tune the retrieval model's parameters. This process allows the retrieval model to automatically disentangle and store the attributes in its parameters. Thus, the model can produce precise attributes to describe diverse visual content and effectively capture the discriminative differences among unknown classes. As you correctly pointed out, VAPNet significantly enhances the local representation of the model. 
With this enhancement, VAPNet can accurately translate an input object into a set of visual attributes, leveraging its exceptional local perception. **W5**: Thanks for your valuable corrections. We will rectify this error in the revised version and meticulously refine our paper. **Q1**: Each local view includes its associated position information $(V_w, V_h, V_x, V_y)$, where $(V_w, V_h)$ represents the width and height of the local view, and $(V_x, V_y)$ denotes the center location of the local view. To extract local features from global features, we leverage the coordinate information of local views and employ the RoI Align operation proposed in Mask RCNN (He et al., ICCV 2017). **Q2**: The global view depicted in Fig. 3 is generated by the baseline network, rather than VAPNet. In contrast to Fig. 3, Fig. 4 is generated by VAPNet. The baseline network, which is supervised by class signals, tends to selectively learn partial regions that are easier to reduce the current training empirical risk for the seen categories. Consequently, it focuses more on specific local regions rather than capturing comprehensive details and information from all sides. **Q3**: The size of the randomly cropped patches does indeed have an impact on the performance. As the size of discriminative details may vary, it is important for the patch size to be flexible. Thus, we define four scales of patches to cover a wide range of detail scale. However, it is challenging to ensure that all scales of visual content are adequately covered solely through randomly cropped sizes. It is important to note that the cropped size, the number of patches (M), and the dimension of visual attributes (k) are independent of each other. Specifically, VAPNet aims to collect diverse visual attributes and transform them into model parameters, rather than utilizing them as the final representation. Hence, the number of patches is not a critical factor. 
Additionally, the size of patches only determines the richness of visual semantics, while the dimension of visual attributes indicates the complexity of these attributes. Therefore, they are not directly related. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. My main concerns are addressed; I update my rating: 4 -> 5. --- Reply to Comment 1.1.1: Title: Response to Reviewer uknX Comment: Thanks for raising your score! We're very encouraged that our rebuttal largely addressed your concerns, and we appreciate your support for the paper's acceptance.
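As a side note on the Q1 answer above: each local view's center-format coordinates $(V_x, V_y, V_w, V_h)$ must first be converted to corner format before any RoI-style cropping of the global feature map. The following minimal sketch uses hypothetical function names and a simple integer crop in place of RoI Align's bilinear sampling, purely to illustrate the indexing:

```python
def view_to_box(vx, vy, vw, vh):
    """Convert a local view's center-format coordinates (V_x, V_y, V_w, V_h)
    into (x1, y1, x2, y2) corners, the format RoI-style ops expect."""
    return (vx - vw / 2.0, vy - vh / 2.0, vx + vw / 2.0, vy + vh / 2.0)

def crop_from_feature_map(fmap, box, stride):
    """Nearest-integer crop of a 2-D feature map (list of rows) given an
    image-space box and the backbone's downsampling stride. Real RoI Align
    uses bilinear sampling; this crop only shows the coordinate mapping."""
    x1, y1, x2, y2 = (int(round(c / stride)) for c in box)
    return [row[x1:x2] for row in fmap[y1:y2]]

# hypothetical 8x8 feature map at stride 4 (i.e. from a 32x32 image)
fmap = [[r * 8 + c for c in range(8)] for r in range(8)]
box = view_to_box(vx=16, vy=16, vw=16, vh=8)   # a 16x8 view centred at (16, 16)
crop = crop_from_feature_map(fmap, box, stride=4)
assert len(crop) == 2 and len(crop[0]) == 4    # 8/4 rows, 16/4 columns
```

In the paper's pipeline this role is played by the RoI Align operation from Mask R-CNN, which additionally interpolates sub-pixel locations.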
Summary: Firstly, they introduce a novel approach to address the problem of open-set fine-grained retrieval settings. They transform the retrieval model, which is typically trained using image-level supervisions for category semantic prediction, into attribute modeling. This transformation helps alleviate the challenges posed by open-set fine-grained retrieval. Secondly, they propose a Visual Attribute Parameterization Network (VAPNet) that distills visual attributes from various semantics observed in seen fine-grained objects. These attributes are then transcribed into parameters within the retrieval model. This parameterization allows for the precise representation of unknown categories based on their transformed parameters derived from visual attributes. The authors conduct extensive experiments to evaluate their method's performance on open-set fine-grained retrieval tasks. The results demonstrate that their proposed approach, VAPNet, brings significant benefits. It achieves an average accuracy gain of 8.6% compared to the recent state-of-the-art work on three open-set fine-grained retrieval benchmarks. Strengths: The novelty presented in this paper revolves around the proposed Visual Attribute Parameterization Network (VAPNet) and its application in open-set fine-grained retrieval tasks. The authors introduce VAPNet as a novel approach to handle unknown categories in such tasks. VAPNet focuses on distilling visual attributes from semantic clues observed in known instances. These visual attributes serve as supervisory signals to fine-tune the retrieval model. By doing so, the authors transform the retrieval model, originally trained with image-level supervisions for category semantic extraction, into attribute modeling. This transformation enables precise representation of unknown categories based on parameters supervised by visual attributes. 
As a result, VAPNet effectively addresses the challenges associated with encountering instances from unseen novel categories. Furthermore, the authors highlight the simplicity and flexibility of the overall retrieval pipeline enabled by VAPNet. They emphasize that the proposed method surpasses state-of-the-art approaches by a significant margin, demonstrating the effectiveness of attribute modeling when dealing with unknown categories in fine-grained retrieval tasks. Weaknesses: 1. The framework generally follows some contrastive learning works. The input pair – image and patch – can be regarded as two views in contrastive learning, as used in RegionCL [1]. Additionally, using KL divergence as an objective for distillation has been proposed in RepDistiller [2]. 2. The authors claim to solve the task of open-set fine-grained retrieval. However, the experiments on open-set image retrieval are missing in the paper. 3. Why are most of the SOTA works reported in Table 2 proposed for deep metric learning instead of image retrieval? Additionally, why are some benchmark datasets for image retrieval, such as CIFAR, COCO, and Oxford, not used here? 4. There are lots of confusing concepts in the manuscript. For example, what do “c” and “k” in “… weight matrix W_L ∈ R^c x k …” in Line 179 and Line 194 refer to? What does “sampler” in L189 refer to? What does “L_s” in Eq. (8) refer to? 5. There are some typos. For example, the “A” in Eq. (2) should be in bold. [1] Xu, Yufei, et al. "RegionCL: Exploring Contrastive Region Pairs for Self-supervised Representation Learning." ECCV 2022. [2] Yonglong Tian, et al. "Contrastive Representation Distillation." ICLR 2020. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please reply to the concerns in the Weaknesses section during the rebuttal. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments! Below, we discuss each of the reviewer's concerns, and explain how we plan to address them in the revised version of our manuscript. **W1**: Thank you for your insightful comments. We appreciate your corrections and would like to provide further clarification regarding the similarities and differences between our VAPNet framework and existing contrastive learning and knowledge distillation approaches. Indeed, our VAPNet framework shares some similarities with contrastive learning schemes, such as RegionCL [1]. The input pairs, consisting of an image and a patch, can be seen as two views in contrastive learning. However, the goals of our VAPNet differ from those of self-supervised learning schemes like RegionCL. While self-supervised learning aims to learn consistent probability distributions by attracting positive pairs and repelling negative pairs, our VAPNet utilizes local and global views to generate attribute pairs as supervisory signals. These attribute pairs are then used to transform the retrieval model from category prediction to attribute modeling. Similarly, you correctly mentioned that KL divergence is commonly used in knowledge distillation, as seen in techniques like RepDistiller [2]. However, the purpose of knowledge distillation is to make the output of a student network mimic that of a teacher network by constraining their probability distributions using KL divergence. In contrast, in our VAPNet, we employ KL divergence to ensure the distribution consistency between attribute pairs generated by local and global views within a single network. This allows us to iteratively refine the visual attributes during training and simultaneously use them as supervisory signals to fine-tune the retrieval models, resulting in effective attribute parameterization. 
Although input pairs and KL divergence have been widely used in various contexts, our VAPNet effectively combines them within a joint network and leverages their collaboration to transform the retrieval model from category prediction to attribute modeling. This unique integration makes our VAPNet specifically designed for open-set tasks, enabling the effective utilization of knowledge learned from seen categories to identify unseen categories. These characteristics offer valuable insights and practical applications for real-world scenarios. Thank you for your thoughtful feedback, and we will make sure to incorporate these additional explanations and clarifications in the revised version of our paper. **W2**: Thank you for bringing up this concern. We have indeed conducted experiments to evaluate the generalization ability of our VAPNet in open-set scenarios. As mentioned in Section 4.1, we have selected the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets, which are commonly used in open-set fine-grained retrieval tasks. These datasets are split into seen categories for training and unseen categories for evaluation, simulating real-world scenarios where new classes can emerge. In the experiments section, we report all the results under the scenario where our VAPNet is trained on the seen classes but tested on the unseen classes. This setup allows us to assess the performance of our method in handling unknown or novel query images, which is a key aspect of open-set retrieval. **W3**: Thank you for your question and feedback. In Table 2, we present the results for retrieving unseen classes. Metric learning is designed to quantify the similarity between two images independent of their class information. As a result, metric learning can be trained on seen classes and utilized for retrieving unknown classes. 
However, traditional image retrieval methods often suffer from being trapped in the discriminative knowledge of seen classes and may struggle to handle unseen subcategories effectively. Therefore, most of the state-of-the-art works reported in Table 2 are proposed for deep metric learning rather than image retrieval. Regarding the choice of benchmark datasets, general image retrieval datasets such as CIFAR, COCO, and Oxford cover a wide range of object categories, and their settings usually share the same classes in both training and testing. However, in our experiments, we specifically focused on open-set settings to simulate real-world scenarios where new classes can emerge. Therefore, we chose fine-grained datasets containing visually similar classes, which can be easily split into seen categories for training and unseen categories for evaluation. **W4**: Thanks for your valuable corrections. In $W_L$, "c" denotes the channel number of input features, while "k" denotes the dimension of visual attributes. "Sampler" refers to the RoI Align operation used to crop the local features from the global features based on the coordinates of local views. We apologize for mistakenly writing the auxiliary constraint "$L_a$" instead of "$L_s$". In the revised version, we will provide a clear explanation of these potentially confusing concepts and rectify any errors. **W5**: Thank you for your valuable feedback and for pointing out the typos in our manuscript. We appreciate your careful review and have made the necessary corrections, including ensuring that the "A" in Eq. (2) is in bold. --- Rebuttal Comment 1.1: Comment: I have read the comments from other reviewers (especially the three Borderline reject comments) and the rebuttal. I continue to be borderline toward this paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer yK4j Comment: Thank you, Reviewer yK4j, for taking the time to review our rebuttal. 
If you have any remaining concerns after the rebuttal, we would be happy to resolve them before the end of the Author-Reviewer Discussions. Thank you again for your suggestions and for reviewing our work. Sincerely, Authors
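The KL-based distribution consistency described in W1 above — aligning the attribute distributions produced by a local view and its global view within a single network — can be sketched as follows. The attribute logits are hypothetical and this is not the authors' implementation:

```python
import math

def softmax(logits):
    # normalize attribute logits into a probability distribution
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i); zero iff p == q
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# hypothetical attribute logits predicted from a local view and its global view
local_attrs = softmax([1.2, 0.3, -0.5, 2.0])
global_attrs = softmax([1.0, 0.4, -0.4, 1.8])

# the consistency term penalizes divergence between the two distributions
kl = kl_divergence(local_attrs, global_attrs)
assert kl >= 0.0                                         # KL is non-negative
assert kl_divergence(local_attrs, local_attrs) < 1e-12   # zero for identical
```

Minimizing such a term pulls the two views' attribute distributions together, which matches the rebuttal's contrast with teacher-student distillation: here both distributions come from the same network.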
Summary: The paper addresses the problem of fine-grained image retrieval, where the model must identify subtle object attributes in order to distinguish between visually similar classes. To achieve this, the paper proposes a Visual Attribute Parameterization Network to localize object attributes in images to enhance performance. Specifically, the paper designs an Attribute Exploration Module that extracts local patches from image features and projects them into an attribute space. Moreover, an Attribute Parameterization Module is introduced to iteratively refine visual features for fine-grained retrieval. The paper conducts experiments on three datasets, CUB, Stanford Cars, and FGVC Aircraft, for retrieval tasks. Strengths: + The direction of fine-grained image retrieval without attribute annotation is interesting, with impactful real-world applications. + The idea of detecting attributes by using local image patches is sensible and has been demonstrated by prior works to be effective for fine-grained classification [6,11]. + The paper shows promising retrieval improvements on the CUB and Cars datasets. Weaknesses: + The claim that "we are the first to transform the retrieval model trained by image-level supervisions from category semantic prediction into attribute modeling" is not well supported. To be specific, many works approach the problem of fine-grained recognition without attribute annotations [A, B] and also leverage the idea of attribute localization, just like the proposed method. Thus, the main difference between the proposed method and SOTA is simply the retrieval task instead of the classification task, which lacks novelty. + The reviewer is not yet convinced by the significance of the experimental results, as the paper lacks strong comparisons with appropriate baselines such as [A, B] that are specifically designed to capture fine-grained attributes without annotation. Thus, it is unclear to the reviewer how effective the proposed method is compared to SOTA. 
+ The experiment datasets are small in scale and might not be challenging enough for fine-grained classification. Specifically, the CUB, Cars, and Aircraft datasets might not have a diverse number of attributes per class. Thus, the reviewer has some doubts about the effectiveness of the proposed method. It would be more convincing if experiments were conducted on recent fine-grained datasets such as DeepFashion [19] or iNaturalist [C]. [A] Lin et al., Bilinear CNN Models for Fine-Grained Visual Recognition. [B] Ding et al., Selective Sparse Sampling for Fine-Grained Image Recognition. [C] Van Horn et al., The iNaturalist Species Classification and Detection Dataset. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments as well as constructive suggestions. Below, we discuss each of the reviewer's concerns, and explain how we plan to address them in the revised version of our manuscript. **W1**: Thank you for raising this point and providing references to related works. We appreciate your feedback and apologize for any confusion caused by our previous claim. As you correctly pointed out, it's true that there have been previous works that approach fine-grained recognition without attribute annotations and leverage ideas of attribute localization. However, we believe that our claim of being the first to transform the retrieval model trained by image-level supervisions from category semantic prediction into attribute modeling for identifying unknown categories is still well-supported. The key distinction lies in the open-set fine-grained retrieval task, which poses the challenge of **retrieving unseen subcategories using knowledge learned from seen subcategories**. This is different from the closed-set learning setting typically used in fine-grained recognition. Therefore, there are few studies that explore visual attributes from seen classes without attribute annotations and utilize these attributes to identify unknown classes. Specifically, works like Bilinear CNN [A] and S3N [B] focus on capturing discriminative details and identifying seen subcategories. However, when directly applying these fine-grained recognition methods to the open-set fine-grained retrieval task, the models tend to **get stuck in the discriminative knowledge of seen subcategories** and may struggle to handle unseen subcategories effectively. In VAPNet, we aim to address this challenge by transforming the retrieval model trained by image-level supervision into attribute modeling. 
By dissecting various fine-grained object semantics and capturing numerous attributes from seen categories, we aim to improve performance in recognizing unseen subcategories. While prior fine-grained recognition studies have also extracted visual attributes without additional attribute annotations, their acquired attributes are specific to seen categories and may not be useful for recognizing unseen subcategories. In contrast, our VAPNet method aims to capture attributes from seen data that aid in recognizing unseen subcategories. We appreciate your feedback and will make the necessary adjustments to our claim to better reflect our contribution in the revised version of the manuscript. Thank you for bringing this to our attention, and we apologize for any confusion caused. **W2**: Thank you for your feedback and for raising the concern about the significance of our experimental results. Although Bilinear CNN [A] and S3N [B] were originally devised for recognizing seen subcategories, using them directly in retrieval tasks for unseen classes may result in poor performance. Concretely, to provide a clearer comparison, we have included the recall@k results for Bilinear CNN and S3N, as well as our VAPNet method, in the table below:

Method | Recall@1 | Recall@2 | Recall@4 | Recall@8
----------|----------|----------|----------|----------
Bilinear CNN | 67.4% | 78.7% | 86.3% | 91.2%
S3N | 65.4% | 77.2% | 85.6% | 90.7%
Our VAPNet | 76.2% | 84.6% | 90.1% | 94.0%

From the results, it is evident that using Bilinear CNN and S3N directly in the retrieval tasks for unseen subcategories leads to lower recall@k compared to our VAPNet method. This highlights the challenge of using feature extractors trained in closed-set scenarios with classification supervision to detect distinguishing variations from unseen subcategories, which ultimately affects the retrieval performance.
In contrast, our VAPNet method focuses on acquiring visual attributes instead of relying solely on discriminative cues. This enables us to better comprehend the unknown categories and accurately represent their distinguishing variations, leading to a significant improvement. **W3**: Thank you for your valuable suggestions. We understand your doubts and agree that conducting experiments on recent fine-grained datasets, such as DeepFashion [19] or iNaturalist [C], would provide a more convincing evaluation. In response to your suggestion, we have performed an additional experiment on the large-scale DeepFashion dataset to further validate the effectiveness of our proposed VAPNet. The DeepFashion dataset offers an open-set retrieval setting, where 3,997 classes are used for training and the remaining 3,985 classes are used for testing. Due to time limitations, we will consider conducting experiments on iNaturalist in future work to further verify the effectiveness of VAPNet. To provide a clearer comparison, we have included the recall@k results for our VAPNet method, as well as the state-of-the-art methods CEP [4] and PNCA [29], in the table below:

Method | Recall@1 | Recall@10 | Recall@20 | Recall@30 | Recall@40
----------|----------|----------|----------|----------|----------
CEP [4] ECCV20 | 90.6% | 98.0% | 98.6% | 98.9% | 99.1%
PNCA [29] ECCV20 | 90.9% | 98.2% | 98.9% | 99.1% | 99.4%
Our VAPNet | 93.9% | 98.7% | 99.1% | 99.4% | 99.6%

From the results, it is evident that our VAPNet method achieves superior performance compared to other state-of-the-art (SOTA) methods, CEP and PNCA, on the large-scale DeepFashion dataset. By leveraging the visual attributes learned from known instances to identify category-specific discrepancies, our VAPNet demonstrates impressive generalization capabilities.
We greatly appreciate your professional feedback, and we will incorporate these additional experiments in the revised version of the paper to further demonstrate the effectiveness and generalization ability of our proposed VAPNet method. Thank you for your valuable input.
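For reference, the recall@k protocol used in the comparisons above can be sketched as follows. This is a minimal, hypothetical implementation of our own (cosine similarity over L2-normalized embeddings, with the query's self-match excluded), not the authors' code; the function name and the toy data are illustrative only.

```python
import numpy as np

def recall_at_k(embeddings, labels, ks=(1, 2, 4, 8)):
    """Fraction of queries whose top-k retrieved items (excluding the
    query itself) contain at least one item of the same class."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)        # exclude self-retrieval
    ranked = np.argsort(-sim, axis=1)     # most similar first
    labels = np.asarray(labels)
    out = {}
    for k in ks:
        hits = [labels[i] in labels[ranked[i, :k]] for i in range(len(labels))]
        out[k] = float(np.mean(hits))
    return out

# Toy check: two well-separated classes give perfect recall@1.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(recall_at_k(emb, [0, 0, 1, 1], ks=(1, 2)))
```

In open-set evaluation, as discussed in the rebuttal, the gallery consists only of unseen subcategories, so the metric directly measures how well attributes learned from seen classes transfer.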
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper works on an open-set fine-grained retrieval task. To align novel categories and base categories in a unified space, the paper proposes a Visual Attribute Parameterization Network (VAPNet) to learn visual attributes from known categories and parameterize them into the retrieval model. Extensive experiments on open-set fine-grained retrieval datasets validate the superior performance of VAPNet over existing solutions. Strengths: - The motivation of this paper is sufficient and reasonable. Adopting a series of visual attributes to represent an image makes it easier to embed unknown categories and known categories into a unified space. - The experimental results show this paper has achieved SOTA performance across three datasets. Weaknesses: - The visual attributes seem to be a series of locally discriminative patch-level features obtained by contrastive learning, which is a very implicit way to extract attributes. With the development of large language models (LLMs), it may be possible to represent visual attributes as structured language descriptions through an LLM-driven multi-modal model. Please discuss this. - The additional attribute-extraction model may introduce extra computation. A detailed discussion of computation and parameter counts should be included. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No limitations discussion is provided in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and thanks for your valuable feedback. **W1** : Thank you for your insightful comments. As highlighted by the reviewers, recent multimodal large models, such as CLIP (Radford et al., ICML 2021) and BLIP2 (Li et al., Arxiv 2023), have shown success in learning the mapping between visual content and language concepts by training on visual and language tasks together. However, directly representing the visual attributes extracted by VAPNet as structured language descriptions poses a challenge. For example, one of the main difficulties arises from the fact that VAPNet and the text encoder of CLIP operate in different representation spaces. This semantic gap between our VAPNet and CLIP model makes it challenging to determine the most suitable language description for the visual attributes based on similarity between attribute and text embeddings. Similarly, BLIP2 encounters a comparable challenge as CLIP. Due to the absence of a semantic association between our VAPNet and the large language model in BLIP2, we are unable to directly input the attribute information into the large language model to generate the language description of visual attributes. One possible solution is to feed the patches used for visual attribute extraction into the image encoder of CLIP or BLIP2. This approach can help mitigate the modality gap problem. However, it is important to note that CLIP and BLIP2 are trained on image-text pairs that describe general categories rather than fine-grained attributes. This raises challenges in representing fine-grained visual attributes and may lead to ambiguities in language descriptions. We appreciate the opportunity to discuss this forward-thinking question and eagerly look forward to further discussions. We welcome any guidance or input to improve our response. **W2**: Thank you for raising this important concern. 
We carefully analyze the computation and parameter numbers for our proposed VAPNet method and will include a detailed discussion in our revised manuscript. As shown in the table below, we compare the parameters, retrieval embedding extraction times, and recall@1 performance of our VAPNet method with a baseline model.

Method | Parameters | Time | Recall@1
----------|---------------|--------|-----------
Baseline | 23.50M | 21.7ms | 69.5%
Our VAPNet | 24.55M | 21.7ms | 76.2%

From the comparison, we can observe that our VAPNet method achieves a recall@1 performance improvement of 6.7% while only adding an additional 1.05M parameters compared to the baseline model. It is important to note that the increase in parameters is relatively small in proportion to the overall model size. Additionally, during testing, the retrieval embedding extraction time remains the same as that of the baseline model, as the additional attribute exploration modules (AAM and APM) are only used during training. Based on these results, we believe that the increase in computation and model complexity introduced by our additional attribute exploration model is minimal and acceptable. We will add this discussion to our revised manuscript to provide a comprehensive analysis of the computation and parameter numbers. We sincerely appreciate the time and effort you have invested in carefully reviewing our paper. We eagerly await your response and value your insights and feedback. --- Rebuttal Comment 1.1: Comment: thanks, all responses have been read and will be taken into account --- Reply to Comment 1.1.1: Title: Response to Area Chair mu4e Comment: Thanks a lot for taking the time. We are very encouraged by your response and would like to express our gratitude for your support! We believe that our proposed VAPNet, specifically designed for open-set tasks, plays a significant role in advancing practical applications for real-world scenarios. We hope our responses address all of your concerns.
Please feel free to raise further questions or concerns after you read our rebuttal. Thank you very much! Authors
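As an aside, the attribute-to-text matching discussed in W1 (scoring candidate language descriptions against a visual-attribute embedding) reduces to a nearest-neighbour search under cosine similarity, provided both live in a shared space — which is exactly the semantic gap the authors describe. A minimal sketch of our own, with made-up embeddings standing in for VAPNet attribute features and CLIP-style text features (all names and data are hypothetical):

```python
import numpy as np

def best_description(attr_emb, text_embs, descriptions):
    """Return the description whose text embedding is most
    cosine-similar to the visual-attribute embedding."""
    a = attr_emb / np.linalg.norm(attr_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = t @ a
    return descriptions[int(np.argmax(scores))], scores

# Toy stand-ins: three candidate attribute phrases in a shared 4-d space.
texts = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
phrases = ["red wing patch", "curved beak", "striped tail"]
attr = np.array([0.1, 0.9, 0.0, 0.1])
print(best_description(attr, texts, phrases)[0])  # → "curved beak"
```

In practice, bridging the two representation spaces (e.g. by a learned projection) would be the hard part, as the rebuttal notes.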
Neural Sampling in Hierarchical Exponential-family Energy-based Models
Accept (poster)
Summary: The paper introduces the hierarchical exponential-family energy-based model as a biologically plausible mechanism for the brain to interpret the external world. The authors first describe the learning/inference and the generation process of the model; the dynamics are local and biologically plausible, achieved by introducing a set of fast interneurons for the log-partition functions. They further show that adding an adaptation mechanism can accelerate the sampling process. Finally, the authors present extensive numerical results to show that the model achieves good generation quality and exhibits representations similar to those in biological visual systems. Strengths: 1. This paper presents a nice combination of theoretical analysis and extensive numerical results to demonstrate that the model exhibits multiple desired properties (including the acceleration effect of the adaptation mechanism, generation quality, and representations). 2. The authors show multiple interesting aspects of their model that are very relevant to the neuroscience community, including the acceleration effect of neural adaptation and the similarity between the representations in the biological visual system and the model, thereby providing new insights into the possible advantages/origins of well-known neural mechanisms/representations in the context of Bayesian inference in the brain. Weaknesses: The paper is very well written in general and establishes nice connections between its findings and neural representations in the visual system. However, the comparison with neuroscience is limited to well-known experimental findings; it would be nice if the authors could add a discussion to highlight several possible experimental predictions of their model. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1.
Eq. 4: It is a little confusing: before the second equality there is the KL divergence, which is defined by Eq. 2 with averaging over $z$ and $x$, whereas after the equality it is actually $p_{\theta}(x,z)$ for a single sample $\{x,z\}$. Maybe it would be better to introduce separate notations for the random variables $x$ and $z$ (perhaps capital letters) and the realizations of $x$ and $z$. 2. Lines 99-100: If I understand correctly, both joint generation and marginal generation can generate samples following $p_{\theta}(x)$? Why say “However, in order to get the marginal distribution $p_{\theta}(x)$” here? 3. What could the fast interneurons $\epsilon_l$ correspond to in the visual circuitry? 4. Is there an intuitive explanation for why the marginal method always seems to perform better than the joint method for generation? 5. Fig 6D: As the proportion of both simple and complex cells decreases with the number of layers, do you see neurons with more complicated receptive fields (objects, etc.)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We acknowledge the encouraging and valuable comments of the reviewer, and would like to address the concerns of the reviewer in detail below. **Weaknesses:** We are grateful for the issues the reviewer highlighted. In the present study, as the first step, we aim to establish a framework, which addresses a particularly challenging problem in theoretical neuroscience, i.e., achieving simultaneous integration of biologically plausible local learning and sampling-based inference within a hierarchical network structure. In our forthcoming work, we will explore in detail how model predictions can be experimentally validated. Recently, we have also developed a keen interest in inputs with temporal information and are conducting relevant experiments on stimuli with serial dependence [1]. We will incorporate some discussions about the latest developments in the revised manuscript. [1] Fischer, Jason, and David Whitney. "Serial dependence in visual perception." Nature neuroscience 17.5 (2014): 738-743. **Questions:** 1. Thanks for the suggestion. In the revised version, we will distinguish between random variables and their samples. For example, Eq. (2) will be revised as follows: $\nabla D_{KL}\left[p_{true}(\mathbf{x})\parallel p_{\theta}(\mathbf{x})\right]=-E_{\tilde{\mathbf{x}}\sim p_{true}(\mathbf{x})}E_{\tilde{\mathbf{z}}\sim p_\theta(\mathbf{z}|\tilde{\mathbf{x}})}\left[\nabla_\theta\ln p_\theta({\tilde{\mathbf{x}},\tilde{\mathbf{z}}})\right]$, where $\mathbf{x},\mathbf{z}$ are random variables and $\tilde{\mathbf{x}}, \tilde{\mathbf{z}}$ are samples. 2. Indeed, both joint generation and marginal generation can produce samples following $p_{\theta}(\mathbf{x})$. We recognize that our expression was unclear.
To clarify, joint generation involves simultaneous sampling in the x-space and z-space, leading to x's marginal distribution following $p_{\theta}(\mathbf{x})$; whereas, marginal generation first samples in the z-space and then generates x according to $p_\theta(\mathbf{x}|\mathbf{z})$, which also results in the x's marginal distribution conforming to $p_{\theta}(\mathbf{x})$. We will clarify this in the revised manuscript. 3. Please see **rebuttal to all reviewers (part 2)**. 4. Joint generation involves simultaneous sampling in both x-space and z-space according to $p_\theta(x,z)$. On the other hand, marginal generation first samples from z-space according to $p_\theta(z)$, and after obtaining $z$ samples, proceeds to sample from x-space following $p_\theta(x|z)$. Considering the time complexities of $O(m)$ for x-space and $O(n)$ for z-space sampling, joint generation has a time complexity of $O(m*n)$, while marginal generation's complexity is $O(m+n)$. This implies that marginal generation is more likely to yield high-quality samples as its sampling space is smaller. 5. We have investigated not only neurons of orientation tuning, but also neurons tuned to high-order features [1] and neurons responsive to image categories. However, the proportion of these neurons was extremely low, constituting less than 1%. Therefore, we chose not to report these results. Although only a small number of neurons exhibit selectivity for categories at the single-neuron level, information about categories can still be linearly decoded from the population response. We used neural representations in each layer to train a linear SVM to discriminate the categories of CIFAR-10 images. We found that the classification accuracy increases with the layer hierarchy (Figure 6G), with the final layer achieving an accuracy exceeding 60%, significantly surpassing the random baseline accuracy of 10%. [1] Julesz, Bela. "Textons, the elements of texture perception, and their interactions." 
Nature 290.5802 (1981): 91-97. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I appreciate the response and clarifications by the authors. I'm still confused about the joint/marginal generation though. For marginal generation, in principle don't you need to sample with O(n) time complexity conditioned on each z sample? Then the total time complexity would still be O(m*n)? --- Reply to Comment 1.1.1: Comment: We apologize if our previous explanation was unclear. Let us provide a clearer explanation. To generate an observation $\tilde{\mathbf{x}}$, we need to sample a pair $(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$ from the distribution $p_\theta(\mathbf{x},\mathbf{z})$, where $\tilde{\mathbf{x}}$ represents the desired observation, such as an image. In informal terms, the process of sampling $(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$ can be understood as searching for this specific pair $(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$. We assume that the size of the x-space is $O(m)$ and the size of the z-space is $O(n)$. In joint generation, when sampling the pair $(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$, the search is conducted simultaneously in both the x-space and z-space, resulting in a required search space of $O(m*n)$. In marginal generation, when sampling the pair $(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$, the process involves initially searching in the z-space according to $p_\theta(\mathbf{z})$. Once $\tilde{\mathbf{z}}$ is found, it is fixed. This step's search space size is $O(n)$. Then, in the x-space, $\tilde{\mathbf{x}}$ is searched based on $p_\theta(\mathbf{x}|\tilde{\mathbf{z}})$. This step's search space size is $O(m)$, leading to a combined required search space size of $O(m+n)$. When the sampling efficiency is comparable for both generation methods, the needed time is roughly proportional to the size of the search space.
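To make the joint/marginal distinction above concrete, here is a toy sketch of our own (not the authors' model): marginal generation first samples z with unadjusted Langevin dynamics on log p(z), then draws x from p(x|z) with z held fixed. The linear-Gaussian model, step size, and function names are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(grad_logp, dim, steps=2000, eps=1e-2):
    """Unadjusted Langevin dynamics: u <- u + eps*grad + sqrt(2*eps)*noise."""
    u = np.zeros(dim)
    for _ in range(steps):
        u = u + eps * grad_logp(u) + np.sqrt(2 * eps) * rng.standard_normal(dim)
    return u

# Toy hierarchy: z ~ N(0, I), x | z ~ N(W z, sigma^2 I).
W = np.array([[1.0, 0.5], [0.0, 1.0]])
sigma = 0.3

# Marginal generation: first z via Langevin on log p(z), then x given the fixed z.
z = langevin_sample(lambda z: -z, dim=2)   # grad log N(0, I) = -z
x = W @ z + sigma * rng.standard_normal(2)
print(x)
```

Joint generation would instead update (x, z) simultaneously using gradients of log p(x, z); the two-stage structure sketched here is what keeps the search spaces additive, O(m+n), rather than multiplicative.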
Summary: This paper proposes a new model for sampling-based neural inference. The model posits that the brain attempts to match the marginal distribution $p(\mathbf{x})$ of an internal generative model $p(\mathbf{x}|\mathbf{z})$ (based on a neural representation $\mathbf{z}$) to the observed distribution of sensory inputs. In the model, sampling occurs via Langevin dynamics. The contributions of the paper are 1) use of a hierarchical exponential family model, 2) introduction of a second latent representation $\mathbf{u}$ (putatively identified with interneuron dynamics) responsible for estimating the demeaned residual $\phi(\mathbf{x}) - A'(\boldsymbol{\eta})$, and 3) use of a second-order Langevin sampling scheme. Experiments show that the resulting generative model can perform well on toy data sets, and there are further claims about relations to neural data in the experiments. This is a potentially interesting idea that I found to be muddled by a lack of justification of several modeling choices and unclear exposition, particularly in discussing the rationale and results of the experiments. Strengths: - The neural sampling framework has been an interesting line of inquiry over the previous decade, particularly in thinking about how the brain might implement Bayesian inference, and this work furthers that approach. - I found the use of a hierarchical model and the second latent representation to be interesting technical twists that are potentially powerful. - The paper attempts to align both the additional latents and the second-order Langevin dynamics with known features of neural physiology. Weaknesses: - The paper is written as if its main contribution will be to theoretical neuroscience, but the experiments present evaluations of the generative model on toy data sets, and the experiments underlying Figure 6 have an unclear rationale. - Overall, the paper has some deficits in presentation. For instance, section 2 is somewhat confusingly written.
There is some parallel material that might be merged, and better motivation should be given for the proposed choices. Why Langevin dynamics? Why not some other sampling method? Similarly, I didn't find the diagrams in Figure 1 particularly helpful in giving an intuition for the math. - Likewise, the rationale behind most of the experiments is not clearly explained in the text, particularly those underlying Figure 6. Captions are too sparse to explicate what the results mean, and the discussion in text only barely explains the relevant neuroscience background. I have no idea what analysis was done for Figures 6A and 6B, nor why. Figures 5 and 6E are impossibly small. - ll. 120-23: This is a pretty compressed discussion of previously proposed approaches and perhaps difficult to follow for a non-expert reader. - More generally, identifications of model constructs with neuroscience findings ($\mathbf{u}$ with interneuron activity, $\mathbf{v}$ with spike-rate adaptation) are simply stated without any real justification. I'm not saying these choices are impossible to justify, but no real arguments that these are plausible assumptions are given. Again, if this is supposed to be a generative model loosely based on neuroscience, that can be fine, but if it's a theoretical neuroscience model, it's not clear these choices have been fully thought through, and the experiments should ideally be focused on demonstrating that the model can perform like the brain in some perceptual task. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - ll. 152-156: It's clear from (17) that $\mathbf{v}$ represents a noise term with temporal autocorrelation. In typical neural models, this is assumed semi-empirically as the result of temporal correlations among a large number of inputs to a given system.
Is there an intuition the authors can give in this text for why it is better to identify (17) with spike-frequency adaptation as opposed to the existence of autocorrelated noise in synaptic inputs? - It's not very clear from the results presented what benefits hierarchy in processing brings. Can the authors show in some more convincing way which posterior features hierarchy specifically helps to capture? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: - The model in the paper assumes a feedforward generative model rather than a recurrent model, as in most brain circuits. - The assumption $\tau_u \ll \tau_z$ (line 128) is a strong one, and one presumes it is justified by the relatively higher firing rates of cortical interneurons compared with principal neurons. But can this be justified from firing rates alone? Is there an argument that needs to be made about relative equilibration times of neuronal voltage dynamics for these two types of cells to substantiate this? - It's clear from the material in the supplement that some versions of the model cannot deal with multimodal posteriors, and even the second-order Langevin dynamics proposed will have difficulty mixing well in high-dimensional latent spaces. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thorough review and valuable suggestions. We hope the following explanations will help alleviate the concerns raised in the weaknesses, questions, and limitations. **Weaknesses:** 1. As the reviewer correctly pointed out, our primary contribution is to theoretical neuroscience. Previously, Bayesian brain models have mainly focused on the inference process by using a pre-defined generative model and rarely considered how the generative model was learned. Here, we have taken a step further by explicitly modeling the learning process within the generative model framework. This inclusion may potentially give us insight into the learning mechanism in the brain's perceptual system. In Section 5.1, we employed several generative datasets to assess the learning capabilities of different energy models, and these evaluations help us to identify energy models of higher expressive power that can learn complex data distributions. We used three datasets. Notably, CIFAR10 bears the closest resemblance to the natural scenes the brain encounters. Therefore, we leveraged the representations learned from CIFAR10 to draw comparisons with neural representations in the brain in Section 5.2, and the comparisons indicate that a HEE-based generative model may underlie the mechanism by which the brain represents external-world information. This forms the rationale behind Section 5.2 and Figure 6. 2. We appreciate the feedback on our presentation, and we will improve Section 2, Figure 1, and all other parts of the paper for better clarity. The reason we chose Langevin dynamics as an example to introduce our overall framework in Section 2 is that Langevin dynamics has been used in previous works to successfully model sampling-based neural inference (see, e.g., [1-2]), which helps readers to understand our framework easily (we will add this underlying rationale in the revised manuscript).
In Section 4, we also discussed alternative sampling methods the brain might employ, such as second-order Langevin dynamics. 3. Thanks for the comments and suggestions. We will include the analysis details of Figure 6 and enlarge Figure 5 in the revised version. Figure 6 compares the learned representations of the HEE model from CIFAR10 (a natural image dataset) with real brain representations in several facets, aiming to demonstrate the potential role of a generative framework in neural information representation. 4. We are grateful for the suggestions, and we will elaborate on and integrate the short discussions on previous methods that appear in isolation in several parts of the paper (lines 34-38, lines 120-23, and lines 255-261). 5. As pointed out by the reviewer, our current work lacks a thorough exploration of the model's correspondence with the real neural system. Here, as the first step, our primary goal is to establish a framework, which addresses a particularly challenging problem in theoretical neuroscience, i.e., achieving simultaneous integration of biologically plausible local learning and sampling-based inference within a hierarchical network structure. But, as pointed out by the reviewer, we should at least give some justifications for the potential biological plausibility of the model setting, which we will certainly do in the revised manuscript. **Questions:** 1. Thanks very much for the insightful suggestions. Indeed, interpreting eq. (17) as extracting the autocorrelation of inputs is biologically more reasonable. We will take this advice in the revised manuscript. 2. Previous research has widely discussed the advantages of hierarchical networks for information representation/learning.
Compared to single or double-layer structures, deep networks can better approximate complex probability distributions p(x), enabling deeper layers to extract more abstract features [5], and these features further facilitate downstream tasks in the brain, such as decision making. In our work, Section 4 (lines 172-179) addresses the impact of network depth on convergence speed while keeping the total number of neurons fixed. Notably, an optimal network depth exists, as depicted in Figure 3D. Furthermore, our experiments demonstrate that as the number of layers increases, neural representations exhibit an enhanced linear discriminative capacity for object categories (Figure 6G), while neurons with orientation tuning decrease in number (Figure 6D). We will expand this discussion in the revised version. **Limitations:** 1. Please note that although our generative model adheres to a Markovian process and can be perceived as a feedforward process in the graphical model (Figure 2A & Eq.(7)), the neural implementation of our generative model actually involves recurrent connections between neurons, resembling a recurrent neural network (Figure 2B & Eq.(9-10)). Modeling brain circuits involves neural networks rather than graphical models. This is also true for other generative models [1-4]. 2. We thank the reviewer for offering two theoretical approaches that could potentially substantiate this hypothesis. Please see **rebuttal to all reviewers (part 2)** for further discussion. 3. In our framework, we need suitable energy models that can express multimodal distributions (as this is essential for capturing the statistics of natural images). Therefore, in Section 5.1, we assessed the expressive powers of different energy models and selected the one capable of representing multimodal distributions for the subsequent study. We are deeply grateful for the comments of the reviewer, which have provided significant insights for improving our work.
We are committed to addressing these issues comprehensively in the revised version, and we hope that the reviewer can raise the score accordingly. [1] Rodrigo Echeveste, Nature Neuroscience, 2020. [2] Agnieszka Grabska-Barwinska, NeurIPS, 2013. [3] Nessler, Pfeiffer, PLoS Computational Biology, 2013. [4] Hennequin, NeurIPS, 2014. [5] DiCarlo, Neuron, 2013. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses to my concerns. While I do think there are interesting ideas here, I am concerned by the amount of revision that would be required to a) more fully situate this paper in a theoretical neuroscience context and b) perform appropriate comparisons of their model to similar approaches. --- Reply to Comment 1.1.1: Title: Revision and future work Comment: Thank you for summarizing the improvements we need to make in the revised version. In order to address the reviewer's concerns and to keep ourselves on track for a better revision of the paper, we outline the changes we are planning to incorporate in the revision. Additionally, we detail the future work that will follow this study. We provide real-time updates on the progress in our **reply to all reviewers**.
Summary: In this article the authors introduce a new model, called the 'Hierarchical Exponential-family Energy' (HEE) model, that is biologically plausible and captures the dynamics of inference and learning in the brain. The model introduces multiple layers to decompose the EBM normalizing constant in a bio-plausible fashion, and leverages a neural adaptation mechanism to make the sampling process compatible with 'biological time'. In a series of experiments on a synthetic dataset, Fashion-MNIST, and CIFAR10, the authors demonstrate the generation abilities of the HEE model. In addition, the authors demonstrate that the representations elicited by the HEE model are similar to those observed in biological systems. Strengths: This article offers an interesting and valuable perspective on Energy-Based Models (EBMs), taking inspiration from neuroscience. The proposed solution to the intractable normalizing constant is original and elegant, and its link to predictive coding is interesting. Overall, there are plenty of good ideas in this article. Weaknesses: My biggest concerns are related to the experimental part: (1) ablation studies that would validate the mathematical choices of the authors are missing, and (2) in general the experiments lack careful description. In addition, (3) the last part concerning the similarities between biological and HEE representations lacks a better comparison with other models. See more detailed comments below: 1) One of the innovations of this article lies in the decomposition of the intractable partition function using multiple layers. Intuitively I do see the advantage of the proposed method compared to the buffering or amortization methods used in standard EBMs. But I expected a comparative experiment between the 3 different methods to confirm the advantage of the proposed one.
The comparison with the IEBM of [1] is not conducted carefully enough to conclude the superiority of the proposed method: the parametrization seems to be different (CNN versus fully connected, plus number of parameters) and other improvements are introduced (e.g., neural adaptation) that might bias the comparison. In general I would suggest more ablation experiments that test the benefits of all the HEE components separately. 2) The networks are poorly described in the experimental part. What is the exact parametrization of the network (dimensions of the fully connected network)? What is the exact sampling procedure you use in the HEE: do you first wait for a stable point in the sampling of the latent variable x_l before sampling the next one? Or do you do one step of sampling for all the variables until you reach a global stable point? Such implementation details might lead to very different equilibrium states and should at least be discussed. Also, how many sampling steps are you using? Or are you enforcing a stopping criterion (based on the prediction error?) to decide when to stop the sampling process? 3) The experimental results in Section 5.2 lack comparison with other similar models. Nothing shows that the results are not generic to all networks (rather than specific to HEE). Could you compare the HEE representations with those obtained from other EBMs (or even other generative models)? The idea would be to demonstrate that an EBM compatible (in terms of implementation) with biology produces representations more aligned with biology. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * You did not include any comparison with other EBMs for Fashion-MNIST or the synthetic dataset. Is there a reason for that? The synthetic dataset would have offered the perfect opportunity to conduct the comparison described previously.
* One of the major difficulties in the IEBM [1] is the lack of stability of the training procedure (partly due to the mode-traversing issue in the negative phase). Have you faced the same issue with the HEE? * Line 200: I don't understand the sentence: "The dropout technique is employed as a method to mimic the receptive field behavior found in biological systems." Could you further explain? * Have you tried a convolutional architecture? Don't you think it would improve the results? * Is the difference between HEE-NL-A and HEE-NL only the number of layers? This should be clarified. * In Table 1, could you define LS and SLD? * In Fig. 5, the samples seem to be of visually lower quality than those of the IEBM [1], but the FID and IS are better. Do you have an explanation? In particular, in the samples you present, the textures seem rather uniform. * What is the impact of the number of layers on the quality of the generation? Typos: Line 65: EMB -> EBM. Line 135: PCN is used but not defined yet. [1] Du, Yilun, and Igor Mordatch. "Implicit generation and generalization in energy-based models." arXiv preprint arXiv:1903.08689 (2019). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: Limitations have been properly addressed by the authors Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We acknowledge the encouraging and valuable comments of the reviewer, and would like to address the reviewer's concerns in detail below. **Weaknesses:** 1. Please see **rebuttal to all reviewers (part 3)**. 2. We recognize that the current paper lacks detailed experiment descriptions, and we commit to adding them in the revised manuscript. We will also incorporate a diagram illustrating the neuronal connections and network parameters. The network parameters, denoted as $\theta_l$, represent the connection weights between $x_{l+1}$ and $\varepsilon_l$ (dashed arrows in Figure 2B). In the experiment on the CIFAR-10 dataset, 80% of the elements of $\theta_l$ are randomly set to zero permanently, effectively disconnecting 80% of the connections between $x_{l+1}$ and $\varepsilon_l$. We refer to this operation as "dropout" following the terminology in machine learning, but perhaps "sparse connectivity" is a better term. Our sampling approach performs one step of sampling for all variables at a time until a global stable point is reached. This approach aligns better with biological plausibility, and we have not yet explored hierarchical sampling. We have presented the details of the training and generation processes in the Supplementary Material (Section 3). During training, we stop sampling after 300 $\tau_x$ time steps, using the Euler method with a time step of $dt = 0.01\tau_x$. 3. We appreciate the suggestion and plan to incorporate these experiments and results in the revision. We are considering adding data from IEBM and PixelCNN [1]. Additionally, in future work, we intend to compare our model's results with the neural prediction accuracy benchmarks [2], by using our model's representations to predict real brain neuron representations. [1] Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016. [2] Zhuang, Chengxu, et al. "Unsupervised neural network models of the ventral visual stream."
Proceedings of the National Academy of Sciences 118.3 (2021): e2014196118. **Questions:** 1. Because our primary focus is on theoretical neuroscience, our work aims to present a biologically plausible learning method. The purpose of the experimental section is to demonstrate that our proposed model (a generative model) could potentially underlie the mechanism behind perception in the brain. The experiments conducted on the 2D synthetic and Fashion-MNIST datasets in Section 5.1 were primarily intended to showcase the learning capabilities of our proposed 'local learning' approach. We aimed to compare different sampling methods and identify models with favorable performance. Subsequently, the study using the CIFAR10 dataset was motivated by its inclusion of natural images that closely resemble the visual inputs received by the brain. This evaluation aimed to provide evidence that our model indeed learns the underlying distribution of these natural images. This serves as a foundation to support the discussion in Section 5.2, where we explored the model's representations and their alignment with real brain neuron representations. In the revised manuscript, we will include performance comparisons with IEBM on the 2D synthetic datasets and the Fashion-MNIST dataset. 2. Since we have not trained IEBM ourselves, we cannot pinpoint the exact reason for the instability it encountered. However, we did face instances of sudden gradient increases and model instability during training. After careful analysis, we identified that an uneven data distribution during learning led to inflated eigenvalues of the Jacobian matrix of the sampling dynamics in Eq. (10). To mitigate this, we introduced a regularization term to the function $g(x)$, as described in the Supplementary Material (Section 3 and $g(x)$ in Table 1). 3. We randomly set 80% of the elements of $\theta_l$ to zero, effectively disconnecting 80% of the connections between $x_{l+1}$ and $\varepsilon_l$.
This implies that during inference, each neuron in the second neural network layer directly receives only 20% of the information from the first layer, and this setting extends to subsequent layers (each neuron in the third layer can receive a fraction of the first layer's information equal to 1 - 0.8 * 0.8). This concept closely resembles the receptive field concept in the visual system. 4. We have not attempted this approach, as it would compromise biological plausibility (weight sharing is not biologically plausible). To better compare with IEBM, we plan to explore a CNN structure in future work. 5. HEE-NL-A and HEE-NL differ in their sampling methods. HEE-NL employs Langevin sampling without adaptation, while HEE-NL-A incorporates second-order Langevin dynamics for sampling. 6. LS stands for Langevin sampling, and SLD for second-order Langevin dynamics. We will provide detailed explanations in Table 1 in the revised manuscript. 7. Our IS (6.47) is indeed lower than that of the 10-ensemble approach of IEBM (6.78), while our FID is better. Simply speaking, IS measures the quality of generated images, while FID assesses the similarity between generated images and those within the dataset. Therefore, in terms of these scores, our generated image quality might not be as high as IEBM's, but our generated images are closer to CIFAR10 than IEBM's are. Visually, the perceived lower quality and uniform texture might be attributed to the small size of Figure 5; additionally, the images not being vector graphics could contribute to this. In the revised version, we will address this concern by including larger generated images in the Supplementary Material, which will provide a better opportunity to observe the finer details of the generated images. 8. For Fashion-MNIST, we tried L=5, 10, 15, and 30. L=5 yielded the worst result, L=10 and L=15 exhibited similar performance, and L=30 faced gradient explosion during training.
Hence, we employed L=10 on the CIFAR10 dataset. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Sorry for the late feedback. I appreciate the detailed answer from the authors. I can't really update my rating without seeing a proper (and well-controlled) comparison between the proposed method and the IEBM... I think this paper would be greatly improved if the authors can include such a comparison (and also more details concerning the experimental part). --- Reply to Comment 1.1.1: Comment: As we are unable to submit the revised article at this moment, we can only provide a brief report here on the comparison with IEBM after controlling for more variables. **CIFAR-10 Unconditional:**

| Model | IS | FID | Sampling method | Network structure | Parameters |
| :----: | :----: | :----: | :----: | :----: | :----: |
| IEBM | 6.78 | 38.20 | Langevin | ResNet | 5M |
| HEE (previous) | 6.47 | 37.05 | second-order Langevin | Fully connected | 4M |
| HEE (controlled) | 7.07 | 33.37 | Langevin | CNN (without skip connections) | 4M |
| HEE (controlled) | 7.12 | 32.10 | Langevin | CNN (without skip connections) | 5M |

We found that the CNN structure not only produces higher generation quality but also reaches steady state in a shorter time. During training, the fully connected structure requires $300\tau_x$, whereas the CNN structure only requires $100\tau_x$. When generating images, the fully connected structure requires $100\tau_x$, while the CNN structure only needs $50\tau_x$. We will provide a more detailed report of this result in the revised version.
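For reference, the FID numbers compared above are Fréchet distances between Gaussians fitted to feature statistics of real and generated images. A minimal NumPy sketch of that distance (the means and covariances here are illustrative placeholders, not the actual Inception-feature statistics used in the rebuttal):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    # FID-style Frechet distance between two Gaussians:
    # ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})
    diff = mu1 - mu2
    # Tr((cov1 cov2)^{1/2}) computed via the symmetric form
    # (cov1^{1/2} cov2 cov1^{1/2})^{1/2}, which keeps everything real.
    w1, v1 = np.linalg.eigh(cov1)
    sqrt_cov1 = (v1 * np.sqrt(np.clip(w1, 0.0, None))) @ v1.T
    w = np.linalg.eigvalsh(sqrt_cov1 @ cov2 @ sqrt_cov1)
    tr_sqrt = np.sqrt(np.clip(w, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_sqrt)

# Identical Gaussians give distance 0; shifting one mean by a unit vector gives 1.
mu, cov = np.zeros(2), np.eye(2)
d0 = frechet_distance(mu, cov, mu, cov)
d1 = frechet_distance(mu, cov, mu + np.array([1.0, 0.0]), cov)
```

Lower FID therefore means the generated-feature Gaussian sits closer to the data-feature Gaussian, which is why a model can have a worse IS but a better FID, as discussed above.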
Summary: In this submission, the authors introduce a biologically plausible method to train a hierarchical energy-based model, as well as to perform inference over it, via Langevin sampling. This accomplishes a goal that has long been sought in computational neuroscience, namely, neurally plausible methods for inference over a hierarchical generative model of the world. Previously, the main problem facing inference in hierarchical energy-based models has been the difficulty of evaluating the partition function. The authors address this issue by modeling the partition function with Langevin sampling, but on a faster timescale than the Langevin sampling used for inference more generally. This appears to work well empirically. Also underlying their model's impressive generation performance is the use of a momentum term in the Langevin sampling, similar to Hamiltonian Monte Carlo, which they associate with short-term adaptation in the brain. Learning in this system is Hebbian and biologically plausible, and additionally the authors show some rough parallels with biological phenomena, e.g., that the inferred activations of selected neurons have orientation- and hue-specific tuning curves. Strengths: I was impressed with the quality of samples generated by this hierarchical EBM, and all the more so because the learning process is so simple and Hebbian in nature. The model aligns well with a wide literature in computational neuroscience, including hierarchical predictive coding networks, and it extends previous EBM models in an important way by enabling a pure sampling-based inference procedure. The model combines multiple previous ideas into a single framework, and the approach feels natural. Weaknesses: Although generation was impressive, this is also a proposal for Bayesian inference, and there were no serious benchmarks quantifying inference. Some thought should be put into evaluating known benchmarks for inference over hierarchical energy-based models.
Many examples are possible, e.g., hierarchical Gaussian mixture models. Clarity was a major issue. This was difficult to read. If this is going to be widely read, it needs to be rewritten. I'll describe specific problems below. First, there was not sufficient context from the surrounding literature. The introduction says rather little about EBMs. Otherwise it is difficult to know this work's specific contribution, as many of these ideas have separately been introduced before (H-EBMs, Hamiltonian MC, etc.). Also, for example, Section 2 should summarize previous hierarchical EBMs, as currently it reads like the authors' original proposal (which it is not). The related work section should be greatly expanded. There were also not enough details about the experimental methods. I would certainly not be able to reproduce these figures from the manuscript. If there is not enough room, at least please put more details in the supplementary figures. For clarity, I would recommend adding more detail about the biological circuit proposal. It is difficult for me to imagine the connectivity of the inhibitory neurons. (Do they need to project from one area to another, for example?) A figure with biology in mind would help. More details about biology should be added, as right now it is rather vague how this would be implemented in the brain. In addition, many sentences state a controversial hypothesis as a known fact. These should be weakened considerably. For example, Line 46: "In this study, we show that our brain holds an intrinsic energy-based model". 'Show' implies a certain conclusion. Better would be 'hypothesize' or 'propose'. Line 70: "We demonstrate that our brain holds an intrinsic energy-based model". Again, 'demonstrate' is conclusive. You are demonstrating that the brain *might* use such a model. Line 1: "The brain engages in probabilistic inference". This is just a hypothesis, although a popular one.
Neural networks can appear to be Bayes-optimal even though they are not at all probabilistic internally. See Orhan, A. Emin, and Wei Ji Ma. "Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback." Nature Communications 8.1 (2017): 138. **Minor comments:** The word 'society' is used where I think the authors want 'community'. In general the sentences are often awkwardly phrased. I recommend asking for editing help, perhaps using a large language model for grammar. Section 4: this seems to be Hamiltonian Monte Carlo. Is that correct? If so, this should be cited (e.g., Neal (2010)), as well as previous HMC-in-the-brain proposals, like the Aitchison and Lengyel (2016) paper cited elsewhere. If it is incorrect, at least please cite SLD. Line 31: Many have argued that variational methods are biologically plausible. See for example: D. Rezende and W. Gerstner, "Stochastic variational learning in recurrent spiking networks," Frontiers in Computational Neuroscience, vol. 8, p. 38, 2014, doi: 10.3389/fncom.2014.00038. Line 84: "avoid complex calculations". Could you be more specific? Lines 121-124: this sentence is unclear and needs to be unpacked. What is the amortized generation method, etc.? Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - How long does convergence take to produce the samples shown in the manuscript? - What is the connectivity of the interneurons? What cell types might these be? What evidence is there for their faster time constants? - I don't fully understand the circuit diagram for biological neurons. Could a schematic be drawn with individual neurons, complete with a table of the key properties of those neurons in this theory? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: There was not much mention of any limitations of this method. What are the least biologically plausible aspects? What aspects need to be experimentally confirmed? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We acknowledge the encouraging and valuable comments of the reviewer, and would like to address the reviewer's concerns in detail below. **Weaknesses:** a) Quantifying inference: Thanks for the suggestion of incorporating benchmarks into the inference section. This aligns with our future plan. Besides the benchmarks you mentioned, we intend to compare our results with "neural prediction accuracy" benchmarks [1]. This benchmark entails using our model's representations to predict real brain neuron representations. [1] Zhuang, Chengxu, et al. "Unsupervised neural network models of the ventral visual stream." Proceedings of the National Academy of Sciences 118.3 (2021): e2014196118. b) Clarity: Thanks for the suggestions; in the revised version, we will 1. provide a more comprehensive introduction to EBMs to ensure a clear understanding of our work; 2. largely rewrite the Figure 6 caption to provide detailed descriptions for each subfigure; 3. merge Figures 1 and 2 and present the neural connection diagram; 4. improve the writing, diligently addressing the parts highlighted by the reviewer, alongside other instances. c) Minor comments: In Section 4, the second-order Langevin sampling we used indeed bears similarity to Hamiltonian Monte Carlo, although they are not entirely the same. Both methods fall within the broader family of sampling techniques that use auxiliary variables to accelerate the sampling process [1]. Neal (2010) first introduced the concept of accelerating sampling using auxiliary variables, and Aitchison and Lengyel (2016) extended this to neuroscience by employing a group of inhibitory neurons as extra variables. In our work, we propose using the adaptation current as an auxiliary variable. We will appropriately cite these references in the revised version. As pointed out by the reviewer, the potential application of variational methods in the brain has been explored before.
Some studies have also compared variational methods with sampling-based methods to determine their relative merits [2]. Sampling-based methods face convergence challenges, while variational methods involve both inference and generation stages during training, posing difficulties in multi-layered structures. In our current work, we primarily discussed the potential of sampling-based methods. Determining the brain's exact strategy requires further investigation, and we will duly address variational methods in the revised version. The inference process of a generative model involves calculating the posterior distribution $p(z|x)$, where $p(z|x)=p(x,z)/p(x)$. Computing the denominator, $p(x) = \int p(x,z) dz$, becomes challenging when the dimensionality of $z$ is high, as this integral can lead to complex calculations. Variational methods circumvent this by approximating $p(z|x)$ with a tractable distribution $q(z|x)$, thus avoiding the need to calculate $p(x)$. In sampling methods, like ours, we only require knowledge of $\nabla_z \ln p(z|x)$ to obtain samples that follow the distribution $p(z|x)$. Since $\nabla_z \ln p(z|x)= \nabla_z \ln p(x,z)$, we also avoid the need to calculate $p(x)$. [1] Ma, Yi-An, Tianqi Chen, and Emily Fox. "A complete recipe for stochastic gradient MCMC." Advances in Neural Information Processing Systems 28 (2015). [2] Grabska-Barwinska, Agnieszka, et al. "Demixing odors - fast inference in olfaction." Advances in Neural Information Processing Systems 26 (2013). **Questions:** a) Convergence speed depends on network size and depth. For the CIFAR10 dataset, with a network of 10 layers and a total of 150k neurons, typically 50-100 $\tau_z$ steps are needed for high-quality image generation. b) Regarding Figure 2B, the interneurons $\varepsilon_l$ and the principal neurons $x_l$, $x_{l+1}$ are connected.
The $\varepsilon_l$-$x_l$ connections follow one-to-one pairing, and the $\varepsilon_l$-$x_{l+1}$ connections are established via the weight matrix $\theta_l$. The internal connections of $\varepsilon_l$ are determined by $g'(u_l)$, as described in Eq. (9). The discussion of interneurons is addressed in the **rebuttal to all reviewers (part 2)**. c) In the current manuscript, it is not possible to discern the specific neuronal connections from the figures alone; one would need to refer to the corresponding dynamic equations, Eq. (9-10), to understand the precise connections. We will address this shortcoming in the revision by providing an illustration of the neuronal connectivity diagram. **Limitations:** *"What are the least biologically plausible aspects?"* As you and several other reviewers have pointed out, the assumption $\tau_u \ll \tau_z$ (line 128) is indeed a strong one. The rationale behind this assumption is that during the inference process described by Eq. (8), we need real-time knowledge of the value of $A'(\eta_l)$. However, we compute $A'(\eta_l)$ through sampling (as in Eq. (9)), and sampling inherently introduces some temporal delay. To minimize this delay, we need to ensure that the timescale of sampling $A'(\eta_l)$, denoted $\tau_u$, is much smaller than the timescale of the inference dynamics, $\tau_z$. Nevertheless, we have also identified some empirical evidence supporting the validity of this assumption in the reply to all reviewers. *"What aspects need to be experimentally confirmed?"* 1. Our model assumes spike-frequency adaptation (SFA) as the adaptation term that accelerates the sampling process (as discussed in Section 4). If one could manipulate the SFA strength experimentally, this assumption could be tested. 2. Additionally, our model offers substantial flexibility, including the choices of $\phi(x)$, $\eta_l$, and $g(x)$.
Currently, our selections primarily consider their impact on the model's learning capability (as discussed in Section 5.1), without necessarily aligning with real neural systems. In the future, we could conduct experiments to validate the appropriateness of these choices. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: I'm happy to see that all reviewers engaged thoroughly, and that the authors are taking all such feedback seriously. If the authors do seriously implement all of the proposed changes, I think this would be a valuable paper at NeurIPS. Without actually seeing this revision, though, I cannot in good faith yet change my score. However, I want to emphasize that my criticisms are not about the soundness of the proposed method, but only about its contextualization, clarity, the depth of explicit relation to biology, and overall presentation, all of which can (in principle) be addressed in revision. --- Reply to Comment 1.1.1: Title: Revision and future work Comment: Thank you for acknowledging our work. In order to address the reviewer's concerns and to keep ourselves on track for a better revision of the paper, we outline the changes we are planning to incorporate in the revision. Additionally, we detail the future work that will follow this study. We provide real-time updates on the progress in our **reply to all reviewers**.
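The point made in the rebuttal, that sampling needs only $\nabla_z \ln p(x,z)$ and never the normalizer $p(x)$, can be checked on a toy conjugate-Gaussian model whose posterior is known in closed form. This is a sketch with our own illustrative parameters, not the paper's network:

```python
import numpy as np

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1), with observed x = 2.0.
# The exact posterior is N(1, 0.5). Langevin dynamics only needs
# grad_z log p(x, z) = (x - z) - z, so p(x) is never computed.
X_OBS = 2.0

def grad_log_joint(z):
    return (X_OBS - z) - z

def langevin(grad, z0, n_steps=4000, dt=0.01, seed=0):
    # Euler discretization of Langevin dynamics:
    # dz = grad log p dt + sqrt(2 dt) * Gaussian noise
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z = z + dt * grad(z) + np.sqrt(2.0 * dt) * rng.standard_normal(z.shape)
    return z

# 5000 independent chains; their final states approximate the posterior N(1, 0.5).
samples = langevin(grad_log_joint, np.zeros(5000))
```

The empirical mean and variance of `samples` should land near the analytic posterior values 1.0 and 0.5, up to discretization and Monte Carlo error.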
Rebuttal 1: Rebuttal: **Reply to all reviewers** We greatly appreciate the thorough review of our paper by all four reviewers. The positive feedback in the reviews has effectively highlighted the contributions of our work; we summarize this aspect in Part 1. Additionally, we address the common questions raised about our work in Parts 2 and 3. **Part 1** Previously, Bayesian brain models primarily emphasized the inference process, utilizing pre-defined generative models, and rarely delved into how the generative model was acquired. In this work, we have extended beyond this by explicitly incorporating the learning process within the hierarchical generative model framework. This addition could potentially offer insights into the learning mechanism of the brain's perceptual system. Furthermore, our brain-inspired energy-based model (EBM) also presents a technique for estimating the partition function in EBMs, which is a challenging problem within the machine learning community. **Part 2** As reviewers zTHc, tHDe and moHy pointed out, $\tau_u\ll\tau_z$ (line 128) is a strong assumption. We acknowledge that the current paper lacks a discussion of this assumption and of interneurons. Therefore, we provide additional discussion here and will incorporate it into the revised version. There are various types of interneurons that target pyramidal cells, comprising approximately 10-20% of the overall neuron population in the cerebral cortex [1]. The interneurons in the HEE model bear the closest resemblance to the Large Basket Cells or Nest Basket Cells [2], which collectively constitute around 50% of interneurons. Their electrophysiological characteristics include fast-spiking, non-accommodating, and non-adapting behaviors. These interneurons have also been identified in the visual cortex of ferrets [3], displaying short-duration action potentials (approximately 0.5 ms at half height).
This suggests that these neurons have shorter time constants than pyramidal cells. [1] Therese Riedemann. Diversity and function of somatostatin-expressing interneurons in the cerebral cortex. International Journal of Molecular Sciences, 20(12):2952, 2019. [2] Markram, Henry, et al. "Reconstruction and simulation of neocortical microcircuitry." Cell 163.2 (2015): 456-492. [3] Descalzo, Vanessa F., et al. "Slow adaptation in fast-spiking neurons of visual cortex." Journal of Neurophysiology 93.2 (2005): 1111-1118. **Part 3** As reviewer nSfE pointed out, it would be better to compare our model with other methods for partition function estimation. However, we find that for the joint distribution in our model (belonging to the hierarchical exponential family), implementing amortized generation and implicit generation is challenging. The difficulty arises from the fact that, in our model, besides requiring the partition function during training (Eq. 11), we also need the partition function during sampling (Eq. 8). In contrast, amortized generation and implicit generation do not consider the partition function during the sampling process. Thus, if we compared our model with the other two methods, the joint distributions they employ would differ from the one used in our model, making the comparison unequal; we would need to carefully design experiments to address this issue. As the reviewer pointed out, our comparison with IEBM lacks controlled variables. In terms of connectivity, IEBM employs a CNN architecture, while our model employs a fully connected structure. While our model could also be adapted to a CNN-like structure, the weight-sharing operation in CNNs is not biologically plausible, as it requires synaptic changes over a large spatial extent to be coordinated. Therefore, in our model implementation, we chose a fully connected architecture instead of a CNN to maintain better biological realism.
Regarding the sampling method, IEBM employs Langevin sampling, while our approach employs Langevin sampling plus adaptation. This technique, while not identical to Hamiltonian Monte Carlo (HMC), falls within the same category of sampling methods [1]. Notably, IEBM also discusses the impact of HMC and highlights the challenge of controlling the number of leapfrog simulations during training. In terms of parameter count, IEBM utilizes around 5 million parameters, while our model employs 4 million. Additionally, in terms of the loss function, our approach directly employs the log-likelihood, while IEBM incorporates an additional regularization term. Moving forward, to enhance the comparison with IEBM, we are considering a strategy that temporarily sets aside biological constraints and focuses solely on experimental outcomes: adopting a CNN architecture, employing Langevin sampling, and ensuring parity in parameter count. This will allow a direct and informative comparison with IEBM. [1] Ma, Y. A., Chen, T., & Fox, E. B. (2015). A complete recipe for stochastic gradient MCMC. Advances in Neural Information Processing Systems, 2917-2925.
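As a rough illustration of the distinction drawn above, the following sketch adds a momentum-like auxiliary variable to Langevin sampling (the role the rebuttal assigns to the adaptation current); the parameter values and the toy standard-normal target are our own illustrative choices, not the paper's settings:

```python
import numpy as np

def langevin_with_momentum(grad_log_p, x0, n_steps=6000, dt=0.01,
                           gamma=1.0, seed=0):
    # Second-order (underdamped) Langevin dynamics: an auxiliary momentum v
    # smooths the trajectory, while the stationary distribution of x
    # remains p(x). gamma is the friction coefficient.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = v + dt * (grad_log_p(x) - gamma * v) \
            + np.sqrt(2.0 * gamma * dt) * rng.standard_normal(x.shape)
        x = x + dt * v
    return x

# Target p(x) = N(0, 1), so grad log p(x) = -x; run 4000 parallel chains.
samples = langevin_with_momentum(lambda x: -x, np.zeros(4000))
```

With `gamma` large, the dynamics approach plain (first-order) Langevin sampling; moderate friction lets the momentum variable carry the chain across the state space faster, which is the acceleration the rebuttal attributes to adaptation.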
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Improving CLIP Training with Language Rewrites
Accept (poster)
Summary: CLIP uses data augmentation for image inputs but neglects the diversity of texts associated with the same image. To overcome this limitation, this paper introduces Language augmented CLIP (LaCLIP), a simple yet highly effective approach that enhances CLIP training through the strategy of language rewriting. Extensive experiments conducted on CC3M, CC12M, RedCaps, and LAION-400M datasets demonstrate the significance of LaCLIP, which outperforms CLIP by 8.2% on CC12M and 2.4% on LAION-400M in terms of ImageNet zero-shot accuracy, without imposing additional computation or memory overhead during training. Strengths: Overall, this is a good paper as it makes notable contributions to the community. Vanilla CLIP often exhibits an imbalance where the image encoder is strong while the language encoder is comparatively weaker. This is partially because CLIP has very diverse image inputs yet lacks proper data augmentation strategies for language. This work presents a very interesting solution, i.e., language rewriting with a third-party language model or human effort. The motivation behind the paper is commendable, as it addresses a very important problem for vision-language pre-training. The method LaCLIP is simple and easy to follow. The experiments are extensive, and the results are very significant. Weaknesses: My major concern lies in whether LaCLIP could be regarded as a method of knowledge transfer. Specifically, while augmenting the captions with a large language model, the CLIP text encoder actually distills some knowledge from it. So, I am wondering if LaCLIP outperforms vanilla CLIP simply because LaCLIP has a stronger text encoder. To ablate this problem, we can first load or distill from a pre-trained language model as CLIP's text encoder and then perform its training process.
There are also some minor drawbacks: LaCLIP relies on a well-pretrained language rewriter such as ChatGPT or LLaMA, which incurs a heavy computing cost for language data augmentation and prevents end-to-end training. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What is the difference between your LaCLIP and the paradigm mentioned in "Weaknesses"? 2. Does a stronger LLM always yield better LaCLIP performance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their positive comments and insightful suggestions. **[Q1. Pre-trained Text Encoder]** Intuitively, leveraging a pre-trained text encoder could be beneficial to CLIP training given its intrinsic understanding of textual context. However, earlier investigations into Locked-Image Tuning (LiT) [1] have conducted meticulous ablations, revealing that incorporating pre-trained text encoders, such as the BERT model in their case, may not yield substantial benefits. For comprehensive insights, we direct your attention to Figure 3 (top right) in their [paper](https://arxiv.org/pdf/2111.07991.pdf). The legend accompanying the figure aids in its interpretation, where each letter on the right corresponds to a distinct text encoder condition. To elaborate: * **U:** The Encoder is pre-trained * **u:** The Encoder is randomly initialized * **L:** The Encoder is pre-trained and its parameters are frozen It is crucial to focus on three distinctive lines in ImageNet 0-shot: * **uu (Orange dotted line):** Baseline performance of the vanilla CLIP. * **uU (Orange line):** Fine-tuning using a pre-trained text encoder. * **uL (Gray dotted line):** Pre-trained text encoder's parameters are frozen. Based on these results, fine-tuning based on pre-trained text models doesn't notably improve CLIP performance. Also, freezing the parameters of the pre-trained text model even leads to a performance drop when compared to training from scratch. Contrarily, our experiments demonstrate that training LaCLIP with language rewrites induces far more substantial performance improvements. This highlights the need for explicit sentence augmentation instead of relying on implicit knowledge from pre-trained text models, thus showcasing the effectiveness of our LaCLIP approach. [1] Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A. and Beyer, L., 2022. LiT: Zero-shot transfer with locked-image text tuning. In CVPR 2022.
**[Q2. Heavy computing cost]** We acknowledge the potential computational expense of pre-computing language rewrites using LLMs. For instance, on a single machine equipped with 8 A100 GPUs, rewriting the CC3M dataset requires approximately 7 hours. However, it's crucial to note that this rewriting procedure is a one-time operation for each image-text dataset. Consequently, all CLIP models trained on the same dataset can subsequently reap the benefits without incurring additional computational overhead. To facilitate further research, we are planning to release all the precomputed rewritten texts for the datasets used in our paper. This proactive step eliminates the need for researchers to undergo the rewriting process again, streamlining their work and promoting the advancement of the field. **[Q3. Stronger LLM]** The potential strengths of LLMs can be attributed to two factors: **increased model size** and **better alignment**. We address the impact of larger model sizes in Appendix I, with summarized results in the table below: < Table S1>. Zero-Shot performance of LaCLIP trained with rewrites generated with different LLaMA model sizes on CC12M. | Model Size | Downstream | ImageNet | |:----------:|:----------:|:--------:| | N/A (CLIP) | 38.8 | 40.2 | | 7B | 42.3 | 44.5 | | 13B | 41.7 | 44.8 | | 33B | 42.6 | 44.4 | | 65B | 43.1 | 44.4 | We observed that even employing the smallest LLaMA model yields remarkable enhancements over vanilla CLIP training. While increasing the LLaMA model size can lead to performance gains on specific downstream tasks, the overall impact of larger model sizes remains relatively modest. Regarding improved alignment within LLMs, we believe that employing instruction-tuned models could introduce greater diversity into the rewritten text. We leave this exploration to future research endeavors.
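The one-time offline rewriting plus cheap per-iteration caption selection described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: `rewrite_fn` is a hypothetical stand-in for a call to an LLM such as LLaMA, and the cache layout is an assumption.

```python
import json
import random


def precompute_rewrites(captions, rewrite_fn, n_rewrites=4, cache_path=None):
    """One-time offline pass: store the original caption plus N LLM rewrites.

    `rewrite_fn` is a hypothetical stand-in for an LLM call; the rewrites are
    computed once per dataset and can optionally be saved to disk.
    """
    cache = {cap: [cap] + [rewrite_fn(cap) for _ in range(n_rewrites)]
             for cap in captions}
    if cache_path:
        with open(cache_path, "w") as f:
            json.dump(cache, f)
    return cache


def sample_caption(cache, caption):
    """Per training iteration: uniformly pick one candidate caption
    (the text-augmentation step, with negligible overhead)."""
    return random.choice(cache[caption])
```

Because the cache is computed once per image-text dataset, every subsequent CLIP training run can reuse it at no extra cost, which is the property the rebuttal emphasizes.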
--- Rebuttal Comment 1.1: Title: Rebuttal follow up Comment: Dear Reviewer qFtF, Thank you for acknowledging the contributions of our work as well as your thoughtful insights once again! In addition to our discussion in the rebuttal, we have done some additional experiments with regard to your previous suggestions on the **Pre-trained Text Encoder**; we put our **new results** here and hope you find them interesting and convincing: We followed your suggestions and directly compared the pre-trained text encoder setting in our exact experiment setup on CC12M, to further show how LaCLIP differs from using pre-trained text encoders. We replaced the text encoder and tokenizer with a pre-trained BERT-Base model, and kept all other parameters the same. We tested 2 setups: *fine-tuning* the whole model and *frozen* BERT weights: < Table S2>. Zero-shot performance comparison between different pre-trained text encoder setups on CC12M | Method | Pre-trained Text Encoder | Text Encoder Freeze | Downstream | ImageNet | |---|:--|:--|:-:|:-:| | CLIP (Vanilla) | N/A (from scratch) | No | 38.8 | 40.2 | | CLIP (BERT-Fine-tune) | BERT-base | No | 42.1 | 42.9 | | CLIP (BERT-Frozen) | BERT-base | Yes | 24.5 | 23.2 | | LaCLIP | N/A (from scratch) | No | **46.2** | **48.4** | The observations align with the findings depicted in Figure 3 of LiT [1]. Fine-tuning the pre-trained BERT model exhibits some enhancement in CLIP training performance, whereas keeping the BERT encoder frozen substantially degrades performance. In contrast, LaCLIP consistently outperforms all BERT pre-training configurations, underscoring the necessity for explicit sentence augmentation strategies. [1] Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A. and Beyer, L., 2022. LiT: Zero-shot transfer with locked-image text tuning. In CVPR 2022. With 1 day remaining in the discussion phase, we would like to confirm whether our response successfully addressed your concerns.
Furthermore, we would be glad to provide any additional clarification or experiments that you'd like to see! We will integrate our rebuttal, based on your suggestions, into the final version of the paper, including the distinctions between LaCLIP and pre-trained text encoders, results with stronger LLMs, and a discussion of the computational costs. The inclusion of these aspects is poised to significantly enhance the quality of the paper! Your dedication of time and effort in reviewing our work is truly appreciated. Please let us know if you have any additional comments or questions! Best Wishes, Authors
Summary: This paper proposes a new language augmentation method for training CLIP. Specifically, a large language model is prompted, with a few examples, to generate rewrites for existing texts in image-text paired datasets. Gains are demonstrated on CC12M, RedCaps, LAION-400M, etc. compared with CLIP and SLIP. Strengths: 1. The proposed solution looks simple, taking good advantage of existing pretrained LLMs. 2. Extensive training and evaluation experiments on multiple publicly available datasets and backbones, within a reasonable computation budget. 3. Good ablations on immediate variants of augmentation strategies and ICL strategies. Weaknesses: 1. Missing comparison with or discussion of relations to existing text augmentation methods applied to CLIP training, e.g. DeCLIP [1] introduced self-supervision and multi-view supervision, which generated 2x2 pairs with both image augmentation and text augmentation such as synonym replacement, random swap, and random deletion. LaViLa [2] explored augmenting with a rephraser and even re-captioning with vision-conditioned large language models. 2. Text augmentation might be addressing the issue of limited text data in image-text paired datasets. How would text augmentation compare with a fine-tuned or frozen text encoder pre-trained on large-scale text data, e.g. with BERT, RoBERTa, or MPNet embeddings, instead of learning from scratch on image-text paired datasets? 3. The current method uses ICL on LLaMA; what about instruction-tuned models, and do we still need the three examples? 4. It would also be interesting to see examples of how language augmentation helped the learning of visual features. What are the examples that were corrected? 5. Text encoder size is worth ablating, since the proposed solution focuses on text augmentation. 6. Scaling is unconfirmed with a large dataset and a large backbone at the same time, but this is not doable given limited compute.
[1] Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm. ICLR 2022. [2] Learning Video Representations from Large Language Models. CVPR 2023. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It is preferred if there are existing results addressing the questions in the weaknesses section, but there is no need to do extra training due to the computation needed. There are related works on arxiv. Discussion is recommended but not required. [1] A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision. https://arxiv.org/abs/2112.13884. [2] Improved baselines for vision-language pre-training. https://arxiv.org/abs/2305.08675. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and insightful suggestions. **[Q1. Comparison with DeCLIP and LaViLa]** DeCLIP and LaViLa also explore the benefits of text augmentation, but their methodologies differ from ours. DeCLIP requires multiple encodings for each training image-text pair, resulting in increased training overhead. LaViLa incorporates a Narrator component that depends on video frames and necessitates additional training efforts. We made a fair comparison with them by training CLIP using the text augmentations from DeCLIP and LaViLa: DeCLIP employs EDA [1] for text augmentation, and LaViLa utilizes an open-source T5-based paraphraser [2] as its rephraser. We provide the results in Table S1, where our LaCLIP consistently outperforms both baselines. < Table S1>. Zero-Shot comparison with text augmentation baselines | Augment | Downstream | ImageNet | |---------------|:----------:|:--------:| | DeCLIP | 40.6 | 41.2 | | LaViLa | 40.4 | 41.9 | | LaCLIP (Ours) | **46.2** | **48.4** | Additional comparisons with EDA and backtranslation can be found in Table 4 of the paper and Appendix D. A comprehensive discussion of these aspects will be integrated into the final version. [1] Wei, J. and Zou, K., 2019. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. In EMNLP 2019. [2] High-quality sentence paraphraser using transformers in NLP. https://huggingface.co/ramsrigouthamg/t5-large-paraphraser-diverse-high-quality **[Q2. Pretrained Text Encoder]** Intuitively, leveraging a pre-trained text encoder could be beneficial to CLIP training given its intrinsic understanding of textual context. However, earlier investigations into Locked-Image Tuning (LiT) [3] have conducted meticulous ablations, revealing that incorporating pre-trained text encoders, such as the BERT model in their case, may not yield substantial benefits.
For comprehensive insights, we direct your attention to Figure 3 (top right) in their [paper](https://arxiv.org/pdf/2111.07991.pdf). The legend accompanying the figure aids in its interpretation, where each letter on the right corresponds to a distinct text encoder condition. To elaborate: * **U:** The Encoder is pre-trained * **u:** The Encoder is randomly initialized * **L:** The Encoder is pre-trained and its parameters are frozen It is crucial to focus on three distinctive lines in ImageNet 0-shot: * **uu (Orange dotted line):** Baseline performance of the vanilla CLIP. * **uU (Orange line):** Fine-tuning using a pre-trained text encoder. * **uL (Gray dotted line):** Pre-trained text encoder's parameters are frozen. Based on these results, fine-tuning based on pre-trained text models doesn't notably improve CLIP performance. Also, freezing the parameters of the pre-trained text model even leads to a performance drop when compared to training from scratch. Contrarily, our experiments demonstrate that training LaCLIP with language rewrites induces far more substantial performance improvements. This highlights the need for explicit sentence augmentation instead of relying on implicit knowledge from pre-trained text models, thus showcasing the effectiveness of our LaCLIP approach. [3] Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A. and Beyer, L., 2022. LiT: Zero-shot transfer with locked-image text tuning. In CVPR 2022. **[Q3. Instruction-tuned models]** For instruction-tuned models, we normally should not need the three examples. ChatGPT, an exemplar of instruction-tuned models, can perform text rewriting without explicit examples. Nonetheless, it becomes unscalable when applied to datasets containing hundreds of millions of entries due to the substantial financial and time costs incurred through API calls. **[Q4. Visualization of examples being corrected]** We added visualizations in the rebuttal Figure PDF. **[Q5.
Ablation on Text Encoder Sizes]** Please refer to the rebuttal Figure PDF for experimental details. The results indicate that altering the text encoder size alone does not significantly impact performance. Notably, for ImageNet zero-shot, an intriguing observation is that larger text encoders lead to a decline in CLIP's performance, while LaCLIP's performance improves. This suggests the potential for overfitting in vanilla CLIP with larger text encoders, which LaCLIP could potentially mitigate. However, the observed changes aren't significant enough to warrant advocating for text encoders larger than the Base model, considering the associated memory and computational overhead. **[Q6. Scaling with large dataset and large backbone]** Subsequent to the main paper deadline, we conducted experiments for LaCLIP with a ViT-B/16 backbone on the LAION-400M dataset: < Table S2>. Zero-shot performance of ViT-B/16 trained on LAION-400M. | Method | Downstream | ImageNet | |--------|:----------:|:--------:| | CLIP | 65.4 | 67.0 | | LaCLIP | 68.5 | **69.4** | The results show that LaCLIP improves CLIP training with larger backbones on the LAION-400M dataset. Notably, LaCLIP maintains a 2.4% performance gain with a bigger backbone. Our aspiration is to continually advance the frontier of state-of-the-art open-sourced CLIP models. We remain committed to exploring larger datasets and models trained using the LaCLIP framework. **[Q7. Discussion with more related works]** We thank the reviewer for noting relevant works. For the Bag-of-Words paper, its reliance on word-level operations limits diversity, and its scalability on larger datasets is unverified. Conversely, LLM-based augmentation leverages large language models to enhance content with comprehensive details. In the Improved baselines paper, multiple augmentations on the same image increase training costs. Also, their text augmentations focus on word-level operations, limiting the potential for variance.
A comprehensive discussion will be included in the Related Works section. --- Rebuttal Comment 1.1: Title: Rebuttal follow up Comment: Dear Reviewer FjR1, Thank you for the positive feedback and the insightful suggestions again! We have conducted some **additional experiments** to further resolve your concerns with regard to the **pre-trained text encoder**. Here is a summary of our new results: In addition to our discussion in the rebuttal, we directly compared the pre-trained text encoder setting in our exact experiment setup on CC12M, to further show the advantage of LaCLIP compared to using pre-trained text encoders. We replaced the text encoder and tokenizer with a pre-trained BERT-Base model, and kept all other parameters the same. We tested both of the suggested setups: *fine-tuning* the whole model and *frozen* BERT weights: < Table S3>. Zero-shot performance comparison between different pre-trained text encoder setups on CC12M | Method | Pre-trained Text Encoder | Text Encoder Freeze | Downstream | ImageNet | |---|:--|:--|:-:|:-:| | CLIP (Vanilla) | N/A (from scratch) | No | 38.8 | 40.2 | | CLIP (BERT-Fine-tune) | BERT-base | No | 42.1 | 42.9 | | CLIP (BERT-Frozen) | BERT-base | Yes | 24.5 | 23.2 | | LaCLIP | N/A (from scratch) | No | **46.2** | **48.4** | The observations align with the findings depicted in Figure 3 of LiT [1]. Fine-tuning the pre-trained BERT model exhibits some enhancement in CLIP training performance, whereas keeping the BERT encoder frozen substantially degrades performance. In contrast, LaCLIP consistently outperforms all BERT pre-training configurations, underscoring the necessity for explicit sentence augmentation strategies. [1] Zhai, X., Wang, X., Mustafa, B., Steiner, A., Keysers, D., Kolesnikov, A. and Beyer, L., 2022. LiT: Zero-shot transfer with locked-image text tuning. In CVPR 2022.
In our previous rebuttal we meticulously followed the detailed suggestions you provided, and have added all of the requested experiments, discussions, and visualizations to make our work even more comprehensive. We will add all detailed responses into the final version of the paper. Since there is only 1 day left in the discussion period, we would like to kindly ensure that the reviewer has seen our response, and we are eager to know whether there is additional clarification or experiments the reviewer would like us to offer. We would be extremely grateful if the reviewer could consider favorably updating the review if our response has effectively addressed your concerns. Thanks again for the effort and time you have dedicated to our work! Please let us know if you have additional comments or questions. Best Wishes, Authors
Summary: This paper introduces a simple text augmentation strategy to train vision-language models. Given an image-caption dataset, the core idea is to use an off-the-shelf LLM to "rewrite" image captions. Since LLM outputs do not necessarily read like image captions, the authors use in-context learning -- inputting a few example caption rewrites to the LLM in-context to generate the rewrite. The authors use this augmentation to train CLIP, which they call "Language Augmented CLIP" (or LaCLIP). LaCLIP significantly outperforms CLIP on multiple downstream tasks at various training data/model sizes. Strengths: I think this paper matches the quality of a typical publication at the NeurIPS conference. I recommend acceptance; it is relevant to the conference audience and will spur exciting discussion in the community. Below I highlight the main strengths of the paper: 1. **Simplicity:** The proposed method is conceptually simple and empirically powerful. It improves the performance of contrastive image-text models like CLIP and SLIP on many downstream vision tasks. 2. **Proposed approach has no 'online' training overhead:** Language rewrites can be performed once for the entire training dataset 'offline' and saved to disk. During training, the only extra overhead is loading all candidate captions per image (a few extra lines of text) and using a random number generator to select one candidate caption. This overhead is negligible — considering this, the empirical improvements are very appealing. 3. **Experimental thoroughness:** This paper is an excellent example of an empirically thorough study. The authors are interested in answering a single general question: How much can text augmentation benefit current CLIP-style models? The authors do an excellent exploration to answer this question: - Many design choices for language rewrites are experimented with, even a baseline that sources meta-prompts from human annotators!
- This augmentation is plugged in with two approaches (CLIP and SLIP) to show its "drop-in" benefits. - Datasets and model architectures of different sizes are used to showcase the scaling behavior of this approach. - Empirical evaluations cover a variety of downstream classification tasks (zero-shot, linear probing, few-shot) and 15+ datasets. 4. **Excellent clarity in writing and presentation:** All technical details for empirical analysis are well-stated and easy to follow. The main paper and supplementary material have adequate implementation details to aid reproducibility. All result tables are neatly organized to highlight the central messages to the reader efficiently. Weaknesses: I have a few questions/concerns which I believe should be addressed or acknowledged, and other suggestions (optional) to improve the paper. 1. **Writing can be adjusted to decouple the approach from CLIP:** The proposed approach is at its core, a data augmentation strategy for methods that train with image-text pairs. This broadly encompasses methods that perform generative training, like image captioning (e.g. VirTex, BLIP) and masked language modeling (e.g. ICMLM). The writing can be slightly reworded to convey this, along with mentioning that in this paper only one instantiation is considered -- contrastive models like CLIP. 2. **Connection with prior works like BLIP?** BLIP and other such works perform iterative training through language rewrites, using a captioning model to _replace_ training captions. The proposed approach uses language rewrites as text augmentations -- I believe that making this connection in the paper is useful. 3. **Proposed approach lacks a mechanism to remove noisy captions:** This approach expands the set of candidate captions per image through language rewrites. However, the original caption in training data may be noisy and uninformative, which is common in web-scale datasets. 
This approach does very little to remove such captions, and may not generate a semantically relevant caption through rewrites because the rewrites do not condition on the input image. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have one question related to the weaknesses I mentioned: Have the authors tried training non-contrastive vision-language models with this training strategy? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed the limitations and broader impact of their method in the appendix. I agree with the authors' assessment and believe that the discussion is sufficient. However, I urge the authors to move it to the main paper and instead transfer any side experiments or implementation details in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer very much for their positive comments and insightful suggestions. **[Q1. Improved Writing]** We extend our sincere appreciation to the reviewer for acknowledging the broader potential utility of our proposed approach. We also share the belief that the text augmentation strategy holds potential for significantly wider applications. In response to the suggestion, we will revise the main paper to present the approach in a more encompassing manner. **[Q2. Connection with BLIP]** We express our gratitude to the reviewer for establishing the connections with BLIP. Notably, the BLIP model family incorporates iterative image captioning within its training pipeline, which inevitably brings more computation and memory overhead during training. However, a key advantage of BLIP lies in the fact that the generated captions are intricately linked with the image content, potentially yielding more relevance than strategies centered solely on language augmentation. A comprehensive discussion elucidating these aspects will be thoughtfully integrated into the main paper. **[Q3. Remove noisy captions]** This is an insightful observation. Indeed, the challenge of noisy data is one that all CLIP-based methods could potentially encounter. However, the empirical findings suggest that, given the current dataset scale, the shared information tends to outweigh the noise present in the text. This dynamic enables the model to learn meaningful image-text embeddings. Intuitively, in the context of LaCLIP, if the original text contains noise, the subsequent rewritten text could exhibit reduced significance and relevance to the associated image. To explore this aspect, we conducted experiments to train a CLIP model on the CC12M dataset, using only one version of the rewritten text. Notably, the rewritten text was kept constant throughout the training process, without applying text augmentation, to ensure a fair comparison. < Table S1>.
Zero-shot performance of CLIP trained with real and rewritten captions | Caption | Downstream | ImageNet | |:---------:|:----------:|:--------:| | Real | 38.8 | 40.2 | | Rewritten | 39.0 | 40.9 | Surprisingly, the empirical results demonstrate that training CLIP models solely with rewritten text can yield comparable or even slightly superior performance compared to using real captions. This outcome underscores the advantages of text rewriting, where the inclusion of additional details within the rewrites outweighs the potential impact of noise present in the captions. We anticipate that future research could delve into methods for integrating image information into augmented texts during the rewriting process. Additionally, employing image-guided models to filter out noisy captions from datasets could be explored as a means of data cleaning. **[Q4. Non-contrastive training]** In alignment with the reviewer's suggestion, we have incorporated the Language Augmentation strategy into the training pipeline of Virtex [1], resulting in the formation of Language-augmented Virtex (La-Virtex). Due to the constrained timeframe for the rebuttal, we meticulously replicated the identical setup in their official implementation and proceeded to train two models on the CC12M dataset. Subsequently, we evaluated the performance of these models on PASCAL VOC07 through linear classification. < Table S2>. Comparison of Virtex training on VOC07 classification | Model | VOC07 | |-----------|:-----:| | Virtex | 78.40 | | La-Virtex | 80.92 | The data presented in Table S2 highlights that the incorporation of language rewrites surpasses the performance of vanilla Virtex. This outcome suggests that the language augmentation strategy could potentially be beneficial to non-contrastive vision-language model training pipelines as well. [1] Desai, K. and Johnson, J., 2021. Virtex: Learning visual representations from textual annotations. In CVPR 2021. 
--- Rebuttal Comment 1.1: Title: Rebuttal follow up Comment: Dear Reviewer YRKL, Thank you so much for liking our work and sharing your insightful comments again! As we approach the final day of the discussion period, we are reaching out to ensure that our rebuttal effectively addressed your concerns. Moreover, we would love to provide further clarification or conduct additional experiments, should you deem it necessary.
We will follow your invaluable suggestions to revise our paper and include all of the additional discussion and experimental insights from the rebuttal in the main paper. We will be meticulous in incorporating all discussions on limitations into the main paper as well. We firmly believe that the inclusion of these elements will undeniably strengthen the paper. Once more, we extend our heartfelt gratitude for your dedication and the time you've graciously devoted to reviewing our work! Please don't hesitate to let us know if there are any additional questions or suggestions! Best Wishes, Authors
Summary: This paper introduces a straightforward yet highly effective language augmentation technique for the foundational vision-language pre-training model, CLIP. The authors focus on exploring the in-context learning capabilities of large language models, such as LLaMA-7B, to produce four distinct rewrites for each text sample in the dataset. These rewrites are generated using ChatGPT, Bard, COCO, or human meta input-output text pairs, making the augmentation process relatively simple. On the experimental front, the authors present compelling evidence of strong transfer performance across multiple benchmarks, which is unsurprising given the rationale of informative text data augmentation. However, the paper could be even more impactful if the authors provided more thorough calibrations, as suggested in the weaknesses section. Overall, this work contributes a valuable language augmentation method to the CLIP paradigm, and its potential recognition within the broader community could be further solidified by addressing the weaknesses and providing additional calibrations on why and how the proposed method works. Strengths: + This work exhibits a clear and compelling motivation, along with a straightforward and elegant solution. + The experimental results presented in Table 1, Table 2, and Table 3 demonstrate the strength and efficacy of the proposed approach. + The detailed descriptions provided in the paper render it highly accessible and easy to follow for readers. Weaknesses: The authors of this paper primarily focus on reporting impressive results without conducting a thorough analysis of why their proposed method performs better. To enhance the clarity of their work and provide a deeper understanding of their approach, several key aspects need to be addressed. - First, it is essential for the authors to include the training loss and validation loss curves in their analysis.
This will help readers discern whether the proposed method improves optimization (by achieving smaller training and validation losses) or generalization (by achieving a smaller validation loss only). - Second, the main difference between the source description and the rewritten description lies in enriching the presented concepts with more detailed descriptions. However, this approach also faces the risk of hallucination, potentially generating descriptions that do not accurately reflect the facts. The authors should thoroughly analyze this issue and propose methods to alleviate it, rather than relying solely on hand-crafted instructions to guide the large language models (LLMs). - Furthermore, there is a glaring absence of critical ablation experiments in the paper. To provide a comprehensive evaluation of their method, the authors should consider the following groups of ablation experiments: 1. **The influence of the length of the rewritten description**: The authors could attempt to generate rewritten descriptions of varying lengths, such as 1.5x or 2x, or even up to 5x longer than the original descriptions. By reporting comparison results and analyzing the outcomes, readers can gain insights into the impact of description length on the model's performance. 2. **The influence of the text encoder model scale**: The authors currently opt for the smallest text encoder from CLIP, following previous work. However, it is vital to explore whether the rewritten descriptions may require a larger text encoder, especially for significantly longer rewrites. Providing detailed comparison results will offer valuable insights into the significance of the text encoder model's scale. By addressing these critical points, the authors can strengthen their work's analysis, transparency, and overall contribution to the field. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors should provide more analysis and discussion on why the proposed method performs better except for reporting straightforward comparison results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and insightful suggestions. We have thoroughly addressed the concerns raised by the reviewer below. We hope that our responses will provide greater clarity about our work and contribute positively to the assessment. **[Q1. Whether LaCLIP improves optimization or generalization]** We added the training and validation curves in the rebuttal Figure PDF. LaCLIP achieves higher validation accuracy and higher training loss, indicating that it improves generalization rather than optimization. Language augmentation makes the task more challenging and therefore improves generalization. This concept will be elaborated in the main paper. **[Q2. The risk of hallucination and methods to alleviate it]** LaCLIP might face hallucination risk when instructing LLMs to add substantial details to the original caption. However, our experiments demonstrate that more rewrites consistently boost performance. This implies that the advantages of diversified details outweigh the potential counterfactual risk. However, we acknowledge the possibility of harm to model training as the number of augmentations increases significantly. To address this concern, we have devised a strategy to identify and filter out potential outliers. In this strategy, we first train a CLIP model on real image-text pairs. Then, we employ this model to assess the rewritten texts. We identify texts with the lowest CLIP score with their paired images, as these are more likely to contain harmful hallucinations affecting model performance. We then conducted an initial experiment in which LaCLIP was trained solely on the filtered set of augmented texts (with the lowest-scoring rewrites excluded). This yielded an ImageNet zero-shot performance of 43.7%. Interestingly, this model's performance is inferior to all configurations outlined in Table S1.
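The filtering strategy just described (score each rewrite with a trained CLIP model, then drop the lowest-scoring ones) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes image and rewritten-text embeddings have already been extracted by the trained CLIP model, and the function names are hypothetical.

```python
import numpy as np

def clip_scores(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    # CLIP score = cosine similarity between each L2-normalized image
    # embedding and the embedding of its paired rewritten text.
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return (img * txt).sum(axis=1)

def keep_indices(image_embs: np.ndarray, text_embs: np.ndarray,
                 drop_frac: float = 0.25) -> np.ndarray:
    # Drop the fraction of rewrites with the lowest CLIP score -- those
    # most likely to contain hallucinated, image-irrelevant content.
    scores = clip_scores(image_embs, text_embs)
    cutoff = np.quantile(scores, drop_frac)
    return np.where(scores >= cutoff)[0]
```

With `drop_frac=0.25`, a rewrite whose embedding is orthogonal to its paired image embedding would be removed while well-aligned rewrites are kept.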
This observation implies that while this exclusion approach might remove the least relevant rewritten texts, the diversity introduced by these rewrites seems to outweigh the potential negative impact of hallucinatory content. Another direction for future exploration is to develop language rewriting techniques that take into account the corresponding images. Furthermore, as LLMs continue to progress and enhance their ability to manage hallucinatory and counterfactual errors, we anticipate an improvement in the quality of rewritten texts. This advancement, in turn, could result in an overall performance boost for LaCLIP. We intend to incorporate this aspect of the discussion into the main body of the text to provide a more comprehensive understanding. Nevertheless, our proposed language augmentation strategy remains simple and highly effective, demonstrating consistent performance gains on large-scale datasets. Our recent experiments on the LAION-400M dataset with ViT-B/16 further highlight LaCLIP's ability to yield enhancements within the context of large models and extensive datasets. With confidence, we assert that our approach has the potential to push the limits of state-of-the-art CLIP models. **[Q3.1 Ablation Experiments on Length of Rewrites]** Given that our approach employs in-context learning on LLMs to generate text rewrites, controlling the exact length of the outputs isn't straightforward. To delve into this aspect, we conducted an ablation study using the existing generated caption rewrites. The methodology involved ranking the four rewritten texts for each instance in the CC12M dataset according to their length. Subsequently, these texts were grouped into four categories, ranging from the shortest to the longest. We then trained LaCLIP using the original text along with each of these four groups, which have varying lengths.
The length statistics (relative to the original texts) and the resulting LaCLIP performance are outlined in Table S1.

<Table S1> LaCLIP performance with different text rewrite lengths on CC12M

| length | 0.5x | 0.8x | 1.0x | 1.6x |
|:-------------|:----:|:----:|:----:|:--------:|
| IN zero-shot | 44.4 | 44.5 | 45.0 | **45.2** |

The results suggest that longer rewrites tend to improve CLIP training more, and that the benefit of the increased diversity of the augmented texts outweighs any potential drawbacks related to counterfactual hallucination. **[Q3.2 Ablation Experiments on Text Encoder Scale]**

> it is vital to explore whether the rewritten descriptions may require a larger text encoder

Please refer to the rebuttal Figure PDF for experimental details. The results indicate that altering the text encoder alone does not have a significant impact on performance. Notably, for ImageNet zero-shot, an intriguing observation is that larger text encoders lead to a decline in CLIP's performance, while LaCLIP's performance improves. This suggests potential overfitting in vanilla CLIP with larger text encoders, which LaCLIP could mitigate. However, the observed changes are not significant enough to warrant advocating for text encoders larger than the Base model, considering the associated memory and computational overhead.

> especially for significantly longer rewrites.

We also study the effect of text encoder size across different rewrite lengths:

<Table S2> ImageNet zero-shot with different text encoders on different rewrite lengths

| Text encoder \ Length | 0.5x | 0.8x | 1.0x | 1.5x |
|-----------------------|:----:|:----:|:----:|:----:|
| Base | 44.4 | 44.5 | 45.0 | 45.2 |
| Large | 44.0 | 43.8 | 44.5 | 44.8 |

The results indicate that a single augmentation does not seem sufficient to mitigate the overfitting issue, and employing larger text encoders is therefore not recommended when relying solely on one augmentation.
We fully concur that integrating these analyses into our LaCLIP evaluations will render the paper more comprehensive and insightful, thereby benefiting future research endeavors. --- Rebuttal Comment 1.1: Title: Good rebuttal Comment: Thanks for your detailed responses. The authors are encouraged to add the rebuttal contents to the main paper in the future. In light of the authors' response, I have adjusted the original rating from "Borderline Accept" to "Weak Accept." --- Reply to Comment 1.1.1: Title: Thank you! Comment: We sincerely appreciate your positive feedback! We will surely add all the additional discussions and experiments to the next version of the paper. Thanks again for your insightful comments; please do not hesitate to let us know if there are any further clarifications or experiments we can offer.
Rebuttal 1: Rebuttal: We express our gratitude to all the reviewers for dedicating their time and for providing valuable feedback, positive comments, and insightful suggestions. We are pleased to note that the reviewers recognized: * The clear and compelling motivation behind our idea. [Fifz, qFtF] * The simplicity and effectiveness of our proposed method. [YRKL, FjR1, qFtF] * The comprehensive experiments that showcase the effectiveness of LaCLIP. [All reviewers] * The manuscript's well-organized structure, clear writing, and reader-friendly presentation. [Fifz, YRKL] We have included a rebuttal Figure PDF containing the training and validation curves [Fifz], ablations on text encoder sizes [Fifz, FjR1], and visualizations of corrected samples on the ImageNet validation set [FjR1]. We have also provided comprehensive responses to the specific queries raised by each reviewer. We hope our explanations can effectively address all the concerns raised by the reviewers. We extend our gratitude once again to all the reviewers for their valuable time and insightful feedback. Should there be any additional clarifications or experiments required, please don't hesitate to let us know. We sincerely hope that our efforts will result in a favorable reconsideration of the scores by the reviewers. Pdf: /pdf/1c0983523311a6037fe68ef5f14238ef67b1dae0.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation
Accept (poster)
Summary: This paper is motivated by the observation that channel values on the novel dataset have a much more uniform distribution than those on the base dataset. Based on this observation, the authors propose a new similarity metric, Kendall's rank correlation, which uses the ranking information of channels instead of their values to calculate the similarity of two embeddings. To address the non-differentiability of Kendall's rank correlation, the authors also propose a differentiable approximation for meta-learning. Experiments show that Kendall's rank correlation and the proposed differentiable approximation are useful in some cases. Strengths: 1. The idea of using the ranking information of channels instead of values to calculate the similarity of two embeddings is interesting. 2. The analysis in Section 5.4 is helpful for understanding how the magnitude of channel values affects performance at test time. Weaknesses: 1. The proposed Kendall's rank correlation is constrained to FSL methods that use a similarity metric to determine the similarity between two images, which limits its application scope. 2. The comparison in Section 4.2 is not convincing. The authors only consider a baseline that uses a cosine similarity metric, which is not a very popular design for many FSL methods. Comparing with more baselines using other similarity metrics (like the squared Euclidean distance in ProtoNet and the learnable metric in RelationNet) would help me verify the effectiveness and generality of the proposed Kendall's rank correlation. 3. The reported results in Table 1 are not clear. For example, why are the results of the baseline (cos+CE) and of CIM much lower than those in Table 1 of [1] (lower by 10~20 points)? 4. The comparison methods (e.g., CAN and ConstellationNet) in Tables 2 and 3 are not the state-of-the-art methods the paper claims them to be. Comparing with more recent FSL methods (e.g.,
[2], [3], [4]) may be more convincing in supporting the claim that the proposed method outperforms state-of-the-art methods. [1] Channel importance matters in few-shot image classification. ICML 2022. [2] Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid. NeurIPS 2022. [3] Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization. NeurIPS 2022. [4] Matching Feature Sets for Few-Shot Image Classification. CVPR 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the "weaknesses" part for all my concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Overall, this paper has limited contribution and unconvincing experimental results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The proposed Kendall's rank correlation is constrained to FSL methods that use a similarity metric to determine the similarity between two images, which limits its application scope. A1: Thank you for your feedback. In fact, research in few-shot learning can be broadly categorized into metric-learning-based and optimization-based methods, with **metric-learning-based methods constituting a significant portion**. **Similarity measurement is an essential component of metric-learning-based approaches**. Our proposed method can seamlessly integrate with this category by replacing cosine similarity with Kendall's rank correlation. On the other hand, as our method leverages Kendall's rank correlation to achieve consistency across feature channels, it can also be easily integrated with methods that address few-shot learning from other perspectives. Moreover, **the applicability of our method extends beyond the realm of few-shot learning**, as its underlying motivation stems from observing an apparent difference in feature channel values between base data and novel data. We found that, compared to base classes, when the feature extractor faces a novel class unseen during training, most feature channels have small and closely clustered values, making it difficult for the model to distinguish the importance of individual channels. In this situation, channel importance ranking can effectively accentuate the differences between channels. This property applies not only to few-shot learning but also to any task with cross-domain generalization characteristics, and is instructive for such tasks. Based on the above aspects, we are confident that our method enjoys a broad application scope, far beyond the limitations initially perceived. Q2: The comparison in Section 4.2 is not convincing. The authors only consider a baseline that uses a cosine similarity metric, which is not a very popular design for many FSL methods.
Comparing with more baselines using other similarity metrics (like the squared Euclidean distance in ProtoNet and the learnable metric in RelationNet) may help me verify the effectiveness and generality of the proposed Kendall's rank correlation. A2: Thanks for the comments. The **learnable metric-based methods** require meta-learning to train and obtain the optimal parameter values. However, in Section 4.2, we aimed to demonstrate that our approach does not require any training and solely relies on using Kendall's rank correlation in the inference phase to achieve a noticeable performance improvement. Additionally, the reason we did not include the **Euclidean distance** in our experiments is that on the unit sphere, cosine similarity and Euclidean distance are equivalent. Furthermore, in few-shot learning, only ProtoNet adopts Euclidean distance, while many subsequent works are based on cosine similarity and have shown that using cosine similarity generally yields better results than using Euclidean distance. Nonetheless, we also **conducted experiments using the learnable metric in RelationNet and Euclidean distance** (See Table B of the attached PDF in our "global" response). The experimental results demonstrate the superiority of Kendall's rank correlation over the learnable metric in RelationNet and Euclidean distance. Q3: The reported results in Table 1 are not clear. For example, why are the results of the baseline (cos+CE) and of CIM much lower than those in Table 1 of [1] (lower by 10~20 points)? A3: Sorry for any confusion caused. In fact, CIM's Table 1 reports results for the **5-way 5-shot** setting, while our Table 1 presents results for the **5-way 1-shot** setting. We also conducted experiments in the 5-way 5-shot setting (See Table D of the attached PDF in our "global" response).
As you can see, our reimplementation in fact outperforms the results reported in CIM's Table 1, and **the effectiveness of Kendall's rank correlation is demonstrated**. Q4: The comparison methods (e.g., CAN and ConstellationNet) in Tables 2 and 3 are not the state-of-the-art methods the paper claims them to be. Comparing with more recent FSL methods (e.g., [2], [3], [4]) may be more convincing in supporting the claim that the proposed method outperforms state-of-the-art methods. A4: Thank you for bringing this to our attention. We will ensure the inclusion of the methods you've highlighted in our list of citations. It is pertinent to highlight that our latest experimental results encompass comparisons with more recent FSL methods, including those you've referenced (Please refer to Table A in the PDF in our "global" response). We would like to highlight that by integrating Kendall's rank correlation into a stronger backbone, DeepEMD, **our method achieves SOTA performance**. --- Rebuttal 2: Title: We would be grateful if you could take a look at the response Comment: Dear Reviewer DzME: We sincerely appreciate your valuable time devoted to reviewing our manuscript. We would like to gently remind you of the **approaching deadline for the discussion phase**. We have diligently addressed the issues you raised in your feedback, providing detailed explanations. For instance, we have addressed your concerns regarding the reliability of our experimental results. Moreover, to comprehensively showcase the superiority of Kendall's rank correlation, we have included comparisons between Kendall's rank correlation, Euclidean distance, and learnable distances. Furthermore, we have also included comparative experiments with SOTA methods, including the methods mentioned in your citations, demonstrating that by straightforwardly replacing cosine similarity with Kendall's rank correlation, our method achieves state-of-the-art performance when combined with a stronger baseline, DeepEMD.
Would you kindly take a moment to look at it? We are very enthusiastic about engaging in more in-depth discussions with you. --- Rebuttal Comment 2.1: Title: Thanks for the response Comment: Dear authors, I have carefully read your rebuttal including the new experimental results. Some of my concerns (e.g., the confusion about the results in Table 1) have been properly addressed. Additional results are provided to show the effectiveness of your method. However, my doubts on the novelty and performance still exist. Specifically, your method with strong backbone DeepEMD (which is already SOTA) gets marginal improvement. Thanks for your detailed response, but I would like to keep my rating. --- Reply to Comment 2.1.1: Comment: Thank you for your valuable feedback. **We humbly acknowledge that our previous rebuttal addressed some of your concerns.** Regarding the points you highlighted about novelty and performance, we hope to offer a more thorough discussion on this matter. 1. As you mentioned, DeepEMD already exhibits a remarkable level of performance. **Further improvement on such a strong baseline is no trivial task; in fact, it poses a considerable challenge.** The stronger the baseline, the greater the challenge. Nevertheless, we would like to highlight that **our simple modification to DeepEMD**, the replacement of the cosine similarity with Kendall’s Rank correlation, **led to a notable improvement of 1.41%** on the mini-ImageNet (68.09% -> 69.50%) and **1.94%** on the tiered-ImageNet (71.16% -> 73.10%) in 1-shot setting. **This emphasizes both the simplicity and effectiveness of our approach.** As an example for your reference, FRN (accepted by CVPR) also involves direct modifications to DeepEMD, but yielded a modest improvement of **merely 0.54%** on the mini-ImageNet. 2. 
Additionally, **for most recently proposed methods** such as DeepEMD, CIM and InfoPatch, integrating our method with them can **obtain consistent improvement** across **various datasets with domain differences**, as shown in Table 1 and Table A, even when simply employing Kendall's rank correlation during the inference stage. This suggests that **our method is ready for integration with future SOTA methods**, paving the way for further enhancements and maintaining its leading-edge status. 3. Moreover, a point of significant importance is the **strong generality of our approach**. Our method can seamlessly integrate with various cosine-based methods, and it also **holds the potential for extension into other domains**. 4. Furthermore, we would like to emphasize that **the motivation behind our proposed method is an unexplored aspect in prior research**. Our approach brings to light a novel observation within few-shot learning -- namely, that for novel classes, feature channels possess smaller and more closely clustered values than for base classes. We have substantiated this as a universally valid inference. This situation makes it challenging to employ geometric similarity to accurately distinguish the importance among feature channels. The use of channel importance ranking, instead, offers an effective solution to this challenge. **All of these aspects remain unexplored in prior research.** The points we've discussed above **underscore the novelty and effectiveness of our method**. We genuinely believe in the potential and merit of our approach. Given that we have addressed part of your concerns and further elaborated on the remaining ones in this response, **we would like to humbly request a reconsideration of the scoring**. We believe our research can provide valuable insights and contributions that would be of great interest to the NeurIPS community.
Summary: This paper proposes a new similarity metric for few-shot learning, Kendall's rank correlation, which originates from the statistical concept of Kendall's rank. The motivation comes from an experiment showing that channel values for base and novel classes differ in distribution. Additionally, the authors design a differentiable approximation for Kendall's rank correlation and demonstrate favorable results in comparative experiments. Strengths: 1. The paper is well-written and easy to follow, with a clear and well-organized structure. 2. The method is straightforward and simple. Weaknesses: 1. The ablation experiments only employ cosine distance, lacking a comprehensive consideration of Euclidean distance. 2. The performance of the differentiable approximation for various hyperparameter values is not adequately analyzed or visualized. 3. The optimal value for the hyperparameter is stated as 0.5 but lacks experimental or theoretical evidence to support this claim. 4. Table 3 includes the "Meta-Baseline" method, which belongs to another approach and should not be classified as part of the proposed method. 5. The testing procedure is not described at all, including whether the differentiable approximation or Kendall's rank is used during testing. 6. The reproducibility of the source code is not well established. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The primary concern lies in the reproducibility of the source code, as it is currently non-functional. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The ablation experiments only employ cosine distance, lacking a comprehensive consideration of Euclidean distance. A1: Thank you for your feedback. We would like to point out that the reason we did not include the Euclidean distance in our experiments is that on the unit sphere, cosine similarity and Euclidean distance are equivalent. Furthermore, in few-shot learning, only ProtoNet adopts Euclidean distance, while many subsequent works are based on cosine similarity and have shown that using cosine similarity generally yields better results than using Euclidean distance. Nonetheless, we have also included comparative experiments with Euclidean distance (See Table B of the attached PDF in our "global" response). The experimental results demonstrate the superiority of Kendall's rank correlation over Euclidean distance. Q2: The performance of the differentiable approximation for various hyperparameter values is not adequately analyzed or visualized. A2: Thank you for this feedback. We would like to point out that the performance of the differentiable Kendall rank correlation shows relatively low sensitivity to variations in the hyperparameter $\alpha$ within a specific range. Our experimental findings suggest that setting $\alpha$ to approximately 0.5 yields favorable performance. Setting this parameter too high may lead to overfitting on the base-class data, while setting it too low may result in an inadequate approximation to Kendall's rank correlation. A relevant analysis of this phenomenon has been carried out in the ablation experiments presented in Section 5.4. Here, we also conducted more detailed ablation experiments concerning this hyperparameter, as follows.
| Method | $\alpha$=0.1 | $\alpha$=0.2 | $\alpha$=0.3 | $\alpha$=0.4 | $\alpha$=0.5 | $\alpha$=0.6 | $\alpha$=0.7 | $\alpha$=0.8 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Kendall | 63.99 | 64.66 | 64.92 | 65.21 | 65.56 | 65.02 | 64.56 | 63.79 |

Q3: The optimal value for the hyperparameter is stated as 0.5 but lacks experimental or theoretical evidence to support this claim. A3: Thank you for this feedback. In Section 5.4 and in our response to Q2, we have extensively investigated this hyperparameter through ablation experiments. We did not meticulously tune this hyperparameter to deliberately seek a better result. What we discovered from the ablation experiments is that setting $\alpha$ around 0.5 yields a relatively favorable outcome. Q4: Table 3 includes the "meta-baseline" method, which belongs to another approach and should not be classified as part of the proposed method. A4: Sorry for any confusion caused. In fact, Meta-Baseline is a simple and effective method in few-shot learning that proposes a two-stage training paradigm. Specifically, the model is first pre-trained on the base dataset using cross-entropy loss, following conventional supervised learning. During the meta-training stage, tasks are sampled from the base dataset, simulating the construction of test tasks in the N-way K-shot form. The training objective is to accurately classify the query samples from the sampled tasks using cross-entropy loss as the loss function. In Meta-Baseline, cosine similarity is employed as the similarity measure to determine the semantic similarity between the query samples' embeddings and prototypes for nearest-neighbor classification. Our proposed method simply replaces the cosine similarity used in Meta-Baseline with the differentiable Kendall similarity for episodic training while keeping all other settings consistent. The relevant details can be found in Section 5.2 of the paper.
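As a concrete illustration of the differentiable Kendall similarity used for episodic training, one common construction replaces the non-differentiable sign of each pairwise channel difference with a smooth tanh. This is a sketch under our own assumptions; the paper's exact surrogate and the role of $\alpha$ in it may differ.

```python
import numpy as np

def soft_kendall(x: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> float:
    # Differentiable surrogate for Kendall's tau between two embedding
    # vectors: sign(d) is replaced by tanh(alpha * d) for every pair of
    # channels. Larger alpha -> sharper, closer to the hard (exact) tau;
    # smaller alpha -> smoother but a looser approximation.
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    iu = np.triu_indices(len(x), k=1)  # each channel pair counted once
    return float(np.mean(np.tanh(alpha * dx[iu]) * np.tanh(alpha * dy[iu])))
```

In meta-training, the negative of such a score (or a cross-entropy over these similarities) can serve as the loss, while the exact, non-differentiable Kendall's tau is used at test time.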
Q5: The testing procedure is not described at all, including whether the differentiable approximation or Kendall's rank is used during testing. A5: Sorry for any confusion caused. In fact, in the **implementation details** provided in Section 5.2, we thoroughly describe our testing process: In the testing phase, we employ Kendall's rank correlation to compute the similarity between the embeddings of query samples and class prototypes for nearest-neighbor classification. Performance evaluation is conducted on 2000 randomly sampled tasks from the test set, and the average accuracy along with the 95% confidence interval are reported. The purpose of proposing the differentiable Kendall correlation is to address the non-differentiable issue of the original Kendall rank correlation in the ranking computation. During the testing phase, we use the original Kendall’s rank correlation for evaluation. Q6: The reproducibility of the source code is not well identified. A6: Sorry for any confusion caused. We would like to provide a detailed explanation of the content in our submitted code. The code for the inference phase is included in the 'eval.py' file, where you have the flexibility to set the testing mode. Specifically, setting the mode to 'kendall_test' implies the usage of Kendall ranking correlation in the inference phase. Regarding the training phase, it is divided into two parts, 'train_pretrain.py' and 'train_meta.py'. 'train_pretrain.py' corresponds to the pretraining process using the cross-entropy loss function. On the other hand, 'train_meta.py' is responsible for the meta-training stage, where the training mode is set to 'kendall_meta', indicating the utilization of our proposed differentiable Kendall ranking correlation for meta-training. The implementation of both the original Kendall ranking correlation and our proposed differentiable Kendall ranking correlation for meta-training can be found in the 'Models/models/kendall_fsl.py' file within the code.
Summary: This paper introduces a novel approach for few-shot learning using Kendall's rank correlation. The authors demonstrate that feature channel importance ranking is a more reliable indicator for few-shot learning than geometric similarity metrics. They propose replacing the geometric similarity metric with Kendall's rank correlation for inference, which improves the performance of few-shot learning across different datasets and domains. Additionally, the paper presents a carefully designed differentiable loss for meta-training to address the non-differentiability of Kendall's rank correlation. The contributions of this paper can be summarized as follows: 1. Introducing the use of feature channel importance ranking for few-shot learning. 2. Demonstrating the effectiveness of Kendall's rank correlation in improving few-shot learning performance. 3. Proposing a differentiable approximation of Kendall's rank correlation for meta-training, leading to further performance improvements. Strengths: 1. Originality: The paper presents a novel approach for few-shot learning that uses Kendall's rank correlation. This is a unique and innovative idea that has not been explored in previous research. 2. Quality: The paper is well-researched and presents a thorough analysis of the proposed method. The authors provide detailed experimental results and ablation studies to validate their approach. The proposed differentiable loss function is carefully designed and addresses the non-differentiability of Kendall's rank correlation. 3. Clarity: The paper is well-written and easy to understand. The paper is also well-organized, making it easy to follow the flow of ideas. 4. Significance: The use of Kendall's rank correlation has been shown to be effective in improving the performance of few-shot learning across different datasets and domains. Weaknesses: 1.
Lack of comparison with state-of-the-art methods: The paper does not compare the proposed method with state-of-the-art few-shot learning methods. 2. Lack of theoretical justification: The paper does not provide a theoretical justification for why feature channel importance ranking and Kendall's rank correlation are better suited for few-shot learning than geometric similarity metrics. Providing such a justification would strengthen the paper's argument and make it more convincing. 3. The motivation described in the paper is not clearly explained: why replace geometric similarity metrics with Kendall's rank correlation? 4. From the results in Table 2, it can be seen that the performance improvement is marginal, and the final result is not SOTA. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. “This often leads to a relatively uniform distribution of values across feature channels on novel classes, making it difficult to determine channel importance for novel tasks.” I cannot understand the meaning of this sentence; please provide a detailed explanation. 2. “When we compare the values of different feature channels on the base dataset and novel dataset, we observe that the novel dataset has a much more uniform value distribution than the base dataset.” Can this conclusion hold true? This is only an observation on one dataset, and there is no evidence to suggest that the novel dataset has a much more uniform value distribution than the base dataset. 3. The third paragraph in the introduction uses an example of dogs and wolves to illustrate that cosine similarity cannot effectively distinguish between dogs and wolves, while Figure 1 (b) shows the identification results of dogs and crabs. 4. In Table 1, for deeper backbones, why does Kendall's rank correlation show a decrease in performance compared to cosine distance? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors do not provide limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Lack of comparisons with SOTA methods. A1: Thanks for the suggestion. In our latest experimental results, we have added comparisons with SOTA methods (See Table A of the PDF in our "global" response). We would like to highlight that by integrating Kendall's Rank Correlation into a stronger backbone DeepEMD, **our method achieves SOTA performance**. Q2: Lack of theoretical justification for why Kendall's rank correlation are better suited for FSL. A2: Thanks. Our method is inspired by empirical observations, making it an algorithmic paper rather than a theoretical one. Extensive experiments support our intuitive observations, which align with our expectations. While we will further provide intuitive justification in our next response, we leave the theoretical one as our future work. Q3: The motivation behind replacing geometric similarity metrics with Kendall's rank correlation is not clearly explained. A3: It seems that our original explanation of the motivation might not have been clear enough. Let us provide a detailed explanation below. Our approach emerged from observing an apparent difference in feature channel values between base data and novel data. We found that compared to base classes, when the feature extractor faces a novel class that is unseen before, the feature channel values become more uniform, i.e., **for a novel class, most non-core features' channels have small and closely clustered values** in the range [0.25, 0.5] (see Figure 1 of the paper). This phenomenon occurs because the model is trained on the base data, and consequently exhibits reduced variation of feature values when dealing with novel data. This situation creates a challenge in employing geometric similarity to accurately distinguish the importance among non-core feature channels. To provide a concrete example, consider distinguishing between dogs and wolves. 
**While they share nearly identical core visual features, minor features play a vital role in differentiating them**. Suppose the core feature and two minor features are represented by channels 1, 2, and 3, respectively, in the feature vector. A dog prototype may have feature (1, 0.3, 0.2), and a wolf prototype may have feature (1, 0.2, 0.3). Now, for a test image with feature (0.9, 0.28, 0.22), it appears more dog-like, as the 2nd feature is more prominent than the 3rd. However, cosine distance misleadingly places this test image closer to the wolf prototype (distance=0.031) than to the dog prototype (distance=0.048). By contrast, the test image shares the same channel ranking (1, 2, 3) as the dog prototype, whereas the wolf prototype's channel ranking is (1, 3, 2). Inspired by this, we employ Kendall’s rank correlation to more accurately discern between dogs and wolves, highlighting the utility of our approach. We hope this clarification better conveys the underlying rationale for our method, and we will carefully review this section in the revised paper to ensure that the motivation is articulated more clearly. Q4: The results in Table 2 show marginal improvements, and the final result is not SOTA. A4: It's important to clarify that our current experiments are conducted based on a simple and widely-adopted baseline (meta-baseline). By simply substituting cosine similarity with Kendall's rank correlation, we achieve an improvement of 2%. Moreover, as mentioned in the response to Q1, our method could be easily integrated with existing methods. Combining our method with a stronger baseline, DeepEMD, we can achieve the current SOTA performance. Q5: Can't understand the meaning of the sentence mentioned. A5: In the response to Q3, we have detailed the motivation behind our approach, which is related to this question. To further clarify, let's consider an extreme scenario where all feature values are nearly equal. 
In this case, a minor perturbation in a feature vector could lead to significant changes in the channel ranking, while the geometric distance to other features might remain largely unchanged. Q6: Can the conclusion drawn from Figure 1 hold true? A6: We believe this is a universally valid observation. Intuitively, the model is trained on base data and hence the learned feature extractors should capture the largest variation features within that base data. For novel data that belongs to different classes from base data, the extracted features will exhibit less variation. Empirically, we additionally compare the variance of feature channel values between the base dataset and various novel datasets (See Table C). The results reveal a significantly smaller variance in feature channel values in the novel datasets compared to the base dataset. A smaller variance means values are closer to each other, which aligns with the observations depicted in Figure 1(a), validating the correctness of this conclusion. Q7: The third paragraph uses an example of dogs and wolves, while Figure 1 shows dogs and crabs. A7: The reason we use wolves and dogs as examples is that it is an intuitive idea. However, in the existing commonly used few-shot learning datasets, there are no classes that include both wolves and dogs. In our subsequent work, we will make examples and figures consistent. Q8: In Table 1, for deeper backbones, why does the Kendall's rank correlation have a decrease in performance? A8: Actually Table 1 serves as an exploratory experiment where we solely adopt Kendall’s rank correlation at test time, without any episodic training. Expecting any model to achieve performance improvement without training by simply replacing cosine similarity at test time, would be impractical. In fact, the results in Table 1 show that, in the vast majority of cases, this simple replacement leads to significant improvements. 
Despite the slight performance decrease on the mini-test, it is evident that across the other five datasets with more substantial domain variations, the use of Kendall’s rank correlation yields significant performance gains. --- Rebuttal Comment 1.1: Comment: 1. Although Kendall's Rank Correlation is integrated into a stronger backbone, DeepEMD, the performance is not SOTA. 2. The authors did not clearly explain why Kendall's rank correlation shows a decrease in performance for deeper backbones. In the response, the authors claimed that expecting any model to achieve a performance improvement without training, by simply replacing cosine similarity at test time, would be impractical. I think there is a problem with the expression of this sentence. In fact, if the proposed similarity measure is effective, there will be a performance improvement for various backbones. This has nothing to do with the depth of the backbone. 3. I think the authors still have not explained the motivation of the paper well; it is more of a phenomenon observed in the experiments, without clarifying the reason behind it. If the authors can theoretically explain why Kendall's Rank Correlation can be used to replace cosine distance, it will greatly improve the quality of the paper. I think this is the core innovation of the paper. In general, the authors' rebuttal partially addressed my concerns. However, given the problems addressed above, I prefer not to change the rating. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our manuscript and provide your feedback. We appreciate your insights as they offer an opportunity for us to refine and clarify our work. --- *Q1. Although Kendall's Rank Correlation is integrated into a stronger backbone, DeepEMD, the performance is not SOTA.* We'd like to emphasize that **our method has indeed achieved SOTA results in the 1-shot setting for both mini-ImageNet and tiered-ImageNet datasets**. 
In the 5-shot setting, our approach maintains competitive performance relative to the current SOTA benchmarks. While we acknowledge your emphasis on SOTA performance, it's important to note that **the contribution and novelty of our work lie beyond this singular metric**. A central strength of our approach is its **simplicity** and **broad generality**. We wish to emphasize that our method can seamlessly integrate into a variety of cosine-based methods without imposing additional training costs, and it also holds the potential for extension beyond few-shot problems. As shown in Table 1 and Table A, for most recently proposed methods such as DeepEMD, CIM and InfoPatch, integrating our method with them can obtain consistent improvement across various datasets with domain differences, even simply employing Kendall's rank correlation during the inference stage. On the other hand, **while TCPR is the only method that marginally exceeds ours in the 5-shot setting, its complexity cannot be overlooked**. For a few-shot learning task, it requires hundreds or even thousands of re-samplings for data augmentation, leading to **a substantial increase in training time and effort**. Finally, we would like to kindly remind the reviewer that **top conferences have clearly stated in their reviewer guidelines that SOTA does not determine the merit or contribution of a work**. [NeurIPS](https://nips.cc/Conferences/2020/PaperInformation/ReviewerGuidelines#:~:text=Solid,%20technical%20papers%20that%20explore%20new%20territory%20or%20point%20out%20new%20directions%20for%20research%20are%20preferable%20to%20papers%20that%20advance%20the%20state%20of%20the%20art) says: >Solid, technical papers that explore new territory or point out new directions for research are preferable to papers that advance the state of the art, but only incrementally. [CVPR](https://cvpr2023.thecvf.com/Conferences/2023/ReviewerGuidelines#:~:text=not%20grounds%20for%20rejection%20by%20itself.) 
says: >A proposed method does not exceed the state-of-the-art accuracy on an existing benchmark dataset is not grounds for rejection by itself. [ACL](https://2023.aclweb.org/blog/review-acl23/#:~:text=SOTA%20results%20are%20neither%20necessary%20nor%20sufficient%20for%20a%20scientific%20contribution.) says: >SOTA results are neither necessary nor sufficient for a scientific contribution. In summary, **we are confident that our proposed method will be widely adopted instead of cosine distance** in few-shot learning, given its superior simplicity and effectiveness. Title: Further Clarifications (1/3) --- Reply to Comment 1.1.2: Title: Further Clarifications (2/3) Comment: *Q2: The authors didn’t explain clearly why does the Kendall's rank correlation have a decrease in performance for deeper backbones? In the response, the authors claimed expecting any model to achieve performance improvement without training by simply replacing cosine similarity at test time, would be impractical. I think there is a problem with the expression of this sentence. In fact, if the proposed similarity measure is effective, there will be a performance improvement for various backbones. This has nothing to do with the depth of the backbone.* Thank you for your insights. While we value your perspective, **we hold a different view on this matter: The effectiveness of a method can not translate to its consistent outperformance in all situations.** We'd like to emphasize that Table 1 is designed as an exploratory experiment, representing solely the utilization of Kendall rank correlation during the inference phase, without any incorporation of meta-training. Since the objectives of model **training and testing are not aligned**, there could be instances where employing simple Kendall rank correlation during the testing phase may not yield improved results. 
However, **when utilizing differentiable Kendall rank correlation for meta-training, a consistent performance enhancement can be achieved.** This observation becomes more evident from Figure 4, which illustrates the channel-wise ablation experiments. Here, the inclusion of differentiable Kendall rank correlation during training leads to a noteworthy enhancement in performance. As a similar scenario for reference, consider the use of **Euclidean distance** versus **cosine distance** in few-shot learning. While Euclidean distance was initially employed as the distance metric in ProtoNets, more recent methods have gravitated towards cosine distance. **Does this suggest that cosine distance invariably outperforms Euclidean distance in every few-shot learning scenario? Not necessarily.** But its wide acceptance doesn't diminish its relevance, especially if it demonstrates superior performance in a majority of cases or on average. Similarly, in our case, Table 1 clearly reveals that **the adoption of Kendall's rank correlation during the inference phase, in contrast to cosine similarity, yields significant improvements across multiple datasets with varying domains.** This improvement even extends to the latest specialized few-shot learning method for the inference phase, CIM. Even in the case of the deeper backbone networks you mentioned, Kendall's rank correlation actually exhibits noteworthy enhancement across five datasets with more substantial domain gaps. In summary, performance improvements across various scenarios are indeed observed. While there may be certain instances where outperformance is not evident, this doesn't diminish the effectiveness of the method. In fact, **we are confident that our approach will become a favored alternative to cosine distance in the future**. 
--- Reply to Comment 1.1.3: Title: Further Clarifications (3/3) Comment: *Q3: I think the author still hasn't explained the motivation of the paper well, and more of it is the phenomenon observed in the experiment, without clarifying the reason behind it. If the authors can theoretically explain why Kendall's Rank Correlation can be used to replace cosine distance, it will greatly improve the quality of the paper. I think this is the core innovation of the paper.* First of all, we would like to emphasize that **our paper focuses on presenting an algorithm inspired by empirical observations**. While we acknowledge the importance of theoretical aspects you mentioned, they fall beyond the primary scope of our current work. Our key contribution is **pinpointing a novel phenomenon and subsequently designing a simple and effective algorithm based on this discovery**. We also demonstrate the efficacy of our approach and its ability to integrate effortlessly with various cosine-based methods without incurring extra training expenses. We view the theoretical exploration as an avenue for future research. Nevertheless, instead of a theoretical explanation, **our paper offers an intuitive understanding.** The motivation behind our proposed method stems from the observation that, in few-shot learning, models are not exposed to novel classes during training and naturally exhibit distinct feature extraction in comparison to the base classes used for model pre-training. Our approach reveals a novel observation: for a novel class, feature channels exhibit smaller and tightly grouped values compared to base classes, which we have substantiated as a universally valid conclusion. This situation creates a challenge in employing geometric similarity to accurately distinguish the importance among feature channels. The utilization of channel importance ranking, instead, offers an effective solution to this challenge. 
**We would like to emphasize that all of these aspects remain unexplored in prior research.** Furthermore, while we understand that providing a theory would enrich a paper, **its absence doesn't devalue the core innovation of our work**. For your reference, we would like to mention a highly renowned paper titled "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks." This paper observed the existence of "winning tickets" within deep neural networks, and its validation was conducted through the design of a search algorithm for these winning tickets. **Notably, this paper abstained from delving deeply into theoretical justifications.** Yet, would one argue that this omission detracts from its impact? Quite the opposite; the paper's straightforward and thought-provoking approach led to it being distinguished as the **"best paper" at ICLR 2019, attracting almost 2800 citations**. Additionally, we'd like to gently draw the reviewer's attention to the official [reviewer guidelines of NeurIPS](https://nips.cc/Conferences/2020/PaperInformation/ReviewerGuidelines#:~:text=may%20be%20theoretical), which state: >**There are many examples of contributions that warrant publication at NeurIPS.** These contributions **may be** theoretical, methodological, algorithmic, empirical, connecting ideas in disparate fields (“bridge papers”), or providing a critical analysis (e.g., principled justifications of why the community is going after the wrong outcome or using the wrong types of approaches.). In summary, while we value your feedback, we believe the absence of a theoretical explanation does not diminish the innovation and significance of this paper. --- We hope this response addresses your concerns, and **we are open to further discussion** to ensure the quality and clarity of our work. 
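The intuitive argument above (nearly uniform channel values make geometric distances insensitive, while channel rankings remain discriminative) can be illustrated with a toy computation; the channel values below are hypothetical, chosen only to mimic the extreme scenario the authors describe:

```python
import math

def cosine_distance(a, b):
    """1 minus cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def channel_ranking(v):
    """Indices of channels sorted from largest to smallest value."""
    return sorted(range(len(v)), key=lambda i: -v[i])

# Nearly uniform channel values, as reportedly observed on novel classes.
base = [0.300, 0.301, 0.299, 0.3005]
# A tiny perturbation of the same feature vector.
perturbed = [0.300, 0.299, 0.301, 0.2995]

print(cosine_distance(base, perturbed))  # geometric distance barely moves
print(channel_ranking(base))             # [1, 3, 0, 2]
print(channel_ranking(perturbed))        # [2, 0, 3, 1] -- the ranking flips
```

The cosine distance between the two vectors is on the order of 1e-5, yet their channel rankings differ completely, which is exactly the sensitivity gap the rebuttal appeals to.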
--- Rebuttal 2: Title: We would be grateful if you could take a look at the response Comment: Dear Reviewer oKqH: We sincerely appreciate your valuable time devoted to reviewing our manuscript. We would like to gently remind you of the **approaching deadline for the discussion phase**. We have diligently addressed the issues you raised in your feedback, providing detailed explanations. For instance, we have included comparative experiments with SOTA methods, demonstrating the enhanced performance when our approach is integrated with a stronger baseline, DeepEMD, through a straightforward substitution of cosine similarity with Kendall’s rank correlation. We have also addressed your confusion about the motivation behind our proposed method through the utilization of carefully considered language, along with more intuitive examples. Would you kindly take a moment to look at it? We are very enthusiastic about engaging in more in-depth discussions with you.
Summary: The authors suggest a new similarity metric that utilizes differentiable Kendall’s rank correlation instead of the commonly used geometric similarity metrics like cosine similarity in few-shot learning. By elevating the importance of small-valued feature channels, the proposed approach significantly improves the few-shot performance across multiple datasets from various domains. Strengths: - The presented idea is simple and appears to be effective, and the overall approach is clearly expressed. The work is built on a simple yet powerful observation about the feature activation statistics (and their differences on base vs novel classes). - Utilizing a differentiable version of Kendall’s rank coefficient measure as an alternative similarity metric is an original idea to the best of my knowledge. - The proposed method enables efficient end-to-end few-shot learning without introducing problematic hyper-parameters. - It is impressive that using the proposed metric directly for testing, even without any pre-training, improves the result. - The proposed approach has proven to be effective in many common few-shot datasets across various domains, outperforming competitive methods in terms of performance improvements. - Ablation studies provide valuable insights that demonstrate the effectiveness of the proposed approach. Weaknesses: - While a solution to the Figure 1 observation is proposed based on Kendall’s rank correlation, I am not sure if this is the most simple way to handle the problem. In particular, could a simple instance-statistics-driven normalization scheme, such as group norm or instance norm or similar, address the feature-scale issues? - Similarly, can’t the cosine similarity, combined with temperature scaling (a well-known practice; see the Baseline++ paper for a detailed discussion), where the temperature may be different at test time, also address the problem of scale? 
- The explanations on Kendall’s rank correlation can be extended to make it more explanatory and to shed even more light on its complexities to make the paper more self-contained. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - The paper does a great job in pointing out a source of problems in few-shot classification and a good job in proposing a way to address it. However, it feels somewhat weak in terms of looking in-depth into the problem. Following the ‘weaknesses’ discussion above, it would have been great to improve the paper on this end and explore the advantages/disadvantages of some simple potential alternative fixes such as (i) instance/group normalization, (ii) temperature scaling combined with l2 normalization, or (iii) a simple attention mechanism such as squeeze&excitation attention. - Suggestion: The name of the dataset on which the presented results are obtained can be added in the discussions (or captions) of Figures 5 and 6. - Suggestion: The overall figure quality can be improved. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No additional comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Could a simple instance-statistics-driven normalization scheme, such as group norm or instance norm or similar, address the feature-scale issues? A1: **No.** Actually, solely adopting Kendall's rank correlation during the inference stage far exceeds what can be achieved by employing simple feature scaling methods, and we conduct experiments to demonstrate this. Specifically, on top of the raw features from the model's output, we apply GroupNorm, InstanceNorm, and squeeze&excitation attention for feature scaling transformations, followed by classification using cosine similarity (please find detailed results in Table B of the attached PDF in our "global" response). Regarding **GroupNorm**, we test two values for the "num_group" parameter, namely 16 and 32. Although its performance occasionally surpasses the original cosine similarity, we observe that it is generally inferior to Kendall's rank correlation in terms of performance. Concerning **InstanceNorm**, we notice that this operation results in a substantial performance decline, with a decrease of over 10% compared to directly using cosine similarity. Furthermore, we also explore the use of **Squeeze&Excitation Attention**. We integrate this module into the backbone network for training, and the results also show that it does not demonstrate a clear advantage. Actually, in Table 1 of Section 4, we have compared our method with the **CIM** method, which is a recently proposed simple test-time feature scaling method in few-shot learning. Across multiple datasets with diverse domain differences, our method also consistently outperforms CIM. These findings provide strong evidence that the improvements obtained through Kendall ranking correlation are not attainable merely by simple feature scaling. We will include this discussion in the paper to emphasize the superiority of our proposed method. Thank you for your valuable review. 
Q2: Can’t the cosine similarity, combined with temperature scaling also address the problem of scale? A2: **No, it can't.** In fact, the temperature coefficient is also used in the meta-baseline method to adjust the output probabilities. When reproducing their experimental results, we follow the original settings of the meta-baseline and set this parameter as learnable during training. We conduct experiments accordingly, and if we solely adjust this parameter, the performance would be lower compared to the results reported in the meta-baseline paper, as shown below. | Method | Backbone | T: Learnable | T= 1 |T= 0.1 |T= 0.01 | | :---: | :---: | :---: |:---: |:---: |:---: | |Meta-Baseline (cosine)|ResNet-12|63.17|62.35|62.84|62.76| Clearly, adjusting the temperature parameter alone cannot achieve the level of performance improvement obtained by using Kendall’s ranking correlation. Q3: The explanations on Kendall’s rank correlation can be extended to make it more explanatory and to shed even more light on its complexities to make the paper more self-contained. A3: Thank you for bringing this to our attention. Let us provide a detailed explanation below. Our approach emerged from observing an apparent difference in feature channel values between base data and novel data. We found that compared to base classes, when the feature extractor faces a novel class that is unseen before, the feature channel values become more uniform, i.e., **for a novel class, most non-core features' channels have small and closely clustered values** in the range [0.25, 0.5] (see Figure 1 of the submitted paper). This phenomenon occurs because the model is trained on the base data, and consequently exhibits reduced variation of feature values when dealing with novel data. This situation creates a challenge in employing geometric similarity to accurately distinguish the importance among non-core feature channels. To provide a concrete example, consider distinguishing between dogs and wolves. 
**While they share nearly identical core visual features, minor features play a vital role in differentiating them**. Suppose the core feature and two minor features are represented by channels 1, 2, and 3, respectively, in the feature vector. A dog prototype may have feature (1, 0.3, 0.2), and a wolf prototype may have feature (1, 0.2, 0.3). Now, for a test image with feature (0.9, 0.28, 0.22), it appears more dog-like, as the 2nd feature is more prominent than the 3rd. However, cosine distance misleadingly places this test image closer to the wolf prototype (distance=0.031) than to the dog prototype (distance=0.048). By contrast, the test image shares the same channel ranking (1, 2, 3) as the dog prototype, whereas the wolf prototype's channel ranking is (1, 3, 2). Inspired by this, we employ Kendall’s rank correlation to more accurately discern between dogs and wolves, highlighting the utility of our approach. We hope this clarification better conveys the underlying rationale for our method, and we will carefully review this section in the revised paper. Q4: It would have been great to explore the advantages/disadvantages of some simple potential alternative fixes such as (i) instance/group normalization, (ii) temperature scaling combined with l2 normalization, or (iii) a simple attention mechanism such as squeeze&excitation attention. A4: Thank you for the comment. In the responses to Q1 and Q2, we have verified and demonstrated that the performance improvement achieved by using Kendall's rank correlation far exceeds what can be achieved by employing simple feature scaling methods, which is related to this question. Suggestion: i) The name of the dataset on which the presented results are obtained can be added in the discussions (or captions) of Figures 5 and 6. ii) The overall figure quality can be improved. A5: Thank you very much for your valuable feedback. We will make the necessary revisions to the manuscript according to your suggestions. 
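Since the rebuttals repeatedly invoke the differentiable Kendall correlation used for meta-training, a sketch may help. One generic way to make Kendall's tau differentiable is to replace the hard sign of each pairwise difference with a tanh soft-sign; this is a common smoothing trick, not necessarily the paper's exact formulation, and the temperature value is illustrative:

```python
import math

def soft_kendall(x, y, temperature=0.01):
    """Differentiable surrogate for Kendall's tau: the hard sign of each
    pairwise channel difference is replaced by a tanh soft-sign, so the
    correlation can serve as a meta-training loss. NOTE: a generic
    smoothing sketch, not necessarily the paper's exact formulation."""
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += (math.tanh((x[i] - x[j]) / temperature)
                      * math.tanh((y[i] - y[j]) / temperature))
    return 2.0 * total / (n * (n - 1))
```

As the temperature approaches zero the soft-sign approaches the hard sign and the surrogate recovers the original (non-differentiable) Kendall's tau, which is consistent with using the original correlation at test time and the smooth version only during training.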
--- Rebuttal Comment 1.1: Comment: I am quite happy with the rebuttal’s responses and I value this paper not only for its novel technique, but also (or perhaps firstly) for its scientific contribution towards demystifying the few-shot learning’s challenges. I have increased my score to weak accept. I have also looked at the lower-rating reviews, while I appreciate them, I haven’t seen any criticism that leads to changing my mind; however they do have good points & suggestions. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to re-evaluate our paper after considering our rebuttal. We greatly appreciate your positive remarks regarding the novel technique we introduced and our contribution to shedding light on the challenges of few-shot learning. We are also grateful for the improved score you've given our work. Your constructive feedback and acknowledgement of our efforts is truly encouraging. We assure you that the valuable suggestions and insights from you and other reviewers will certainly be integrated into our revised version.
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable comments from all reviewers. We have endeavored to provide explanations for the questions raised in the respective comments section. Additionally, the supplementary experiments have been incorporated into the PDF attached to the global response. Specifically, the appended PDF encompasses the following: 1. **Comparison with state-of-the-art methods** when integrating our approach with DeepEMD and InfoPatch. This provides evidence that when integrating our method with stronger baselines, it is capable of achieving state-of-the-art (SOTA) performance. 2. **Comparison among Kendall's rank correlation, distances other than cosine similarity, and simple feature scaling transformation modules.** This highlights the superiority of Kendall's rank correlation over alternative distances in few-shot learning, while also confirming that this improvement cannot be surpassed by mere simple feature scaling methods. 3. **Comparison of the variance of feature channel values between the base dataset and various novel datasets.** This validates the widely applicable conclusion that features' channel values exhibit a high degree of clustering and similarity for novel classes unseen by the model. 4. **Additional experiments of Kendall's rank correlation on 5-way 5-shot tasks** across multiple datasets with diverse domain variations. Pdf: /pdf/19e30cb65ceeadab4c80a7ac6a1d88fbd7576325.pdf
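One intuition behind item 2 above (why simple feature-scaling baselines cannot recover the Kendall gains): a monotone rescaling such as a softmax temperature reshapes confidence but never changes which prototype has the highest cosine score, so a wrong cosine-based nearest neighbour stays wrong. A toy illustration with hypothetical similarity scores:

```python
import math

def softmax(scores, temperature):
    """Temperature-scaled softmax over a list of similarity scores."""
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical cosine similarities between one query and three prototypes.
scores = [0.52, 0.55, 0.41]
for t in (1.0, 0.1, 0.01):
    probs = softmax(scores, t)
    # The predicted class (argmax) is identical at every temperature.
    assert probs.index(max(probs)) == scores.index(max(scores))
```

This is only a sketch of the argmax-invariance argument; a learnable temperature can still affect training dynamics, which is why the rebuttal's Table for the meta-baseline compares learned and fixed temperatures empirically.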
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper aims at addressing the uniform distribution of values across features on novel classes, and proposes to use Kendall's rank correlation instead of geometric similarity metrics to improve the performance of few-shot learning pipelines. Moreover, the authors propose a designed loss for meta-training to make Kendall's rank correlation differentiable. The experimental results demonstrate the usefulness of the proposed method. Strengths: - The idea is technically sound. Theoretically, the channel importance is definitely beneficial to improving the accuracy of FSL. - The experimental results show the usefulness of the proposed method. - The presentation is clear and easy to follow. Weaknesses: - The novelty and significance of the proposed method are rather limited. The core idea of Kendall's rank correlation is to measure the consistency of pairwise rankings for each channel pair. However, such an issue has been considered in previous cross-matching related works such as DeepEMD. If this design can be incorporated with DeepEMD or other cross-matching pipelines, it would be interesting to see the overall performance, and the conclusions would be more convincing. - The experimental results are not impressive enough. Though this method achieves improvements compared to metric-based methods, its performance is still below the SOTA. - In Figure 1, it would be helpful to show more cases to demonstrate the advantage of Kendall's rank correlation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It seems the work is based on prototypes. Is it possible to perform it between support-query pairs rather than prototype-query pairs? As it highlights the importance of feature channels, it should be more meaningful to perform support-query correlation. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The proposed method's novelty is limited, since the consistency of pairwise rankings for each channel pair has been considered in previous works such as DeepEMD. Can this design be incorporated with DeepEMD? A1: Thanks for your insightful feedback. While it may seem that both DeepEMD and our method consider the consistency of pairwise rankings, they do so from **two different perspectives**. Specifically, DeepEMD employs the Earth Mover's Distance to match various **local regions** in the spatial domain of the image. In contrast, our method leverages Kendall’s Rank Correlation to achieve consistency across **feature channels**. Moreover, we would like to highlight that our method can easily be integrated into existing methods **without increasing training costs**. As an example, **our method can indeed be incorporated with DeepEMD** by substituting the cosine similarities used therein with Kendall’s Rank Correlations. Through this modification, we have observed **a large improvement for DeepEMD** (1%-2%). All the latest experimental results, demonstrating this enhancement, are detailed in Table A of the attached PDF in our "global" response. Q2: The experimental results are not impressive enough. Though this method achieves improvements compared to metric-based methods, its performance is still below the SOTA. A2: Thanks for bringing this to our attention. It's important to clarify that our current experiments are conducted **based on a simple and widely-adopted baseline** (meta-baseline) to **demonstrate the effectiveness** of our method. While this may lead to performance that is below the SOTA, we believe it adequately demonstrates the capabilities and potential of our method. Moreover, as mentioned in the response to the previous question, our method could be easily integrated with existing methods. 
**Combining our method with a stronger baseline**, DeepEMD, we can **achieve the current SOTA performance**, as shown in Table A of the attached PDF in our "global" response. Furthermore, even for most recently proposed methods such as CIM, we have shown that simply replacing cosine similarity with Kendall’s rank correlation (our method) at the inference stage can result in significant improvements across various datasets with domain differences (see Table 1 of the submitted paper). This indicates that **our method is ready for integration with future SOTA methods** to achieve additional improvement. We hope this explanation can address your concern, and provide a clearer demonstration of the value and adaptability of our method. Q3: In Figure 1, it would be helpful to show more cases to demonstrate the advantage of Kendall's rank correlation. A3: Thank you for this feedback. Indeed, there are numerous such examples. We would like to kindly remind you that, in the **supplementary material's visual analysis section**, we provide more intuitive illustrations of the superior performance of Kendall’s rank correlation. Concretely, we employ Kendall’s Rank Correlation and cosine similarity to visualize the feature maps of the query samples with the aim of confirming the accurate localization of salient objects in the images. It is evident that the utilization of Kendall’s Rank Correlation results in a more precise localization of the distinctive regions within the query sample. Moreover, we also conduct an in-depth visual analysis involving channel ablation. It is noticeable that the discriminative key features of the dog predominantly exist in channels with lower values. Utilizing Kendall's rank correlation effectively captures these essential features, whereas cosine similarity disregards them, providing evidence of the effectiveness of our method. Q4: It seems the work is based on prototypes. 
Is it possible to perform it between support-query pairs rather than support proto-query pairs? As it highlights the importance of feature channels, it should be more meaningful to perform support-query correlation. A4: Thanks for your insightful feedback. Indeed, our method can be applied to not only prototype-query matching **but also support-query matching**. For example, we replace cosine similarity with Kendall’s Rank Correlation in InfoPatch which is a **contrastive-learning-based few-shot learning method**, and observe an improvement in both 1-shot and 5-shot tasks on mini-ImageNet, as shown in Table A of the attached PDF in our "global" response. --- Rebuttal 2: Title: We would be grateful if you could take a look at the response Comment: Dear Reviewer 7vc7: We sincerely appreciate your valuable time devoted to reviewing our manuscript. We would like to gently remind you of the **approaching deadline for the discussion phase**. We have diligently addressed the issues you raised in your feedback, providing detailed explanations. For instance, we have elucidated that our approach and DeepEMD, in fact, address the challenges of few-shot learning from two distinct perspectives. Moreover, we have conducted experiments that demonstrate the integration of our method with DeepEMD. By straightforwardly substituting the cosine similarity in DeepEMD with Kendall’s rank correlation, we have successfully achieved state-of-the-art performance. Would you kindly take a moment to look at it? We are very enthusiastic about engaging in more in-depth discussions with you.
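To make the substitution discussed in this thread concrete: below is a minimal, illustrative sketch of the plain (tau-a) Kendall rank correlation between two feature vectors, which is what replaces cosine similarity at inference. The paper's differentiable training surrogate is not reproduced here; this is just the underlying statistic.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's rank correlation (tau-a) between two equal-length vectors.

    Counts concordant minus discordant channel pairs, normalized by the
    total number of pairs; tied pairs count as neither.
    """
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)

# Identically ordered channels give tau = 1; a fully reversed ordering gives -1.
print(kendall_tau([0.1, 0.5, 0.9], [1.0, 2.0, 3.0]))  # -> 1.0
```

Because tau depends only on the relative ordering of channel values, low-magnitude but discriminative channels contribute as much as high-magnitude ones, which is the intuition the rebuttal's channel-ablation analysis appeals to.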
null
null
null
null
null
null
Language Models Meet World Models: Embodied Experiences Enhance Language Models
Accept (poster)
Summary: This paper proposes a new collection of data and tasks from an embodied environment to enhance the embodied reasoning ability of pre-trained language models. The paper also designs a fine-tuning strategy that combines the advantages of EWC and LoRA for stable fine-tuning. The experiments show that the proposed method can efficiently fine-tune GPT-J and outperform much larger language models. Strengths: This paper designs two ways of data collection in the embodied environment that address goal-specific and general cases in real-world applications. This paper designs a comprehensive reasoning task set based on the collected data to evaluate the reasoning performance from different aspects (plan generation, object tracking, etc.) This paper adopts a suitable parameter updating strategy (EWC-LoRA) for language model fine-tuning, which demonstrates time and memory efficiency. Weaknesses: The paper lacks experiments on why EWC is needed. The motivation for including EWC is to avoid overfitting to downstream tasks or catastrophic forgetting of the pretraining task, but LoRA has a similar purpose. From the experiments in Table 2, EWC-LoRA has a lower performance than LoRA except for a slight improvement in perplexity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Related to the weakness: It would be good to see whether there are any other benefits gained from EWC. 2. How do you ensure that the answers of the negation Housework QA are truly irrelevant? 3. The activity recognition performance of all methods is quite similar. Does this indicate that the activity types are easily distinguishable without requiring a full description of the experience, for example, just keywords at the end being sufficient for the task? Have you tried to include confusing activities in the evaluation set? 4. How do you introduce the confusion term (counting task, confusing unseen of plan generation)? Do you have any guideline or do you just randomly inject phrases? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mentioned the limitation of only collecting embodied experience from one world model. Societal impact: I did not find negative impacts; if the data collection is extended to the real world in the future, it should avoid collecting personal or sensitive experiences. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive feedback. We would like to respond to your questions as below. **(1) Why EWC is needed** We want to clarify that EWC-LoRA has similar performance to LoRA. From Table 2, we can see EWC-LoRA matches or even outperforms its LoRA counterpart for GPT-J (e.g., improvements of 3.09 on Counting QA and 1.45 on Object Path Tracking) and GPT-Neo (e.g., 1.03 on Count), while EWC-LoRA has a lower perplexity. Both methods outperform the base language model and have close performances on the specific tasks; in this case, we focus more on the perplexity since we want its language modeling capability and generality to be preserved as much as possible. **(2) Irrelevant items in Negation QA** We use simple heuristics to sample irrelevant items, e.g., we sample items in the environment that are not directly mentioned in the ground truth plan to form the questions. **(3) Performance on Activity Recognition** For now, we just randomly sample different activities. The performances on this task are relatively higher than on other tasks, but we can still observe a significant performance gap between different models (69.22 by base GPT-Neo, compared to 87.98 by base GPT-J; see Table 6 in the appendix), and our method outperforms the base language model for both GPT-Neo (69.22 → 85.43) and GPT-J (87.98 → 88.52). **(4) Confusing terms** For the counting task, we construct samples from collected Random Exploration experiences, where an agent randomly executes actions in the environment. As a result, some of the actions are naturally irrelevant to object counting. For confusing unseen plan generation, we adopted heuristics similar to those in (2). --- Rebuttal Comment 1.1: Comment: Thank you for the explanation and the results of your model scale-up experiments in the rebuttal. I would like to maintain my rating. I believe a fine design of the evaluation tasks is more important than carefully tuned models. 
Therefore, I still recommend this work to the community. Additionally, I have the following comments: 1. For Negation QA, if the irrelevant items are just a sample of things not mentioned in the ground truth plan, it seems that they cannot avoid being generally relevant. For example, "spoon" in the example of "which object is irrelevant to making coffee?". Since your evaluation set is relatively small (2 questions will affect 1% accuracy), it is better to have fewer ambiguous answers. As a benchmark task set, I would expect to see a more quantitative and systematic design, for instance, using n-hop relationships in the knowledge graph to construct the QA pairs. 2. Regarding the necessity of EWC, I have some doubts about it. It might be good to see the mean and standard deviation of multiple-round experiments. --- Reply to Comment 1.1.1: Comment: Thanks for your positive feedback and valuable insights! Your suggestions regarding the quantitative and systematic design of the benchmark set, as well as the multiple-round experiments for verifying the necessity of EWC, will greatly strengthen our work and improve our manuscript's presentation. We will include them in the revision.
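For readers unfamiliar with the regularizer debated above: EWC adds a quadratic penalty that discourages each parameter from drifting away from its pretrained value, weighted by an estimate of that parameter's Fisher information. A minimal sketch of the penalty term follows (this is the standard EWC form, not the paper's exact implementation; the toy values are purely illustrative):

```python
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current (trainable) parameter values
    theta_star -- parameter values frozen after pretraining
    fisher     -- per-parameter Fisher information estimates (importance)
    """
    return 0.5 * lam * sum(
        f * (t - t0) ** 2 for f, t, t0 in zip(fisher, theta, theta_star)
    )

# Parameters with high Fisher information are penalized more for drifting,
# which is how EWC trades task accuracy against catastrophic forgetting.
total_penalty = ewc_penalty(theta=[1.1, 0.0], theta_star=[1.0, 0.5], fisher=[10.0, 0.1])
```

In the EWC-LoRA setting discussed in this thread, such a term is added to the task loss, so the overall objective is task_loss plus the penalty; LoRA additionally restricts which parameters can move at all.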
Summary: The paper introduces a training paradigm termed "finetuning with Embodied Experiences from World Models (E2WM)" to enhance the abilities of language models (LMs) in reasoning and planning tasks associated with physical environments. The authors argue that LMs trained solely on large-scale text corpora lack the embodied knowledge necessary for robust performance in such tasks. To address this, they propose leveraging world models, specifically the VirtualHome simulator, to collect diverse embodied experiences and use them to construct fine-tuning tasks, such as plan generation, activity recognition, counting, and object path tracking. The proposed fine-tuning method incorporates Elastic Weight Consolidation (EWC) regularization with low-rank adaptation (LoRA) to preserve the models' generality, avoid catastrophic forgetting, and make the training process more efficient. Strengths: - The paper introduces an approach to enhance off-the-shelf LLMs with embodied knowledge. The experiments involve both goal-oriented planning and random exploration in collecting embodied experiences. While EWC and LoRA are existing methods (hence the paper may be perceived as having limited technical novelty), this work shows that their incorporation into the fine-tuning process helps retain the models' general knowledge and capabilities while adapting them to new tasks. - The related work seems sufficient, albeit a section on catastrophic forgetting, e.g., [1], would be a plus. - The paper provides an evaluation of the finetuned LMs on both seen and unseen tasks, and several ablation studies. The generalizability of the models is also tested, as well as their performance on the original pretraining data to ensure core language modeling abilities are retained. [1] Korbak, Tomasz, Hady Elsahar, German Kruszewski, and Marc Dymetman. "Controlling conditional language models without catastrophic forgetting." In International Conference on Machine Learning, pp. 11499-11528. 
PMLR, 2022. Weaknesses: - Lack of comparison: Only two GPT-based LLMs are used. It would be valuable to consider a more diverse set of recent LLMs. The paper also does not compare the proposed E2WM paradigm with existing methods for enhancing LMs with embodied knowledge, such as [1]. - Scalability to larger models: Although the EWC-LoRA approach is designed to improve efficiency, it is not clear how well it scales to larger LM architectures. Further investigation into the scalability of the proposed method would be beneficial. - Limitations on benchmarks: Evaluation is performed on bAbI, but there are several embodied benchmarks and simulators that could be considered. Doing so would allow evaluation in realistic embodied settings, and comparison with other proposed embodied models (such as the Episodic Transformer [2] or [1]). - Experimental Results: The number of tasks and models evaluated is relatively small, and the presentation of the results could be clearer, e.g., adding numbers to bars in Figs. 3 and 4. It seems that ChatGPT outperforms the fine-tuned models, so overall the model capacity appears to be an important factor, and the proposed fine-tuning shows marginal improvements in some of the tasks. [1] Lin, B.Y., Huang, C., Liu, Q., Gu, W., Sommerer, S. and Ren, X., 2023, June. On grounded planning for embodied tasks with language models. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 11, pp. 13192-13200). [2] Pashevich, Alexander, Cordelia Schmid, and Chen Sun. "Episodic transformer for vision-and-language navigation." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15942-15952. 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Have the authors considered an evaluation on actual embodied benchmarks? What is the motivation for choosing bAbI as the benchmark dataset? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: There is a very brief discussion on limitations as part of the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the helpful suggestions. The responses to your comments are shown below. **(1) More diverse LMs; Scalability to larger LMs** Thanks for the suggestion. We apply our approach to two larger LMs: OPT-13B and LLaMA-13B. The results on the 11 tasks (as in Figure 3 in the paper) are shown below. We can see improvements consistent with the case on smaller LMs (Figure 3). That is, our method substantially outperforms the respective base LMs while retaining a low perplexity, demonstrating the effectiveness of our method when scaling up to larger models. In addition, we’d like to note that our approach’s strong performance on **small** LMs (e.g., GPT-Neo is able to compete with or even outperform ChatGPT) itself is of practical significance. And the smaller the LMs, the more significant it becomes for practical cost-efficient applications.

| Model | Act Infer | Act Recog | Count | HouseQA | NegQA | ObjMoveQA | ObjMove | PlanGen | PlanGen Conf | PlanGen Unseen | PlanGen Conf Unseen | PPL |
|----------------|---------|---------|---------|-----------|-----------|-----------|-----------|-----------|--------------|----------------|---------------------|---------|
| OPT-13B | 67.94 | 89.07 | 20.10 | 81.61 | **43.21** | **37.00** | 33.49 | 36.00 | 31.92 | 29.34 | 36.98 | 4.0768* |
| Ours (OPT-13B) | **70.61** | **91.44** | **62.37** | **84.29** | 40.21 | 33.00 | **96.28** | **50.15** | **49.87** | **45.11** | **47.93** | 4.3584 |
| LLaMA-13B | **74.05** | 90.53 | 29.38 | 81.99 | **43.21** | 28.50 | 38.82 | 41.77 | 40.33 | 38.78 | 41.73 | 3.0359* |
| Ours (LLaMA-13B) | 68.32 | **91.80** | **79.38** | **86.59** | 30.25 | **79.00** | **96.99** | **52.05** | **51.00** | **47.44** | **50.49** | 3.0690 |

**(2) Comparison with prior LM work for embodied tasks (e.g., Lin et al.)** We want to point out that the method in Lin et al. has a different goal from our work and thus is not comparable to our method. 
They aim to utilize LMs to enhance the performance on specific tasks in specific environments, so they finetune the LM to read the symbolic state and generate executable plans. However, after finetuning, the model becomes a task-specialized model and loses its generality to solve various seen or unseen tasks. In contrast, our goal is to enhance the LM itself, so the outcome of our method is still a general LM that can solve various tasks and can generalize newly acquired embodied knowledge to unseen tasks. Please refer to the general response for a more detailed explanation. **(3) Limitations on benchmarks** As we mentioned above, Lin et al. and Pashevich et al. are not comparable to our work. Evaluation in embodied environments not only evaluates the general embodied knowledge, but also tests how well the knowledge is utilized by special modules on top of the language model and how well the model is adapted to the specific environments. A strong LLM that possesses rich embodied knowledge can still fail in a specific environment since the generated action is not executable in the environment (e.g., the LLM generates “Next, you should go to the kitchen” while the executable action is “<Walk> [kitchen]”). Therefore, previous works seldom evaluate an off-the-shelf LLM in an embodied environment. They either finetune it to get a task-specialized model (e.g., Lin et al. and Pashevich et al.) or build special modules on top of the LLM [1][2]. In contrast, we aim to evaluate the LM as a general-purpose model that can generalize newly acquired knowledge to common general tasks (like QA). Please refer to the general response for a more detailed reply. **(4) Experimental Numbers** To make the figure clearer, we put all the numbers in Table 6. Please refer to the appendix for them. 
**(5) Comparison with ChatGPT** We think our result that LMs as small as GPT-Neo (1.3B) and GPT-J (6B) can be enhanced to compete with or even outperform the latest large ChatGPT model is surprising, important, and of practical significance. The smaller the LMs, the more significant it becomes for practical cost-efficient applications. Besides, the experiments in (1) demonstrate that our method can be scaled up to larger LMs with better performance, which further highlights the model-agnostic advantage of our method. [1] Mu et al. EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought. 2023. [2] Wang et al. Voyager: An Open-Ended Embodied Agent with Large Language Models. 2023. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for the rebuttal and the new experiments that have addressed most of my concerns. I have raised my score to reflect this, albeit I am also not convinced that this work can be considered "embodied" in the full sense. It is worth noting that most of this work's evaluation criteria are designed to measure the performance of LMs in scenarios that do not rely on embodied context-specific physical actions or navigation within virtual worlds. General tasks such as question-answering and dialogue do not seem sufficient to claim embodiment, and while this work is valuable in broadening the general cognitive capabilities of LMs, it may be best to make it clearer that it differentiates itself from traditional embodied AI tasks, where agents operate and interact within specific environments to achieve tasks. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable reviews and are glad to know that our rebuttal and new experiments have addressed most of your concerns. We agree that our evaluation criteria are different from traditional embodied AI studies. 
This is due to our unique goal of improving the fundamental embodied knowledge of LMs for general language problems, which has not been studied before. We will definitely make the point clearer in the revised version as you suggested!
Summary: This paper proposes to incorporate world models into large language models to enable understanding of object permanence and planning capabilities that are missing in text-only models. Specifically, the authors utilize embodied environments (VirtualHome) to collect training examples including goal-oriented planning (plan generation, activity recognition) and random exploration (counting and object path tracking). To avoid overfitting, this paper finetunes GPT-Neo-1.3B and GPT-J-6B models using LoRA and EWC. Evaluated on seen and unseen tasks, results show that the proposed method after finetuning outperforms the corresponding text-only baselines, and matches or outperforms ChatGPT performance under the few-shot setting. Strengths: 1. This paper is motivated by an interesting limitation in text-only language models, which is the lack of embodied knowledge in pre-training. The proposed fine-tuning dataset, along with the EWC-LoRA method, improves the model performance on challenging embodied reasoning and understanding tasks, without dramatically increasing the original language model perplexity. 2. The paper includes some interesting ablation studies to discuss how much improvement is observed from each fine-tuning task. Weaknesses: 1. The contribution and findings of the paper are limited. Although the limitation of the LLMs is well motivated in the paper, fine-tuning on a similar distribution to improve model performance and thus outperform baselines is expected. Furthermore, fine-tuning with LoRA to improve efficiency, as well as EWC to reduce overfitting, have been well studied. Given that the (relatively weak) baseline is the original text-only model (and a few-shot prompted ChatGPT model), it is not convincing that the proposed method (either finetuning data or method) is generalizable to broader embodied tasks (e.g., compared to an embodied or multi-modal model). 2. Some important details are missing and the evaluation is not very convincing. 
The paper is phrased as incorporating world models into LLMs, but it is confusing (without explicit explanation) that both training and evaluation are not with the embodied environment. Rather, only part of the data is collected using VirtualHome. More importantly, it is not clear how the data are constructed (for example, how to sample "irrelevant actions" for counting QA, and statistics of the training and eval sets) and what the held-out set is for training the model. For evaluation, I understand automatic eval is the most convenient; but metrics such as Rouge-L, though widely adopted, may not reflect model performance on complex text generation tasks like plan generation. See more details in the questions below. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Have you done other evaluations on the base language model (i.e., NLP benchmarks such as SuperGLUE) apart from perplexity? 2. Why do you think EWC outperforms LoRA (in Table 2)? As a regularization method, EWC constrains parameter updates and thus should underperform LoRA by intuition. 3. Do you have a naive fine-tuning baseline to compare to? 4. Do you have detailed analysis on when and why the model improves performance on both seen and unseen tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper briefly mentioned limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **(1) Limited Contribution of Finetuning** We clarify that our key technical contribution is not “finetuning”, but rather identifying and formulating the limitations of current LMs (lacking embodied knowledge), developing novel ways to automatically collect embodied experiences of desired distributions, and designing diverse finetuning tasks. To the best of our knowledge, no previous work has done similar studies. **(2) Finetuning on similar distribution** We clarify that we finetune and evaluate not only on a similar distribution. To test the generality of our model, we develop and collect various *unseen out-of-distribution* tasks (e.g., time reasoning in bAbI and HouseworkQA). Moreover, we believe the results that our small models (GPT-Neo/-J) compete with and even outperform the much larger ChatGPT are indeed surprising, providing the first evidence of using world models to improve LMs. **(3) Novelty of LoRA and EWC** Although the specific finetuning method is not the focus of this work, we want to emphasize that our EWC-LoRA method does provide new insights. * **We’re the first to show that EWC for LLM finetuning is better than the previously more popular method of KL regularization [4-6]**. * The recent EWC-based LLM finetuning work has only focused on in-domain training tasks. That is, they show EWC could help LLMs to remember previously trained tasks. In contrast, **we additionally show that EWC even helps with out-of-domain unseen tasks**. That is, we show the LMs finetuned with EWC obtain better performance on unseen tasks compared to finetuning without EWC (Fig. 3 and Fig. 4). This is because EWC effectively preserves the LM's generality. * For LoRA, previous works usually use LoRA to improve finetuning efficiency, but **we are the first to show that combining LoRA with EWC can further prevent overfitting and improve generality** (Table 2). 
**(4) Comparison with embodied or multimodal models** We want to clarify that our approach has a different goal from those models and thus is not comparable to them. Please refer to the general response. **(5) Incorporating world models into LLMs** As above, we are not “incorporating world models into language models”, but *using world models to improve language models*. Therefore, what we finally get is still a general-purpose language model. In the training, all the used embodied experiences are collected from VirtualHome. In the evaluation, we evaluate the model on common general tasks (e.g., QA) to see if it acquires new knowledge and can generalize it to unseen tasks. **(6) Details of Dataset Construction** We have already provided the statistics of the evaluation set in Section 4.1. For the training, the size of Plan Generation/Activity Recognition/Counting/Object Path Tracking is 1659/1659/1000/1000, respectively, and the held-out validation set is a Plan Generation subset of size 200. We will include all details in the revised version. For the counting QA task, as we demonstrate in line 151, all the actions (including irrelevant actions) are sampled randomly from the action space of VirtualHome. **(7) Rouge-L not reflecting performance** We additionally conduct human evaluations on plan generation. We follow [3] to ask 3 people to annotate whether each task can be completed using a generated plan. We randomly sample 150 tasks and ask each person to annotate 50 of them. Results are below. The higher planning accuracy demonstrates the superior task planning ability of our model.

|Model|Accuracy|
|-|-|
|GPT-J|24.0|
|Ours (GPT-J)|**62.4**|

**(8) SuperGLUE besides perplexity** We evaluate the base GPT-J-6B and our model on appropriate SuperGLUE tasks (those that can be formulated as a multi-choice QA task without prompt engineering). 
|Model|BoolQ|CB|RTE|AX-g|AX-b|COPA|
|-|-|-|-|-|-|-|
|GPT-J|45.20|**41.07**|47.29|50.00|**57.50**|59.00|
|Ours|**66.00**|**41.07**|**58.84**|**53.37**|54.00|**62.00**|

Our model’s performance matches or even outperforms the baseline, showing our model retains the general language capability. **(9) Why EWC outperforms LoRA** If we understand correctly, your question was: EWC adds constraints to the parameter updates, so why does EWC outperform LoRA in Table 2? (Please correct us if our understanding is not correct.) LoRA freezes the base language model and only updates a small number of parameters in adapters. We hypothesize that this can also be seen as adding constraints to the parameter update, resulting in a slightly lower performance than EWC. **(10) Naive finetuning baseline** We finetune GPT-J (GPTJ-FT) and compare with our method (GPTJ-E2WM). Our method outperforms the baseline significantly.

|Model|Act-Infer|Act-Recog|Count|HouseQA|NegQA|ObjMove-QA|ObjMove|PlanGen|PlanGen-Conf|PlanGen-Unseen|PlanGen-Conf-Unseen|
|-|-|-|-|-|-|-|-|-|-|-|-|
|GPTJ-FT|70.99|71.41|16.49|51.34|33.33|22.50|46.25|47.98|47.59|47.86|44.43|
|GPTJ-E2WM|**74.43**|**88.52**|**67.01**|**85.44**|**39.51**|**34.50**|**98.67**|**51.23**|**48.94**|**49.58**|**45.60**|

**(11) Improvements on both seen and unseen tasks** When using EWC, the LM learns new knowledge from finetuning while preserving its generality. Thus, on new unseen tasks requiring the same knowledge as in the finetuning tasks, the model can utilize acquired knowledge. We’re happy to do more analysis if you have more specific questions. [1] Mu et al. EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought. 2023. [2] Wang et al. Voyager: An Open-Ended Embodied Agent with Large Language Models. 2023. [3] Huang et al. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents. ICML 2022. [4] Lu et al. Quark: Controllable Text Generation with Reinforced Unlearning. NeurIPS 2022. 
[5] Ouyang et al. Training language models to follow instructions with human feedback. 2022. [6] Liu et al. Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering. EMNLP 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I understand the main contribution of this paper, which is to inject world knowledge into language models. I was pointing out that for one of the two main contributions highlighted in the paper (fine-tuning), it was not clear why it would perform better than fine-tuning baselines (as updated above, thank you for the results). Intuitively, and as has been widely shown in previous continual learning studies, EWC and other such methods reduce catastrophic forgetting while lagging behind full fine-tuning results. However, it seems that the results between GPTJ-FT and GPTJ-E2WM show the opposite. Do you have any intuition on why using E2WM performs better on the in-domain tasks? In terms of embodied and multimodal models, I was not looking for a direct comparison. I was mainly suggesting that as this paper claims benefits in embodied environments (as also pointed out by other reviews), the evaluation does not seem convincing enough. It would be great if the authors can provide some comparison to research in embodied agents in the paper revision. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We would like to reply to your question and comment as follows: **(1) Why EWC outperforms direct finetuning** We want to clarify that our evaluation tasks are **out-of-domain** tasks instead of in-domain ones. Since our goal is to train a general-purpose model, we intentionally designed evaluation tasks that are different from the training tasks (but require similar knowledge) to test its generality.
For example, as mentioned in Lines 233-234, plan generation evaluation requires *free-form natural language plans*, while the ground truth for training is *executable plans following the schema* of the specific environments. Similarly, other evaluation tasks such as Housework QA, Object Path Tracking, etc., are very different in nature from the training tasks. In addition, we also introduce diverse evaluation settings (e.g., Vanilla Unseen, Confusing Seen, and Confusing Unseen, as in Figure 3) that differ greatly from the training. As mentioned in the paper (Lines 108, 175-182 and 194-196) and our response, EWC helps preserve the generality of LM capabilities and thus improves generalization to out-of-domain tasks. On the contrary, previous continual learning studies with EWC typically evaluate finetuned models on in-domain tasks seen during training. **(2) Comparison to embodied agents** As we’ve clarified in the rebuttal and general response, prior works on embodied agents are not comparable to our work. Contrary to other research on embodied agents, we’ve taken a different goal and approach. Most of these studies either fine-tune the LLM to create a task-specific model [1][2] or incorporate special modules on top of the LLM [3][4], tailoring it to particular tasks. This specialization contrasts with our objective: we are striving to develop an LLM that serves as a general-purpose model enriched with embodied experiences. We’ve also demonstrated that our models can generalize newly acquired knowledge to common general tasks like QA. Notably, other reviewers have acknowledged our evaluation settings (7bDY) and contributions (Reviewers cenq, DfLY and 311c). We will make the point clearer in the revised version! We sincerely thank you again for your efforts in reviewing our paper and your constructive suggestions. We hope we have resolved all the concerns, and we would deeply appreciate it if you could reconsider the score accordingly.
We are always willing to address any of your further concerns. [1] Lin et al. On grounded planning for embodied tasks with language models. AAAI 2023. [2] Pashevich et al. Episodic transformer for vision-and-language navigation. ICCV 2021. [3] Shinn et al. Reflexion: an autonomous agent with dynamic memory and self-reflection. 2023. [4] Yao et al. React: Synergizing reasoning and acting in language models. 2022.
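As a minimal sketch of the EWC-style constraint discussed in point (9) of the rebuttal above (a quadratic penalty, weighted by a Fisher-information estimate, that discourages moving parameters important for the pre-training tasks), the function names and toy values below are illustrative and not the actual implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    # Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def total_loss(task_loss, theta, theta_star, fisher, lam):
    # Fine-tuning objective: loss on the new (embodied) tasks plus a penalty
    # that discourages moving parameters important for the pre-training tasks.
    return task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

The penalty vanishes at the pre-trained weights and grows fastest along directions the Fisher estimate marks as important, which is the sense in which EWC "constrains the parameter update".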
Summary: This paper proposes a new method to improve the embodied planning capability of large language models (LLMs) by adding both goal-oriented planning and random exploration data from a world model/simulator, as well as the EWC-LoRA regularizer that prevents catastrophic forgetting of pre-training tasks. The new EWC-LoRA regularizer proposed by the authors is not only time- and memory-efficient compared to prior KL-constraint-based regularization, but also leads to a negligible perplexity increase and better downstream task performance. The authors show superior performance of the method over base LLMs in constructed embodied planning and activity description tasks based on the VirtualHome simulator, and also on the bAbI dataset for testing multiple types of knowledge and abilities including embodied knowledge, logic reasoning, linguistic knowledge, etc. Notably, the method can even outperform much bigger LLMs such as ChatGPT in many of the scenarios. Strengths: 1. The authors present a simple yet effective way to improve LLMs' embodied planning capability without sacrificing the abilities acquired during pre-training, at a small/negligible compute/memory cost. This idea is neat and of great practical importance. I think the method will be quite significant for the future directions of making LLMs better at embodied tasks. 2. The authors have done extensive experiments in both customized and existing benchmarks/datasets to show that the proposed method can outperform base models that are not fine-tuned with embodied data. It is impressive that the authors show that small models fine-tuned with embodied planning data can outperform large LLMs such as ChatGPT in many scenarios. 3. The authors also performed detailed ablation studies to show the importance of the proposed EWC-LoRA regularizer by comparing it to KL divergence and EWC only. The ablation studies on including various data mixtures also make the paper more complete. Weaknesses: 1.
I think the authors should compare to stronger baselines that also consider improving language models' embodied decision-making and reasoning capabilities, such as [1, 2]. Improvement over base models is good but not that surprising or convincing. 2. The authors should consider other embodied decision-making benchmarks such as ALFWorld, which is used in [1, 2]. This would provide a clearer picture of the comparison between the method and prior approaches. It would also add more tasks to the empirical evaluation, which could further validate the method. [1] Shinn, Noah, Beck Labash, and Ashwin Gopinath. "Reflexion: an autonomous agent with dynamic memory and self-reflection." arXiv preprint arXiv:2303.11366 (2023). [2] Yao, Shunyu, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. "React: Synergizing reasoning and acting in language models." arXiv preprint arXiv:2210.03629 (2022). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please address the comments listed in the section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive feedback and suggestions! We would like to address your concerns in the following paragraphs: **(1) Comparison with Reflexion and React** We want to clarify that our approach has a different goal and solves a different problem compared with the mentioned work, and our approach can be combined with these methods. Specifically, methods like Reflexion and React aim to utilize the existing embodied knowledge of LMs (which might be insufficient since they have no embodied experiences) to improve their performance on specific tasks such as navigation in Minecraft; thus they combine off-the-shelf LMs with different components like a memory bank, prompt engineering, environment feedback collection, etc. On the contrary, our goal is to enhance the **LM itself** by acquiring new embodied knowledge. Our final outcome is a language model with richer embodied knowledge which can still be integrated with existing methods like Reflexion and React. We aim to explore this combination in the future. In addition, we think our results of enhancing LMs as small as GPT-Neo (1.3B) and GPT-J (6B) to compete with or even outperform the much larger ChatGPT are significant and surprising. **(2) Other embodied decision-making benchmarks** The goal of our work is to inject fundamental embodied knowledge into LMs, which is not specific to particular embodied environments but is general and needed in common problems like QA and dialogue. Therefore, our evaluation is designed to assess this knowledge in those general settings (such as QA). Please refer to the general response for more details.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful and encouraging comments. We are encouraged by the reviewers’ appreciation that the motivation and idea of the paper are novel, interesting and promising (Reviewers cenq, DfLY, ekFP); that the proposed method is neat, effective, and of great practical importance (cenq, DfLY, 311c); and that the experiments and ablation studies are thorough and detailed (cenq, DfLY, 311c), showing strong improvements (cenq, DfLY, ekFP, 311c). We’d like to highlight the unique focus of our work that differs from previous work on embodied LMs: * **Our goal** is to inject *fundamental* embodied knowledge and skills into LMs, such as object permanence, action planning, spatial knowledge, etc. Such fundamental embodied knowledge and skills are **not** specific to particular embodied environments (e.g., VirtualHome, ALFWorld). Instead, they are general and needed in common problems such as various forms of question answering (QA) and dialogue. Accordingly, **our approach** aims to finetune LMs while keeping them as general-purpose models, capable of handling common general problems (e.g., QA) and generalizing acquired knowledge to unseen tasks. Similarly, **our evaluation** assesses this knowledge and these skills in those common general settings, including the 11 newly constructed tasks (mostly in QA form) and the well-known bAbI tasks designed for assessing the fundamental knowledge/skills of models. These evaluations are independent of specific embodied environments. * In contrast, **prior work** on LMs for embodied tasks aims to apply LMs to handle **specific** embodied environments, through either finetuning or prompting. Such work includes [1, 2] (mentioned by the reviewers), which finetune LMs for specific embodied environments, and [3, 4], which keep LMs frozen but design specialized prompts for the respective environments, as well as the work already discussed in our Related Work.
Accordingly, their evaluation deploys the specialized LMs to complete specific tasks in the respective embodied environments. * In addition, in prior work, deploying LMs to specific embodied environments requires additional components, such as mapping the LM-generated free-form text (e.g., “*Next, you should go to the kitchen*”) into the action space (“<Walk> [kitchen]”) of the specific environment. Our work does not involve those components, as we focus on fundamental knowledge/skills and general common settings such as QA. [1] Lin et al. On grounded planning for embodied tasks with language models. AAAI 2023. [2] Pashevich et al. Episodic transformer for vision-and-language navigation. ICCV 2021. [3] Shinn et al. Reflexion: an autonomous agent with dynamic memory and self-reflection. 2023. [4] Yao et al. React: Synergizing reasoning and acting in language models. 2022. Pdf: /pdf/f5d8588ec01f6b3857d9214c366fb483bc00d3a3.pdf
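To make concrete the grounding step mentioned in the last bullet (mapping free-form LM output such as "Next, you should go to the kitchen" to a VirtualHome-style schema action like "<Walk> [kitchen]"), here is a hypothetical sketch. The patterns and function below are our illustration of the kind of component prior work needs; this work deliberately does not include such a component:

```python
import re

# Hypothetical grounding step (not part of this work): map free-form LM output
# into a schema action string. The patterns below are illustrative only.
PATTERNS = [
    (re.compile(r"go to the (\w+)"), "<Walk> [{}]"),
    (re.compile(r"pick up the (\w+)"), "<Grab> [{}]"),
]

def ground(text):
    # Return the first schema action matched in the text, or None if nothing matches.
    for pattern, template in PATTERNS:
        match = pattern.search(text.lower())
        if match:
            return template.format(match.group(1))
    return None
```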
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes to enhance language models by finetuning them on “embodied experiences”, which are textual data generated by a household activity simulator, referred to as a “world model” in the paper. Through evaluations on several tasks (e.g., planning, object tracking), it is shown that finetuning on these “embodied experiences” leads to better performance compared to larger models (e.g., ChatGPT) that are not trained on these data. To minimize catastrophic forgetting and improve training efficiency, the paper also proposes to combine “elastic weight consolidation” and “low-rank adapters” to finetune the language models. Strengths: - The premise of the paper is interesting, novel, and promising — because language models are not trained on embodied data, they might be less robust in scenarios concerning interactions with the environment (i.e., the “embodied settings”). - The idea of using a household activity simulator as a “world model” is interesting and likely significant in the context of enhancing language models. - Moreover, the authors conduct thorough experiments that support the central claim. - The writing and presentation of the paper are also clear. Weaknesses: Despite the strengths of the paper, below are some concerns about the problem settings and evaluations: - While it is shown on GPT-Neo and GPT-J (which are relatively small language models by today’s standards) that the proposed approach improves their capabilities on benchmarks such as bAbI, as also indicated in the paper's evaluations, larger models (i.e., ChatGPT) which are not trained on these embodied data also attain similar performance. Because the constructed tasks only require scene context in text form, it is unclear whether a similar delta will be seen on larger models. Put differently: will this improvement diminish with larger-scale training even without embodied experiences?
The reviewer would like to note that, due to practical reasons, it is understandable that experiments like these may not be done for larger LMs, but it is worth further discussion or experiments in the paper to support the claim. - Another weakness is that the training tasks need to be hand-curated for the “embodied experience”, which includes planning, activity recognition, counting, and object path tracking. This is unlike how autoregressive LMs are trained, which only requires one main self-supervised task of next-token prediction. This raises the question of how scalable the proposed approach is, as the broader “embodied experiences” include tasks of much higher diversity, in addition to other modalities such as vision. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: An important baseline is ChatGPT, which is not trained on “embodied experiences”, but it doesn’t seem like the prompt used for querying ChatGPT is provided. In contrast, the paper notes that few-shot prompting was used for the smaller LMs. If they are given the same prompt, would the performance differ? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are not adequately discussed. See the comments in the “weakness” section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and helpful suggestions. We would like to address your concerns as follows: **(1) Scalability to larger models** Thanks for the suggestion. We apply our approach to two larger LMs: OPT-13B and LLaMA-13B. The results on the 11 tasks (as in Figure 3 in the paper) are shown below. We see improvements consistent with the case of smaller LMs (Figure 3). That is, our method substantially outperforms the respective base LMs while retaining a low perplexity, demonstrating the effectiveness of our method when scaling up to larger models. In addition, we’d like to note that our approach’s strong performance on **small** LMs (e.g., GPT-Neo is able to compete with or even outperform ChatGPT) is itself of practical significance. And the smaller the LMs, the more significant it becomes for practical *cost-efficient* applications.

| Model | Act Infer | Act Recog | Count | HouseQA | NegQA | ObjMoveQA | ObjMove | PlanGen | PlanGen Conf | PlanGen Unseen | PlanGen Conf Unseen | PPL |
|-|-|-|-|-|-|-|-|-|-|-|-|-|
| OPT-13B | 67.94 | 89.07 | 20.10 | 81.61 | **43.21** | **37.00** | 33.49 | 36.00 | 31.92 | 29.34 | 36.98 | 4.0768* |
| Ours (OPT-13B) | **70.61** | **91.44** | **62.37** | **84.29** | 40.21 | 33.00 | **96.28** | **50.15** | **49.87** | **45.11** | **47.93** | 4.3584 |
| LLaMA-13B | **74.05** | 90.53 | 29.38 | 81.99 | **43.21** | 28.50 | 38.82 | 41.77 | 40.33 | 38.78 | 41.73 | 3.0359* |
| Ours (LLaMA-13B) | 68.32 | **91.80** | **79.38** | **86.59** | 30.25 | **79.00** | **96.99** | **52.05** | **51.00** | **47.44** | **50.49** | 3.0690 |

**(2) Hand-Curated Training Tasks** As discussed in the Introduction and Method, the diverse embodied skills are centered around two core abilities, i.e., planning and object tracking (e.g., Lines 46-50, 127-129).
We collect embodied experiences based on the two abilities, and design diverse training tasks to comprehensively digest the embodied experiences. This is akin to some recent LM pretraining work. For example, next-word prediction (or sequential denoising) can be seen as training LMs to acquire the core ability of denoising; UL2 [1] shows that diversifying the training tasks (e.g., sequential denoising, span denoising, etc.) for the same core ability of denoising can substantially improve performance compared to using only one task (e.g., next-word prediction). Our design of diverse training tasks follows a similar idea. To further scale up, we speculate that there could be a small set of core abilities (like planning and tracking) that facilitates the collection and design of training tasks. We’re excited to study this more in the future. **(3) ChatGPT prompts** The prompts we used for ChatGPT also include in-context few-shot exemplars, as well as instructions describing the task. We will include them in the revised version. [1] Tay et al. UL2: Unifying Language Learning Paradigms. 2023. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for the effort on the additional experiments. I have raised the "soundness" score to 4, but I would like to maintain my overall rating given the scope considered in this work, i.e., I would still recommend acceptance of this work. [2] While I appreciate the effort for the extra clarification, I'm not convinced by the argument that the designed training tasks, planning and object tracking (at least in a simplified environment like VirtualHome), are sufficient to cover "embodied experiences". For example, physical or visual experiences such as those experienced by a physical robot are also core to "embodiment" (and would likely lead to better world understanding for LLMs), but these are simply not present in the setup explored in this work.
The example brought up by the authors, regarding the objectives in LLM training, is also different from the setup in this work, as those objectives are unsupervised at their core, while the training data used here are more like labeled data. However, I still like the premise proposed in this paper, and it can serve as a stepping stone for future work. --- Reply to Comment 1.1.1: Comment: Thank you so much for your supportive review! We fully agree with your points and didn’t claim that planning and tracking in our work cover all relevant “embodied experiences”. We agree that there are more diverse types of experiences (like the physical and visual ones you suggested). They present enormous new opportunities for the incorporation and further improvement of LMs. We hope the idea and approach presented in this work can inspire more studies in this exciting direction.
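The idea of digesting one embodied trajectory into several training tasks (point (2) of the rebuttal above on hand-curated training tasks) can be sketched as follows. The task templates and field names are our hypothetical illustration, not the paper's exact formats:

```python
def make_training_examples(goal, actions):
    # Derive several QA-style training examples from a single plan trajectory.
    plan_text = ", then ".join(actions)
    return [
        # Plan generation: goal -> full action sequence.
        {"task": "plan_generation",
         "prompt": f"Task: {goal}. Plan:",
         "target": plan_text},
        # Activity recognition: action sequence -> goal.
        {"task": "activity_recognition",
         "prompt": f"Actions: {plan_text}. What activity is this?",
         "target": goal},
        # Counting: how many actions were taken.
        {"task": "counting",
         "prompt": f"Actions: {plan_text}. How many actions were taken?",
         "target": str(len(actions))},
    ]
```

One trajectory thus yields several supervised examples aimed at the same underlying ability, in the spirit of the UL2 analogy drawn in the rebuttal.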
null
null
null
null
null
null
Learning Robust Statistics for Simulation-based Inference under Model Misspecification
Accept (poster)
Summary: This paper proposes a general approach to handling model misspecification for SBI. The paper introduces a regularized loss function that penalises mismatches between the observed data and the learned model. The paper focuses on NPE and ABC as the likelihood-free approaches. Strengths: * The radio propagation example is interesting, and shows the potential strength of using the MMD regularizer. * The structure of the paper is clear. Weaknesses: * The novelty of the work is a concern in light of reference [72], equation (9), compared to equation (9) in this paper. Additionally, equation (10) appears similar to the InfoVAE (https://arxiv.org/abs/1706.02262), which also uses an auto-encoder with an MMD loss in the latent space. * A minor weakness is that the paper is difficult to follow. While the structure made sense, the descriptions of the approach and the motivation for it were challenging to understand. For example, it seemed implicit that this approach did not perform amortised NPE until this was highlighted in the conclusion. This is not in itself a criticism of the approach, but it was very hard to follow what constituted the training data and the observations. For example, line 255 mentions using 1000 samples for the training data and 100 realisations of both the observed and simulated data for each $\theta$. Under an amortised setting we would not have access to the observed $\theta$. It is not clear what this setup is from the description. Further questions in the next section highlight some of these confusions, which likely stem from descriptions of the approach that could be improved. * An additional minor weakness is only comparing to ABC and NPE. Is there any limitation on applying this approach to other estimators such as Neural Likelihood Estimators? Also, for ABC, the $\rho$ was only described in the supplementary materials as the Euclidean distance.
After defining the MMD as the better regularizer, why would this not also be used/incorporated into the discrepancy? Typo: line 180, the $i$ should be subscript? * In Figure 5, it would be useful to ensure the three plots share the same y axis. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Could the authors provide more details about the step in Eq. (4) to go from infimum to upper bound? I found this challenging to understand. * The motivation behind defining a Q and a P that are different is a bit confusing. Why were these definitions highlighted? It seems that Q is equal to P when using the $\theta_{true}$, but is it a requirement to know $\theta_{true}$ prior to the experiments in order to simulate from Q? * The last sentence of section 5 highlights that the MMD is superior for the NPE-RS method. Is this surprising given it is built into the loss function? Are there other statistics that could be used here that are specific to the application domain? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: This is well captured by the paper. This is appreciated! Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **"The novelty of the work is a concern in light of reference [72]":** See the general response to all the reviewers. * **"equation (10) appears similar to the InfoVAE (https://arxiv.org/abs/1706.02262), which also uses an auto-encoder with a MMD loss in the latent space.":** InfoVAE tackles the issue of learning meaningful latent features in variational autoencoders by modifying the ELBO objective, whereas we tackle the problem of learning robust statistics for simulation-based inference. Moreover, the MMD in InfoVAE is computed between the variational distribution and the prior distribution, while we compute the MMD between the simulated and observed statistics. * **"Is there any limitation on applying this approach to other estimators such as Neural Likelihood Estimators?":** No, our method can be applied to Neural Likelihood Estimators (NLEs) in the same way as we applied it to ABC: by learning a summary function up front using neural networks. See Figure 2 in the attached pdf for NLE results under misspecification. * **"After defining the MMD as the better regularizer, why would this not also be used/incorporated into the (ABC) discrepancy?":** MMD is used as the discrepancy in ABC to circumvent the need to summarize data into summary statistics, as mentioned in ref [61]. However, this requires selecting an appropriate kernel function for the data, which is not always feasible. In most practical cases, summary statistics are used in ABC along with the Euclidean distance, which is what we used in our experiments. The Euclidean distance is preferred here as the data is summarized into one statistic vector for each parameter value, while MMD is useful when computing distances between datasets. * **"line 255 mentions using 1000 samples for the training data and 100 realisations of both the observed and simulated data for each $\theta$.
Under an amortised setting we would not have access to the observed $\theta$.":** We are not sure we understand. Did you mean "in the real-world setting we would not have access to the true $\theta$"? In that case, yes, that is the situation in the real-data experiment of Section 5. In Section 4, we simulate observed data using the true $\theta$ to check whether we are robust to misspecification in the parameter space. * **"Could the authors provide more details about the step in Eq. (4) to go from infimum to upper bound?":** Here we utilize the fact that the average value of a random variable (its mean) is greater than or equal to its minimum value (infimum), i.e., $\inf_\theta f(\theta) \leq \mathbb{E}_{p(\theta)}[f(\theta)]$, so replacing the infimum with an expectation yields an upper bound. * **"The motivation behind defining a Q and a P that are different is a bit confusing. Why were these definitions highlighted?":** We introduce $P_{\theta}$ and $Q$ to define model misspecification in SBI; see Section 2 of ref [24] for similar notation. Here $Q$ is the unknown true data-generating process, and $P_\theta$ is the model we are trying to fit. We use $\theta_{\mathrm{true}}$ in experiments to simulate the observed data. In practice, $\theta_{\mathrm{true}}$ is not known and we would only have samples from $Q$, which is the case in Section 5. * **"The last sentence of section 5 highlights that the MMD is superior for the NPE-RS method. Is this surprising given it is built into the loss function? Are there other statistics that could be used here that are specific to the application domain?":** The MMD in Section 5 is used to test the performance of the different methods based on their predictive distributions, and uses a different kernel than the MMD in the loss function that summarizes the data; thus they are unrelated. Nevertheless, by using the KL divergence estimator instead of MMD between the observed data and predictions, we get the values 4.59, 5.81 and 2.30 for NPE, RNPE and NPE-RS, respectively, showing again that our NPE-RS method fits the data best.
* **"In Figure 5, it would be useful to ensure the three plots share the same y axis."** Agreed. We will edit the figure accordingly. * **"Typo: line 180, the $i$ should be subscript?":** Yes, thanks for the careful reading. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your detailed response. > line 255 I think my confusion comes from the difference between the $m$ training samples and $n$ realisations and how they both come into the algorithm. I am willing to raise my score, thanks to your response and additional experiments. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing your score. We really appreciate it. Regarding line 255: We have $n$ iid realisations for each dataset $\mathbf{x}\_{1:n} = \{\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(n)}\} \sim \mathbb{P}\_\theta$ simulated from the model for a given $\theta$. Thus, $(\theta, \mathbf{x}\_{1:n})$ form a pair. In order to train the NPE network (or for ABC), we generate $m$ such pairs by first sampling $\theta_1, \dots, \theta_m \sim p(\theta)$ from the prior, and then generating $\mathbf{x}\_{1:n, i} \sim \mathbb{P}\_{\theta_i}$, $1 \leq i \leq m$. This results in the training data $\{(\theta\_i, \mathbf{x}\_{1:n, i})\}_{i=1}^m$ of size $m$. To sum up, there are $m$ simulated datasets in the training data, where each dataset has $n$ iid samples.
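The training-data construction described in the reply above ($m$ prior draws, each paired with $n$ iid realisations) can be sketched as follows; the uniform prior and toy Gaussian simulator below are placeholders, not the models used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def prior(m):
    # theta_1, ..., theta_m ~ p(theta); a uniform prior as a placeholder.
    return rng.uniform(0.0, 10.0, size=m)

def simulate(theta, n):
    # x_{1:n} ~ P_theta: n iid realisations for one parameter value;
    # a toy Gaussian model stands in for the actual simulator.
    return rng.normal(loc=theta, scale=1.0, size=n)

def make_training_data(m, n):
    # m pairs (theta_i, x_{1:n, i}), i.e. m simulated datasets of n samples each.
    thetas = prior(m)
    datasets = np.stack([simulate(t, n) for t in thetas])  # shape (m, n)
    return thetas, datasets

thetas, datasets = make_training_data(m=1000, n=100)
```

With $m = 1000$ and $n = 100$ this matches the setup the reviewer asked about: 1000 training pairs, each containing 100 iid realisations.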
Summary: The authors propose a method to make neural posterior estimation robust to misspecification. The method relies on adding a regularizer to the NPE loss function. The regularizer is implemented as the MMD between the embedding of the approximate (learned) posterior and the embedded observation. They apply their method to two low-dimensional benchmark tasks and a real-world example from radio propagation. Strengths: **Originality**: The paper tackles an important issue: how to make neural simulation-based inference robust to misspecification. The method is novel and intuitive. **Quality**: The method is applied to benchmark tasks and to a real-world problem (with iid data samples and high-dimensional data). **Clarity**: The paper is well-written and easy to follow. Figures are intuitive and well-designed. Weaknesses: My main concerns with the paper are that it (1) uses only two very low-dimensional benchmark tasks and (2) overstates its contributions. **Originality**: - the method is a very straightforward extension of Schmitt et al. 2021 (which is only cited with a passing reference). The authors should clearly discuss how the works are related. **Quality**: - The authors repeatedly emphasize that their method performs on par with NPE for well-specified models. I do not think that their results support this claim, though. NPE-RS performs significantly worse than NPE on well-specified data on one of the (only) two benchmark tasks. In order to make this claim, the authors should use significantly more benchmark tasks, ideally with varying parameter and data dimensionality, and demonstrate that the behaviour shown in Figure 2 (left) is a rare exception. In addition, the authors would have to show that NPE-RS converges (at least very closely) to the true posterior with many simulations (see below).
- Convergence: The method proposed by the authors no longer converges to the true posterior distribution for well-specified models, which further emphasizes that the method does not replace detection of misspecification. I would appreciate an investigation of how regularization strength trades off performance on well-specified data vs robustness to misspecification. - How is the hyperparameter chosen? The authors claim that the regularizer can be selected with a validation set (L353). It is unclear to me how this would work. What would be the loss function used to assess performance on the validation set? In L231 the authors also say that this might be done based on the posterior predictive distribution, but I think this is tricky because it (1) requires (potentially many) more simulations and (2) can easily lead to pathological cases where the posterior is off but the predictive distribution is good. Please elaborate on how the hyperparameter should be set in practice. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: The authors claim that RNPE is not amortized. However, as far as I understand, RNPE does not require retraining for new data (yes, it requires MCMC, but this can be very fast, especially for a low-dimensional parameter space). Please clarify if I misunderstood this, or clarify this in your paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: State more explicitly that the method does **not** converge to the true posterior. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **"The method is a very straightforward extension to Schmitt et al. 2021.":** See the general response to all the reviewers. * **"... the paper uses only two very low-dimensional benchmark tasks":** We respectfully disagree. While the tasks may be low dimensional in the number of parameters, they are very high dimensional in terms of data (100, 50, and 801 for the Ricker, OUP and Turin models, respectively). Since misspecification occurs in the data space, we argue that the dimensionality of the data is more relevant to the problem under study. We would also like to point out that we added the arguably more important real-world case of misspecification (over additional synthetic benchmark examples). * **"NPE-RS performs significantly worse than NPE on well-specified data on one of the (only) two benchmark tasks.":** We disagree that NPE-RS performs *significantly* worse, and present additional results on different performance metrics in Figure 1 of the attached pdf. We see that the NPE-RS posterior is close to the NPE posterior in terms of MMD, and has comparable, if not better, empirical coverage. * **"Convergence: The method proposed by the authors no longer converges to the true posterior distribution for well-specified models, which further emphasizes that the method does not replace detection of misspecification.":** Correct, our method does not converge to the true posterior. That is the price to pay to be robust to misspecification. What we claim is that our method achieves posterior consistency by leveraging Theorem 1 from Frazier et al. (2018), "Asymptotic Properties of ABC", Biometrika, which says that as long as the statistics are informative about $\theta$ (which is the case for our method), the resulting ABC posterior concentrates on the true $\theta$ in the well-specified case. 
Unfortunately, such a result does not exist for NPE; however, as shown in Figure 1 of the attached pdf, the NPE-RS posterior is close to the NPE posterior in the well-specified case. Nevertheless, we agree that our results do not explicitly show that detecting misspecification is unnecessary, and we will remove that claim. * **"The authors would have to show that NPE-RS converges (at least very closely) to the true posterior with many simulations (see below).":** We do not claim that NPE-RS converges to the true posterior, but rather to the NPE posterior as $\lambda$ goes to 0, as shown in Figure 5(right). * **"I would appreciate an investigation of how regularization strength trades-off performance on well-specified data vs robustness to misspecification.":** This is exactly what we investigated in Figure 5 by varying the value of $\lambda$ in different settings. * **"How is the hyperparameter chosen? In L231 the authors also say that this might be done based on the posterior predictive distribution, but I think this is tricky because it (1) requires (potentially many) more simulations and (2) can easily lead to pathological cases where the posterior is off but the predictive distribution is good. Please elaborate on how the hyperparameter should be set in practice.":** Our method only has one hyperparameter $\lambda$, which we propose to set either by using posterior predictive checks or via inference results on a held-out validation dataset (which can be a subset of the observed data), as it depends on the degree of misspecification in practice. As shown in Figure 5, we found the inference results to not be very sensitive to changes in $\lambda$ in various settings. We further argue that setting $\lambda$ is similar to setting other hyperparameters such as the choice of architecture, number of layers, choice of activation, learning rate etc., which is now an accepted part of fitting any deep learning-based model. 
We argue that under model misspecification, it becomes difficult to say if the posterior is off or not (as there is no notion of a true posterior), as standard inference techniques are not reliable. We therefore rely on checking if the predictive distribution is accurate. * **"The authors claim that the regularizer can be selected with a validation set (L353). It is unclear to me how this would work. What would be the loss function used to assess performance on the validation set?":** We test the performance of the method on the validation set with different $\lambda$ values. The performance metric can be any loss function, such as MMD (which is what we used) or KL divergence, between the model's predictive distribution and the validation dataset. The same metric can also be computed on the space of statistics instead of the data. * **"The authors claim that RNPE is not amortized. However, as far as I understand, RNPE does not require retraining for new data (yes, it requires MCMC, but this can be very fast, especially for low-d parameter space). Please clarify if I misunderstood this or clarify in your paper.":** Agreed. Here, we refer to the amortization of the entire inference procedure, and not just the surrogate neural network, as noted in Box 1 of Lueckmann et al., Benchmarking Simulation-based Inference, AISTATS, 2021. Due to the additional MCMC step, the inference procedure is not amortized, which is also the case with neural likelihood and ratio estimators (NLE and NRE). We will clarify the text to reflect this distinction. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the detailed response and the additional simulations. They cleared some of my concerns, but some points are still unclear/unresolved to me: **Low-dimensionality:** I indeed had meant low-d theta. 
I agree that high-D x are relevant and important, but I still think that the paper would be much stronger if the authors demonstrated that the method scales to parameter spaces with dimensionality $\gg 2$. **NPE-RS performs significantly worse than NPE**: I still think that the paper downplays this limitation. The empirical difference between NPE and NPE-RS is very clearly noticeable, also in the new results (in addition to NPE-RS not converging to the true posterior). In my opinion, claims like `It provides accurate inference results even when the model is well-specified` (L 67) are not warranted and have to be removed or significantly trimmed down. **trade-off between performance on well-specified data vs robustness to misspecification**: Which of the three metrics shown in figure 5 should correspond exactly to "robustness to misspecification" and why is this warranted? Wouldn't negative log-likelihood on misspecified x be a better measure? **Hyperparameter choice**: The authors say that it can be set with a `held-out validation dataset (which can be a subset of the observed data)`. Does this mean that using cross-validation requires several datasets of observed data (which are potentially misspecified and cannot easily be synthetically generated) in order to choose $\lambda$? What if only one dataset is available at training time? --- Reply to Comment 1.1.1: Comment: Thank you for your response. We really appreciate it. * **"...I still think that the paper would be much stronger if the authors demonstrated that the method scales to parameter spaces with dimensionality $\gg 2$":** Agreed. Having an example with more than 4 parameters (as is the case with the Turin model in Section 5) can further strengthen our paper. To that end, we ran our method on the 10-dimensional Gaussian linear example with fixed covariance matrix $\Sigma$ (parameter of interest is the mean vector) used in the RNPE paper and also available in the SBI benchmark library. 
To make the model misspecified, we used the same contamination model for the observed data used in the paper, i.e., $\mathbb{Q} = (1-\epsilon)\mathcal{N}(\theta\_{\mathrm{true}}, \Sigma) + \epsilon \mathcal{N}(\theta\_c, \Sigma)$ where $\epsilon = 10\%$, $\theta\_{\mathrm{true}} = [0.5, \dots, 0.5]^\top$, $\theta\_c = [2,\dots,2]^\top$, $p(\theta) = \mathcal{U}([-1,1])^{10}$. The average MMD between the posterior predictive distribution and the observed data over 100 runs is shown in the following table (std. deviation is reported in parentheses). We will of course include these results in the paper.

| | NPE | NPE-RS ($\lambda = 20$) | NPE-RS ($\lambda = 50$) | NPE-RS ($\lambda = 100$) |
|:---:|:---:|:---:|:---:|:---:|
| **MMD** | 0.26 (0.02) | 0.19 (0.04) | **0.18** (0.06) | 0.21 (0.08) |

* **"In my opinion, claims like 'It provides accurate inference results even when the model is well-specified' (L 67) are not warranted and have to be removed or significantly trimmed down":** Apologies if we weren't clear before. We agree that our results do not support the claim that "It provides accurate inference results even when the model is well-specified, thus circumventing any need to detect model misspecification", and we will remove it. * **"Wouldn't negative log-likelihood on misspecified x be a better measure?":** The log-likelihood is not applicable for simulator-based models due to the intractability of the likelihood function. We did consider fitting a multivariate Gaussian distribution to the predictive distributions from the Ricker, OUP, and Turin models, and evaluating the log-likelihood of the data under the respective Gaussians. However, that introduces a new level of misspecification, as none of these models produce data that is jointly Gaussian. 
Moreover, when the observed data is corrupted by a few outlier points (thus causing model misspecification), the sum of log-likelihoods gets dominated by the few outliers, even if most of the observed data is explained well by the model. This is the reason why people have proposed generalised Bayesian inference (GBI) frameworks, where the likelihood term is replaced by a robust loss such as the MMD (which is known to be robust to outliers) to account for misspecification (see Knoblauch et al. (2019) and ref [24] for more details). We therefore used the MMD between the posterior predictive distributions and the observed data (bottom row of Figure 2) as the measure for robustness to misspecification (apart from RMSE). Knoblauch, J., Jewson, J., \& Damoulas, T. (2019). Generalized variational inference: Three arguments for deriving new posteriors. arXiv preprint arXiv:1904.02063. * **"Which of the three metrics shown in figure 5 should correspond exactly to "robustness to misspecification" and why is this warranted?":** In Figure 5, we are testing the claim that our method indeed converges to the NPE posterior as $\lambda$ goes to zero, and converges to the prior as $\lambda$ goes to infinity. To do that, we varied $\lambda$ and computed the distance (in terms of MMD) between the NPE-RS posterior and the prior (Fig 5(left)), and between the NPE-RS posterior and the NPE posterior (middle and right). * **Does this mean that using cross-validation requires several datasets of observed data (which are potentially misspecified and can not easily be synthetically generated) in order to choose $\lambda$? What if only one dataset is available at training time?:** We do not assume that multiple observed datasets are available, even though that is the case in some fields like Astrophysics where available data is plenty. 
We meant that for one observed dataset $\mathbf{y}\_{1:n} = \{\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(n)}\}$ with $n$ iid samples available at training time, we can use a random subset of the data, say $\mathbf{y}\_{1:m}$, where $m<n$, as the validation set to choose $\lambda$, and the rest of the $n-m$ points to run NPE-RS (which is what we did for the radio propagation experiment). In the limiting case that only one observed data point is available, there is not going to be any significant posterior update anyway.
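The contaminated observation model described in the 10-dimensional example above can be sketched as follows. Function and variable names are illustrative, and an isotropic covariance $\sigma^2 I$ stands in for the $\Sigma$ of the rebuttal for simplicity:

```python
import numpy as np

def sample_contaminated(theta_true, theta_c, sigma, eps, n, rng):
    """Draw n iid observations from the mixture
    Q = (1 - eps) * N(theta_true, sigma^2 I) + eps * N(theta_c, sigma^2 I)."""
    d = len(theta_true)
    is_outlier = rng.random(n) < eps                      # contamination flags
    means = np.where(is_outlier[:, None], theta_c, theta_true)
    return means + sigma * rng.standard_normal((n, d)), is_outlier

rng = np.random.default_rng(42)
theta_true = np.full(10, 0.5)   # ground-truth mean from the example
theta_c = np.full(10, 2.0)      # contamination mean from the example
obs, flags = sample_contaminated(theta_true, theta_c, 1.0, 0.10, 1000, rng)
```

With 10% contamination, roughly one in ten observations is drawn around the outlier mean, which is the mechanism that makes the well-specified Gaussian model misspecified with respect to the observed data.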
Summary: This paper describes a method to perform simulation-based inference, focusing on neural posterior estimation and approximate Bayesian computation (ABC), under model misspecification when performing inference using summary statistics of the data, by using an MMD loss between the simulated and observed data summaries as a means to mitigate misspecification. Experiments are performed to empirically examine the efficacy of the method, and the method is compared to standard neural posterior estimation and ABC, as well as a method called robust neural posterior estimation. Strengths: The paper is tackling an important issue, the method is interesting, and the description of the method and its motivation are made clear. Weaknesses: It is not clear how this work differs from ref [72], Schmitt et al., "Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks". A much clearer description of the differences is needed to understand the novelty of this work. It is not clear how to determine the amount of regularization, i.e. how to set the hyperparameter lambda. This seems to be a key missing piece of information on how to practically use this method. The paper seems to indicate that the summarizer acts on sets of iid samples. But it is not clear why the summarizer must act on the set of x's, rather than summarizing each observation and using the fact that the summaries will remain iid. This seems to come back as a constraint later in line 179, but does not seem well motivated, nor does it reflect how people practically use summary statistics for iid data. This could also significantly impact the ability to train a conditional normalizing flow due to the massive reduction in information by summarizing over a set of examples rather than summarizing per example. In the experiments, it is not clear what summary statistic is used for RNPE. 
Summary statistics are not described in the RNPE paper, which rather attempts to model p(theta|x) directly, so how was this choice made? Does it affect the results? Moreover, in the experiments, why not compare to the method in ref [72]? In terms of references, there are also methods to learn robust summaries prior to inference, e.g. using pivots, such as in Louppe et al., "Learning to pivot with adversarial networks", and similar domain adaptation approaches, some specifically using MMD. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How does this work differ from ref [72]? Can you provide a much clearer description of the differences, to clarify the novelty of this work? Can you compare to this work? On line 257, it is stated that lambda is set using simulations with theta_true. What does it mean that you set lambda using a data set with known theta_true? Isn't this something that needs to be estimated? Doesn't this greatly reduce the challenge of inference, and this information is not realistically available? Why does the summarizer act on sets of iid data, rather than on each iid data example? Acting on each example individually seems to be much closer to practical usage. Do you have experiments in this setting? Would this significantly impact the computation driven by MMD? It seems this may make the method impractical. Please provide more details on how RNPE was used in the experiments, and how the summary statistics were determined. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately discuss limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **"How does this work differ from ref [72]? Can you provide a much clearer description of the differences, to clarify the novelty of this work? Can you compare to this work?":** See the general response to all the reviewers. * **"It is not clear...how to set the hyperparameter lambda":** We propose to set the hyperparameter $\lambda$ either by using posterior predictive checks or via inference results on a held-out validation dataset (which can be a subset of the observed data). As shown in Figure 5, we found the inference results to not be very sensitive to changes in $\lambda$ in various settings. We further argue that setting $\lambda$ is similar to setting other hyperparameters such as the choice of architecture, number of layers, choice of activation, learning rate etc., which is now an accepted part of fitting any deep learning-based model. * **"Why does the summarizer act on sets of iid data, rather than on each iid data example? Acting on each example individually seems to be much closer to practical usage. Do you have experiments in this setting? Would this significantly impact the computation driven by MMD? It seems this may make the method impractical.":** We do not specify explicitly whether the summarizer acts on sets of iid data or on each iid data example. In fact, the latter is a special case of the former. The summarizer can act on each data example as well; however, that means there are as many summaries as data points, thus invoking the curse of dimensionality when computing distances in ABC. Moreover, computing statistics on the whole dataset can be more informative about parameters that govern the distributional behaviour of the data. We refer to [68], which defines the summarizer in the same general way as we do. 
For the experiments shown in the paper, the summary network we used first summarizes each data example, and then aggregates them into a single statistics vector for each $\theta$, similar to [68]. * **"Summary statistics are not described in the RNPE paper, which rather attempts to model $p(\theta|\mathbf x)$ directly, so how was this choice made?":** We respectfully disagree. The RNPE paper does use summary statistics as $\mathbf{x}$; see the sentence before Section 2.2 in the RNPE paper that says "Hereafter, we perform model criticism and inference using handcrafted summary statistics...", and also Section 4.2 where they mention the statistics used for the SIR and CS tasks. * **"Please provide more details on how RNPE was used in the experiments, and how the summary statistics were determined.":** The summary statistics learned using the joint NPE framework were the ones used for RNPE. That is, we used the output of the trained summary network in NPE as the statistics for RNPE, thus making sure that both methods used the same statistics so that comparing them was fair. * **"In terms of references, there are also methods to learn robust summaries prior to inference, e.g. using pivots, such as in Louppe et al., "Learning to pivot with adversarial networks", and similar domain adaptation approaches, some specifically using MMD.":** Good point. We have made a note of it in the related work. * **"What does it mean that you set lambda using a data set with known $\theta_{\mathrm{true}}$?":** We do not assume the method knows the true theta; $\theta_{\mathrm{true}}$ is simply notation for the $\theta$ used for simulating the observed and validation data. We use the validation set, generated from the true $\theta$, to set the value of $\lambda$. --- Rebuttal Comment 1.1: Title: Response Comment: Dear Author, thank you for your detailed responses. Some of my concerns have been alleviated, and some of my confusions cleared up. 
- In general I think it would be useful to readers to ensure that the text clearly describes the differences to ref [72], as you have discussed in your responses. I think your response was helpful. - Indeed, with a summarizer on iid data points, you only gain in the reduction of dimensionality from the data to summary size. It was not clear why this would necessarily be a problem in ABC if the summary is low-dimensional. It was also not clear to me how a summarizer over datasets could handle data sets containing millions of iid samples, as is common in many scientific applications. Thus summarizing over datasets introduces a different challenge of dimensionality. Nonetheless, I recognize that this is not the main goal of this work, but rather the focus is on learning robust statistics, and not about exactly how one summarizes the data. - I suggest changing notation, as theta_true is highly misleading. I am also not quite sure I understand your response. I suggest the authors improve the text here. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing your score. We really appreciate it. * **"In general I think it would be useful to readers to ensure that the text clearly describes the differences to ref [72], as you have discussed in your responses. I think your response was helpful":** We are glad. And yes, of course we will include that text in the paper. * **"I suggest changing notation, as theta_true is highly misleading. I am also not quite sure I understand your response. I suggest the authors improve the text here.":** For simulation experiments, the observed data is generated from some ground-truth parameter values, which we refer to as $\theta_{\mathrm{true}}$. This is done in order to measure the performance of the inference methods in the parameter space (see e.g. Figures 2, 4 and 5 in ref [41] where the ground truth parameters are denoted by red lines). 
As our method requires a validation dataset to set the hyperparameter $\lambda$, we use the same ground truth parameters to generate it as well. We will edit the notation and the text in the paper to clarify this point.
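The $\lambda$-selection recipe described in these responses (hold out a random subset of the observed iid data, fit for each candidate $\lambda$, and score posterior-predictive samples against the held-out subset by MMD) can be sketched as below. The actual NPE-RS training step is replaced by a toy stand-in, and all function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mmd2_rbf(x, y, bw=1.0):
    """Biased squared-MMD estimate with an RBF kernel (illustrative)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2)).mean()
    return k(x, x) + k(y, y) - 2 * k(x, y)

def select_lambda(y_obs, lambdas, fit_and_predict, val_frac=0.2, seed=0):
    """Hold out a random subset of the observed iid data and keep the lambda
    whose posterior-predictive samples are closest (in MMD) to the held-out set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y_obs))
    n_val = int(val_frac * len(y_obs))
    val, train = y_obs[idx[:n_val]], y_obs[idx[n_val:]]
    scores = {lam: mmd2_rbf(fit_and_predict(train, lam), val) for lam in lambdas}
    return min(scores, key=scores.get), scores

# Toy stand-in for "train NPE-RS and sample its posterior predictive":
# a larger lambda shrinks the predictive mean toward zero (purely illustrative).
rng = np.random.default_rng(1)
y_obs = rng.normal(1.0, 1.0, (500, 2))
def toy_fit_and_predict(train, lam):
    return train.mean(0) / (1.0 + lam) + rng.normal(0.0, 1.0, (500, 2))

best_lam, scores = select_lambda(y_obs, [0.0, 1.0, 10.0], toy_fit_and_predict)
```

In this toy setup the data are well specified, so the smallest regularization gives the best predictive fit; under real misspecification the trade-off the reviewers discuss would move the selected $\lambda$ upward.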
Summary: The paper addresses the "data selection problem", i.e. identifying a low-dimensional statistic (e.g. mean and variance for a Gaussian distribution) of a high-dimensional dataset that the model can replicate even when misspecified. The paper proposes using the auto-encoding framework to automatically extract statistics, and a penalty on mismatched statistics, i.e. statistics that the model is unable to replicate, as a general approach to handle model misspecification. Empirical results show robust inference in misspecified scenarios whilst still being accurate in well-specified scenarios. Strengths: 1. The paper introduces the "data selection problem". Sorry, I am not a subject matter expert in the field of "data selection", so I cannot comment on the originality and significance of this work. However, I learnt about the data selection problem from the paper. Weaknesses: 1. The ideas of auto-encoding, reducing mismatch and tuning the regularizer are not new, but they are perhaps new in the application to "data selection". Please discuss further. 2. From the paper, since it is an auto-encoding framework on the model, the statistics are informative of the model. On the other hand, how to ensure the statistics are informative of the observations? Please discuss further. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Since the extracted statistics are changing/unknown beforehand, how will these statistics fit into downstream applications that were fixed beforehand? Perhaps the downstream tasks are retrained on the new robust statistics. 2. In the experiment implementation, should the model be the misspecified case while the observations be the true case? Or can both have some degree of misspecification while we try to extract the true parameters? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Authors mentioned the limitation of using the observed statistic during the training procedure and recommended working on it as future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: * **"The ideas of auto-encoding, reducing mismatch and tuning the regularizer are not new, but they are perhaps new in the application to "data selection". Please discuss further.":** Yes, our contribution is to show how to solve the important problem of simulation-based inference under model misspecification by framing it as a data selection problem. As models in the real world are often misspecified, addressing this problem is necessary for the application of the inference methods in many fields. * **"From the paper, since it is an auto-encoding framework on the model, the statistics are informative of the model. On the other hand, how to ensure the statistics are informative of the observations?":** Statistics are functions of the observations, and should be informative about the model parameters to perform inference. The first term in the loss function makes sure the statistics are informative, while the regularizer term ensures they are robust. * **"Since the extracted statistics are changing/unknown beforehand, how will these statistics fit into downstream applications that were fixed beforehand? Perhaps the downstream tasks are retrained on the new robust statistics.":** Yes, the new robust statistics can be used for the downstream tasks. It is a good point that if the downstream application is known beforehand, the statistics can be chosen according to that task. To do that, the first term in Equations 9 and 10 can be replaced by a term that minimises the downstream loss. Thank you for pointing this out; we will add a comment about this to the Discussion. * **"In the experiment implementation, should the model be the misspecified case while the observations be the true case?":** A model is misspecified or well-specified with respect to a given set of observations, which in the simulated experiments we have chosen to either match or not match the modelling assumptions. 
So yes, the observations are always the 'true case'. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have read all the other reviews and responses. My score remains. --- Reply to Comment 1.1.1: Comment: Thank you for your time. Do let us know if there are any concerns which we can address.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful consideration. The reviewers agreed on the following strengths of the paper: * **Relevance:** Reviewers EXCq and KhF0 agree that the "paper is tackling an important issue". * **Good presentation:** EXCq: "description of the method is clear". KhF0: "The paper is well-written and easy-to-follow. Figures are intuitive and well-designed". 1V4M: "structure of the paper is clear". * **Methodology:** KhF0: "method is novel and intuitive". EXCq: "method is interesting". * **Real-world experiment:** KhF0: "method is applied to real-world problem with high-dimensional data". 1V4M: "radio propagation example is interesting, and shows the potential strength of using the MMD regularizer". The reviewers highlight concern regarding the novelty of the work in light of reference [72], Schmitt et al., "Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks", which we clarify here. * **The purposes of the methods are different:** while we tackle the problem of robust inference under model misspecification, [72] addresses the problem of detecting model misspecification. * **Role of the regularizer:** we include the regularizer to learn a summarizing function that ensures that the observed statistic is not an out-of-distribution sample in the summary space. On the other hand, [72] proposes to add an MMD regularizer term between the simulated statistics and samples from a standard Gaussian, which ensures that the learned statistics are jointly Gaussian. * **How statistics are used:** we use the learned statistics to perform inference, while [72] conducts a goodness-of-fit test of Gretton et al. (2012) to detect if the model is misspecified. * **The scope is different:** while their method is only applicable for NPE, our method can be used to perform robust inference in other SBI methods as well, such as ABC, NLE, NRE. 
As [72] does not provide a method for robust inference, comparing our method with theirs is infeasible. Their method is complementary to ours, such that our method can be used once misspecification has been detected using [72]. This is the reason why in the paper we compared our method to RNPE, which addresses the same problem of robust inference as we do. We also include additional results in the attached pdf document in response to some of the questions raised by reviewers KhF0 and 1V4M. We address all the individual comments of the reviewers in separate responses to each. Pdf: /pdf/308eafddba0a88a520866694b78c5abdeb47228f.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Plug-and-Play Stability for Intracortical Brain-Computer Interfaces: A One-Year Demonstration of Seamless Brain-to-Text Communication
Accept (spotlight)
Summary: This paper introduces a method called Continual Online Recalibration with Pseudo-labels (CORP) that allows for self-recalibration of intracortical brain-computer interfaces (iBCIs) without the need for user interruption. iBCIs are used to restore communication abilities in individuals with neurological disorders like ALS but require frequent recalibration due to changes in neural recordings over time. The proposed method utilizes large language models (LMs) to automatically correct errors in iBCI outputs and uses these corrected outputs as "pseudo-labels" to update the iBCI decoder in real time. The CORP framework was evaluated over an 8-month period with a single participant in a clinical trial. The results showed a stable decoding accuracy of 93.71% in an online handwriting task, outperforming other baseline methods. This study demonstrates the longest-running iBCI stability with a human participant and presents a plug-and-play, high-performance communication iBCI that addresses a significant challenge in the clinical application of iBCIs. Strengths: 1. originality: This work presents an online recalibration method for iBCI which uses LLM to perform re-calibration for communication iBCI. 2. quality and clarity: This paper provides detailed experimental validation of the effectiveness of the proposed method. The figures and text are clear, easy to understand, and flow smoothly, ensuring a clear expression of ideas. 3. significance: The CORP method proposed in this paper achieves higher accuracy and stability in brain-to-text communication through intracortical brain-computer interfaces (iBCIs) than other recalibration methods. It has been validated on the longest-running iBCI involving human participants, proving its effectiveness and potential. Weaknesses: 1. The method presented in this paper has only been validated on a single subject, which may raise concerns about the limited sample size and lack of generalizability. 2. 
The stopping criterion mentioned in the article is based on whether the loss falls below a certain threshold to determine the termination of training. This criterion may not be sufficiently objective; the authors could consider exploring alternative quantitative metrics for evaluating the training performance of the model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: If a participant does not use the iBCI for an extended period without recalibrating the RNN model, the initial Word Error Rate (WER) and Character Error Rate (CER) can be very high. In such cases, using pseudo-labels generated with the help of LLMs may potentially mislead the training of the RNN model. Has this scenario been tested or are there any alternative approaches? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned by the authors in the paper, the main limitation of CORP lies in the pseudo-labels generated by the language model. If the error rate of the pseudo-labels is high, it directly affects the Character Error Rate (CER) of the recalibrated RNN model. Therefore, it is crucial to select more powerful language models to minimize errors in the pseudo-labels. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Weakness 1** We acknowledge the limitation of low sample size. In the future, we plan to address this limitation by collaborating with other BCI labs to test our method on a broader range of subjects. We believe that such collaborations will provide valuable insights and help us refine our approach. Additionally, we plan to publish the code associated with our method. This will allow other researchers to test our method on their data, further contributing to the validation and generalizability of our findings. > **Weakness 2** We agree that the stopping criteria based on whether the loss falls below a certain threshold may not be sufficiently objective. We will consider alternative quantitative metrics for evaluating the training performance of the model in future work. > **Question 1** In our paper, we have attempted to address this issue in the limitation section by simulating varying pseudo-label accuracy. We acknowledge that our method could fail if the initial error rate is too high, typically indicating that the neural data has changed significantly. In such cases, supervised recalibration could be used to rescue the system. However, this is an area where more research is needed. Understanding the nature of the nonstationarity in neural data is a complex challenge, and further investigation is required to develop methods that can automatically address this problem more effectively. > **Limitations** Thank you for the suggestion of using more powerful language models. Recent advancements in large language models (LLMs) offer promising avenues for improving the accuracy of pseudo-labels which we plan to explore in future work.
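As an aside for readers, the recalibration loop described in the summary and rebuttal above (decode a sentence, correct it with the language model, then use the corrected text as a pseudo-label to update the decoder online) can be sketched in a few lines. This is a hedged illustration only; `decode`, `lm_correct`, and `update` are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of a CORP-style self-recalibration loop.
# decode, lm_correct, and update are hypothetical placeholders,
# not the authors' actual implementation.

def corp_session(decode, lm_correct, update, neural_stream,
                 replay_buffer, max_buffer=500):
    """For each sentence: decode, LM-correct, store the corrected text
    as a pseudo-label, and recalibrate the decoder online."""
    outputs = []
    for neural_data in neural_stream:            # one sentence at a time
        char_probs = decode(neural_data)         # RNN -> character probabilities
        pseudo_label = lm_correct(char_probs)    # LM recovers the likely sentence
        replay_buffer.append((neural_data, pseudo_label))
        if len(replay_buffer) > max_buffer:      # keep only recent data
            replay_buffer.pop(0)
        update(replay_buffer)                    # online recalibration step
        outputs.append(pseudo_label)             # shown to the user
    return outputs
```

Because the update happens after every sentence, drift never accumulates far enough for the LM correction to fail, which is the core argument of the paper.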
Summary: The paper proposes a self-recalibrating brain-computer interface system where inputs come from implanted electrodes in the patient's motor cortex and the outputs are characters that the patient imagines hand-writing. The output is then corrected by a language model to match the most probable intended sentence. The recalibration is needed because in the real biological system there are many factors that change over time, and the input data distribution drifts because of that. While it is possible to recalibrate the system by running a dedicated session, this is not optimal as it requires downtime, participation of a technician, and ~daily mental effort from the patient. This work proposes a smart trick of using LLM-corrected word predictions as new labels for continuous retraining: the input signal distribution drift is gradual, so on the next day, while the drift has already occurred, the language model can still correctly recover what the intended words were. These corrected words can then be used as new labels to retrain the system a bit and thus cancel the drift. If performed often, the system will never drift too far and will always be able to self-correct. The experimental results from one test subject with an implanted microelectrode array confirm that the error rate of the system stays stable thanks to continuous recalibration, and it is shown that it would degrade without the proposed mechanism. Strengths: The paper is very clearly written and well-structured, to the extent that I find myself in trouble performing my role as a reviewer and asking questions, because I had to delete most of them as I progressed through the pages :) The application is, of course, amazing, and the fact that this work was tested on a human subject with implanted electrodes makes it a unique contribution to the field of ML applications to BCIs. Weaknesses: I could not come up with any. I think the paper is a very clear account of what was done and presents sufficient support for the claims.
Perhaps more test subjects would be beneficial, but even with one subject the results stand. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Fig 1: What does the arrow from "Language model" to "Current session's data" mean? Is it somehow part of LM's training to predict session data? Or does it provide labels, and the data in "Tx Features" is the same as in "Current session's data"? 126: How does the system know that the subject has finished thinking a sentence? Is there a special "stop" thought, and if so, how reliably can it be detected? Q1: What was X's subjective feedback on the performance of the system after a few months? While the numbers clearly show that the system was more accurate, what was also noticeable from the user's perspective? Q2: How are spaces handled? I would imagine there is no clear motor imagery for making a space, so how does the patient do that? 361: Perhaps this should go to Limitations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I would be curious to know about the natural limits of speed of writing with this approach. After all, having a direct brain connection would seem like an opportunity to forgo "clunky" written language and make a more direct transmission possible. Just curious what the authors' thoughts are on the feasibility of this and what could be the way to do it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Question 1** The arrow from "Language model" to "Current session's data" represents the generation of pseudo-labels by the language model. We will clarify this in the revised manuscript. > **Question 2** In this study, the participant would indicate to us when they had finished, and we would manually stop the decoder (since this particular participant can still speak, although he is paralyzed from the neck down). However, we believe the end of the sentence could automatically be detected without much difficulty, and this would be important for a locked-in user (the main target population of such a device). One potential method is to use the output of the RNN itself. The RNN we used in this study is capable of outputting a special "blank" state when the participant is not writing. If the RNN outputs this blank state for more than a certain threshold of time (t seconds), we can infer that the participant has finished writing. > **Question 3** Yes, our participant X provided positive feedback about the self-recalibration system. He reported that he could clearly perceive the performance difference between the no-recalibration blocks and the recalibration blocks, and he expressed a preference for the recalibration system. We will include X's subjective feedback in the revised manuscript. > **Question 4** A user would write '>' to indicate a space, similar to [46]. > **Question 5** We will mention the one-subject limitation in the Limitations section of the revised manuscript. > **Limitations** In our study, participant X achieved a writing speed of 69.5 ± 8.6 characters per minute. In a previous study [46], the authors reported that their participant was able to write as fast as 90 characters per minute. For a comprehensive review of different BCIs' communication speeds, refer to this blog post (https://www.paradromics.com/blog-post/enabling-connection-ii-bci-for-assistive-communication). 
As for the possibility of a more direct transmission between the brain and a computer, bypassing the need for written language, this is indeed a fascinating topic. However, achieving this goal would require significant advancements in our understanding of the brain and how it encodes information. Right now, intracortical BCIs focus on motor areas of the brain that represent the intention to move. Therefore, speeds are tied to motor production, for example how long it takes to write a letter or speak a sound. In the future, it may be possible to record from different areas of the brain that may bypass the need to formulate a motor intent, if the neural code can be understood. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications!
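The blank-state stopping rule proposed in the rebuttal above (infer the end of a sentence once the RNN has emitted its "blank" state for longer than some threshold t) can be sketched as follows. The 20 ms frame length matches the frame size the authors describe elsewhere; the threshold value and all names are illustrative assumptions, not the deployed rule.

```python
# Illustrative sketch of the blank-state stopping rule from the rebuttal.
# Frame length (20 ms) follows the authors' description of decoding frames;
# the threshold default is an assumption for illustration.

FRAME_SEC = 0.02  # one decoding frame = 20 ms

def sentence_finished(frame_states, threshold_sec=3.0, blank="blank"):
    """Infer end of sentence once the trailing run of 'blank' RNN
    outputs lasts longer than threshold_sec."""
    trailing_blanks = 0
    for state in reversed(frame_states):
        if state != blank:
            break
        trailing_blanks += 1
    return trailing_blanks * FRAME_SEC > threshold_sec
```

Only the trailing run counts: any non-blank frame resets the timer, so mid-sentence pauses shorter than the threshold do not stop the decoder.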
Summary: There is a vastly growing literature on reconstructing continuous language from brain recordings using popular deep-learning models. These papers typically use invasive recordings from surgically implanted electrodes, while decoders that use non-invasive recordings can only identify stimuli from among a small set of letters, words, or phrases. This paper contributes to that literature by introducing a new approach, CORP, a continual online recalibration method for intracortical Brain-Computer Interfaces (iBCIs) that maps neural activity to text. Specifically, the authors use a language model (RNN) to automatically correct errors in iBCI outputs by continually updating the iBCI decoder with an online learning method. The experimental results revealed that the proposed framework achieved a stable decoding accuracy of 93.71% in an online handwriting iBCI task, significantly outperforming other baseline methods. Strengths: The paper contains the following key contributions: * The novelty of this work: Different from previous continuous language reconstruction methods from non-invasive brain recordings, the authors build a framework to leverage the structure in language to enable self-recalibration of communication iBCIs without interrupting the user. * The proposed framework is agnostic to the communication task and can recalibrate any iBCI decoder that maps input neural signals to text output. Originality: * The prototype of a self-recalibrating handwriting iBCI system, with its performance assessed over a long period of time, is interesting. Weaknesses: * Although the paper's main idea is quite interesting, the experiment was conducted on one participant in a pilot clinical trial. Hence, the proposed method needs to be tested on more subjects. * The authors have not discussed recent brain decoding works: [1] Semantic reconstruction of continuous language from non-invasive brain recordings, Jerry Tang, Amanda LeBel, Shailee Jain, Alexander G. 
Huth [2] Decoding speech from non-invasive brain recordings, Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, and Jean-Rémi King * In the above works, the authors reconstruct continuous language from non-invasive brain recordings (fMRI and MEG) with better perplexity. * Did the authors compare their methods on any existing datasets? * Since the RNN language model is limited in handling long-term memory information and vanishing gradient problems, did the authors try recent pretrained Transformer language models? We may expect better accuracy since the Transformer model was pretrained on larger corpora. Moreover, we may expect that a replay buffer is not required in the case of Transformer models. Quality: The paper supports its claims with few details. Specifically, the methodology and experimental details are limited. Clarity: The paper needs clearer methodology details and motivation behind using the RNN model. The information provided in the submission needs to be more comprehensive to reproduce the results. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Did the authors interpret the states of the RNN in a continual online learning setup? Is there any trend across days? Which information is being overwritten more in the language model? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed several limitations and made future directions for the research community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time and effort in reviewing our manuscript. We would like to clarify that our work differs significantly from the recent non-invasive language decoding works. We'll make this difference clear below, and would like to ask you to consider re-evaluating our work. > **Weakness 1** We acknowledge the limitation of the low sample size and understand the need for more extensive testing across additional users. Conducting studies with intracortical brain-computer interfaces (iBCIs) presents unique challenges, particularly in recruiting participants, compared to non-invasive BCIs. The nature of iBCIs, which involve invasive procedures, inherently limits the number of participants we can involve in our studies. However, we are committed to addressing it in future work by collaborating with other iBCI research labs and testing our method on more participants. > **Weakness 2&3** We appreciate your suggestion to compare our method with recent brain decoding works. However, it's important to note that the context and objectives of our work differ significantly from those of the studies you mentioned. Our primary aim is to develop clinically viable BCI devices that can restore communication for individuals with paralysis. Two crucial requirements guide our work: the user must be able to freely express themselves by typing whatever message they desire, and the system must maintain high accuracy. In our referenced work [46], where a handwriting iBCI was first demonstrated, the authors used an RNN to decode brain signals into text with an accuracy exceeding 95% (measured as character error rate). Their method is general enough to express any English sentence. This paper extends [46] to address the issue of non-stationary neural recordings. We used n-gram language models and large language models to automatically correct the RNN decoder’s output, which is then used for online recalibration of the RNN. 
Our results indicated that with CORP, the RNN decoder’s performance can be maintained around 95% for a long period of time without burdening the user to collect new calibration data. The two papers you mentioned aim to solve a different problem. They attempt to decode text from brain signals (recorded using fMRI or MEG) while the participant is listening to a speech. In a clinical setting, it's unclear how this approach could help restore communication, which would require decoding text when a participant is speaking. Moreover, their decoding method can only achieve better than chance accuracy. For instance, in [1] Table 1, the word error rate (WER) is > 90%, meaning that 9 out of 10 words decoded are incorrect. While performance might be better when assessed with semantic metrics, our goal is to enable the user to express the exact text of their intended message. In summary, while we appreciate the suggestion, a direct comparison between our work and the two mentioned papers is not entirely appropriate due to the significant differences in context, objectives, and methodologies. We hope this clarifies our position and thank you for your insightful feedback. > **Weakness 4** In the field of intracortical BCIs (iBCIs), the only datasets available are those that study the long-term stability of cursor decoding. Unfortunately, there are currently no existing datasets specifically for handwriting iBCIs. Data collection for iBCIs is a challenging task, due to the very limited number of clinical trial participants. One of the significant contributions of our work is the 8-month long dataset we collected, which will be published alongside this paper. We believe that this dataset will be a valuable resource for the iBCI community, enabling researchers to explore more methods for addressing the issue of non-stationarity. > **Weakness 5** We appreciate your question regarding the use of Transformer models in place of RNNs. 
We would like to clarify that in this work no RNN language model was used. We used an RNN only to decode brain signals into character probabilities. A language model (3-gram LM + a transformer-based LLM) was then used to transform these probabilities into a string of words. The replay buffer was used to store recent data for the purpose of online recalibration of the RNN decoder. This is necessary regardless of the type of model used. We hope this clarifies the methodology of our work. > **Question 1** We appreciate your suggestion, but our RNN is not a language model. Please refer to the answer above. If you have more questions regarding the RNN decoder, we’d be happy to answer. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Dear authors, Thanks for the rebuttal. Considering the authors' feedback, it is clear that they have provided a comprehensive rebuttal and taken diligent care in addressing all the questions. Hence, I have decided to increase my score accordingly.
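The two-stage decoding clarified in the rebuttal above (an RNN producing character probabilities, with candidate sentences rescored by language models) is commonly implemented as a weighted log-linear combination of decoder and LM scores. The sketch below is a generic illustration of that rescoring step with hypothetical scoring callables, not the authors' beam-search code.

```python
# Generic log-linear rescoring sketch: rerank candidate sentences by a
# weighted sum of a neural-decoder score and a language-model score.
# rnn_score and lm_score are hypothetical callables returning log-probabilities.

def rerank(candidates, rnn_score, lm_score, alpha=0.5):
    """Higher alpha lets the language model override the decoder more."""
    scored = [(rnn_score(c) + alpha * lm_score(c), c) for c in candidates]
    scored.sort(reverse=True)                # best (highest) score first
    return [c for _, c in scored]
```

With alpha = 0 the decoder's raw top candidate always wins; increasing alpha lets a linguistically implausible top candidate be replaced by a close runner-up that the LM strongly prefers.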
Summary: This work focuses on Strengths: Originality: Related work in Chen et al., IEEE SMC 2022 uses a language model for pseudo label corrections during BCI self-recalibration, though the study is focused on simulations with EEG data from a longitudinal study with participants with ALS using the P300 speller. This work is a creative combination of existing ideas to develop a new method to recalibrate BCIs for communication by using language models to improve pseudolabel quality and enhancing continuous learning during recalibration via the use of a replay buffer and data augmentation. Quality: This paper presents results from a longitudinal online BCI study to demonstrate utility of the proposed approach, which is the gold standard in evaluating BCI algorithms. The inclusion of results from offline analysis also enhances the paper. Clarity: The paper is very well-written and organised. Areas needing clarity and suggestions to improve readability are noted below. Significance: This work is highly relevant to developing automated approaches to periodically recalibrate BCIs for communication for long-term BCI use with minimal user disruptions. The approach is applicable to general BCIs for communication. Results from a longitudinal online study with a BCI user from a target end user population increase the impact of the paper. Weaknesses: - Low sample size. The paper presents results from one participant with generally high performance level, so difficult to assess the utility across a broad range of user performance levels. The low sample size is understandable given the challenge with conducting studies in target BCI end-user populations; in particular, this is an iBCI study, in contrast to a non-invasive BCI study. The authors recognise the limitation of the lack of generalisability of results given the low sample size. 
The authors include results from simulations using data from the current participant to investigate the impact of a broad range of character error rates on the recalibration performance (Figure 5). - Potential order effects due to lack of randomisation of the no recalibration block (block 2) and the recalibration blocks (blocks 3 and 4). If understood correctly, the RNN decoder is updated with the data from the current calibration block and does not rely on data from the seed model block, so the testing order could be randomised daily to mitigate order effects. - There is the confound of the recalibration blocks (blocks 3 and 4) displaying the LM-decoded outputs (“the top-scored result was displayed on the screen as the final decoded sentence.”) vs. the no recalibration blocks (block 2) displaying the RNN-decoded outputs. This difference in feedback may potentially impact the BCI user experience (mental state, motivation, etc.) and further compound order effects as the user is aware given the fixed block order. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - “The second block employed a frozen seed model, trained on a combination of data from [46] and data collected prior to this evaluation (21 sessions in total).” Are the “data collected prior to this evaluation” from the current participant? Also referred to as “newly collected data” earlier in the paper. Are these 21 sessions prior to day 0? - “updating the decoder after every sentence.” How is the end of a sentence detected? Automatically? - Equation 1: \theta_k, where k denotes a day implies that the recalibration uses all the data from that day. Is Equation 1 supposed to be \theta_{x, k}, where x refers to sentence? - What are the character error rates of the LM-decoded outputs (Figure 2b)? This is to assess whether the use of LM-based correction at word level introduces errors at the character level (vs. Figure 2a with RNN-decoder outputs). 
(Can be inferred based on simulations in Figure 5). - Figure 2: It would be useful to include results from recalibration with the ground truth labels. Why are the amounts of data collected on day 0 and day 105 different? If understood correctly, day 0 does not have four blocks. This needs to be specified/clarified in the text/caption. - Inconsistency: “Over an approximately 8-month period, our participant used the iBCI system monthly and wrote on average 57.7 sentences per usage session.” vs. “X’s writing speed was 69.5 ± 8.6 characters per minute on average.” - What is “per-frame labeling”? - x_{i, t}, y_{i, t}: define subscript i. - Define all acronyms and variables in the captions and provide more context such that the captions stand alone, allowing readers to understand the content of the presented information without necessarily referencing the text. Figure and table captions should be more informative to minimise confusion/misinterpreting the CER% or WER% results across figures/tables. For example, the mismatch between the average online WER % with CORP in Table 1 vs. Figure 2 is explained in the text and not the figure caption. Same with Figure 3. Captions should state if results are from offline vs. online analysis, specific blocks used during recalibration, etc., for clarity. - Check that the contrast between line styles is preserved when figures are in grayscale. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: - More discussion is needed on the societal impact of the BCI technology. In particular, how effectively the BCI communicates a user’s intent when there is no alternative. 
There is the potential concern that the LM may be more dominant than the user’s intent, particularly in cases with low BCI prediction accuracy. - “we do not anticipate pseudolabel quality to be a major concern in practice. This is because future clinically viable iBCIs are expected to have a high decoding accuracy... Users are also likely to utilize the iBCI frequently, resulting in small nonstationarities most of the time. ... we believe that the pseudo-labels will have high accuracy, allowing CORP to sustain the iBCIs accuracy indefinitely.” Given the low sample sizes and no current data from “future clinically viable iBCIs”, these claims are questionable. There are issues related to recording quality with long term use of intracortical electrodes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Related work in Chen et al., IEEE SMC 2022 ...** Thank you for pointing out this related work. We’ll add it to the revised manuscript. > **Weakness 2** We appreciate this suggestion. We collected new data recently (two additional days - 278 and 300, shown in the attached pdf). On day 300, we ran recalibration blocks first followed by the no-recalibration block. The results are still consistent with all other sessions. We’ll keep the session order randomized in all of our upcoming sessions and include this new data in the final paper. > **Weakness 3** It appears there was a lack of clarity in our methods description. Please note that both the recalibration blocks and the no-recalibration blocks display the LM-decoded outputs. We apologize for any confusion this may have caused, and we will make sure to clarify this point in the revised manuscript. > **Question 1** Yes, all the prior data is from the same participant. The 21st session is day 0. In this session, we collected 60 sentences, trained a decoder with 50 of those sentences combined with the previous 20 sessions’ data, and evaluated the decoder’s performance on the remaining 10 sentences. We’ll make this clear in the revised manuscript. > **Question 2** In this study, the participant would indicate to us when they had finished, and we would manually stop the decoder (since this particular participant can still speak, although he is paralyzed from the neck down). However, we believe the end of the sentence could automatically be detected without much difficulty, and this would be important for a locked-in user (the main target population of such a device). One potential method is to use the output of the RNN itself. The RNN we used in this study is capable of outputting a special "blank" state when the participant is not writing. If the RNN outputs this blank state for more than a certain threshold of time (t seconds), we can infer that the participant has finished writing. 
> **Question 3** Yes, thank you for catching this error. It should be \theta_{j, k}, where j refers to a sentence sample of day k. We will update Eq 1 in the revised manuscript. > **Question 4** The average character error rate on LM-decoded outputs is 1.9% ± 0.6 (5.9% ± 1.4 for RNN-decoded outputs). The fact that the LM can significantly reduce the word error rate (6.3% ± 2.3 for LM-decoded outputs vs. 25.1% ± 5.6 for RNN-decoded outputs) means that using the LM can reduce both the word error rate and the character error rate. It’s not shown in Figure 2 due to space limitations. > **Question 5** Thank you for this suggestion. The data used to plot Figure 2 was collected during online evaluation. Unfortunately, as we didn’t run any ground truth recalibration blocks online, we cannot include this as an additional online baseline. In the future, we will consider including such blocks for comparison as we agree they would be valuable. Day-0 was intended only to establish a baseline performance. As mentioned above, we collected 60 open-loop sentences in that session, and used the first 50 together with past data to train an RNN decoder and evaluated the decoder on the last 10 sentences. The performance shown in Figure 2 is on those 10 sentences. On day-105, we had some technical issues so only one recalibration block was collected. We’ll revise the manuscript to make this clear. > **Question 6** We would like to clarify that these two statements refer to different metrics. The first statement refers to the total number of sentences written per session. The second statement refers to the rate of writing in characters per minute. These two metrics are distinct and provide different insights into the participant's usage and performance with the iBCI system. We hope this clarification is helpful, but please let us know if this does not resolve the issue. > **Question 7** Per-frame labeling means that each decoding frame (20ms time windows) needs to be assigned a ground-truth label. 
However, since our participant is tetraplegic, it’s impossible to get such labels. In [46], the authors used a hidden Markov model to force align the neural data with the text to generate per-frame labels. > **Question 8** i indexes the trials in day t. We’ll revise the manuscript to include this. > **Question 9** Thank you for these helpful suggestions. We’ll revise the manuscript to include these clarifications. > **Question 10** Thanks for this suggestion. We’ll revise the manuscript to make sure that lines are distinguishable when in grayscale. > **Limitation 1** We appreciate your feedback on the societal impact. We recognize the importance of accurately communicating a user's intent, and while handwriting iBCIs have shown high accuracy (>95% CER), a more comprehensive metric may be needed. We also acknowledge concerns about the LM dominating user intent, especially with low BCI accuracy, but note that this can be controlled via a weight parameter (Equation 3 in the Supplement). We will incorporate these points into the revised manuscript. > **Limitation 2** We acknowledge that the quality of pseudo-labels is indeed a crucial factor for the success of our proposed method. While we anticipate that future clinically viable iBCIs will have high decoding accuracy, we understand that this is a hypothesis that needs to be tested with more extensive data and over longer periods. Similarly, we recognize that there are potential issues related to the recording quality with long-term use of intracortical electrodes. This is an area that requires further investigation and we are committed to exploring this in our future work. We will revise our manuscript to more clearly acknowledge these concerns and the need for further research. We will also temper our claims to more accurately reflect the current state of knowledge and the limitations of our study. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: Reviewer has read and appreciates the author rebuttal. 
The main concerns are mostly addressed. Revising score upward. Other: Suggest including example trajectories of characters, RNN-decoded and LM-decoded outputs.
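For reference, the CER and WER figures quoted throughout this thread (e.g., 1.9% CER and 6.3% WER for LM-decoded outputs) are conventionally computed as Levenshtein edit distance normalized by reference length, at the character and word level respectively. The self-contained sketch below mirrors that conventional definition; it is not the authors' evaluation code.

```python
# Standard CER/WER computation: edit distance / reference length.
# Mirrors the conventional definition of the metrics quoted in the reviews.

def edit_distance(ref, hyp):
    """Dynamic-programming Levenshtein distance over any two sequences."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution (free if symbols match)
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def cer(ref, hyp):
    """Character error rate: character edits / reference characters."""
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    """Word error rate: word edits / reference words."""
    return edit_distance(ref.split(), hyp.split()) / max(len(ref.split()), 1)
```

Note that a single wrong character inside a word counts once for CER but makes the whole word wrong for WER, which is why WER is consistently higher than CER in the numbers above.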
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your time and effort in reviewing our manuscript. Your constructive feedback and insightful questions have been invaluable in helping us improve the quality of our work. We acknowledge the limitation of our study regarding the low sample size. Conducting studies with intracortical brain-computer interfaces (iBCIs) presents unique challenges, particularly in recruiting participants, compared to non-invasive BCIs. The nature of iBCIs, which involve invasive procedures, inherently limits the number of participants we can involve in our studies. We understand the implications of this limitation on the generalizability of our results and appreciate your understanding in this regard. In the future, we plan to address this limitation by collaborating with other iBCI labs to test our method on a broader range of subjects. Additionally, we plan to publish the code associated with our method. This will allow other researchers to test our method on their data, further contributing to the validation and generalizability of our findings. The field is at an early stage in its efforts to address nonstationarity in iBCIs. Our work represents an initial step towards developing a solution that can maintain the stability of iBCIs over extended periods. We hope that the encouraging results we have obtained so far and the publication of our dataset will stimulate interest in this problem among the broader machine learning community. We believe that the involvement of more researchers in this area will accelerate progress towards a robust solution to the nonstationarity problem in iBCIs. Once again, we thank you for your thoughtful reviews and look forward to your continued feedback as we strive to improve our work. An updated Figure 2 is attached to show the performance of CORP over a 300-day period, extending the original study from 228 days. Pdf: /pdf/59e414177b7a9e06a6fe13aa00aed65e364f6c22.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Edit: I have read the rebuttal and am satisfied with the authors' comments. I stand by my score and would like to see this paper accepted. ######################################################################## In this work, the authors tackle the issue of non-stationarity in decoding technologies, specifically for handwriting recognition. They propose several methodological steps to address it, such as the use of a replay buffer, data augmentation, and, most importantly, an n-gram language model. They use data from a longitudinal study spanning over 200 days in which they collect data from a single participant. Their model comprises an affine transform for each day followed by a 2-layer RNN that outputs character probabilities. These probabilities are then fed into a pretrained n-gram language model that generates plausible words (with beam search). The top-n words are then fed into GPT2-XL for reranking, and the best word is displayed back to the participant. The output of the LM is also fed into a calibrator that uses this data, along with some percentage of past data, to recalibrate the RNN. The authors show that this model achieves a very good character error rate and a reasonable word error rate compared to a model that doesn't recalibrate for non-stationarity and to an alignment method. Through several experiments, they also show the usefulness of different model components and hyperparameter choices, and the reliability of the pseudo-labels. Strengths: 1. Very interesting application of ML techniques to handwriting decoding. 2. Well written and easy to follow. 3. Achieves very good performance, and the authors clearly show through many experiments how their method alleviates the problem of non-stationarity. 4. Good analyses of model choice with ablations and many different parameter sets. Weaknesses: No major weaknesses apart from some questions below and a few missing details. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How long does the assumption of some stationarity hold? The authors say that prior work found only a 1.5% error rate, but is this something we can evaluate with respect to the drop in calibration error to find the most optimal window? Re hyperparameter tuning in Figure 4: It was not clear what the shaded region represented here: standard error across all decoded characters? Also, how stable is this across different days, i.e., does the non-stationarity affect the plots substantially? There were limited details provided on the affine transform for each day. How is the affine transform for each day trained, and what role does it play in the online decoding system? Clarification: Since the LM outputs word n-grams, I assume that this is segmented into characters to provide the pseudo `y`s? Minor: What `p` was used in the final model? (esp. for Table 4) Similarly, what is the `n` used for the GPT2-XL ranking step? I would also encourage the authors to allude to the effect of replay buffer size and other modeling parameters on CER in the main text. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, potential limitations were discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Question 1** Thank you for this insightful question regarding the assumption of stationarity and the optimal window for recalibration. The determination of the optimal window is a complex issue, primarily because the nature of nonstationarity in iBCI systems is unknown and can be highly unpredictable. Factors such as variability between individual participants, the recording modality used, and the specific tasks being performed can all influence the degree and pattern of nonstationarity. Given these complexities, a comprehensive analysis of this problem requires collaboration with more iBCI research labs and the collection of additional data. In our future work, we plan to pursue these collaborations to gain a deeper understanding of the underlying factors affecting nonstationarity and to develop more effective methods for managing it. > **Question 2** The shaded region in Figure 4 represents a confidence interval taken across 10 random seeds, computed via bootstrap resampling (we'll update the figure caption to indicate this). Each data point in the figure is the average character error rate over all the recalibration sentences. This figure shows that the recalibration process is stable around the operating points. Unfortunately, we did not compare these hyperparameters across different recalibration sessions to assess the effect of nonstationarity on them. However, given the stable recalibration accuracy in Figure 2 and the tight confidence interval around the operating point in Figure 4, we believe that nonstationarity has little influence on these hyperparameters. Thank you for bringing this to our attention, and we hope this explanation clarifies your query. > **Question 3** The affine transform is defined as $y = Ax + b$, where $x \in \mathbb{R}^{c \times 1}$, $A \in \mathbb{R}^{c \times c}$, $b \in \mathbb{R}^{c \times 1}$; here $x$ is the input, $y$ is the transformed input, and $c$ is the input dimension. 
Each session day has its own affine transform layer. The affine transform layers are trained together with the RNN. For a new session, a new affine transform is created and its weights are initialized with the previous session’s. During online decoding, the input neural features are transformed by the affine layer first before being processed by the RNN. More details about it can be found in [46]. We’ll include the above in the revised manuscript. > **Question 4** Yes. LM outputs words, which are then converted into character-level pseudo-labels. > **Question 5** During online evaluation, we set p = 0.6, n = 100. We’ll add these to Table 4 in the Supplementary. > **Question 6** Since we only have a few hundred sentences of data in total, we loaded all the data into the replay buffer. During recalibration, the replay buffer samples BATCH_SIZE * p of sentences from the new data, and BATCH_SIZE * (1 - p) from the past data. We'll make this point clear in the revised manuscript.
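The per-session affine input layer (Question 3) and the replay-buffer sampling rule (Question 6) can be sketched together as below. This is a minimal NumPy illustration with hypothetical names (`SessionAffine`, `sample_recalibration_batch`); the RNN, the decoding pipeline, and the actual training loop from the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

class SessionAffine:
    """One affine transform y = A x + b per recording session (c = input dim)."""
    def __init__(self, c):
        self.A = np.eye(c)       # identity init for the very first session
        self.b = np.zeros(c)

    def __call__(self, x):
        return self.A @ x + self.b

def new_session_affine(prev: SessionAffine) -> SessionAffine:
    """A new session's affine layer is initialized with the previous session's weights."""
    nxt = SessionAffine(prev.A.shape[0])
    nxt.A, nxt.b = prev.A.copy(), prev.b.copy()
    return nxt

def sample_recalibration_batch(new_data, past_data, batch_size, p):
    """Replay buffer: batch_size*p sentences from the new session, the rest from past data."""
    n_new = round(batch_size * p)
    new_idx = rng.choice(len(new_data), size=n_new, replace=True)
    past_idx = rng.choice(len(past_data), size=batch_size - n_new, replace=True)
    return [new_data[i] for i in new_idx] + [past_data[j] for j in past_idx]
```

With the online setting reported in the rebuttal (p = 0.6), a batch of 10 recalibration sentences would mix 6 sentences from the newest session with 4 drawn from past sessions.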
null
null
null
null
null
null
MomentDiff: Generative Video Moment Retrieval from Random to Real
Accept (poster)
Summary: This paper tackles the video moment retrieval task from the generative perspective and proposes a diffusion-based localization model, named MomentDiff. It samples random temporal segments as initial guesses and iteratively refines them to generate an accurate temporal boundary. Moreover, this paper proposes two “anti-bias” datasets with location distribution shifts to evaluate the influence of location biases. Experiments on three public datasets validate the effectiveness of the proposed approach. Strengths: 1. This paper addresses the cross-modal moment retrieval task using a diffusion-based model, which is interesting. 2. This paper builds two datasets with location distribution shifts, which is valuable for this research community. 3. Experiments on three datasets (Charades-STA, QVHighlights, and TACoS) demonstrate the effectiveness of the proposed approach, MomentDiff. Weaknesses: 1. Despite the widespread use of datasets like TACoS, Charades-STA, and ActivityNet Captions, this paper chose not to conduct experiments using ActivityNet Captions. 2. Previous studies [1][2] employed Charades-CD and ActivityNet-CD to examine the influence of location biases. Nevertheless, this paper made the decision not to directly employ these datasets. Why? [1] Towards Debiasing Temporal Sentence Grounding in Video [2] A Closer Look at Temporal Sentence Grounding in Videos: Datasets and Metrics 3. To provide comprehensive evaluation, comparisons with other supervised, weakly supervised, and zero-shot moment retrieval methods are crucial. Examples of such methods include [3] DORi: Discovering Object Relationships for Moment Localization of a Natural Language Query in a Video, [4] Structured Multi-Level Interaction Network for Video Moment Localization via Language Query, [5] Multi-Modal Relational Graph for Cross-Modal Video Moment Retrieval, and [6] Language-free Training for Zero-shot Video Grounding. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Could you please explain how to obtain the values of $Q_{\hat{v}}$, $K_{\hat{v}}$, and $V_{\hat{v}}$ mentioned on page 4, line 146? 2. Regarding the use of span embedding as the query in Intensity-aware attention instead of combining it with the textual query, could you please elaborate on the reasoning behind this decision? Additionally, it would be helpful to know if any experiments were conducted to validate this choice and provide justification. 3. This paper aims to incorporate audio features and integrate multi-modal video information. Could you please explain the methodology used to integrate the multi-modal video information? Furthermore, it is important to elaborate on how the paper demonstrates that the performance improvements are not solely a result of introducing audio information. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: no Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1 why not organize experiments on ActivityNet Captions?** We explore the VMR problem based on DETR. The datasets used by the representative methods (MomentDETR and UMT) are QVHighlights and Charades-STA, so we use the same datasets. Besides, the results of our method on ActivityNet Captions (C3D features) are shown below, where we record the one-epoch training time and the inference time for all test samples on one A100 GPU:

|Method|R1@0.3|R1@0.5|R1@0.7|${MAP}_{avg}$|Training time|Testing time|
|:---|---:|---:|---:|---:|---:|---:|
|2DTAN [15]|59.92|44.63|27.53|27.26|1.1h|523.74s|
|MMN [16]|65.21|48.26|28.95|28.74|1.5h|662.12s|
|MomentDETR [25]|61.87|43.19|25.74|25.63|0.05h|52.72s|
|**Ours**|62.79|46.52|28.43|28.19|0.06h|91.43s|

1. Our model improves over the baseline (MomentDETR). Compared to SOTAs, we still achieve competitive results. 2. Compared with MMN, our method has **25 times** faster training time per epoch and **7.24 times** faster testing time. We will add more results in the revised version. **Q2 why not choose Charades-CD and ActivityNet-CD to organize experiments?** Thanks for your constructive suggestion. In the VMR task, the span center $c_{0}$ and length $w_{0}$ are two important parameters, so we want to verify model generalization from the two perspectives of length and position (Charades-STA-Len and Charades-STA-Mom). The dataset construction strategy is very simple, and the evaluation perspective is comprehensive. 
As suggested, we use the same VGG features to organize OOD experiments on **Charades-CD**:

|Method|R1@0.3|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|2DTAN [15]|49.71|28.95|12.78|12.60|
|MMN [16]|55.91|34.56|15.84|15.73|
|MomentDETR [25]|57.34|41.18|19.31|18.95|
|**Ours**|67.73|47.17|22.98|22.76|

On the **ActivityNet-CD** dataset, we use the same C3D features to organize OOD experiments:

|Method|R1@0.3|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|2DTAN [15]|40.04|22.07|10.29|12.77|
|MMN [16]|44.13|24.69|12.22|15.06|
|MomentDETR [25]|39.98|21.30|10.58|12.19|
|**Ours**|45.54|26.96|13.69|16.38|

Conclusion: 1. **On Charades-CD and ActivityNet-CD, we exceed the baseline (MomentDETR) by a large margin.** Although MMN achieved SOTA results on ActivityNet in the answer to Q1, MMN (R1@0.5: 24.69) is still lower than our model (R1@0.5: 26.96) on ActivityNet-CD. 2. These results prove the robustness of the model in dealing with OOD scenarios. Our generative framework alleviates the location bias problem. We will add **the above results and codes** in the revised version, which makes our paper more convincing. **Q3 comparisons with other supervised, weakly supervised, and zero-shot methods.** Very good suggestion. We will refer to and compare these works in the revised version. For the fairness of the experiment, we use the same features for comparison. Charades-STA:

|Method|Type|R1@0.5|R1@0.7|
|:---|---:|---:|---:|
|DORi [3]|VGG|43.47|26.37|
|Ours|VGG|51.94|28.25|
|SMIN [4]|C3D|50.32|28.95|
|MMRG [5]|C3D|44.25|-|
|Ours|C3D|53.79|30.18|

ActivityNet:

|Method|Type|R1@0.5|R1@0.7|
|:---|---:|---:|---:|
|ZSVG [6]|C3D|32.59|15.42|
|Ours|C3D|46.52|28.43|

**Q4 explain how to obtain $Q_{\hat{v}}$, $K_{\hat{v}}$ and $V_{\hat{v}}$.** Sorry for the confusion. 
We get $Q_{\hat{v}}$, $K_{\hat{v}}$ and $V_{\hat{v}}$ through three different linear projections: $Q_{\hat{v}} = W_{q}\hat{V}$, $K_{\hat{v}} = W_{k}\hat{V}$, $V_{\hat{v}} = W_{v}\hat{V}$, where $W_{q}$, $W_{k}$ and $W_{v}$ are linear projection matrices and $\hat{V}$ is the embedding output by the previous cross-attention layers. We will describe this process clearly in the revised version. **Q5 the reason for using span embeddings as queries instead of combining them with the text.** There are three reasons: 1. The text has already interacted with the video in the Similarity-Aware Condition Generator, so continuing to add text information may be unnecessary. 2. Usually the query of the decoder in DETR-series work (such as MomentDETR) is a learnable embedding, not text features. Learnable queries introduce the location bias inherent in the dataset, so we set the query to be data-independent random spans. Besides, our model is based on the diffusion model. The entire training process is the noise addition and denoising process of ground-truth spans. Both the input query and the output results should be consistent spans. 3. We added text features to the query and found that the results on Charades-CD are reduced. We speculate that adding text features to the query may perturb the denoiser's perception of the noise intensity, and thus the results decrease. The results on Charades-CD are as follows:

|Method|R1@0.3|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|w/ text features|67.21|45.99|22.63|22.26|
|Ours|67.73|47.17|22.98|22.76|

**Q6 explain the method used to integrate multi-modal video information.** The key to this paper is how to alleviate the location bias problem and iteratively generate accurate temporal spans. As for adding audio information, our purpose is only to prove that our method can incorporate more modalities, but this is not the focus of the paper. 
In addition, the process of fusing audio information is very simple: we only concatenate the audio features and the input visual features (e.g., VGG features). As shown in Table 1 of our main paper, the results are as follows:

|Method|Type|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|UMT [26]|VGG+Audio|48.44|29.76|30.37|
|MomentDiff|VGG|51.94|28.25|31.66|
|MomentDiff|VGG+Audio|52.62|29.93|31.81|

From the results, we find: 1. Without using audio, the model can still achieve good results, such as R1@0.5=51.94. 2. Using the same settings (VGG+Audio), our model (R1@0.5=52.62) exceeds UMT (R1@0.5=48.44) by a large margin. The above results and more results in the paper prove the effectiveness of the model itself. --- Rebuttal Comment 1.1: Title: Response to authors Comment: The authors' response addresses some of my concerns. I will adjust my score to "borderline accept."
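The three linear projections described in the Q4 answer of this rebuttal can be sketched in a few lines. The sketch below uses a row-token convention ($\hat{V}W_q$ rather than $W_q\hat{V}$) with assumed shapes, and it illustrates standard scaled dot-product attention over the projected tokens; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 8, 16                           # L video tokens, embedding dim d (assumed values)
V_hat = rng.standard_normal((L, d))    # embedding from the previous cross-attention layers

# Three independent projection matrices W_q, W_k, W_v
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = V_hat @ W_q, V_hat @ W_k, V_hat @ W_v

# Scaled dot-product attention: softmax(QK^T / sqrt(d)) V
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                      # shape (L, d)
```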
Summary: To deal with the problem of temporal location bias, the authors propose a diffusion-based video moment retrieval framework, MomentDiff. They introduce the diffusion process into temporal localization from a generative perspective, and gradually generate real span coordinates from coarse to fine. Compared to learnable queries, the random noise input to the model reduces the dependence on the location information of the dataset. Therefore, MomentDiff achieves better results on two "anti-bias" datasets with changing location distributions. Besides, MomentDiff consistently outperforms state-of-the-art methods on three public benchmarks. Strengths: a) This work proposes a novel and effective diffusion framework for the video moment retrieval task and alleviates the important location bias problem. The paper is also well motivated and well written. b) To demonstrate the robustness of the model, they propose two anti-bias datasets, which seem to be one of the main contributions of the paper. c) The paper presents promising experimental results. The authors will provide code and datasets. Weaknesses: While I don't see obvious weaknesses, there are a few minor suggestions and additional questions that the authors need to answer carefully: a) Please revise the notation and typos of the paper. For example, \epsilon in Eq 6 is not clearly defined above, only \epsilon_m. b) Please re-check the paper and correct errors in formatting, grammar, etc. For example, L154: Eq (1) uses "Snj", "Spj", but L154 writes "Spj", "Sni". c) Some of the figures in Fig. 1 and Fig. 3 are so small that they are difficult to see even when zoomed in. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In image generation tasks, images generated by diffusion models are often diverse. 
With different random noise inputs, are the span coordinates generated by the MomentDiff model for the same video-text pair quite different, and is the model performance stable? If the result is relatively stable, what is the reason? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The paper argues that if the user enters words that violate the law, the model may have a potential negative impact. I think sensitive word filtering technology can effectively solve this problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your appreciation of our paper, including the motivation, writing, and robustness of the model. We will carefully revise the paper according to the questions and suggestions raised by the reviewers. **Q1 typos and errors in formatting, grammar, and figures.** Thank you so much for your constructive comments and review. We will carefully fix these errors, including formulas, font sizes, and spelling mistakes. **Q2 is the model performance stable?** In Figure 1 of our Supplementary Material, we show the performance of the model with multiple random seeds (seed = 2023, 2022, 2021, 2020, 2019) on the Charades-STA dataset. For readability, we present it here in tabular form:

|Type|R1@0.5|R1@0.7|MAP@0.5|MAP@0.75|$MAP_{avg}$|
|:---|---:|---:|---:|---:|---:|
|VGG, Glove|51.94 $\pm$ 1.9|28.25 $\pm$ 1.7|59.86 $\pm$ 2.4|29.11 $\pm$ 0.6|31.66 $\pm$ 0.4|
|C3D, Glove|53.79 $\pm$ 2.1|30.18 $\pm$ 0.7|59.32 $\pm$ 1.6|29.85 $\pm$ 0.4|31.89 $\pm$ 0.4|
|SF+C, C|55.57 $\pm$ 0.9|32.42 $\pm$ 1.8|61.07 $\pm$ 2.6|32.51 $\pm$ 1.5|32.85 $\pm$ 0.9|

This shows that the model always converges and achieves stable results for different initializations. We believe that the reason is related to the characteristics of the diffusion model itself and the loss constraints. Specifically, during the training process of the model, the input is the noisy span, and we constrain the model to generate the real span. The real span has a fixed manual time annotation, which is obviously different from the image generation task. So the model will not generate overly diverse results under the constraints of annotations and losses. In addition, since the input noisy spans are random, the model needs to learn the ability to generate real spans from arbitrary spans. This ability ensures, to a certain extent, that the model can perform well under different initializations. **Q3 recommendations to address potential negative impacts.** Thank you for your very professional review. 
We will investigate recent projects on sensitive word filtering and add to the supplementary material. --- Rebuttal Comment 1.1: Comment: Thanks for your answer. After reading the author's comments as well as other reviewers' concerns, most of my concerns were resolved, so I tend to keep the paper score unchanged. Also, I agree with reviewer uAtb that it is important to add the experimental results of ActivityNet, CharadesCD and ActivityNet-CD datasets, which will make the analysis of the paper more comprehensive.
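The stability argument in the Q2 answer above rests on training the denoiser with noisy spans as input. A standard DDPM-style forward process over a normalized (center, width) span can be sketched as follows; the noise schedule, the number of timesteps, and the example span values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)    # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)   # cumulative product, monotonically decreasing

def noise_span(span, t):
    """DDPM forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.standard_normal(span.shape)
    return np.sqrt(alpha_bar[t]) * span + np.sqrt(1.0 - alpha_bar[t]) * eps

gt_span = np.array([0.45, 0.20])      # hypothetical (center c0, width w0) of a GT moment
x_t = noise_span(gt_span, t=500)      # a training input; the denoiser must recover gt_span
```

At small t the input stays close to the ground-truth span, while at large t it approaches pure noise, which is what lets the trained model refine fully random spans at inference time.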
Summary: This paper first tackles video moment retrieval from a generative perspective, and proposes a novel framework called MomentDiff based on the recently proposed diffusion model technique. MomentDiff can generate correct results from random spans, which makes it resistant to temporal location biases. The experiments on three public datasets and two anti-bias datasets proposed by the authors demonstrate the effectiveness of MomentDiff. Strengths: 1. The generative perspective for video moment retrieval is novel. 2. The designed MomentDiff is effective for the location bias problem and easy to reproduce. 3. Experimental results on three public datasets and two anti-bias datasets demonstrate the effectiveness of the proposed method. Weaknesses: 1. From a generative perspective, traditional generative models like GANs can also be applied to the video moment retrieval task. Do the authors believe that these methods can be used, and if so, what is the difference between GANs and diffusion models in this task? If not, please provide a reason. 2. It is unclear how other methods have solved the problem of location biases. It would be helpful for the authors to compare and contrast the advantages of the proposed method with existing solutions. 3. The total loss function is missing. Please clarify whether both loss functions L_{sim} and L_{vmr} are weighted by 1. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors point out that too many iterations will slow down the inference speed, and the solution is to reduce the number of iterations and make a trade-off between performance and speed. It is recommended that the authors give the iteration round parameters on all datasets to show the trade-off better. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comment. We answer your questions point by point. **Q1 Using GAN in VMR.** We tend to use diffusion models instead of GANs to build generative VMR frameworks for three reasons: 1. The VMR task requires the model to locate fine-grained moments, so a single-step generative model like GAN may not work well. In our experiments, the effect of MomentDiff at step=1 is significantly lower than that of subsequent iterative generation results. 2. In addition, the diffusion model is very flexible and easy to calculate, we can choose any number of spans and steps to solve the VMR task. 3. The training process of GAN is difficult and unstable. **Q2 existing methods about location bias.** To remove the harmful location bias, DCM [1] first disentangles the moment representation and applies causal intervention on the multimodal model input to remove the confounding effects of moment location. [2] samples from the training set and constructs the uniform dataset, and reduces the gradient of biased samples to achieve an unbiased model. We propose a new diffusion VMR scheme from a generative perspective, which mitigates location bias by replacing learnable queries with random noise. We believe that generative methods will bring new insights to the field. We will carefully emphasize these methods in related work. [1] Deconfounded Video Moment Retrieval with Causal Intervention --SIGIR 2021 [2] Towards Debiasing Temporal Sentence Grounding in Video --arXiv:2111.04321 **Q3 Weight value on $L_{sim}$ and $L_{vmr}$.** We set the weights of $L_{vmr}$ and $L_{sim}$ to 1 and 4, respectively. 
Keeping the weight of $L_{vmr}$ unchanged, we organize the weight-influence experiment of $L_{sim}$ on Charades-STA (VGG features), as shown in the following table:

| $\lambda_{L_{sim}}$ | R1@0.5 | R1@0.7 | MAP@0.5 | MAP@0.75 | $\text{MAP}_{avg}$ |
|:---|---:|---:|---:|---:|---:|
|1|50.21|27.42|58.33|28.02|30.17|
|2|51.23|27.95|59.55|28.83|31.19|
|4|**51.94**|**28.25**|**59.86**|**29.11**|**31.66**|
|8|51.39|28.01|59.42|28.89|30.92|

More results will be added to the supplementary material. **Q4 the iteration round parameters on all datasets.** We show the R1@0.7 results corresponding to different iteration steps (1, 2, 10, 50, 100) on the three datasets in the table below:

| Dataset | Type | 1 | 2 | 10 | 50 | 100 |
|:---|:---:|---:|---:|---:|---:|---:|
|Charades-STA|VGG, Glove|26.12|27.93|28.21|**28.25**|28.27|
|QVHighlights|SF+C, C|37.47|39.42|39.59|**39.66**|39.58|
|TACoS|C3D, Glove|15.24|16.81|17.69|**17.83**|17.97|

More results will be added to the supplementary material. --- Rebuttal Comment 1.1: Comment: Thanks, the responses effectively address my concerns. I do not have further questions. This paper is the first to propose a generative algorithm framework in VMR, and alleviates the current important location bias problem. After reading all reviewer responses, I think the algorithm performs well in OOD scenarios and outperforms existing methods. Therefore, I recommend accepting this paper.
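The Q3 answer in this rebuttal states that $L_{vmr}$ and $L_{sim}$ are weighted 1 and 4. The implied total objective can be written as a one-line combination; the sketch below is illustrative, with placeholder loss values, and does not model the internal composition of $L_{vmr}$.

```python
def total_loss(l_vmr, l_sim, lambda_vmr=1.0, lambda_sim=4.0):
    """Total objective: L = lambda_vmr * L_vmr + lambda_sim * L_sim.

    Default weights (1 and 4) come from this rebuttal; the decomposition of
    L_vmr itself (L1/IoU/CE terms) is discussed in another rebuttal and is
    not modeled here.
    """
    return lambda_vmr * l_vmr + lambda_sim * l_sim
```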
Summary: This paper proposes a novel generative approach, MomentDiff, to address the Video Moment Retrieval (VMR) task. It replaces traditional dense or learnable proposals with random spans and a diffusion-based denoiser to refine predictions, mimicking the human process of identifying key video moments. This reduces the impact of temporal location biases and improves the system's generalizability. The authors also introduce two "anti-bias" datasets, Charades-STA-Len and Charades-STA-Mom, for evaluation. Experimental results show that MomentDiff outperforms existing methods in efficiency and transferability. Strengths: 1. The proposed method creatively combines pre-trained video and text backbones for feature extraction, a similarity-aware condition generator, and a video moment denoiser. This composite approach takes existing tools and blends them in a unique way. The inclusion of audio data as a feature, alongside visual and textual data, also represents an innovative approach to video moment retrieval. 2. The paper showcases a high-quality approach by incorporating various feature extractors, utilizing a multilayer transformer for multimodal interaction, and deploying a similarity-aware fusion embedding. The fact that the paper also discusses the limitations of the proposed method speaks to its quality and rigor. 3. The proposed methodology is outlined clearly and in a structured manner. Each part of the system, from feature extraction to the inference process, is explained with sufficient detail. However, some areas could benefit from additional explanation (e.g., the impact of the quality of fusion embeddings on the denoising process), which could further enhance clarity. 4. The paper tackles the important problem of video moment retrieval, which has broad implications in fields like media indexing, recommendation systems, and video summarization. 
The solution proposed in the paper, especially with the inclusion of audio features, can be significant in improving the efficiency and effectiveness of video moment retrieval tasks. By outlining its method clearly and discussing potential limitations, the paper contributes to further research and improvement in the field. Weaknesses: 1. The proposed method relies heavily on the effectiveness of the chosen feature extractors. Although they have tested multiple feature extraction models, the paper does not discuss the impact of these choices on the final results in detail. Additionally, the models chosen for feature extraction could potentially limit the generalizability of the approach to datasets significantly different from those on which the models were trained. 2. The paper does not provide a clear comparison with existing methods in terms of computational resources. This makes it hard to gauge the improvement the proposed method offers over current techniques. 3. The paper mentions multiple hyperparameters but does not discuss how they are selected or tuned. This could impact the replicability and robustness of the model across different datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Clarification on Visual and Textual Representations: It would be helpful if the authors could elaborate on why they chose the specific visual and textual extractors, like VGG, C3D, CLIP, Glove, etc. Are there specific reasons these were chosen over other potential extractors? 2. Elaboration on Span Generation Process: In the span generation process, it is mentioned that for the same video, the correct video segments corresponding to different text queries are very different. Could you elaborate more on this? Is there a way to address this challenge? 3. Justification for Hyperparameter Choices: Could the authors provide further clarification on the selection of the hyperparameters used in the model? 
How were these optimized, and what was the impact on model performance? 4. Scalability of the Model: Could the authors discuss how this model scales with larger, more complex datasets? Can the method efficiently handle real-world scenarios with high volumes of data, and if so, are there any limitations or performance degradation? 5. Use of Pre-Trained Models: What are the implications of using several pre-trained models? How does it affect the generalizability of the proposed method across diverse datasets, especially ones that differ significantly from the datasets these pre-trained models were trained on? 6. Computational Resources: Could the authors provide details about the computational resources required for the model to run both in the training and inference stages? This is crucial for evaluating the practicality of the proposed model. 7. No motivation is provided in Similarity-aware Condition Generator, i.e., why specific modality features are selected as Query, Key, and Values? Why not any other combination? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Authors have provided limited limitations. 2. Code is not provided in supplementary that can help with in more detailed understanding. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1 the reason and impact of extractors.** Thank you for the suggestions. We chose these extractors for two reasons: 1. We want to prove that our method generalizes to different types of extractors, so different models are used: 1) a 2D encoder, VGG; 2) a 3D encoder, C3D; 3) a cross-modal encoder, CLIP. Many methods only use a single extractor. 2. The extractors commonly used in the VMR task include VGG, C3D, and CLIP. For fairness of comparison, we use consistent pre-trained models. The impact of different feature extractors: We agree with your profound point that different pre-trained models generalize differently on downstream tasks. VGG, C3D, and CLIP are pre-trained on ImageNet, Sports-1M, and 4B image-text pairs respectively. However, since our video data is quite different from the above datasets, the impact of a pre-trained model may lie not in whether the data has been seen, but in the representation ability of the pre-trained model itself. VGG is an early image backbone, and its representation ability in VMR may be weaker than that of C3D and CLIP. Therefore, on Charades-STA, VGG is not as effective as C3D or CLIP. We will emphasize the impact of model choices in the revised version. **Q2 computing resources in training and testing.** We measure the training time (Tr) of one epoch and the inference time (In) on Charades-STA with VGG features. Charades-STA:

|Method|R1@0.5|R1@0.7|Tr|In|
|:---|---:|---:|---:|---:|
|MMN [16]|46.93|27.07|479.63s|53.42s|
|MomentDETR [25]|50.54|28.01|40.74s|12.42s|
|Ours-step 1|49.17|26.39|48.12s|7.56s|
|Ours-step 2|50.81|27.84|48.12s|8.23s|
|Ours-step 10|52.36|28.08|48.12s|11.01s|
|Ours-step 50|51.94|28.25|48.12s|20.74s|

1. Compared with MMN, MomentDiff (Step=2) achieves better results while being 10× faster in training and 7× faster in inference. This is because MMN predefines a large number of proposals, which increases computational overhead. 2. All experiments are performed on one A100 GPU.
Memory usage is correlated with the number and dimension of features. For VGG, C3D, and SF+C, we extract video features every 1/6 s, 1 s, and 1 s respectively on Charades-STA. We use VGG (4096-d), C3D (500-d), or SF+C (2816-d) features, requiring 40.1 GB, 3.86 GB, or 4.02 GB of memory. Even using C3D features, MomentDiff outperforms SOTA models. We can flexibly select extractors according to our own resources. **Q3 the impact of hyperparameters.** Thank you for the good suggestion. First, we provide ablation studies on the scale factor, span number, and batch size in Table 5 of the main paper and Table 3 of the supplementary material. Next, we show more results on Charades-STA (SF+C features), covering the loss weights $\lambda_{L1}, \lambda_{iou}, \lambda_{ce}$, the weight $\lambda_{L_{sim}}$ of $L_{sim}$, and the number of Transformer layers. 1. We show results for different weights; $[\lambda_{L1}=10, \lambda_{iou}=1, \lambda_{ce}=4, \lambda_{L_{sim}}=4]$ works best.

| $\lambda_{L1}$ | $\lambda_{iou}$ | $\lambda_{ce}$ | $\lambda_{L_{sim}}$ | R1@0.5 | R1@0.7 | $MAP_{avg}$ |
|:---|---:|---:|---:|---:|---:|---:|
|10|1|4|4|55.57|32.42|32.85|
|10|1|4|2|55.03|31.94|32.22|
|10|1|4|1|54.36|31.14|31.09|
|5|1|4|4|53.74|30.25|30.48|
|10|2|4|4|55.42|32.29|32.57|

2. We show the effect of the number of layers in the Similarity-aware Condition Generator (SCG) and the Video Moment Denoiser (VMD). By default, the SCG contains 2 cross-attention and 2 self-attention layers, and the VMD contains 2 cross-attention layers. The default setting works best:

|SCG|VMD|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|1+1|1|51.74|28.97|29.82|
|2+2|2|55.57|32.42|32.85|
|3+3|3|54.39|32.16|32.54|

**Q4 elaboration on the span generation process.** In the datasets, there are multiple segment-text pairs for the same video. Since the text content differs, each query corresponds to a different segment. We designed two solutions to this issue: 1. We use cross-attention layers in the SCG.
Video and text fully interact in the cross-attention layers, so that the model can perceive different video-text pairs. 2. We design $L_{sim}$, weighted by $\lambda_{L_{sim}}$, to optimize the fusion embedding $F$. Note that $F \in \mathbb{R}^{N_v \times d}$, where $N_v$ is the number of frames. The purpose of this loss is to attend to the frames within the ground-truth spans. In this way, for different segment-text pairs of the same video, the model can focus on the positive frames to generate accurate spans. We will add these explanations. **Q5 using complex datasets.** Thanks for the valuable question. We conduct experiments on ActivityNet Captions, which contains complex activities. There are 37417, 17505, and 17031 pairs for training, validation, and testing. Results on ActivityNet Captions (C3D features):

|Method|R1@0.3|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|MomentDETR|61.87|43.19|25.74|25.63|
|Ours|62.79|46.52|28.43|28.19|

Compared with the baseline, our method still achieves a clear improvement. To verify performance in the real world, we adopt the Out-of-Distribution (OOD) split of ActivityNet-CD [1]. The results are:

|Method|R1@0.3|R1@0.5|R1@0.7|$MAP_{avg}$|
|:---|---:|---:|---:|---:|
|MomentDETR|39.98|21.30|10.58|12.19|
|Ours|45.54|26.96|13.69|16.38|

These results demonstrate the model's robustness. We will add them in the revised version. [1] A Closer Look at Temporal Sentence Grounding in Videos: Dataset and Metric. **Q6 why specific modality features are selected as Q, K, V?** The task objective is to locate the start and end positions, so we want the multi-modally fused features to be frame-level features, i.e., $F \in \mathbb{R}^{N_v \times d}$, which is convenient for frame-level loss design based on the ground-truth spans. According to the attention formula in the Transformer, the dimension of the output embedding is consistent with the dimension of the Query.
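To make this shape argument concrete, here is a minimal numpy sketch of scaled dot-product cross-attention with toy sizes (illustrative only, not the paper's implementation): the output inherits the Query's first dimension, so using video as the Query yields one fused embedding per frame.

```python
import numpy as np

def cross_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the output rows follow the Query."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                         # (N_q, N_k)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)                 # row-wise softmax
    return w @ V                                          # (N_q, d)

N_v, N_t, d = 8, 5, 16                                    # frames, words, dim (toy sizes)
video = np.random.randn(N_v, d)                           # Query
text = np.random.randn(N_t, d)                            # Key and Value
F = cross_attention(video, text, text)
print(F.shape)                                            # frame-level: (N_v, d)
```

Swapping the roles (text as Query) would instead produce an `(N_t, d)` word-level output, which is the point made in the rebuttal.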
Therefore, we need to use the video features $V \in \mathbb{R}^{N_v \times d}$ as the Query and the text features $T \in \mathbb{R}^{N_t \times d}$ as the Key and Value. If $T$ were the Query, $F$ would be word-level, which is less suitable for locating spans in the video. --- Rebuttal 2: Comment: After reviewing the authors' responses and considering feedback from other reviewers, I have decided to maintain my initial score.
Rebuttal 1: Rebuttal: Thanks to all reviewers for their careful comments. We would like to thank all reviewers for their appreciation of our paper, including **"writing is easy to follow" (Reviewer oTs9), "the high-quality approach" (Reviewer KdBK), "a novel framework" (Reviewer 2CFW), "promising results" (Reviewer mu2A) and "valuable datasets" (Reviewer uAtb).** Our contributions come from three aspects: 1. We are the first to tackle video moment retrieval from a generative perspective, which mitigates temporal location biases in datasets. 2. We propose a new framework, MomentDiff, which utilizes diffusion models to iteratively denoise random spans into the correct results. 3. We propose two "anti-bias" datasets with location distribution shifts to evaluate the influence of location biases. Extensive experiments demonstrate that MomentDiff is more efficient and transferable than state-of-the-art methods on three public datasets and two anti-bias datasets. In addition, our replies to the reviewers cover three aspects: 1. We clarify the motivation and the reasons for the module design. 2. We demonstrate the model's effectiveness on a larger dataset and more OOD datasets. 3. We provide more ablation studies. **Please refer to the individual responses to each reviewer for specific details.** Finally, **thanks again to the reviewers for their efforts; we have benefited a lot.** Pdf: /pdf/b9001bb4e8f96ba9ba92c773102b429a14174e11.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a diffusion model for video moment retrieval to overcome the limitations of proposal-based and distribution-specific moment retrieval methods. Strengths: [+] First work bridging generative frameworks into a deterministic task such as video moment retrieval. [+] Illustrative presentation. [+] Writing is easy to follow. Weaknesses: [Motivation] [1] There are many solutions (e.g., 2DTAN, VSLNet) that do not rely on moment proposals. Fully-supervised VMR need not concern itself with proposal generation, since frame-level supervision is available, and previous works have already designed regression-based methods (e.g., Attention Based Localization Regression). [2] Since this paper proposes a new (diffusion) framework for VMR, which previous VMR frameworks actually rely on distribution-specific proposals? This paper refers to 2DTAN, but its framework of a 2D map over start and end times can represent all possible moments, so that framework does not rely on a distribution-specific bias. I think the authors may interpret the masking applied to the map as producing distribution-specific proposals, but this is better understood as heuristic filtering based on empirical experiments, not a bias problem. [3] Why should the diffusion (generative) framework be better than deterministic models? The proposed method is presented as overcoming learnable-proposal methods; however, current VMR methods do not rely on proposals (rather, weakly-supervised methods depend on proposals). Furthermore, the location bias problem stems from dataset distributions, and it is unclear how the proposed diffusion framework mitigates that bias. [Method] [1] A preliminaries section is recommended in the paper or appendix covering the forward and backward processes of the diffusion framework (e.g., denoising diffusion probabilistic models, conditional diffusion, sampling) to enhance readability.
[2] $x_0$ is a 2-dimensional vector of center point and width. Does this paper truly add Gaussian noise onto these two values and denoise them? [Experiment] [1] I cannot trust the performance numbers in the tables. Is there any qualitative experimental evidence for why denoising frameworks can guarantee better performance than previous work? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I would like to get answers to the weaknesses above in the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments! We hope our answers can address your concerns. [Motivation] Thank you for your very professional review. In fact, our understandings are largely aligned, but two points need to be clarified before addressing your doubts. **Definition of proposal.** There are indeed many methods that do not require proposal generation. However, the proposals we refer to are a broader concept, including the dense anchors in 2DTAN and the learnable queries in MomentDETR. Like [1], we regard both 2DTAN and MomentDETR as implicit proposal-based methods. [1] CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding. **Promising DETR-based VMR framework.** Recent VMR methods mainly fall into three types: dense proposal-based methods (e.g., 2DTAN, MMN), regression-based methods (e.g., ABLR, VSLNet), and DETR-based methods (e.g., MomentDETR, UMT). Dense proposal-based methods suffer from heavy proposal redundancy and slow computation (refer to our reply to Reviewer uAtb's Q1). Regression-based methods alleviate this problem, but their results are often not good enough. Recently, DETR-based methods have balanced efficiency and performance. MomentDETR is 7× faster than MMN at inference (see Table 2 in the supplementary material), and it also surpasses many regression-based methods in performance. For example, MomentDETR achieves R1@0.5 of 50.49 on Charades-STA, surpassing VSLNet (48.67) [18]. Therefore, we further explore the VMR task based on DETR, which is a very promising direction. **Q1 many proposal-free methods.** The emerging DETR-based VMR family is an important line of work, and we are solving the location bias problem that exists in the DETR series. **Q2 motivation about 2DTAN.** Sorry for the misunderstanding. We do not think that 2DTAN suffers from severe location bias problems.
We introduced 2DTAN mainly to illustrate the shortcomings of dense proposals: these methods have a large redundancy of proposals, and the numbers of positive and negative proposals are unbalanced. Besides, in Table 2 of the supplementary material, we also show the inference efficiency comparison between 2DTAN (42.18s) and our method (20.74s). This confirms that dense proposals do affect model efficiency. **Q3.1 & Q6 why can the generative framework reach SOTA results? Qualitative analysis?** Our method builds on MomentDETR. Both MomentDETR and UMT have SOTA performance; refer to Table 1 in our manuscript. The specific differences include the iterative denoising paradigm and training details. 1. Iterative denoising paradigm. In Table 5(c), w/o VMD, we find that removing the denoiser from model optimization drops the final result by 8.88% in R1@0.5. According to Table 5(f), the result of only 1-step denoising (53.31) is about 2% lower than the best result (55.62). When the model performs 1-step denoising, it is actually similar to MomentDETR. The above results prove the effectiveness of the iterative denoising paradigm. **Qualitative analysis.** In Figure 5 of our manuscript, we visualize the prediction results of MomentDETR and MomentDiff. **In Figure 1 of the rebuttal pdf file, we further present the results generated by our method in a single step.** The results of single-step generation still deviate from the correct video segments, especially in the example on the right of Figure 1. Through iterative diffusion denoising, our model can finally obtain better results than MomentDETR. 2. Encoder and loss functions. Compared with MomentDETR, we use additional cross-attention layers as the encoder, which strengthens modality information interaction. We use a point-wise cross-entropy loss, which focuses on finer-grained and more comprehensive positive and negative video frames. We will present the above results, analysis, and code in the revised version.
**Q3.2 reason about alleviating bias:** A simple idea is to solve the problem at the data level, e.g., by re-sampling the biased datasets or constructing more evenly distributed training data. But we hope to solve the problem from a deeper, more essential perspective, namely the model itself. Specifically, the learnable queries in MomentDETR may tend to focus on video segments whose locations occur more often in the dataset. Therefore, we directly replace the queries with data-independent random noise, which can alleviate the above-mentioned bias to a certain extent. In addition, for more experimental results please refer to our reply to Reviewer uAtb's Q2. We achieve impressive performance on the Out-of-Distribution (OOD) evaluation datasets (Charades-CD and ActivityNet-CD). [Method] **Q4 preliminary section about diffusion process.** Thanks for your constructive suggestion. In the supplementary material, we have given the pseudo code of diffusion training and inference in Algorithms 1 and 2 for the convenience of readers. To make things clearer, we will add the basic background, a flowchart, and more details of common diffusion models to the appendix. **Q5 add noise to the span.** Yes, we add Gaussian noise according to the following formula: $\boldsymbol{x}_m=\sqrt{\bar{\alpha}_m} \boldsymbol{x}_0+\sqrt{1-\bar{\alpha}_m} \boldsymbol{\epsilon}_m$ and code:

```python
import torch

# x_0: clean (center, width) spans; m: sampled diffusion steps.
# extract() gathers the schedule values at step m and broadcasts them to x_0's shape.
noise = torch.randn(self.query_num, 2).cuda()
sqrt_alphas_cumprod_m = extract(self.sqrt_alphas_cumprod, m, x_0.shape)
sqrt_one_minus_alphas_cumprod_m = extract(self.sqrt_one_minus_alphas_cumprod, m, x_0.shape)
# Closed-form forward diffusion: x_m = sqrt(abar_m) * x_0 + sqrt(1 - abar_m) * eps
x_m = sqrt_alphas_cumprod_m * x_0 + sqrt_one_minus_alphas_cumprod_m * noise
```

In addition, the denoising process is inspired by diffusion models' ability to turn random noise into images with specified semantics.
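For readers without a diffusion background, the forward noising described above can be reproduced in a few self-contained lines (using a generic linear beta schedule for illustration; the schedule and hyperparameters in the paper may differ):

```python
import numpy as np

M = 1000                                    # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, M)          # linear noise schedule
alphas_cumprod = np.cumprod(1.0 - betas)    # \bar{alpha}_m, decreasing from ~1 to ~0

def q_sample(x0, m, rng):
    """Noise a clean (center, width) span x0 directly to step m:
    x_m = sqrt(abar_m) * x0 + sqrt(1 - abar_m) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_cumprod[m]) * x0 + np.sqrt(1.0 - alphas_cumprod[m]) * eps

rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.2])                   # normalized (center, width)
early = q_sample(x0, 10, rng)               # still close to x0
late = q_sample(x0, M - 1, rng)             # almost pure Gaussian noise
```

At inference, the reverse process starts from pure noise and iteratively denoises it back into a span, conditioned on the video-text fusion embeddings.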
Analogously, we denoise random spans into the temporal spans corresponding to the query semantics, which is realized through the guidance of video-text fusion embeddings and appropriate loss constraints. We will publish the code and models. --- Rebuttal Comment 1.1: Comment: My questions are resolved well. It is highly recommended to release the code publicly to enhance reproducibility. I raise my score. Thank you! --- Rebuttal 2: Title: Request for your feedback in light of authors' feedback Comment: Thank you for your valuable insights and expertise, which have contributed significantly to the review process. Following the initial review, the authors have provided a detailed rebuttal addressing the feedback and comments provided by our esteemed reviewers, including yourself. I kindly request that you take the time to carefully review the authors' rebuttal and assess its impact on your initial evaluation. Please share your thoughts and any additional points you may have after reading the authors' rebuttal. Thank you very much!
Leveraging Locality and Robustness to Achieve Massively Scalable Gaussian Process Regression
Accept (poster)
Summary: The paper proposes a Gaussian process regression technique that only uses nearest neighbors for prediction. The question of choosing hyperparameters is addressed and asymptotic behavior is analyzed. The method shows substantial speed up on UCI datasets while also providing improved predictive performance. Strengths: The empirical evaluation shows very good results. Even so, the method is theoretically analyzed and discussed with interesting insights and hypotheses. The paper is well-written. Weaknesses: A comparison with other nearest neighbor based techniques might have been helpful (e.g. ones cited in the paper). Also, it might make sense to investigate more into the geostatistical work on the subject and cite something from the area. I know that nearest neighbor based predictions are quite often considered to be standard in the area. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Due to time constraints and NeurIPS not accommodating my request for a reduced reviewing workload, I did not have the opportunity to thoroughly review the proofs or examine all the details. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Thank you for the time you had available and for your comments. We take your point regarding references to other work on similar topics and intend to expand our "related work" section to reflect this, including contributions from the geospatial community. 2. We acknowledge that a comparison to other nearest-neighbour based methods might have been interesting, but we neglected to do so on the basis that the thrust of this paper was to emphasise, for example, the following points: - The decoupling of parameter estimation and prediction, and the associated computational cost advantages this can bring - The asymptotic insensitivity of GPnn prediction to model misspecification, with nearest neighbours used as a means to this end rather than being the primary focus themselves. 3. Please also see our response to reviewer MqBs, bullet point 4, concerning our choice of "state of the art" comparisons. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. My score remains unchanged. I suggest the authors compare at least against https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5927603/, or at least describe the differences between the mentioned paper and theirs in a sentence or two: the cited work is quite well-known, and thus I think such a comparison would benefit readers. --- Reply to Comment 1.1.1: Comment: Many thanks for this response. We agree that this would be a good paper to reference and briefly compare and contrast with our approach. We will implement your suggestion.
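For context, the per-test-point prediction scheme under discussion in this thread can be sketched in a few lines of numpy: each prediction conditions a standard GP posterior on only the m training points nearest to the test input (an illustration assuming an isotropic RBF kernel; this is not the authors' code).

```python
import numpy as np

def rbf(A, B, ls=0.3, amp=1.0):
    """Isotropic squared-exponential kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / ls ** 2)

def gpnn_predict(Xtr, ytr, xstar, m=25, noise=0.05):
    """GP posterior mean/variance at xstar, conditioned on its m nearest training points."""
    idx = np.argsort(((Xtr - xstar) ** 2).sum(-1))[:m]   # brute-force kNN lookup
    Xn, yn = Xtr[idx], ytr[idx]
    K = rbf(Xn, Xn) + noise * np.eye(m)                  # one small m x m solve per test point
    ks = rbf(Xn, xstar[None, :])
    mean = ks.T @ np.linalg.solve(K, yn)
    var = rbf(xstar[None, :], xstar[None, :]) - ks.T @ np.linalg.solve(K, ks) + noise
    return mean.item(), var.item()

rng = np.random.default_rng(0)
Xtr = rng.random((500, 1))
ytr = np.sin(6 * Xtr[:, 0]) + 0.05 * rng.standard_normal(500)
mu, var = gpnn_predict(Xtr, ytr, np.array([0.5]))        # truth at 0.5 is sin(3)
```

This makes the trade-off debated above concrete: training never inverts an n x n matrix, but every test point pays for a kNN query plus an m x m solve.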
Summary: The paper proposes a change in perspective on how Gaussian process regression can be leveraged, by conditioning the predictive distribution on only the neighboring data points. The paper argues against the common practice of using one single set of data points for model hyperparameter estimation and prediction, and presents a framework for running a scalable approximation of GP regression using locality information. Strengths: I believe that the overall idea of leveraging locality in Gaussian process models has a lot of potential: while I am skeptical, as the arguments in the paper seem unconvincing, I think there is merit in good empirical performance for such algorithms. However, I believe that a comparison with existing methods should be done in a more proper manner. Weaknesses: While using kNN in combination with GP regression might seem like a straightforward idea to massively speed up the inference of GP models, I believe that many key aspects of GPs are overlooked, constituting a major weakness of this paper. I list some specific examples below. - Combining kNN with GPs is overall not a novel idea; the practice is not exactly new. A cursory search reveals that many research papers outside the machine learning community are already using it as a proxy for less scalable full-scale GP regression. Some examples relevant to machine learning include: 1. Wu L, Pleiss G, Cunningham JP. Variational nearest neighbor Gaussian process. In: Proceedings of the 39th International Conference on Machine Learning, PMLR; 2022; 2. Chen H, Zheng L, Kontar RA, Raskutti G. Gaussian Process Parameter Estimation Using Mini-batch Stochastic Gradient Descent: Convergence Guarantees and Empirical Benefits. Journal of Machine Learning Research. 2022;23(227):1–59. - The paper focuses on _pointwise_ uncertainty prediction, while GP posteriors provide not only pointwise uncertainty but _pairwise_ covariance as well.
The covariance kernel is such a crucial element defining the hypothesis space of GP models that, given a configuration of pointwise uncertainty, there exist countless GPs that share identical pointwise variance. - While kNN is guaranteed to speed things up massively in lower dimensions, it suffers from the curse of dimensionality itself, as it becomes harder to find nearest neighbors. - While it is not mandatory for parameter estimation and prediction to be conditioned on the same set of data, SVGP does not incur much computational expense at prediction time. Conventional GP regression and SVGP only need one matrix inversion operation to predict the uncertainty level at an arbitrary number of test points, while GPnn needs a matrix inversion operation for every separate test point -- essentially it becomes a balancing act of whether optimizing an SVGP or applying kNN to every test point is more expensive. - Theorem 1 discusses the asymptotic behavior as $n\rightarrow\infty$. While the result of insensitivity w.r.t. kernel parameters carries some value, the _rate_ of convergence matters in the finite data regime. For example, the Ackermann function and the inverse Ackermann function both converge to infinity, but at drastically different rates. - While the common squared exponential kernel is designed to look at neighborhood information, many more expressive kernels can have longer-range connections. I think the paper tacitly admits that GPnn can only be used for interpolation tasks, but it is somewhat unfair to state that kernel hyperparameters do not matter because the model is always misspecified. - The ease of obtaining uncertainty information from neighborhood information is conditioned on a conjugate likelihood with a Gaussian observational model. As some degree of variational inference is required for non-conjugate likelihoods, GPnn struggles to apply beyond regression tasks. - Simulating synthetic data using Algorithm 1 seems flawed.
Algorithm 1 generates test points as if every two test points are uncorrelated, which naturally gives GPnn an advantage. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: None. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: I have listed the limitations in the "weakness" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. We briefly mention the use of NN in other GP literature in Sec. 8 "related work", but appreciate your feedback pointing to the need to expand that further, including how our use differs from theirs. We agree that the mere act of including kNN in some form within a paper cannot be regarded as novel. However, the novelty is in how we use it, the resulting strength of results, and the substantial computational efficiency savings made. Novel aspects include: robustness in the limit to kernel choice and hyperparameters, substantial reductions in training costs, justification of detaching the prediction process from the training process, improvements in calibration over other methods, and algorithmic simplicity. These innovations are more general than the use of NN alone. 2. Thank you for this perspective. We have focused on pointwise prediction and, in light of your feedback, will clarify that in the paper. Whilst accepting that pointwise prediction does not cover the entire spectrum of GP applications, it is an exceptionally important area in its own right (and the most heavily used in practice), for which nearly all research papers quote their performance results, e.g. [2,4,7,11,12,13,22,24,25]. The importance of making improvements in this area is also reflected in the comments of all reviewers of this paper. Selling points of GPs in the pointwise context are the provision of accuracy and well-principled uncertainty measures. Our method is shown to generally outperform the other methods in both respects at a small fraction of their training cost. 3. Prior to our analysis, we too had thought that the curse of dimensionality would impact badly on comparative performance for high-dimensional datasets. Surprisingly, we found that mse, nll and calibration still beat other methods at large d (e.g. Ctslice with d=378). This led to Conjecture 6 and related follow-on work in progress.
Computational cost does rise with d, as covered in 6.2, table 2 and figure 3, but even for the most extreme case (d=378) GPnn outperforms other methods on training time and is comparable at test time. Given that the precise implementation of the approximate kNN algorithm is beyond the scope of this paper, and should not negatively affect the locality arguments on which this work is structured, we believe that advances in this area will only improve the performance beyond its already strong baseline. 4. Whilst true that SVGP prediction is fast, GPnn prediction is comparable (see table 1 in the pdf) and training time much faster than that of SVGP, meaning for a given computational budget covering both train and test time GPnn is very competitive, especially at large n. On test point timings, table 1 gives exemplar prediction times in seconds, all obtained on a laptop (with 400 NNs and 1024 inducing points as in the paper). We will add similar results to the paper to reassure readers on this point. 5. We agree that the rate of convergence is of practical value and we have preliminary results for this but wish to delay publication of them (see 5.1). Nevertheless we believe the importance of sharing existing results with the community outweighs delaying publication for completeness. This view appears to be supported by the other reviewers. 6. GPnn _is_ designed for interpolation tasks on which we demonstrate excellent performance, even in the presence of somewhat severe misspecification, at modest expense. It is ongoing work to generalize our existing results and determine the dependence of the rate on the kernel properties. Practitioners generally fix a choice of kernel(s) and then optimize hyperparameters in relation to that choice. A key point that our paper makes is to caution against excessive effort in hyperparameter fine-tuning. 
Our empirical and theoretical results demonstrate that the predictive measures commonly used by the community are robust to non-optimal parameter specification for pre-picked kernels, which runs somewhat counter to conventional belief. 7. As covered under bullet point 2, improving GP regression is of major importance in its own right, so we did not view a failure to address non-regression applications, e.g. classification, as a weakness of the paper. In fact, we note that the use of GPs for regression tasks is more common than for classification, and thus we would argue that our findings, although focussed on the regression domain, have wide-reaching and significant impact for the community. Having said that, we take on board your comments on the potential advantages of variational methods beyond regression, whilst also remaining open-minded as to whether GPnn could potentially play a role in that domain. 8. - Re "Algorithm 1 seems flawed": Thank you for pointing out the need to justify this. We are confident the simulation is both valid and appropriate for what we are aiming to do and hope the following brief explanation helps: Firstly, we are focussing on point prediction, and lack of correlation between test points does not impact that. Secondly, the validity of the estimates is provable: the gist of the argument is to show, for a fixed size-$n$ training set $X$, that $\{(x_i^*, N(x_i^*), y_i^*, y_i)\}_{i=1}^{n^*}$ is a set of iid vector-RVs (here the $y_i$ are $m$-dimensional). Then, since $e_i$ is a deterministic function of $(x_i^*, N(x_i^*), y_i^*, y_i)$, $\{e_i\}_{i=1}^{n^*}$ is also a set of iid RVs. Finally, $e^* = E(e_i)$, so that $e^* = E[\frac{1}{n^*} \sum_{i=1}^{n^*} e_i]$ as required. The same argument holds for $l^*$ and $z^*$.
- Re "gives GPnn an advantage": We do not believe this to be the case because (a) the estimated mean mse, nll and calibration are valid for (point-estimate) GPnn and (b) we are in any case only using this simulation to experimentally demonstrate convergence toward the theoretical limits of Theorem 1, not to compare GPnn performance with the other methods. --- Rebuttal Comment 1.1: Title: Post-rebuttal comments Comment: I thank the authors for their rebuttal and the other reviewers for their insights, and maintain the same assessment as before with a reduced confidence score. I have gathered more insight into the interplay between GP models and their scalable kNN variants from the rebuttal and the subsequent discussion from other reviewers. The paper “Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets” mentioned by reviewer 3iYQ is an important precursor work and bridges this paper with the original full GP model: we do not need to see nearest-neighbor GPs as a straightforward approximation to GP models, but as a hierarchical model with a nearest-neighbor element in its own right. I believe that this paper gives us more insight into how to bridge hierarchical NN-GP with the GP model that uses all the training data points. I have, however, found the theoretical results presented in the paper to have quite limited impact on how we understand this connection. The paper considers the $n\rightarrow\infty$ limit behavior for virtually all its theoretical findings, but it is the infinite training data regime that yields many confounding results: when we are presented with infinitely many training data, the $m$ nearest neighbors of an arbitrary test point are the test point itself (or a selection of points within an arbitrarily small ball surrounding it).
In this asymptotic scenario, no kernel hyperparameter would ever matter, as the kernel matrix converges to a matrix of all $1$s, hence the result negating the purpose of kernel learning in the asymptotic sense. This also shows why evaluating the variance parameter in the observation model still matters: the nearest-neighbor observed values are essentially a collection of observations centered around the ground-truth value with variance $\sigma_\xi^2$. The paper presents a counterintuitive result in Remark 2 that isotropic kernels already suffice for best MSE in the asymptotic sense, and I argue that this line of counterintuitive reasoning could go one step further as we are in the domain of infinite data: kernels with an infinitely small length scale are the optimal choice for all regression tasks, as they offer the best flexibility in the prior space and all the rest of Theorem 1 still holds. I unpack this line of reasoning in order to show that the infinite limiting behavior might tell very little about this model in a realistic setting, to an extent bordering on no realistic meaning. I am not, however, using this logic to discount, for example, the empirical convergence result presented in Figure 2. The authors argue in the rebuttal that evaluating GPnn using Algorithm 1 is fair practice, but I am still unsure whether it is true, and I phrase my question in a more straightforward manner. I assume training and test data should be generated as an entire set at first in one go, and then partitioned, but Algorithm 1 treats training and test data differently. Could you tell me why generating test data conditioned only on its nearest neighbors does not make it easier to predict based on its nearest neighbors? I agree after some re-consideration that applying GP models along with kNN carries some utility in certain applied domains (for example, geostatistics), and this is the main reason for the reduction in my confidence score. 
I remain unconvinced about this paper’s theoretical results, as I believe that a true leap in understanding this type of models would involve some degree of non-asymptotic analysis. --- Reply to Comment 1.1.1: Comment: Thank you for your additional comments. We take on board and to some extent agree with your view on the importance of non-asymptotic results. We hope to produce such results that explicitly depend on the kernel parameters in the near future. Similarly, we understand your reservations and example regarding "infinitely small lengthscales" but argue that even without explicit convergence rate results, empirical results confirm that this approach (in less extreme circumstances) has very good performance and exhibits convergent behaviour, despite obviously existing in the finite data regime. In addition, we would point out that although rates obviously depend on the choice of kernel, as an example if we consider an isotropic kernel we can infer that the corresponding convergence will depend on the convergence of the distance between neighbouring input points, something which has been studied extensively in the classical kNN literature and shown to display gradual convergence rates (e.g. [1],[2],[3]). As such, we expect similarly gradual convergence for a large number of commonly-used kernels. Thank you for your insightful feedback and clarification regarding Algorithm 1. We now further understand your concerns, but remain confident in the validity of our conclusions pertaining to convergence behaviour. We believe that we can probe your concern by supplementing the current version with a computationally cheap deterministic function. Having applied this enhancement already to Figure 2 using the Oakley and O'Hagan function from Section 7.2, we can report that the plots maintain their core characteristics consistent with the original figure, while directly addressing the issues you raised. 
We can also run (smaller-scale) versions to generate very similar plots to Figure 2 using data sampled directly from a GP and avoiding the use of Algorithm 1, further reinforcing our confidence in our conclusions. We hope that our successful reproduction of the salient features of Figure 2 via alternative means will have already satisfied you on the matter of Alg. 1. However, we realise that the above does not fully explain its validity in response to your final question on the topic. Details are given below. If we include Algorithm 1, in addition to other evidence, we will include a proof of its validity in the appendix. Define "Algorithm 1" as in the paper. As argued in our original rebuttal, the evaluations $\{e_i^*\}$ are iid RVs and so our estimator, $\frac{1}{n^*}\sum_{i=1}^{n^*} e_i^*$, is valid for the corresponding expectation. Define "Algorithm 1b" to be the procedure whereby we generate an $n$-length $x$ training set that we subsequently hold constant. For each generated test point $x^*$ we then generate an $(n+1)$-length GP sample $y$. We take the $m$ nearest neighbours of the test point $x^*$ and evaluate the function $e(\cdot)$ to obtain ${e_i^*}'$. We repeat this $n^*$ times and compute a Monte-Carlo estimate of the expectation, $\frac{1}{n^*}\sum_{i=1}^{n^*}{e_i^*}'$. This method is clearly valid, albeit computationally very expensive. Finally, we show that the expectations in both cases are equivalent, beginning with the case of Algorithm 1b and showing that it is equivalent to that of Algorithm 1. We use $y^*$ to refer to the test observation, $y'$ to refer to the nearest neighbours and $y''$ to the remaining disjoint observations. 
$y=(y^*,y',y'')$ refers to the full $(n+1)$-length vector and hence $p(y)$ to the full joint distribution: $$ \frac{1}{n^*}\sum_{i=1}^{n^*} {e_i^*}' \approx E_{(y^*,y',y'')}[e(y^*,y',y'')] = E_{(y^*,y',y'')}[e(y^*,y')] = E_{(y^*,y')}[e(y^*,y')] \approx \frac{1}{n^*}\sum_{i=1}^{n^*} {e_i^*} $$ since $e(\cdot)$ is only a function of $(y^*,y')$. [1] László Györfi et al., A Distribution-Free Theory of Nonparametric Regression, 2010; [2] Kohler, Krzyzak, and Walk, ‘Rates of Convergence for Partitioning and Nearest Neighbor Regression Estimates with Unbounded Data’; [3] Kulkarni and Posner, ‘Rates of Convergence of Nearest Neighbor Estimation Under Arbitrary Sampling’.
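The iid argument above reduces to the law of large numbers applied to the per-test-point evaluations. The following toy Python sketch (with a stand-in error function of our own, not the paper's $e(\cdot)$) illustrates why the Monte-Carlo average is a valid estimate of the expected error:

```python
import random
import statistics

random.seed(0)

def one_evaluation():
    """One iid draw standing in for Algorithm 1's per-test-point evaluation.

    The latent test value y* and its noisy 'nearest-neighbour' observations
    are drawn afresh each call, so successive evaluations are iid and their
    sample mean is an unbiased Monte-Carlo estimate of the expected error.
    """
    y_star = random.gauss(0.0, 1.0)                      # latent test value
    neighbours = [y_star + random.gauss(0.0, 0.3) for _ in range(5)]
    prediction = statistics.mean(neighbours)             # simple NN predictor
    return (prediction - y_star) ** 2                    # e(y*, y')

n_star = 20_000
estimate = statistics.mean(one_evaluation() for _ in range(n_star))
# Theory: E[(mean of 5 noise terms with sd 0.3)^2] = 0.3**2 / 5 = 0.018
```

Replacing the stand-in with the GPnn predictive quantities gives exactly the estimator $\frac{1}{n^*}\sum_i e_i^*$ defended above.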
Summary: This paper starts by proposing the use of a simple nearest neighbors scheme for prediction, where instead of conditioning on the full training set, one uses the $m$ nearest neighbors of each test point within the training set. They then, in Theorem 1, analyze the asymptotic expected MSE, calibration and negative log-likelihood of this prediction approach under a given estimator of the parameters. They find that there is robustness to poor estimation except in noise variance. They show empirically that their simulations match theory via Algorithm 1. They then propose a scalable GP regression algorithm: noting that when using their nearest neighbor prediction approach, the MSE, calibration and negative log-likelihood are *somewhat* robust to wrong parameters (except noise variance), they use multiple approximation steps, including using a subset of the training data and using a structured covariance matrix. In order to improve noise variance estimation, they add a calibration step. They show that it outperforms several baselines on real world datasets. Strengths: This is a very interesting paper: the resulting technique is very simple but pops out in a very non-obvious way. Further, it leads to a powerful consequence: you don't need the greatest parameter estimates for training if you do nearest neighbors prediction, as long as you add a calibration step to improve the estimate of noise variance. Despite my slight reservations about the baselines and the strange paper structure, I strongly recommend acceptance. Weaknesses: This paper is written in a very non-standard way and completely ignores the standard ML paper format. It omits the standard introduction, replacing it with what is often the second or third section, which is some technical background. It then has a second section, which is *similar* to what often comes at the end of an intro, but is somehow different. The related work section is very short. In general everything is somewhat terse. 
I don't completely hate it, but it's jarring to read this, and I'm not sure what the justification for doing it this way is, particularly since the authors seem aware of literature and the techniques used in the literature so presumably are aware of how standard ML papers are written to flow a certain way. The main weakness is that I'm not sure how close to state of the art the baselines are. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Why did you use this particular, somewhat strange, paper structure? Why is your calibration definition what it is? I'm a little confused: you said you replace $X$, but eqn. 5 still has it. Is this a typo? Is SVGP really state of the art? What about SKI [1] and its extensions? [1] Wilson, Andrew, and Hannes Nickisch. "Kernel interpolation for scalable structured Gaussian processes (KISS-GP)." International conference on machine learning. PMLR, 2015. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
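The prediction scheme the summary describes, a GP conditioned only on the $m$ nearest neighbours of each test point, can be sketched roughly as follows (a simplified illustration of our own, using an exact kNN search and an RBF kernel; all names and parameter values are assumptions, not the paper's):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale, signal_var):
    """Isotropic RBF kernel matrix between row-stacked inputs A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * sq / lengthscale ** 2)

def gpnn_predict(X, y, x_star, m, lengthscale=1.0, signal_var=1.0,
                 noise_var=0.01):
    """GP predictive mean and variance at x_star, conditioning only on the
    m nearest neighbours of x_star within the training set (X, y)."""
    nn = np.argsort(np.linalg.norm(X - x_star, axis=1))[:m]  # exact kNN
    Xn, yn = X[nn], y[nn]
    K = rbf_kernel(Xn, Xn, lengthscale, signal_var) + noise_var * np.eye(m)
    k = rbf_kernel(Xn, x_star[None, :], lengthscale, signal_var)[:, 0]
    mean = k @ np.linalg.solve(K, yn)
    var = signal_var + noise_var - k @ np.linalg.solve(K, k)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(2000)
mean, var = gpnn_predict(X, y, np.array([0.5, -0.5]), m=25, lengthscale=0.7)
# mean should land near sin(0.5) * cos(-0.5), with a small predictive variance
```

Only an m-by-m linear system is ever solved, which is what makes the per-test-point cost independent of n once the neighbours have been found.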
Rebuttal 1: Rebuttal: 1. Thank you for your comments on the structure. It was not a conscious decision to write substantially outside the stylistic norms of the community and on reflection perhaps we should have tried to conform more strongly. We do not wish to drastically change the structure of the paper at this stage in case that would lead to a requirement for other reviewers (who did not object) to reassess. We hope that you do not think this unreasonable and will definitely take your comments on board for future work. We are planning to elaborate and extend the related work section. 2. We define calibration this way for a few reasons: - For a well-specified GP, the $E_Y[MSE]$ matches the predictive variance, i.e. the uncertainty in mean prediction (or the magnitude of the residuals) is reflected in the variance predicted by the model. - It provides a convenient numerical baseline for what "well-calibrated" means, when the value is 1. - It allows us to carry out our simple recalibration procedure (Algorithm 2), to improve both our measure of calibration and NLL, whilst leaving MSE unchanged. - Although a "weak" measure of calibration, obtaining a value near 1 is a necessary condition for effective calibration, and marked departures from 1 were detected for some of the methods in figure 4 and table 3. Additionally, we have subsequently found that [1] uses the same definition and we will reference that in the paper. [1] Jankowiak, Pleiss, and Gardner, ‘Parametric Gaussian Process Regressors’. 3. This is not a typo, but we appreciate your pointing this notational ambiguity out so that we can rectify this. In a sense, under this model specifying $X$ and $N(x^*)$ are equivalent, since $N(x^*)$ is directly derived from the training set $X$. 4. 
Given the potential for impact that the findings in this paper imply, we were keen to ensure that our benchmarks included widely used methods, so that an interested user might easily determine the potential for these findings to inform their work. We are not aware of a method which has replaced SVGP in this position, in terms of generality of uptake and usage. We were aware of the paper on SKI that you cited, but were under the impression it had limited applicability to datasets with more than a handful of dimensions. We would like to thank you for exposing us to the various extensions, however, and would aim to include them as additional baselines to compare against in future publications. We do not believe that the omission of this comparison harms the conclusions which we draw in this paper, however. --- Rebuttal Comment 1.1: Comment: Thank you for your response. You are right that SKI suffers from the curse of dimensionality. The extensions (e.g. [1]) have not yet taken the place of SVGP, although you should probably mention them and (if you have time) add a comparison of one of them in the final version. My score remains unchanged. [1] Yadav, Mohit, Daniel R. Sheldon, and Cameron Musco. "Kernel Interpolation with Sparse Grids." Advances in Neural Information Processing Systems 35 (2022): 22883-22894. --- Reply to Comment 1.1.1: Comment: Thank you for your further comments. We will follow up on the suggestions you have made and will endeavour to make comparisons in this paper, and certainly add SKI-based methods to future evaluations.
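The calibration measure discussed in point 2 of the rebuttal above, the average of squared residuals divided by predicted variances with a value of 1 indicating good calibration, and the spirit of the accompanying recalibration step can be sketched as follows (our simplified reading, not the paper's exact Algorithm 2):

```python
import numpy as np

def calibration(residuals, pred_vars):
    """Average of squared residuals over predicted variances; a
    well-calibrated predictive distribution gives a value near 1."""
    return float(np.mean(residuals ** 2 / pred_vars))

def recalibrate(pred_vars, residuals):
    """Rescale predictive variances by the measured calibration ratio.
    Mean predictions are untouched, so MSE is unchanged, while the
    calibration of the rescaled variances becomes exactly 1."""
    return pred_vars * calibration(residuals, pred_vars)

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, np.sqrt(0.5), size=10_000)  # true error variance 0.5
over_confident = np.full(10_000, 0.1)                   # model claims variance 0.1
c_before = calibration(residuals, over_confident)       # well above 1
c_after = calibration(residuals, recalibrate(over_confident, residuals))
```

In practice the ratio would be measured on held-out data; here we reuse the same residuals purely to show the mechanics.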
Summary: This work at one level is about proposing a scalable GP approach called GPnn where the prediction step uses only the nearest neighbours of a test point in order to form the predictive distribution. This approach is well-motivated as the authors argue that there isn't a strong mathematical reason to couple the training step / estimation of hyperparameters and the prediction / generalisation over unseen inputs. At the end of the day, we care about the latter (prediction performance) and not really the former. At another level, there is a theoretical component where they show that predictive distributions obtained through GPnn are robust to a wide array of model misspecification, for instance, wrong kernel choice or Gaussian noise in the large data limit. Strengths: The paper is well-written and clear. There is always a demand for new GP approximations which scale to millions of data points as SVGP has a known issue with over-estimating the aleatoric uncertainty. The algorithms clearly describe the simulation and evaluation step. There is a good summary of other distributed methodologies and clear delineation of how this work is different. Some components from this paper can be applied to other methods in order to refine estimates and calibration (Remark 5). Weaknesses: Figure 2 is hard to read - there is a lot going on and ideally should have been plotted with N on the x-axis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What about a sensitivity analysis to the choice of M? or are GPnn predictions insensitive to this in the large data limit. - Can M be learnt instead of set? - Perhaps some areas of the input space require more M for good predictive performance than areas where the function isn't changing much. - It would be interesting to see the performance on non-stationary functions. 
- There isn't sufficient insight on the selection of nearest neighbours exact or approximate - at first blush one would dismiss the idea because of the added compute entailed in finding neighbours per test data point. What is the complexity of this as a function of N and d precisely? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is some discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 0. We agree that the figure is not the most straightforward to interpret, but since the intention was to convey the dependence of the performance metrics on the estimated parameters, for varying dataset sizes, reproducing the figures with n along the x-axis would lose much of this information (please see the pdf attachment for our best attempt). For example, the third plot in the paper shows the tendency, with increasing n, for "MSE versus $l$" to flatten toward the limiting horizontal line (i.e. for MSE to become insensitive to $l$). For the final version of the paper we will modify our caption and text to clarify, and may include an additional figure with n along the x-axis, although we suspect the information this conveys will be largely covered by Fig. 5 (which we will now reference in 5.2). 1. Theorem 1 in the paper indicates the degree of sensitivity of prediction performance to m in the large n limit, e.g. for MSE, the noise variance scaled by $1/m$. For finite n it might be interesting to do such an analysis. We chose the value of m in our experiments without much experimentation beyond a brief comparison on otherwise unused synthetic data. We found minimal impact on performance. For even larger datasets it may be possible to reduce m and decrease training and test time at little cost to performance, but we did not investigate this further since we wished to emphasise the minimal amount of tuning our method requires. 2. Yes, for example by cross-validation, as is commonly done in the literature (on kNN). This would of course add additional computational cost, but could be done efficiently by starting with a maximum number, and iteratively evaluating performance on decreasing subsets. 3. This is an interesting but non-trivial suggestion which we would like to explore in future refinements to the method. 4. 
Our current theoretical analysis relies on the stationarity of both the generative and predictive model, although we suspect this could be relaxed. We appreciate the potential interest in non-stationary functions, but do note that results using the Oakley and O’Hagan function (non-stationary) are included in 7.2 and provide initial encouragement. 5. The literature on wide-ranging approximate kNN methods is extensive and it is difficult to find a general complexity for the procedure. We used the SciKit-Learn implementation, whose costs are described in the associated documentation, e.g. O(d log n) query compute-cost for the Ball-tree algorithm, which the default automated algorithm selection in SciKit-Learn should at least match. In contrast, exact kNN costs O(dn), which is why we have chosen approximate kNN in preference. In practice the predictive timings are fast enough for most applications, taking on a laptop, for example, approximately 0.06s per prediction on the high-dimensional (d=378 and therefore relatively extreme) Ctslice dataset. We will add both example timings and the N, d complexities to the paper, which should also reassure readers that the cost of kNN search is not detrimental to the test-times of GPnn in practice (see also our response to reviewer N8D3 bullet point 4 and table 1 from the pdf). --- Rebuttal Comment 1.1: Title: Post rebuttal Comment: Thank you for responding to the questions and I would encourage you to action point 0 and add details on 5 in the final manuscript. Otherwise, I am happy to persist my score. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback, we will follow your recommendations.
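The kNN query costs discussed in point 5 can be made concrete with a brute-force baseline. The sketch below is a generic O(dn)-per-query implementation of our own, not the paper's code; tree-based indexes such as the Ball-tree mentioned above reduce the query cost to roughly O(d log n) at the price of a build step:

```python
import numpy as np

def knn_brute(X, x_star, m):
    """Exact m-nearest-neighbour query by brute force: O(dn) per test
    point, since every training point is touched once."""
    d2 = ((X - x_star) ** 2).sum(axis=1)     # O(dn) distance pass
    idx = np.argpartition(d2, m)[:m]         # O(n) selection, no full sort
    return idx[np.argsort(d2[idx])]          # order only the m winners

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 10))
idx = knn_brute(X, np.zeros(10), m=400)
```

The `argpartition` trick avoids the O(n log n) full sort, which matters when, as in GPnn, the query is repeated for every test point.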
Rebuttal 1: Rebuttal: We thank all of the reviewers for their useful and constructive feedback and for the time and effort that they have voluntarily set aside for this task. Pdf: /pdf/e046894649812480de103710afd0a3eeb7ebe613.pdf
NeurIPS_2023_submissions_huggingface
2023
Cognitive Model Discovery via Disentangled RNNs
Accept (poster)
Summary: This paper presents an RNN with bottlenecks (both a sampling and a transition bottleneck) and applies the RNN to behavioural data from simple RL models, as well as rodent data. It is shown that the model learns interpretable latents, and ‘rediscovers’ q-learning as well as actor-critic learning and is able to account for the rodent data as well as the current best cognitive models. Strengths: This direction is interesting, and relevant to the animal behaviour and neuroscience literature. The technical details are sound. Weaknesses: 1) The model is bespoke – a separate MLP for each element of z, as well as the bottleneck. It would be helpful to have an empirical understanding of how important these elements are to learn a disentangled RNN (e.g. via ablation). 2) It would also be helpful to understand the benefit of disentanglement, i.e. if you just trained a standard RNN then looked at the first few principal components do they correspond to the disentangled latents? 3) The benefit of this approach is that it can offer an understanding of how animals learn. But there is only one analysis of a single rodent experiment, which does not unveil much beyond existing cognitive models. 4) No analysis of model latent dynamics. Or learning dynamics – does the model go through several stages of understanding? Does this map onto animal behaviour? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I’m surprised that the models take up to 40 hours to train on a TPU?! The models are pretty small… See weaknesses for other questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses for limitations. 
I'm not sure if this counts as a limitation or not, but I found it hard to place this work with other concurrent-ish preprints that are conceptually similar. E.g. Li et al., 2023 (cited in this work) that came out 1-2 months prior which trains a RNN on cognitive tasks (but without disentanglement) and is interpretable, but is somewhat more complete in terms of model analysis and relationships to animal data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses** > The model is bespoke – a separate MLP for each element of z, as well as the bottleneck. It would be helpful to have an empirical understanding of how important these elements are to learn a disentangled RNN (e.g. via ablation). We will add an ablation analysis as new supplemental information, removing the disentanglement loss for either the latent bottlenecks only or the update rule bottlenecks only. > It would also be helpful to understand the benefit of disentanglement, i.e. if you just trained a standard RNN then looked at the first few principle components do they correspond to the disentangled latents? We informally explored this in previous work, and found that examining the dynamics of the first few principal components of activity in an LSTM typically does not reveal disentangled dynamics. One reason for this is that the LSTM typically will retain dynamical modes that are present prior to training, just from its random initialization, because it is under no pressure to unlearn them. Frustration with this approach is in fact what led us to the disRNN ideas of 1) explicitly penalizing the network for retaining information in its activations that it was not actually using for anything and 2) allowing the update rule for those activations to be free-form, parameterized by a feedforward sub-network, rather than constrained by particular update equations. We will add new analysis demonstrating this in our synthetic datasets as new supplemental information. > The benefit of this approach is that it can offer an understanding of how animals learn. But there is only one analysis of a single rodent experiment, which does not unveil much beyond existing cognitive models. We have added analysis of a new experiment: decision-making via accumulation of evidence (rebuttal pdf second and third row). > No analysis of model latent dynamics. Or learning dynamics – does the model go through several stages of understanding? 
Does this map onto animal behaviour? The question of the model's learning dynamics is an interesting one to explore in future work. We have focused on using disRNN as a tool for discovering the asymptotic dynamics that govern behavior, but it is true that it could itself be considered as a hypothesis about how animals meta-learn these kinds of tasks. This would involve analyzing in detail the trajectories by which disRNN discovers the asymptotic dynamics, and comparing them to animal's learning trajectories. With respect to analysis of latent dynamics of the trained model, this is a central focus of our work: DisRNN facilitates understanding of these dynamics by allowing us to visualize the update rules that govern them. **Questions** > I’m surprised that the models take up to 40 hours to train on a TPU?! The models are pretty small… This surprised us as well! Typically the networks achieve good predictive performance very quickly (minutes to tens of minutes), but then require a much larger number of training steps to identify disentangled solutions. We expect that suitable schedules of learning rate and bottleneck penalty might substantially reduce training time, and plan to explore this in future work. --- Rebuttal Comment 1.1: Title: Many thanks for the response Comment: Many thanks for your responses. I appreciate the additional behavioural task. I have raised my score.
Summary: The authors propose a method for discovering cognitive models automatically by fitting them to behavioral data. They use a recurrent neural network and pass its output through a bottleneck (implemented using variational autoencoders) which is expected to extract relevant cognitive variables. They evaluate the model on synthetic data and on behavioral data from rats performing a reward learning task. The dataset consisted of sequences of binary choices made (left vs. right) and outcomes experienced (reward vs. no reward). The authors evaluated agents based on Leaky Actor-Critic and Q-learning. They used supervised learning and trained the networks to "imitate this dataset". The network had a total of five bottlenecks, while the task had two latent variables. After training, the authors found that only two bottlenecks were "open" and corresponded to the latent variables. This demonstrated the ability of the network to discover the latent variables. Strengths: This is quite an original effort to automatically learn cognitive models from the data. The use of bottlenecks was reasonable and well-justified. The results are consistent with the hypothesis as the model was able to recover the two latent variables and use only the necessary bottlenecks. Weaknesses: The model is fitted only to a single task, which makes it difficult to evaluate whether it would scale and how useful it would be for the broader community. It is not clear how robust the approach is - while in the particular setups, VAEs converged to the expected values, it is not known if that would happen for different hyperparameters (for example, what if the number of bottlenecks was different or if the learning rate or some other hyperparameter was chosen differently). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you identify other tasks where this approach could be used? How should one go about choosing the number of bottlenecks? 
Should it always be larger than what is expected that the model will need and if yes, how larger? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses** > The model is fitted only to a single task, which makes it difficult to evaluate whether it would scale and how useful it would be for the broader community. We have now added similar results from an additional task: sensory decision-making via accumulation of noisy evidence. In this task, as we showed in the RL task, disRNN successfully recovers the structure of a handcrafted cognitive model, and reveals a plausible human-interpretable model when fit to a large rat dataset. This task is representative of a very heavily-studied class of decision-making tasks that is widely used in behavioral neuroscience because it isolates an important cognitive process thought to be a building block of cognition. The two-armed bandit task is likewise representative of a very heavily-studied class of “reward learning” tasks thought to isolate a different building block. We expect the method to be immediately useful for other tasks from these domains and others of similar complexity, which together make up a large fraction of behavioral neuroscience research. We agree that exploring how it will scale to tasks of much greater complexity, especially in the face of limited dataset sizes, will be an important question for future research. > It is not clear how robust the approach is - while in the particular setups, VAEs converged to the expected values, it is not known if that would happen for different hyperparameters (for example, what if the number of bottlenecks was different or if learning rate or some other hyperparameter was chosen differently). We have added simulations exploring different values of the weighting parameter $\beta$ (rebuttal pdf, bottom row), which impacts the complexity of the models that are discovered. **Questions** > Can you identify other tasks where this approach could be used? See response above > How should one go about choosing the number of bottlenecks? 
Should it always be larger than what is expected that the model will need and if yes, how larger? For a network to learn disentangled representations, the number of latents available in the network structure does need to be at least as large as the number of true factors of variability in the generative process: if fewer are available, the network will necessarily either fail to learn some factors, or will entangle information about multiple factors into a single latent. In our simulations we have typically chosen a number of latent variables that was about double the number that we expected the fit model to need. While we have not explored this formally, informal experiments allowing larger numbers of latent variables resulted in networks that ultimately converged to similar solutions, though at the expense of longer training time and larger dataset size requirements. --- Rebuttal Comment 1.1: Comment: I thank the Authors for detailed response. I appreciate the addition of decision-making via accumulation of noisy evidence task. I updated my score accordingly.
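The bottleneck mechanics discussed in this exchange, latents passed through a noisy channel whose KL penalty is weighted by $\beta$, with "open" bottlenecks transmitting information and "closed" ones transmitting none, can be sketched as follows. The parameterisation and names below are our illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def bottleneck(z, mult, noise_sd, rng):
    """Noisy-channel bottleneck on a latent vector: scale, then add noise.
    With mult near 0 and noise_sd near 1 the channel is 'closed' (output
    is pure noise); with mult near 1 and noise_sd near 0 it is 'open'
    and passes the latent through almost unchanged."""
    return mult * z + noise_sd * rng.standard_normal(z.shape)

def kl_penalty(mult, noise_sd):
    """KL divergence of the channel output from a N(0, 1) prior, averaged
    over z ~ N(0, 1); the training loss weights this term by beta."""
    return 0.5 * (mult ** 2 + noise_sd ** 2 - 1.0 - 2.0 * np.log(noise_sd))

rng = np.random.default_rng(0)
z = rng.standard_normal(4)
open_out = bottleneck(z, mult=1.0, noise_sd=1e-3, rng=rng)   # ~ passes z through
closed_out = bottleneck(z, mult=0.0, noise_sd=1.0, rng=rng)  # pure noise
# A closed bottleneck costs nothing; an open one pays a large KL penalty,
# so a latent survives training only if it improves prediction enough.
```

This is why, as the rebuttal notes, extra latents beyond those the fitted model needs simply close: keeping them open would incur KL cost without predictive benefit.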
Summary: The authors introduce a novel recurrent architecture that potentially learns more interpretable strategies than an LSTM while achieving roughly similar performance in fitting. They investigate the performance of their model on three separate datasets (two synthetic and one rat behavioral dataset), and offer qualitative evidence that the strategies learned by the model are easy to read out by first finding the "open bottlenecks," then looking at their activities. This is an exciting direction and I found the work very interesting and appropriate for NeurIPS. **Update** Increasing my score due to the updates provided by the authors. This is a very intriguing paper and a refreshing approach that could be especially useful for Neuro/Cog scientists. Strengths: - I like the motivation. The algorithmic approach is also very interesting as an extension of Beta-VAEs to RNNs. - This is really cool how the open vs. closed-ness of bottlenecks is indicative of whether or not units are being used for solving a task! Sorry for the naive question, but would this provide more insight than if you were to simply regularize units for sparsity? Can you clarify what this would tell us above and beyond that? Weaknesses: - I think this is a potentially fantastic direction for cog and neuroscience and interpretability. I just wish there was more validation of the proposed architecture, and stronger evidence that the learned strategies are representative of those used by the rats in that dataset + that the model achieves similar performance as LSTMs as task complexity scales. - The figures need work. Graphics are too small, there's awkward layouts, and color schemes need more contrast. I will list some detailed comments below. - I find the Architecture figure very difficult to understand. I have faced the daunting task of making RNN figs in the past so believe me I understand the challenge here. 
But I don't find the relationship between left and right panels intuitive, understand how this could be used for computing, or what the goal is of this architecture. Just a thought, but maybe something more high-level? Or even consider removing this fig as you do a good job of explaining the model in the text. - Fig 2b is hard to see. Could you stack the two parts of A on top of each other, then expand B? - Figure 3 is difficult to understand. The text describing this is super cool and makes sense. But I don't get much from this figure. It feels more like SI to me. - Figure 4, everything is too small. Am I correct that 4C/G (and 3C/G for that matter) are cartoons of the data generating process? There's no clear correspondence between them and the data. I would either remove them or figure out a way to make the correspondence clearer with the data. - Figure 5, the light orange and light blue are hard to read. Also this could be bigger. - Figure 6, this one describes the most interesting dataset in the paper, but is hardest of all to read! I also don't understand what's going on by looking at it. I see the bottlenecks are changing as you reduce beta. But what does that mean? I know this is a tough ask but you could maybe focus on just one of these models and make it very intuitive to show how it is revealing an insight into the animals' strategies for solving the task. - I am glad the authors brought up Ji et al in their related work. But I am also confused why they didn't compare to that approach. I assume this work is in progress? If these are networks with a "very small number of hidden units" as the authors wrote then it should be straightforward to do. - The biggest limitation I see here is that the interpretability work is all postdictive. The model is used to explain existing generated datasets which is super cool, then fit to an animal dataset, which is even cooler. But the interpretability of this fit is purely qualitative.
This would be a slam dunk if it could be validated experimentally in animal or human neural recordings (predicting neural activity after fitting to behavior) or behavioral data (e.g., identifying and testing biases, potentially). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you reduce the number of hidden units in the LSTM to roughly match the number of open bottlenecks in the DisRNNs, and get similar performance + interpretability? For interpretability, I guess you could look at the cell state of the circuit? It may not work of course because of the complexity of LSTMs. - Is there a code release? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See weaknesses. This is tantalizingly close to a great paper. More evidence is needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses** > The figures need work. Graphics are too small, there are awkward layouts, and color schemes need more contrast. I will list some detailed comments below. Thank you for these detailed and helpful suggestions! > I find the Architecture figure very difficult to understand. I have faced the daunting task of making RNN figs in the past so believe me I understand the challenge here. But I don't find the relationship between left and right panels intuitive, understand how this could be used for computing, or what the goal is of this architecture. Just a thought, but maybe something more high-level? Or even consider removing this fig as you do a good job of explaining the model in the text. We have substantially revised this figure, and believe that it is now much more clear (rebuttal pdf, top row). We welcome additional thoughts and suggestions. > Fig 2b is hard to see. Could you stack the two parts of A on top of each other, then expand B? We have increased the size of this figure. > Figure 3 is difficult to understand. The text describing this is super cool and makes sense. But I don't get much from this figure. It feels more like SI to me. Figure 4, everything is too small. Am I correct that 4C/G (and 3C/G for that matter) are cartoons of the data generating process? There's no clear correspondence between them and the data. I would either remove them or figure out a way to make the correspondence clearer with the data. We have increased the size of these figures. We have removed these cartoons. We agree that they don’t add much that is not already present in the “bottlenecks” plot. > Figure 5, the light orange and light blue are hard to read. Also this could be bigger. We have increased the contrast and the size of this figure. > Figure 6, this one describes the most interesting dataset in the paper, but is hardest of all to read! I also don't understand what's going on by looking at it.
I see the bottlenecks are changing as you reduce beta. But what does that mean? I know this is a tough ask but you could maybe focus on just one of these models and make it very intuitive to show how it is revealing an insight into the animals' strategies for solving the task. We have revised this figure and our description of it in the text, in an attempt to be more clear. > I am glad the authors brought up Ji et al in their related work. But I am also confused why they didn't compare to that approach. I assume this work is in progress? If these are networks with a "very small number of hidden units" as the authors wrote then it should be straightforward to do. We agree with the reviewer that this will be an important direction for future work. In informal exploration we have found that LSTMs with a very small number of hidden units typically underperform those with larger numbers of units. It will be important to compare directly using matched models (GRUs and their "S-GRU"), and datasets. > The biggest limitation I see here is that the interpretability work is all postdictive. The model is used to explain existing generated datasets which is super cool, then fit to an animal dataset, which is even cooler. But the interpretability of this fit is purely qualitative. This would be a slam dunk if it could be validated experimentally in animal or human neural recordings (predicting neural activity after fitting to behavior) or behavioral data (e.g., identifying and testing biases, potentially). We agree with the reviewer that a key future direction for this line of work will be to analyze in detail the fits of the disRNN to the laboratory datasets, and to relate them in detail to existing cognitive models as well as to neuroscientific data. **Questions** >Can you reduce the number of hidden units in the LSTM to roughly match the number of open bottlenecks in the DisRNNs, and get similar performance + interpretability? 
For interpretability, I guess you could look at the cell state of the circuit? It may not work of course because of the complexity of LSTMs. We informally explored this in previous work, and found that shrinking the LSTM down to just a few units typically results in a loss of predictive performance (the "best LSTM hyperparameters" used in figure 5 have 7-9 hidden units). We also informally explored fitting larger LSTMs and analyzing the dynamics of the first few principal components – this typically reveals retention of some dynamics that are not induced by training but instead left over from network initialization. Frustration with these approaches is in fact what led us to the disRNN ideas of 1) explicitly penalizing the network for retaining information in its activations that it was not actually using for anything and 2) allowing the update rule for those activations to be free-form, parameterized by a feedforward sub-network, rather than constrained by particular update equations as in an LSTM or GRU. We will add new supplemental information investigating this in our datasets. > Is there a code release? We promise to open-source our code and synthetic datasets as soon as possible, and definitely well in advance of the conference. The laboratory datasets we used are already publicly available from the authors of the papers in which they were originally reported. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your hard work in responding to my and the other reviewers' questions and critiques. I am disappointed however that you weren't able to compare to existing baselines as I suggested beyond offering the "informal" summary you provided here. I'm also disappointed that code is not available for us to look at to better understand your work. I believe that these are very basic model comparisons and requests for code (even just model architectures) that should not be difficult to complete in a day or two. Please advise if I am misguided in this.
Could you fix these issues over this discussion period? --- Reply to Comment 1.1.1: Comment: Thanks for pushing us on this! We agree that the manuscript would be stronger if it also explored whether conventional neural network architectures might also be useful for cognitive model discovery. We have added new analysis using GRUs, rather than LSTMs, because related work has reported good performance on similar datasets (Ji et al., Dezfouli et al., etc), and because having just one type of recurrent unit makes them easier to analyze. We will add this analysis to our manuscript as supplemental information. 1. For each of the five datasets (synthetic Q-Learning, synthetic actor-critic, synthetic bounded accumulator, rat two-armed bandit, rat Poisson clicks), we have fit GRUs of different sizes and report cross-validated quality-of-fit. We find for the synthetic datasets that “tiny” GRUs with only one or two recurrent units do provide a quality of fit that is broadly similar to that of larger networks. For the rat datasets, we find that the best quality of fit comes from larger networks. These results are reported in the table below. In the submitted manuscript, we showed that disRNN can provide a similar quality of fit on the rat two-armed bandit datasets to that of larger RNNs (the “best LSTM” hyperparameters called for either eight or nine units). 2. For the three synthetic datasets, we have fit “tiny” two-unit GRUs, and examined plots of example sessions and update rules. We see that these typically do not have a 1:1 relationship with the true generative latent variables. The exception is the Q-Learning agent, for which two-unit GRUs do discover a disentangled solution. Solutions found for the other agents are fully entangled, with each unit’s update dependent on the value of the other unit and on both input variables.
We interpret this to mean that very small conventional networks can learn dynamics that recapture those of certain generative processes, but that they do not do so reliably. (It does not look like I have the option to update my “rebuttal pdf” at this time to show you these figures -- please let me know if I’m wrong about this or if there is another way to share them!). 3. For each of the three synthetic datasets, we fit larger ten-unit GRUs and summarize their dynamics using the first two principal components. While some human-interpretable structure is visible, there is still not a 1:1 relationship between PCs and the latent variables of the generative process. The dynamics are entangled, with each PC’s update depending on all inputs and on the value of the other PC. We interpret this to mean that the dynamics of the first few PCs of conventional networks can reveal human-interpretable structure, but do not reliably recapture the dynamics of the generative process. Taken together, we interpret these results to indicate that conventional neural networks like GRUs definitely can be a viable route to cognitive model discovery in some circumstances, but also that they have important limitations. One limitation is that, while task training ensures that the dynamics they contain are sufficient to solve the task, nothing ensures that all aspects of these dynamics are necessary (they are free to retain epiphenomenal dynamics). Another limitation is that, while the number of latent variables can be constrained by limiting network size or by only considering the top few PCs, nothing ensures that the update rules for these variables are sparse, and nothing encourages them to be “axis aligned”, mapping 1:1 onto the true generative dynamics. **Difference in Cross-Validated Normalized Likelihood vs Reference (Percentage Points)** Synthetic datasets: Average of three random seeds. 
Rat Two-armed Bandit dataset: Average of three random seeds for each of twenty rats.
Rat Poisson Clicks dataset: Average of three random seeds for each of nineteen rats.

| | GRU1 | GRU2 | GRU3 | GRU4 | GRU5 | GRU6 | GRU7 | GRU8 | GRU9 | GRU10 | GRU11 | GRU12 | GRU13 | GRU14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Q-Learning | -0.72 | ref | 0.002 | 0.002 | 0.003 | 0.002 | 0.002 | 0.002 | 0.002 | 0.002 | 0.002 | 0.002 | 0.001 | 0.001 |
| Actor-Critic | -1.38 | ref | 0.01 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |
| Bounded Accumulation | ref | -0.008 | 0.001 | 0.003 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
| Rat Two-Armed Bandit | -2.16 | ref | 0.34 | 0.26 | 0.39 | 0.38 | 0.35 | 0.46 | 0.44 | 0.38 | 0.35 | 0.51 | 0.53 | 0.50 |
| Rat PClicks | -0.36 | ref | 0.08 | 0.07 | 0.08 | 0.09 | 0.08 | 0.10 | 0.11 | 0.05 | 0.10 | 0.10 | 0.10 | 0.13 |
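For concreteness, a cross-validated normalized-likelihood metric of the kind reported in this table can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the geometric-mean formulation are assumptions on my part.

```python
import numpy as np

def normalized_likelihood(pred_p_right, choices):
    # Geometric mean of the per-trial probability the model assigned to
    # the choice the subject actually made. For a two-alternative task,
    # 0.5 is chance. Differences "in percentage points" versus a
    # reference model would then be 100 * (model - reference).
    pred_p_right = np.asarray(pred_p_right, dtype=float)
    choices = np.asarray(choices)
    p_chosen = np.where(choices == 1, pred_p_right, 1.0 - pred_p_right)
    return float(np.exp(np.mean(np.log(p_chosen))))
```

For example, a model that assigns probability 0.8 to every choice the subject actually made scores 0.8, regardless of trial count.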
Summary: The authors propose a methodology to learn (or system-identify) parsimonious cognitive models directly from data. Specifically, they introduce the idea of disentangled RNNs (DisRNNs). DisRNNs are gated recurrent neural networks with additional bottleneck constraints. These (learnable) bottlenecks bound the information transfer capacity by parametrically controlling the signal-to-noise ratio per latent unit. Moreover, they propose that each latent unit in the DisRNN is updated independently per its own learning rule. The authors demonstrate the effectiveness of DisRNNs on a dynamic two-arm bandit task using synthetic and animal behavioral data. Strengths: The authors raise important points about data-driven models usually being inscrutable. System identification is a long-standing problem of interest over the years that several studies have aimed to address. Combining system identification with data-driven approaches is certainly a promising direction. This information-theoretic approach to controlling signal quality through noise induction is interesting. Finally, though speculative, the authors provide a sense of how testable hypotheses for neuroscience can be obtained from their model. Weaknesses: My main concern about this manuscript is its limited scope (in the formulation and experiments). The authors motivate their approach by stating that "discovering an appropriate model structure ..." (L21 Pg. 1). However, the model structure of the DisRNN has several carefully chosen inductive biases that closely mimic the model tested in this paper (including the linear update terms and logistic readout). The fact that the DisRNN is able to learn exact parameter specifications is not necessarily surprising. The interpretability aspect of the learned DisRNN also relies heavily on the a priori known ground truth model. Can the authors test this model on other cognitive process models, even within the scope of decision-making?
The way the DisRNNs are set up, it is unclear to me how they'll scale -- both as a function of the complexity of the underlying model governing the data and in terms of parameter and sample complexity. Particularly since each latent dimension has its own associated MLP parameters, it will be very beneficial for the manuscript to have extended numerical evaluations on this front. Writing style: The general clarity of the article can be improved in a few places. Here are some of my suggestions. I'd encourage the authors to pay more attention to language of this kind throughout the manuscript. Fig. 1 must be improved. The arrow marks (and corresponding colors) do not have a clear legend. The orange text indicates that these lines are modulated by a bottleneck, but what does the intersection of orange and blue lines mean? Similarly, the "Update rule X" can be depicted as an MLP while clearly showing the dimensionality of $z$. It is unclear if the indices the authors are using denote time; this seems to be the case for the observations but not for $z$. Clarity: "In order to learn an interpretable cognitive model, we encourage sparsity." Can the authors expand and clarify why sparsity implies interpretability? Clarity: "Synthetic datasets from two reinforcement learning agents performing this task.." It's perhaps better to say that the data was generated using ground truth update equations since the "agents" themselves here were not trained but rather specified. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weaknesses section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to the weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
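The bottleneck mechanism summarized in the review above (noisy channels whose signal-to-noise ratio is learned, with an information cost on open channels, and one update sub-network per latent) can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' implementation: the multiplier-plus-noise parameterization, the KL-style cost, and all names (`bottleneck`, `channel_cost`, `disrnn_step`) are my assumptions about the general scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck(x, m, sigma):
    # Noisy channel: scale the signal by a learned multiplier m and add
    # Gaussian noise of standard deviation sigma. A "closed" bottleneck
    # (m -> 0, sigma -> 1) transmits no information about x.
    return m * x + sigma * rng.standard_normal(np.shape(x))

def channel_cost(m, sigma):
    # KL(N(m, sigma^2) || N(0, 1)): the information cost of keeping a
    # channel open. An auxiliary loss beta * sum(channel_cost) would
    # drive unused channels closed, yielding sparsity.
    return 0.5 * (m**2 + sigma**2 - 1.0) - np.log(sigma)

def disrnn_step(z, obs, update_fns, m, sigma):
    # Each latent has its own small update network; every network sees
    # bottlenecked copies of all latents plus the current observations.
    inputs = np.concatenate([bottleneck(z, m, sigma), np.asarray(obs)])
    return np.array([f(inputs) for f in update_fns])
```

Counting "open" channels (those with non-trivial m and small sigma) after training is then what reveals how many latents the fit model actually uses.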
Rebuttal 1: Rebuttal: **Weaknesses** > My main concern about this manuscript is its limited scope (in the formulation and experiments). The authors motivate their approach by stating that "discovering an appropriate model structure ..." (L21 Pg. 1). However, the model structure of the DisRNN has several carefully chosen inductive biases that closely mimic the model tested in this paper (including the linear update terms and logistic readout). The fact that the DisRNN is able to learn exact parameter specifications is not necessarily surprising. The interpretability aspect of the learned DisRNN also relies heavily on the a priori known ground truth model. Can the authors test this model on other cognitive process models, even within the scope of decision-making? We thank the reviewer for pointing out this concern. We agree that the multiplicative update rules used in the Q-Learning agent and by the Actor component of the Actor-Critic are very similar to the multiplicative update we built into disRNN, and that their logistic decision rule is similar in form to the softmax cross-entropy loss function. We have now added a new set of experiments training the same disRNN architecture on synthetic datasets generated by a bounded accumulation agent. This agent shares neither of these features: the update rule for its decision variable is additive, with a nonlinearity at the sticky bound; its choice rule is binary with a lapse rate. We find that disRNN is able to recover the structure of this agent. We hope that this goes some way to reassure the reviewer that the structure of disRNN is suitable for fitting generic dynamical systems. It is of course also possible to construct disRNN using additive update rules $z_i^{t+1} = z_i^t + \text{MLP}_i(\mathbf{z}^t, \mathbf{o}^t)$, which more obviously express generic dynamical systems. In informal exploration we found that these required more time to train but ultimately converged on similar solutions.
Exploring more rigorously whether there are advantages to one variant or the other may be a useful question for future work. > The way the DisRNNs are set up, it is unclear to me how they'll scale -- both as a function of the complexity of the underlying model governing data and in terms of parameter and sample complexity. Particularly since each latent dimension has its own associated MLP parameters, it will be very beneficial for the manuscript to have extended numerical evaluations on this front. We agree with the reviewer that discovering cognitive models that are more complex, for example using behavior from more complex tasks, will likely require larger networks and therefore larger datasets, and that it seems likely that this will prove impractical for some applications, at least without introducing additional regularization. We note that in cognitive neuroscience, a large fraction of the literature focuses on tasks similar in complexity to those we examine here. These tasks are chosen because they are thought to isolate, in an experimentally tractable way, key building blocks of cognition. We believe that disRNN and methods like it will be useful for more complex tasks. But even if they are not, accelerating discovery in domains like trial-by-trial reward learning and like decision-making is in itself an impactful contribution. We have expanded the discussion of these issues in our manuscript. > Fig. 1 must be improved. The arrow marks (and corresponding colors) do not have a clear legend. The orange text indicates that these lines are modulated by a bottleneck but what does the intersection of orange and blue lines mean? Similarly, the "Update rule X" can be depicted as an MLP while clearly showing the dimensionality of z. It is unclear if the indices authors are using denote time (which seems to be the case for observations) but not for z. We have improved this figure following these suggestions, and those of the other reviewers (rebuttal pdf, top row). 
> Clarity: "In order to learn an interpretable cognitive model, we encourage sparsity." Can the authors expand and clarify why sparsity implies interpretability? We thank the reviewer for calling our attention to this important point. We have expanded our discussion to address it in detail. The relevant passage now reads: *Limiting the number of latent variables provides three distinct benefits. The first is that such a model is more likely to be useful for scientific tasks, such as searching for correlates in measurements of neural activity, that involve interacting with finite datasets. The second is that interpreting a fit disRNN requires a human expert to inspect the update rules. The smaller the number of latents and the fewer the inputs to the update rule for each, the less cognitive burden will be placed on that human expert, and the more likely they will be able to arrive at a satisfying human intuition about the cognitive mechanism embodied by the model. The third is that the goal of discovery is to identify models that human experts will consider to be cognitively plausible. When evaluating classic handcrafted models, many experts agree that, all else being equal, simpler models (smaller number of equations, fewer terms in each equation) are more plausible.* > Clarity: "Synthetic datasets from two reinforcement learning agents performing this task.." It's perhaps better to say that data was generated using ground truth update equations since the "agents" themselves here were not trained but rather specified. We thank the reviewer for pointing out this issue of vocabulary. In cognitive science, the term “agent” is frequently used to denote software modules that take “actions”, interacting in closed-loop with an “environment”, regardless of whether these are hand-crafted or themselves the result of machine learning. We do feel this usage is appropriate here, but have added clarifying language to our manuscript in several places to assist readers from other backgrounds.
--- Rebuttal Comment 1.1: Title: Appreciate the extensive response Comment: I thank the authors for their efforts in responding to mine as well as the other reviewers' comments in detail. > Bounded accumulator Thanks for these experiments. Is there a reference for this process model? It would be good to see the exact ground truth update equations here. In general, I do agree that it's seemingly different from the Q-learning and the A-C agents. The fact that the disRNN is able to fit data interpretably from this model is a good sign. > About scaling Thanks for the comment. If I may clarify, my point about scaling was not only about extending this to large-scale problems. More so that the focus on "interpretability" here heavily relies on knowing the models under consideration a priori. This issue was raised in my initial review as well. I agree with the authors that most of the cognitive neuroscience literature has focused on models of this size. However, it is unclear to me, and I would appreciate it if the authors can clarify, how the methodology proposed here can **accelerate discovery** on the interpretable modeling front for unknown systems. > The terminology "agent" If that's a term of art, I can understand the usage of the word "agent". In which case, as a reader, I would find it much clearer if the "reinforcement learning" prefix was dropped since this implies some form of learning/training. Overall, I do appreciate the authors' efforts during the discussion period and though I still reserve some concerns, I am happy to update my evaluation.
Rebuttal 1: Rebuttal: We thank the five reviewers for their insightful and helpful feedback on our manuscript. Three major concerns stood out to us as raised in similar ways by multiple reviewers. We summarize our responses to these three concerns here. We have also responded separately to each reviewer’s individual concerns. 1) Reviewers felt that the manuscript did not sufficiently make the case that disRNN is likely to be useful across a variety of different cognitive neuroscience tasks. They raised the possibility that it might contain structural biases that tailor it specifically to the domain of dynamic reward-learning tasks. We have addressed this concern by adding new experiments testing disRNN in a very different domain: decision-making via accumulation of noisy evidence. This domain, like that of reward-learning, is heavily studied in behavioral neuroscience and has been the subject of intensive cognitive modeling efforts. We consider specifically the “Poisson clicks” task (Brunton, Botvinick, and Brody, 2013). In each trial of this task, rats are presented with a series of auditory clicks delivered from a pair of speakers, one to the left and one to the right of the rat, and they are rewarded for reporting which speaker delivered a larger number of clicks. We first considered a synthetic behavioral dataset (click times and choices) generated by a “bounded accumulator” agent performing this task (rebuttal pdf, second row). This agent keeps track of the relative number of clicks on each side, and commits to a decision when the absolute difference crosses a bound, ignoring any clicks that occur after this bound has been crossed. We fit a copy of DisRNN to this synthetic behavioral dataset, and find that it is able to recover the structure of the bounded accumulator agent. We then considered an open-source laboratory dataset from rats performing this task (Brunton, Botvinick, and Brody, 2013). 
We fit several copies of DisRNN to behavioral data from individual rats, using a variety of different complexity penalties. We find that these often recover human-interpretable models which capture known features of rat behavior on the task. In the examples shown (rebuttal pdf, third row), the green latent is tracking the relative number of clicks on each side, while the blue latent is tracking a short-term sensory adaptation effect (clicks that shortly follow another click have a reduced impact on decision-making) that is known to play an important role in this task. 2) Reviewers wondered how robust the fits to synthetic data were with respect to various hyperparameters, especially the information penalty $\beta$. We have added simulations sweeping this hyperparameter in each of the three synthetic datasets (Q-Learning, Leaky Actor-Critic, and Bounded Accumulation) and report both cross-validated quality-of-fit and the number of open information bottlenecks (rebuttal pdf, bottom row). We find in each case that the true model structure (dotted lines) is reliably recapitulated by the networks which are the simplest (smallest number of open bottlenecks) that also achieve good predictive performance (comparable to that of the best models). 3) While reviewers felt that the clarity of the writing was high, they felt that the clarity of the figures could be improved, especially Figure 1, which is critical for readers to understand as it explains the structure of the model. We have substantially improved this figure (rebuttal pdf, top row) and made a number of edits to the other figures as well, relying heavily on the detailed and generous advice of reviewer SAvA. We believe this has greatly improved the clarity of the figures and of the manuscript as a whole. Pdf: /pdf/140967d90a4fac00dcc00f730d64876f7dba7ebd.pdf
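The bounded-accumulator agent described in this rebuttal (accumulate a click difference, commit to a decision once the difference crosses a sticky bound, ignore all later clicks) can be sketched as follows. This is an illustrative simplification: the actual generative model in Brunton et al. includes accumulation noise, adaptation, and a lapse rate, all omitted here, and the function name and bound value are assumptions.

```python
def bounded_accumulator_choice(left_click_times, right_click_times, bound=3):
    # Accumulate +1 per right click and -1 per left click in time order;
    # once |accumulator| reaches the bound, the decision is committed.
    events = sorted([(t, -1) for t in left_click_times] +
                    [(t, +1) for t in right_click_times])
    a = 0
    for _, step in events:
        a += step
        if abs(a) >= bound:
            break  # sticky bound: any later clicks have no effect
    # Tie-breaking at a == 0 is arbitrary in this sketch.
    return "right" if a > 0 else "left"
```

Note how clicks arriving after the bound is hit cannot flip the choice; it is exactly this kind of structure (additive update, nonlinearity at the bound) that differs from the multiplicative Q-learning and actor-critic updates.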
Dataset source: NeurIPS 2023 submissions (Hugging Face). Conference year: 2023.
Summary: This study introduces "Disentangled RNNs" (DisRNNs), a type of interpretable RNN designed with sparse latent variables and simple update rules. The model's interpretability is achieved by utilizing noisy channels both to maintain the state of each latent variable and to read out these states into the update rules. An auxiliary loss function penalizes channels with non-zero Signal-to-Noise Ratio, balancing model fit with sparseness and simplicity. The authors trained the DisRNNs directly on behavioral data and then examined the resulting latent representation and learned update rules. First, the authors trained DisRNNs on synthetic behavioral data sampled from either a Q-learning agent or an actor-critic agent, successfully replicating the generating latent variables and their corresponding update rules. Subsequently, the authors trained DisRNNs on real behavioral data from mice performing a dynamic two-armed bandit task. The DisRNNs achieved a competitive cross-validated fit to the behavioral data. Upon analysis, a DisRNN fitted to a specific mouse exhibited a strong resemblance to the best-known human-derived model, with the model's complexity varying based on the weight assigned to the auxiliary loss. Strengths: 1. This manuscript presents a compelling approach to interpretable machine learning. The use of noisy channels is well justified from a computational neuroscience perspective, and the validation of the model on synthetic data from artificial agents is logically sound. Overall, this work may provide a promising foundation for further research in neuroAI using interpretable models. 2. The manuscript is exceptionally well-written. The foundational principles of the reinforcement learning agents are elucidated with the clarity of an outstanding textbook, and the algorithm's description is concise yet clear and accessible. Weaknesses: 1. 
Upon reading the paper, I found it somewhat disappointing that the interpretation of the DisRNNs trained on real behavioral data was quite cursory, focusing on a "typical-best" example without a systematic analysis of the learned representations across the trained DisRNNs. The study would have been more comprehensive and impactful had it demonstrated how applying this method to behavioral data could facilitate neuroscientific discovery. 2. The manuscript lacks explicit details regarding the selection of the parameter $\beta$ in section 4 (i.e., experiments with synthetic data). Consequently, the robustness of the procedure to the choice of $\beta$ remains unclear; it is uncertain whether there was a need for considerable adjustment of $\beta$ to achieve a good fit or to yield a sensible latent representation. This omission, coupled with the absence of open-source code and data, somewhat undermines my confidence in the applicability of this approach "out of the box". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What should guide the selection of $\beta$ for an unknown system? Is it appropriate to determine $\beta$ based on cross-validated fit? In essence, how can we interpret the continuum of models obtainable along the bias-complexity tradeoff? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weaknesses** > Upon reading the paper, I found it somewhat disappointing that the interpretation of the DisRNNs trained on real behavioral data was quite cursory, focusing on a "typical-best" example without a systematic analysis of the learned representations across the trained DisRNNs. The study would have been more comprehensive and impactful had it demonstrated how applying this method to behavioral data could facilitate neuroscientific discovery. We agree with the reviewer that a key future direction for this line of work will be to analyze in detail the fits of the disRNN to the laboratory datasets, and to relate them to existing cognitive models as well as to neuroscientific data. In the current manuscript, we hope to establish that applying this method to behavioral data results in models that are suitable for standard neuroscientific workflows that currently rely on handcrafted models. For example, the timecourses of their latent variables might be used as regressors for analyzing neural activity, or they might be run to generate synthetic datasets that make predictions about animal behavior in new situations. We will revise the manuscript to be clearer about this. > The manuscript lacks explicit details regarding the selection of the parameter Beta in section 4 (i.e., experiments with synthetic data). Consequently, the robustness of the procedure to the choice of Beta remains unclear; it is uncertain whether there was a need for considerable adjustment of Beta to achieve a good fit or to yield a sensible latent representation. This omission, coupled with the absence of open-source code and data, somewhat undermines my confidence in the applicability of this approach "out of the box". We have added an analysis sweeping a range of $\beta$s for each of our synthetic data fits (rebuttal pdf, bottom row).
We find that good predictive fit can be found over a wide range of values of $\beta$, and that sensible latent representations can be found at the largest values of $\beta$ which also produce good fit. We will include this analysis in our revised manuscript, and also revise the text to be clearer about this. We promise to open-source our code and synthetic datasets as soon as possible, and definitely well in advance of the conference. The laboratory datasets we used are already publicly available. **Questions** > What should guide the selection of Beta for an unknown system? Is it appropriate to determine Beta based on cross-validated fit? In essence, how can we interpret the continuum of models obtainable along the bias-complexity tradeoff? For our synthetic datasets, ground truth involved only a small number of latent variables, and a procedure of selecting from among the models with good predictive performance the one with the smallest number of open bottlenecks would have been sufficient to identify it. For discovering cognitive models using laboratory datasets, we expect that the best $\beta$ will depend on the scientific use-case of the model, and that it may sometimes be useful to consider a model which does not achieve the best cross-validated fit, for example because the disRNN has been fit to a large behavioral dataset, but its latents are being used as regressors for a much smaller dataset of neural recordings. Although this does mean that disRNN can produce multiple models from the same dataset, it is worth noting two things: 1) Over several orders of magnitude of $\beta$, it is typical that only a handful of distinct models emerge, and they are often closely related, for example with one new latent appearing as $\beta$ is reduced. 2) These models can often be thought of as different levels of resolution on the cognitive mechanism. Each can be useful depending on the level of resolution needed for the research question.
If the fitted disRNN is to be used as a cognitive model, a key question is whether the mechanistic claims it embodies are psychologically and biologically plausible for the system being studied. Evaluating this necessarily requires considering not only predictive performance and model simplicity, but also expert domain knowledge. We expect that disRNN will be most useful in a workflow that involves fitting different networks to discover a number of different models, then applying domain expertise as a filter to identify which (if any) are cognitively plausible and practically useful for the user’s current scientific goals. We have expanded our discussion of these issues in the manuscript. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed reply. I have no further questions. However, I find that [the remaining questions from "SAvA"](https://openreview.net/forum?id=SOEF0i0G1z&noteId=IILRgByF3Z) are significant, and I look forward to your replies. --- Reply to Comment 1.1.1: Comment: We have responded to the questions from reviewer SAvA below. If you have additional follow-up questions please do let us know.
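The selection heuristic the authors describe in their answer -- among models with good predictive performance, prefer the one with the fewest open bottlenecks -- can be sketched as follows (the field names and fit tolerance are our assumptions, not the authors'):

```python
def select_model(candidates, tol=0.01):
    """Pick the simplest model among those with near-best predictive fit.

    candidates: list of dicts with keys
      'beta'   -- bottleneck penalty weight,
      'cv_ll'  -- cross-validated log-likelihood (higher is better),
      'n_open' -- number of open latent bottlenecks (model complexity).
    Keep models whose fit is within `tol` of the best, then prefer the one
    with the fewest open bottlenecks, breaking ties by larger beta.
    """
    best = max(c['cv_ll'] for c in candidates)
    good = [c for c in candidates if c['cv_ll'] >= best - tol]
    return min(good, key=lambda c: (c['n_open'], -c['beta']))
```

This encodes the rebuttal's point that, over several orders of magnitude of beta, only a handful of distinct models emerge, and the simplest one with good fit is often the right resolution.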
Variational Annealing on Graphs for Combinatorial Optimization
Accept (poster)
Summary: This paper uses RL and Annealing to train a learned CO solver. They demonstrate that their algorithm empirically performs well over many problem types and dataset types. Strengths: 1. This paper uses several important baselines. 2. The breadth of the datasets chosen is good. Weaknesses: 1. You are missing a comma in the first sentence. 2. Use \citep to put parentheses around your citations. 3. You misspell Reinforce in the table. 4. The experiments seem relatively limited in that comparison is only for two problem types. Moreover, the authors do not compare with Gurobi to get a sense of how non-learned solvers perform on these problems. Also, the time taken by each algorithm is not used to compare these algorithms. These metrics are important for a fair comparison. 5. I do not understand the contribution of this paper. Indeed, it seems that this paper combines annealing from Sun et al. and RL for CO, which is also known. The subgraph solving seems to reflect many classical CO algorithms and is not new. The subgraph tokenization may be novel. In general, I'm a little confused about what the overall contribution of this paper is in the context of the literature. 6. Also, the improvement in the solving ratio seems very small -- small enough that tuning your algorithm's hyperparameters while leaving the baselines untuned could account for the difference. I might also change the title of the paper. Currently, it closely resembles "Annealed Training for Combinatorial Optimization on Graphs" by Sun et al. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. During the RL process, how do you compute the reward of an intermediate graph where spins for certain nodes have not been assigned? Do you only compute the reward for nodes that have assigned spins? Overall, due to the limitations in the experiments, issues with experiments, and lack of contribution, I am voting to reject this paper.
However, if the authors can run some further experiments comparing this paper with Gurobi and including the time taken by each algorithm, I believe this paper will be improved. I am flexible, so if the authors adequately address my concerns, I will increase my score accordingly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: They have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review and would like to respond to various points below. ### Weaknesses: **(1-3) LaTeX citation style, comma, and typos:** We will update the manuscript accordingly. **(4)** **Limited experimental evaluation on only two different problem types:** We managed to extend our evaluation by adding benchmarks for Max-Cut and Max-Clique. For Max-Cut we significantly outperform the recent work by Zhang et al. [2023] on the Barabási & Albert (BA) dataset with 200 - 300 nodes. In case of the Max-Clique problem we benchmark on the ENZYMES and IMDB-Binary datasets and significantly improve upon the results in Karalias et al. [2022]. The results are shown in Tab. 4 & 5 in the rebuttal file. **No comparison to Gurobi:** As pointed out in “A.1 Evaluation Metrics”, all our metrics are relative to Gurobi solutions (references in L272 and L286 and in Table 2 & 3). We will place a corresponding statement at the end of the first paragraph of the “Experiments” section. Importantly, our updated results in Figure 1 left and middle (see rebuttal file) show that VAG-CO outperforms Gurobi for similar time budgets on the hardest synthetic problem settings, i.e. for high values of $d$ and low values of $p$. **Lack of runtime comparisons:** We agree that runtimes are an important aspect and added the corresponding times for all of our experiments. The rebuttal file contains runtimes for Table 2 & 3 and for all remaining experiments in Fig. 1. In Table 2 we currently cannot provide runtimes for the results that were reported from Karalias et al. [2022] since they did not provide this information for their MIS experiments. Figure 3 in the rebuttal file shows that Subgraph Tokenization not only yields higher solution quality (as already shown in Fig. 1, right) but also brings sizable runtime reductions. **(5) Relation to Sun et al.
[2022] and RL for CO and unclear contribution and novelty:** The reviewer points out that Sun et al. [2022] also use annealing. This is correct; they use annealing in combination with a mean-field method. We cite them in the corresponding section on “Variational Annealing” in L249 (and in L36). We are also well aware that RL for CO is not at all new and we do not claim that it is new. On the contrary, in the “Related Work” section in L211 we dedicate a subsection to “Unsupervised Learning and Reinforcement Learning” to discuss the relation of our work to RL for CO. We would like to re-emphasize our main contributions, including: (i) identification of a central limitation related to mean-field methods in numerous recent works, (ii) introduction of Subgraph Tokenization to enable efficient autoregressive graph generation, (iii) state-of-the-art results on several popular CO problems on real-world and on synthetic datasets, (iv) a motivation for annealing from statistical learning theory. With regard to novelty, the reviewer criticizes that “subgraph solving” is not new, which is correct, and we do not intend to claim that this is a novelty in any way. The reviewer acknowledges that Subgraph Tokenization “may be novel”. To the best of our knowledge it is novel. In case the reviewer still has significant remaining doubt, we would like to understand the underlying reasons. **(6) Small improvements, hyperparameters could explain them:** The reviewer states that improvements with VAG-CO are rather small and that in such scenarios insufficiently tuning the hyperparameters of baselines could explain superior performance. We agree that on real-world graphs (Table 2 & 3) the achievable remaining improvements are indeed small. For this reason we add synthetic problems which are known to be hard. On these problems there is more margin with respect to optimal solutions. We compare MFA methods, VAG-CO, and DB-Greedy.
In the updated figures we added results for Gurobi with various runtimes and find that we can outperform Gurobi on the hardest problems. The improvements of VAG-CO with respect to the other methods on these problems are sizable and highly significant (Fig. 1, left and middle in the rebuttal file). Importantly, we report our hyperparameter optimization strategies in detail in “A.7 Experimental Details” L630 ff., which shows that the hyperparameter search for the MFA methods in Fig. 1, left and middle was actually very extensive and certainly not less extensive than for VAG-CO. The reviewer criticizes that the title is too similar to the title of Sun et al. [2022]. If possible, we would follow the reviewer’s advice and change the title to e.g. “Beyond Mean-Field: Autoregressive Graph Generation for Combinatorial Optimization”. ### Questions: **(1) How is the reward calculated for intermediate graphs?** We calculate the rewards only for the spins that have already been assigned values (see Eq. 6, L144, and A.13.1 Free-Energy Decomposition into Rewards). Based on the main points of this review, we improved our work with additional experiments, comparisons to Gurobi, and runtimes. We hope that these improvements will be reflected in the updated score. Zhang et al. [2023], “Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets”, arXiv:2305.17010 Sun et al. [2022], “Annealed Training for Combinatorial Optimization on Graphs”, arXiv:2207.11542 Karalias et al. [2022], “Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions”, arXiv:2208.04055 --- Rebuttal Comment 1.1: Title: Request for Further Clarification Comment: I want to state that I appreciate the detailed rebuttal. I have a few follow-up questions. ### Gurobi On real datasets such as Collab and Twitter, if you give Gurobi the same amount of time as VAG-CO to run, which is the stronger method in terms of approximation ratio?
I understand you used Gurobi to calculate the approximation ratio, but which is the faster method on real datasets? I believe this is an important point for me. ### Questions about the experiments What do you mean when you say you can't provide the runtimes of Karalias et al. 2022 since they didn't provide the results? Are you not rerunning their algorithm on your setup? What about the method from Sun et al. 2022? I believe this is MFA-Anneal CE in your paper. Do you rerun their implementation or rely on the numbers from their paper? ### Code for the experiments May I ask why there was no code provided with this submission? I would have liked to check this code for an empirical paper such as this. ### Questions on Experiments Namely, of the four points you mentioned, the problem of MFA relying on the assumption that the parameters are statistically independent is known, so I do not believe point 1 contributes to this paper. Moreover, point 4 was discussed already in Sun et al. 2022. However, I agree that (ii) subgraph tokenization seems to be the central contribution of this paper that no other paper has done. Is there anything else I am missing? --- Reply to Comment 1.1.1: Title: Further Clarification Comment: We thank the reviewer for the prompt reaction. We are happy to read that the novelty and importance of Subgraph Tokenization is now acknowledged by the reviewer. We also point out that the reviewer is not correct in suggesting that we do not provide code. Our git repository is linked in a footnote on page one. **“On real datasets such as Collab and Twitter, if you give Gurobi the same amount of time as VAG-CO to run, which is the stronger method in terms of approximation ratio? ... which is the faster method on real datasets?”** In our experiments on MVC, MIS, Max-Cut, and Max-Clique Tab. 2,3,4,5 and Fig. 
1 in the rebuttal file we report the Gurobi performance for 2-4 different Gurobi runtime limits which were chosen such that at least one runtime limit is comparable to the corresponding runtimes of VAG-CO. In the experiments on real-world graphs (Tab. 2,3,4) Gurobi is often able to solve the problems optimally within runtimes that are shorter than those for learned methods. Thus for these comparably easy CO problem instances Gurobi is the best performing method. Results on the hard synthetic datasets in Fig. 1 left and middle in the rebuttal file show again that Gurobi is superior for the easier synthetic problems, i.e. in Fig. 1 left the $AR^*$ is better for Gurobi at low $d$ and in Fig. 1 middle $AR^*$ is better for Gurobi at high $p$. For the particularly hard problems (high $d$, low $p$) VAG-CO outperforms or matches the results of Gurobi for comparable time budgets. **“What do you mean when you say you can't provide the runtimes of Karalias et al. 2022 since they didn't provide the results? Are you not rerunning their algorithm on your setup?”** As we state in the captions of Tab. 2 - 5, “(r)” indicates that these results are taken from the corresponding references - this also holds for the corresponding runtimes. That indeed means that we did not run the methods from Karalias et al. [2022] ourselves; we only report their published results. We want to point out here that re-running all competing methods ourselves is computationally infeasible for us. **“What about the method from Sun et al. 2022? I believe this is MFA-Anneal CE in your paper.”** No, this is not “MFA-Anneal CE” in our paper. In contrast to Sun et al. [2022] our “MFA-Anneal CE” is trained via REINFORCE as stated in L273: “We also report results of our own implementation of an MFA-based method that is trained with REINFORCE.”. **“Do you rerun their implementation or rely on the numbers from their paper?”** As pointed out above “MFA-Anneal CE” is not the method of Sun et al. [2022].
We ran all experiments that are not indicated as “reported” by “(r)” by ourselves. This includes “MFA-Anneal CE”. The reviewer criticizes the four contributions of this paper that we listed in the response to the reviewer’s initial review. **“Namely, of the four points you mentioned, the problem of MFA relying on the assumption that the parameters are statistically independent is known, so I do not believe point 1 contributes to this paper.”** Our first point reads: “identification of a central limitation related to mean-field methods in numerous recent works”. The reviewer correctly states that the mean-field approximation (MFA) relies on the assumption that the parameters are statistically independent. In fact, that is the defining property of MFA. But this is not the point of our first contribution. Our contribution is that we point out that many recent methods in the field of neural CO rely on MFA and are limited by this assumption. We show empirically that by using a more expressive approach that does not rely on MFA we can obtain superior performance in particular on CO problem instances that are hard. In case the reviewer is aware of any prior work that made this point we would be very interested in the corresponding references. **“Moreover, point 4 was discussed already in Sun et al. 2022.”** Our fourth point reads: “a motivation for annealing from statistical learning theory”. This point refers to Remark 1 in L194 in which we make a formal statement about the sample complexity of approximating Boltzmann distributions. We consider this theoretical insight as one of our contributions. We are not aware of any formal learning theoretical statement in Sun et al. [2022]. Based on our Remark 1 we argue that annealing can be regarded “as a principled curriculum learning approach”. The connection between annealing and curriculum was conjectured without learning theoretical arguments in Sun et al. [2022]. For this reason we will cite Sun et al.
[2022] as follows in L201: “A connection between annealing and curriculum learning was put forward less formally already in Sun et al. [2022].” Karalias et al. [2022], “Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions”, arXiv:2208.04055 Sun et al. [2022], “Annealed Training for Combinatorial Optimization on Graphs”, arXiv:2207.11542
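The reward decomposition the authors describe in their answer to Question (1) -- rewards are computed only over spins that have already been assigned, so that per-step rewards telescope to the negative total energy -- can be sketched for an Ising-type objective. The data layout and function names below are ours, not the paper's:

```python
def step_reward(i, s_i, assigned, J, h):
    """Reward for assigning spin i: count its field term plus only the
    couplings to already-assigned neighbours. Summed over an assignment
    order, these rewards telescope to minus the full energy."""
    e = h.get(i, 0.0) * s_i
    for j, coupling in J.get(i, {}).items():
        if j in assigned:
            e += coupling * s_i * assigned[j]
    return -e

def full_energy(spins, J, h):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, each edge once.
    J is a symmetric adjacency dict: J[i][j] == J[j][i]."""
    e = sum(h.get(i, 0.0) * s for i, s in spins.items())
    for i, nbrs in J.items():
        for j, coupling in nbrs.items():
            if i < j:
                e += coupling * spins[i] * spins[j]
    return e
```

Each edge contributes exactly once to the summed rewards (when its later endpoint is assigned), which is why no reward needs to be defined for still-unassigned spins.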
Summary: The authors present Variational Annealing on Graphs for Combinatorial Optimization (VAG-CO), a novel method for tackling combinatorial optimization problems. This method presumably combines elements of variational inference, annealing, and graph theory to form an effective optimization approach. ===== After reading the rebuttal, the authors have convinced me to increase my score. Strengths: Novel Methodology: The paper presents a new approach to solve combinatorial optimization problems, Variational Annealing on Graphs for Combinatorial Optimization (VAG-CO), which is innovative and adds value to the existing literature. Overcoming Limitations of Existing Methods: The authors identify the limitations of the widely used Mean-Field Approximation (MFA) and propose a way to overcome these, which demonstrates a deep understanding of the problem space. Improvement in Efficiency: The use of sub-graph tokenization and entropy regularization to improve the efficiency of the training and inference process could potentially revolutionize the way these types of problems are solved. Weaknesses: 1. The experiments are only completed on synthetic data. It will be more convincing if the proposed method can be applied and compared on some real data. 2. Maximum Independent Set (MIS) and Minimum Vertex Cover (MVC) are well-known problems in graph theory and are widely used as benchmark problems in combinatorial optimization. However, they only represent a subset of the vast array of combinatorial optimization problems. A more extensive evaluation of the proposed method would involve its application to a larger and more diverse set of problems. This would enable a more comprehensive understanding of its capabilities and limitations. If the authors want to claim the wide application of the proposed method, they may want to conduct experiments for Traveling Salesman Problem, Knapsack Problem, Vehicle Routing Problem etc. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: How do the authors define hard problems in the paper? What are the criteria? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The experiments are not extensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review. We are happy to read that: - our method "is innovative and adds value to the existing literature", - we "identify the limitations of the widely used Mean-Field Approximation (MFA) and propose a way to overcome these, which demonstrates a deep understanding of the problem space.", - "The use of sub-graph tokenization and entropy regularization ... could potentially revolutionize the way these types of problems are solved.". Please find our comments on the review below. ### Weaknesses: **(1): “The experiments are only completed on synthetic data. It will be more convincing if the proposed method can be applied and compared on some real data.”** We are not sure about the reviewer’s definition of “synthetic”. Many of the graph datasets that we use like TWITTER, PROTEINS, COLLAB, IMDB-Binary, MUTAG, and ENZYMES are considered to be real-world datasets in the literature. **(2): “A more extensive evaluation of the proposed method would involve its application to a larger and more diverse set of problems.“** We managed to extend our experimental evaluation by adding benchmarks for MaxCut and MaxClique. For MaxCut we significantly outperform the recent work by Zhang et al. [2023] on Barabási & Albert (BA) graphs with 200 - 300 nodes. In case of the MaxClique problem we benchmark on the ENZYMES and IMDB-Binary datasets and significantly improve upon the corresponding results in Karalias et al. [2022]. These results are shown in Tab. 4 and 5 in the rebuttal file. ### Questions: **(1): “How do the authors define hard problems in the paper? What are the criteria?”** Our claim of hardness of MIS on Random Regular Graphs (RRGs) is based on the observation of a clustering transition for degrees $d > 16$ in the corresponding independent sets (see Barbier et al. [2013] for details). In the context of neural CO it was argued in Angelini et al. [2022] that these MIS instances are hard.
They write: “We have argued that for the MIS using $d$-RRG with $d < 16$ is likely to be an easy problem and the test would be not very selective (we have in mind now only smart algorithms, not the GNN of Ref. [4] whose performances are so poor to be rejected even by easy tests). However, for larger $d$ we expect the optimization to become much more demanding because the clustering of the IS of large size is likely to create relevant barriers that affect any algorithm searching for the MIS.”. This claim is empirically compatible with our updated Fig. 1 left in the rebuttal file. It shows that all compared algorithms become worse as $d$ is increased from 3 to 20. Remarkably, if Gurobi is restricted to the same runtime as VAG-CO, i.e. 0.08 s (or even to a longer runtime of 0.1 s), it is outperformed by VAG-CO for $d > 3$. The hardness of the RB graphs for the MVC problem is based on a correspondence between the MVC and forced satisfiable SAT problems on RB graphs (see Xu [2004] for details). These graphs were also used recently in Wang et al. [2023]. In agreement with their observations we observe in Fig. 1 middle in the rebuttal file that for low values of the parameter $p$ all methods, including time-limited Gurobi, exhibit a rise in the approximation ratio, i.e. these problems become harder for all investigated methods. Barbier et al. [2013], “The hard-core model on random graphs revisited”, arXiv:1306.4121 Angelini et al. [2022], “Cracking nuts with a sledgehammer: when modern graph neural networks do worse than classical greedy algorithms”, arXiv:2206.13211 Wang et al. [2023], “Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning”, arXiv:2301.0311 Xu [2004], "BHOSLIB: Benchmarks with Hidden Optimum Solutions for Graph Problems", http://vlsicad.eecs.umich.edu/BK/Slots/cache/www.nlsde.buaa.edu.cn/~kexu/benchmarks/graph-benchmarks.htm
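The d-RRG benchmark family discussed above can be reproduced in miniature. A hedged sketch of a pairing-model random regular graph generator plus the classic min-degree greedy baseline for MIS (our own illustration; observing the $d > 16$ clustering transition would of course require far larger instances and stronger solvers):

```python
import random

def random_regular_graph(n, d, seed=0):
    """d-regular simple graph on n nodes via the pairing model
    (n*d must be even); rejects and retries on self-loops/multi-edges."""
    rng = random.Random(seed)
    assert n * d % 2 == 0
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for a, b in zip(stubs[::2], stubs[1::2]):
            if a == b or (a, b) in edges or (b, a) in edges:
                ok = False
                break
            edges.add((a, b))
        if ok:
            adj = {v: set() for v in range(n)}
            for a, b in edges:
                adj[a].add(b)
                adj[b].add(a)
            return adj

def greedy_mis(adj):
    """Min-degree greedy heuristic for a (maximal) independent set."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    mis = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # pick a min-degree node
        mis.add(v)
        removed = {v} | adj[v]                   # drop it and its neighbours
        for u in removed:
            adj.pop(u, None)
        for nbrs in adj.values():
            nbrs -= removed
    return mis
```

Comparing such greedy solutions against a learned solver across increasing $d$ is, in spirit, the experiment behind Fig. 1 left of the rebuttal file.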
Summary: This paper explores the use of autoregressive deep generative models for solving combinatorial problems. Starting from an optimization problem, the authors reformulate it using a Boltzmann distribution on the solution set. This distribution can then be approximated with a parametric family, typically a neural network, following a variational approach (minimization of the free energy). While many previous works use the independent (or "mean field") approximation, it is suggested that building the solution iteratively is more expressive, and thus performs better. In this case, the autoregressive model used is a graph neural network, which gives a conditional probability for the current spin given past choices. To increase inference speed, two tricks are introduced: subgraph tokenization (making several decisions at once) and dynamic graph pruning (removing fixed vertices from the graph). Training is achieved using a temperature annealing scheme which favors gradual concentration around global optima. This annealing process enjoys an interpretation in terms of sample complexity and curriculum learning. Numerical experiments on the maximum independent set and the minimum vertex cover problem show the promise of this new method. Strengths: ### Originality This paper describes a new combination of known ideas, coherently motivated by the specific goal of solving hard optimization problems: - formulation of optimization in the language of statistical physics - variational approximation of the Boltzmann distribution - iterative construction of a solution - RL framework with partial rewards - temperature annealing As far as I can tell, the training tricks related to tokenization and pruning seem to be novel. ### Quality The proposed model makes theoretical sense, and rests on solid foundations of previous work. I particularly enjoyed the interpretation of annealing as curriculum learning: as the temperature gets lower, the learning gets harder. 
As for the numerical experiments, they appear to be thorough and include a wide variety of algorithms. The authors also took care to test on harder instances whenever nearly optimal solutions were too easy to reach. They do however exhibit a few inconsistencies, which I point out below. ### Clarity n.a. ### Significance This paper contributes to a wider discussion on the best variational ansatz for combinatorial optimization. Comparing mean field with autoregression is an important first step, and hopefully the field can evolve towards more generic structured representations. Subgraph tokenization is also an interesting idea, which deserves to be pushed further. Weaknesses: ### Originality ### Quality A questionable algorithmic choice is the arbitrary order in which nodes are processed. It isn't obvious why BFS ordering of the vertices makes more sense than other options. The same goes for subgraph tokenization: why take the $k$ next vertices in the BFS order, even though they might be very far apart from each other? As for the experiments, the realistic datasets of the first batch are seemingly too easy, since every method implemented by the authors (including the baseline DB-Greedy) shows near-perfect accuracy. The results of Table 2 are very surprising to me, as both EGN and MFA are mean field methods with conditional expectation decoding, yet they exhibit wildly different performances on the minimum AR. On the other hand, the average AR is only reported for new methods, not for the state of the art. I would welcome more detail from the authors on this point. In any case, the benefits of autoregression as opposed to independence are not definitively proven by this series of benchmarks. The synthetic datasets of the second batch are designed to be harder to solve, but the associated plots do not include other methods from the state of the art, which were analyzed on realistic datasets only. Again, clarification from the authors would be very welcome. 
### Clarity The notations are sometimes a bit hard to follow. Several algorithms included in the benchmark suite are only mentioned earlier in the text. ### Significance Depending on the validity and fairness of the benchmarks, the results may be less significant than announced by the authors. I look forward to the rebuttal period to enlighten this aspect. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: L121: Why BFS in particular? Is there a way to make the order itself parametric? L160: Is there a way to perform subgraph tokenization that does not scale exponentially with $k$? L198: Are there other insights from curriculum learning that we could draw inspiration from? I'm not at all familiar with this literature L215: What if our problem provides no natural way to define partial rewards? L312: Why change the evaluation metric from approximation ratio to relative error? Not sure I grasp the difference Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors do not discuss many limitations of their work, although they mention the additional complexity. Societal impact is irrelevant here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review and would like to respond to the various points mentioned in the weaknesses and questions. ### Quality: **BFS:** See answer to question on L121 below. **“the realistic datasets of the first batch are seemingly too easy”:** These are datasets that are used in recent works in this field. We agree with the reviewer that these datasets might be too easy (see L304). For this reason we decided to also benchmark on synthetic datasets that are known to yield hard MIS and MVC problems. **“The results of Table 2 are very surprising to me, as both EGN and MFA are mean field methods with conditional expectation decoding, yet they exhibit wildly different performances on the minimum AR.”** A possible reason might be the training method: in EGN the gradients are calculated directly from exact expectation values while our MFA estimates gradients via REINFORCE. The latter approach might be less prone to getting stuck in local minima due to an increased variance in the gradient estimation. Interestingly, the recent work of Zhang et al. [2023] also states in App. D that EGN is worse than any of their baselines on their MIS benchmarks. **“On the other hand, the average AR is only reported for new methods, not for the state of the art.”** The purpose of the average AR metric in Table 2 & 3 is to compare the best achievable solution quality for VAG-CO with mean-field based approaches when they do not use conditional expectation. Hence, these results can be regarded as an ablation study in which no non-learned decoding method is used but where we simply report the average AR of the sampled solutions. The computational requirements of including all other under-performing (in terms of $AR^*$) methods in this ablation study would be sizeable. Consequently, we decided to consider only the best performing mean-field methods.
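The contrast drawn above between exact-expectation gradients (EGN) and REINFORCE-based estimation (MFA) can be illustrated with a minimal sketch. This is not the authors' implementation: the energy function, the factorized Bernoulli policy, and the sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(s, J):
    # Ising-style energy of a binary configuration s in {0,1}^n (illustrative)
    return s @ J @ s

def reinforce_grad(theta, J, n_samples=1000):
    """Score-function (REINFORCE) estimate of the gradient of E_p[energy]
    w.r.t. the logits theta of a factorized Bernoulli distribution p(s).
    Higher-variance than exact expectations, which may help escape local minima."""
    p = 1.0 / (1.0 + np.exp(-theta))                 # per-spin probabilities
    s = (rng.random((n_samples, len(theta))) < p).astype(float)
    e = np.array([energy(si, J) for si in s])
    b = e.mean()                                     # baseline for variance reduction
    # For a Bernoulli with logit theta_i: grad_theta log p(s) = s_i - p_i
    return ((e - b)[:, None] * (s - p)).mean(axis=0)
```

For a two-spin system with coupling J = [[0, 1], [1, 0]] and theta = 0 (so p = 0.5), the exact gradient of the expected energy is 0.25 per coordinate, which the estimator recovers up to sampling noise.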
**"The synthetic datasets of the second batch are designed to be harder to solve, but the associated plots do not include other methods from the state of the art…."** Since these experiments are computationally expensive we conducted them only for the best performing methods in terms of $AR^*$ in Table 2 & 3. However, due to a request by reviewer rhwA we added the results on the synthetic datasets for Gurobi with several time limits to the rebuttal file (Fig. 1). ### Clarity: **“The notations are sometimes a bit hard to follow. Several algorithms included in the benchmark suite are only mentioned earlier in the text.”** If specific weaknesses of the notation are pointed out to us, we would be happy to improve them. We will introduce each algorithm in the experiment section of the updated manuscript. ### Questions: **L121: Why BFS in particular?** BFS will typically order the nodes such that the next $k$ spins are likely to be directly connected to already generated spins and will typically have several generated spins in their neighborhood. Consequently, the newly generated spins receive direct information on the already generated spins via message passing. For example, with depth-first search (DFS) one would expect that the newly generated spins have fewer already generated spins in the neighborhood. Consequently, in DFS there would be less information flow from the already generated spins to the spins that are to be generated next. This should lead to an increased probability of generating tokens that are sub-optimal once they become connected to the previously generated spins. Investigating this question empirically could be an interesting direction for future work. **L121: Is there a way to make the order itself parametric?** One can certainly make the order of the spins parametric, e.g. by sampling the order of spins from a probability distribution that is obtained via attention. This would be an interesting direction for future work.
**L160: Is there a way to perform subgraph tokenization that does not scale exponentially with k?** One can reduce the number of possible tokens by masking out tokens that violate constraints of the CO problem (e.g. the independence condition in MIS). Whether this would result in a sub-exponential scaling depends on the specific CO problem instance. **L198: Are there other insights from curriculum learning that we could draw inspiration from?** Yes, one could increase the size of the graphs throughout training like in Lisicki et al. [2020]. More generally, one could cast the task of generating suitable training graphs for curriculum learning in neural CO as a task for a separate agent like in “Teacher-Student Curriculum Learning” by Matiisen et al. [2017]. Here a teacher learns to generate the tasks (e.g. graphs in CO) for a student that learns to solve the corresponding CO problem. **L215: What if our problem provides no natural way to define partial rewards?** Without partial rewards the RL setting would be changed to an MDP with sparse rewards that are received once a complete solution has been generated. This RL problem would be harder but our method could still be applicable. **L312: Why change the evaluation metric from approximation ratio to relative error?** The reason was to use metrics where lower is better in all three subplots of Fig. 1. For the approx. ratio (AR) lower is better for MVC while for MIS higher is better. We agree with the reviewer that it might be better to not use the relative error in Fig. 1 (left) and to stick to $AR^*$. We will do so in the updated manuscript. The updated Fig. 1 can be found in the rebuttal file. Zhang et al. [2023], “Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets”, arXiv: 2305.17010 Lisicki et al. [2020], “Evaluating Curriculum Learning Strategies in Neural Combinatorial Optimization”, arXiv: 2011.06188 Matiisen et al.
[2017], “Teacher-Student Curriculum Learning”, arXiv: 1707.00183 --- Rebuttal Comment 1.1: Comment: Thank you for your answer, which alleviates many of my concerns. In particular, I now understand the numerical experiments a bit better. A few minor details: - The comparison between EGN and your MFA might deserve a remark. - Among the slightly confusing notations: $\omega$ as a probability distribution, $\nu_i$ for the graph nodes (instead of just $i$) - On each result plot, please specify whether lower is better or worse, since it now changes > BFS will typically order the nodes such that the next spins are likely to be directly connected to already generated spins and will typically have several generated spins in their neighborhood. This might be true at first, but as you expand the BFS, the radius gets larger and there is no guarantee that the $i+1$-th node visited comes from the same parent as the $i$-th one. In fact, it might be located in a very different part of the graph. Plus the whole procedure is very dependent on the choice of source. I agree that DFS seems worse but are there other algorithms one might consider? --- Reply to Comment 1.1.1: Title: Answer to Comment Comment: We thank the reviewer for his prompt response and his follow-up suggestions. ### Regarding the minor details: **"The comparison between EGN and your MFA might deserve a remark."** \ We will add a remark to the updated version of our manuscript. **"Among the slightly confusing notations: $\omega$ as a probability distribution, $\nu_i$ for the graph nodes (instead of just $i$ )"** \ We agree that using $\omega$ as a probability distribution may be confusing. Therefore, we will use $q$ instead and clarify in the text that it denotes a probability distribution. Also, we will use $i$ instead of $\nu_i$ for the graph nodes. **"On each result plot, please specify whether lower is better or worse, since it now changes"** \ We will specify in each figure whether lower or higher is better.
### Regarding the BFS question: We agree with the reviewer that in BFS, when the radius gets large, the next $k$ spins will very likely not be close to each other. However, we do not see why this should be a problem. It would, indeed, be interesting to investigate this in dedicated experiments. **"I agree that DFS seems worse but are there other algorithms one might consider?"**\ If one wants to have the property that the next $k$ spins have to be close to each other and that already generated spins are also close to these spins, we think the following algorithm could be considered: **Step 1:** select the first of $k$ nodes according to BFS.\ **Step 2:** search and select the $k-1$ unassigned nodes that are the closest (in terms of hop distance) to the first selected node (e.g. with a neighborhood search)\ **Step 3:** generate the spin values of the nodes selected in Step 1 and Step 2 with Subgraph Tokenization\ **Step 4:** repeat from Step 1 until all nodes have assigned spin values
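The four steps above could be sketched as follows. This is an illustrative reading, not the authors' code: `adj` (an adjacency-list dict), the helper name, and the tie-breaking choices are assumptions, and the sketch assumes a connected graph.

```python
from collections import deque

def neighborhood_order(adj, source, k):
    """Sketch of the proposed ordering: pick the next node in global BFS order
    (Step 1), then add the k-1 closest unassigned nodes by hop distance
    (Step 2), and repeat (Steps 3-4) until all nodes are assigned."""
    # Global BFS order from the source node.
    bfs, seen, q = [], {source}, deque([source])
    while q:
        v = q.popleft()
        bfs.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    assigned, groups = set(), []
    for v in bfs:
        if v in assigned:
            continue
        group = [v]                       # Step 1: first unassigned node in BFS order
        assigned.add(v)
        # Step 2: local BFS from v visits nodes in increasing hop distance,
        # so appending unassigned nodes as discovered yields the closest ones.
        dq, dist = deque([v]), {v: 0}
        while dq and len(group) < k:
            u = dq.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    dq.append(w)
                    if w not in assigned and len(group) < k:
                        group.append(w)
                        assigned.add(w)
        groups.append(group)              # Steps 3-4: tokenize this group, repeat
    return groups
```

On a path graph 0-1-2-3-4 with k=2 and source 0, this yields the groups [0, 1], [2, 3], [4], so each generated token covers spins that are adjacent in the graph.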
Summary: This submission proposes a novel unsupervised framework for solving graph combinatorial optimization problems. The proposed method is coined as VAG-CO, which autoregressively generates solutions to CO problems via annealing / reinforcement learning. Corresponding theoretical analysis is provided and numerical demonstrations on various datasets are conducted. Strengths: - This paper is well written, and the technique is solid. - The proposed autoregressive generation is appropriate for solving graph CO problems whose solutions are a set over the nodes, which could be represented with a binary vector. - The authors use RL to optimize and carefully design an MDP to achieve that. - An open sourced implementation is also provided. - Solid technical analysis is provided. - Various experiments on both simulated and realistic benchmarks and on both MIS and MVC tasks. Weaknesses: This submission is great. A few issues are listed as follows. - It seems the method is also applicable to other CO problems such as Minimum Dominating Set. Would it be possible for the authors to elaborate on at least the possibility? - It would be also good to evaluate under some subset of the MIS benchmark (https://github.com/maxiboether/mis-benchmark-framework). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive review. In particular, we are glad that the reviewer appreciates among other things our theoretical analysis, that the paper is "well written", and most importantly, that our method is "solid". **Applicability to other CO problem types, like Minimum Dominating Set (MDS):** Our method is designed to be directly applicable to problems that can be written in the form of an Ising Hamiltonian (Eq. 1). This class of problems is very broad and encompasses all of Karp’s famous 21 NP-complete problems (Lucas [2014]). The reviewer asked specifically about the MDS problem. This problem can indeed be formulated as an Ising-type CO problem. This is possible by using the reduction from MDS to the Set Cover problem (Kann [1992]). The latter can then be expressed as an Ising-type problem as shown in Section 5.1 in Lucas [2014]. **Benchmarking on MIS-Benchmark:** We agree that using the MIS-Benchmark would be interesting. Nevertheless, we decided to focus on benchmarks from the most recent publications in the field, like Karalias et al. [2022] and Wang et al. [2023]. We added 3 new problem settings on 2 new CO problem types (Max-Cut and Max-Clique). For Max-Cut we significantly outperform the very recent work by Zhang et al. [2023] on the Barabási & Albert (BA) dataset with 200 - 300 nodes (BA 200 - 300). In case of the Max-Clique problem we benchmark on the ENZYMES and IMDB-Binary dataset and significantly improve upon the corresponding results in Karalias et al. [2022]. The results are shown in Tab. 4 and 5 in the rebuttal file. Kann [1992], “On the approximability of NP-complete optimization problems”, Doctoral dissertation, Royal Institute of Technology Lucas [2014], “Ising formulations of many NP problems”, arXiv:1302.5843 Karalias et al. [2022], “Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions”, arXiv:2208.04055 Zhang et al.
[2023], “Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets”, arXiv:2305.17010 Wang et al. [2023], “Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning”, arXiv:2301.03116 --- Rebuttal Comment 1.1: Comment: Thank you for the feedback. I will keep my previous score.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their helpful reviews. Based on these reviews we could clarify several aspects of our work. In the following we will highlight the points of criticism that led to an extension of the presentation of our results and one point where we have difficulties in comprehending a fundamental point of criticism. **Addition of runtimes** Reviewer **rhwA** rightfully argued that the work would be improved if we added runtimes for our experiments. We did so for all of our experiments and whenever this information was available for reported results. All runtimes are now provided in the updated figures and tables in the rebuttal file. The obtained runtimes show that our autoregressive VAG-CO exhibits similar runtimes as recent mean-field (MF) methods. The reason for this surprising result is that the MF methods rely on conditional expectation which is a time consuming procedure. Importantly, our autoregressive method would not be able to achieve such remarkable runtimes if we did not employ our newly introduced Subgraph Tokenization (ST) technique. As shown in Fig. 3 in the rebuttal file an increased $k$ for ST yields better results and substantially reduced runtimes. We thank reviewer **rhwA** for raising this question since it highlights the benefit of ST. **Experimental evaluation** We evaluate our method on 8 real-world datasets and two synthetic datasets for in total 14 different hardness settings. We report results for 13 different methods. Therefore, we are somewhat surprised that the extent of the experimental evaluation is criticized by reviewer **59Wh** (both weaknesses are related to this aspect) and reviewer **rhwA** (weakness 4: "The experiments seem relatively limited in that comparison is only for two problem types."). However, the reviewers are right in stating that our experiments focused on two CO problem types and that there are many others that would be interesting.
Despite the very limited rebuttal period, we therefore did our best to fully convince reviewer **59Wh** and reviewer **rhwA** of our experimental evaluation and can now report results on two additional CO problem types: Max-Cut and Max-Clique (Tab. 4 and 5 in the rebuttal file). For Max-Cut we demonstrate that VAG-CO outperforms the very recent work Zhang et al. [2023] on their version of the Barabási & Albert dataset with 200-300 nodes (BA 200-300). Similarly, our new results for Max-Clique on the ENZYMES and IMDB-Binary datasets improve upon the results by Karalias et al. [2022]. In both cases VAG-CO represents the state-of-the-art. We include runtimes for all of these results and report the Gurobi performance with various runtimes. These additional results underpin the strong performance of VAG-CO and complement the numerous results that we already reported on MIS and MVC. In total, we are convinced that the updated experimental evaluation is extremely extensive in comparison to the standards in this field and that we could clearly establish the strong performance of VAG-CO. In summary, VAG-CO is among the best performing methods on 10/11 real-world problems and the single best method on 8/10 real-world problems when no non-learned algorithmic components like conditional expectation are used. Since these real-world datasets, which are frequently used in the recent literature, are almost optimally solved, we also include synthetic datasets that are known to be hard. VAG-CO outperforms the best learned methods on the real-world dataset on all 13 hard synthetic problem settings by a large margin. **Unclear contribution and novelty** While three reviewers agree that this work represents a valuable contribution the remaining reviewer **rhwA** writes "I do not understand the contribution of this paper." and "In general, I'm a little confused about what the overall contribution of this paper is in the context of the literature.".
In the following we would like to discuss several aspects of this point and hope that we can resolve potential misunderstandings. Reviewer **rhwA** correctly pointed out several aspects of our work that are not new. As we argue below in our response to weakness (5) by reviewer **rhwA** this is correct but we never claimed these aspects to be new. For example, it is argued by the reviewer that the concept of annealing was already utilized in Sun et al. [2022], which is correct and also stated in L249. Then the reviewer calls into question the novelty of ST by writing that it “may be novel”. To the best of our knowledge it is novel and as our results show it is essential for the performance and runtime of VAG-CO. We hope that the reviewer will either outline the reasons for the doubt on this novelty or acknowledge it as an important new contribution. We would like to point out that our main contributions: (i) identification of a central limitation related to MF methods in numerous recent works, (ii) introduction of Subgraph Tokenization to enable efficient autoregressive graph generation, (iii) state-of-the-art results on several popular CO problems on real-world and on synthetic datasets, (iv) a motivation for annealing from statistical learning theory, are explicitly stated at multiple prominent places in the manuscript, including the abstract, the last paragraph of the introduction, and the conclusion. We hope that this clarification will facilitate the assessment of our contributions. Finally, we are convinced that all other points raised in the reviews were clarified in our rebuttals below and we are happy to address any questions in the discussion period. Zhang et al. [2023], “Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets”, arXiv:2305.17010 Karalias et al. [2022], “Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions”, arXiv:2208.04055 Sun et al.
[2022], “Annealed Training for Combinatorial Optimization on Graphs”, arXiv:2207.11542 Pdf: /pdf/8812b16c5415bf9462e1cc2c0b4fa3a532f1d0b4.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model
Accept (poster)
Summary: The paper presents a two-stage latent text diffusion model that uses an autoencoder to condense lengthy texts into a limited number of paragraph embeddings, and a continuous time diffusion model that learns the distribution of these embeddings. The paper presents detailed experiments on open-ended generation tasks and shows that the proposed model alleviates the issue of repetition and advances generation diversity across different tasks. Strengths: The paper is well-motivated. The examples in Figure 1 with the self-reinforcing effect are impressive. The proposed method first summarizes the paragraph information with an autoencoder and then applies the diffusion process to control the token generation process, which is novel from my point of view. The paper conducts extensive experiments on open-ended generation tasks and provides some analyses of the running time. Weaknesses: 1. Could you provide more experimental analyses on the learned paragraph embedding? I think that paragraph embedding is used to learn a high-level concept for planning but ignores the details of sentence/phrase structure to avoid copying similar phrases from previous prompts and repeating them. Could you provide more experiments to show the quality of learned paragraph embedding? Directly extracting the first K tokens from the encoder is not the best choice for me. 2. In experiments such as Table 1, the proposed method with a greedy decoding mode is comparable to baselines with top-p sampling in terms of diversity. A question is whether PLANNER combined with top-p can lead to better results. Are the proposed methods compatible with stochastic decoding methods? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please see the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback. We address the questions below: *1. Paragraph Embedding* Please refer to the general question titled "Motivation and Impact of $k$." Additionally, we provide an analysis of how the acquired embedding influences the eventual generation, along with examples illustrating the abilities of reconstruction, denoising, and interpolation in Appendix A. *2. PLANNER with stochastic decoding* It is possible and straightforward to implement stochastic decoding for PLANNER. In fact, we experimented with this option. In our experiments, we utilized nucleus sampling with a value of p=0.92 and K=50 (see below for an experiment on hotel review generation). By incorporating stochastic decoding, the diversity and repetition metrics can be improved, although at the expense of relevance and accuracy scores. It is important to mention that the decoder's role in PLANNER is to faithfully translate the latent code into the desired target text, rather than performing a compositional/planning job. Stochastic decoding may disrupt this role and can lead to undesirable generation, as we observed an increase in hallucinations when combining PLANNER with stochastic decoding. | Method | PPL | DIST/ENT | S-BL | Rep-4 | BLEU | ROUGE | Len | |-----------------|------|------------|------|----------|-------|-------|-------| | PLANNER greedy | 47.3 | 0.17/6.60 | 0.52 | 1.55% | 0.77 | 7.9 | 168.1 | | PLANNER top-p | 72.0 | 0.20/6.80 | 0.38 | 0.94% | 0.58 | 6.1 | 173.2 |
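The nucleus-sampling filter with an additional top-K cap (the p=0.92, K=50 setting mentioned above) can be sketched as follows. The function name and the plain-NumPy formulation are assumptions for illustration, not the authors' decoder code.

```python
import numpy as np

def top_p_filter(logits, p=0.92, top_k=50):
    """Minimal sketch of nucleus (top-p) filtering with a top-k cap:
    keep the smallest set of highest-probability tokens whose cumulative
    mass reaches p, then renormalize over the kept tokens."""
    order = np.argsort(logits)[::-1][:top_k]     # candidate tokens, best first
    probs = np.exp(logits[order] - logits[order].max())
    probs /= probs.sum()                         # softmax over candidates
    # Keep a token if the cumulative mass *before* it is still below p,
    # so the token that crosses the threshold is included.
    keep = np.cumsum(probs) - probs < p
    kept = order[keep]
    renorm = probs[keep] / probs[keep].sum()
    return kept, renorm
```

Sampling then draws from `kept` with probabilities `renorm`; greedy decoding corresponds to always taking the first entry.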
Summary: This paper presents a 2-stage generative model for text. The first stage involves training a VAE over the data to obtain effective encoder and decoder modules. The second stage uses the output of the frozen VAE encoder as the hidden state of the input text and learns a regular continuous diffusion model over the encoded hidden states of training data instances. For generation, the diffusion model produces a hidden state from noise and this hidden state is fed into the VAE decoder to generate text. This model is evaluated on sentiment-guided generation, text completion, and summarization and compared against baselines that include token-based diffusion models (instead of the hidden-state text diffusion models in this work), VAE/Enc-dec models, and standard autoregressive models (like fine-tuned GPT-2). Strengths: – Although the approach seems straightforward, to the best of my knowledge, surprisingly I am not aware of other latent sequence/paragraph-embedding-based diffusion models for text generation. Hence, this fills a gap in the research on diffusion models for text. – The experimental design is reasonable and informative. – This approach seems to be more fluent, diverse, and effective than popular token-based diffusion models following the results. – Although typically less fluent than the autoregressive models, it still performs competitively when considering other metrics like diversity, reference overlap, and control. Weaknesses: – I have some concerns about evaluation, particularly related to the quality of baseline models. For example, on the CNN-DM summarization task, publicly available results (https://paperswithcode.com/sota/abstractive-text-summarization-on-cnn-daily) show that T5 achieves an R-L that is ~7-10 points better than reported in this paper (and is hence better than the R-L the proposed model achieves). Therefore, I am doubtful about the quality of training/tuning of the baseline models reported in the paper.
– Choosing “first k hidden state vectors” from the encoder in the VAE seems arbitrary. What is the motivation behind this hyperparameter? What is the effect of this k? – The paper alludes to learning “paragraph embeddings” which capture high-level semantic properties which enables “coarse-to-fine” generation and allows for “planning”. However, no evidence is presented in this paper about compositional/planning capabilities of the model. While it is true that paragraphs/tokens are generated from a lower-dimensional manifold, it doesn’t automatically imply that these embeddings are linguistically interesting or allow for controlled planning-oriented generation. At an abstract conceptual level, how different are these representations from a VAE-based representation for example? – Although this is a generative model, the training objective is arbitrary and doesn’t seem linked to maximizing the likelihood/learning the distribution of the training data. This is because the VAE and the diffusion components are separately trained and hence preclude a clean interpretation of the training objective. – I am not convinced that the “Distributional smoothness metric” reflects the nature of the manifold as is claimed in the paper. – Runtime analysis seems to compare unbatched autoregressive models with batched diffusion models. Sorting by length and then padding for batching is a standard practice for autoregressive models. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback. Next, we address the comments: *1. Weak baseline* Thank you for the pointer. We have examined the link. Please correct us if we are making a mistake, but it seems to us that the T5 model that achieved 7-10 points better than our T5 baseline might be much larger (11B, [https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning](https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning)) than our baseline (770M). We also found a publicly accessible model ([https://huggingface.co/sysresearch101/t5-large-finetuned-xsum-cnn](https://huggingface.co/sysresearch101/t5-large-finetuned-xsum-cnn)) that has finetuned T5-large using XSum + CNNDM, and it reports lower ROUGE scores than ours. *2. Motivation of $k$* Please refer to general questions. *3. Planning* Thank you for the insightful comment. According to our understanding, the diffusion model operates by utilizing the previous step's output and the controlling signal to refine the representation and incorporate more intricate details. For an example, please refer to Appendix Table 8. Initially, the decoded text from early diffusion steps lacks specificity and coherence. However, as the diffusion inference progresses, the model gradually incorporates additional syntactic and semantic details, and also determines when to remove certain information. This process bears resemblance to the image diffusion process. The external controlling signal provides guidance at each stage of the text generation process. While the representation is learned by the VAE, our approach differs in that the diffusion model generates samples close to the ones from the posterior distribution of the latent code, which potentially possesses a more sophisticated structure than a simple Gaussian distribution. *4.
Arbitrary objective* Our understanding is that diffusion is the process of optimizing the evidence lower bound (ELBO), as in a VAE, which is a valid objective. Our diffusion model is essentially a latent diffusion model (LDM), which shares the same objective and thus remains meaningful. Our two-stage training setup resembles that of VQVAE/VQVAE2 (which actually reports likelihoods). In VQVAE, they state "Whilst training the VQ-VAE, the prior is kept constant and uniform. After training, we fit an autoregressive distribution over z, p(z), so that we can generate x via ancestral sampling." We share a similar rationale with VQVAE. In the first phase, we train a regular VAE with a simple prior and posterior; in the second phase, we freeze the decoder and posterior, but train a diffusion-model-based prior, which only optimizes the $KL(q(z|x) || p(z))$ term but freezes the $p(x|z)$ part. As a result, the ELBO should continue to improve. *5. Distribution smoothness metric* We acknowledge that the PPL of the linearly interpolated sample might not fully reflect the nature of the manifold. Nevertheless, we would hope it has the potential to provide some insight into whether the posterior distribution is highly multimodal, with spikes and numerous density "holes". We are not aware of better alternatives for evaluating the smoothness of a distribution, and would therefore greatly appreciate any suggestions that could expand our understanding. *6. Running time analysis* We are aware that the unbatched autoregressive model is not the best choice (see line 343). When we evaluated the full test sets of CNNDM and XSum we performed the batched version by sorting the input text by length and batching it as much as possible. The total running time was reduced by more than 4x, from 11 hours (unbatched version) to 2.5 hours (batched version). We admit there might be more room to accelerate the autoregressive baseline.
In fact, we do not intend to indicate that our method can be faster than the autoregressive ones, but only to show the convenience of arranging inputs into same-length vectors. We will make this clearer. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I am willing to agree that the results I mentioned might have been achieved by a much larger model. However, my overall impression remains similar to before so I am keeping my current score.
Summary: In this paper, the authors propose PLANNER, a model that combines latent semantic diffusion with autoregressive generation to generate fluent text while exercising global control over paragraphs. The proposed method is evaluated on various conditional generation tasks, and the results show its effectiveness in generating high-quality text. Strengths: 1. The paper addresses the issue of repetitive and low-quality output generated by autoregressive models and proposes a novel approach using latent semantic diffusion. 2. The combination of autoregressive decoding and latent diffusion allows for efficient paragraph generation. 3. The proposed method is evaluated on various tasks and shows improved generation quality compared to autoregressive and text diffusion baselines. Weaknesses: 1. I understand that revisiting and revising the generated sentences can alleviate exposure bias, as errors can be reduced through further editing at the token level. However, revisiting and revising the latent space does not seem reasonable to me. From my perspective, the exposure bias occurs in the process of picking words out, but the authors merely employ GPT-2 without making any changes in this phase. 2. The paper claims that the model can generate longer text and paragraphs, but there is no further analysis of the relationship between the length of the generated text and performance. 3. The evaluation of CNN/Daily Mail and XSum is performed on 256 subsampled examples from the test set. This practice is not convenient for subsequent works to follow, and it also hinders reviewers from making fair comparisons between this work and others. 4. I have reservations about the ability of the variational paragraph embedder to learn effective representations of sentences with different lengths. What would happen if there is a significant difference in sentence lengths, such as one being very short (e.g., 5 tokens) and the other being much longer (e.g., 512 tokens or more)?
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Unlike autoregressive models based on conditional probabilities, diffusion models are unable to ascertain the optimal sentence from the generated set. How did you address this problem in your paper? 2. As the reverse diffusion goes on in latent space, does the latent representation z get closer to the representation of the ground-truth one in cosine distance or other distances? 3. What is the influence of paragraph embeddings number, k in line 119? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We address the comments below. *1. Evaluate on subset datasets* Please refer to general questions. *2. Exposure bias* Our hypothesis is that exposure bias occurs when there is a discrepancy between the training and inference stages, specifically during teacher forcing. In our diffusion model, we predict the latent semantic code in a non-autoregressive manner. No partial ground-truth latent code is fed into the diffusion model during training. Consequently, the diffusion model is less affected by the exposure bias issue. As for the decoder, it is trained within an autoencoder using teacher forcing, which means that exposure bias can still exist in this stage. However, the impact of exposure bias is limited to the final translation of the semantics into text. We demonstrate that the error in this translation is minimal (reconstruction BLEU > 80\%) due to the simplicity of the autoencoding task and the strong influence of the input latent code on the decoder, resulting in fewer error-compounding effects. Hence, we experience much less exposure bias compared to the autoregressive approach. *3. Length Ablation* We conducted tests on a total of five datasets using our model. The target generation length varied from 15.2 to 181.29 across datasets. This range was selected to encompass diverse generation lengths. Comparing across tasks, our method showed larger improvements on diversity/repetition metrics for lengthier generation tasks (review generation). In the hotel reviews dataset, the length of the target sentences can vary from 23 to 512. In Appendix Table 6-7, we provide examples of sentences reconstructed from the latent codes. Generally, shorter sentences are easier to represent and reconstruct, evidenced by the fact that tasks with shorter target sentences (XSum, CNN-DM) achieved higher $BLEU_{clean}$ scores. We will make these clearer. *4. 
No optimal solution* Our method, inheriting from the latent diffusion model, is a sampling technique. It is worth noting that even for tasks such as translation and summarization, the default generation methods for autoregressive models like popular LLMs rely on sampling. Previous work [1,2] has demonstrated that the notion of an "optimal sentence" may be a misleading "red herring", as optimizing the likelihood can result in low-quality outputs such as repetition and generation artifacts. This is particularly evident in open-ended generation scenarios, where the distribution of text is inherently multimodal. *5. Representation Distance* Yes, as the reverse diffusion process progresses, the representation gets closer to the ground-truth representation. This is evidenced by Figure 6 in the Appendix, which shows the BLEU score between the ground-truth text and the text translated from the latent code at time $t$. The graph reveals a consistent pattern of progressive improvement in the BLEU score as $t$ decreases from 1 to 0. *6. Impact of $K$* Please refer to general questions. [[1]](https://arxiv.org/abs/1904.09751) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. *The curious case of neural text degeneration.* In ICLR, 2019. [[2]](https://arxiv.org/abs/2206.02369) Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. *Learning to break the loop: Analyzing and mitigating repetitions for neural text generation.* In NeurIPS, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I will raise my rating.
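The claim in point 5 above, that the latent representation approaches the ground truth as $t$ decreases, can be illustrated with a toy diffusion schedule (a minimal sketch with an invented schedule and random vectors; this is not the paper's model):

```python
import numpy as np

def cosine_dist(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
x0 = rng.standard_normal(256)   # stand-in for the ground-truth latent code
eps = rng.standard_normal(256)  # a fixed noise draw

# Toy forward process: x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps,
# with abar -> 1 as t -> 0, so x_t -> x0 and the cosine distance shrinks.
dists = []
for t in [1.0, 0.75, 0.5, 0.25, 0.0]:
    abar = 1.0 - t
    x_t = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    dists.append(cosine_dist(x_t, x0))
```

The same monotone trend is what the paper's Figure 6 shows in BLEU terms for the decoded text.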
Summary: This paper proposes to combine latent semantic diffusion with autoregressive generation to alleviate the issues of exposure bias in training / inference of text-based language models, and the computational and performance cost of purely diffusion-based approaches. They improve the diffusion process by applying it to the latent semantic space instead of the token / embedding space. For this, they learn some "semantic tokens" for encoding paragraph-level information and then use a decoder to map these to the raw text space. Strengths: Working with the diffusion model in the semantic space opens up the door for controllable generation. They also provide an extensive study of the requirements for a good latent space for paragraph diffusion models. The paper is well written and easy to follow. The ideas employed to fix / ensure local smoothness (by perturbing data) and distributional smoothness (by using a VAE) are simple and useful. They propose a novel (?) metric called AuBLEU to evaluate the denoising capabilities of the model. I believe this is generally suitable for other works and could be impactful. However, it does not feel properly justified / grounded. Hparams are provided. Human eval results are significant. The other results are sound and robust evaluation is performed. The paper is just lacking ablations on the design choices. The analysis is complete and justifies the main claims made by the authors. Weaknesses: The changes employed by the authors, especially during the training stage, are not properly ablated. It is not clear if the proposed fixes provide benefits. Some of the design choices are not experimentally justified (e.g., line 182). Evaluation is performed on just a sample of the test set (this is the first time I have seen something like this in a paper and I'm not sure how to take it - I'm not super comfortable as this makes your technique essentially un-comparable). This also might not be robust. (line 196). 
No intention of providing code and it might be very hard to reproduce because of the many changes especially in the training setup. Nitpicks: typo line 181 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Can you talk a bit more about the design choices and how much impact they had on the results? For eg: how much impact does fixing "distributional smoothness" have? Have you performed other ablation experiments to justify claims? Why did you subsample the test sets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Addressed in appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
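For readers unfamiliar with the "distributional smoothness" fix the review refers to, the standard VAE mechanism behind it can be sketched as follows (a generic reparameterisation sketch with made-up weight names, not the paper's implementation):

```python
import numpy as np

def vae_latent(h, w_mu, w_logvar, rng):
    """Map encoder features h to a sampled latent plus a KL penalty.

    The KL term pulls q(z|x) toward N(0, I), which is what smooths the
    latent distribution for a downstream diffusion model.
    """
    mu = h @ w_mu
    logvar = h @ w_logvar
    # reparameterisation trick: z = mu + sigma * noise
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return z, kl
```

Training with this KL penalty (rather than a plain autoencoder) is the "variational objective" whose ablation the rebuttal reports.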
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback. We will fix the typo pointed out. We address the questions below: *1. Evaluate on subset datasets* Please refer to general questions. *2. Ablation regarding rescaling* We omit the rescaling step in this study due to the absence of the "rescaling invariant" property in the latent text code. Specifically, we have not imposed any constraints to ensure that the generated output remains the same after rescaling the latent code. In contrast, for Imagen, where the generation takes place in the raw pixel space, rescaling predominantly retains the shape information while altering only the contrast and brightness. Initially, we conducted experiments involving rescaling, but the results demonstrated poorer performance compared to the non-rescaled version, as evidenced by a **-10.6** ROUGE score drop on CNNDM. Consequently, we opted for dynamic thresholding without rescaling. We will make this clearer. *3. Code Release* We have a Python+PyTorch implementation that reproduces the experimental results, and we are finalizing legal approvals to open-source the codebase. We will release the code upon publication. *4. Ablation on design choice* We indeed performed an ablation for "distributional smoothness" on the sentiment-guided generation task. The model trained without the variational objective underperforms the full model in all metrics by **-10.1** AuBLEU, **+18.8** PPL, and **-15.2%** ACC. We will incorporate the results in the next revision. The evaluation pipeline might get expensive if the diffusion model training and evaluation are also involved. Instead, we mostly use a surrogate metric (line 255) to monitor the overall quality of the learned representation. Detailed evidence presented in the appendix demonstrates a reasonably strong correlation between this empirical metric of representation quality and the subsequent performance of diffusion generation. 
The models trained with the variational objective consistently improve the performance across the board. --- Rebuttal Comment 1.1: Comment: I acknowledge the rebuttal and appreciate the full evaluation. Can you add statistical tests to your evaluation results as well? I thank the authors for the clarification and would like to stick with my initial score. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response and your additional suggestions! We will perform the statistical analysis on the evaluation results in our next revision.
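For reference, the Imagen-style dynamic thresholding discussed in point 2 of the rebuttal above (clip predictions to a data-dependent percentile, optionally followed by the rescaling step the authors drop) can be sketched as (illustrative only; the function name and default percentile are assumptions, not the paper's code):

```python
import numpy as np

def dynamic_threshold(x, percentile=99.5, rescale=False):
    """Clip x0-predictions to a data-dependent range (Imagen-style).

    With rescale=True the clipped values are also divided by the
    threshold s; the rebuttal above reports that this rescaling hurts
    latent text codes, so only the clipping step is kept there.
    """
    s = np.percentile(np.abs(x), percentile)
    s = max(s, 1.0)           # never shrink the nominal [-1, 1] range
    x = np.clip(x, -s, s)
    return x / s if rescale else x
```

The intuition in the rebuttal is that pixel values are rescaling-invariant up to contrast/brightness, whereas latent text codes are not, so only the clipping half of the operation transfers.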
Rebuttal 1: Rebuttal: **Common Questions**: *1. Evaluate on subset datasets* In order to expedite the iterations of the experiment, we opted for a partial evaluation of our method, as the full evaluation of our method / Genie takes 7h / 2d to complete on the CNN-DM or XSum test set. Nevertheless, we agree with the reviewers' concern that this approach could potentially compromise the comparability of our results. Consequently, we have conducted a full evaluation on the entire test sets of CNNDM and XSum, which is presented in the table below. The main conclusion remains the same. We will include all the updates in our forthcoming version.

| **Arch.** | **PPL** | **DIST/ENT**↑ | **S-BL**↓ | **Rep-4**↓ | **BL**↑ | **R-L**↑ | **Score**↑ | **Len** | **AuBL**↑ |
|:---------:|:-------:|:-------------:|:---------:|:----------:|:-------:|:-------:|:---------:|:------:|:---------:|
| **CNN Dailymail dataset** | | | | | | | | | |
| **T5-search** | 58.12 | 0.11/7.726 | 0.24 | 6.69% | 7.66 | 34.48 | 0.66 | 45.51 | - |
| **T5-sample** | 67.58 | 0.11/7.790 | 0.20 | 3.50% | 5.05 | 30.15 | 0.64 | 48.51 | - |
| **Genie** | 179.9 | 0.09/7.293 | 0.24 | **4.16%** | 3.22 | 30.47 | 0.58 | 40.94 | 27.21 |
| **Genie$^{(10)}$** | 170.6 | 0.10/7.355 | 0.24 | 4.32% | 6.48 | **37.09** | 0.62 | 40.81 | - |
| **PLANNER** | 49.21 | **0.10/8.037** | 0.15 | 5.25% | 6.92 | 30.43 | 0.62 | 52.33 | **43.91** |
| **PLANNER$^{(10)}$** | 49.07 | 0.10/8.019 | **0.15** | 4.96% | **11.42** | 36.81 | **0.66** | 53.14 | - |
| **Human** | 49.477 | 0.12/8.226 | 0.16 | 5.63% | - | - | - | 51.15 | - |
| **XSum dataset** | | | | | | | | | |
| **T5-search** | 29.41 | 0.12/7.200 | 0.31 | 14.83% | 6.11 | 36.08 | 0.74 | 18.97 | - |
| **T5-sample** | 36.17 | 0.13/7.449 | 0.24 | 6.47% | 3.62 | 31.18 | 0.71 | 20.78 | - |
| **Genie** | 186.7 | 0.09/6.935 | 0.28 | 8.56% | 2.38 | 34.85 | 0.66 | 20.44 | 30.85 |
| **Genie$^{(10)}$** | 178.2 | 0.09/6.924 | 0.30 | 9.66% | 5.06 | **41.59** | 0.68 | 19.97 | - |
| **PLANNER** | 67.94 | **0.11/7.553** | **0.21** | **5.38%** | 4.84 | 33.97 | 0.69 | 20.04 | **57.88** |
| **PLANNER$^{(10)}$** | 67.46 | 0.11/7.529 | 0.23 | 5.82% | **11.61** | 41.23 | **0.72** | 19.89 | - |
| **Human** | 37.8 | 0.13/7.656 | 0.21 | 5.56% | - | - | - | 21.19 | - |

*2. Motivation and Impact of $k$* The parameter $k$ determines the number of latent codes used to represent a paragraph and therefore controls the compression level. Latent codes with smaller values of $k$ are easier to model using the diffusion model, but may struggle to accurately preserve all the information in the original text. Additionally, smaller values of $k$ offer computational efficiency, as the sequence length for the diffusion model is $k$. To determine the best set of latent codes, we conducted experiments using three different methods: 1) selecting the first $k$ hidden vectors, 2) selecting the last $k$ hidden vectors, and 3) selecting interleaved hidden vectors, one for every $L/k$ hidden vectors. The results of the ablation study are presented below. Based on our findings, we observed no significant difference among the different choices, so we opted for option 1. Furthermore, we discovered that increasing the value of $k$ does not lead to a dramatic improvement in performance (as stated on line 218, see the ablation study below). To trade off between efficiency and performance, most of our study focuses on $k=16$.

| Experiment (on hotel review) | BLEU_clean | BLEU_robust |
|------------------------------|------------|-------------|
| First k (k=16) | 79.59 | 43.17 |
| Last k (k=16) | 78.96 | 42.85 |
| Interleaving k (k=16) | 79.81 | 43.35 |
| k=8 | 57.90 | 30.68 |
| k=32 | 82.31 | 45.14 |
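The three selection strategies in the ablation above (first $k$, last $k$, interleaved) amount to simple indexing over the $L$ encoder hidden states; a minimal sketch (hypothetical function, not the authors' code):

```python
import numpy as np

def select_latents(hidden, k, strategy="first"):
    """Pick k of the L encoder hidden vectors as the paragraph code.

    hidden: (L, d) array; strategy names mirror the ablation above.
    """
    L = hidden.shape[0]
    if strategy == "first":
        return hidden[:k]
    if strategy == "last":
        return hidden[L - k:]
    if strategy == "interleave":
        return hidden[np.arange(k) * (L // k)]  # one of every L//k positions
    raise ValueError(f"unknown strategy: {strategy}")
```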
NeurIPS_2023_submissions_huggingface
2023
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra
Accept (poster)
Summary: Similarly to automatic differentiation, which enabled a shift in focus from computing derivatives to deriving new algorithms, the authors propose to automate large-scale linear algebra with "structure-aware" linear algebra. Various algorithms leverage problem structure to expedite the evaluation of operations on linear operators. This toolbox aims to bridge the gap, allowing individuals without extensive knowledge of algorithms and tricks to still benefit from them. The successful accomplishment of this paper's objective could have a fair impact on the machine learning community by automating a significant portion of model optimization. Strengths: This article is well written. The problem is well posed, and the proposed solution is convincing. Importantly, the library's well-thought-out and modular structure is crucial for its growth and positive impact on the community. Weaknesses: The number of dispatch rules is relatively small. The authors mention 70 dispatch rules on `l. 224`, but in Appendix A only a dozen such rules are shown (perhaps we reach 70 dispatch rules by counting all the possible combinations?). If I had to write a new model, I wonder whether (1) I would use this framework, with the (arguably small) associated overhead of adopting a new library, or (2) simply scroll through Appendix A to see which rules are relevant for my implementation. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **Porting to Julia.** It is a nice use case of multiple dispatch, and I like the functional paradigm used. Are there any plans to port it to Julia? There already exist "structure-aware" types in the `LinearAlgebra` module (`Symmetric`, `Hermitian`), and the multiple-dispatch / functional paradigm is built in, so I believe it would certainly find its audience there. **Using SVRG.** `l. 245` and `248`, the complexity of SVRG seems larger than that of Lanczos (at least in $\kappa$; what is $M$?). 
A reference for the convergence rate of CG and Lanczos would be welcomed. Why is an accelerated version of SVRG not considered, to speed up convergence and further improve the runtime performance? The Cyanure toolbox holds some [benchmarks](http://thoth.inrialpes.fr/people/mairal/cyanure/benchmarks.html) along with some [references](http://thoth.inrialpes.fr/people/mairal/cyanure/references.html). Is it because of the memory limitations? **Arnoldi Iterations.** In Fig. 3, it is shown that `Arpack` performs better than the Python implementation of the Arnoldi iterations. Given the modularity of your approach, wouldn't it be worth adding a dispatch rule which specifically uses `Arpack`? **Typos.** * `l. 82` : matrix vector **multiples** ? * Fig. 1 could benefit from having one legend for all 3 subplots. Contrary to what is asserted in the paper's checklist, there are no error bars. Given the moderate runtime ($10^2$ s), using more than 3 repetitions (e.g. 10) with error bars is necessary. * `245`: missing parenthesis Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The article has no Limitations section, but the method is compared extensively to other libraries in sec. 3.4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
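The structure-aware dispatch the review discusses can be pictured with a toy single-dispatch stand-in (CoLA itself uses a richer multiple-dispatch mechanism; the class and function names below are illustrative, not CoLA's API):

```python
import numpy as np

class Dense:
    """Generic linear operator backed by a dense matrix."""
    def __init__(self, A):
        self.A = np.asarray(A)

class Diagonal(Dense):
    """Structured operator: only the diagonal is stored."""
    def __init__(self, diag):
        self.diag = np.asarray(diag)
        super().__init__(np.diag(self.diag))

def solve(op, b):
    """Dispatch on structure: O(n) solve for Diagonal, dense O(n^3) otherwise."""
    if isinstance(op, Diagonal):
        return b / op.diag
    return np.linalg.solve(op.A, b)
```

The value of such a mechanism grows with compositional nesting (e.g. Kronecker-of-diagonal), where the cheapest path is less obvious than in this two-case toy.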
Rebuttal 1: Rebuttal: We thank you for your thoughtful and supportive review! Below we hope to bring clarity to your questions. > If I had to write a new model, [could I] simply scroll through appendix A to see what are the rules relevant for my implementation? Yes, implementing the necessary rules and algorithms for a specific problem is always possible. However, while one could simply use Appendix A as a lookup table to select appropriate algorithms, there are many scenarios where the best rules may not be immediately obvious, especially as the compositional structure becomes increasingly nested and/or complex. Additionally, we note that, beyond the dispatch rules, our library includes memory-efficient gradients of iterative operations that are difficult to implement correctly, particularly while retaining the matrix-free LinearOperator abstraction. We expect that most researchers writing their own implementations of common algorithms such as GMRES or stochastic Lanczos quadrature will not find it high on their priority list to implement memory-efficient backpropagation rules, and will thus leave the compute and memory savings unexploited. We also would like to think that having a large existing linear algebra ecosystem (that new rules can be slotted into if necessary) will help free up researcher time for other pursuits. **SVRG:** We have listed the complexity of CG and Lanczos, $O(\sqrt{\kappa}\log(1/\epsilon))$, in terms of the number of matrix-vector multiplies, equivalent to full passes through the sum when applied to a sum linear operator $A=\sum_{i=1}^M A_i$, where $M$ here is the number of elements in the sum [7]. The $O((M+\kappa)\log(1/\epsilon))$ iteration complexity of SVRG (as it is usually expressed) becomes $O((1+\kappa/M)\log(1/\epsilon))$ when measured by these full passes. For large values of $M$, SVRG can still be faster than CG/Lanczos even without the acceleration. 
We found that well-chosen momentum values for accelerated SVRG can result in speedups; however, compared to the learning rate, we found it difficult to automate the selection of this hyperparameter from the data in a robust and inexpensive way. **Porting to Julia:** This would definitely be a worthwhile endeavor and we hope it can be accomplished in the future, but it remains outside the scope of our current project. We chose to write the library in Python to interface with the popular ML frameworks PyTorch and Jax. Nevertheless, the design of our library was certainly influenced by Julia and its programming paradigm, and we hope that these ideas can make it back into Julia's LinearAlgebra. **Arnoldi Iterations:** We appreciate the suggestion of wrapping ARPACK. This is something that we have been debating ourselves, as there is a clear runtime benefit, but at the cost of code that is less modular, less clean, and harder to read for the user. However, for the particular Arnoldi case that you mention, the best way to proceed is simply to wrap ARPACK, and the result will still be autograd-enabled because of how we define our custom autograd rules. **Typos:** We will correct the typos you have identified. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. While not fully convinced of the added value of the dispatch rules (in place of implementing the efficient update rules directly myself), I am very receptive to the argument of providing efficient gradients for some specific algorithms, which can be a very tedious process. I am convinced that an expanded linear algebra ecosystem emerging from this library could be highly beneficial for researchers. I hope this will make it to Julia LinearAlgebra and look forward to trying the library. --- Reply to Comment 1.1.1: Title: Thank you Comment: We really appreciate your support and thoughtful comments, and look forward to having you as a user of CoLA! 
Yes, indeed, we believe efficiently backpropagating through iterative methods has the potential to jumpstart many research efforts. We will take your comments into account when updating the manuscript. If you are open to increasing your score in light of our rebuttal, it would be much appreciated, but of course no pressure.
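The "full pass" accounting in the SVRG discussion above — where one matrix-vector multiply with $A=\sum_{i=1}^M A_i$ equals one pass over all $M$ terms — can be made concrete with plain conjugate gradients (generic textbook CG on an invented sum operator, not CoLA's implementation):

```python
import numpy as np

def cg(matvec, b, tol=1e-8, max_iter=1000):
    """Textbook CG; each iteration costs exactly one matvec, i.e. one
    full pass over the terms when A is a sum linear operator."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - A @ 0
    p = r.copy()
    rs = r @ r
    for n_passes in range(1, max_iter + 1):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, n_passes
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
M, d = 8, 32
parts = [rng.standard_normal((d, d)) for _ in range(M)]
A = sum(P @ P.T for P in parts) + np.eye(d)  # SPD sum of M terms
b = rng.standard_normal(d)
x, passes = cg(lambda v: A @ v, b)
```

Under this accounting, SVRG touches only one subsampled term per step, which is why it can win on full passes when $M$ is large.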
Summary: This work proposes a new library for solving linear algebra problems involving structured matrices. The library incorporates highly efficient linear algebra kernels tailored to handle specific types of structured matrices. Moreover, it uses compositional rules to tackle problems involving matrices with composition structures. By employing these rules, the library eliminates the need for manual implementation of many efficient algorithms applicable to such matrices. Strengths: 1. Designing efficient numerical linear algebra libraries is important to the machine learning community. 2. This work is well-written and the presentation is clear. Weaknesses: Overall I believe the novelty of the work is very limited. The numerical linear algebra algorithms discussed in the paper do not appear to be novel, and the proposed library essentially consolidates these existing algorithms. While this library may be the first to utilize compositional rules for automated algorithmic development, the rules themselves are straightforward and uncomplicated, lacking significant novelty. In summary, this work represents a combination of various engineering efforts, but its scientific contributions are mostly incremental. Technical Quality: 3 good Clarity: 3 good Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We appreciate the positive remarks regarding the clarity of our work. Below, we clarify the novel methodological contributions of our submission. At the same time, we also note that many impactful papers at NeurIPS have been centered around frameworks where algorithmic innovation is not the focus (see references in the general remarks). **Novelty in Composition Rules**: We respectfully but strongly argue that the use of compositional dispatch rules to recursively subdivide linear algebraic operations is a substantial innovation. The rules may seem “uncomplicated,” but in this context, simplicity adds to the practical utility of the approach, and we ask that our research is judged by its impact rather than its complication. (We also note that the NeurIPS reviewing guidelines explicitly endorse new combinations of known techniques.) The potential impact of this approach can be seen in Figures 1 and 2, which thoroughly demonstrate scenarios where this compositional approach is not only advantageous but also atypical. **Additional Contributions Overlooked**: We’d like to highlight several novel contributions of our paper that may have been overlooked. We have introduced a novel algorithm for stochastic diagonal and trace estimation of sum objects that we prove (in appendix B.1) and empirically verify that it achieves a speedup over the standard Hutchinson estimators (in the attached PDF, Figure A - Left). Moreover, we provide memory efficient autograd rules (section 3.4) to be used in conjunction with the iterative algorithms. **Significance of Software Libraries in Scientific Contribution**: We'd like to emphasize that many impactful papers presented at NeurIPS and other leading conferences have been centered on the development and presentation of software libraries. 
This community has recognized that the practical implementations, when done effectively, propel research forward, as evidenced by works such as [1, 2, 3, 4, 5] (see general remarks). These libraries, even without introducing radically new algorithms, provide tangible benefits to the research community. In conclusion, while individual rules may appear obvious, the framework for integrating these structures in a general way, and the cumulative impact of the many features we introduce help move the research community forward. Given these points and the precedence of NeurIPS accepting software-centric papers as crucial contributions, we kindly request you to reconsider your assessment of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the comprehensive response from the authors, and I've decided to adjust my score accordingly. Regarding the supplementary contributions, the authors have mentioned the "Doubly stochastic diagonal and trace estimation" algorithm as a novel aspect. However, this term is absent in the paper's main body, which might leave readers confused about its relevance and the contributions of the paper. I agree that a good software library for scientific computations is important, even if the foundational idea seems straightforward. However, without access to the code, determining its significance based solely on the paper is challenging. --- Reply to Comment 1.1.1: Comment: Thank you for your response and reevaluating our work. We briefly mentioned our doubly stochastic estimator in the main text (lines 250-260) but indeed we would expand this discussion to highlight this contribution.
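For context on the trace-estimation contribution discussed above, the baseline it improves on — the standard Hutchinson estimator — looks like this (generic sketch; the paper's doubly stochastic variant, which additionally subsamples terms of a sum operator, is not reproduced here):

```python
import numpy as np

def hutchinson_trace(matvec, d, n_probes=64, rng=None):
    """Estimate tr(A) via E[z^T A z] with Rademacher probes z."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=d)
        total += z @ matvec(z)
    return total / n_probes
```

Each probe needs only a matrix-vector product, so the estimator works on matrix-free operators where forming A explicitly is infeasible.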
Summary: In this paper, the authors introduce CoLA, a library which streamlines the use of various linear algebra routines within frameworks of relevance for machine learning applications. The library revolves around the LinearOperator object, and provides numerous implementations of various numerical algorithms involving operations with such object, from inversion (system solution), to eigenvalue computation, to operators manipulation. Most notable features of the library are: - Structure-awareness and automatic dispatch: the CoLA framework is able to leverage information regarding the relevant structure of the LinearOperator considered (such as positive-definiteness, symmetry, sparsity, composition of operators), so to automatically identify and utilise the most apt algorithm specialisation for the target operation - Integration with existing frameworks: CoLA can interface with both JAX and PyTorch codebase, supports both GPU and TPU acceleration, and has some support for low-precision arithmetic and complex numbers - Autograd for iterative routines: CoLA implements some relevant iterative routines for linear algebra, and defines some ad-hoc rules for efficiently performing automatic differentiation through them The main reported results pertain the performance comparison of the CoLA-implemented routines versus alternative implementations. Overall, CoLA performance is shown to be comparable to that of the baselines; moreover, when there is gain to be had from leveraging the structure of the underlying linear operator, CoLA is shown to be able to do this effectively. Strengths: - The authors propose a very useful framework. Most noticeably, it automates away the need for manually tuning the method choice depending on the properties of the operator, when numerical linear algebra algorithms are involved. Moreover, it effectively leverages some clever design choices (such as multiple dispatch). 
Overall, it can become a valid tool for numerous applications in the field of ML (and other fields as well) - The authors propose an interesting solution to efficiently performing autograd on iterative procedures, which are ubiquitous in linear algebra applications - The paper is reasonably well-written. Even though the breadth of applications considered in their work is indeed rather large, the authors still manage to present the key advantages of their framework in a clean manner, without being dispersive Weaknesses: - The main weakness I see, is the lack of actual “novelty” in the work being proposed - at least in the classical sense of the word. Apart from the autograd rule in Appendix B2, in fact, the various methods proposed and implementation choices are not new. This notwithstanding, the main goal of the project consists in collecting available linear algebra routines into a unified, ready-to-use, efficient library which can be easily encapsulated within existing ML frameworks, and as such it is still valuable to the research community Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall I’m quite satisfied with the paper. One minor doubt / curiosity I still have is: - When comparing CoLA with other existing baselines in Fig3, it seems like your implementation underperforms in both (a) PCA and (d) Spectral Clustering. You elaborate a bit for (d) in Appendix D4, but could you expand on this? In particular, is the difference in performance simply due to the lack of an optimised implementation on your side, or are there some structural causes? And what are the main implementation differences? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations of CoLA are not explicitly commented upon, but at the same time the framework proposed seems very flexible and efficient (as showcased in the experiments). The main limitations are hence connected to what methods are readily implemented in the framework, rather than being structural. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful and supportive review. We were glad to see your appreciation of the broader ways in which CoLA can help contribute to the scientific community. **Performance gap on PCA and Spectral clustering**. On PCA, we believe the runtime gap is primarily the result of CoLA's Python overhead. On the spectral clustering example we use Lanczos to compute the smallest eigenvectors, whereas scikit-learn typically uses LOBPCG. When comparing against scikit-learn using Lanczos (sk(L) vs CoLA(L)), we perform slightly better due to minor differences in the Lanczos implementation (scikit-learn uses implicit restarts and we do not). We have now incorporated LOBPCG into CoLA and we compare the results (sk(B) vs CoLA(B)) in Figure B (Left), with the CoLA implementation coming out slightly ahead again. Please let us know if we can assist with any other questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my doubt, which I consider resolved. I confirm the score given, and once again underline that, even though I understand the other reviewers' concern about the possible lack of novelty in this work, it is my opinion that this paper deserves to be acknowledged nonetheless. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thanks for engaging with our response. We really appreciate your support!
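The Lanczos procedure discussed in the rebuttal above can be sketched in its plain form, i.e. without the implicit restarts that scikit-learn adds (illustrative textbook version, not CoLA's code):

```python
import numpy as np

def lanczos_tridiag(matvec, d, m, rng=None):
    """Run m plain Lanczos steps; the eigenvalues of the returned
    tridiagonal T (Ritz values) approximate extremal eigenvalues of A."""
    rng = rng or np.random.default_rng(0)
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)
    Q, alpha, beta = [q], [], []
    for j in range(m):
        w = matvec(Q[-1])
        if j > 0:
            w -= beta[-1] * Q[-2]   # three-term recurrence
        a = Q[-1] @ w
        w -= a * Q[-1]
        alpha.append(a)
        b = np.linalg.norm(w)
        if b < 1e-12:               # invariant subspace found
            break
        beta.append(b)
        Q.append(w / b)
    n = len(alpha)
    return (np.diag(alpha)
            + np.diag(beta[:n - 1], 1)
            + np.diag(beta[:n - 1], -1))
```

With m equal to the full dimension (as in the test below) the Ritz values recover the spectrum exactly up to round-off; in practice m is kept much smaller and only the extremal eigenvalues converge.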
Summary: This work presents CoLA, a framework for extending the linear algebra interface of modern numerical packages to take advantage of the structural properties of linear operators present in machine learning and other applications. By adding adaptive multi-type based dispatching, CoLA adaptively exploits both dense and iterative methods to decrease the computational costs of performing a given operation. CoLA is also capable of exploiting the compositional structure that is present by combining different linear operator properties, creating a large collection of possible applications. Additionally, CoLA provides all these features in an extensible framework capable of backpropagation to ensure integration in modern machine learning and deep learning applications. Evaluations support the authors' assertion that providing structural information during processing facilitates better performance on many benchmark applications. Strengths: - The work clearly addresses an ongoing issue in many numerical linear algebra applications that require specialized operator structures to be individually implemented and exploited on a per-application basis. This process is not only tedious and time-consuming but also ripe with opportunities for error during the computation of the backward pass updates required for integration in a modern machine learning application. - Overall the writing and exposition of the problem, proposed solution and evaluations are clear and well articulated in the text. - The proposal dovetails naturally with the well-known LinearOperator interfaces that exist in SciPy and provides an easier route to define further extensions for developers to provide application-specific knowledge for further customization. - An interface for composing different linear operator properties for CoLA to exploit seems to be a novel and interesting extension over other existing implementations. 
- Performance results sufficiently demonstrate that having access to more structural information naturally supports more opportunities for application performance improvement while lowering the cost for the developer to exploit that structure through an intuitive and simple interface. Weaknesses: - Though interesting, the work presented may be a better fit in a venue that focuses on numerical methods and software. The intended audience is quite broad. - The core contributions and methods are relatively straightforward and are well-known to the numerical linear algebra community. This work seeks to make the process of exploiting the structural properties of linear operators easier for developers to use and integrate into a modern machine-learning application, but it's not clear whether this contribution would have an appreciable impact on the developer community. Though this is my personal opinion, it seems that problems encountered by developers are simple enough to be solved manually in most cases. - The evaluation section focuses on applications that exhibit basic structure, but it's hard to see the value of the additional features mentioned: backprop and lower precision. - There seems to be a strong reliance on the appendix to fill in missing explanations due to size constraints. - Although the interface provides more flexibility regarding basic and/or compositional structure, this still leaves a host of additional options to select the appropriate iterative approach to solve a sparse matrix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It probably doesn't help that one of the applications I know the most about, spectral clustering, is the application where CoLA seems to provide mixed results. Would closing the performance gap between CoLA and the sk (PyAMG) backend be a simple process of extending the multi-dispatch interface? 
- I found the added ability to efficiently backprop through iterative solve methods interesting, were evaluation results for this feature presented in any of the experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations of the work are clear from the presentation and no issues require further acknowledgment, to the best of my knowledge. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. In our response, we provide substantial clarifications, as well as experimental results catalyzed by your comments. We appreciate your feedback, and hope you can consider increasing your score in your final evaluation. We strongly believe this effort will have a significant impact on the machine learning community, comparable to highly impactful libraries such as GPyTorch [6] and BoTorch [2], both of which appeared at NeurIPS in previous years. We would be happy to engage if there are further questions. **Fit to NeurIPS**. Our library, though by no means limited to machine learning problems, was specifically designed with machine learning applications in mind. Consider the following design decisions: 1. _Machine learning specific features_. Unlike other frameworks, CoLA offers backpropagation (a nontrivial contribution of our paper), GPU acceleration, and low-precision operations—three necessary features for modern machine learning applications. While these features are broadly applicable, their impact has unquestionably been dominated by ML in recent years. 2. _Algorithms suited for the “implicit structure” of machine learning problems_. As outlined in Section 3.3, many of the algorithms used by CoLA (e.g. randomized diagonal estimation, randomized preconditioning, SVRG, etc.) are especially well-motivated for ML objective functions, which often feature large summations over data that are amenable to randomized algorithms. 3. _Ability to rapidly prototype with different structures_. The flexibility that results from the use of dispatch rules is of particular value to ML researchers who are more inclined to prototype different structures (diagonal, convolution, low-rank) without a strong a priori sense of what might provide a good approximation. 4. _Evaluation on machine learning applications_. 
The applications in the paper are dominated by machine learning relevant topics (GPs, equivariant neural nets, neural PDEs, spectral clustering, PCA…). We thus argue that the machine learning community has the most to gain by using CoLA. We would also note that, as stated above, NeurIPS has been a venue for similar software frameworks like GPyTorch [6] and BoTorch [2]. CoLA is similar in nature to these other frameworks, but has arguably an even greater potential for impact in ML as its applicability spans more applications. **Impact on the developer community**. We respectfully disagree that “problems encountered by developers are simple enough to be solved manually in most cases.” In machine learning applications (e.g. second order optimization), it is common to prototype with different types of matrix structure (e.g. block diagonal versus low-rank versus Kronecker approximations of the Hessian matrix). While the rules that govern these different structures are not necessarily complicated, switching between these different structures is a tedious and error-prone process. Indeed, this is a pain point that we encountered in many of our own projects, which inspired our development of this library. CoLA will automate away this process, enabling more rapid prototyping. The usefulness of such automation should not be underestimated. Automatic differentiation frameworks have significantly impacted ML research and development, even though computing gradients is conceptually straightforward. We argue that CoLA will have a similar effect by targeting a different bottleneck in the ML pipeline. **Value of backprop and lower precision.** We note that these two features are critically important to our library. Optimization through backpropagation is now the dominant paradigm in machine learning, and low-precision arithmetic is becoming increasingly prominent as a way to improve speed and reduce memory consumption. 
We also emphasize that our implementations of these features are nontrivial contributions. A naive implementation of backpropagation (i.e. directly backpropagating through numerical methods) would incur significant memory costs (see Figure 4 in the supplementary), and the standard implementations of numerical methods are well known to be unstable in low-precision arithmetic. For a specific experiment that demonstrates the value of backpropagation, we would draw your attention to the Gaussian process application in Figure 3. The parameters of the kernel are chosen by backpropagating through the negative log marginal likelihood function, which requires computing a solve and log determinant of the kernel matrix, as well as the gradients of these operations. To demonstrate the value of low precision, we added a linear regression experiment in float16 to show the runtime efficiencies that can be gained. Please see Figure B (Right) of the attached rebuttal pdf. **Selecting the appropriate iterative approach**. We believe that there may be some confusion about how the appropriate numerical method is selected. Most users will likely use the *high-level CoLA interface* (i.e. calling `cola.solve`, `cola.eigs`, `cola.trace`, etc.), in which CoLA automatically determines an appropriate default numerical method based on the underlying structure and size of the linear operator. “Power users”, who may want to specify the underlying algorithm, can use the *low-level CoLA interface* (i.e. directly calling `cola.cg`, `cola.gmres`, etc.) or pass a keyword argument to the high-level interface, e.g. `solve(A, b, method="cg")`. We will clarify this point in the paper. **Performance of CoLA on Spectral clustering**. As you note, there is a performance gap between CoLA and scikit-learn on the spectral clustering application. This gap is the result of CoLA not using the same algorithm as scikit-learn. We have now incorporated LOBPCG into CoLA and you can find the results in the attached PDF, Figure B (Left). 
Now that we have included LOBPCG (sk(B) vs CoLA(B)), CoLA again achieves better runtimes. We appreciate your thoughtful questions and we are happy to engage further! --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough responses. Based on their feedback I have increased my rating for the paper accordingly. The methods proposed will benefit the wider machine learning community. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thanks for engaging with our response and increasing your score. We really appreciate your support!
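The automatic method selection described in this rebuttal — a high-level `solve` that routes to a structure-appropriate algorithm — can be illustrated with a minimal, hypothetical Python sketch. The class names and routing rules below are invented for illustration and are not CoLA's actual implementation:

```python
# Minimal sketch of structure-based dispatch for linear solves.
# The classes and rules here are hypothetical, not CoLA's actual API.

class Dense:
    def __init__(self, rows):
        self.rows = rows  # matrix stored as a list of row lists

class Diagonal:
    def __init__(self, diag):
        self.diag = diag  # only the diagonal entries are stored

def _dense_solve(A, b):
    # Generic fallback: Gaussian elimination with partial pivoting, O(n^3).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A.rows, b)]  # augmented copy
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def solve(A, b):
    # High-level entry point: pick an algorithm from the operator's structure.
    if isinstance(A, Diagonal):
        return [bi / d for bi, d in zip(b, A.diag)]  # O(n) elementwise solve
    return _dense_solve(A, b)                        # generic fallback
```

A real framework would add rules for Kronecker, low-rank, and other structures (recursing on the factors), and would additionally expose the low-level algorithms (CG, GMRES, …) directly for power users, as the rebuttal describes.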
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful and strongly supportive feedback. In this general post, we highlight some of the new experiments that we conducted inspired by reviewer comments, and provide some general remarks about CoLA. We also have separate posts individually replying to each reviewer. We were happy to see that reviewers share our enthusiasm for CoLA. Linear algebra is a core foundation for machine learning algorithms, where common modeling assumptions give rise to structure that can be exploited for significant computational savings. CoLA will help make researchers more broadly aware of the structure they can exploit, and significantly reduce the bottleneck to implementing methods that exploit structure, as well as easily prototyping various different structures for their problems (as it is often not clear a priori what structure will be most beneficial for a given problem). **Impact and significance.** An analogy with PyTorch and reverse mode automatic differentiation is helpful for understanding the significance of CoLA and potential impact. From a narrow point of view, autograd is merely an application of the chain rule, and yet its impact on machine learning research has been almost immeasurable. Autograd obviates the need for deriving backpropagation rules for each model separately, and bespoke autograd rules (such as our own for iterative solvers in CoLA) can be slotted into an existing language of differentiable functions only when necessary, without requiring the whole structure to be constructed anew. Likewise with CoLA, we have developed an approach such that new rules can be slotted into an existing linear algebra ecosystem, and that ecosystem need not be rewritten for each use case. In this respect, the simplicity of CoLA is a strength, and will help enhance its usefulness to the community. 
Overall, the NeurIPS and broader ML community has found software libraries that simplify the research process to be significant and impactful contributions (for example, consider the NeurIPS papers [1, 2, 3, 4, 5]). **Methodological contributions.** We also note that, while they are not the primary focus, we propose novel numerical algorithms that are important to our framework, such as doubly stochastic trace and diagonal estimation. When applied to linear operators with sum structure, we prove in Appendix B.1 that this estimator has considerably lower variance than the standard Hutchinson estimator. More broadly, we have provided a novel procedure to automatically compute gradients through iterative methods, as well as an automatic procedure to compute diagonals and transposes / adjoints of linear operators (Sections 3.1 & 3.4) through their matrix-vector product routine. **Additional experiments.** Inspired by reviewer feedback, we have put significant effort into providing new results (see attached pdf for Figures A, B and C): - Figure A (left) shows how our doubly stochastic estimator reduces the runtime by orders of magnitude when applied to large sums, such as when estimating the variance (diagonal of the covariance) of the PCA application from Figure 2a. - Reviewer qUas suggested, for the Bi-Poisson problem in Figure 1(b), that we perform the comparison with a multi-grid solver, a method which is generally faster for solving elliptic PDEs. We have added this comparison in Figure A (right), using the CoLA decomposition rules to split the solve into two multi-grid solves; as in our earlier conjugate gradient comparison in Figure 1 (b), CoLA also accelerates the convergence of multi-grid. - Moreover, for the spectral clustering example in Figure 3 we have now incorporated the LOBPCG algorithm into CoLA (previously LOBPCG was giving scikit-learn an edge over us). 
As you can see in Figure B (left), CoLA’s LOBPCG results have improved significantly. - Furthermore, on Figure B (right), we have added an example on how low precision can improve runtime for linear regression. - Finally, we added on Figure C the runtime and memory consumption of backpropagating through a log determinant. We compare CoLA’s autograd rules against naively backpropagating through the iterative algorithm used to estimate the log determinant. The plot shows the substantial savings in runtime and memory from our approach. Taken together, Figure C and Figure 4 give the quantitative impact of our backprop rules for the two operations (solves and log determinants) needed for training Gaussian processes. As such, these backprop rules were used for the experiments in Figure 1 (a) and Figure 3 (c). We are thankful for the questions, and would appreciate it if our responses and clarifications can be considered in your final assessment. _References_ [1] Paszke et al., 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. NeurIPS. [2] Balandat et al., 2020. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. NeurIPS. [3] Daxberger et al., 2021. Laplace Redux – Effortless Bayesian Deep Learning. NeurIPS. [4] Frank et al., 2021. Cockpit: A Practical Debugging Tool for the Training of Deep Neural Networks. NeurIPS. [5] Pineda et al. 2022. Theseus: A Library for Differentiable Nonlinear Optimization. NeurIPS. [6] Gardner et al., 2018. GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration. NeurIPS. [7] Golub et al., 2018. Matrix Computations. 4th Edition. The Johns Hopkins University Press. Pdf: /pdf/1f653014be6015ac299c0dc8e13f9d976b4ba354.pdf
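The memory savings from the backprop rules discussed above (Figures 4 and C) come from differentiating a solve implicitly, via one adjoint solve, rather than unrolling and storing the iterates of the solver. A 2x2 pure-Python sketch of the standard adjoint rule for $x = A^{-1}b$ (illustrative only; not CoLA's code):

```python
# Adjoint rule for differentiating through a linear solve x = A^{-1} b:
# given gbar = dL/dx, a single extra solve with A^T yields all gradients,
#   lam   = A^{-T} gbar,   dL/db = lam,   dL/dA = -lam x^T,
# so no solver iterates need to be stored. 2x2 direct solve for illustration.

def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def transpose2(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def solve_with_grads(A, b, gbar):
    x = solve2(A, b)
    lam = solve2(transpose2(A), gbar)   # one adjoint solve
    dA = [[-lam[i] * x[j] for j in range(2)] for i in range(2)]
    return x, lam, dA                   # x, dL/db, dL/dA
```

For the loss L(x) = gbar·x with A = [[3, 1], [1, 2]], b = [1, 2], gbar = [1, 0], this gives dL/db = [0.4, -0.2], which matches a finite-difference check.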
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a library to automate the efficient execution of numerical linear algebra kernels commonly occurring in ML applications. The library recursively exploits compositional structure, in contrast to standard packages which treat numerical matrix kernels as black boxes. The proposed framework provides memory-efficient automatic differentiation, low-precision computation, and GPU acceleration in both JAX and PyTorch. Strengths: -) The quality of the manuscript is good. All concepts are communicated clearly and the paper is very well-written. -) The authors have put a lot of effort into providing automation for several important numerical kernels and structures. A long list of important applications is given, and CoLA can be of major significance in several ML areas, essentially speeding up innovation. -) The numerical results indicate that CoLA can be faster than baseline alternatives on a wide range of numerical tasks. Weaknesses: -) One aspect I found confusing is the lack of information regarding the numerical algorithms featured in some of the results. For example, in Figure 1, it is not clear why the library is faster for the Kronecker problem. Likewise, the same is true for the Bi-Poisson problem. Can the underlying algorithms be found online? -) In a similar spirit, it is not clear whether CoLA is compared against state-of-the-art numerical approaches. For example, Multigrid is the de facto choice for elliptic Poisson problems; is this what the authors list as 'iterative'? If not, what is the point of listing a comparison against a non-optimal iterative (or direct) algorithm? -) In Figure 3, scipy is competitive with CoLA, if not better in some tasks. This further complicates the message of the paper. What is the main reason for publication? The fact that CoLA includes a wide range of solvers for composite tasks or that it can be faster or more memory efficient in general? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: -) Is there any new numerical method involved in the library? My understanding is that every single numerical method used for linear systems, eigenvalue problems, etc., is known, and the main novelty of the paper is to wrap them together in an efficient manner for composite tasks. -) Judging CoLA as a library requires some reasonable level of insight into the library itself. I understand that the authors must stay anonymous, but it is rather difficult to fully understand the benefits of CoLA without looking at the code itself. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are appreciative of your thoughtful feedback. In our response, we provide important clarifications, and new results inspired by your comments. Although contributions of this type can be hard to evaluate, we believe CoLA should be judged by its strong potential for scientific impact, and hope you can consider raising your score in light of our response. **Why CoLA composition rules produce algorithmic speedups and Figure 1**. The main objective of Figure 1 is to show how exploiting the compositional structure provides an algorithmic improvement. For example, for matrices with Kronecker product structure, it is common practice to use an iterative algorithm like CG to perform a linear solve (see e.g. [6]). However, we show it is more efficient to split the problem into two using the Kronecker structure: in particular, decompose $(K_T \otimes K_X)^{-1} \mathrm{vec}(Y) = \mathrm{vec}(K_T^{-1} Y K_X^{-1})$, which reduces the complexity from $O\big(\sqrt{\kappa_T \kappa_X}(m^2n + mn^2)\log1/\epsilon\big)$ to $O\big(\sqrt{\kappa_T}m^2n\log1/\epsilon+\sqrt{\kappa_X}mn^2\log1/\epsilon\big)$. The computational burden is likewise reduced when using dense LU or Cholesky based solvers. For the BiPoisson problem ($\Delta^2 x = \rho$), it is more efficient to use the product structure and perform the two linear solves separately. As you mention, a specialized multigrid method can very efficiently solve the discretized elliptic differential operator here. Just as in the dense and iterative approaches that we discussed, the multigrid method also benefits from splitting up the problem with CoLA’s composition rules. We have run the multigrid method both with the CoLA decomposition (solving the PDE by inverting $\Delta$ twice) and without (solving by inverting $\Delta^2$) and show in the attached PDF (Figure A (Right)) that doing so yields significant runtime improvements. 
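The Kronecker decomposition in this rebuttal rests on the standard vectorization identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$ (with column-stacking $\mathrm{vec}$; the exact placement of $K_T$ and $K_X$ depends on this convention), which is what turns one large solve into two small ones. A pure-Python check of the identity on small matrices:

```python
# Check (A kron B) vec(X) == vec(B X A^T) with column-stacking vec.
# This identity lets a Kronecker solve be split into two smaller solves,
# which is the source of the complexity reduction described above.

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def vec(X):  # stack the columns of X into one vector
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]
X = [[1.0, 0.0], [2.0, 3.0]]
lhs = matvec(kron(A, B), vec(X))
rhs = vec(matmul(matmul(B, X), transpose(A)))
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```

Solving with the right-hand side of the identity costs two solves with the small factors instead of one solve with the full Kronecker product, matching the complexity comparison in the rebuttal.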
In other words, CoLA’s approach of recursively breaking up structure provides runtime benefits independent of the algorithm being used to solve the problem (CG or multi-grid). **Why CoLA should be published at NeurIPS**. The research behind CoLA serves to substantially reduce the bottleneck of deriving efficient algorithms that exploit the structure commonly found in machine learning and scientific computing. Regarding the scientific impact of frameworks such as CoLA, we would like to draw an analogy to PyTorch and reverse mode automatic differentiation (NeurIPS 2019). From a narrow point of view, autograd is merely an application of the chain rule, and yet its impact on machine learning research has been almost immeasurable. Autograd obviates the need for deriving backpropagation rules for each model separately, and bespoke autograd rules (such as our own for iterative solvers in CoLA) can be slotted into an existing language of differentiable functions only when necessary, without requiring the whole structure to be constructed anew. Likewise with CoLA, we have developed an approach such that new rules can be slotted into an existing linear algebra ecosystem, and that ecosystem need not be rewritten for each use case. The potential impact of this framework is extremely large. In a sense, machine learning is largely linear algebra, and common modeling assumptions give rise to structure that can be exploited for significant computational savings. CoLA will help make researchers more broadly aware of the structure they can exploit, and significantly reduce the bottleneck to implementing methods that exploit structure, as well as prototyping various structures for their problems. The speed gains when using CoLA depend on the degree of compositional structure. 
For some problems like the Schrodinger equation (Figure 3 (e)), CoLA achieves parity with SciPy; in more complex problems like equivariant neural networks (Figure 1(c)), CoLA achieves remarkable speedups over existing solutions by exploiting compositional structure (see Figure 1). Of course, CoLA will not be a magic bullet for every problem, and we believe it is actually to the paper’s credit that it provides an honest and comprehensive presentation, including results where CoLA is essentially on par with alternatives. **On the purpose of Figure 3**. We would like to clarify the purpose of Figure 3. While Figures 1 and 2 demonstrate the efficiency gains on problems with compositional structure, Figure 3 demonstrates the breadth of CoLA’s applicability in real-world applications, including on problems with no compositional structure. On these problems without compositional structure, we do not expect CoLA to outperform existing specialized methods. However, as seen in Figure 3, CoLA remains competitive with these specialized methods, demonstrating that even in the “worst case” scenario (no compositional structure) there is no downside to using CoLA. We will clarify this point about Figure 3 in the main text. **Regarding new numerical methods**, we believe you may have missed the novel doubly stochastic trace / diagonal estimator that we introduce in Section 3.3. This algorithm, though not the centerpiece of our paper, is an important methodological contribution as it yields lower variance than the standard Hutchinson estimator (see Appendix B.1). Moreover, we provide a novel procedure to automatically compute gradients through iterative methods, as well as an automatic procedure to compute diagonals and transposes / adjoints of linear operators (Sections 3.1 & 3.4) through their matrix-vector product routine. **CoLA’s code**. 
We have been careful to retain anonymity, but we are beyond excited to publicly announce the library, as we feel the community would value it greatly. However, we argue that the key ingredients behind CoLA (a pleasingly simple programmatic mechanism for exploiting compositional structure, as well as the algorithms covered in Section 3) are sufficiently general-purpose concepts that can be evaluated independent of implementation. --- Rebuttal Comment 1.1: Comment: The rebuttal (.pdf) is useful and so are the responses. Some responses: -) There is nothing surprising from a numerical perspective and I do not think that what the authors think is novel really is. The break-up of most if not all structures discussed in the paper is basically trivial for anyone working in NLA. Numerical analysis is not the main part of the paper so I am not going to reject just for that, but I would tone down the claims. -) The overall framework is useful and I am in favor of it. Nonetheless, judging the full potential of a library-based framework without running / reviewing the code and having access to more information is (extremely) limiting, regardless of what papers were accepted in the past. Stating that the whole framework is a simple programmatic mechanism is also not relevant. This is a general issue with software-oriented papers and double-blind peer review. -) The doubly stochastic trace estimator (I did not miss it) is rather straightforward. My understanding is that you eliminate the need to consider the cross-product of the matrix sum from the variance upper bound. But how often do you really encounter trace computations where 'A' is expressed as the sum of 'm' matrices? Most of the time we want to compute the trace of f(A) where f(x)=x^3, f(x) = e^x, etc. (triangle counting, subgraph centrality). How do you break this into pieces to fit your framework? What non-trivial application exists for the proposed diagonal estimator? 
-) The statement "CoLA will not be a magic bullet for every problem, and we believe it is actually to the paper’s credit that it provides an honest and comprehensive presentation" is rather strange. Is there any other way to write a paper other than to provide an honest assessment? I think you meant to say that you did the best of your ability to present a fair and informative comparison even in scenarios where CoLA is not expected to outperform. Overall, the rebuttal is useful and I will increase the score to borderline accept. Good luck with your submission. --- Reply to Comment 1.1.1: Title: Clarifications Comment: Thank you for your response! We appreciate your support. Below we make some clarifications in response to your comments. * We agree that breaking up structure is a well-known concept in NLA. Yet, the novelty in our paper does not come from being the first to exploit the structure but from doing it automatically (through our recursive dispatch rules). All the rules in Appendix A are simple, but they still require that the practitioner write an explicit method to use the given structure at hand; instead, we provide a framework to do so automatically. Let us illustrate with an example. Suppose a user wants to compute the determinant of a matrix $A = B \otimes C$, where $B$ is a tridiagonal matrix and $C = PLU$ is the product of the $P$, $L$, $U$ matrices in its PLU decomposition. Our framework will split the determinant into $\mathrm{det}(B)^{\mathrm{dim}(C)}\mathrm{det}(C)^{\mathrm{dim}(B)}$, use the efficient diagonalization of a tridiagonal matrix to find $\mathrm{det}(B)$, extract the diagonals of $L$ and $U$, and compute the sign of the permutation $P$ to find its determinant, and then assemble all these components into the final result. This functionality is highly practical, as it helps free the user to focus on the modeling assumptions behind $A$ rather than on the linear algebra. There is no current framework with these capabilities. 
* Our doubly stochastic estimator can be highly practical. While the classic application of stochastic trace estimation is matrix polynomials, more recent work from the ML community applies this technique to matrices that are summations over datasets [1,2]. Consider computing the diagonal or trace of the Hessian of a neural network. This diagonal is relevant in quantization (where in e.g. [1] it is computed using the naive Hutchinson estimator), in optimization [2], and elsewhere. Since the Hessian is the sum over hessians for a large number of data points ($m$), we can expect a reduction in the number of iterations required to reach the same variance by roughly $1/m$. * Indeed we phrased that poorly. As you mentioned, our goal is to present a fair and informative comparison even in scenarios where CoLA is not expected to outperform. Thank you again for your feedback. [1] Dong, Zhen, et al 2020. "Hawq-v2: Hessian aware trace-weighted quantization of neural networks." NeurIPS. [2] Liu, H., et al. 2023. Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. arXiv 2305.14342v1.
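The contrast between the standard Hutchinson estimator and the doubly stochastic variant discussed in this thread, for a sum $A = \sum_k A_k$, can be sketched as follows. This is an illustrative reading of the idea (sampling a summand and a probe jointly, so each step touches only one term); the paper's actual estimator in Appendix B.1 may differ in its details:

```python
# Hutchinson vs. a doubly stochastic trace estimator for A = sum_k A_k.
# Sketch based on the description in the rebuttal, not the paper's code.

import random

def hutchinson(matvec, n, num_samples, rng):
    # tr(A) estimated as the average of z^T A z over Rademacher probes z.
    total = 0.0
    for _ in range(num_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Az = matvec(z)
        total += sum(zi * azi for zi, azi in zip(z, Az))
    return total / num_samples

def doubly_stochastic_trace(terms, n, num_samples, rng):
    # For A = sum_k A_k: sample a probe z AND a term index k each step,
    # using tr(A) = m * E[z^T A_k z]; each step costs one small matvec
    # instead of a pass over all m terms.
    m = len(terms)
    total = 0.0
    for _ in range(num_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Ak = rng.choice(terms)
        Az = [sum(Ak[i][j] * z[j] for j in range(n)) for i in range(n)]
        total += m * sum(zi * azi for zi, azi in zip(z, Az))
    return total / num_samples
```

Both estimators are unbiased; the claimed advantage of the doubly stochastic variant is a better variance-per-matvec trade-off when the sum has many terms, e.g. a Hessian summed over data points.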
Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning
Accept (poster)
Summary: This paper addresses learning models that are sufficient to model the environment, in the sense that optimising for a risk-sensitive objective on that model is equivalent to optimising the risk-sensitive objective on the actual environment. The authors show that value-equivalent models (which match the real environment in expectation) are insufficient in the risk-sensitive case. The authors introduce the distribution equivalence principle, which defines the set of models that induce the same return distribution as the real environment (and therefore the same risk-sensitive values). To relax the assumption that the entire return distribution must be matched, the authors introduce statistical function equivalence, meaning that the models are equivalent in terms of some statistics of the return (e.g. mean and variance). The authors define loss functions that implement these insights. Strengths: * I think the paper addresses a novel topic: how best to learn models when the goal is to learn a risk-sensitive policy rather than a standard expected value policy. * The writing is clear in sections 2-5, and the ideas are well-formalised. I think sections 2-5 are very strong. Weaknesses: * The paper makes it clear that value-equivalent models fail in the risk-sensitive setting (Proposition 3.2 and experiments), however it fails to motivate why one would use distribution equivalence instead of the standard approach to learning a model - maximum likelihood estimation. Presumably, if the model has enough capacity and is expressive enough, we can expect a model learnt using MLE to learn the correct distributions for distributional equivalence. The paper should explain and demonstrate in which situations the model learnt using the proposed approach results in better performance than the MLE model estimate. 
For example, in the case where the model has limited capacity, there is limited data, or the model uses a simplified distribution (such as a Gaussian over successor states), we might expect the proposed approach to work better for risk-sensitive optimisation. However, the paper does not discuss or demonstrate these potential advantages. * The description of the experiments is unclear - please see my questions. * In the empirical evaluation, it appears that the authors do not compare against MLE model estimation, which is the most obvious baseline. In Four Rooms/Frozen Lake/Windy Cliffs, the only baseline is the value equivalent model (which is obviously a bad approach for risk-sensitive optimisation). In the option-trading environment, the authors compare against the VE model again, as well as a model-free approach. The authors outperform the model-free approach, but lines 318 and 329 hint that this is because their approach is model-based (and therefore obtains better sample efficiency). Thus, it is unclear if the approach of the authors is better because of the ideas introduced in the paper, or simply because it is model-based. The authors should compare against the MLE model baseline to demonstrate that in some situations their approach is better than the most naive MLE model-based approach. * Proposition 3.2 uses this introduced notion of epsilon-strictly risk-sensitive. This definition is suitable for CVaR, but does not apply to many spectral risk measures, which apply non-zero mass to all quantiles (e.g. Wang risk measure). In the latter case, epsilon is zero, and therefore Proposition 3.2 does not support the arguments of the authors, as it shows the error is greater than or equal to zero. It seems like it should be possible to come up with a different bound for Proposition 3.2 that is more general (and therefore shows that value-equivalence can be sub-optimal for any spectral risk measure other than expected value).
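To make the weight-function distinction concrete, a small numerical sketch (my own illustration, not code from the paper): a spectral risk measure is a weighted integral of the quantile function. CVaR's weight is zero above the level $\tau$, so a non-trivial epsilon exists, whereas e.g. an exponential spectral weight is strictly positive on all of $(0,1)$, which forces the epsilon in the authors' definition to zero.

```python
import numpy as np

def spectral_risk(samples, phi):
    """rho(mu) = int_0^1 phi(u) F_mu^{-1}(u) du, estimated on empirical quantiles."""
    q = np.sort(samples)                    # empirical quantile function
    u = (np.arange(len(q)) + 0.5) / len(q)  # midpoint quantile levels
    w = phi(u)
    return np.sum(w * q) / np.sum(w)        # normalise the weights to integrate to 1

tau = 0.25
cvar_weight = lambda u: (u <= tau).astype(float)  # zero above tau: epsilon = 1 - tau
exp_weight = lambda u: np.exp(-5.0 * u)           # positive everywhere: epsilon is zero

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)            # standard-normal "returns"
cvar = spectral_risk(z, cvar_weight)    # approx. CVaR_{0.25} of N(0,1), about -1.27
risk = spectral_risk(z, exp_weight)     # still risk-averse: lies below the mean of 0
```

Both weight functions are non-increasing, hence valid spectral weights; only the first vanishes on an interval, which is exactly the property the proposition's epsilon relies on.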
Minor comments: * Reproducibility: the code provided has no Readme, and for the option trading environment it is completely unclear which of many files to run to generate the results in the paper. * $\Pi$ is used to denote both a set of policies, and a projection operator. This notation is a little confusing. Other than the lack of discussion about the potential advantages of the proposed approach (compared to MLE), the lack of a comparison to a standard MLE baseline in the experiments, and uncertainty about some of the experiment details, I really like the paper. Thus, I am likely to increase my score to an accept score if I think these concerns are adequately addressed during the rebuttal period. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Are you able to provide experiments demonstrating the difference in performance between risk-sensitive policies optimised using your distribution-equivalent models vs an MLE model? Perhaps we might expect that models learnt using your approach result in better risk-sensitive performance when model capacity is limited (as demonstrated in Grimm 2020 for the value-equivalence case). * Can you explain the situations (such as limited model capacity, or limited data) where your approach is more suitable than MLE model learning for risk-sensitive policy learning? * Section 5 does a good job of explaining that models can be equivalent only for a certain set of return statistics that are of interest. However, in the tabular experiments the authors learn an equivalent model for the mean and variance, and then optimise for CVaR. This seems to contradict section 5, as mean and variance are not sufficient for estimating CVaR. What was the motivation for choosing the mean and variance as the return statistics of interest in this experiment? * What return statistics/functionals were used for the distribution-equivalent model in the option trading experiment?
* Is the improvement in sample efficiency (Line 329 of the paper) compared to the Lim & Malik (2022) approach because of the distributional-equivalence approach you have proposed, or simply because any model-based approach (such as MLE) is more efficient than Lim & Malik (2022)? * How is the model represented in the experiments? In the tabular environment, I assumed it was just a categorical distribution over successor states. However, the appendix says that GPU training was used - indicating that this is a neural network model? Likewise, what was the model architecture for the option-trading case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors have done a good job of addressing potential limitations. In particular, by proposing the approximate version of distributional equivalence in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and very useful feedback, and are grateful for their positive comments on the paper. We address their concerns and questions below. ## Comparison to MLE We thank the reviewer for raising this, as we mainly focused our discussions on comparing to value equivalence, but we completely agree that a more focused comparison to MLE-based approaches will strengthen our paper. We added a discussion on this topic, which we provide below. The standard approach to learning a model is to use maximum likelihood estimation (MLE) based on data, which given a model class selects the model which is most likely to have produced the data seen. If the model class is expressive enough, and there is enough data, we may expect a model learnt using MLE to be useful for risk-sensitive planning. However, the success of this method relies on the model being able to model everything about the environment, which is an unrealistic assumption in general. In contrast, our method focuses on learning the aspects of the environment which are most relevant for risk-sensitive planning. With that in mind, we may expect our method to outperform the MLE when the model class is not expressive enough to model the entire environment, which may be due to a limited model class, or for example if the environment is very complex and impossible to model fully. ## Empirical comparison to MLE baselines We added an MLE baseline to all existing experiments (tabular and option trading) in the paper. We found that in these environments the MLE baseline performed approximately on par with our approach, and so we didn’t include these updated figures in the PDF for lack of space (although we will of course update the figures in the paper). 
To empirically demonstrate settings in which our approach out-performs naively using the MLE, we added a number of additional experiments: - We repeat the tabular experiments and constrain the model’s estimated transition matrix to be a certain rank, effectively restricting model capacity (Figure 2 in PDF). - We repeat the option trading experiments, however we add additional dimensions to the state space which consist of uniform random noise, increasing the complexity of the environment to model (Figure 3 in PDF). - We repeat the option trading experiments, limiting the size of the hidden layer of the model, once again restricting model capacity (Figure 4 in PDF). In each of these settings, our approach out-performed the MLE baseline, demonstrating our arguments from the previous point. ## Generalizing Proposition 3.2 We thank the reviewer for raising this point, as we believe that their feedback has strengthened the impact of our result. The reviewer is correct that we formulated the proposition with CVaR in mind, and as such it is limited in the range of risk measures to which it applies. We have generalized the proposition, and present it below, so that it is now applicable to all spectral risk measures. We say that a spectral risk measure $\varphi$ is $(\varepsilon, \delta)$-strictly risk sensitive if it corresponds to a function $\varphi$ such that $\varphi(\varepsilon) \leq \delta$. We note that our previous definition corresponds to the case that $\delta=0$. Moreover, this new definition is applicable to all spectral risk measures, in the sense that for any spectral risk measure there exists an $(\varepsilon, \delta)$ pair satisfying the definition. With this new definition, the bound $\frac{R_{max}}{1-\gamma} \varepsilon$ is replaced by $\frac{R_{max}}{1-\gamma} \varepsilon (1-\delta(1-\varepsilon))$. In particular, as the reviewer mentioned, this now provides a non-zero bound for all spectral risk measures other than expectation.
## Return statistics/risk functionals used in tabular experiments For the tabular experiments, we used the two moment functional due to the fact that it is Bellman-closed, so that it can be learnt exactly in a dynamic programming fashion, while there is no known Bellman-closed functional equivalent to CVaR, so learning it in a dynamic programming fashion will lead to a biased result. Of course, as pointed out by the reviewer, using the two moment functional to plan for CVaR may result in error due to the fact that the first two moments are not sufficient for estimating CVaR. We will make this more clear in the paper, and we have also expanded the tabular experiments to additionally learn a CVaR-equivalent model (although it is biased as discussed before), which we present in Figure 1 of the PDF attached to the official comment. ## Return statistics used for option trading The statistical functional used in the option trading experiment is the functional $\psi=(F_{\mu}^{-1}(\tau_1), \dots, F_{\mu}^{-1}(\tau_m))$, where $\tau_i=(2i-1)/2m$ and $m=100$. In particular, our implementation of QR-DQN learns this functional $\psi$ of the return in order to take actions. We added a section in the appendix to discuss this in detail, and discuss that in a general sense our method can be combined with a model-free algorithm which learns a functional $\psi$ of the return to obtain a $\psi$-equivalent model. ## Experiment descriptions The model for tabular experiments was an exact categorical distribution over states, and the model for the option trading environment was a Gaussian transition model. We will explicitly describe these in the appendix, and modify Appendix E to provide CPU-hours for the tabular experiments rather than GPU-hours. ## Minor comments We agree that in its original state it was not clear how to reproduce our results from the code; we re-organized the code and included a readme with instructions to reproduce each figure.
We apologize for the overloading of $\Pi$, we replaced the use of $\Pi$ for projection with $\operatorname{Proj}$. --- Rebuttal Comment 1.1: Comment: Well done on this strong rebuttal, and thank you for addressing all of the points that I raised in my review. I think the new results are great, and demonstrate that there is practical utility to this approach in additional to the theoretical contributions. I also appreciate the improvement to Proposition 3.2. In light of these improvements, I now believe that the paper should be accepted. I will update my score to a 7.
Summary: This paper studies the intersection of model-based RL and risk-sensitive RL. Firstly, the authors theoretically demonstrate that proper value equivalence can only plan optimally in the risk-neutral setting, and its performance will deteriorate as the risk level increases. Then the authors introduce the distributional equivalence principle and prove that distributionally equivalent models can be used for optimal planning with any risk measure. However, due to the inherent challenges of working with full return distributions, learning distributionally equivalent models is not practical. Therefore, the authors turn to statistical functionals and propose statistical functional equivalence, which is parameterized by the choice of a statistical functional. The authors further demonstrate that the choice of a statistical functional determines the risk measures that can be used for optimal planning, and provide the loss functions for learning these models. Additionally, the authors show how the proposed framework can be integrated with existing model-free risk-sensitive algorithms. Finally, the authors validate the performance of the framework in both tabular experiments and option trading scenarios. Strengths: 1. The paper is well written. The original contributions are highlighted clearly. 2. This paper demonstrates clear logic and presents a series of comprehensive theoretical proofs of the validity of the proposed methods. 3. The structure of this paper is complete, and it provides an illustrative example that is simple and easy to understand. Weaknesses: 1. The selection of parameters of the experimental environments and algorithms is not clearly given, such as the reward setting in Four Rooms and the parameter settings of various methods in Option Trading. 2. The work in the experimental part is insufficient, and there is no further presentation of experimental results in the appendix, which limits its persuasiveness. 3.
I think the authors can add some enhanced verification experiments on the performance of more model-free risk-sensitive RL algorithms augmented with the proposed framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the distinction between the spectral risk measures used in the paper and the distorted expectation risk measures? 2. Why is it necessary for the weight function of spectral risk measures to satisfy a non-increasing condition? 3. Tabular experiments only show the performance comparison with respect to ($\operatorname{CVaR}(0.5)$). I think it's better to show the expected returns as well. 4. For the combination of the modification of QR-DQN and statistical functional equivalent models, it is better to provide a description of the procedures of the algorithm for understanding. 5. When using statistical functional equivalent models to augment model-free risk-sensitive RL algorithms, will the data generated by the model be added to the replay buffer to improve sample efficiency during training? 6. In Line 110, $\mu$ in equation $F_\mu^{-1}(u)=\inf \{z \in \mathbb{R}: \mu(-\infty, z] \geq u\}$ seems to have a different meaning from $\operatorname{CVaR}_\tau(\mu)=\underset{Z \sim \mu}{\mathbb{E}}\left[Z \mid Z \leq F_\mu^{-1}(\tau)\right]$. Is this true? 7. The formulas in the paper lack proper numerical labels. Please improve it. 8. In Line 126, I think there may be something wrong when you are calculating the equation $\eta^{\pi^b}(x)=U([-2,2])$, because the superposition of uniform distributions should result in a triangular distribution. Can you show me your detailed calculation steps? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors need to add more discussion around the limitations of their approach. Specifically, can the proposed framework augment any model-free risk-sensitive RL algorithm? Will statistical functional equivalence limit the risk measures that can be used? A discussion on this point would be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing our paper, and we are grateful for their positive feedback on the paper's clarity, original contributions, and comprehensive theoretical proofs. We address the weaknesses and questions below. ## Environment selection and parameters We currently describe the environments in detail in Appendix E.1; if the reviewer believes we are missing details there, we will happily add them. Regarding the hyperparameters of the option trading algorithm, we use the same hyperparameters as were used in [1]. We will specify this in the appendix. ## Insufficient empirical results Thank you for your comment; we have taken the following steps to enhance our empirical support: - We added experiments highlighting the benefits of our method over learning models using MLE: - We repeat the tabular experiments and constrain the model’s estimated transition matrix to be a certain rank, effectively restricting model capacity (Figure 2 in PDF). - We repeat the option trading experiments, however we add additional dimensions to the state space which consist of uniform random noise, increasing the complexity of the environment to model (Figure 3 in PDF). - We repeat the option trading experiments, limiting the size of the hidden layer of the model, once again restricting model capacity (Figure 4 in PDF). - If there are any other experiments that the reviewer believes should be included, we are open to considering them. ## Comparison to more model-free baselines We chose to use the algorithm from [1] as a baseline to illustrate how our framework can be combined with an existing model-free algorithm. If the reviewer has a risk-sensitive model-free algorithm in mind which they believe would be illustrative to include as an additional baseline, we would be happy to include it. ## Additional limitations We thank the reviewer for pointing these out, and we added discussion to both points raised.
Regarding “*Specifically, can the proposed framework augment any model-free risk-sensitive RL algorithm?*”, we added a section in the appendix discussing this point, and demonstrating how our framework can augment a model-free algorithm which learns a statistical functional $\psi$ of the return with a $\psi$-equivalent model. Regarding “*Will statistical functional equivalence limit the risk measures that can be used?*”, we previously touched on this in Appendix F, but we have since expanded the discussion. In particular, our theory demonstrates that statistical functional equivalence can be used for any risk measure which is in the span of the statistical functional used (in the sense of Proposition 5.10.). However, our experiments demonstrate that in practice, statistical functionals can often plan near-optimally for risk measures not in their span, for example the moments functional planning for CVaR in the tabular domain. We highlighted understanding approximate planning in this sense as a direction for future work. ## Questions 1. Spectral risk measures and distorted expectation risk measures are related formulations, as both are weighted integrals of the quantile function $F^{-1}_{\mu}$. Spectral risk measures are a proper subset of distortion risk measures, as shown in [2]; this reflects the fact that spectral risk measures are all coherent (see the following answer for a discussion of this term), while distorted expectation risk measures in general are not. 2. The weighting function $\varphi$ is required to be non-increasing so that spectral risk measures are coherent [3]. A coherent risk measure is intuitively one that satisfies a collection of properties which make it a ‘desirable’ measure for decision making. 3. We agree with this suggestion, and we have added figures with the expected returns as well (Figure 1 in PDF). 4.
To increase the clarity of our architecture combined with QR-DQN, we will add a section with a detailed description in the appendix, as described in the previous section (additional limitations). 5. In our experiments, we sample from the replay buffer, and replace the real next states with our model’s predicted next states, the same method which was used in [4]. We remark that this is a rather simple way to use the model, and future work can be done to find more sophisticated techniques for using the model. We will make this clear in the paper, and highlight it as a potential direction for future work. 6. In both equations, $\mu$ is the same real probability measure. We use $\mu(A)$ to indicate the measure of a set $A$, and $\mathbb{E}_{X\sim \mu}[f(X)]$ to indicate the expected value of $f$ under $\mu$. We will make this more clear in the paper. 7. We will add numerical labels to equations for increased clarity. 8. The calculation that $\eta^{\pi^b}(x)=U([-2,2])$ is presented as Example 2.10 of [5]; we refer the reviewer to this reference as we believe we would not be able to reproduce it as clearly as it is shown there. We note that one possible source of confusion is that a triangular distribution would result from the sum of continuous uniform random variables $U([-1, 1])$, while the quantities being added here are discrete uniform random variables $U(\{-1, 1\})$. [1] Lim, S. H. and Malik, I. Distributional reinforcement learning for risk-sensitive policies. Advances in Neural Information Processing Systems, 2022. [2] Gzyland, H. and Mayoral, S. On a relationship between spectral and distorted risk measures. Spanish Finance Association, 2016. [3] Artzner, P., Delbaen, F., Jean-Marc, E., and Heath, D. Coherent measures of risk. Mathematical Finance, 1999. [4] Grimm, C., Barreto, A., Singh, S., and Silver, D. The value equivalence principle for model-based reinforcement learning. Advances in Neural Information Processing Systems, 2020. [5] Bellemare, M.
G., Dabney, W., and Rowland, M. Distributional Reinforcement Learning. MIT Press, 2023. http://www.distributional-rl.org. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing responses to my questions. I still believe that the experimental design and results in the paper are insufficient, which limits the paper's persuasiveness. Therefore, I will maintain my initial rating. --- Reply to Comment 1.1.1: Title: Thanks! Concrete suggestion for experiments? Comment: We appreciate your response. Thank you! Since you mentioned that the experimental design and results in the paper are insufficient, even after our new results during the rebuttal phase, we would like to kindly ask you if you have any specific experiments in mind that you would like to see? We will consider your suggestions in our future revisions, so that we have a more convincing paper.
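As a numerical sanity check on question 8 above, a small Monte Carlo sketch (our own illustration, assuming discount $\gamma = 1/2$ and i.i.d. rewards uniform on $\{-1, 1\}$, one setting in which the uniform return distribution arises; not code from the paper):

```python
import numpy as np

# With gamma = 1/2 and i.i.d. rewards drawn uniformly from {-1, +1}
# (an assumption of this sketch), the discounted return sum_t gamma^t R_t
# is uniform on [-2, 2]; a triangular shape would instead arise from
# summing *continuous* U([-1, 1]) rewards.
rng = np.random.default_rng(0)
gamma, horizon, n = 0.5, 60, 100_000
rewards = rng.choice([-1.0, 1.0], size=(n, horizon))
returns = rewards @ gamma ** np.arange(horizon)

frac_low_quarter = np.mean(returns <= -1.0)  # close to 0.25 if uniform on [-2, 2]
```

The truncation at 60 steps changes the return by at most $2^{-59}$, so the empirical histogram is flat across $[-2, 2]$ to within sampling noise.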
Summary: The authors propose to extend the notion of value equivalence to distributional model equivalence for the purpose of risk-sensitive reinforcement learning. Theoretically, the paper first shows that value equivalence is insufficient for planning risk-sensitive policies, then introduces both exact and approximate versions of distributional model equivalence, along with some of their theoretical properties. Empirically, the utility of the proposed approach is demonstrated on a number of simple domains for evaluating risk-sensitive policies. Strengths: The main theoretical contribution of the paper is in generalizing the notion of value equivalence to distributional model equivalence for the purpose of learning risk-sensitive policies. This is well-motivated and the presentation is clear. Weaknesses: The empirical evaluations are rather limited. It seems that sample efficiency is the key motivation for the entire approach, yet this is not clearly demonstrated by the examples in section 7. It might be more illuminating if one could see how the performance changes with respect to the number of training samples (actual and/or modeled). Technical Quality: 3 good Clarity: 3 good Questions for Authors: What about policy gradient approaches? Is the proposed notion of distributional model equivalence still relevant? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful comments, and are pleased to hear that they found our paper well-motivated and clearly presented. We address the highlighted weakness and questions below. ## Empirical evaluations To address the reviewer’s concerns regarding limited evaluation, we made a number of modifications: - We added sample efficiency curves for the option trading experiments to the appendix (not included in PDF of figures due to lack of space). - We made the discussion more clear so that the main motivation is not only sample efficiency, but also the benefits of our framework over a naive MLE model. We demonstrated this in Figures 2, 3, and 4 of the PDF attached to the top-level comment. ## Applications to policy gradient While we focused on the value-based setting in this paper, we believe that distributional model equivalence may be adapted for the policy optimization case, so that it may be useful for risk-sensitive policy gradient formulations such as [1]. Our current work would likely need to be adapted in a similar fashion as the construction in [2]. We will highlight this as a potential direction for future work. In the actor-critic setting, our current method can be used to learn a better critic, which can then contribute to learning an improved actor. This can be especially useful for a risk-sensitive actor critic framework such as [3]. [1] Aviv Tamar, Yinlam Chow, Mohammad Ghavamzadeh, Shie Mannor. Policy Gradient for Coherent Risk Measures, NeurIPS, 2015. [2] Romina Abachi, Mohammad Ghavamzadeh, and Amir-massoud Farahmand. Policy-aware model learning for policy gradient methods, arXiv, 2020. [3] Prashanth L.A., Mohammad Ghavamzadeh, Actor-Critic Algorithms for Risk-Sensitive MDPs, NeurIPS, 2013. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. I'll keep my score.
null
null
Rebuttal 1: Rebuttal: We thank all of the reviewers for their time and effort spent reviewing and the feedback provided. We believe that based on their feedback, we were able to significantly improve the quality of our work. We now highlight some of the main modifications made. - As suggested by Reviewer B3wZ21, we added discussion on the benefits of our method over model learning using an MLE baseline, and which settings one would see a benefit. We further added a number of experiments to corroborate our reasoning which can be found in Figures 2, 3, and 4 of the attached PDF. - As suggested by Reviewer B3wZ21, we generalized Proposition 3.2. so that it is applicable to any spectral risk measure, while previously it only applied to CVaR-like risk measures. In particular, the bound now provides a non-zero optimality gap for any spectral risk measure other than expectation. - As suggested by Reviewers du98 and B3wZ21, we added a section in the appendix clarifying the experimental setup of how QR-DQN was combined with our model-learning framework, and more generally how a model-free algorithm which learns a statistical functional $\psi$ of the return can be augmented with our method to learn a $\psi$-equivalent model. Pdf: /pdf/e545b3ab4ccd30eb416fe3487cb2f12b69c2507e.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Transformer Compression via Subspace Projection
Reject
Summary: The paper proposes a subspace projection algorithm, with the subspace spanned by the features' principal components. The projection's fusion with the different network layers is presented. A gradient-free pruning approach is further suggested based on parameter and activation statistics. Finally, the proposed framework is experimented on BERT and T5 and achieves a compression ratio of 44% with at most 1.6% degradation. Strengths: The paper is fairly written. The apparent main novelties of the paper are the low-rank approximation of the features and a statistics-based pruning approach. Weaknesses: Low-Rank approximation:\ The main weaknesses seem to arise from the comparison to prior/competing works. For example, low-rank approximations of features have already been presented; see for example https://cs.nju.edu.cn/wujx/paper/AAAI2023_AFM.pdf \ Also, I am unsure why lines 37-38 are true: performing PCA (/Kosambi–Karhunen–Loève) is a pretty old technique for model acceleration, the subspace being defined by the principal components of the parameters or activations; this is a low-rank approximation.\ Thus, the low-rank approximation contribution of the paper should be narrowed to the definition of the data matrix. Pruning: \ it's unclear how these simple statistics perform compared to other pruning methods or heuristics. Experiments: \ the subspace dimension as well as the compression ratio are not given, which leaves the speed-up metric subjective. The method's performance on mid-size LLMs is not very good compared to older methods. \ Random projection as ablation is a very weak baseline. Clarity:\ The paper can be refined in terms of clarity (also typos (e.g., lines 235, 282)) Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1) The novelty issues described in the Weaknesses section should be discussed/addressed. 2) Comparison with recent techniques such as LoRA and more compression metrics should be presented.
3) An ablation with SVD(weights) should be the very least comparison. 4) An ablation of the pruning should be provided Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No limitations discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The prior works [1] and [2] are both data-aware low-rank factorizations. However, unlike our approach, which directly reduces the input and output dimensions of the weight matrix, they still target weight matrix decomposition. The key difference lies in this distinction in approach, not in the low-rank factorization techniques, which are employed by both their methods and ours. Compared to such weight matrix decomposition methods, our method has two main advantages. First, it can reduce the model's runtime memory overhead, as the dimensions of the activations (i.e., both the input and the output) are reduced. Second, weight matrix decomposition methods break down a matrix into two matrices, leading to an increase in the total number of matrix multiplications during the inference process. This subsequently increases the overall time overhead of launching CUDA kernels, as well as the communication overhead between global memory and shared memory in the GPU. On the contrary, our method directly reduces the dimensions of the weight matrix, without augmenting the number of matrix operations. As a result, it avoids these additional overheads. 2. LoRA is an efficient fine-tuning method rather than a model compression method. This makes it fundamentally different from our purpose. 3. In the experimental section, we compare our method with [1] and [3]. Both of them represent improved algorithms for SVD. These papers also demonstrate that direct SVD decomposition of the weights leads to very poor model performance; therefore, we do not make a comparison with SVD applied directly to the weights. 4. In Section 5.2 (Table 1), w TCSP{25\%, 0\%} shows the effect without pruning, and w TCSP{25\%, 25\%} shows the effect with pruning. These exactly present the ablation studies about pruning. Meanwhile, the recent paper [4] supports the validity of our statistics-based pruning method. 5.
*the subspace dimension as well as the compression ratio are not given* In Table 1, the notation TCSP{25%, 25%} refers to the method applied. The first 25% is the compression ratio for the hidden dimension. For BERT-base, with a hidden dimension of 768, the subspace dimension is therefore 768*(1-25%) = 576. 6. *Random projection as ablation is a very weak baseline* The experiment involving random projection does not act as a baseline; it is designed to demonstrate that the SVD decomposition is necessary, i.e., that the success of the method is not solely due to the subsequent model fine-tuning. [1] Chen P, Yu H F, Dhillon I, et al. Drone: Data-aware low-rank compression for large NLP models[J]. Advances in Neural Information Processing Systems, 2021, 34: 29321-29334. [2] Yu H, Wu J. Compressing Transformers: Features are Low-Rank, but Weights Are Not! AAAI 2023. [3] Hsu Y C, Hua T, Chang S, et al. Language model compression with weighted low-rank factorization[J]. arXiv preprint arXiv:2207.00112, 2022. [4] Sun M, Liu Z, Bair A, et al. A Simple and Effective Pruning Approach for Large Language Models[J]. arXiv preprint arXiv:2306.11695, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your answer. I think there are some misunderstandings of my review that I would like to point out and elucidate. 1) Please correct me if I am wrong. Your main claim is the *recursive* application/approximation of $Wx \approx W(PP^{T}x)=\hat{W}\hat{x}$. In regular low-rank approximation $W\approx QQ^{T}$ you will have similar low-rank computations as the former. The reference was to prove that data-driven low-rank approximation has been done. So as in the former review, should I understand your main claim resides in the fused data-driven subspace? 2) As you mentioned, LoRA is an "efficient" and "fine-tuning" method (like yours). In this field, the relevance and impact are more related to the implementation and the performance rather than the exact setting. 
The implementation of the method on modern LLMs is important, but if not (as is the case here) there is a need to at least compare with SOTA acceleration methods. 3) As I wrote, a fair *ablation* study with SVD (random is quite irrelevant) is required in order to assess the impact of the different components of the method. 4) By ablation I meant regarding what I wrote before: *"unclear how these simple statistics perform compared to other pruning methods or heuristics."* 5) By compression ratio I mean the numerical compression ratio (why? c.f. Appendix: *"we choose to exclude the compression of its first layer."*) Also, I am not sure why the same dimensions should be adopted for all layers. 6) Every "ablation" is to be performed against some baseline. The random projection was the baseline you chose, which is too weak for a fair understanding and comparison (even if it proves fine-tuning is not the key to the performance; SVD of the weights, even if suboptimal, would have been better). --- Reply to Comment 1.1.1: Comment: Thank you for your reply. 1. Compared to the previous work, we focus on both matrix decomposition and matrix fusing, while the previous work stops at decomposing the matrix: they change the original matrix operation $W * x$ to $W_1 * W_2 * x$, whereas our method integrates the matrices obtained from the decomposition with the original weight matrix, changing the original operation $W*x$ to $(P_0^T * W * P_1) * x$. At the same time, $(P_0^T * W * P_1)$ can be merged into a single matrix before model deployment, which has smaller input and output dimensions than the original matrix $W$. 2. In Tables 2 and 6, we present a comparison between our method and existing compression methods, all of which focus on model compression. These techniques are more closely related to our method than LoRA is. 3. Table 3 shows precisely the ablation study with SVD. 
We replace SVD with random projection to demonstrate that the SVD component is necessary. 4. Since pruning is not a critical component, we initially overlooked the ablation of this component. However, we could add this part of the experiment later. 5. The compression ratio is 0.390625. The total number of parameters before compression is n_b = 768 * 768 * 12 * 12 (there are 12 layers; each layer contains four 768 * 768 matrices for MHA and two 768 * 3072 matrices for FFN, thus 768 * 768 * 4 + 768 * 3072 * 2 = 768 * 768 * 12 parameters per layer). The total number of parameters after compression (ignoring the first layer) is n_a = 768 * 768 * 12 + 576 * 576 * 12 * 11 + 768 * 576 * 2 (the two extra matrices are used for dimensionality reduction and upscaling). Thus the compression ratio is 1 - n_a / n_b = 0.390625. The reason for using the same dimensions in all layers is the residual connections: if we did not use the same dimensions, we would need to record additional matrices to change the dimensions of the residuals. 6. Drone [1] has demonstrated that using SVD of the weights results in performance substantially inferior to their proposed Drone method. Thus we chose not to conduct that redundant experiment and instead directly compared our approach with Drone. [1] Chen P, Yu H F, Dhillon I, et al. Drone: Data-aware low-rank compression for large NLP models[J]. Advances in Neural Information Processing Systems, 2021, 34: 29321-29334.
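For completeness, the parameter-count arithmetic in point 5 above can be reproduced with a few lines of Python (a sketch that only re-derives the numbers quoted in this reply):

```python
# Re-derive the compression-ratio arithmetic for BERT-base (hidden size 768,
# compressed to 576, 12 layers), as quoted in the reply above.
d, d_c, layers = 768, 576, 12

# Per layer: four d x d matrices (MHA) plus two d x 3072 matrices (FFN).
per_layer = 4 * d * d + 2 * d * 3072
assert per_layer == 12 * d * d           # = 768 * 768 * 12

n_b = per_layer * layers                 # total parameters before compression

# After compression: the first layer is kept at full size, the remaining
# 11 layers use the compressed dimension, plus two extra matrices for
# dimensionality reduction and upscaling.
n_a = 12 * d * d + 12 * d_c * d_c * (layers - 1) + 2 * d * d_c

print(1 - n_a / n_b)  # 0.390625
```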
Summary: This paper presents TCSP, a model compression approach for transformers that reduces the hidden size via low-rank factorization. In addition, TCSP is compatible with other compression methods such as model pruning and head size compression. Experimental results demonstrate the effectiveness of the proposed method, achieving a high compression ratio while incurring only a minor performance drop. Strengths: $\cdot$ This paper is well-structured and clear to understand. $\cdot$ The algorithm is general enough, and is compatible with other compression strategies. $\cdot$ Experimental results verify the effectiveness of the proposed method. Weaknesses: $\cdot$ The novelty of this paper is limited; the core idea resembles low-rank factorization with SVD, and the approach is more like a combination of SVD and model pruning. $\cdot$ The authors claim it is the first work to reduce the hidden size, but I doubt whether the method can be successfully deployed in industry, given the lack of experimental results on the inference speed of the compressed model. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: $\cdot$ What are the experiment settings in Tab. 2 and the ablation study? $\cdot$ There are some writing mistakes in this paper, e.g. “fien-tune” in the header of Tab. 3. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The author has addressed the limitations and social impacts in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Our approach is fundamentally different from existing low-rank factorization methods. While traditional methods focus on weight matrix decomposition, our method directly reduces the input and output dimensions of the weight matrix. Compared to weight matrix decomposition methods, our method has two main advantages. First, it can reduce the model's runtime memory overhead, as the dimensions of the activations (i.e., both the input and the output) are reduced. Second, weight matrix decomposition methods break a matrix down into two matrices, leading to an increase in the total number of matrix multiplications during inference. This in turn increases the overhead of launching CUDA kernels, as well as the communication overhead between global memory and shared memory on the GPU. On the contrary, our method directly reduces the dimension of the weight matrix without increasing the number of matrix operations, and therefore avoids these additional overheads. 2. In Section 5.2 (Table 1), we explicitly report the speedup achieved by the compressed model. 3. The experimental setup of Table 2 is the same as w TCSP{25%, 25%} in Table 1. --- Rebuttal Comment 1.1: Comment: For weakness 2, what I actually want to see is the speedup ratio of the compressed model on an inference engine, such as FasterTransformer (https://github.com/NVIDIA/FasterTransformer). I have noticed the result of Tab. 1, but I didn't see the settings of the hardware or inference engine.
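The distinction argued in point 1 (one fused small matmul versus two matmuls from a rank-k decomposition) can be illustrated with a NumPy sketch; here `P` is an arbitrary orthonormal projection standing in for the paper's data-driven one, and all shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 768, 576
W = rng.standard_normal((d, d))
x = rng.standard_normal(d)

# Weight-matrix decomposition (e.g., SVD-based): W ~= W1 @ W2 keeps the
# original d-dimensional activations and needs two matmuls per layer.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = U[:, :k] * S[:k]          # d x k
W2 = Vt[:k]                    # k x d
y_decomp = W1 @ (W2 @ x)       # two matmuls; activation stays d-dimensional

# Subspace projection: project activations with an orthonormal P (d x k),
# then fuse P.T @ W @ P into ONE k x k matrix before deployment.
P = np.linalg.qr(rng.standard_normal((d, k)))[0]
W_hat = P.T @ W @ P            # fused offline: a single k x k weight
x_hat = P.T @ x                # activations now live in k dimensions
y_fused = W_hat @ x_hat        # one matmul, smaller runtime memory

print(W_hat.shape, y_fused.shape)
```

The design point being made in the rebuttal is that the fused path performs one matrix multiplication on smaller operands, whereas the decomposed path performs two and keeps full-size activations.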
Summary: This paper proposes a decomposition-based method, called Transformer Compression via Subspace Projection (TCSP), for compressing transformers. By decomposing the feature matrix extracted from some sample data, the model is projected onto a subspace to reduce the size of the hidden dimensions. Experimental results on the GLUE and SQuAD datasets show that TCSP enables 44\% parameter reduction with at most 1.6\% accuracy loss, surpassing existing methods. Strengths: This paper compresses the hidden dimension of transformers, which is less explored. The overall presentation of the paper is easy to understand. Weaknesses: TCSP is indeed just principal component analysis (PCA) or compressed sensing (CS), all working with the dominant subspace derived from SVD. Why another name? I have concerns about the following aspects: 1. Motivation: From lines 51-59, this paper discusses the compression methods for transformers. Also, it mentions "We do not delve into knowledge distillation and weight sharing, as they involve training models from scratch". However, knowledge distillation (KD) includes both task-agnostic and task-specific schemes. Task-agnostic KD methods do not involve training models from scratch, see e.g., Wu T, Hou C, Zhao Z, et al. Weight-Inherited Distillation for Task-Agnostic BERT Compression. Meanwhile, it is a normal setting in KD to reduce the hidden size of the transformer model. Therefore, task-agnostic KD methods should be compared, too. 2. Computation: As TCSP requires SVD decomposition of a large matrix, more discussion about the computing cost and scalability is needed, especially in Table 2. 3. Robustness: Regarding the quality of the subspace, how is the performance if we add noise or adversaries to the input data when generating the projection matrix? How do you ensure the sample data are representative? 
The timing overhead and complexity of the SVD to ensure a good projection subspace should be explicitly characterized and quantified. Indeed, there are recent decomposition-based compression algorithms applied to transformers which the authors may benchmark against, e.g., Ren, Y., Wang, B., Shang, L., Jiang, X., \& Liu, Q. (2022). Exploring extreme parameter compression for pre-trained language models. arXiv preprint arXiv:2205.10036. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my questions above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The models used are relatively small in size, e.g. T5-base, BERT-base. There are no experiments on the Large Language Models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. We compare our approach to KD approaches such as DynaBERT in Appendix F.3. We will also add the paper [1] to the comparison. Compared to KD methods, our method is a lightweight compression algorithm. To illustrate, while the WID method from paper [1] requires 16 hours of training on 8 A100 GPUs, our method requires only 2 hours of training on one 3090 GPU, in addition to 10 minutes for matrix decomposition. We also report the compression and training time for our method and other KD methods in Table 3. We argue that accuracy should not be the sole metric; time and space efficiency should also be considered. 2. We report the time needed for SVD decomposition and model fine-tuning in Section 5.3 (Table 3). It is evident that for a BERT-base sized model, only 10 minutes are needed for matrix decomposition. Furthermore, in Appendix F.6 (Table 9), we present the results of compressing LLaMA-7B (with a hidden size of 4096), demonstrating that our method is applicable to large-scale models. For even larger models, we can avoid performing SVD on oversized matrices by grouping the feature channels and performing the compression algorithm independently for each group. 3. We show the performance of the compression algorithm in Section 5.4 (Table 4) with different samples. The results demonstrate that good performance can be obtained by randomly picking samples from the training set. As previously discussed in answer 2, the time required for SVD decomposition is considerably less than the time needed for model fine-tuning. 4. *Limitations: The models used are relatively small in size, e.g. T5-base, BERT-base. There are no experiments on the Large Language Models.* Experiments on LLaMA-7B are presented in Table 9 in Appendix F.6. [1] Wu T, Hou C, Zhao Z, et al. Weight-Inherited Distillation for Task-Agnostic BERT Compression[J]. arXiv preprint arXiv:2305.09098, 2023. 
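The grouped-SVD workaround mentioned in point 2 could look roughly like the following sketch; the random matrix stands in for real token features, and the group count and sizes are illustrative, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
d, groups, r = 1024, 4, 0.25       # illustrative hidden size, grouping, ratio
g = d // groups                    # channels per group
k_g = int(g * (1 - r))             # subspace dimension per group

X = rng.standard_normal((1000, d)) # stand-in for sampled token features

# Shuffle channels into random groups, then run SVD per group so that no
# single decomposition ever touches the full d x d problem.
perm = rng.permutation(d)
Ps = []
for i in range(groups):
    idx = perm[i * g:(i + 1) * g]
    _, _, Vt = np.linalg.svd(X[:, idx], full_matrices=False)
    Ps.append(Vt[:k_g].T)          # g x k_g projection for this group

total_k = sum(P.shape[1] for P in Ps)
print(total_k)  # 768 = 1024 * (1 - 0.25)
```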
--- Rebuttal Comment 1.1: Comment: Thanks for your feedback. Indeed, the core algorithm and almost everything thus derived is still projection onto the dominant SVD subspace (with another name & acronym). Regarding the level of innovation or novelty, I feel this falls on the low side. Regarding the benefit in a shorter runtime, it's not coupled with surprising/impressive performance either. In your rebuttal point 2, "we can avoid performing SVD on oversized matrices by grouping the feature channels and performing the compression algorithm independently for each group", why don't you do this already in the existing work and check out the tradeoff between accuracy & timing? --- Reply to Comment 1.1.1: Comment: Thank you for your reply. 1. Although both our method and traditional matrix decomposition methods involve projecting into a subspace, the traditional approach requires, for each matrix operation $Wx$ in the model, a dimension reduction $Vx$ followed by a dimension enhancement $U(Vx)$. This results in multiple dimension reductions and enhancements throughout the inference process. In contrast, thanks to the matrix fusing introduced in Section 4.2, we only need one dimension reduction operation at the model input (i.e., $P^T$ in Equation 14) and one dimension enhancement operation at the model output (i.e., $P$ in Equation 14). All other matrix operations are performed in a low-dimensional space within the body of the model (i.e., $\hat{L}^{(1\sim N)}$ in Equation 14). Thus, compared with traditional weight matrix decomposition methods, we reduce the number of matrix operations, lessening both the runtime memory overhead and the CUDA kernel launch overhead. 2. We did not perform this part of the experiment because we are currently only experimenting with models up to the size of LLaMA-7B, for which we can still use SVD directly for matrix decomposition. 
We will add this part of the experiment in the future.
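The fusing argument in the reply above (one projection at the input, one at the output, low-dimensional matrices in between, as in the paper's Equation 14) can be checked on a toy stack of purely linear layers; nonlinearities and residuals are omitted, and the equivalence below relies only on `P` having orthonormal columns:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n_layers = 64, 48, 4
Ws = [rng.standard_normal((d, d)) for _ in range(n_layers)]
x = rng.standard_normal(d)
P = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal, d x k

# Naive per-layer projection: reduce and enhance around EVERY layer.
y_naive = x
for W in Ws:
    y_naive = P @ ((P.T @ W @ P) @ (P.T @ y_naive))

# Fused: pre-merge each layer into a k x k matrix offline; project once at
# the input and once at the output (P^T and P around the low-dim body).
L_hat = [P.T @ W @ P for W in Ws]
h = P.T @ x
for L in L_hat:
    h = L @ h
y_fused = P @ h

print(np.allclose(y_naive, y_fused))  # True: same map, fewer projections
```

The interior projections cancel because `P.T @ P` is the k-by-k identity, which is exactly why the per-layer reductions and enhancements can be dropped.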
Summary: This paper proposes an approach to compress the hidden size of a transformer model using subspace projection. On a high level, the paper aims at projecting the transformer model into a lower-dimensional subspace using a projection matrix that is computed from a sample of the training data. This method is compared against other compression techniques using the T5 and BERT models on the GLUE and SQuAD datasets, and it is shown that the proposed method performs on par with or better than the methods under comparison. The highlight of the experimental results is that the proposed transformer compression via subspace projection technique is able to compress models by as much as 44% with only 1.6% degradation in performance. Strengths: The paper addresses an important problem of transformer compression. In the age of ever-increasing model sizes, it is vital to develop methods that compress large models with minimal loss in performance, if any. This paper presents a simple yet effective approach to leverage linear subspace projection for compression. The paper is easy to follow, offers a sufficient literature review, and presents convincing experimental results. The experimental results are particularly strong -- 44% compression with only 1.6% loss in performance. Weaknesses: Some notation is used before definition. Could include more recent literature in the Related Work section -- see below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is k in line 25? Please define it before using it. 2. How are the training data sampled to estimate the projection matrix? 3. How is the rank value (k) chosen? The explanations to this and 2. are delegated to the appendix, but it would be beneficial if a couple of lines addressing these were presented in the main text. 4. Was the SVD solved in batch mode? Can this be done using incremental solvers instead? This may allow the use of a large matrix X. 5. What is the k value in the experimental studies? Is it 2000? 
Please clarify. 6. What is the average hidden layer size? What are its maximum and minimum values in the models considered? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The k in line 25 represents a dimension smaller than $d$; in previous work, the matrix $W \in R^{d \times d}$ was decomposed into two matrices $W_1 \in R^{d \times k}$ and $W_2 \in R^{k \times d}$. 2. We randomly select samples from the training set and then randomly select a certain number of tokens from each sample to estimate the projection matrix. 3. The value of k is determined by the target compression ratio r and the original hidden dimension d of the pretrained language model: k = (1 - r) * d. 4. We directly use a batch SVD solver, which is not suitable for very large matrices. For large-scale models, however, we can group the features randomly and, for each feature group, independently calculate the projection matrix via SVD. This approximates the exact SVD and avoids performing SVD on very large matrices. 5. As noted in answer 3, k = (1 - r) * d; in the experiments, the value of k is calculated as 768*(1-25%) = 576. The number 2000 refers to the total number of sampled tokens across all sampled instances. 6. In our experiments, the minimum and maximum hidden layer sizes were 768 (BERT-base) and 4096 (LLaMA-7B), respectively. Thank you very much for your suggestions. We will add these explanations to the paper.
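Answers 2, 3, and 5 can be combined into a short sketch of how the projection matrix might be estimated; the random matrix `X` is a hypothetical stand-in for the features of the 2000 sampled tokens, which in practice would come from forwarding sampled training examples through the pretrained model:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 768                       # hidden size (BERT-base)
r = 0.25                      # target compression ratio
k = int(d * (1 - r))          # subspace dimension: 576

# Stand-in for the feature matrix of 2000 randomly sampled tokens
# (rows = tokens, cols = hidden dimensions).
X = rng.standard_normal((2000, d))

# Projection matrix = top-k right singular vectors of the feature matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T                  # d x k, orthonormal columns

print(k, P.shape)
```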
NeurIPS_2023_submissions_huggingface
2023
Adversarial Robustness through Random Weight Sampling
Accept (poster)
Summary: The paper studies randomizing the weights in a neural network during inference to improve adversarial robustness. For example, the weights can be sampled from a distribution $\mathcal{N}(\mu,\sigma)$, for some learned parameters $\mu$ and $\sigma$. Intuitively, randomizing the weights makes the task of generating adversarial examples harder because it increases the gradient noise during inference. The traditional learning setup corresponds to the case where $\sigma=0$ and we learn $\mu$ alone (here, I'm using $\mu$ for the mean of the weights, not the mean of the additive noise). In some prior works, $\sigma$ is selected based on a grid search. In this paper, on the other hand, the authors propose learning $\sigma$ using the reparameterization trick $w = \mu + \sigma\delta$, with $\delta\sim\mathcal{N}(0,I)$. This by itself would encourage less variance (i.e. $\sigma\approx 0$) (to have a better fit on the training data) so the authors constrain $\sigma$ in a bounded region $[A, B]$, where $A$ and $B$ are chosen based on some theoretical arguments. Finally, the authors evaluate this approach on CIFAR10/100 and ImageNet using three ResNet architectures (ResNet18, ResNet50, and WRN34). They report improved robustness compared to three baselines: $\sigma=0$, fixing $\sigma$ to its lower bound, and training $\sigma$ without constraints. Strengths: - The empirical results are quite positive, showing a big improvement in robustness compared to the baselines. - The paper extends prior works by proposing to learn the noise distribution during training. Weaknesses: The idea of randomizing weights for robustness is not new and has been studied much earlier dating back to SVM and Differential Privacy (see for example [1, 2, 3, 4]). The primary contribution in this work is to propose a bound on the noise variance. However, the theoretical result do not appear to be sound, and I would appreciate a clarification from the authors on this please. 
Let us take Lemma 1 for example. If I choose $\varepsilon_r'=\sum_i\mu_i$ (which is perfectly acceptable), the lower bound says that setting $\sigma=0$ is enough to guarantee that the cosine similarity is small. But, $\sigma=0$ means that the cosine similarity is at its maximum. In general, the paper contains many claims that are not precisely stated and makes it hard to follow. Again, we can take the main lemmas (Lemma 1 and 2) as examples. In Lemma 1, the authors say that the "constraint $cos(...)\le \epsilon \to 0$ holds with a probability of at least $F(\alpha)$. What does $\to 0$ in the statement of the constraint mean? Also, in the same lemma, the authors write $\varepsilon'\to0$ when defining the symbol in Equation 10 even though it has a precise definition stated later. In Lemma 2, the authors state that "The probability that [some event] is hoped to be bigger than [some quantity]. The noise parameter $\sigma$ [some bound]". I'm not really sure how to read this lemma. Given that these two lemmas are the main contribution in the paper, the authors should state them precisely. In the current form, my understanding is an educated guess. The issue with presentation is also there throughout the paper. Here are some examples. - Line 32: Projected Gradient Descent (PGD) is not an adversarial attack algorithm. - Line 47: The cited paper [15] does NOT fix $\sigma$ to 1. In fact, they study what happens as $\sigma$ is varied and discuss the same tradeoffs mentioned in this paper. - Throughout the papers, the authors sometimes use $\mu$ when referring to the mean of the noise (as in Line 45) and sometimes use it as the mean of the parameters (as in Line 85). - Writing $x^\star = \arg\min_{x^\star} f(x^\star)$ is confusing and wrong (see Equations 1 and 3). It should be written with different symbols: $x^\star = \arg\min_{x} f(x)$. 
- The authors sometimes refer to the additive noise as "random weight" (see Line 83)" and later refer to the sampled weights ($\mu$ plus noise) as random parameters (see Line 91). - The paper uses non-standard symbols, such as using $G$ for the gradient instead of $\nabla$. - Typos: e.g. "logist" in Line 69, "ever element" in Line 102, "exceed" (instead of the opposite) in Line 122, $Wr$ instead of $W^r$ in Line 147 ... In terms of the experiments, I'm curious to know why the authors choose to compare against a single fixed value of $\sigma$. If you propose to constrain it to some interval $[A, B]$, it would make sense to compare the method against fixing $\sigma$ to $A$, $B$, and the middle $(A+B)/2$. The reason I bring this up is because the gap is already small between learning $\sigma$ and fixing it to the lower bound $A$ in Tables 1 and 2. [1] Cynthia D, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 2014. [2] Alabdulmohsin, I, et al. "Adding robustness to support vector machines against adversarial reverse engineering." 2014. [3] Chandrasekaran, V, et al. "Exploring connections between active learning and model extraction." 2020. [4] Orekondy, T, et al. "Prediction poisoning: Towards defenses against dnn model stealing attacks." 2019. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Can you please provide a precise statement of the two lemmas? Please see my comments above. - Have you compared against other fixed values of $\sigma$ (e.g. as suggested above)? - In Line 273, you mention that $A=1000$ and $B=2400$. Is this independent of the architecture? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
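For reference, the randomized-weight scheme this review describes ($w = \mu + \sigma\delta$ with $\sigma$ constrained to a bounded interval) can be sketched minimally as follows; the bounds and shapes are illustrative stand-ins, not the paper's actual $[A, B]$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, sigma, lo=0.05, hi=0.5):
    """Reparameterization trick w = mu + sigma * delta, delta ~ N(0, I).
    sigma is clipped to [lo, hi]; these bounds are illustrative stand-ins
    for the paper's [A, B], not its actual values."""
    sigma = np.clip(sigma, lo, hi)          # keep sigma in the bounded region
    delta = rng.standard_normal(mu.shape)   # fresh noise at every inference
    return mu + sigma * delta

mu = rng.standard_normal((4, 4))            # learned means of the weights
sigma = np.full((4, 4), 0.1)                # learned standard deviations
w1 = sample_weights(mu, sigma)
w2 = sample_weights(mu, sigma)
print(np.allclose(w1, w2))  # False: each forward pass sees different weights
```

Because `mu` and `sigma` enter the sample through a differentiable expression, both can receive gradients during training, which is the learnability point the review contrasts with grid-searched $\sigma$.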
Rebuttal 1: Rebuttal: ### Re Novelty. Thanks for providing relevant works on randomness [1-4]. We will discuss these works in the related work section of our final version. However, we argue that the contribution of our work is sufficient compared with [1-4]. We clarify the novelty in two aspects. First, the suggested works [1-4] apply randomized weights to traditional DL/ML models, such as SVM, LeNet, etc., which are outdated in current adversarial robustness evaluation. In contrast, we deploy randomized weights in ResNet and Wide ResNet, so that the randomized defense can achieve superior performance compared with state-of-the-art algorithms. Furthermore, the direct deployment of randomized weights in these popular neural networks cannot achieve satisfactory performance, as discussed in the paper. The robustness performance is related to the noise design (Tables 1-3), the random weight location (Figure 2), etc. Thus, exploring a better way to incorporate randomized weights into modern complex neural networks is important. In this work, we focus on complex neural networks such as ResNet and Wide ResNet. Based on the analysis of the influence of the randomness parameters on the gradient similarity and the output difference (Lemmas 1 and 2), we introduce a novel noise design for complex DNNs. Second, the major difference between our proposed randomized weights and those in previous works, including [1-4], is that we have a novel way to implement learnable random weights, while the suggested works often set the value of $\sigma$ manually. For example, in [2], which uses a simple model like SVM, the tuning method is to manually tune $\sigma$ as a hyperparameter. In contrast, we maintain a learnable $\mu$ and $\sigma$. This means the parameters $\mu$ and $\sigma$ are tuned automatically through backpropagation while training the network. 
Furthermore, during the optimization of $\mu$ and $\sigma$, we impose constraints on them according to the theoretical analysis in Lemmas 1 and 2. In particular, to maximize the probability of having a relatively high natural accuracy and a relatively low cosine similarity, we proposed an upper bound and a lower bound. ### Re $\sigma$ in Lemma 1. We modify Eq. 10 as: $\sum_{i=1}^{m}\sigma_i\geqslant\frac{\epsilon_r^\prime - \sum_{i=1}^m\mu_i}{\alpha\sum_{i=1}^{m}|W^r[1]_i|}>0$, which does not include the special case of $\sum_{i = 1}^{m}\sigma_i=0$. When $\sum_{i = 1}^{m}\sigma_i=0$, all the $\sigma_i$ are $0$, meaning that $W^r$, whose standard deviation is $\sigma$, loses its randomness. In this case, $W^r$ would be the same during each inference. However, the motivation of this paper is to use randomized weights to establish different routes between inference processes, which requires $W^r$ to be different each time. So the situation $\sum_{i = 1}^{m}\sigma_i=0$ does not fit the motivation of this paper. ### Re $\to$ in Lemma 1. $\epsilon\to0^+$ denotes that $\epsilon$ is a positive constant close to 0; the $\to$ in $\epsilon^\prime \to 0^+$ denotes the same. We use this symbol as a simplified version of $\lim\limits_{\epsilon \to 0^+}$. ### Re Claims in Lemma 2. We provide a more precise statement. We denote the probability that the difference of the outputs is smaller than a minimal term $\epsilon_y$ as $P((Y_1-Y_2) < \epsilon_y)$, and this probability is expected to be larger than $F(\sqrt{2}\beta/2)$. Formally, we have $P((Y_1-Y_2) < \epsilon_y) \geqslant F(\sqrt{2}\beta/2)$. ### Re Presentation. Thanks for pointing out these presentation issues. * We understand that PGD is an optimization algorithm. But in the field of adversarial attacks, it has been used as an attack algorithm in recent work. 
And in recent research on adversarial defense, this method is considered an important way to verify the effectiveness of defenses. * We understand that the $\sigma$ in the cited work is tuned as a hyperparameter. What we are trying to convey here is that the $\sigma$ of that work is not learnable. * The random weights proposed in this paper are in fact Gaussian noise. At both line 45 and line 85, $\mu$ is the mean value of the Gaussian distribution $N(\mu, \sigma)$. At line 85, the random weight $W^r$ follows $N(\mu, \sigma)$; this is mentioned to show that the random weight is Gaussian noise. * We will correct the expression errors in those formulas in the final version and adopt a different notation. * $\mu$ and $\delta \odot \sigma$ together make up the random weight. This whole is $W^r$, which is the "random weight" in the phrase "except for the random weight" at line 83. * Thank you for the correction to our notation specification. The notation will be revised in the final version. * Thank you for pointing this out. We will recheck the grammar, words, and symbols and fix the typos in the published manuscript. ### Re Line 273 The values for the upper and lower bounds are based on preliminary experiments conducted on networks that did not incorporate random weights. The results of fixing $\sigma$ to the values mentioned are shown in the reply to Question 2. ### Re Questions **Q1.** We will rewrite Lemmas 1 and 2 according to the reply above. **Q2.** The related experiments have been added, and the results are shown below: |Value of $\sigma$|$PGD^{20}$| |-|-| |Fixed to lower bound| 67.75| |Fixed to upper bound| 66.06| |Fixed to mid value| 67.50| |CTRW (ours)| 69.48 | Our algorithm achieves the best performance, which demonstrates the necessity of trainable randomness parameters. 
**Q3.** The values taken here for the upper and lower bounds are based on preliminary experiments conducted on networks that did not incorporate random weights. --- Rebuttal Comment 1.1: Comment: Thank you again for your comments and suggestions. We would like to know if we have addressed your issues. Meanwhile, if you have any other concerns, we are open to further discussion. --- Rebuttal Comment 1.2: Title: Acknowledgement Comment: Thank you for the response. The issue about novelty is not a serious concern in my opinion. I just wanted to say that (since you argue you propose randomization) that it has been used in various contexts in the past, including to mitigate the risk of adversarial attacks. But, as I mention above, I see the main contribution to be proposing the bounds on $\sigma$. However, the main theoretical results do not appear to be right. As I discuss in my comment above, if Lemma 1 were correct, then it would hold when $\epsilon_r' = \sum_{i=1}^m\mu_i + \delta$ for any $\delta>0$. But, as $\delta\to 0^+$, the condition is satisfied by choosing $\sigma\to 0^+$ which means that the cosine similarity will approach 1, which contradicts the statement of the lemma. This cannot be fixed by simply saying that you assume $\sigma>0$ as you mention in the rebuttal. My other comments about preciseness and clarity still stand, particularly about Lemma 2. Can you please state here precisely the full statement of Lemma 2? Regarding the notation, note that writing $\epsilon\to 0^+$ to denote a constant close to zero is wrong. The correct way is to write $\epsilon\ll 1$ or $\epsilon=o(1)$ (little-O notation). Thank you for including the other values of $\sigma$ in the experiment. --- Reply to Comment 1.2.1: Title: Official Comment by Authors Comment: Thank you for the response. ### Re Lemma 1 We agree that Lemma 1 holds for $\sigma \to 0$ if $\epsilon_r^\prime = \sum^m_{i=1} \mu_i + \delta$ and $\delta \to 0$. 
However, we argue that the condition $\epsilon_r^\prime = \sum^m_{i=1} \mu_i + \delta$ with $\delta \to 0$ does not arise in practice, since $\epsilon_r^\prime \to 0$ while $\sum^m_{i=1} \mu_i \not\to 0$. We demonstrate that $\sum^m_{i=1} \mu_i \not\to 0$ in three aspects:
1. From the very beginning, $\mu_i$ is set to $1$, since our proposed noise is a multiplicative noise as illustrated in Figure 1(a);
2. $\mu$ is a trainable parameter in our setting and $\sum^m_{i=1} \mu_i \to 0$ is not our optimization objective, which makes this condition difficult to satisfy;
3. We further provide empirical evidence from the experiments: taking ResNet-18 on CIFAR-10 as an example, it is observed that $\sum^m_{i=1}\mu_i$ converges to 986.3.

Thus, $\sum^m_{i=1} \mu_i \not\to 0$ holds in practice. We thank the reviewer for pointing out the boundary conditions of Lemma 1, and will incorporate them into the final version.

### Re Precise Statement of Lemma 2
Due to the word limit, we could not include a precise statement of Lemma 2 in the rebuttal. The exact statement is:

Given an $n$-layer network, let the parameters of the random weights after being mapped $n-r$ times be $\mu^\prime$ and $\sigma^\prime$, let $x^{l}$ denote the output feature map of layer $l$, and let $W^{l}$ denote the weights in layer $l$. The output difference $Y_1-Y_2$ should be smaller than a minimal term $\epsilon_y$. The probability of $(Y_1-Y_2) < \epsilon_y$ is defined as $P((Y_1-Y_2) < \epsilon_y)$, and $P((Y_1-Y_2) < \epsilon_y)$ is expected to be larger than $F(\sqrt{2}\beta / 2)$. To meet this constraint, i.e., $P((Y_1-Y_2) < \epsilon_y) \geqslant F(\sqrt{2}\beta / 2)$, $\sum_{i = 1}^m \sigma_i^\prime$ is upper-bounded as $\sum_{i = 1}^m \sigma_i^\prime \leqslant \frac{\epsilon_y}{x^{n-1} \cdot W^{n}\beta}$, where $m$ is the size of the feature map.

### Re Notation
Thanks for giving a more rigorous expression.
The notation will be changed to the standard format mentioned above.
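To make the construction under discussion concrete, here is a minimal NumPy sketch of a forward pass with the multiplicative random weight $W^r = \mu + \delta \odot \sigma$, where $\delta \sim N(0,1)$ and $\sigma$ is clamped between a lower and an upper bound as in CTRW. All names, shapes, and bound values are hypothetical, and in the actual method $\mu$ and $\sigma$ are trainable parameters, which this sketch does not model:

```python
import numpy as np

def ctrw_forward(x, w, mu, sigma, sigma_lo=0.05, sigma_hi=0.5, rng=None):
    """Linear layer with the multiplicative random weight W^r = mu + delta * sigma.

    sigma is clamped into [sigma_lo, sigma_hi], mirroring the paper's lower and
    upper bounds on the noise strength (the bound values here are hypothetical).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.clip(sigma, sigma_lo, sigma_hi)   # constrain noise strength
    delta = rng.standard_normal(w.shape)         # delta ~ N(0, 1)
    w_r = mu + delta * sigma                     # random weight with mean mu
    return x @ (w * w_r).T                       # multiplicative noise on w

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # batch of 4 inputs
w = rng.standard_normal((3, 8))    # 3 output units
mu = np.ones_like(w)               # mu initialised to 1 (multiplicative noise)
sigma = np.full_like(w, 0.1)
y1 = ctrw_forward(x, w, mu, sigma, rng=rng)
y2 = ctrw_forward(x, w, mu, sigma, rng=rng)
# The two passes differ because the noise is resampled on every forward call.
```

The clamp corresponds to the paper's bounds on $\sigma$; resampling the noise on each call is what makes the path seen by an attacker differ from the path used at inference.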
Summary: This paper proposes a random injection approach to improve adversarial robustness against attacks. Different from previous work, the proposed algorithm includes random weights in the optimization and imposes constraints for better trade-offs. The constraints rely on proving the following lemmas:
1. (Lemma 1) The variance of the distribution from which the noise is sampled is lower-bounded to ensure low gradient similarity after sampling.
2. (Lemma 2) The variance of the distribution is also upper-bounded to ensure a low natural classification error.

The proposed algorithm combines the lower and upper bounds to perform adversarial training with random weights under constraints. Several experiments are provided to verify the effectiveness of the proposed algorithm.

Strengths:
1. The theoretical analysis provides good insight that leads to the proposed constraints, which could benefit the randomized defense community.
2. The evaluation is conducted on various datasets and models, and the proposed algorithm achieves reasonable improvement over baselines.

Weaknesses:
1. Lack of assumptions before or in the lemmas, which makes it difficult to verify their scope.
2. Although the proposed algorithm performs adversarial training in a black-box manner, there is no evaluation against black-box attacks in the experiment section. The authors should conduct more experiments to verify the performance of the proposed algorithm under popular black-box attacks.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I think more clarification and evaluation are needed for the concerns mentioned in the Weaknesses section. I would appreciate the authors making amends along those directions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The evaluation of proposed algorithm under black-box attacks is relatively limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
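To illustrate the intuition behind Lemma 1 as summarized in the review above, the following NumPy sketch (a hypothetical toy model, not the paper's network) measures the cosine similarity between input gradients of a noisy linear model under two independent draws of the multiplicative noise $n = \mu + \delta\sigma$; the similarity decays as $\sigma$ grows, which is why the lemma lower-bounds the variance:

```python
import numpy as np

def grad_cosine(sigma, dim=512, trials=200, seed=0):
    """Mean cosine similarity between input-gradients of f(x) = (w * n) @ x
    under two independent draws of the multiplicative noise n = mu + d * sigma."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    mu = np.ones(dim)              # multiplicative noise is centred at 1
    sims = []
    for _ in range(trials):
        n1 = mu + rng.standard_normal(dim) * sigma
        n2 = mu + rng.standard_normal(dim) * sigma
        g1, g2 = w * n1, w * n2    # gradient of f w.r.t. the input x
        sims.append(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))
    return float(np.mean(sims))

low, high = grad_cosine(0.01), grad_cosine(1.0)
# A small sigma keeps resampled gradients nearly identical; a large sigma
# decorrelates them, which is the attacker-facing effect Lemma 1 targets.
```

For this toy model the expected similarity is roughly $1/(1+\sigma^2)$, so a lower bound on $\sigma$ directly caps how well an attacker's sampled gradient matches the gradient at inference.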
Rebuttal 1:
Rebuttal:
### Re Lack of assumptions
Thanks for the suggestions regarding the lack of rigor in our theory. The derivation of Lemma 2 requires that the nonlinear layer satisfies the conditions mentioned in Equation 14 of the supplementary material. In addition, the lower bound of $\sigma$ should be greater than 0 to ensure the randomness of the random weights. We will add this section to the text in the final version to make the theory as rigorous as possible.

### Re Lack of black-box attacks
Thanks for the suggestion. We include more evaluations against black-box attacks, such as Square and Pixle. We conduct the evaluation with ResNet-18 on CIFAR-10. The results are shown in the following table:

| Method | Square | Pixle |
|------------|--------|-------|
| baseline | 54.68 | 8.10 |
| CTRW(ours) | 77.73 | 72.14 |

As shown in the table, our proposed algorithm achieves better performance under black-box attacks, especially under the Pixle attack.

---

Rebuttal Comment 1.1:
Comment: Thank you again for your comments and suggestions. We would like to know if we have addressed your issues. Meanwhile, if you have any other concerns, we are open to further discussion.
Summary: The authors attempt to analyse the effect of the design of noise in random networks on the network and to improve the network's adversarial robustness. The authors suggest designing an interval to constrain the intensity of the noise within a desirable range. The authors conducted experiments based on CNNs and the results support these conclusions.

Strengths:
S1) The paper attempts to answer the question of the relationship between adversarial robustness of stochastic networks and noise. This is one of the topics of interest to the NeurIPS audience.
S2) This paper provides a mathematical analysis that explains, from a theoretical point of view, the effect of random noise strength on network performance. The results of the analysis are also interesting: an interval is used to constrain the noise to achieve relatively desirable results.
S3) This work shows that the optimisation of noise in random networks has a large impact on network adversarial robustness. Their success also provides an interesting way of thinking about the problem of random network design.
S4) The results are interesting and intuitive, and consistent with the theoretical analysis.

Weaknesses:
W1) Lack of a code availability statement, although the network structure is relatively simple.
W2) Lack of evaluation under non-adversarial training may be a potential weakness, but existing evaluations are sufficiently representative.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the Weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a solid paper that presents a complete analysis and interesting results. The paper provides novel ideas for random network design.
As an improvement, a link to the code is needed to more adequately support the data in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
### Re Code availability
Thanks for this nice concern about code availability. According to this year's rules, the code for our method has been provided to the Area Chairs in the form of an anonymous link.

### Re Lack of experiments
Thanks for this nice concern. It is interesting to see the performance without adversarial training. We conduct the evaluation with ResNet-18 on CIFAR-10. All the algorithms are trained with natural training. The results are shown in the following table:

| Method | Natural | $PGD^{20}$ |
|------------|---------|------------|
| baseline | 84.17 | 0.00 |
| CTRW(ours) | 84.61 | 3.44 |

As shown in the table, our proposed algorithm achieves better adversarial robustness than the baseline, which demonstrates its effectiveness. Compared with the results after adversarial training, the results without adversarial training are not competitive. Thus, adversarial training is still a necessary training strategy in our algorithm.

---

Rebuttal Comment 1.1:
Comment: Thank you again for your comments and suggestions. We would like to know if we have addressed your issues. Meanwhile, if you have any other concerns, we are open to further discussion.
Summary: This paper proposes to incorporate random weights into the optimization to exploit the potential of randomized defense. A theoretical analysis of the connections between randomness parameters and gradient similarity, as well as natural performance, is also provided. The method is evaluated on several datasets and benchmark CNNs to verify its utility.

Strengths: The paper addresses a novel task and presents a unique method to handle it. The paper is well written and easy to follow. Theoretical analysis is provided, which can offer more insights to the field. The reported results are impressive.

Weaknesses: The method is only evaluated on CNN-based architectures. Since transformers have been widely adopted for both vision and language tasks, it would be helpful to provide results on transformer-based architectures. From Figure 3(c), we can see that for ResNet-18, as the number of PGD attack steps increases, the robust accuracy first quickly decreases and then increases slowly after about 10 steps; can you provide some explanation for this phenomenon? The authors state that, to verify that the results are not accidental, they repeated the evaluation on CIFAR-10 multiple times, with the results illustrated in Table 5. However, how many times the experiments were repeated should be explicitly stated in the paper.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: More analysis should be provided for the curves of the robust accuracy for ResNet-18 in Figure 3(c). The number of repetitions for the experiments illustrated in Table 5 should be explicitly presented.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
### Re Experiments on ViT
Thanks for the suggestions regarding the experimental completeness of this work. Our proposed algorithm can be readily deployed to other neural networks, such as ViTs. For illustration, we deploy CTRW on ViT-S and evaluate the performance on CIFAR-10. The results are shown in the following table:

| Method | $CW^{20}$ | $PGD^{20}$ |
|------------|-------|------------|
| baseline | 34.62 | 33.49 |
| CTRW(ours) | 45.21 | 45.68 |

As can be seen from these results, our proposed algorithm can be easily adopted by different neural networks. On ViTs, it achieves better adversarial robustness than the baseline.

### Re Trends in Robust Accuracy
The trend in robust accuracy shown in Fig. 3 is typical of methods designed around path divergence. The method designed in this paper effectively creates a difference between the path taken at attack time and the path taken at inference time. When the number of PGD steps is small, the gradient ascent is not yet sufficient; as the number of steps increases, the ascent follows a path very close to the direction of the optimization path, which has a greater impact on the network, so adversarial robustness gradually decreases. However, when the number of PGD steps is large, the gradient ascent is already sufficient, and with further steps the direction of each gradient step diverges more and more from the corresponding gradient at inference, so the attack becomes less and less effective.

### Re Number of Experiment Repetitions
Thanks for this nice concern. All experiments in this paper were repeated 10 times and averaged. We will clearly state this in the final version.

---

Rebuttal Comment 1.1:
Comment: Thank you again for your comments and suggestions. We would like to know if we have addressed your issues. Meanwhile, if you have any other concerns, we are open to further discussion.
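For reference, the $PGD^{k}$ attack referred to throughout these evaluations is the standard projected-gradient iteration. Below is a minimal, self-contained NumPy sketch on a toy logistic model; the model, step size, and radius are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, eps=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD on a toy logistic model p = sigmoid(w @ x), y in {0, 1}.

    Each step ascends the sign of the input gradient of the cross-entropy
    loss, then projects back into the eps-ball and the valid pixel range."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad = (p - y) * w                        # dL/dx of BCE for this model
        x_adv = x_adv + alpha * np.sign(grad)     # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in the pixel range
    return x_adv

rng = np.random.default_rng(0)
w = rng.standard_normal(100)
x = rng.uniform(0.0, 1.0, 100)
x_adv = pgd_linf(x, y=1, w=w)
# More steps refine the ascent direction; the perturbation never exceeds eps.
```

The step count is the superscript in $PGD^{20}$; the trend discussed above is about how these successive ascent directions relate to the (noisy) inference-time gradient.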
NeurIPS_2023_submissions_huggingface
2,023
Unlocking Deterministic Robustness Certification on ImageNet
Accept (poster)
Summary: This paper addresses the issue of certified accuracy with deterministic approaches by utilizing the Lipschitz property of neural networks. The authors propose a novel layer called LiResNet, which enables easy computation of the Lipschitz constant. Additionally, they introduce a new loss function called Efficient Margin Maximization (EMMA), which stabilizes robust training by simultaneously penalizing worst-case adversarial examples from all classes. Finally, the authors conduct experiments on CIFAR-10/100, Tiny-ImageNet, and ImageNet datasets, demonstrating that their approach achieves state-of-the-art results in certified accuracy under $\ell_2$ perturbations and $\epsilon = 36/255$.

Strengths:
- The paper is clear and well-written.
- Improving certified accuracy with deterministic approaches is clearly an important problem and the improvement over state-of-the-art is impressive.

Weaknesses: While the experimental results are good and interesting, the paper suffers from overclaiming important contributions and not discussing important related works.
1. **Overclaim on Lipschitz Residual layer**: The claim that the paper is the first to propose a Lipschitz Residual layer is incorrect. The CPL layer [1], and more recently the SLL layer [2], are two previous works that have introduced a 1-Lipschitz _Residual_ layer with ReLU activation. The authors mention these two papers in the related work without discussing their contribution. The authors should discuss these works.
2.
**On the efficiency and tightness of the approach**: In the abstract, the authors have written: "We show that fast ways of bounding the Lipschitz constant for conventional ResNets are loose, and show how to address this by designing a new residual block" - **On the efficiency of the approach**: I don't see how the approach is more efficient than previous works, since the authors just use the power iteration (verified in the code provided in the supplementary material as this is not mentioned in the paper or the appendix). The power iteration has been used in three previous works [3, 4, 1]. We can also note that the authors use a power iteration with 10 iterations (default value in the code) while [3,4,1] showed that using only 1 iteration was sufficient and more efficient. Therefore, I don't see how the author’s approach is "more efficient" than previous work. - **On the tightness of the bound**: It is true that the value of the Lipschitz constant of LiResNet is tighter than the one from a Residual Layer with nonlinearity, in fact, the power iteration computes the _exact_ Lipschitz of LiResNet, since LiResNet is a _linear_ layer -- the PM computes the Lipschitz of the map $x \mapsto (I + W) x$. Computing the Lipschitz of a Residual layer with a nonlinearity leads to a looser bound, because the nonlinearity allows for a lower Lipschitz. The authors should clarify this. 3. **On the new EMMA loss**. The Efficient Margin MAximization (EMMA) loss introduced in the paper is very similar to the one provided by [3]. The authors discuss the difference between their loss and the one from [3] in Appendix A, arguing that the main difference is the use of the _Lipschitz constant for each margin_ while [3] uses the _global Lipschitz constant_ (I believe this paragraph deserves to be in the main paper). 
This is true, however, the use of the Lipschitz constant for each margin has been proposed twice before [5, 6] (the last layer normalization reduces to the same Lipschitz), therefore, the EMMA loss is a simple combination of two known approaches. 4. **On the experiments**: Table 1 presents results only for the perturbation level of $\epsilon = 36/255$. The authors should present results for a multitude of values to show the overall robustness of their approach (or make a graph with certified accuracy vs \epsilon). A large body of work [1, 2, 6, 7, 8, 9, 10] has presented the certified robustness of their model for at least 3 or 4 perturbation thresholds. I would like to see the same comparison. In Table 2, the authors talk about "VRA performance (%)". What does VRA performance (%) mean? Again the authors should provide VRA with respect to a specific perturbation and not assert that the overall robustness of a model can be computed with only one single perturbation threshold. **Conclusion**: The paper feels like a patchwork of several existing ideas and looks more like engineering work than research work. The paper has combined the work of [3] and [5,6] for the loss, used the same algorithm (power iteration) as [3,4,1] to compute the Lipschitz, and proposed a new linear layer that seems to have good properties. They used several tricks (epsilon scheduler presented in Appendix B1, normalization trick presented in Appendix B2) without ablation study. With all this, they showed that it is possible to improve the certified accuracy for $\epsilon = 36/255$ and managed to achieve very good certified accuracy on ImageNet, which has not been done before. I find the results of this paper interesting because I think the overall problem is interesting and important, but I think the paper (in its current form) has little impact on the topic of certified accuracy with deterministic approaches. 
To improve the research value of the paper, the authors should provide a comprehensive ablation study and explain how and why these different techniques, when combined together, significantly improve certified accuracy. [1] Meunier et al., A Dynamical System Perspective for Lipschitz Neural Networks ICML 2022 [2] Araujo et al., A Unified Algebraic Perspective on Lipschitz Neural Networks, ICLR 2023 [3] Tsuzuku et al., Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, NeurIPS 2018 [4] Farnia et al., Generalizable Adversarial Training via Spectral Normalization, ICLR 2019 [5] Leino et al., Globally-Robust Neural Networks, ICML 2021 [6] Singla et al. Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100, ICLR 2022 [7] Trockman et al., Orthogonalizing Convolutional Layers with the Cayley Transform, ICLR 2021 [8] Singla et al, Skew Orthogonal Convolutions, ICML 2021 [9] Prach et al., Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks, ECCV 2022 [10] Huang et al., Training certifiably robust neural networks with efficient local Lipschitz bounds, NeurIPS 2021 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The authors have combined many different techniques and tricks to achieve their certified robustness. An ablation study would be interesting to identify those that increase the certified robustness. - What is the performance of the author's architecture with an SLL layer that is 1 Lipschitz? - What is the performance of the author's architecture with the loss from [3]? How does the EMMA loss improve the VRA? The authors added some tricks for better training: the epsilon scheduler presented in Appendix B1 and the normalization trick presented in Appendix B2. - How does the epsilon scheduler affect the final certified accuracy? - How does the normalization trick affect the final certified accuracy? - What is the certified accuracy for other perturbation thresholds? (e.g. 
$72/255$, $108/255$, $1$ to allow comparison with other work) [3] Tsuzuku et al., Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, NeurIPS 2018 Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors discussed the limitation of robust classification. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
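As context for the Lipschitz discussion in this review: because the block the paper proposes is linear, the residual map $x \mapsto x + Wx$ has Lipschitz constant exactly $\|I + W\|_2$, whereas the triangle-inequality bound $1 + \|W\|_2$ typically used for a conventional residual branch can only be looser. A small hypothetical NumPy check of that inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
W = rng.standard_normal((n, n)) / np.sqrt(n)

# Exact Lipschitz constant of the linear residual map x -> x + W @ x ...
exact = np.linalg.norm(np.eye(n) + W, 2)
# ... versus the naive triangle-inequality bound for a residual branch.
naive = 1.0 + np.linalg.norm(W, 2)
```

Here `np.linalg.norm(A, 2)` returns the spectral norm (largest singular value), which for a linear map is its exact Lipschitz constant.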
Rebuttal 1:
Rebuttal:
> On the efficiency of the approach

Perhaps *efficient calculation* in the following sentence from the Abstract causes some degree of misunderstanding: "A key challenge in certifying ResNets is efficient calculation of the Lipschitz bound for residual blocks." We would like to revise it as, "A key challenge in certifying ResNets is **efficiently calculating a _tight_ Lipschitz bound for residual blocks.**" We are not claiming that using the power method is our contribution. The power method, which is used by a lot of prior work, is more efficient than orthogonalization, which is relatively expensive. However, as we point out, power iteration is not sufficient to provide _tight_ bounds on _traditional_ residual blocks. Our primary contribution is to identify the source of the problems faced by residual blocks, and to propose a new design for residual blocks that achieves the necessary properties to avoid these pitfalls. Taken together, our proposed LiResNet architecture can be seen as _enabling_ efficient and tight bound calculation with the power method, as the same results cannot be achieved using the same efficient bounds without our architecture (see the comparison of ResNet to LiResNet). We will comment more on the computational cost in the author comment.

> On the tightness of the bound

It is correct to say the PM computes an **exact** Lipschitz bound for $x+Wx$ if one uses the reparameterization trick to directly compute $Lip(I+W)$ instead of using the upper bound $1+Lip(W)$, which is often used in conventional ResNet blocks. Thanks for the suggestion; we will make this point clearer in the writing. Additionally, we would like to share more results to showcase how switching from the conventional ResNet to LiResNet increases the tightness of the bounds: we compare the empirical lower bound of the Lipschitz constant, obtained by maximizing $|f(x) - f(y)|/|x - y|$ w.r.t. randomly initialized $x$ and $y$ until convergence for a network $f$.
Here are results when $f$ is a LiResNet or a ResNet trained on CIFAR-10. "UB" (upper bound) is our Lipschitz constant estimation and "LB" is the lower bound from maximizing $|f(x) - f(y)|/|x - y|$.

| | LB | UB | LB / UB |
|----------|-------|-------|---------|
| LiResNet | 53.35 | 59.03 | 0.90 |
| ResNet | 23.93 | 101.4 | 0.24 |

From this table, we see that LiResNet can have a much tighter Lipschitz constant estimation than the conventional ResNet.

> On the new EMMA loss

EMMA could be considered simple, but our proposal is not the result of trivially combining existing works. LMT has existed for a long time but has not been shown to perform well, and few subsequent studies have used this loss function. Our paper chronicles a progressive evolution of methodologies leading up to the SoTA VRA, transitioning from LMT to EMMA. We have discovered that by using the Lipschitz constant of the margin and adjusting epsilon dynamically, there are consistent and incremental improvements in certifiable robustness. By delving into the training dynamics and observing how the second-highest classes (termed the "threatening classes") rotate throughout iterations, we have pinpointed a suboptimal aspect of the TRADES loss function. Furthermore, we illustrate why EMMA is especially advantageous for problems with a larger number of classes, where existing loss functions like TRADES can grapple with the phenomenon portrayed in Figure 2b. We hope that the intricacies and depth of our exploratory efforts in training for certified robustness are not overshadowed by the apparent simplicity of the technique we advocate.

> What is the performance of the author's architecture with an SLL layer that is 1-Lipschitz?

We take the reviewer's suggestion to replace linear residual blocks with SLL residual blocks and leave everything else the same (i.e. the whole network still has 12 convolution layers with 512 channels and 2 dense layers of 2048 neurons).
We train both models in the same way (without any epsilon scheduler, DDPM, etc.) and report the VRAs for a list of radii as suggested by the reviewer. For SLL models, we tried the models with 1) their default settings and 2) our optimization settings, and reported the best VRAs. Here are our results on CIFAR-10/100 (other results will be in the revision of the paper):

| | epsilon | Using LiResNet Block | Using SLL Block |
|-----------|---------|----------------------|-----------------|
| | 36/255 | 66.1 | 59.5 |
| CIFAR-10 | 72/255 | 54.1 | 50.6 |
| | 108/255 | 45.2 | 42.2 |
| | 36/255 | 37.5 | 30.2 |
| CIFAR-100 | 72/255 | 27.9 | 22.1 |
| | 108/255 | 22.1 | 16.5 |

> How does the epsilon scheduler affect certified accuracy?

Note that all of our ablation studies use the same epsilon scheduler setting. Thus, the difference between the VRAs of TRADES/ConvNet and EMMA/LiResNet in Table 2 reflects a difference that cannot be accounted for by the epsilon scheduler. On our largest model L12W512, turning the scheduler on introduces a 0.4% VRA improvement on CIFAR-10.

> How does the normalization trick affect certified accuracy?

We follow the same setting from [1, 2], which is a widely used method for normalization-free networks. Please note that this method is also applied by SLL (the diagonal scaling matrix q in Equation 8 of the SLL paper). Again, all of our ablation studies use this same setting. On our largest model L12W512, using the normalization-free method introduces a 0.8% VRA improvement on CIFAR-10.

[1] Shao et al. Is normalization indispensable for training deep neural networks, NeurIPS 2020
[2] Zhang et al. Fixup Initialization: Residual Learning Without Normalization. ICLR 2018

---

Rebuttal Comment 1.1:
Title: Thank you for this extensive rebuttal. Some further comments.
Comment: Thank you for this extensive rebuttal. Here are some further comments:
1.
**On the tightness of the bound**: thank you for providing this experiment; however, in my opinion, it is not even necessary to provide this comparison. The looseness of the bound on the residual layer with ReLU comes from the nonlinear activation; by removing it, the calculation of the Lipschitz constant of the layer becomes exact. It is actually straightforward. Nevertheless, a small discussion must be provided in the paper: when computing the exact bound, the PM can approximate the Lipschitz constant of the layer to arbitrary precision with respect to the number of iterations. A small number of iterations can be done during training, as the PM can converge progressively over the course of training. A large number of iterations can be performed during inference, as the values can be cached given that they are independent of the input.

2. **Comparison between SLL and LiResNet**: I strongly believe that this experiment was missing from the original version of the paper. It justifies why LiResNet is a good idea and useful. The remaining experiments are, in my opinion, further optimizations to improve the accuracy. I would suggest expanding the discussion on state-of-the-art ResNet-like Lipschitz layers (which is almost non-existent in the current version) and explaining the difference between LiResNet and SLL.

3. **Regarding the title of the paper**: The authors insist on the "depth" and on "unlocking certified accuracy on ImageNet". First, from Table 2 (b), the increase in certified accuracy with respect to the depth is minimal, so I would assume that certified accuracy clearly saturates very quickly. Furthermore, it is not very clear that depth allows for an increase in certified accuracy; it could also be the increase in the number of parameters. Therefore, I am not sure that the authors should focus on this specific parameter. Second, the title claims the first certified accuracy on ImageNet, which is incorrect.
Randomized smoothing has provided probabilistic certificates on ImageNet for a long time. What the authors have shown is the first method that provides certified accuracy on ImageNet with _deterministic_ certificates, and this is not emphasized enough. Maybe a better title would be: "Unlocking Deterministic Robustness Certification on ImageNet" or something similar. I will raise my score and encourage the authors to update the paper accordingly.

---

Reply to Comment 1.1.1:
Title: Thank you for your responses.
Comment: Thanks for providing us with more feedback! We would love to add the comparisons with SLL nets and the discussion on LiResNet vs. SLL nets to the main body of the paper once we are given a chance to update the writing. The discussion of the PM will also be added to the writing in the near future. We thank the reviewer for the suggestions on the title. "Unlocking Deterministic Robustness Certification on ImageNet" seems to be a good one. We will reword it a bit, but the reviewer's concern on the deterministic vs. probabilistic aspect makes sense to us. Thank you!
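The empirical lower bound discussed in the rebuttal above, obtained by maximizing $\|f(x)-f(y)\|/\|x-y\|$, can be sketched as follows; this hypothetical NumPy version uses random search over nearby pairs rather than the gradient ascent the authors describe:

```python
import numpy as np

def lipschitz_lower_bound(f, dim, trials=2000, seed=0):
    """Empirical lower bound on Lip(f): the largest observed ratio
    ||f(x) - f(y)|| / ||x - y|| over randomly sampled nearby pairs.

    The rebuttal maximizes this ratio by gradient ascent until convergence;
    random search is a weaker, dependency-free stand-in."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(dim)
        y = x + 1e-2 * rng.standard_normal(dim)   # a nearby pair (x, y)
        best = max(best, np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y))
    return best

# Example on a linear map, whose true Lipschitz constant is its spectral norm.
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 32))
lb = lipschitz_lower_bound(lambda v: A @ v, dim=32)
ub = np.linalg.norm(A, 2)   # any valid lower bound satisfies lb <= ub
```

The gap between such a lower bound and the certified upper bound is exactly the LB/UB ratio reported in the rebuttal's table.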
Summary: This paper proposes Linear ResNet (LiResNet) and the Efficient Margin Maximization (EMMA) loss for scalable training of provably robust deep neural network classifiers. With these two contributions, this work is able to achieve SOTA deterministic VRA on medium-to-large classification tasks.

Strengths:
- The paper is clearly written.
- The proposed method demonstrates strong empirical performance.

Weaknesses: I am willing to raise the score by 1 or 2 points if the authors answer my questions satisfactorily.

**Weakness 1: Ambiguity regarding the scalability of LiResNets.**
- The authors do not exactly pinpoint why LiResNets are scalable. In Table 2 (b), the authors show the VRA of ConvNets, ResNets, and LiResNets at various depths. Then, the authors just describe the table as-is, without any further discussion or analysis. Why do ConvNets diverge? Is it because of vanishing or exploding gradients? Why do ResNets have lower VRA than LiResNets? Is it because LiResNets enable tighter Lipschitz constant estimation? Is it because LiResNets allow faster convergence to optima? If there are multiple factors at work, how much does each factor contribute to the final VRA?

**Weakness 2: Hand-wavy logic regarding the rotating threatening class phenomenon and EMMA.**
- Section 5.2, line 282, "rotating threatening class phenomenon observed during training may contribute to suboptimal training." --> Table 2 (a) only shows VRA before and after applying EMMA. How do we know whether the performance gain comes from EMMA preventing the rotating threatening class phenomenon? Only a hand-wavy explanation in Section 4, lines 209-215, is given. I would appreciate a more rigorous theoretical or experimental analysis of EMMA.

**Concern 1: Representation power of LiResNets.**
- I get that LiResNet can admit tighter Lipschitz constant estimation. However, how scalable is LiResNet compared to ResNet in terms of clean accuracy when trained with cross entropy?
Doesn't using linear residual blocks reduce the representation power of ResNets? If LiResNets have less representation power than ResNets, wouldn't we eventually have to return to ResNets when we have better training techniques and aim for even higher VRA? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Question 1**: The power method is used throughout training to estimate the global Lipschitz constant. How many iterations of the power method are used at each step of training? How tight is the Lipschitz constant estimate? **Question 2**: It is widely known that there is an accuracy-robustness trade-off. TRADES offers a hyper-parameter to control that trade-off. Would it be possible to control the trade-off for EMMA as well? How does the accuracy-robustness pareto frontier for LiResNet+EMMA compare to other methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Discussed in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The authors do not exactly pinpoint why LiResNets are scalable ... Why do ConvNets diverge? Is it because of vanishing or exploding gradients? Yes, it is because of exploding gradients. Even for standard (non-robust) cross entropy training, ConvNets with 18 layers perform worse than ResNets. The task of certified robustness is more difficult than standard cross entropy training since the training objective has a strong regularization from the network Lipschitz estimation, making the training of ConvNets even more difficult. > Why do ResNets have lower VRA than LiResNets? Is it because LiResNets enable tighter Lipschitz constant estimation? ... If there are multiple factors at work, how much does each factor contribute to the final VRA? There might be other factors behind why LiResNets enjoy higher VRA than ResNets, but the tighter Lipschitz bound is definitely the dominating one. The Lipschitz constant upper bound for a conventional ResNet is much larger than the actual Lipschitz constant of the network, thus much of the model’s expressiveness is wasted. You can think of it as though a ResNet is trained to be robust at a much larger noise radius, but we can still only verify at a small noise radius. This also means ResNets will suffer greatly from over-regularization during certified training. In our experiments, the ResNets had converged when we reported their VRAs, so we suspect that convergence speed is not a major factor. More discussion on ResNet vs. LiResNet performance is included in our updated paragraph above. > Section 5.2, line 282, "rotating threatening ..." --> Table 2 (a) only shows VRA before and after applying EMMA. How do we know whether performance gain comes because EMMA prevents the rotating threatening class phenomenon? .... I would appreciate a more rigorous theoretical or experimental analysis of EMMA. Thanks for pointing out that there might be missing steps in our line of reasoning.
By plotting the percentage of samples whose penalized non-label logits change in two consecutive epochs, Figure 2b locates the **rotating threatening class (RTC)** issue in the existing GloRo training with TRADES loss. **We ran the same experiment with EMMA loss and plot the result.** Please check Figure 1 in the attached PDF file in the global rebuttal section. We found that RTC happens to fewer training points when using EMMA loss, compared to TRADES loss. We hereby empirically validate that EMMA loss helps to reduce the frequency of RTCs during training, mitigating the suboptimality issue raised in the paper. Mitigating RTC is expected to help GloRo Nets converge more smoothly and quickly to a higher VRA, which is later evidenced by Table 2(a). Taking Figure 2b and Table 2 from the paper together with the new plot (Figure 1 in the attached PDF) comparing EMMA loss and TRADES loss, we demonstrate an empirical correlation between the mitigation of RTC and the improvement in VRA. We will also clarify this line of reasoning in the writing. > How scalable is LiResNet compared to ResNet in terms of clean accuracy when trained with cross entropy? Doesn't using linear residual blocks reduce the representation power of ResNets? It is not clear that a LiResNet is meaningfully less expressive than a traditional ResNet. Here we provide some results comparing LiResNet and ResNet on ImageNet classification, which show that LiResNet is capable of achieving similar performance to a ResNet and a VGG net at a similar network depth. | | number of layers | Top 1 accuracy | |------------------|------------------|----------------| | LiResNet | 18 | 73.3% | | VGG | 19 | 74.2% | | ResNet | 18 | 69.8% | > How many iterations of the power method are used at each step of training? How tight is the Lipschitz constant estimate? We use 10 iterations during training.
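For concreteness, the power method referenced here can be sketched as follows. This is a generic spectral-norm estimator for a dense weight matrix, not the authors' exact training-time implementation (which bounds convolutional layers during training):

```python
import numpy as np

def power_method_sn(W, n_iter=10, seed=0):
    """Estimate the spectral norm (largest singular value) of W
    by power iteration, alternating applications of W and W^T."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    # Rayleigh-quotient estimate; it never exceeds the true sigma_max.
    return float(u @ (W @ v))

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))
estimate = power_method_sn(W, n_iter=10)
exact = np.linalg.norm(W, 2)  # true spectral norm, for comparison
```

Note that the Rayleigh-quotient estimate converges to the true spectral norm from below, so a certification pipeline either runs enough iterations for the residual gap to be negligible or accounts for it explicitly.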
We empirically verified the tightness of our Lipschitz constant estimation of the entire network by optimizing |f(x) - f(y)|/|x - y| w.r.t. x and y on the network f. Here are the results of our trained LiResNet models on 4 datasets, which show that the upper bound is fairly tight. “UB” (upper bound) is our Lipschitz constant estimate and “LB” is the lower bound from maximizing |f(x) - f(y)|/|x - y|. | | ImageNet | CIFAR-10 | CIFAR-100 | Tiny-ImageNet | |----|----------|----------|-----------|---------------| | LB | 1.12 | 53.35 | 12.95 | 7.16 | | UB | 1.54 | 59.03 | 14.03 | 7.73 | > It is widely known that there is an accuracy-robustness trade-off. TRADES offers a hyper-parameter to control that trade-off. Would it be possible to control the trade-off for EMMA as well? Yes, we can manage this trade-off. For example, we can use a weighted sum of the EMMA loss and the cross entropy loss: CE-loss + k * EMMA-loss for some hyper-parameter k. To keep our method simple, we only present the EMMA loss in the paper. > How does the accuracy-robustness pareto frontier for LiResNet+EMMA compare to other methods? It is hard to theoretically compare the Pareto frontiers of our method and other methods. However, we want to emphasize an important difference between our work and other works like CPL and SLL. While these methods enforce Lipschitz regularization by imposing constraints on the weights, we opt to regulate the Lipschitz constant through the loss function, as in Leino et al. (2021). Regularizing Lipschitz constants through the loss has a potential advantage: it enables the learning of robust models with various Lipschitz constants. Should 1-Lipschitz nets prove to be the optimal choice for certified robustness within certain data distributions, our models retain the ability to learn such functions. In our empirical evaluations in Table 1, our method is better than the SoTA (CPL and SLL) in both clean accuracy and VRA with a smaller model size.
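The empirical lower-bound procedure described above, maximizing |f(x) - f(y)|/|x - y|, can be sketched on a toy one-layer network. Here random nearby pairs stand in for the authors' gradient-based optimization over x and y (a simplifying assumption for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((16, 16))
f = lambda x: np.tanh(W @ x)  # toy one-layer network

# Empirical Lipschitz lower bound: max over sampled pairs of
# ||f(x) - f(y)|| / ||x - y||. Nearby pairs probe the local slope.
lb = 0.0
for _ in range(2000):
    x = rng.standard_normal(16)
    y = x + 1e-3 * rng.standard_normal(16)
    lb = max(lb, np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y))

# Analytical upper bound: tanh is 1-Lipschitz, so Lip(f) <= ||W||_2.
ub = np.linalg.norm(W, 2)
```

Any such sampled ratio is a valid lower bound on the true Lipschitz constant, so lb can never exceed ub; the gap between the two is exactly what the LB/UB comparison above measures for the trained LiResNets.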
--- Rebuttal Comment 1.1: Title: Updated Score Comment: The authors have answered my questions satisfactorily, and I have raised the score by a point. --- Reply to Comment 1.1.1: Comment: Thanks for reading our response and increasing the score.
Summary: - The paper investigates Lipschitz-based certification of neural networks. - Authors aim to certify ResNets by extending the techniques from GloRo. - Authors note that it is difficult to come up with a tight approximation of the residual block - So authors replace the non-linear residual block with a linear block. They are then able to adapt the GloRo results - Authors add the non-linearities after the linear residual block. Strengths: 1. Good writing - The paper is well written. I could follow all the sections in one pass. - Authors have now also added sections discussing transformers and complications with verifying them. 2. Thorough experiments - A wide range of experiments are conducted on various datasets. - The experimental results look decent. - I have a question regarding the clean accuracy, which I have added in the questions section. 3. Good problem - Certifying residual blocks is definitely worth doing. - As authors mentioned, residual connections are also used in Transformers. Weaknesses: 1. about the effectiveness of LiResNet - Main trouble in verification comes from the non-linearity. It feels natural that removing non-linearities will make things easier. The authors are not really verifying the resnet as such. - If $x + conv(x)$ can be written as a conv layer, then are you really verifying residual connections or just verifying a linear layer? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Regarding the clean accuracy, why is it higher than the baselines? - Can you compare the numbers with networks of the same size? - This will explain whether having such a network (as opposed to the usual ResNet) is useful more broadly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, limitations are discussed well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. Our answers to your questions are below. > Main trouble in verification comes from the non-linearity. It feels natural that removing non-linearities will make things easier. The authors are not really verifying the ResNet as such. We would like to highlight that the main issue with Lipschitz-based approaches on conventional ResNets does not originate from the **nonlinearity** of the module; it comes from bounding the addition of the residual branch and the skip connection $y = x + f(x)$, which is a linear part in a ResNet block. The Lipschitz upper bound of such a linear combination, $Lip(f+g)\leq Lip(f) + Lip(g)$, is shown in the paper to be loose and motivates our design of the linear residual block. The characterization that we are “not verifying ResNet” is inaccurate, given that LiResNets consist of the classic skip connections that enable gradients to flow back to the deeper part of the model without vanishing issues, which is the defining feature of a “ResNet” architecture; though of course, the LiResNet architecture is a specific innovation that achieves desirable properties not shared with a traditional ResNet architecture. > If x + conv(x) can be written as a conv layer, then are you really verifying residual connections or just verifying a linear layer? We are verifying the entire network, not just a linear layer. Moreover, because the LiResNet blocks are the equivalent of x + conv(x), **these layers do contain residual connections, which is why the LiResNet is a residual network. It is merely a reparameterization technique to write these layers as a convolution**; however, _this does not justify saying that this makes it just a convolutional network_, as the reparameterization acts as a constraint on which convolutions will be learned. 
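The reparameterization point can be checked numerically. The following is a hypothetical single-channel sketch (the paper's blocks use multi-channel convolutions): adding an identity (Dirac delta) kernel at the center turns x + conv(x) into a single convolution computing the same function.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2D cross-correlation with zero 'same' padding."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

y_residual = x + conv2d_same(x, k)    # residual form: x + conv(x)

k_reparam = k.copy()
k_reparam[1, 1] += 1.0                # add the identity kernel at the center
y_single = conv2d_same(x, k_reparam)  # one convolution, same function
```

The two forms are functionally identical, but training the residual form constrains which convolution is learned and lets gradients flow through the skip path, which is the point made above.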
This is exactly analogous to how a convolutional layer can be written as a dense layer, while clearly a convolutional network is not the same as a dense network in any meaningful way. Our ablation study (LiResNet vs. ConvNet) shows that this reparameterization indeed has better training properties than directly training a ConvNet. > Regarding the clean accuracy, why is it higher than the baselines? We would appreciate it if the reviewer could specify which result (e.g., in which table) the question refers to. If the question is general, we argue that LiResNets facilitate a better trade-off between expressiveness (depth) and tight Lipschitz bounds, allowing us to learn a robust function with less over-regularization; thus the clean accuracy is also higher. > Can you compare the numbers with networks of the same size? In our ablation studies, we do compare the numbers with networks of the same size. When comparing with previous work, it is hard to match sizes exactly since the backbones are different. In Table 1, we compare our results with the best reported results from previous work. Specifically, compared to SLL, the current state of the art, our method achieves better VRA with a much smaller model size (in terms of parameters), which shows the effectiveness of our method. See the following table for a comparison of VRAs under eps=36/255 between ours and SLL, summarized from our paper. | Models | Model Size | CIFAR-10 | CIFAR-100 | Tiny-Imagenet | |-------------|------------|----------|-----------|---------------| | LiResNet L12W512 | 49M | 70.1 | 41.5 | 33.6 | | SLL Small | 41M | 62.6 | 34.7 | 19.5 | | SLL X-Large | 263M | 65.8 | 36.5 | 23.2 | Here is a comparison of training speed. Our largest model and the smallest SLL model (SLL Small) have a close training throughput, while the largest SLL model is 4.5 times slower than our LiResNet. 
| | SLL Small | SLL X-Large | LiResNet L12W512 | |--------------------------------|-------|-------------|----------| | Training speed (images/second) | 3943 | 813 | 3805 | --- Rebuttal Comment 1.1: Title: Comment Comment: Hi, Thanks to the authors for their response. 'We would like to highlight that main issue with the Lipschitz-based approaches on conventional ResNet is not originated from the nonlinearity of the module – it comes from bounding the addition of the residual branch and the skip connection, which is a linear part in a ResNet block.' If the problem is not coming from the non-linearity, then why are you able to get a tighter bound when you replace g(x) with conv(x)? For r(x) = x + g(x), as long as g is a linear layer or a conv layer, you can get a tight bound. As soon as g(x) contains a non-linearity, your bound is loose. So you replace it with a simple linear/conv layer. Is this not what you do? 'This is exactly analogous to how a convolutional layer can be written as a dense layer, while clearly a convolutional network is not the same as a dense network in any meaningful way.' Yes, it can. In verification papers, for deriving bounds for conv layers, they are indeed treated as linear layers. But there the non-linearity comes before the residual connection, not after. If you take r(x) = x + g(x), where g(x) is a conv and a non-linearity, then this cannot be written as a linear layer. Regarding the clean accuracy, why is it higher than the baselines? This is an important question. Firstly, I would like to decouple the architecture from the method. Secondly, I don't quite understand why the network has better accuracy than a normal ResNet. Is the claim that your network is better than ResNet? That is a very strong claim. I barely understand what this means: 'facilitate a better trade-off between expressiveness (depth) and tight Lipschitz bounds, allowing us to learn a robust function with less overregularization; thus the clean accuracy is also higher.' 
Can you compare the numbers with networks of the same size? I would like to understand in more detail what is causing the difference in results. This relates to my previous question. I have already seen the results from the paper. --- Reply to Comment 1.1.1: Title: Response to the follow-up questions (1/2) Comment: _High-level Comment_ Based on the discussion, we believe there is confusion regarding the primary contribution of this work. We are not proposing a new certification technique. Rather, we - (1) identify a fundamental problem for certified training of ResNets, and - (2) propose a new type of residual architecture that solves this problem. The LiResNet architecture is specifically made to make existing certification techniques (e.g., GloRo) more effective and scalable, and we show that it can lead to significant increases in the SoTA for deterministically certified accuracy. _Responses to Questions_ > If the problem is not coming from the non-linearity, then why are you able to get a tighter bound when you replace g(x) with conv(x)? For r(x) = x + g(x), as long as g is a linear layer of a conv layer, you can get a tight bound. As soon as g(x) contains a non-linearity, your bound is loose. So you replace it with a simple linear/conv layer. Is this not what you do? Perhaps we misunderstood your original question. It is true that we remove the nonlinearity from inside the residual block (making the residual branch linear) and place it after the skip connection is joined. But this does *not* remove the nonlinearities from the network as a whole, so they can still introduce looseness, but not in a way that is as problematic as when the bound is estimated as the sum of the residual and skip branches. We suspect the major concern here is about the term “ResNet.” It is true that we are not verifying the “conventional” ResNet as proposed in [1]; however, our LiResNet block maintains the skip connections, which is why this is a true residual architecture. 
E.g., according to the original authors in [1]: “Formally, in this paper we consider a building block defined as: $y = F(x, \{W\}) + x$." Notably, there is no stipulation that $F$ must be nonlinear. The most conventional ResNet, “[in] the example in Fig. 2..., $F = W_2\sigma(W_1 x)$", is simply an instantiation of the general residual framework. Regardless, the goal in this work is not to certify a specific architecture, but rather to make the benefits of residual networks possible in the context of certified training, in order to train deeper certified networks with higher VRA. We will try to clarify the writing so there is no confusion over what we mean by “verifying ResNet.” > 'This is exactly analogous to how a convolutional layer can be written as a dense layer, while clearly a convolutional network is not the same as a dense network in any meaningful way.' Yes, it can. In verification papers, for deriving bounds for conv layers, they are indeed treated as linear layers. but there the non-linearity comes before the residual connection not after. If you take r(x) = x + g(x), where g(x) is a conv and non-linearity, then this cannot be written as a linear layer. We believe you might have misunderstood our reply. Any conv layer can be written as a dense layer. But this doesn’t mean there is no meaningful difference between a conv layer and a dense layer. This is just as true for a LiResNet block. It can be written as a conv (or dense) layer by reparameterization, but it adds an additional constraint that allows the gradient to flow through the skip connection, thus it is meaningfully different from a conv layer. By saying there is a meaningful difference, we don’t mean it requires a new certification technique. 
Of course, LiResNet blocks can have their LC bounded in the same way as other linear layers (e.g., convolutions), but note that the novel part of our work is not the use of the power method for obtaining the bound on the LC of a layer, which has been used by prior work.
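To make the looseness of the additive bound discussed in this thread concrete, here is a toy numerical sketch in the purely linear case; with a nonlinear residual branch the exact constant is intractable and the gap only grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
W = rng.standard_normal((n, n)) / np.sqrt(n)

# Conventional bound for y = x + Wx via Lip(f + g) <= Lip(f) + Lip(g):
naive_bound = 1.0 + np.linalg.norm(W, 2)

# Exact Lipschitz constant of the combined linear map x -> (I + W)x:
exact = np.linalg.norm(np.eye(n) + W, 2)
```

Across many stacked blocks these per-block gaps compound multiplicatively, which is why conventional ResNets suffer the over-regularization described earlier in the thread.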
Summary: This paper aims to scale up deterministic certified robustness to deeper neural networks (ResNet) and more complicated datasets (ImageNet). To this end, the authors proposed a new residual block named LiResNet and a new loss function named Efficient Margin Maximization. The proposed method achieves state-of-the-art results on various benchmarks. Strengths: This paper is very well written and easy to follow. Each design choice is well motivated and thorough experiments demonstrated the effectiveness of the proposed methods. - The proposed LiResNet architecture is a simple but effective technique to bypass the difficulty in certifying ResNet. Experiments also showed its scalability to deeper networks. - The new loss is well motivated by the experimental finding that the inconsistency of the threatening classes increases when the number of classes increases. - Comprehensive experiments on various datasets have shown the effectiveness of the proposed methods and established a new state-of-the-art. Weaknesses: - Despite the effectiveness of LiResNet in certified robustness, it may still be less expressive and scalable than the actual ResNet. It would be good to discuss the limitation of the LiResNet architecture: what do we sacrifice by "linearizing" the skip connection? - The newly proposed loss is well motivated, but it seems that in the experiments there is no ablation study on it. It'd be good to include GloRo + LiResNet but without EMMA to show the improvement of EMMA compared to the GloRo loss. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is there any normalization layer in the LiResNet architecture? There seems to be none in Figure 2(a). It is a bit surprising that it can scale so well in depth without any normalization layers. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. Our answers to your questions are below. > Despite the effectiveness of LiResNet in certified robustness, it may still be less expressive and scalable than the actual ResNet. It would be good to discuss the limitation of the LiResNet architecture: what do we sacrifice by "linearizing" the skip connection? It remains an open question whether a LiResNet is meaningfully less expressive than a conventional ResNet. Here we provide some results comparing LiResNet and ResNet on ImageNet classification, which show that LiResNet is capable of achieving similar performance to a ResNet and a VGG net at a similar network depth. We follow the standard training settings to train classification models on ImageNet. | | number of layers | Top 1 accuracy | |------------------|------------------|----------------| | LiResNet | 18 | 73.3% | | VGG [1] | 19 | 74.2% | | ResNet [2] | 18 | 69.8% | Perhaps a good way of thinking about this is the following: suppose we fix the number of convolutions in a network, but vary how many are placed inside each residual block. At one extreme, all the convolutions are in a single block, so we essentially have a ConvNet. At the other end, we have a LiResNet, with only one convolution per block (a traditional ResNet would usually have two or three). While the additional skip connections can be seen as additional constraints on the function learned by the model, it seems clear that most of the expressiveness comes from the depth of the backbone regardless of the block size. In this view, the ConvNet is the most expressive, but we rarely worry that a ResNet has insufficient capacity by comparison, because the skip connections ultimately allow us to train a deeper network that wouldn't be possible with a ConvNet. > The newly proposed loss is well motivated, but it seems that in the experiments there is no ablation study on it. 
It'd be good to include GloRo + LiResNet but without EMMA to show the improvement of EMMA compared to the GloRo loss. We include an ablation study for the loss in Table 2 (a), where we compare EMMA and the TRADES loss (proposed by Leino et al. [3] as the default loss in their work). The results are most notable on the tasks with more classes, as hypothesized. > Is there any normalization layer in the LiResNet architecture? There seems to be none in Figure 2(a). It is a bit surprising that it can scale so well in depth without any normalization layers. Yes, as the reviewer observed, LiResNet is a normalization-free architecture. In fact, to the best of our knowledge, several works [3, 4, 5] on training certifiably robust models end up not using any normalization layers. There are a few reasons. Firstly, one of the widely used layers, layer normalization, is not Lipschitz-continuous, so it does not fit into Lipschitz-based methods. Secondly, there are diminishing returns in using batch normalization, both in our experiments and in other papers [3, 4]. The following paragraph provides some explanations for the normalization-free choice. While the reason for this is not 100% clear to the community, there is a large body of work on normalization-free methods, and some work suggests that gradient norm preservation (GNP) may play a similar role to normalization. GNP has been found to be a fundamentally important building block for certifying networks and can be realized by using MinMax or GroupSort activations [6]. It may help to stabilize the internal activations by stabilizing gradients. Besides GNP, there is another line of work studying parameter initializations to realize normalization-free training [7, 8]. Their insights result in some specific ways of initializing the residual block, detailed in Appendix B.2. [1] Simonyan et al., Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR 2015. 
[2] He et al., Deep Residual Learning for Image Recognition, CVPR 2016. [3] Leino et al., Globally-Robust Neural Networks, ICML 2021. [4] Trockman et al., Orthogonalizing Convolutional Layers with the Cayley Transform, ICLR 2021. [5] Huang et al., Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds, NeurIPS 2021. [6] Anil et al., Sorting out Lipschitz Function Approximation, ICML 2019. [7] Shao et al., Is Normalization Indispensable for Training Deep Neural Networks?, NeurIPS 2020. [8] Zhang et al., Fixup Initialization: Residual Learning Without Normalization, ICLR 2019. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. Although LiResNet is not as expressive as ResNet, it is a step towards scaling up certified robustness to larger architectures and more complicated datasets. Therefore, I lean towards accepting the paper.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable reviews. Here we respond to some questions raised by more than one reviewer. - Expressiveness of LiResNet in standard training We train LiResNet on ImageNet in the standard cross entropy setting and find that LiResNet is capable of achieving similar performance to a ResNet and a VGG net at a similar network depth. We follow the standard training settings to train classification models on ImageNet. Table 1 in the attached PDF shows the comparison with ResNet-18 and VGG-19. - Comparison with prior work of the same size When comparing with previous work, it is hard to match sizes exactly since the backbones are different. In Table 1, we compare our results with the best reported results from previous work. Table 2 in the attached PDF shows the comparison of LiResNet and SLL (the current state of the art). SLL Small has a similar model size and training speed to our largest LiResNet L12W512, while SLL X-Large is 4 times bigger and 4.5 times slower than our model. Our method still outperforms SLL X-Large under eps=36/255. The training speed is obtained using the official code from SLL and the exact setting from the SLL paper. All experiments are conducted on the same 4-GPU machine. We further compare with SLL in a more fair setting (thanks to Reviewer YimP for the motivation). We simply replace our proposed LiResNet block with the SLL layer and use the same backbone setting: 12 convolutions with 512 channels and 2 linear layers with 2048 dimensions. In this comparison, our method and SLL have the same model size. Table 3 shows the comparison on CIFAR-10 and CIFAR-100 under different epsilons, and our proposed LiResNet block performs consistently better. In this comparison, we do not use epsilon scheduling, as Reviewer YimP asked about. For the SLL models, we tried 1) their default settings and 2) our optimization settings, and reported the best VRAs. 
Pdf: /pdf/4e95ef85ffcb0276761efdb41718b55320d78455.pdf
NeurIPS_2023_submissions_huggingface
2023
Hyperbolic Graph Neural Networks at Scale: A Meta Learning Approach
Accept (poster)
Summary: This paper introduces the model, Hyperbolic GRAph Meta Learner (H-GRAM), that learns transferable information from a set of support local subgraphs using hyperbolic meta gradients and label hyperbolic protonets to enable faster learning over a query set of new tasks on disjoint subgraphs. The model is evaluated on the downstream tasks of both node classification and link prediction. The experiments and ablation studies show that H-GRAM effectively learns and transfers information in few-shot settings and outperforms its Euclidean counterparts. Strengths: In general, the paper is well written and the model introduces some novel contributions. Further, regarding the experimental results, the model seems to outperform all the baselines consistently on the tasks of link prediction and node classification. The ablation studies from varying the base HNN model and deleting individual meta-learning components are informative to better understand the influence of the various components in the model and the final proposed architecture. Weaknesses: - In the Related Work section on Hyperbolic Neural Networks, the following important reference is missing regarding HNNs for large scale datasets (from the knowledge graph domain) from recent work: [KDD 2022] Dual-Geometric Space Embedding Model for Two-View Knowledge Graphs. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). Association for Computing Machinery, New York, NY, USA, 676–686. https://doi.org/10.1145/3534678.3539350 - In the problem setup section, can the authors more clearly explain the properties of the graph (e.g., directed/undirected, what do the nodes/edges represent etc.)? - Regarding experiments, it would be useful to see the model performance on both inductive as well as transductive tasks. Furthermore, can the authors provide more details on the dataset statistics such as number of vertices and edges and the domain of the dataset to get a better indication of the data size? 
- I also have a concern about the scalability of the model, especially since non-Euclidean embedding models tend to converge very slowly and Mobius computations are more expensive than their Euclidean counterparts. Can the authors provide some model complexity analysis (e.g., runtime/memory complexity)? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see weaknesses section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors have sufficiently identified limitations of the prior work and addressed it in their proposed model. It would also be helpful if the authors provide future directions for their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful review of our paper. Your feedback has been invaluable in identifying areas for improvement, and we are committed to addressing your comments to enhance the quality and impact of our work. We are delighted that you find our paper well-written and recognize the novel contributions of our model, H-GRAM. We agree that the experiment results demonstrate consistent outperformance of H-GRAM over baselines on both link prediction and node classification tasks. We are glad that the ablation studies were informative in understanding the influence of various components in the proposed architecture. W1. Regarding the [KDD 2022] reference, we believe that it is orthogonal to our work because it addresses a different research topic than the abstract's main focus, which is HNNs and their inductive bias mechanisms for generalization and scalable learning. Our paper primarily addresses the limitations of current hyperbolic neural networks due to their lack of inductive bias mechanisms and proposes a novel method to alleviate these issues. On the other hand, the referenced paper focuses on embedding KGs in a dual-geometric space for the purpose of knowledge graph analysis or other related tasks. It does not address the inductive bias mechanisms or the few-shot learning setting that the abstract's H-GRAM method deals with. W2. In the problem setup section, we have relied on standard experimental setups in this problem for fair comparison. Due to the limited space and availability of this information in previous work, we placed the section on dataset details in Appendix F. W3. Regarding the experiments, we acknowledge the importance of evaluating both inductive and transductive tasks. 
Due to the nature of the problem setup, all the meta-learning methods and comparisons provided in Tables 1 and 3 operate in an inductive setting, whereas the comparison of hyperbolic approaches detailed in Table 2 is evaluated in a transductive setting. The choice of setting does not reflect a limitation of H-GRAM; rather, it follows the standard evaluation procedure of the baseline methods for fair comparison. W4. Furthermore, we understand the concern about the scalability of non-Euclidean space embedding models and the computational expense of Möbius computations. However, because we operate on small graph partitions, the additional computation required for Möbius operations is negligible compared to Euclidean methods (a single GCN layer and a single HGCN layer with 128 input dimensions take 0.195s and 0.267s, respectively, in our experiments). Furthermore, regarding the primary contribution of the paper: approaches prior to H-GRAM could not even train on such large graphs, so the performance improvement of hyperbolic networks was inaccessible due to the lack of scalability over large graphs. Thank you for your encouraging confidence in our paper. We are committed to addressing all your feedback and ensuring that the revised version of our paper meets the highest standards. Your valuable insights have been instrumental in guiding our revisions. Once again, we sincerely appreciate your thoughtful review and consideration of our paper. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks to the authors for their reply. My concerns have been addressed and I would like to raise my score. --- Reply to Comment 1.1.1: Title: Thanks a lot for your consideration Comment: Dear Reviewer, We truly appreciate your active participation and the constructive feedback you've provided. Your thoughtful review has played a pivotal role in enhancing the quality of our manuscript. 
As we approach the final stages of our interaction, we're here to continue our exchange of ideas if you have any more inputs to offer prior to the impending deadline.
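For context on the Möbius-operation cost discussed in W4 above, here is a minimal sketch of Möbius addition on the Poincaré ball (unit curvature assumed; this is the standard textbook formula, not the authors' implementation):

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the unit-curvature Poincare ball.

    x and y are points with ||x||, ||y|| < 1; this operation plays the
    role of vector addition in hyperbolic network layers.
    """
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1.0 + 2.0 * xy + y2) * x + (1.0 - x2) * y
    den = 1.0 + 2.0 * xy + x2 * y2
    return num / den
```

The extra inner products and the division are what make each hyperbolic layer slightly more expensive than its Euclidean counterpart, consistent with the per-layer timings quoted in the rebuttal.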
Summary: This submitted work identifies an important research problem of existing works, the scalability of hyperbolic neural networks to large graphs and previously unseen graphs. To achieve these two goals, this work first proves that node classification and link prediction can be done with a node's local neighborhood only. Based on this insight, this work designs a meta-learning mechanism for hyperbolic graph neural networks to scale on large graphs. Experiments on both small and large graphs show the effectiveness of the proposed model. Strengths: 1. This paper proposes an interesting and important research problem, the scalability of hyperbolic graph neural networks on large datasets. With a theoretical analysis and a meta-learning based model architecture, this paper shows promising performance over baseline models. 2. This paper is self-contained, with enough introduction to background knowledge, such as meta-learning and operations in hyperbolic space, in the Appendix. 3. Experiments are comprehensive, with both small and large graphs, with baselines from different categories, and with both node classification and link prediction tasks. Weaknesses: 1. The Introduction section contains too much redundant content. The Introduction should provide a general and overall picture of the paper, while this submitted work introduces too many model architecture details in the Intro section. I suggest the authors remove some content and better emphasize the key innovation proposed in the paper. The paper's writing can be improved. 2. I can see the standard deviation of experiment results in most of the tables, but Table 1 doesn't have std. dev. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why is the standard deviation in Table 1 absent? 2. This paper uses the Poincaré ball as the hyperbolic model for illustration. I am wondering if the proposed model is also applicable when the Hyperboloid model is used. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I can't see any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough review of our paper and are pleased that you recognize the importance of the research problem we address, namely the scalability of hyperbolic neural networks on large datasets and previously unseen graphs, and the promising performance of our proposed model. Regarding the strengths of our paper, we are glad that you find our research problem interesting and important. We agree that the theoretical analysis and meta-learning-based model architecture contribute to the promising results compared to baseline models. W1. We acknowledge your feedback regarding the Introduction section. We sincerely apologize for the redundancy in our writing. While our primary aim was to emphasize the novelty of the content, we inadvertently allowed repetition to creep into the text. In the revised version, we will streamline the content to provide a clearer and more concise overview of the paper. W2 & Q1. Regarding the absence of standard deviation in Table 1, this was solely a choice due to limited space. We have included the standard deviation values of Table 1 in Appendix Table 6 to provide a comprehensive view of the variability of our results. We shall also include them in our codebase. Q2. To address your question about the applicability of our proposed model when using the Hyperboloid model, we believe that the core ideas and methodologies of H-GRAM are readily adaptable to other hyperbolic models, including the Hyperboloid model, which is isometric to the currently used Poincaré model. One possible approach would be to take the current Poincaré model formulations and apply the isometric mappings to the hyperboloid model. In conclusion, we sincerely appreciate your feedback, and we are committed to improving our paper based on your valuable comments. We believe that addressing the mentioned points will lead to a higher rating for our paper and enhance its contribution to the field of hyperbolic graph neural networks. 
Thank you again for your thoughtful review and consideration of our paper for NeurIPS. --- Rebuttal Comment 1.1: Title: Gentle Reminder Comment: Dear Reviewer, We sincerely thank you again for your insightful review. We have worked hard to comprehensively address your comments in the rebuttal, including new requested results, as well as providing appropriate responses addressing other questions. The impact of your discerning review is unmistakable. With the conclusion of our author-reviewer interactions drawing near, we respectfully inquire whether you might consider revising your assessment upwards, given our responses. Your continued insights are of great value to us, and we welcome any additional thoughts before the impending deadline.
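The Poincaré-hyperboloid isometry mentioned in Q2 of the rebuttal above is standard; a minimal sketch of the diffeomorphism between the two models (unit curvature assumed; an illustrative textbook mapping, not code from the paper) is:

```python
import numpy as np

def hyperboloid_to_poincare(x):
    # x = (x0, x1, ..., xn) on the hyperboloid -x0^2 + sum_i xi^2 = -1, x0 > 0.
    # The map projects onto the open unit ball.
    return x[1:] / (1.0 + x[0])

def poincare_to_hyperboloid(p):
    # p is a point inside the unit ball (||p|| < 1); the inverse map.
    sq = np.dot(p, p)
    x0 = (1.0 + sq) / (1.0 - sq)
    rest = 2.0 * p / (1.0 - sq)
    return np.concatenate(([x0], rest))
```

Because this mapping is an isometry, distances and gradients computed in one model can in principle be transported to the other, which is the basis for the adaptability claim in the rebuttal.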
Summary: The paper introduces a method, Hyperbolic GRAph Meta Learner (H-GRAM), to improve the scalability and generalization of Hyperbolic Neural Networks (HNNs). H-GRAM learns from local subgraphs and transfers this learning to new, disjoint subgraphs in a few-shot setting. The authors demonstrate that H-GRAM outperforms existing methods in various few-shot settings and scales effectively over large graph datasets. Strengths: The paper presents a new approach, H-GRAM, that combines meta-learning with hyperbolic neural networks (HNNs) to address their scalability and generalization issues. The quality of the work is evident in the detailed explanation of H-GRAM and its demonstrated effectiveness in comparison with baselines. Weaknesses: * There are a few areas where it could potentially be improved: * Comparison with Other Meta-Learning Approaches: The paper could include a comparison of H-GRAM with other meta-learning approaches in Tables 2 and 3, not just with other HNNs. This would provide a broader context for understanding the performance and advantages of H-GRAM. * Limited contribution: * This work seems to merely extend the work of paper [1] to hyperbolic space and presents some rather trivial definitions and theorems. * Important references [2-4] are missing for making comparisons. [1] Huang, Kexin, and Marinka Zitnik. "Graph meta-learning via local subgraphs." Advances in Neural Information Processing Systems 33 (2020): 5862-5874. The proposed method is close to this paper. Please make a detailed comparison with the method. [2] Yu, Tao, and Christopher De Sa. "Random Laplacian Features for Learning with Hyperbolic Space." arXiv preprint arXiv:2202.06854 (2022). They also claim that their method is scalable. Please compare with their method. [3] Zhang, Yiding, et al. "Lorentzian graph convolutional networks." Proceedings of the Web Conference 2021. 2021. They developed a new HGNN with the aggregation in the manifold; could you provide your definition and theorem in such a case? 
[4] Yang, Menglin, et al. HGCN: Tree-likeness Modeling via Continuous and Discrete Curvature Learning. KDD 2023. They also derive the node influence in the case of the tangent space. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Additionally, many HNNs or HGNNs are formulated within the manifold. Could you deduce the node influence and establish the information loss without utilizing a logarithmic map (i.e., without relying on the tangent space)? If you do use a logarithmic map, could you derive the conclusion using a local reference point other than the origin, since you say that "we use the local tangent space of Poincare ball model to prove that the local neighborhood policy holds better for HNN models"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. We appreciate your time and thoughtful evaluation of our work. We are pleased that you found the approach of H-GRAM intriguing and recognized its potential in improving the scalability and generalization of Hyperbolic Neural Networks (HNNs). W1. Regarding the weakness raised about the limited comparison with other meta-learning approaches, we focused on foundational approaches that were directly relevant to the HNN and meta-learning aspects that our work advances. Additionally, due to the generalizable nature of our work, other comparisons can be drawn through a relative study of the given references. W2. Regarding the point about emphasizing our contribution, we want to clarify that our work represents a significant breakthrough in the field of HNNs. Specifically, it enables the scaling of these networks to large graphs (with nodes and edges in the order of millions), which was previously a challenging task. To the best of our knowledge, there is currently no other existing research that successfully harnesses the inductive biases (with theoretical rigor) of HNNs for achieving scalability in the way we have accomplished. This breakthrough opens up an entirely new avenue for achieving performance gains by leveraging the inherent hierarchical structure of graphs. The implications of this advancement are promising and hold the potential to pave the way for further advancements in the domain of large-scale graph processing. Regarding the questions raised: Q1. Deduction of Node Influence without a Logarithmic Map: While we understand the interest in exploring alternative approaches, the use of the logarithmic map is essential in our methodology for modeling node influence effectively. It allows us to establish the local neighborhood policy for HNN models in a robust manner with theoretical justification. 
We will, however, discuss the possibility of providing additional insights using different techniques without compromising the integrity of our approach. Q2. Derivation Using a Local Reference Point: We appreciate your suggestion. However, the use of the local tangent space of the Poincaré ball model is a fundamental choice grounded in our theoretical analysis, and it has proven to be effective in establishing the local neighborhood policy for HNN models. Choosing points other than the origin could lead to instability in the formulation, as different graph partitions would have different definitions of hierarchy. This would necessitate the use of another model to track the relative positioning of the different roots, adding additional scope for error. In conclusion, we are grateful for your feedback, and we are committed to improving our paper based on your valuable comments. We believe that the suggested enhancements will strengthen the quality and impact of our work. Thank you again for your thoughtful review and consideration of our paper for NeurIPS. --- Rebuttal Comment 1.1: Title: Gentle Reminder Comment: Dear Reviewer, We truly appreciate your active participation and the constructive feedback you've provided. Your thoughtful review has played a pivotal role in enhancing the quality of our manuscript. As we approach the final stages of our interaction, we cordially inquire if you might be inclined to reconsider your assessment, considering the comprehensive responses. We're here to continue our exchange of ideas if you have any more inputs to offer prior to the impending deadline.
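To make the tangent-space discussion in Q1 and Q2 above concrete, here is a minimal sketch of the logarithmic and exponential maps at the origin of the unit-curvature Poincaré ball (standard textbook formulas, not the paper's implementation):

```python
import numpy as np

def log0(y, eps=1e-12):
    # Logarithmic map at the origin: sends a point y (||y|| < 1) on the
    # Poincare ball to the tangent (Euclidean) space at the origin.
    n = np.linalg.norm(y)
    return np.arctanh(n) * y / max(n, eps)

def exp0(v, eps=1e-12):
    # Exponential map at the origin: the inverse of log0, sending a
    # tangent vector v back onto the ball.
    n = np.linalg.norm(v)
    return np.tanh(n) * v / max(n, eps)
```

Using a reference point other than the origin would replace these closed forms with base-point-dependent ones, which is the source of the instability the rebuttal mentions.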
Summary: The authors propose applying the MAML methodology to hyperbolic GNNs in a novel local manner, along with continuous label prototypes, that enables them to scale the HNN approach from a few thousand nodes to a few million nodes. The authors provide theoretical justification for this local H-GRAM approach via theorems 1 and 2 and demonstrate its efficacy via a large set of experimental comparisons in Sec. 4 and 5. Strengths: 1. The four research questions from Sec. 4 regarding the novel HGRAM approach are comprehensively answered via comparison on multiple different datasets with competing Euclidean MAML approaches, standard hyperbolic baselines, other graph MAML approaches such as G-Meta [17], Meta-GNN [38], protoNET[29] and ablation studies. H-GRAM clearly outperforms competing meta-learning / protoNET approaches on large graphs in Table 1 and is comparable with competing hyperbolic approaches on small graphs in Table 2. 2. The authors provide theoretical justification for their local H-GRAM approach via theorems 1 and 2. Weaknesses: 1. Some of the mathematical presentation could be shortened, e.g., the same symbol $D_{g\mu}^{p_i}$ is defined in both theorems 1 and 2. 2. The authors limit their meta-learning enhancements to hyperbolic neural networks, but it is not clear if their enhancements can be applied to superior pseudo-Riemannian approaches, e.g., Pseudo-Riemannian Graph Convolutional Networks, NeurIPS 2022, and Ultrahyperbolic Neural Networks, NeurIPS 2021. 3. The authors' approach seems quite similar to G-Meta [17], although their experimental results seem to be slightly better in Table 1. Presumably it is the hyperbolic modeling (and RSGD) or the continuous label prototypes that allows H-GRAM to outperform G-Meta, but such explanations or other explanations are not discussed. 4. In Sec. 5.4, it is not discussed why HGCN outperforms HMLP and HAT. 
In other studies, attention-based GNNs sometimes outperform GCNs, so some explanation appears to be necessary. The SG, SL setting on the Cora dataset from Table 2 could be considered in the ablation study in Table 3 to help answer this question. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please note the questions inherent in weaknesses 2, 3 and 4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: It is not clear whether weakness 2 above constitutes a limitation or whether it is simply a case where no theoretical justification can be provided for an ultrahyperbolic-GRAM variant, for example. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere appreciation for your positive feedback and constructive criticism of our paper. Your insights have been invaluable in helping us improve the clarity and impact of our work. W1. We understand your concern about the mathematical presentation and agree that some parts could be shortened for better readability. We will revise the paper accordingly to ensure that the notation is clear and concise while maintaining the rigor of our approach. W2. Regarding the limitation you pointed out about the application of our enhancements to pseudo-Riemannian approaches such as Pseudo-Riemannian Graph Convolutional Networks and Ultrahyperbolic Neural Networks, we acknowledge that our focus was on hyperbolic neural networks. However, we believe that the core ideas from our approach, such as the local H-GRAM methodology and continuous label prototypes, have the potential for generalization to pseudo-Riemannian settings. We will include a discussion in the revised paper to address this possibility and explore potential avenues for future research in this direction. We consider this to be future work along an orthogonal research direction. W3. Regarding the similarities between our approach and G-Meta [17], we agree that both methods leverage meta-learning techniques. However, there are significant differences in the underlying modeling. The advantage of H-GRAM lies in its ability to scale hyperbolic modeling, which allows it to outperform G-Meta on large graphs (as shown in Table 1). To understand the exact strengths of the individual components in H-GRAM, we refer the reviewer to Section 5.4, which details the ablation study highlighting the strengths of the different modules used in H-GRAM. W4. In Section 5.4, where HGCN outperforms HMLP and HAT, we acknowledge the need for further explanation. 
However, due to the page limit, we opted for brevity on performance comparisons not directly relevant to the main focus of our paper. Essentially, while attention networks are generally more performant than convolution networks, the hyperbolic formulation needs certain approximations in the linear layers, which leads to a minor information loss. Due to the comparatively more complex formulation of attention networks, this information loss propagates more strongly in HAT. This approximation loss leads to lower performance of HAT than HGCN in certain cases. We will include these findings in the revised paper and provide a comprehensive discussion of the results to address this question effectively. In conclusion, we are grateful for your insightful feedback, and we are committed to addressing all the points you raised to enhance the quality and contribution of our work. We believe that the revisions will lead to a more comprehensive understanding of our approach's capabilities and potential. Once again, thank you for your thoughtful review and consideration of our paper for NeurIPS. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I want to thank the authors for their thoughtful rebuttals to all reviewers. I am satisfied by their responses to all my concerns. Although I have rated this submission more positively than any other reviewer, I wish to retain my original rating. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: We would like to express our sincere gratitude for your kind message and your thorough assessment of our rebuttals. It's truly heartening to know that our responses have addressed your concerns satisfactorily. Your recognition of our efforts means a lot to us.
Rebuttal 1: Rebuttal: Based on the valuable feedback from the reviewers, the key contributions of our paper lie in the introduction of H-GRAM, a novel meta-learning model for scalable Hyperbolic Graph Neural Networks. H-GRAM effectively leverages meta-learning techniques to learn from local subgraphs and adapt quickly to new tasks. We have demonstrated the model's superiority in addressing various HNN tasks, including inductive learning, over-smoothing elimination, and few-shot learning in demanding situations. The global theme of our response revolves around addressing the reviewers' feedback to enhance the clarity, significance, and impact of our work. We will provide a clearer presentation of the introduction, emphasize the core contributions, and discuss potential future research directions. We are committed to revising our paper to meet the highest standards and sincerely thank the reviewers for their valuable insights.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces H-GRAM, a novel meta-learning model for scalable Hyperbolic Graph Neural Networks. H-GRAM leverages meta-learning techniques to learn from local subgraphs and adapt quickly to new tasks. The authors theoretically establish that HNNs are dependent on the local neighborhood of nodes for prediction and formulate HNNs to encode node-centric local subgraphs using the locality of tangent space transformations. Experiments are conducted on various benchmark datasets to illustrate that H-GRAM addresses several HNN tasks, such as inductive learning, over-smoothing elimination, and few-shot learning in various demanding situations. Strengths: - The paper presents a novel approach that combines hyperbolic geometry and meta-learning techniques, which is innovative and interesting. - The proposed H-GRAM is scalable and efficient compared to previous HNN techniques. - Extensive and carefully designed experiments have been conducted to demonstrate the effectiveness of H-GRAM. Weaknesses: For the results listed in Table 2, why did H-GRAM never achieve the best performance for both node classification and link prediction at the same time on any dataset? Could you explain the reason behind this discrepancy between tasks? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I don't have further questions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude for your thorough review of our paper. We value the time and effort you have dedicated to evaluating our work and providing valuable feedback. We acknowledge your positive remarks on the novelty and innovation of our approach, which combines hyperbolic geometry and meta-learning techniques to create the Hyperbolic GRAph Meta Learner (H-GRAM). We are pleased that you find our work scalable and efficient compared to previous Hyperbolic Neural Network (HNN) techniques. Additionally, your recognition of the extensive experiments we conducted to demonstrate H-GRAM's effectiveness in addressing various HNN tasks, such as inductive learning, over-smoothing elimination, and few-shot learning, is greatly appreciated. W1. Regarding the discrepancy between node classification and link prediction performances in Table 2, we recognize that our model may not always achieve the best results for both tasks simultaneously across all datasets. This behavior is attributed to the inherent trade-offs in hyperbolic geometry and our meta-learning approach when dealing with different tasks. H-GRAM's focus on fast adaptation and few-shot learning prioritizes aggregating messages from local subgraphs, which is required for node classification, whereas link prediction requires good message-passing ability across subgraphs, which is limited when we partition the graph to enable scalability. However, we want to reiterate that our primary contribution lies in addressing the limitations of HNNs (it is impossible to train basic HNNs on large graphs) and achieving better scalability using meta-learning techniques in the hyperbolic space. In this context, H-GRAM consistently outperforms other state-of-the-art baselines in various challenging few-shot settings, which aligns with the core focus of our work. 
In conclusion, we believe that the combination of hyperbolic geometry and meta-learning techniques presented in H-GRAM holds significant potential in the domain of graph representation learning. We hope that our additional clarifications on the discrepancy in task performance and the overall contributions of our work will lead to a more positive evaluation. Once again, we sincerely appreciate your feedback and consideration of our paper for NeurIPS. Your review has been invaluable in helping us improve the clarity and impact of our work. --- Rebuttal Comment 1.1: Title: Gentle Reminder Comment: Dear Reviewer, We extend our heartfelt gratitude for your valuable engagement and insightful feedback! As we near the conclusion of the author-reviewer discourse, we kindly request your consideration for a potential upward revision of your evaluation, given our responses. We remain open to further dialogue should you have additional insights to share before the impending deadline.
Contextual Stochastic Bilevel Optimization
Accept (poster)
Summary: This work studies the so-called contextual stochastic bilevel optimization (CSBO) problem, in which the lower-level problem is a conditional expectation problem under some contextual information. Some applications in distributionally robust optimization fall into this category. The paper proposes a double-loop Monte-Carlo based method, which leverages EpochSGD from Hazan and Kale, 2014 to approximate the lower-level solution. An approximate hypergradient is provided based on an explicit form obtained via the implicit function theorem. Experiments on meta-learning and instrumental variable regression are provided. Strengths: 1. This work considers a problem that is different from existing studies by introducing the contextual information. It covers some important cases in, e.g., DRO. 2. The algorithms are reasonable to me. Using MLMC to further improve the performance of DL-SGD is good. Theoretical complexity and convergence are analyzed. Experiments seem to support the design principles. Weaknesses: 1. The studied problem seems a little bit artificial. The example in (3) seems to artificially change a single-level problem into a bilevel one. In addition, the hypergradient form is almost the same as in the non-contextual case, and hence the tools therein may be used with some adaptations. 2. Some important components such as MLMC, EpochSGD, and the Neumann series expansion are existing techniques. Some challenges like variance control (e.g., in $\hat v$) and hypergradient computation can be well handled by these techniques. Thus, the novelty is not that significant. 3. In the experiments, in Fig. 1, why does MAML stop at this large loss value? Is it because of non-satisfactory hyperparameter tuning or something else? It seems that the stepsizes for MAML may have been chosen too large. The comparison seems unfair. 4. The experiments do not show the importance of bilevel optimization. For example, no baselines other than the proposed CSBO solvers are provided. 
Some baselines (e.g., single-level ones, or some standard baselines) in Wasserstein DRO may need to be included. Overall, I am not fully convinced that this is an important problem, and given the above concerns, I lean toward the negative side. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments. Below, we address your concerns. 1. [Motivation of CSBO] The CSBO problem is not artificially made up. In addition to optimization-based meta-learning and Wasserstein DRO with side information (also known as causal optimal transport) mentioned in the numerical study, there are other applications that are special cases of the CSBO problem but are not special cases of Equation (3). - Personalized federated learning [Xing, Pengwei, et al. "Big-fed: Bilevel optimization enhanced graph-aided federated learning." *IEEE Transactions on Big Data* (2022)] is a special case of CSBO; see Equation (2) in the reference. It is important to note that in personalized federated learning, the number of lower-level constraints can be on the order of $O(10^9)$, as each constraint represents one person. Therefore, the RT-MLMC method, whose convergence is independent of the number of lower-level constraints, is crucial for achieving efficiency. The convergence of existing methods in [Guo et al., 2021] and [Hu et al., 2023, Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization. arXiv preprint arXiv:2305.18730, 2023] depends linearly on $M$. - End-to-end learning/Contextual Optimization [Sadana, Utsav, et al. "A Survey of Contextual Optimization Methods for Decision Making under Uncertainty." *arXiv preprint arXiv:2306.10374* (2023).] In this survey, there are three paradigms for studying end-to-end learning/contextual optimization. The third paradigm, which integrates learning and optimization, falls as a special case of CSBO problems (see Figure 3 in Sadana et al., 2023). Also, see Table 3 on Page 19 in Sadana et al., 2023 for a list of more than 20 papers that fall into this category. Our work is the first to provide non-asymptotic optimal complexity bounds for such a problem. 
Indeed, we find that contextual stochastic bilevel optimization has various applications in machine learning and optimization. 2. [Novelty] Note that the contextual information $\xi$ introduces additional challenges in designing an efficient optimal algorithm. In particular, most of the existing single-loop algorithms for classical bilevel optimization do not apply to our problem due to the potentially infinitely many lower-level problems parametrized by $\xi$, each of which introduces a constraint involving solving a stochastic optimization problem. Additionally, we have shown that the double-loop algorithm DL-SGD is not optimal and have proposed an optimal algorithm, RT-MLMC. The closest work to ours, [Guo et al., 2021], can only handle a finite number of lower-level constraints ($M$), and its complexity bounds depend linearly on $M$. Our algorithm does not depend on $M$ and sheds light on how to solve stochastic bilevel optimization with even infinitely many lower-level constraints. 3. [Comparison to MAML] We respectfully point out that the step size of MAML has been fine-tuned in our experiment. In our general response (see the newly added Figure 4 in the PDF file), we report the plot of MAML performance for different choices of step size from the list {5e-3, 1e-2, 5e-2, 1e-1, 2e-1}. From the plot, we see that for small step sizes MAML tends to have similar performance, whereas MAML tends to diverge for too large step sizes. The poor performance of MAML on the meta-learning problem (8) stems from the fact that it solves a different formulation, i.e., it replaces the lower-level problems with a one-step gradient update. It is natural that MAML cannot achieve good performance on the CSBO objective, as the approximation gap is theoretically $O(1)$ unless one performs multi-step MAML. - Besides, in our general response, we provide the performance of multi-step MAML, which replaces the lower-level problems in (8) with $m$-step gradient updates, with $m\in\\{1,4,8,12\\}$. 
From the plot, we can see that as $m$ increases, multi-step MAML tends to have better performance, but it still cannot outperform our proposed RT-MLMC algorithm. 4. [Experiments against baselines] Thank you for the suggestions. As we are the first to propose and solve the CSBO problem, we have not found any other baselines for CSBO. - Regarding Wasserstein DRO with side information, it is worth noting that the existing method in [Yang et al., 2022] heavily relies on convexity, linear predictors, and linear loss assumptions to build reformulations that can be solved via convex solvers. However, when the hypothesis class from the covariate to the decision is parameterized by nonconvex neural networks, their method is not implementable. In contrast, our algorithm is the first implementable algorithm with a convergence guarantee. We compare our method to naively incorporating SAA and Wasserstein DRO methods that do not explicitly leverage side information in Figure 2 (right). - For multi-task stochastic bilevel optimization [Guo et al., 2021], which is a special case of CSBO with $M$ lower-level problems, we compare the performance of RT-MLMC with the state-of-the-art method BSVRB proposed in [Hu et al., 2023] (this manuscript appeared on arXiv after the NeurIPS submission deadline) in the one-page PDF response. Please see Figure 5 for a comparison with BSVRB. Our RT-MLMC method converges much faster. Xing, Pengwei, et al. "Big-fed: Bilevel optimization enhanced graph-aided federated learning." *IEEE Transactions on Big Data* (2022) Sadana, Utsav, et al. "A Survey of Contextual Optimization Methods for Decision Making under Uncertainty." arXiv preprint arXiv:2306.10374 (2023). Hu et al. "Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization". arXiv preprint arXiv:2305.18730, 2023 --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Dear Authors, Thanks so much for the response! 
My questions have been answered satisfactorily, so I have increased my score. Best, Reviewer --- Reply to Comment 1.1.1: Title: Thank you for the discussion Comment: Thank you for the time and the valuable feedback. Best Regards, Authors --- Rebuttal 2: Comment: Dear reviewer, Thank you for your review! The authors have replied to your comments. Does their answer address your concern? Can you please react to their answer during the discussion period? Many thanks, The AC --- Rebuttal 3: Title: Follow-up on Rebuttal: Seeking Your Feedback Comment: Dear Reviewer Dnys, We appreciate the time you've taken to review our work. We've addressed your concerns in our rebuttal, and we kindly ask if you've had an opportunity to go through it. Please let us know if there are any further questions or clarifications you would like. Thank you. Best regards, Authors --- Rebuttal Comment 3.1: Title: Need feedback to the authors Comment: Dear reviewer, Thank you for your review. The authors provided a reply to address your concerns. Could you please acknowledge that you read the response and, in case you maintain your score, provide more details about why the reply does not address your concerns. Thank you for your time and effort. Best, The AC
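As noted in the MAML discussion in the thread above, replacing the lower-level problems with a one-step gradient update leaves an $O(1)$ approximation gap that shrinks only as the number of inner steps $m$ grows. This can be seen on a toy quadratic lower-level problem; the sketch below uses assumed problem data (`a`, `x`, `alpha`), not the paper's experiment:

```python
# Toy lower-level problem: min_y 0.5 * (y - a*x)**2, whose exact solution
# is y_star = a * x. Multi-step MAML replaces y_star with m gradient-descent
# steps from y0 = 0 at step size alpha; for this quadratic the m-step iterate
# is y_m = (1 - (1 - alpha)**m) * a * x, so the gap |y_m - y_star| is
# (1 - alpha)**m * |a * x|: O(1) for m = 1, geometrically shrinking in m.

def maml_inner_approx(a, x, alpha, m, y0=0.0):
    """m gradient steps on g(y) = 0.5*(y - a*x)^2, starting from y0."""
    y = y0
    for _ in range(m):
        y -= alpha * (y - a * x)  # gradient of g with respect to y
    return y

a, x, alpha = 2.0, 1.5, 0.1  # assumed illustrative values
y_star = a * x
gaps = [abs(maml_inner_approx(a, x, alpha, m) - y_star) for m in (1, 4, 8, 12)]
print(gaps)  # strictly decreasing in m, but nonzero for any finite m
```

The gap decays like $(1-\alpha)^m$, matching the qualitative trend the rebuttal reports for multi-step MAML with $m \in \{1,4,8,12\}$: more inner steps help, but any finite $m$ only approximates the exact lower-level solution.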
Summary: Contextual stochastic bilevel optimization (CSBO) is introduced in this paper. An efficient double-loop gradient method based on Multilevel Monte Carlo (MLMC) is proposed. The proposed framework captures important applications such as meta-learning, Wasserstein distributionally robust optimization with side information (WDRO-SI), and instrumental variable regression (IV). Strengths: An interesting problem, that is, the Contextual Stochastic Bilevel Optimization Problem, is proposed in this work, and an efficient algorithm is proposed to solve it. The proposed framework captures important applications such as meta-learning, Wasserstein distributionally robust optimization with side information (WDRO-SI), and instrumental variable regression (IV). However, I have some concerns as follows. Weaknesses: 1. I think it's necessary to emphasize the difficulty of solving the Contextual Stochastic Bilevel Optimization Problem compared with traditional bilevel optimization problems. The presentation of this work is poor. I believe this is an excellent work; I suggest that you modify the presentation to better clarify its contributions. 2. In the experiment, the results are limited. For example, in the meta-learning application, I suggest the authors compare the proposed method with the state-of-the-art bilevel optimization methods [1][2][3], which are shown to be able to address the meta-learning task. Alternatively, you need to specify why these methods [1][2][3] are not applicable to this application. Furthermore, the authors should conduct experiments on more datasets to better evaluate the proposed method, for example, the Omniglot dataset. 3. 
I suggest the author briefly introduce some existing bilevel optimization works in machine learning, for example, hyper-gradient-based methods [1, 4] and approximation-based methods [2], and then discuss why these methods fail to be applied to the Contextual Stochastic Bilevel Optimization Problem. [1] Bilevel optimization: Convergence analysis and enhanced design, ICML, 2021 [2] Asynchronous Distributed Bilevel Optimization, ICLR 2023 [3] Bilevel Programming for Hyperparameter Optimization and Meta-Learning, ICML 2018 [4] Provably faster algorithms for bilevel optimization, NeurIPS 2021 Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. I think it's necessary to emphasize the difficulty of solving the Contextual Stochastic Bilevel Optimization Problem compared with traditional bilevel optimization problems. The presentation of this work is poor. I believe this is an excellent work; I suggest that you modify the presentation to better clarify its contributions. 2. In the experiment, the results are limited. For example, in the meta-learning application, I suggest the authors compare the proposed method with the state-of-the-art bilevel optimization methods [1][2][3], which are shown to be able to address the meta-learning task. Alternatively, you need to specify why these methods [1][2][3] are not applicable to this application. Furthermore, the authors should conduct experiments on more datasets to better evaluate the proposed method, for example, the Omniglot dataset. 3. I suggest the author briefly introduce some existing bilevel optimization works in machine learning, for example, hyper-gradient-based methods [1, 4] and approximation-based methods [2], and then discuss why these methods fail to be applied to the Contextual Stochastic Bilevel Optimization Problem. 
[1] Bilevel optimization: Convergence analysis and enhanced design, ICML, 2021 [2] Asynchronous Distributed Bilevel Optimization, ICLR 2023 [3] Bilevel Programming for Hyperparameter Optimization and Meta-Learning, ICML 2018 [4] Provably faster algorithms for bilevel optimization, NeurIPS 2021 Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. Below, we address your questions. 1. [Difficulty of solving CSBO] As discussed in the Introduction from Line 43 to Line 68, existing algorithms for traditional bilevel optimization problems with one single lower-level problem either cannot achieve optimal complexity bounds or do not apply to the CSBO problem due to the potentially infinitely many lower-level problems parametrized by $\xi$, each of which introduces a constraint involving solving a stochastic optimization problem. See more details below. We highlight that the proposed RT-MLMC achieves the optimal complexity bounds for CSBO problems. It is also the first double-loop algorithm that achieves the optimal complexity bounds for classical stochastic bilevel optimization. - Existing double-loop methods for classical bilevel optimization can be applied with a slight modification but admit a sub-optimal complexity bound, as we have shown for DL-SGD. - As for the more widely investigated single-loop methods for classical bilevel optimization, these algorithms are specifically designed for the setting where there is only one lower-level constraint formulated as $y^*(x)$. In such a case, one can use a sequence of vectors $y^t$ to approximate $y^*(x^t)$, leveraging the intuition that $y^*(x^{t+1}) - y^*(x^t) \approx \nabla y^*(x^t) (x^{t+1} - x^t)$. However, when there are multiple lower-level constraints, i.e., $y^*(x;\xi)$ for each realization of $\xi$, we can no longer use only one sequence $y^t$ to keep track of $y^*(x^t,\xi^t)$, as $\xi^t$ is randomly sampled at each iteration. For example, when $\xi$ follows a normal distribution, it would require infinitely many sequences of $y$ to keep track of the lower-level constraints. - To the best of our knowledge, [Guo et al., 2021] is the only work that discussed the case when there are $M$ lower-level constraints, and they adopt $M$ sequences $\{y^t(i)\}$ to keep track of $y^*(x;i)$ for $i\in[M]$, respectively. 
Thus, the convergence rate of their algorithm depends linearly on $M$. On the other hand, our proposed RT-MLMC algorithm gets rid of the dependence on the number of lower-level constraints and achieves an optimal complexity bound of $O(\epsilon^{-4})$. This is particularly important for many applications that are special cases of CSBO, including personalized federated learning [Xing, et al. "Big-fed: Bilevel optimization enhanced graph-aided federated learning." *IEEE Transactions on Big Data* (2022)]. In this context, $M$ can be of size $O(10^9)$, meaning that each lower-level constraint represents a personalized keyboard usage preference. 2. [Comparison to baselines in meta-learning] We respectfully point out that the references mentioned [1-4] address the traditional bilevel optimization problem, where there is only one lower-level constraint. When applied to meta-learning applications, they do not treat all tasks as multiple individual lower-level constraints. Instead, they use a surrogate aggregated lower-level objective to enforce one lower-level problem, for example, via averaging. In other words, they do not solve the bilevel optimization formulation for meta-learning proposed in [Rajeswaran et al., 2019] and studied in our paper. It is unclear how the proposed methods in [1-4] can be applied to the meta-learning formulation studied in our paper. Note that the meta-learning formulation in our paper is also used in personalized federated learning [Xing, Pengwei, et al., 2022]. If one adopts the surrogate aggregated lower-level constraint, it is not personalized at all. This implies the importance of solving the formulation (8) considered in our paper, yet no algorithms in [1-4] can be applied. - In the general response (also see the added PDF file), we have added a comparison to two recent papers for solving a special case of CSBO when there are $M$ lower-level problems: [Guo and Yang, 2021] and [Hu et al. 
2023, Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization. arXiv preprint arXiv:2305.18730, 2023] (the second paper appeared on arXiv after the NeurIPS submission deadline). The algorithm in [Guo and Yang, 2021] and the first algorithm in [Hu et al. 2023] both require computing the inverse of the Hessian exactly in each iteration, which cannot be implemented efficiently, especially for high-dimensional problems. We use BSVRB, the second algorithm in [Hu et al. 2023], as a baseline to solve the special case of the CSBO formulation. The comparison is in the general response with supporting Figure 5 in the PDF file. Note that the performance of BSVRB is worse than that of RT-MLMC. The reason is that the iteration complexity of the baseline BSVRB depends linearly on the number of lower-level problems, whereas the proposed RT-MLMC does not and achieves optimal complexity bounds. - We note that meta-learning with the Omniglot dataset is a high-dimensional bilevel optimization problem with a tremendous number of lower-level problems, which is difficult to solve given the short rebuttal time period. We promise to add experiments either using this dataset or other types of datasets, such as the UCI Adult benchmark dataset *a8a* and the web page classification dataset *w8a*, in our revised paper. 3. [More references] Thanks for the references. We have added a related discussion. The reasons why these algorithms fail for CSBO are discussed in the first bullet point. Xing et al. "Big-fed: Bilevel optimization enhanced graph-aided federated learning." IEEE Transactions on Big Data (2022). Hu et al. "Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization". arXiv preprint arXiv:2305.18730, 2023 --- Rebuttal Comment 1.1: Comment: Thanks for your responses, my concerns have been addressed, and I have increased the score. 
--- Reply to Comment 1.1.1: Title: Thank you for the discussion Comment: Thank you for the time and the valuable feedback. Best Regards, Authors --- Rebuttal 2: Comment: Dear reviewer, Thank you for your review! The authors have replied to your comments. Does their answer address your concern? Can you please react to their answer during the discussion period? Many thanks, The AC --- Rebuttal 3: Title: Follow-up on Rebuttal: Seeking Your Feedback Comment: Dear Reviewer NrrF, We appreciate the time you've taken to review our work. We've addressed your concerns in our rebuttal, and we kindly ask if you've had an opportunity to go through it. Please let us know if there are any further questions or clarifications you would like. Thank you. Best regards, Authors --- Rebuttal Comment 3.1: Title: Need feedback to the authors Comment: Dear reviewer, Thank you for your review. The authors provided a reply to address your concerns. Could you please acknowledge that you read the response and, in case you maintain your score, provide more details about why the reply does not address your concerns. Thank you for your time and effort. Best, The AC
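As a standalone sanity check on the implicit-gradient structure discussed in the rebuttals, the bilevel hypergradient formula can be verified on a toy contextual instance where the lower-level solution is available in closed form. All problem data below (the context set `XIS`, the point `x0`) are assumed for illustration, not taken from the paper:

```python
# Toy CSBO instance with a finite, uniform context distribution:
#   lower level: y*(x; xi) = argmin_y 0.5*(y - xi*x)^2 = xi * x
#   upper level: F(x) = E_xi[ 0.5*(y*(x; xi) - 1)^2 ]
# The implicit-function hypergradient is
#   dF/dx = E_xi[ (dy*/dx) * (y*(x; xi) - 1) ] = E_xi[ xi * (xi*x - 1) ],
# which we compare against a central finite difference of F.

XIS = [0.5, 1.0, 2.0]  # assumed finite support of the context xi

def F(x):
    return sum(0.5 * (xi * x - 1.0) ** 2 for xi in XIS) / len(XIS)

def hypergrad(x):
    return sum(xi * (xi * x - 1.0) for xi in XIS) / len(XIS)

x0, h = 0.7, 1e-6
fd = (F(x0 + h) - F(x0 - h)) / (2 * h)  # finite-difference reference
print(hypergrad(x0), fd)  # the two agree up to O(h^2)
```

Here the expectation over the context plays the role of the outer expectation over $\xi$ in the rebuttal's derivation; with infinitely many contexts the averages above become expectations, but the formula is unchanged.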
Summary: The paper introduces a novel algorithm for solving contextual stochastic bilevel optimization (CSBO) problems. The authors develop DL-SGD and RT-MLMC gradient estimators for the problem, along with an SGD-based algorithm for solving the CSBO problem. The authors analyze the properties of the gradient estimators and provide finite-time convergence guarantees for the proposed method. The theoretical results are corroborated by experiments on MAML, DRO with side information, and IV regression problems. Strengths: - The problem formulation considered in the paper is challenging and is of significant interest to the ML community. - The paper is well written, and the ideas are presented clearly with discussions. - The authors provide theoretical guarantees for the proposed approaches with an analysis of the proposed gradient estimators. - The authors have conducted experiments on multiple tasks including MAML, DRO with side information, and the IV regression problem to evaluate the performance of the proposed framework. Weaknesses: 1. A major confusion I have is about the unbiasedness of the gradient estimator stated after line 127. Specifically, the expressions $\nabla_1 y^\ast$ and $\nabla_2 f$ both depend on the random variable $\xi$. This implies that the two expressions are dependent on each other, which further means that the expectation given in the equation (after line 127) will not be the same as $\nabla F$. Can the authors clarify why the given expression will be true? The same discussion holds for the approximate gradient expressions derived in eq (5) and the rest of the paper. This is a major issue since the proofs and the results are based on the independence of $\nabla_1 y^\ast$ and $\nabla_2 f$. If this issue is resolved, I am willing to raise my score. 2. The authors should clarify the intuition behind using EpochSGD to estimate $y^\ast$. 
In the discussion, the authors simply explain the algorithm without explaining the intuition. 3. The gradient inverse estimator utilized from Ghadimi's work is clear; however, why the iterative estimator RT-MLMC works is difficult to understand. Can the authors please explain the working of the proposed estimator, i.e., why it works? 4. The experiments on MAML and DRO do not compare the proposed approach with baseline algorithms in the area. - Numerous bilevel algorithms solve MAML; the authors should compare the performance of the proposed approach against at least a few of them. - Similarly, the authors need to show the performance of the proposed approach for solving the DRO problem against popular baselines. 5. It would be easier if the authors kept the notation consistent in the experiment and theory sections of the paper. 6. To motivate the problem better, the authors should include some examples in the introduction section. This way it will be easier for the reader to appreciate the considered formulation. ---- Updated the score after the rebuttal. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the limitations above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Below, we address them in detail. 1. [Form of gradient estimator $\nabla F(x)$] Indeed, $y^*(x;\xi)$ and $\nabla_2 f(x,y^*(x;\xi);\eta,\xi)$ both depend on $\xi$. However, this does not prevent us from obtaining the expression for the gradient. Note that: $$F(x)=\mathbb{E}[f(x,y^*(x;\xi);\eta,\xi)]=\mathbb{E}\_\xi\mathbb{E}\_{\eta\mid\xi}[f(x,y^*(x;\xi);\eta,\xi)].$$ Since one can interchange expectation and differentiation under the given assumptions, we have: $$\nabla F(x)=\mathbb{E}\_\xi\nabla_x[\mathbb{E}\_{\eta\mid\xi}[f(x,y^*(x;\xi);\eta,\xi)]].$$ Note that for a given $\xi$, the gradient formulation follows exactly from that of the classical stochastic bilevel optimization without dependence structure using the chain rule. See, for instance, [Ghadimi and Wang, 2018] for more details. $$\nabla_x[\mathbb{E}\_{\eta\mid\xi}[f(x,y^*(x;\xi);\eta,\xi)]]= \mathbb{E}\_{\eta\mid\xi}[\nabla_x f(x,y^*(x;\xi);\eta,\xi)]=\mathbb{E}\_{\eta\mid\xi}[\nabla_1 f(x,y^*(x;\xi);\eta,\xi) + \nabla_1 y^*(x;\xi)^\top \nabla_2 f(x,y^*(x;\xi);\eta,\xi)].$$ Further taking expectation with respect to $\xi$ gives the expression of $\nabla F(x)$ after Line 127, i.e., $$\nabla F(x)=\mathbb{E}\_\xi\mathbb{E}\_{\eta\mid\xi}[\nabla_1f(x,y^*(x;\xi);\eta,\xi)+\nabla_1 y^*(x;\xi)^\top\nabla_2 f(x,y^*(x;\xi);\eta,\xi)].$$ Plugging in $\nabla_1y^*(x;\xi)$ obtained in Appendix B, we have the expression of $\nabla F(x)$ after Line 129. 2. [Intuition to use EpochSGD] There are two reasons for using EpochSGD instead of classical SGD. Firstly, EpochSGD is faster than SGD by a logarithmic factor and is optimal in terms of sample and iteration complexity (see Hazan and Kale, 2014). Additionally, MLMC-based methods use the control variate technique between two neighboring approximations. If the difference between two neighboring approximations is too small, the approximation error decays too slowly. 
If the difference is too large, the variance reduction effect achieved by the control variate is not strong enough. To achieve a balance, MLMC usually uses a sequence of approximations that admits exponentially decaying approximation error. In this regard, the error of EpochSGD decays exactly exponentially in the number of epochs. Therefore, we can use the output at the end of each epoch for MLMC and do not need to manually select at which iteration of classical SGD we use the corresponding $y$ in the construction of MLMC (see Asi et al., 2021 for more discussion on EpochSGD). 3. [Motivation of RT-MLMC estimator] Although DL-SGD is low-biased, the complexity of DL-SGD is too high. Thus we want to construct an unbiased estimator of DL-SGD at an even lower cost to achieve optimal complexity bounds. Note that even for stochastic bilevel optimization, existing double-loop methods cannot achieve optimal complexity. - To ensure low bias, we use the equation after Line 163 and show the RT-MLMC gradient estimator in Eq. (6) to be an unbiased estimator of the DL-SGD estimator in Eq. (5), and thus low-bias for the overall objective. - To achieve low costs, RT-MLMC uses a randomized approach by incorporating a truncated geometric distribution $p_k\propto 2^{-k}$ to generate an approximation level $k$. With high probability, RT-MLMC generates a small $k$ and constructs $\hat{v}^{k+1}$ and $\hat{v}^k$, which only requires computing $y_{k+1}^0$ and $y_{k}^0$. This is inexpensive because $k$ is very small. With low probability, RT-MLMC generates a large $k$ and computes $\hat{v}^{k+1}$ and $\hat{v}^k$, which is very expensive. The expected computational cost is thus mild, as it is the sum of high costs multiplied by low probabilities and low costs multiplied by high probabilities. - Lastly, one might argue that dividing by the small probability $p_k$ for large $k$ can cause high variance. 
This is mitigated because RT-MLMC incorporates a control variate technique by taking the difference of $\hat{v}^{k+1}$ and $\hat{v}^k$, which are highly correlated. - In summary, RT-MLMC is an unbiased estimator of DL-SGD, thus admitting low bias for $\nabla F(x)$; it has much lower costs than DL-SGD and admits mild variance. 4. [Comparison to baselines in experiments] Note that even for a special case of CSBO, i.e., Problem (8), which is a stochastic bilevel optimization with multiple lower-level problems, many baseline approaches in the bilevel optimization literature are not applicable. For meta-learning in the form of Problem (8), many existing methods actually only solve a surrogate of the formulation by replacing the lower-level problem with gradient updates to obtain one-step or multi-step MAML, or by averaging all lower-level problems so that there is only one lower-level constraint. However, the original formulation actually finds many important applications, such as personalized federated learning. To directly solve Problem (8), we provide detailed comparisons with baseline methods including BSVRB [Hu et al., 2023, Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization. arXiv preprint arXiv:2305.18730, 2023] (the state of the art, which appeared even after the NeurIPS deadline) and one-step/multi-step MAML in the general response, with figures in the PDF file. In short, RT-MLMC outperforms these methods under various experimental setups. - For Wasserstein DRO with side information, the existing method in [Yang et al., 2022] heavily relies on convexity, linear predictors, and linear loss assumptions to build reformulations that can be solved by convex solvers. However, when the hypothesis class from the covariate to the decision is parameterized by nonconvex neural networks, their method is not implementable. In contrast, our algorithm is the first implementable algorithm with a convergence guarantee. 5. 
[Consistent notation in theory and experiments] We will keep the notation in the theory and the experiments consistent. 6. [Include examples in the Introduction] We have added the applications into the introduction for a better illustration. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for the detailed response. Most of my concerns have been addressed by the authors, especially, the ones concerning the unbiasedness of the gradient estimator. Consequently, I have updated my original rating of the paper. --- Reply to Comment 1.1.1: Title: Thank you for the discussion Comment: Thank you for the time and the valuable feedback. Best Regards, Authors --- Rebuttal 2: Comment: Dear reviewer, Thank you for your review! The authors have replied to your comments. Does their answer address your concern? Can you please react to their answer during the discussion period? Many thanks, The AC --- Rebuttal 3: Title: Follow-up on Rebuttal: Seeking Your Feedback Comment: Dear Reviewer Z7Go, We appreciate the time you've taken to review our work. We've addressed your concerns in our rebuttal, and we kindly ask if you've had an opportunity to go through it. Please let us know if there are any further questions or clarifications you would like. Thank you. Best regards, Authors
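The RT-MLMC mechanics described in the rebuttal above (truncated geometric level sampling plus a telescoping control variate) can be sketched in isolation: for any fixed level approximations $H_0,\dots,H_L$, the randomized estimator $H_0 + (H_{K+1}-H_K)/p_K$ with $K$ drawn from a truncated geometric distribution has expectation exactly $H_L$, while its expected cost stays far below that of always computing level $L$. A minimal sketch with assumed level values and an assumed cost model of $2^k$ inner samples per level (not the paper's exact estimator):

```python
# Randomized-level telescoping behind RT-MLMC, shown in isolation.
# Levels H_0..H_L approximate a target with exponentially decaying error;
# computing level k is assumed to cost 2**k inner samples.
L = 10
H = [1.0 - 2.0 ** (-k) for k in range(L + 1)]  # assumed level values

# Truncated geometric level distribution p_k proportional to 2^{-k}, k < L.
w = [2.0 ** (-k) for k in range(L)]
p = [wk / sum(w) for wk in w]

# Exact expectation of the estimator H_0 + (H_{K+1} - H_K) / p_K:
# the p_K's cancel and the sum telescopes to H_L, i.e., it is unbiased.
expected_value = H[0] + sum(p[k] * (H[k + 1] - H[k]) / p[k] for k in range(L))

# Expected cost of one draw (levels k and k+1) vs. always computing level L:
expected_cost = sum(p[k] * (2 ** (k + 1) + 2 ** k) for k in range(L))
full_cost = 2 ** L
print(expected_value, H[L])      # identical: unbiased for H_L
print(expected_cost, full_cost)  # expected cost is far below 2**L
```

This is the cost accounting the rebuttal describes: expensive large-$k$ draws occur with exponentially small probability, so the expected per-draw cost is a small constant even though the finest level costs $2^L$.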
Summary: The paper investigates a generalization of the stochastic bilevel optimization model, in which the lower and upper optimization levels share a random variable. The authors design two gradient-based approaches named RT-MLMC and DL-SGD for solving this problem and analyze their performance. The two methods differ in the way they estimate the gradient of the objective: DL-SGD is rather straightforwardly based on existing results, while RT-MLMC improves upon the performance of DL-SGD by a carefully tailored sampling method. Finally, numerical examples are presented to demonstrate the generality and performance of the proposed model and methods. Strengths: * The paper introduces a general model that is applicable in a wide range of situations. Although the model is more general than previous approaches, the proposed approximations appear to be as efficient as the methods developed for the special cases. * The paper is clear, focused, and well-written. * I have verified most of the math; it is easy to follow and, aside from a few typos, appears to be correct. Weaknesses: There are several typos in critical parts of the proofs (see "Questions" below). I believe that these typos do not affect the correctness of the proofs, however, this somewhat lowers my confidence in the results. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Is the dependence by $f$ on $\eta$ in the upper level of (1) necessary? I believe that removing it does not hurt the generality of the model and may avoid some confusion (see typos below). 2. In Lemma 1, $\alpha$ should be defined. 3. Typos: * In (5), $\eta'$ is used as the argument for the $\nabla_1 f$ and $\nabla^2_{12}g$ terms while $\eta''$ is used in the $\nabla_2 f$ term. This is in contrast with the second formula for $\nabla F$ on Page 4, where an independent RV is used in $\nabla^2_{12}g$. 
* A similar typo appears on the bottom of Page 15, in the proof of Lemma 2, where $\eta$ in the term $\nabla^2_{12}g$ should be changed to $\eta'$ (as in the definition of $V(x)$ above). * A similar typo appears on Page 16, where $H_K(1)$ and the last term in $H_K(2)$ should share the same variable $\eta$ (compare the formula for $V(x)$ on page 16 versus the definition of $V(x)$ on Page 15). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. Below, we address your comments. 1. [Dependence of $f$ on $\eta$] Indeed, the dependence of $f$ on $\eta$ is not necessary. In many cases, we can remove $\eta$ in $f$ and remove the expectation over $\eta$ in the upper level. We write it in the current way to be as general as possible. Thanks for pointing it out! 2. [Define $\alpha$ in Lemma 1] We have fixed that. 3. [Typos] Thanks for checking our paper carefully. Indeed, to align with the second $\nabla F$ shown on Page 4, one should use $\eta^\prime$ in $\nabla_{12} g$ and use $\eta^{\prime\prime}$ in $\nabla_1 f$ and $\nabla_2 f$ in Equation (5). We have made the modifications according to your suggestion. Note that the current Equation (5) is also valid because $\nabla_{12} g$ uses a different sample than $\nabla_2 f$. Since Equation (5) is of the form $\nabla_1 f(\eta^\prime) - \nabla_{12} g(\eta^\prime) \Lambda \nabla_2 f(\eta^{\prime\prime})$ (where we omit other dependence), after taking the full expectation, it still aligns with the second $\nabla F$ shown on Page 4. The key point is that we should not use the same sample of $\eta$ for $\nabla_{12} g$ and $\nabla_2 f$, which would lead to correlation issues. The other two places are typos. Thanks for pointing them out. We hope that this clarification increases your confidence in our work. --- Rebuttal Comment 1.1: Title: Acknowledgment Comment: I thank the authors for their responses. Regarding question #1, I see no loss in generality in removing $\xi$ from the upper level since we can always include a copy of $\xi$ as part of $\eta$ (i.e., set $\eta' = (\eta, \xi)$). In any case, this is a very minor point and the authors should choose the form they prefer best. --- Reply to Comment 1.1.1: Title: Acknowledgment Comment: Thank you for the time and the valuable feedback. Best Regards, Authors
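The correlation issue discussed in the thread above — that $\nabla_{12} g$ and $\nabla_2 f$ should not reuse the same sample of $\eta$ — mirrors a standard fact: reusing one sample to estimate a product of expectations yields a biased estimate, while two independent samples are unbiased. A minimal numerical illustration over an assumed finite distribution for $\eta$:

```python
# eta uniform on a small finite set (assumed values). Suppose we want an
# unbiased estimate of E[eta] * E[eta]. Reusing one sample (eta * eta)
# estimates E[eta^2] instead, which is strictly larger; averaging over two
# independent samples recovers E[eta]^2 exactly.
vals = [1.0, 2.0, 3.0]
n = len(vals)

same_sample = sum(v * v for v in vals) / n                   # E[eta^2]
indep_pairs = sum(a * b for a in vals for b in vals) / n**2  # E[eta]^2
target = (sum(vals) / n) ** 2

print(same_sample, indep_pairs, target)
```

The same reasoning motivates drawing the independent copies $\eta'$ and $\eta''$ in Equation (5): the product of two gradient terms is unbiased for the product of their conditional expectations only when the samples are independent given $\xi$.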
Rebuttal 1: Rebuttal: We are grateful to the reviewers for the constructive comments and suggestions, which have significantly improved the quality of our paper. We are happy to engage in further discussion. We first restate the importance of the CSBO problem in applications and the challenges in solving it. You may find the added experiments in the attached PDF. 1. In addition to the ones demonstrated in the numerical experiments, CSBO covers many other important applications, including personalized federated learning [Xing, et al. "Big-fed: Bilevel optimization enhanced graph-aided federated learning." IEEE Transactions on Big Data (2022)] and end-to-end learning that integrates learning and optimization (see the third paradigm in the survey [Sadana, Utsav, et al. "A Survey of Contextual Optimization Methods for Decision Making under Uncertainty." arXiv preprint arXiv:2306.10374 (2023)]). 2. Existing algorithms for traditional bilevel optimization problems with a single lower-level problem either cannot achieve optimal complexity bounds or do not apply to the CSBO problem due to the potentially infinitely many lower-level problems parametrized by $\xi$, each of which introduces a constraint involving solving a stochastic optimization problem. We highlight that the proposed RT-MLMC achieves the optimal complexity bounds for CSBO problems. We elaborate in more detail in the response to each reviewer. Below, we add extra experiments to address some concerns from reviewers. 1. Reviewer Dnys asked about the step size of MAML in meta-learning. We point out that the step size of MAML has been fine-tuned in our experiment. In our general response (Figure 4a), we report the performance of MAML for different choices of step size from the list {5e-3, 1e-2, 5e-2, 1e-1, 2e-1}. From the plot we can see that for small step sizes MAML tends to have similar performance, whereas it tends to diverge for step sizes that are too large.
The key issue of MAML is that it solves a different formulation than problem (8), i.e., it replaces the lower-level problems with one-step gradient updates. Thus, it is natural that MAML cannot achieve good performance on the CSBO objective function, as the approximation gap is theoretically $O(1)$ unless one performs multi-step MAML. - To further illustrate the performance of MAML, in our general response, we provide the performance of multi-step MAML in Figure 4b, which replaces the lower-level problems in (8) with $m$-step gradient updates, with $m\in \{1,4,8,12\}$. From the plot we can see that as $m$ increases, multi-step MAML tends to have better performance, but it still cannot outperform our proposed RT-MLMC algorithm. 2. Reviewers Z7Go and NrrF pointed out that we should add more baseline comparisons for solving the meta-learning formulation (8), which is a special case of the CSBO problem. Note that it is a stochastic bilevel optimization problem with multiple lower-level constraints. Many baseline approaches can actually only solve a surrogate of the formulation, either by replacing the lower-level problem with gradient updates to obtain one-step or multi-step MAML, or by averaging all lower-level problems so that there is only one lower-level constraint. - The performance of MAML is discussed in the previous bullet points. - Only two recent papers proposed algorithms for directly solving the formulation (8): [Guo and Yang, 2021] and [Hu et al. 2023] (note that [Hu et al. 2023] appeared after the NeurIPS deadline; we still use it as a baseline). The algorithm in [Guo and Yang, 2021] and the first algorithm in [Hu et al.
2023] both require computing the inverse of the Hessian exactly in each iteration, which cannot be implemented efficiently, especially for high-dimensional problems (for instance, in our meta-learning experiment the dimension of the decision variable is $512\times10=5120$, so these two algorithms would require inverting a $5120\times5120$ matrix in each iteration). We use $\mathrm{BSVRB}^{v2}$, the second algorithm in [Hu et al. 2023], as a baseline to solve the special case of the CSBO formulation. Figure 5 in the PDF file compares the performance of $\mathrm{BSVRB}^{v2}$ and the proposed RT-MLMC. Note that the performance of $\mathrm{BSVRB}^{v2}$ is worse than that of RT-MLMC. The reason is that the iteration complexity of $\mathrm{BSVRB}^{v2}$ depends linearly on the number of lower-level problems, whereas the proposed RT-MLMC does not and achieves optimal complexity bounds. 3. Reviewer Z7Go asked us to add more baseline methods for Wasserstein DRO with side information. Note that the existing method in [Yang et al., 2022] relies heavily on convexity, linear-predictor, and linear-loss assumptions to build reformulations that can be solved via convex solvers. However, when the hypothesis class from the covariate to the decision is parameterized by nonconvex neural networks, their method is not implementable. In contrast, our algorithm is the first implementable algorithm with a convergence guarantee. In Figure 2(c), we compare our method to baselines that naively apply SAA and Wasserstein DRO methods which do not explicitly leverage side information. We again summarize the numerical results of Figure 2(c) as Table 5 in our PDF file. References: - Zhishuai Guo and Tianbao Yang. Randomized stochastic variance-reduced methods for stochastic bilevel optimization. arXiv preprint arXiv:2105.02266, 2021 - Quanqi Hu, Zi-Hao Qiu, Zhishuai Guo, Lijun Zhang, and Tianbao Yang.
Blockwise stochastic variance-reduced methods with parallel speedup for multi-block bilevel optimization. arXiv preprint arXiv:2305.18730, 2023. (Appears on ArXiv after the NeurIPS submission deadline.) Pdf: /pdf/5abc8877820db7f6014f93d3000f46e1aaf33f7f.pdf
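The multi-step MAML surrogate discussed in this rebuttal (replacing each lower-level problem with $m$ gradient steps) can be illustrated on a toy quadratic lower level. This is a minimal sketch under assumed toy choices; the quadratic $g$, step size, and function names are illustrative, not the paper's RT-MLMC implementation:

```python
import numpy as np

# Toy illustration: multi-step MAML replaces the exact lower-level solution
# y*(x) = argmin_y g(x, y) with m gradient-descent steps. Here we take the
# simple quadratic g(x, y) = 0.5 * ||y - x||^2, so y*(x) = x exactly.

def m_step_surrogate(x, y0, m, lr=0.5):
    """Approximate the lower-level minimizer with m gradient steps."""
    y = y0.copy()
    for _ in range(m):
        grad_y = y - x          # nabla_y g(x, y) for the quadratic above
        y = y - lr * grad_y
    return y

x = np.array([1.0, -2.0])
y0 = np.zeros(2)
# Approximation error of the surrogate for m in {1, 4, 8, 12}:
errors = [np.linalg.norm(m_step_surrogate(x, y0, m) - x) for m in (1, 4, 8, 12)]
```

On this quadratic the surrogate contracts geometrically toward the exact minimizer, so the error shrinks as $m$ grows, matching the qualitative trend the rebuttal reports for multi-step MAML (better with more steps, but never solving the lower level exactly at finite $m$).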
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Concept Algebra for (Score-Based) Text-Controlled Generative Models
Accept (poster)
Summary: This paper hypothesizes that the latent representations learned by text-guided diffusion models contain structured subspaces corresponding to semantic concepts. Concepts are formalized as latent variables, and the concepts associated with a (text,image) pair are formalized as a bag of these latent variables, such that the distribution over image outputs is conditionally independent of text inputs given these concepts. Necessary and sufficient conditions (causal separability; Proposition 3.3) are given for concepts to be arithmetically composable; arithmetic composability in turn admits algebraic manipulation of concepts. A proof-of-concept algorithm is given for identifying subspaces of latent representations corresponding to concepts (based on "spanning prompts") and examples are provided of identifying and manipulating these subspaces using the Stable Diffusion model. Strengths: The conceptual framework constructed in this paper is thought-provoking. The exposition is quite clear, striking a good balance between helpful exposition and mathematical precision. Definition 2.5 (arithmetic composability) is interesting, as is Proposition 3.3 (characterizing arithmetic composability in terms of "causal separability"). The algorithms proposed in Section 4 for identifying and manipulating concepts are well-motivated. I also appreciate the discussion in Section 5 of the structure/character of concept subspaces. These analyses could be a good starting point for guiding empirical study of the score representation. Depending on how effective the proposed algebraic interventions prove to be in practice (see Weaknesses), I think that the abstractions and definitions introduced in this paper have the potential to stimulate an entirely new direction of work on control and interpretability of generative models. Control and interpretability are broadly relevant to the NeurIPS community. Weaknesses: The experiments in this paper are very rudimentary.
There are no quantitative results, and the qualitative results consist of a small number of examples provided in Figures 1-5. It is unclear to me how broadly effective, reliable, or robust this method is. There is just not enough information to evaluate whether these ideas work (or could be made to work) in practice; see the Limitations section of my review for additional thoughts about practicality. Minor: "Causal Separability" is strong language, and I'm not convinced that the word "causality" accurately describes the relationship defined by this term. At the very least, some argument is needed to justify calling this a causal relationship. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Manipulating the non-parametric representation space via evaluated values along the diffusion of a score-based model is clever, and my understanding is that this is the key to making concept algebra tractable. Could these methods be applied to, e.g., the Parti model? If not, it might be helpful to point this out and be more explicit about how and where these methods depend upon the structure of a score-based model. This work seems closely related to interpretability research. The concept editing algorithm could be seen as a sort of causal intervention (in the sense of "do calculus") on a hypothetical interpretation of a latent representation. I'm curious how this work connects to or complements recent interpretability research, e.g., ongoing work on mechanistic interpretability. Is the definition of "causal separability" given here related to/consistent with established definitions of "causally separable processes"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: It is unclear to me how restrictive the causal separability condition might be (Definition 3.2). Especially for more abstract, complicated concepts, I could imagine that causal separability might never be satisfied (but I am open to being convinced otherwise). Even for simple concepts, it seems like causal separability can be violated in ways that might not be obvious: for example, the inseparability of (male,female) and (deer,human) surprised me (although it became very clear once the reason for inseparability was explained to me). Understanding how broadly causal separability holds seems essential to the ultimate practicality of this framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review! We’re glad that you found the conceptual framework thought-provoking, and that you think it may have the potential to stimulate new lines of work in control and interpretability. **Experiments** With respect to the experiments, please see our global-level response. In short: the purpose of the experiments is to test the mathematical framework by testing the prediction that (1) the Stein score is an arithmetically composable representation, and (2) it's possible to find finite-dimensional subspaces corresponding to concepts. In our view, the examples in the paper do this—if the subspace structure didn't exist, then we couldn't find examples that demonstrate it! Additionally, we have added a new experiment testing whether the concept subspace structure is actually useful or necessary for model control. To that end, we compare with an existing method for style transfer that just adds on score vectors generated by the style prompt (not taking subspace structure into account). We also compare with direct English prompting. We find that human raters significantly prefer samples using concept algebra—see the global response for details. **Causal Separability** The intuitive idea here is that the separability of factors of variation boils down to whether there are “non-ignorable” interactions in the structural equation model that generates the output from the latent factors of variation—hence the name. The formal Definition 3.2 relaxes this causal requirement to distributional assumptions. We have added its causal interpretation in the camera-ready version. **Application to Other Generative Models** Ultimately, the results in the paper are about non-parametric representations (indeed, the results are about the structure of probability distributions directly!)
The importance of diffusion models is that they non-parametrically model the conditional distribution, so that the score representation directly inherits the properties of the distribution. To apply the results to other generative models, we must articulate the connection between the natural representations of these models (e.g., the residual stream in transformers) and the (estimated) conditional distributions. For autoregressive models like Parti, it’s not immediately clear how to do this. This is an exciting and important direction for future work! (Very speculatively: models with finite dimensional representations are often trained with objective functions corresponding to log likelihoods of exponential family probability models, such that the natural finite dimensional representation corresponds to the natural parameter of the exponential family model. In exponential family models, the Stein score is exactly the inner product of the natural parameter with $y$. This weakly suggests that additive subspace structure may originate in these models following the same Stein score representation arguments!) **Connection to Interpretability** This is a great question! Indeed, a major motivation for starting this line of work is to try to understand if the ''linear subspace hypothesis'' in mechanistic interpretability of transformers is true, and why it arises if so. As just discussed, the missing step for precisely connecting our results to this line of work is articulating how the finite dimensional transformer representation (the residual stream) relates to the log probability of the conditional distributions. Solving this missing step would presumably allow the tool set developed here to be brought to bear on the interpretation of transformers. One exciting observation here is that linear subspace structure appears to be a generic feature of probability distributions! 
Much mechanistic interpretability work motivates the linear subspace hypothesis by appealing to special structure of the transformer architecture (e.g., this is Anthropic's usual explanation). In contrast, our results suggest that linear encoding may fundamentally be about the structure of the data generating process. **Limitations** One important thing to note: the causal separability assumption is required for the concepts to be separable in the conditional distribution itself. This is a fundamental restriction on what concepts can be learned by any method that (approximately) learns a conditional distribution. I.e., it’s a limitation of the data generating process, not special to concept algebra or even diffusion models. Now, it is true that to find the concept subspace using prompts we have to be able to find prompts that elicit causally separable concepts. However, this is not so onerous—because sex and species are not separable, we can't elicit the sex concept with ''buck'' and ''doe''. But the prompts ''a woman'' and ''a man'' work well. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I appreciate in particular your clarification of the goals of the experimental results. I wholeheartedly agree that this work does not need to demonstrate: > 3. Our method excels in image manipulation compared to some alternatives. I also agree that the experiments support the claim: > 1. Our mathematical framework connects high-level concepts with internal representations, making significant predictions. In particular, I agree that the experiments support the following claim: > These conditions aren't merely theoretical; concrete examples of concepts exist where corresponding subspaces can be identified. However, a stronger claim would be that such concepts not only exist, but are pervasive. And this is the root of my concern about the experiments, that a few ad-hoc examples do not provide compelling evidence for the following claim: > 2. 
Concept algebra holds promise for image manipulation in text-guided diffusion models. The authors acknowledge that "Our paper mainly champions claim (1), with preliminary evidence supporting claim (2)." I believe that a more systematic approach to experiments could provide stronger evidence for (2) and this is what is missing from the current paper. That said, concept algebra is an interesting idea, it is described well, and I am open to seeing the paper published in its current form. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We believe we misunderstood your main concern as relating to the usefulness of concept algebra as a procedure. Our response was focused on the case where concept algebra is directly useful for the particular task of image generation. In this setting, the expected advantage (relative to direct prompting or heuristic methods) is disentangled manipulation of correlated concepts. Hence, the added larger scale experiment testing effectiveness on anti-correlated content/style pairs, relative to heuristic methods. (See the pdf attached to the global response) Are we correct in our understanding that your main concern now is whether concepts "pervasively" correspond to subspaces? In the fully general case, this can be a bit subtle to test because the question of "does a subspace exist" and the question of "can we find prompts that elicit the subspace" are not easily disentangled. However, in the particular case of binary concepts, finding suitable prompts is fairly straightforward. To demonstrate that, we ran the method with a number of additional arbitrarily selected prompts and binary concepts. The demonstrations work as follows: Start with the initial prompt. Then use concept algebra on the binary concept {concept value 1, concept value 2} to change the original concept value to the new one. We use a pair of prompts to identify the subspace. We ran the following examples: ## Dog vs. 
Cat - **Initial Prompt**: "A black dog sitting on the beach" - **Prompt for Target Z**: "cat" - **Prompts Defining Subspace**: "a dog" vs. "a cat" ## Beach vs. Forest - **Initial Prompt**: "A black dog sitting on the beach" - **Prompt for Target Z**: "forest" - **Prompts Defining Subspace**: "the beach" vs. "the forest" ## Black Dog vs. Yellow Dog - **Initial Prompt**: "A black dog sitting on the beach" - **Prompt for Target Z**: "yellow dog" - **Prompts Defining Subspace**: "a black dog" vs. "a yellow dog" ## Young vs. Old - **Initial Prompt**: “A boy playing the guitar” - **Prompt for Target Z**: "an old man" - **Prompts Defining Subspace**: “young man” vs. “old man” ## Formal Clothes vs. Casual Clothes - **Initial Prompt**: “A portrait of a man wearing formal clothes” - **Prompt for Target Z**: "casual clothes" - **Prompts Defining Subspace**: “formal clothes” vs. “casual clothes” ## Sunny Day vs. Rainy Day - **Initial Prompt**: “People sitting on the grass on a sunny afternoon by the river” - **Prompt for Target Z**: "a rainy afternoon" - **Prompts Defining Subspace**: “a sunny day” vs. “a rainy day” ## Happy Person vs. Sad Person - **Initial Prompt**: “A portrait of a smiling woman” - **Prompt for Target Z**: "a gloomy woman" - **Prompts Defining Subspace**: “a happy person” vs. “a sad person” In all cases, concept algebra clearly succeeds in identifying a subspace associated with the target concept. If the AC permits, we will link a document containing the generated images (NeurIPS policy does not let us add additional external links by default). Or, you can try these examples directly using the jupyter notebook demo included in the supplementary material. The point here is: it's totally straightforward to find subspaces corresponding to these randomly selected concepts. These examples are not particularly *useful* for image editing, since direct prompting also works fine in these cases. 
But it does provide clear support for the core prediction that the Stein score yields an arithmetically composable representation. (We also realize that just adding additional examples doesn't feel like a "systematic" test of whether suitable subspaces exist. However, we think it's fairly compelling evidence---if the subspace structure didn't exist, then we wouldn't find it!)
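For intuition, the binary-concept edits listed in this rebuttal (e.g., "a dog" vs. "a cat" defining the subspace) can be sketched schematically with toy vectors standing in for Stein-score evaluations. The function name, projector construction, and vector dimension here are illustrative assumptions, not the released implementation:

```python
import numpy as np

# Schematic sketch: a binary concept's subspace is spanned by the difference
# of the scores of two spanning prompts, and an edit replaces the component
# of the original score in that subspace with the target's component.

def concept_edit(s_orig, s_target, s_a, s_b):
    """Edit s_orig toward s_target only within span{s_a - s_b}."""
    d = s_a - s_b
    P = np.outer(d, d) / d.dot(d)            # orthogonal projector onto span{d}
    return s_orig + P @ (s_target - s_orig)  # change restricted to the subspace

rng = np.random.default_rng(0)
s_a, s_b = rng.normal(size=4), rng.normal(size=4)   # e.g. "a dog" / "a cat"
s_orig, s_target = rng.normal(size=4), rng.normal(size=4)
edited = concept_edit(s_orig, s_target, s_a, s_b)
```

By construction, the edited score agrees with the target inside the concept subspace and with the original everywhere else, which is the disentangled-manipulation property the authors argue distinguishes concept algebra from naive score addition.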
Summary: The paper aims to propose a formalization of concept-based algebra for text-to-image models. It presents equations for how prompts are composed of concepts, which can interact additively in order to generate the corresponding images, similar to word embedding analogies by Mikolov. Several examples are presented to show this framework in action. Strengths: The direction of work representing images as latent concepts and then learning representations for these concepts is interesting and useful. Weaknesses: The main part of this paper is the mathematical equations for conditioning image generation on concepts. It starts by claiming to provide a cognitive framework, such as mathematical equations for how humans map images to high-level concepts, which seems to be complete conjecture. There are no references provided to show any evidence that this is actually how humans analyse images. I very much doubt that a human looking at an image first starts by making a list of all the attributes in that image. The equations then gradually morph into how concept representations can be learned from examples. However, it is unclear how this is an improvement or a contribution over the previous work. The last paragraph of the paper correctly points out several other works that also do concept-based representations for image generation. I would expect there to be a comparison and evaluation. There is currently very little evaluation of the method, each claim seems to be backed up by only a single example. Almost all of the examples use image style as the concept that is being modified. In one case this is referred to as "medium", but even that is ultimately just image style. This moves the work to the style transfer area, which isn't really addressed in the paper. It also doesn't give any evidence that this method can be used for anything involving actual concepts related to the content of the image. 
Pages 7-8 claim a strength of this method is to handle images for which no prompt exists. However, it seems the first example could be prompted with "a portrait of a male or female mathematician" and the second one with "an androgynous nurse". Neither of these seem examples for which no textual prompt exist; or at least it hasn't been shown in the paper that prompting like this wouldn't work. Also, this section makes a rather strong claim that the model is being debiased in terms of gender by adding the "person" vector, which is a claim that would need a lot more proof than examples from a single prompt. There do not seem to be any details on how the experiments were conducted - what pre-trained models or training data was used, or how the modifications were performed in the context of that particular model. The appendix is repeatedly referenced for important details, proofs and examples, but does not seem to be submitted. Line 88: A text-controlled generative model should not be producing a random output. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please define the novel contribution of this work in the context of the previous work in the area of concept-based representations for image generation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: There does not seem to be any discussion of limitations or possible societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to read the paper. We’d like to clarify a few misunderstandings: 1. The paper does not provide a cognitive framework. We do posit a latent variable model in the data generating process, which we think is reasonable. Some evidence that this is reasonable: we are able to use it to derive the (surprising!) conclusion that high-level concepts are encoded as subspaces of the Stein-score representation space, and then find empirical examples showing this is true. 2. In addition to style transfer examples, we test the method on the concept of sex, and changing a generic toy to a specific one (the Dreambooth example)—these are both content variables! 3. With respect to the literature on style transfer: the point of the experiments is to show that the Stein score representation encodes these elements already, and in an arithmetically composable fashion. In particular, there is no style-transfer specific heuristics or finetuning. The takeaway of those experiments is that we can find style subspaces. Our claim is not that this is the most aesthetically pleasing way to affect style transfer. 4. The point of the experiments with unpromptable vectors is to show that the subspace structure does indeed correspond to the concept. So, the point of the androgynous figure example is not that this is the best way to produce an androgynous figure. It's that the output samples are semantically sensible! (As opposed to, e.g., having no effect, or producing nonsense images, as we might expect if there was no semantic subspace structure) 5. The supplementary material included with the submission has both experimental details and demonstration code. Additionally, it's mentioned twice (including in the abstract) that the experiments are based on Stable Diffusion. 6. The appendix was submitted in the supplementary material. 7. 
''A text-controlled generative model should not be producing a random output.'' These models produce random samples drawn from a distribution defined by the prompt. --- Rebuttal 2: Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score. --- Rebuttal Comment 2.1: Comment: Thank you for your reply. Lines 61-64 clearly claim to define an equation for how a human processes an image, hence a cognitive framework. My point about the unpromptable examples was that the paper claims that certain concepts cannot be prompted, when it seems there are definitely more accurate prompts available compared to those that were tried. "random samples drawn from a distribution defined by the prompt" is not quite the same as "random output". The largest issues remain unaddressed: 1. Given that the proposed framework is a reformulation of existing work, what exactly is the novel contribution? That was the only question in my review and it was left unanswered. 2. The evaluation is not sufficient. There are only a small number of individual qualitative examples discussed, which could easily be outliers or cherry-picked. Some form of quantitative evaluation is needed to draw conclusions. --- Reply to Comment 2.1.1: Comment: Thank you for your reply. With respect to your two main concerns: * This paper is not a reformulation of existing work. As far as we know, both the mathematical framework for reasoning about concepts-as-subspaces, and concept algebra---the demonstration of this framework---are new. Related work is discussed in detail in the paper. We are unclear what the source of your concern here is. * Please see the global level response for discussion of experimental evaluation. In short: the main purpose of the experiments to assess the predictions of the mathematical framework, which we think they do. 
We also added additional experiments comparing to naive additive composition and negative prompting, to illustrate the value of the subspace structure. The discussion with reviewer mXTP may also be helpful here.
Summary: The paper focuses on encoding abstract concepts (as prompts) and systematically composing them to generate images. They propose a mathematical framework to generate images based on specific combinations of prompts. Unlike approaches that change the prompt with different text descriptions, this framework encodes each feature independently. It then performs mathematical operations on different features of the prompt and the original content prompt to generate the desired image. They show several successful examples of the proposed framework and compare its performance with direct prompting approaches. Additionally, they emphasize the necessity of the assumption of causal separability. Strengths: 1. The paper introduces an interpretable approach to encoding high-level concepts with LLM representations. By incorporating specific prompts and applying mathematical operations on these prompts and the original content, the framework enables the generation of images that capture desired attributes or concepts. This interpretability allows for more fine-grained control over the generated content. I personally really like this aspect. 2. The paper showed an interesting analysis of the causal relationship between keywords used in the framework. By examining the feature differences between terms such as "buck" and "doe," the authors demonstrate the limitations of direct application between terms like "man" and "woman" due to the inherent species difference. Weaknesses: 1. It's necessary to have a larger-scale evaluation of the generated images. While the qualitative results presented in the paper showcase specific examples, a quantitative evaluation on a larger dataset or with a larger sample size would enhance the reliability and generalizability of the method. 2. I suspect the impact of prompt length is a key factor in the performance of the proposed methods.
If shorter direct prompts yield comparable results to the proposed method, it may reduce the appeal of the proposed approach. Once again, a larger evaluation scale on different prompt lengths would shed light on this aspect. 3. In line 286, you mention that "1/2 male nurse and 1/2 female nurse" does not correspond to any English prompt. I wonder if paraphrasing it into a more neutral term like "androgynous person" and re-prompting the model could potentially yield different direct prompting results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is "sex" encoded in Figure 1(b)? What's the result of sex "man" or "woman" in this example? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No, the paper doesn't discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful insights and constructive feedback on our approach and its implications. **Regarding larger-scale experiments** As discussed in the global response: Our main contribution is the mathematical framework. The demonstrations test predictions of the framework---i.e., that the score representation is arithmetically composable, and the subspaces corresponding to concepts can be identified. (If these results were not true, we would expect to find no examples where linear representation manipulations change concepts in isolation!) We have also added some additional experiments, including a quantitative comparison with direct prompting and an algebraic composition method that doesn’t account for subspace structure. We find that concept algebra is significantly preferred by human evaluators. **Regarding Limitations** We added limitations and discussion subsection in the camera ready version. Please see our global response. **Regarding the effect of prompt length** Thanks for the interesting suggestion! Indeed, Fig 3(d) example in the main text uses a very long prompt (to describe a detailed scene). However, it does not seem that prompt length is a key factor. In the added experiments (see global response), we use succinct prompts for describing both content and target style; e.g., “A nuclear power plant” or “Baroque painting”. In these short prompt examples, we observe that direct prompting still frequently fails and concept algebra often succeeds (and is clearly preferred by human evaluators). **Regarding promptless embeddings** The point about figure 2 is to support the claim that the estimated subspace genuinely corresponds to the concept of sex. The vector $\frac{1}{2}(s[\text{\`\`male nurse''}]+ s[\text{\`\`female nurse''}])$ doesn’t correspond to any English prompt. 
In the absence of concept subspace structure, we would expect adding such a vector to either result in nonsense—e.g., a white noise image—or to have no effect at all. Instead, we observe that the outputs are semantically sensible, supporting the idea that we’ve found a direction corresponding to the sex concept. Note: even if the weighted sum of scores very luckily does correspond to the word ''androgynous'', our claim still holds --- because then $\frac{2}{3}s[\text{\`\`male nurse''}]+ \frac{1}{3}s[\text{\`\`female nurse''}]$ will definitely not correspond to the same word. However, we can also generate sensible images with this embedding (we didn't show the images due to space limits). Finally, regarding your last question “How is "sex" encoded in Figure 1 (b)? what's the result of sex "man" or "women" in this example?”. Could you clarify what you mean by this question? --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: Thanks for providing the additional experiments. I found the prompting preference evaluation provides more convincing results for the method. I raised my score. For the last question, I meant in Figure 1(c), there is a projection of a particular style "in Fauvism style." However, in Figure 1 (b), there isn't one specific sex in the prompting. Should it be the projection of "male" or "female" instead of "a person"? I might be missing something, so I would appreciate a clarification. --- Reply to Comment 1.1.1: Comment: Ah, that's deliberate! We're trying to replace the distribution over the sex concept elicited by the prompt "a mathematician" with the distribution elicited by the prompt "a person"; i.e., the goal is to move from a distribution heavily biased towards men (as in figure 1a) to one that is roughly evenly split between men and women (as in figure 1b). (there were two motivations for this choice of example: 1. 
the training data includes a spurious correlation between sex and mathematician, which may be undesirable for the generative model to replicate. This example shows we can use concept algebra to break this kind of spurious association. 2. this illustrates that concept algebra handles non-degenerate distributions over concepts. See the text from lines 77-86.) --- Rebuttal 2: Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score.
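The score-composition argument threaded through this rebuttal can be made concrete with a toy one-dimensional model (an editor's sketch: the Gaussian means standing in for the two prompts and the Langevin sampler are illustrative assumptions, not the paper's diffusion model):

```python
import numpy as np

# Toy 1-D sketch of score mixing (editor's illustration, not the paper's
# model): for a Gaussian N(mu, 1), the Stein score is s(x) = mu - x.
# Averaging the scores of two conditionals gives the score of a Gaussian
# with the averaged mean, so Langevin sampling with the mixed score
# produces samples "in between" the two prompts.

def score(x, mu):
    return mu - x  # Stein score of N(mu, 1)

def langevin_sample(score_fn, steps=2000, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(steps):
        x = x + eta * score_fn(x) + np.sqrt(2 * eta) * rng.standard_normal()
    return x

# hypothetical stand-ins: one prompt -> mu = -2, the other -> mu = +2
mixed = lambda x: 0.5 * score(x, -2.0) + 0.5 * score(x, 2.0)
samples = [langevin_sample(mixed, seed=s) for s in range(200)]
print(np.mean(samples))  # concentrates near 0, the averaged mean
```

In this toy model the half-weighted score mixture is itself a valid score (of a Gaussian centered between the two means), mirroring the claim that the mixture produces semantically sensible outputs despite matching no single English prompt.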
Summary: This research presents a novel mathematical framework that brings clarity to the abstract notion of concepts, enabling their connection to specific subspaces in a representation space. The significance lies in demonstrating the existence of structured concepts in score representations, specifically emphasizing the compositional nature of these concepts within the representation space, which is useful for further analysis regarding generation. One of the key contributions of this work is the introduction of an effective method to identify the subspace associated with a given concept. This approach allows for the manipulation of concepts expressed by the model through algebraic operations on the representation, thereby providing a powerful tool to work with and understand complex concepts in a more tangible and interpretable manner. Moreover, the implications of this research extend beyond its immediate domain, as the proposed framework can be extended to other areas, such as natural language processing (NLP), analysing concepts such as topics and semantic meanings. Strengths: 1. Previously, the notion of concepts was often discussed at an abstract level. This work demonstrated the existence of structured concepts in score representations, specifically emphasizing the compositional nature of these concepts within the representation space, which can be helpful in further analysis on abstract notions. 2. The work proposed a useful method to identify the subspace associated with a given concept, allowing us to manipulate concepts expressed by a model through algebraic operations on the representation. This method can be a powerful tool in terms of understanding complex concepts. 3. The experimental results verified the efficacy of this method in case studies. This method can be extended to analysis in other areas like NLP. Weaknesses: 1.
The major concern regarding this work lies in the absence of a precise quantitative evaluation to assess the impact of concept extraction and manipulation, particularly in the experiments. Relying largely on case-level analysis may not be sufficient to draw robust conclusions. Such measurements based on precise quantitative evaluation can be used to make comparisons between various methods relevant to this topic and for further analysis on subsequent improvements. It would be more convincing if the concept manipulation were evaluated on more cases (or on more datasets) and comparisons were made between the proposed method and relevant methods (e.g., methods to extract and manipulate latent features) used in previous research. 2. A minor issue that needs attention is the writing style, particularly the accuracy of internal references. Some of the internal references in the work are not precise and should be revised for clarity and consistency. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please pay attention to the clarity of the descriptions. 1. Some notations are a little bit confusing. E.g., in Definition 2.2, does “z_{1:k}” refer to “z_1, z_2, …, z_k”? What is $\delta_{\text{male}}$ in Equation 11? 2. “Following theorem 3.3” (Line 172). “theorem 3.3” is not found. Similar cases can also be found in Line 246 and Line 252. 3. It would be worth discussing the selection of the discretized concepts, e.g., how many values of k are sufficient for a latent variable C? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations have not been well discussed in the paper.
As the performance of the proposed method is mainly evaluated on case studies, it is suggested that a more quantitative measurement be introduced to judge the robustness of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and appreciation of our work's contributions. We're glad you recognized the novelty and utility of our approach in providing tangible interpretations of abstract concepts within score representations. Our method, as you rightly pointed out, offers a powerful tool for understanding and manipulating these complex concepts, with potential extensions to domains like NLP. **Experiments & Limitations**: Please refer to the global response for a comprehensive discussion on the experiments. In brief, we believe our experiments substantiate the paper's primary assertions, and we've integrated a broader scale evaluation in the revised manuscript. Regarding the noted limitations, they have been elaborated in our global response and will be included in the camera-ready version. Primarily, these limitations pertain to the automation, efficiency, and accuracy of estimating the concept-subspace. As highlighted in Section 5, given the low-dimensionality of the concept-subspace (for many practical problems), we're optimistic about overcoming these challenges in future works. **Selection of Discretized Concepts**: For a specified prompt $x$, we solely require the adequate sets of concepts $Z_1, …, Z_K$. If our objective revolves around modifying $Z_1$ while maintaining the stability of other concepts, we can designate $Z$ as $Z_1$, with $W$ representing the remaining concepts. Thus, even though our method predominantly uses two concepts, $Z$ and $W$, its applicability remains general. **Clarifications**: We appreciate your attention to detail. Indeed, Thm 3.3 should be referenced as Prop 3.3. In the notation, $z_{1:k}$ indeed represents the sequence $z_1, z_2, …, z_k$. Regarding Eq. 11, the intended interpretation is $Q_{x_1}(z, w) = \delta_{\text{male}}(z) Q_w$, with $\delta_{\text{male}}(z)$ implying $P(Z = \text{male}) = 1$. --- Rebuttal Comment 1.1: Comment: Thanks very much for your reply! 
I am satisfied with your explanation! --- Rebuttal 2: Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? If you are satisfied with the response, we hope you will consider increasing the score.
Rebuttal 1: Rebuttal: We appreciate the reviewers' insightful comments. The reviewers broadly agree that understanding how high-level concepts are encoded in the internal representations of generative models is a timely and important topic, and that the mathematical framework developed here is a significant step in this direction. Several reviewers note that the development in this paper opens up important new directions in the control and interpretation of language-guided generative models broadly. Further, the reviewers generally found the exposition clear and thought provoking. ## Existing experiments Reviewers expressed concerns about our experimental evaluation. We assert that our experiments are scientifically robust and align with standard publication norms. Additionally, we've introduced further experiments highlighting concept algebra's advantages over traditional heuristics. Regarding our scientific contributions, let's distinguish the three claims one might make: 1. Our mathematical framework connects high-level concepts with internal representations, making significant predictions. 2. Concept algebra holds promise for image manipulation in text-guided diffusion models. 3. Our method excels in image manipulation compared to some alternatives. Our paper mainly champions claim (1), with preliminary evidence supporting claim (2) (which may have created the confusion). As we note explicitly in the paper, our claim is not (3). Our empirical evaluation aims to ascertain the utility of the mathematical framework. We specifically address the following non-trivial implications of our theory: 1. The Stein score represents concepts in an arithmetically decomposable manner. 2. Subspaces can be discerned using the methodologies from sections 4 and 5, given certain conditions. 3. These conditions aren't merely theoretical; concrete examples of concepts exist where corresponding subspaces can be identified.
To validate these claims, we presented examples within the paper that pinpoint concepts, identify their corresponding subspace, and exhibit their arithmetically composable nature. We furthered our analysis, which may have been misunderstood, through two key stress tests: - First, our experiments revealed that manipulations within the sex subspace are coherent, even when such manipulations aren't directly prompted by English. Importantly, our emphasis is **not** that such images cannot be generated via English prompts, but that the sex subspace genuinely encapsulates the concept. The example $\frac{1}{2} (s[\text{\`\`man''}]+ s[\text{\`\`woman''}])$ illustrates this, producing semantically sound images even without a direct English phrase representation. - Our work emphasizes concept manipulation's effectiveness, even when direct prompts fail. The point here is that the subspace structure is independent of the prompt, and so works even for "hard" prompts. In the camera-ready version, we will expand the discussion and address several limitations: - Our primary emphasis is on the mathematical framework. We anticipate its applicability beyond just the text-to-image setting. - Concept algebra is useful for communicating user intention to the model. It does not generically change the quality of generated images. Accordingly, it is complementary to generative model improvements such as architecture changes or increased scale. - We acknowledge the computational demands of concept algebra and the necessity for handcrafting prompts to pinpoint concept subspaces. These are both significant practical issues, and overcoming them is an important direction for future work. ## Further Experiments Beyond this, we've incorporated new experiments (figures and results in the attached pdf) to highlight our theory's advantages over existing heuristic methods. Detailed code and results have been shared anonymously with the AC.
Concept algebra modifies representation vectors in subspaces aligned with target concepts. To gauge its utility, we contrasted it against methods that don't utilize such structured subspaces. Methods like [Du+21; Liu+21; NBP22; Ano23] (references in main text) employ algebraic manipulations without pinpointing specific subspaces. Another common approach, negative prompting, aims to eliminate target concept expressions by subtracting relevant scores. Unlike concept algebra, these methods don't confine manipulations to specific subspaces. Consequently, our theory posits that these heuristics might inadvertently modify off-target concepts tied to the primary concept, like inducing a medieval theme while aiming for a renaissance style. We assessed concept algebra's efficacy against composition and direct prompting in style transfer tasks. Using 49 challenging content/style combinations, like "A nuclear power plant in Baroque painting", we employed the three methods on each pair to generate samples. Human raters were then presented with the outcomes alongside reference images, ranking them based on adherence to the desired style and content. This evaluation was replicated across 10 different raters. Refer to Fig 1 in the attached PDF for illustrative examples. **Raters consistently favored images produced by concept algebra**, as highlighted in table 1(c) of the attached PDF. This aligns with our theory, suggesting concept algebra's adeptness in retaining content while altering style. Furthermore, we demonstrated a comparison between concept algebra and negative prompting. Using the prompt "a portrait of a king", our aim was to transition to "a portrait of a queen". Negative prompting using $x_{-} = \text{\`\`male"}$ was ineffective, while concept algebra employing $x_{\text{new}} = \text{\`\`female"}$ (refer to equation 13) achieved the desired result. 
For a fairer comparison, we applied the same negative prompt in conjunction with concept projection, which was also successful. Details can be found in the anonymous link provided to AC. Pdf: /pdf/1e2aaaa88e3dd33ac810818cfacf46ddcc4d05b3.pdf
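The contrast drawn in this global response between concept algebra and unrestricted score arithmetic (composition, negative prompting) can be sketched in plain linear algebra. This is an editor's toy sketch: the vectors and the concept subspace are hypothetical stand-ins, and only the form of the projection edit follows the rebuttal's description.

```python
import numpy as np

# Toy comparison: concept algebra edits the score only inside the estimated
# concept subspace, while unrestricted arithmetic moves the whole vector and
# so can disturb off-target directions.

def proj(basis):
    """Orthogonal projector onto the span of the columns of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

s_x = np.array([1.0, 2.0, 3.0])      # score for the original prompt (toy)
s_new = np.array([-1.0, 2.5, 2.0])   # score for the target-concept prompt (toy)
P = proj(np.array([[1.0], [0.0], [0.0]]))  # concept subspace = span(e0)

# concept-algebra-style edit: move only within the concept subspace
s_alg = s_x + P @ (s_new - s_x)
# unrestricted arithmetic (no subspace structure): full replacement
s_arith = s_x + (s_new - s_x)

print(s_alg)    # only the first coordinate moved: [-1., 2., 3.]
print(s_arith)  # off-target coordinates moved too: [-1., 2.5, 2.]
```

The projected edit leaves the off-target coordinates of `s_x` untouched, which is the mechanism behind the claim that the heuristics may inadvertently modify concepts tied to the primary one while concept algebra does not.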
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper suggests that concepts are represented in text-guided generative models as encoded subspaces of the representation space. It also formalizes the description of the representation of concepts in text-controlled generative models in a mathematical way. The paper also shows that the Stein score of the text-conditional distribution is an arithmetically composable representation of the input text, and develops concept algebra, a method to manipulate the concepts via arithmetic manipulation of the representation. Strengths: Very well-written paper! All the math looks great and I am excited about the potential alternative to prompt engineering! Hope there is a demo that I could try. - Sections 2 and 3 rigorously define the mathematical background used in the concept algebra method, and sections 4 and 5 are also very well-written. - I enjoyed reading the figures mentioned in the paper! All of them are informative, and demonstrate the point of each figure clearly. - As demonstrated in section 6, the proposed method could be a strong alternative to prompt engineering! Being much more stable, less random, and with no need to prompt the model into acting as a painter or other roles, concept algebra seems very promising! Weaknesses: Nothing much, but I would love to see some limitations of the method discussed in section 7. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: * If I understand everything correctly, concept algebra can only work on models where one has access to their representation; therefore, concept algebra can't really work on closed models, right?
* How will it negatively impact society if everyone can edit the model output using concept algebra? Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your in-depth engagement with our work on concept algebra. **Limitations & Future Improvements**: - The primary limitations of our method lie in the concept-subspace estimation step, but there's potential for enhancement: - The current approach depends on basis prompts that vary in $Z$ and are invariant in $W$. A more systematic method for selecting these basis prompts, alongside a quantitative evaluation of their disentangling effectiveness, is required. - Estimating the subspace from $K$ score embeddings presents a challenge in high-dimensional estimation. Despite our use of truncated SVD and variance thresholding, this step could benefit from advanced statistical techniques. - The computation-heavy nature of the estimation step at each sampling iteration is a concern. An immediate workaround could involve performing concept-algebra selectively during the sampling process, as high-level concepts often get determined in initial steps. Alternatively, a model approximating the $\text{Proj}_z(.)$ function could be trained for direct use during sampling, presenting an interesting avenue for future research. **Demo Details**: - In the supplementary materials, we've included a `concept_algebra.ipynb` demo focused on binary concept alterations. This code can be run on Google Colab with a GPU (Pro is not necessary). For more complex concepts, we've implemented solutions in `code/concept_pj_basis/concept_pj.py`. **Applicability to Closed Models**: - Our approach isn't directly applicable to closed models. However, if one has access to the underlying language model of a closed system, concept-algebra could potentially be extended to that language model to tweak embeddings (this will be interesting future research). Following this, nearest corresponding prompts to the altered embeddings could be identified, enabling concept manipulation in closed text-to-image models.
**Potential Negative Impact**: - While concept algebra can be misused, its capabilities can also serve as a countermeasure. For instance, if one creates NSFW images using a specific subspace, the same subspace could aid in developing a more robust NSFW detection classifier, reducing susceptibility to spurious correlations. --- Rebuttal 2: Comment: Thank you again for your review and feedback. Do you have any additional concerns or questions? --- Rebuttal Comment 2.1: Title: Thank you for the reply Comment: I am satisfied with the answers to my questions, and the future improvements sound good to me. I don't have any additional concerns or questions.
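The subspace-estimation step described in the rebuttal above (basis prompts varying only in the target concept, truncated SVD, variance thresholding) can be sketched as follows. This is an editor's toy sketch with a planted concept direction; the real inputs would be score embeddings from the diffusion model, and the threshold value is an illustrative assumption.

```python
import numpy as np

# Toy subspace estimation: stack the embeddings of K basis prompts that vary
# only in the target concept, center them, and recover the low-dimensional
# concept subspace via truncated SVD with a simple variance threshold.

rng = np.random.default_rng(0)
d, K = 16, 8
true_dir = np.zeros(d)
true_dir[0] = 1.0                                 # planted concept direction
E = rng.uniform(-3, 3, size=(K, 1)) * true_dir    # variation along the concept
E = E + 0.01 * rng.standard_normal((K, d))        # small nuisance noise

centered = E - E.mean(axis=0)
U, svals, Vt = np.linalg.svd(centered, full_matrices=False)
var = svals**2 / np.sum(svals**2)
rank = int(np.sum(var > 0.05))                    # variance thresholding
basis = Vt[:rank].T                               # estimated concept basis

print(rank)                          # recovered dimension of the subspace
print(abs(basis[:, 0] @ true_dir))   # alignment with the planted direction
```

With the planted direction dominating the variance, the thresholded SVD keeps exactly one component and its basis vector aligns (up to sign) with the true concept direction, which is the behavior the rebuttal's estimation step relies on.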
Tracking Most Significant Shifts in Nonparametric Contextual Bandits
Accept (poster)
Summary: This paper studies nonparametric contextual bandit problems with distributional shifts. This paper proposes a new notion of distributional changes called the experienced significant shifts. Based on this notion, the authors develop new algorithms that achieve the minimax rates without knowing some problem-dependent parameters. ====After rebuttal==== I have read the rebuttal. I'd like to keep my scores. I also encourage the authors to add more discussions regarding the adaptivity issue. Strengths: The authors developed a new notion called experienced significant shifts that better captures the distribution shifts in contextual bandits. Based on this new notion, the authors show that the minimax optimal regret in contextual bandits with distributional shifts can be achieved, even without knowing problem-dependent parameters. In my opinion, this result is significant to the community. Also, along the way, the authors develop several new techniques to achieve this result (e.g., those highlighted in Section 5), which can be of independent interest. Weaknesses: In the paper, the authors mention several generalizations of the existing setting, e.g., (i) generalization to H\"older continuity, and (ii) the one mentioned in Remark 1. However, no formal results are provided for these generalizations. It would be great if the authors could provide some formal statements. Also, for the oracle procedure described in Definition 7, it would be better to formally state the power of the oracle, e.g., what is known to the oracle and what is unknown. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In bandit learning, when the goal is to minimize the cumulative regret, researchers have previously shown that adaptivity to the usual minimax rate is usually impossible and the best one can hope for is Pareto optimality, e.g., for the model selection problem [1, 2].
However, in this paper, the authors show the opposite for learning with distributional shifts. Can the authors elaborate more on this? Is this because of the common assumptions made in this setting, e.g., Assumption 2? Citations: [1] Teodor Marinov and Julian Zimmert. The Pareto frontier of model selection for general contextual bandits. [2] Yinglun Zhu and Robert Nowak. Pareto optimal model selection in linear bandits. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the supportive comments! _Generalizations to Hölder continuity and Remark 1_: We apologize for the vagueness; we'll add explicit formal statements of these generalizations. _On Adaptivity_: The cited papers in fact study other notions of adaptivity, which are not applicable to our problem for the following reasons. [1] considers a stronger notion of adaptivity than the one considered here: the impossibility of adaptive switching regret against an adaptive adversary, which is a harder problem than our task of dynamic regret minimization with obliviously decided rewards. [2] is concerned with adaptivity to the unknown intrinsic dimension of linear representations of arms; here we focus on optimal regret in terms of the context dimension which is known (indeed, by Assumption 2, as you point out). Also, note that adaptivity to the minimax rate for an unknown number of distribution shifts has recently been shown to be possible in many bandit and RL settings (see Auer et al., 2019; Chen et al., 2019; Wei \& Luo, 2021). So, it is not a new phenomenon. However, we find your comment very insightful and will add a discussion accordingly.
Summary: This paper studies the contextual bandits problem with changing Lipschitz reward functions and proves *minimax optimal* regret bounds for this problem, which includes both upper and lower bounds. For the upper bound, this paper comes up with an algorithm that achieves it. The algorithm is based on carefully maintaining a hierarchical partition tree that discretizes the context space. Strengths: 1. This paper achieves minimax optimality for the problem, which closes the existing gap and solves the problem. 2. The novel idea of significant shifts and the algorithm design of maintaining the partition tree could be of independent interest. 3. This paper provides excellent plain-word explanations to highlight the key ideas and steps in their proof. Weaknesses: 1. The algorithm is recursive and complicated, so it is hard for the audience to understand the algorithm, even with the algorithm explanation in Sec. 4. Maybe it is better to replace Line 9 in Algorithm 2 with some while-loop to improve readability. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Typos: 1. Line 178, "Miminimax". 2. Below Line 259, Algorithm 1, "tree $T$" -> tree $\mathcal T$. 3. Line 305, "ut". 4. Line 306, "(i.e., the bin at level $r_{s_2-s_1}$ containing $X_t$)" duplicates. 5. Line 13 in Algorithm 2, trailing ; after : The authors may spend some time to polish the paper before finalizing. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the encouraging comments, as well as for pointing out typos and offering writing suggestions! --- Rebuttal Comment 1.1: Comment: I have read the rebuttal.
Summary: This paper studies nonstationary contextual bandits. In particular, a new notion of "significant shift" is introduced (Definition 6) which accounts for shifts in the distribution localized at the possible actions and with significant magnitude. First, the authors derive a lower bound for existing definitions of shifts (Theorem 1) and provide an oracle algorithm to achieve it. Then, after introducing the new notion of shifts, they derive an algorithm which adapts to it (i.e., which does not require the time indexes of the shifts), see Theorem 3. Strengths: - The topic is of interest to the NeurIPS community - The paper is globally clear and well written - The intuition and analysis are sound - The authors constantly compare their results with existing ones to position their contribution Weaknesses: - The paper lacks experiments. Since the authors provided a detailed algorithm, I find it disappointing that no experiment is carried out. It would be of particular interest to study the behavior of the different algorithms depending on the types of switches - I feel previous papers on significant switches could be discussed a bit more, to highlight the changes and challenges raised by the contextual setting - No lower bound for the proposed definition of switches is provided Minor: - Abstract: MAB is not defined yet - Section 1.1: linebreaks do not seem necessary to me and make the reading less fluid - When citing several works, the chronological order is preferable - Equation ($\star$): what is $r(B)$? - Lines 242, 252: are the log factors omitted? - l. 305: ut - l. 316: $\approx$ could be avoided Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In this paper, the available actions are the same at every time step. Could the authors think about a generalization where the action set changes over time? In particular, would it be possible to restrict the significant changes only to the playable arms? - Could the Lipschitz assumption be removed?
- In Definition 7: does $\mathcal{G}_t$ always exist? - Do the authors have in mind simple examples of $f_a^t$ where both switch characterizations drastically differ? - If I'm not missing anything, it seems to me that no definition of switch implies the other. Then, I find it a bit misleading to compare results all along. It should be made clear that the two are different parameterizations, incomparable in general (I agree on the identity $\tilde{L} \le L$) - Assume that changes have small magnitude (e.g., small drift at each time step) or do not apply to every arm (but only to the best, say); is the regret of CMETA not impacted by those switches? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for pointing out typos, offering writing suggestions, and asking many careful questions. **Weaknesses**: _Experiments_: We'd like to emphasize that the main contribution of the paper is theoretical, rather than proposing a new algorithm. In fact, the algorithm in the paper is of a theoretical nature that only serves to drive the main theoretical message: for the well-studied setting of contextual bandits with Lipschitz rewards, both the form of the optimal regret and whether optimality can be adaptively achieved have remained open; we have resolved these questions. We agree with you, however, that an eventual goal of this fledgling line of work is to develop practical procedures, and we admit the state-of-the-art is still far from this. _Lower Bound in terms of $\tilde{L}$_: The construction of the lower bound (Theorem 1) in terms of $L$ global shifts and total-variation $V_T$ in fact satisfies $\tilde{L}=\Omega(L)$. Thus, this implies a matching lower bound $T^{\frac{1+d}{2+d}}\cdot \tilde{L}^{\frac{1}{2+d}}$ of the same order as our upper bound (Theorem 3), up to log factors. We'll make this clearer. **Questions**: _Changing Action Sets_: This is an interesting future direction which is beyond the scope of our paper. Even in the simpler MAB setting, such results are unknown as there are added difficulties with changing action sets. For instance, with changing action sets, the _safe arm_ (i.e., the arm not yet incurring significant regret within a phase) and a bad arm with large regret may not be available on the same rounds, a fact crucial to the significant shift analysis of Suk and Kpotufe, 2022. This makes it unclear how to generalize key parts of the regret analysis. _Removing Lipschitz Assumptions_: We can generalize all the results to the setting of $\alpha$-Hölder continuous rewards with $\alpha\leq 1$. We'll include a remark on this.
_Existence of Good Arm Set_ $\mathcal{G}_t$: Yes, $\mathcal{G}_t$ always exists and contains at least one arm by the definition of experienced significant shift (Definition 6). _Instances where Switch Characterizations Drastically Differ_: As a simple example, if there were no changes in best arm at any context $x$ but changes in rewards at every round (e.g., the rewards of all arms change together by the same amount) then we'd have large total-variation $V_T=T$ and global count of shifts $L=T$ versus $\tilde{L} = 0$ experienced sig. shifts. Going even beyond this, even a tighter global count of best-arm changes $S := \sum_{t=2}^T {\bf 1}(\exists x \in \mathcal{X}: \text{best arm changes at $x$ from $t-1$ to $t$})$ could still be large $S=T$ while $\tilde{L} \ll T$ if the changes in best arm were constrained to a small subregion of context space. _Comparability of Definitions of Switches_: In fact, the lower and upper bound results **are comparable**. Our main adaptive upper bound always achieves the minimax optimal rate under all the parametrizations mentioned in the paper: $L \geq \tilde{L}$ (Corollary 4) and $V_T$ (Corollary 5). _Small Magnitude Changes and Regret of CMETA_: You're correct in that small enough magnitude changes do not affect the regret of CMETA. Additionally, changes constrained to subsets of arms which do not change the best arm do not affect the regret of CMETA.
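The rebuttal's example of drastically differing switch characterizations can be checked numerically. This is an editor's toy sketch (two arms, a common per-round drift, all constants illustrative): the means change almost every round, so the global shift count $L$ scales with $T$, yet the best arm never changes.

```python
import numpy as np

# Toy version of the example above: rewards of both arms shift together by
# the same amount every round, so the per-round change count is large while
# the identity of the best arm is stable throughout.

T = 100
drift = 0.1 * np.sin(np.arange(T))              # common per-round perturbation
rewards = np.stack([0.8 + drift, 0.2 + drift])  # arm 0 always better by 0.6

# global shift count: rounds where any arm's mean changed
L = int(np.sum(np.any(rewards[:, 1:] != rewards[:, :-1], axis=0)))
# best-arm changes: rounds where the argmax flips
best = np.argmax(rewards, axis=0)
best_arm_changes = int(np.sum(best[1:] != best[:-1]))

print(L)                 # 99: the means change every round
print(best_arm_changes)  # 0: the best arm never changes
```

This mirrors the rebuttal's point that $V_T$ and $L$ can be of order $T$ while the count of (experienced significant) best-arm changes is zero.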
Summary: This paper studies nonparametric contextual bandits where the mean reward functions can change over time. A key assumption is that the rewards are Lipschitz in context. The authors then adopt a typical approach to discretize the context space into bins. The notions of “significant regret”, “unsafe at context”, and “experienced significant shift” are introduced; in particular, an experienced significant shift implies a change in the optimal arm in a particular bin. The authors then propose an algorithm, Contextual Meta-Elimination while Tracking (CMETA), and establish regret bounds in terms of the total number of “experienced significant shifts.” Strengths: This paper is well-organized and introduces both a notion of “experienced significant shift” and an algorithm, CMETA, accompanied by theoretical guarantees. Weaknesses: My major concerns center on the comparison of CMETA and its analysis to the algorithm and regret analysis introduced by Suk and Kpotufe (2022), and on the presentation of the theoretical results. 1. Besides discretization of the context space into bins, how does CMETA and its analysis differ from that introduced by Suk and Kpotufe (2022)? 2. As for the presentation of the results, we take Theorem 3 as an example. First, the notation is inconsistent: the notation $\tilde{L}$ is introduced in line 214, and in Theorem 3 it appears as $\tilde{L}$ in line 248 but as $\tilde{L}(\mathbb{X}_T)$ in line 250. In addition, $\tilde{L}$ is dependent on the discretization of the context space into bins, which is determined by level $r$ in Algorithm 1, yet $r$ does not appear in Theorem 3.1? Moreover, could the authors elaborate on the “choice of level” in Section 4 and how the level is adaptively chosen? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My main concerns were raised in the “Weakness” section. To reiterate: 1. 
Besides discretization of the context space into bins, how does CMETA and its analysis differ from that introduced by Suk and Kpotufe (2022)? 2. It would be very helpful if the authors could elaborate on the “choice of level” in Algorithm 1 (CMETA) and how the regret bound established in Theorem 3.1 depends on the level. A remaining question is: - In Corollary 4, it seems that if $\tilde{L}$ grows linearly in $T$, the bound on cumulative regret becomes linear in $\log^3(T) T$, in bandits where rewards are bounded in $[0, 1]$ (and cumulative regret is at most $T$)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: _Key Difficulties_: Although we agree with the reviewer that discretization is a natural approach, appearing in fact in all past works on Lipschitz contextual bandits, this is not the main technical difficulty, neither algorithmically nor analytically: the main difficulty is to understand the level of discretization required for the specific setting, i.e., what cell width to employ in different parts of space (e.g., Rigollet & Zeevi, 2010; Perchet & Rigollet, 2013), and how to automatically infer such cell width from data (Slivkins, 2014). This is in fact the focus of papers on the subject, and it is by now well understood in the stationary setting to depend tightly on the time horizon. In the non-stationary setting considered here, however, such automatic choice of discretization is even more difficult: we need to not only make a choice of level in stationary phases, but also, in order to detect stationary phases of varying length, we need separate choices of levels commensurate with the unknown starts of new phases. The main challenge therefore is in understanding how to design and schedule such automatic choices of levels while maintaining optimal performance w.r.t. the unknown number and positions of stationary phases. This is explained in lines 318-341 of Section 5. _Notation and Choice of Level_: We apologize for the confusing notation; we drop the dependence on ${\bf X}_T$ in $\tilde{L}$ in some places for ease of presentation (see Lines 211--212). Note that the notion $\tilde{L}$ of experienced sig. shift (Definition 6) is independent of any fixed level (and thus of the level used by the algorithm), which is part of its appeal. This misunderstanding is our fault as we often discuss $\tilde{L}$ in terms of levels used by the algorithm. As said above, such choices of level are the main technical contribution of our analysis. 
The “choice of level” is adaptive in the sense that it does not depend on a fixed horizon $T$ and so can “adapt” to the minimax regret over unknown episode durations (the earlier Perchet & Rigollet, 2013 employ such a time-varying level in the stationary setup). _On $\log$ Factors and Sublinear Regret_: It has unfortunately become common in bandits to write regret bounds this way. We apologize for the confusion this may have caused. Our regret is always upper bounded by $T$ and not by $\log^3(T) \cdot T$ in the worst case, and we'll add an indicator to our bound to express when there are $\log$ factors.
Rebuttal 1: Rebuttal: We thank reviewers for their time and useful comments. Please see individual rebuttals to each reviewer.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Collaborative Learning via Prediction Consensus
Accept (poster)
Summary: To facilitate information exchange among agents and improve prediction accuracy on a shared target domain, this paper proposes a decentralized learning algorithm based on prediction consensus, inspired by social science, for leveraging each agent’s predictions. To be concrete, each agent first shares its predictions and weighs the other agents’ predictions according to trust scores; these proxy labels are then used to augment local model training via distillation. The authors show that the trust measure based on cosine similarity can yield a trust matrix with ideal properties, which facilitates effective consensus. The efficacy is empirically demonstrated by showing that the proposed method is better than classical baselines under some heterogeneous conditions. Strengths: This paper considers a collaborative learning setting where each agent owns its own data and agents want to collaborate to improve their predictive performance over a target domain. The authors propose a decentralized algorithm based on distillation and a trust weighting scheme, which is useful in cases where data sharing and model sharing are not allowed. The key ingredient, called trust, is inspired by social science, which is an interesting topic. The theoretical analysis seems sound, although I didn’t check all the details. The authors also conduct sufficient experiments to demonstrate the effectiveness of the proposed method. Overall, the paper is well-written. Weaknesses: In the collaborative setting, each agent owns its own data and wants to improve its predictive accuracy while keeping its data and model private, which is a significant problem. However, exchanging predictions may leak valuable information. Is there a more private way to share data between agents? The goal of agents is to improve the prediction performance on a shared target domain. Is it possible that ensemble algorithms like bagging work better here? 
I think the motivation and necessity of collective prediction could be improved. I didn’t clearly figure out why the proposed method could keep communication at a minimum as claimed in the paper. Could the authors give more detailed explanations? The explanation of co-training in line 144 is not that convincing. As far as I know, dispersed and heterogeneous data on multiple agents cannot be simply viewed as multi-view data. Could the author give more explanations? Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful comments and feedback. Regarding the weaknesses pointed out by the reviewer, we offer the following clarifications: **W1 Information leakage**: This is an interesting question. As you point out, exchanging predictions may still leak sensitive information about the model’s training data. However, compared to the conventional ways in collaborative learning, such as sharing model parameters or gradients, predictions contain much less information. Further, because our scheme exchanges information through model queries, it is amenable to standard query perturbation approaches that guarantee differential privacy. Crucially, the number of queries each agent is willing to answer is entirely in their own control, which poses a significant advantage. We will add a discussion of privacy considerations. In the worst case, privacy constraints would limit the number of outer rounds that can be performed (we further attached a plot of the number of outer rounds versus accuracy in the **global response PDF**). Studying this tradeoff between privacy and the progress we can make through information exchange more formally could be an interesting extension for future work. **W2 Comparison to other ensemble algorithms**: Ensemble algorithms like bagging would put equal weights on each agent’s predictions, which is equivalent to our naive trust scheme. This equal weighting will suffer from low-quality agents that possess weak models or bad-quality data; please refer to Figure 2 and Table 1 for supporting experimental results. This also motivates the design of our collaborative learning algorithm, which is able to benefit from each agent’s local expertise and can learn to upweight high-quality agents and downweight low-quality agents in the consensus. 
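To illustrate how a cosine-similarity-based trust weighting departs from the equal (bagging-like) weighting discussed above, here is a minimal sketch; the function `trust_weighted_consensus` and its exact update rule are our own simplification for illustration, not the paper's algorithm.

```python
import numpy as np

def trust_weighted_consensus(preds, n_iters=10):
    """preds: (N, n_S, C) array of N agents' class-probability predictions
    on the shared unlabeled set of size n_S with C classes.
    Returns the consensus pseudo-labels (n_S, C) and trust weights (N,)."""
    N = preds.shape[0]
    w = np.full(N, 1.0 / N)  # start from equal (bagging-like) weights
    for _ in range(n_iters):
        consensus = np.tensordot(w, preds, axes=1)  # (n_S, C) weighted average
        # cosine similarity of each agent's flattened predictions to consensus
        flat = preds.reshape(N, -1)
        c = consensus.ravel()
        sims = flat @ c / (np.linalg.norm(flat, axis=1) * np.linalg.norm(c) + 1e-12)
        w = np.clip(sims, 0.0, None)
        w = w / w.sum()  # renormalize trust weights
    return consensus, w

# two agreeing agents and one agent with flipped predictions
preds = np.array([
    [[1.0, 0.0], [0.0, 1.0]],
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.0, 1.0], [1.0, 0.0]],
])
consensus, w = trust_weighted_consensus(preds)
```

Starting from equal weights, agents whose predictions align with the emerging consensus get upweighted, while the agent with flipped predictions gets downweighted, which is the qualitative behavior the rebuttal describes.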
**W3 Minimum communication costs**: Regarding our claim that “the proposed method could keep communication at a minimum”, we position ourselves in the modern deep learning regime where the size of model parameters comes in millions or billions. Compared to exchanging model parameters/gradients, which induces a communication complexity per global round of $\mathcal{O}(N \times |params|)$, exchanging model predictions induces a communication cost of $\mathcal{O}(N^2 \times n_S \times C)$, where $N$ stands for the number of agents, $n_S$ stands for the size of the shared dataset and $C$ denotes the number of classes. The latter is significantly smaller. We will make the meaning of the claim more precise in this sense. **W4 Multi-view explanation**: Thanks for pointing this out. In our setting, we are not talking about combining models with different feature views. Instead, each agent contributes their knowledge covering different regions of the target domain, and thus combining their knowledge wisely leads to a better model over the full region of the target domain. We will make sure this is made more explicit. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I would like to retain my evaluation and keep my score.
Summary: This paper proposes a collaborative learning method that leverages unlabeled auxiliary data to facilitate the exchange of expertise among agents. The method adaptively weights the influence of each collaborator on the pseudo-labels until a consensus on how to label the auxiliary data is reached. The authors demonstrate that their collaboration scheme significantly boosts individual model's performance with respect to the global distribution, compared to local training. They also show that their method is particularly effective in scenarios where the intrinsic beliefs of individuals counterbalance the averaging process and yield a diversity of opinions. Strengths: 1. The proposed approach utilizes unlabeled auxiliary data to enhance the exchange of expertise among agents, resulting in a significant improvement in individual model performance when compared to local training with respect to the global distribution. 2. The method dynamically assigns weights to each collaborator's influence on the pseudo-labels, iteratively reaching a consensus on how to label the auxiliary data. This adaptive weighting effectively detects and mitigates the negative impact of poor models on the collective performance. 3. The authors demonstrate the method's efficacy, particularly in scenarios where individuals hold diverse beliefs that counterbalance the averaging process. This diversity of opinions leads to improved performance in heterogeneous environments, showcasing the method's potential in such situations. Weaknesses: 1. The paper lacks a comprehensive comparison with other recent state-of-the-art collaborative learning methods, making it challenging to evaluate the relative performance of the proposed method. Including such comparisons would enhance the understanding of its strengths and weaknesses. 2. The paper would benefit from a more in-depth analysis of the computational complexity of the proposed method. 
This analysis would shed light on its scalability to large-scale datasets or complex models, which is crucial for practical implementation. 3. The paper assumes the trustworthiness and non-malicious nature of all agents, which may not be realistic in real-world scenarios. To address this limitation, a more robust trust weighting scheme capable of handling malicious agents should be considered, ensuring the method's applicability in diverse environments. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can you provide more details on the computational complexity of the proposed method? How does it scale to large-scale datasets or complex models? 2. How does the proposed method compare to other state-of-the-art collaborative learning methods in terms of performance? Can you provide a comprehensive comparison? 3. How robust is the trust weighting scheme to malicious agents? Have you considered scenarios where some agents may intentionally provide incorrect information? 4. Have you tested the proposed method on real-world datasets? If so, can you provide some examples and discuss the results? 5. How sensitive is the proposed method to the choice of hyperparameters? Have you conducted a sensitivity analysis to assess the impact of different hyperparameters on performance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper briefly mentions some potential limitations of the proposed method, such as the lack of a comprehensive comparison with other state-of-the-art methods and the assumption of trustworthy and non-malicious agents. However, the authors do not provide a detailed discussion of these limitations or potential negative impacts of their work. 
While the paper does provide some insights into the strengths and weaknesses of the proposed method, a more thorough analysis of the limitations and potential negative impacts would be desirable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive feedback and insightful questions. We address the raised questions below: **Q1**: Compared to typical decentralized algorithms where model params/gradients are communicated, our method introduces extra computational complexity: the calculation of pairwise trust scores and the computation brought by including the augmented dataset in training. The first is a matrix computation that can be made efficient on a GPU, and the extra computing time mainly comes from the augmented dataset, which makes training 1.4x longer in our experiments with one V100 GPU. However, we want to emphasize that our method _greatly reduces communication complexity_, which is the main bottleneck in decentralized training. “In federated optimization communication costs dominate — we will typically be limited by an upload bandwidth of 1 MB/s or less” [3]. A conventional collaborative learning algorithm communicates model params/gradients, which come at a size of millions or billions, and the communication complexity per global round is $\mathcal{O}(N \times |params|)$. In contrast, we communicate model predictions, with complexity $\mathcal{O}(n_S \times C \times N^2)$ per global round, where $N$ stands for the number of agents, $n_S$ stands for the size of the shared dataset and $C$ denotes the number of classes. It is clear that this value does not scale up with more complex models, and is much smaller than the model size. **Q2**: Based on your comment, we further included two SOTA (and classic) methods that were designed to address statistical heterogeneity in federated learning: FedDyn [R1] and SCAFFOLD [R2]. SCAFFOLD uses variance reduction to correct for the “client drift” in its local updates, and FedDyn designs a dynamic regularization term to ensure the alignment of global and device solutions. In our first scenario, where the same model architecture is applied, we did a more comprehensive comparison. 
Please note that the training loss of SCAFFOLD and FedDyn saturates when the number of local epochs is set to 5, which is our default setting. Instead, we did a quick hyperparameter search and found the optimal numbers of local epochs for SCAFFOLD and FedDyn to be 1 and 2 respectively, and we report the corresponding accuracy on our target dataset. For all the other methods, we set the number of local epochs to 5 as reported in the paper. Our proposed method still achieves the top accuracy in most of the cases. Please refer to Table 1 in the **global response PDF file**. **Q3**: In our paper we design the algorithm under the assumption that every agent communicates honestly, that is, no Byzantine workers that send intentionally incorrect information are involved. However, our method does exhibit some robustness against a typical Byzantine attack, which is label flipping (we call workers with flipped labels low-quality workers in the paper). With 2 out of 10 workers having 100% flipped labels, we did not witness a big performance drop. Additionally, here are some of our thoughts regarding your question: if malicious workers that send intentionally incorrect information are involved, then the nodes might fail to reach consensus, instead of reaching a detrimentally bad consensus, assuming a reasonable $\lambda$ is chosen. Imagine a bad consensus were reached with malicious nodes involved: then for the regular nodes, the consensus loss and the local loss would not decrease in the same direction, and thus the consensus solution is not a stationary solution. Indeed, the "personal" part of our loss adds some degree of robustness to malicious nodes. We agree that it would be interesting to investigate robustness against _intentionally_ malicious nodes more. However, we would like to keep the focus of our work on the new protocol of information exchange, and leave robustness extensions for future work. 
**Q4**: Yes, Fed-ISIC-2019 [30, 31, 32] is a real-world dataset from the healthcare domain. It is a dermoscopic dataset collected from 6 different hospitals, and a benchmark dataset from the Flamby paper [33]. Our method shows consistent success in the classification task. Please further refer to Figure 1 and Figure 3(c) for more details. **Q5**: We offer a sensitivity analysis of how the performance scales with $\lambda$ and the number of local epochs. Accuracy in the target domain versus different choices of the hyperparameter $\lambda$ is plotted in Figure 8 in the appendix. The influence of the number of local epochs is presented in Figure 6 in the main text. Is there another study you would be interested in? [R1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama. _Federated Learning Based on Dynamic Regularization_. ICLR 2021 [R2] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh. _SCAFFOLD: Stochastic Controlled Averaging for Federated Learning_. ICML 2020 --- Rebuttal Comment 1.1: Comment: I recommend that the author incorporate this section into the main body of the revised paper. While I am inclined to give a higher score based on this addition, I believe it's crucial to consider the feedback from other reviewers as well. I have no further questions.
Summary: The paper proposes a collaborative learning approach that leverages unlabeled auxiliary data to improve individual models through consensus. The trust weighting scheme adapts to each collaborator's influence, leading to a consensus on how to label the auxiliary data. The authors demonstrate that this collaboration scheme significantly boosts individual model performance and effectively mitigates the negative impact of bad models on the collective. Overall, the paper makes a valuable contribution to the field of collaborative learning and presents a promising approach for improving model performance through consensus. Strengths: - The paper proposes a novel approach for collaborative learning through consensus on unlabeled auxiliary data. - The collaboration scheme significantly boosts individual model performance and mitigates the negative impact of bad models on the collective. - The paper provides a thorough description of the algorithm and its implementation. - The experimental results demonstrate the effectiveness of the proposed approach. Weaknesses: - The paper could benefit from a more detailed analysis of the impact of different trust weighting schemes on the consensus. - The paper could benefit from a more detailed discussion of the assumptions and limitations of the trust-based iterative pseudo-labeling process. - The paper could provide more insights into the computational complexity of the proposed algorithm and potential scalability issues. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the proposed approach scale to larger datasets and more complex model architectures, and can the authors provide more insights into the computational complexity of the algorithm? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive and helpful feedback. We address your question regarding the computational complexity of our method in the following: Compared to typical decentralized algorithms where model params/gradients are communicated, our method introduces extra computational complexity: the calculation of pairwise trust scores and the computation brought by including the augmented dataset ($X_S$ and the corresponding pseudolabels) in training. The first is a matrix computation that can be made efficient on a GPU, and the extra computing time mainly comes from the augmented dataset, which makes each global round 1.35x longer in our experiments with one V100 GPU. We believe this would be less of an issue with more computing resources. However, we want to emphasize that our method greatly reduces communication complexity, which is a bottleneck in decentralized training. “In federated optimization communication costs dominate — we will typically be limited by an upload bandwidth of 1 MB/s or less” [3]. A conventional collaborative learning algorithm communicates model params/gradients, which come at a size of millions, and the communication complexity per global round is $\mathcal{O}(N \times |params|)$. In contrast, we communicate model predictions, with complexity $\mathcal{O}(n_S \times C \times N^2)$ per global round, where $N$ stands for the number of agents, $n_S$ stands for the size of the shared dataset and $C$ denotes the number of classes. It is clear that this value does not scale up with more complex models, and is much smaller than the model size.
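For concreteness, the two per-round communication costs can be compared with back-of-the-envelope numbers; all sizes below are illustrative assumptions rather than values from the paper.

```python
N = 10                 # number of agents
n_params = 11_000_000  # parameters of a ResNet-18-sized model (illustrative)
n_S = 5_000            # size of the shared unlabeled dataset (illustrative)
C = 10                 # number of classes

# exchanging model parameters/gradients: O(N * |params|)
cost_params = N * n_params
# exchanging predictions on the shared set: O(N^2 * n_S * C)
cost_preds = N ** 2 * n_S * C

print(cost_params, cost_preds, cost_params / cost_preds)  # 110000000 5000000 22.0
```

Under these assumed sizes, prediction exchange is over 20x cheaper per global round, and the gap grows with model size since `cost_preds` does not depend on `n_params`.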
Summary: In this paper, the authors consider a collaborative learning setting where agents want to improve their predictive performance on a shared target domain. The paper proposes a novel algorithm based on prediction consensus, which effectively addresses statistical and model heterogeneity in the learning process. The algorithm works by having agents pseudo-label data from the target domain. The pseudo-labels are then used to compute a trust weighting scheme, which determines how much each agent's opinion should be weighted when reaching a consensus on how to label the unlabeled data. The paper provides theoretical results showing that consensus can be reached via the algorithm and justifying the conditions under which a good consensus is achieved. Overall, the paper is a significant contribution to the field of collaborative learning. The proposed algorithm is a promising approach for improving the predictive performance of individual models in the presence of heterogeneity. Strengths: * The paper proposes a novel algorithm for collaborative learning that is based on prediction consensus. * The paper is well-written and easy to follow. The authors do a good job of explaining the motivation for the work, the proposed algorithm, and the experimental results. * The theoretical results in the paper are sound. The authors provide a rigorous analysis of the proposed algorithm and show that it can reach consensus under certain conditions. Weaknesses: * The experimental results in the paper are somewhat weak. The authors only report results on a few datasets and with a few model architectures, and the datasets seem over-simplified. It would be helpful to see more experimental results on a wider variety of datasets and with a wider variety of model architectures. This would help to give a better sense of the generalizability of the results. * Again on the experimental results, it would be great to study the effects of the hyperparameter lambda and the distance function D. 
It would be helpful to see how the performance of the algorithm is affected by different hyperparameter settings. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * Assumption 1 states that there is no concept shift between the local data distributions. However, it seems to be a quite strong assumption. If there is no concept shift, then the problem of collaborative learning becomes much easier. * Depending on the choice of lambda, is it possible that the algorithm converges to a point where although the local models reach consensus, the consensus is not ideal? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing helpful feedback, which we genuinely appreciate. Regarding the comment concerning the over-simplicity of our chosen datasets, it is important to highlight that we have, in fact, incorporated a challenging dataset known as Fed-ISIC-2019 [30, 31, 32] from a real-world use case. This dataset comprises dermoscopic images collected from six different hospitals, thereby adding complexity to the experiments. Additionally, we employed the Cifar10/100 datasets with Dirichlet distributed splits, which are standard in collaborative learning experiments. The consistent performance observed on these datasets serves to demonstrate the potential practical benefits of our algorithm. **Q1**: In the collaborative learning setting, statistical heterogeneity remains an issue even without concept shift [5,6]. This arises due to the non-IID nature of data distributed across agents, causing each agent's local objective to deviate from the global one. Consequently, the averaged federated model may deviate from the global optima. Our algorithm effectively addresses scenarios with highly non-IID data. Furthermore, beyond concept shift, the presence of low-quality nodes (nodes with bad-quality data or weak model architectures) can impede the learning process, and our algorithm is specifically designed to address this concern. **Q2**: Yes, one could imagine with a large enough $\lambda$, such that every agent’s goal is just to reach consensus, then any consensus could be a minimizer to the optimization problem. We showed a plot of accuracy on $X_S$ versus the choice of $\lambda$ in the appendix (see Figure 8). We found that accuracy increases and then decreases again as $\lambda$ is increased and the optimal value is around 0.5. We use this value for our experiment without further hyperparameter tuning.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort spent giving comments and feedback on the paper, and the ACs for their help in the reviewing process. In addition to the separate responses, we have added a table and a graph to our global response PDF to better support our arguments experimentally. We kindly request your consideration of these additions for enhanced clarity and understanding. Pdf: /pdf/23a2dd281ba49c5ce6092fe084361a2464924d3a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Graph Convolutional Kernel Machine versus Graph Convolutional Networks
Accept (poster)
Summary: The paper presents a framework called graph convolutional kernel machine (GCKM) for graph-based machine learning. GCKMs are built upon kernel functions integrated with graph convolution. Within the framework of GCKM, the authors propose GCKSVM for node classification, GCKSC for node clustering, and extensions for graph-level learning. The experiments show that GCKMs have at least competitive accuracy compared to GCNs. Strengths: 1. The paper presents a framework, GCKM, for graph-based machine learning. As far as I am concerned, this is a novel framework. Compared to GCNs, GCKMs are easier to train, are guaranteed to obtain globally optimal solutions, and have strong generalization ability. 2. The authors provided a generalization bound for GCKSVM, which justifies the advantage of GCKSVM over KSVM. 3. The authors provided GCKM extensions to node clustering, graph classification, etc. They also provided fast feature transformation for GCKM. 4. The numerical evaluation is sufficient, showing that the proposed methods are at least as effective as GCNs. Weaknesses: Two minor issues are as follows. 1. In Section 3.3, the extension GCKPCA hasn’t been evaluated. 2. Some important results are in the supplement rather than the main paper. The authors may reorganize the paper and show the numerical justification for Theorem 1 and the experiment of graph-level learning in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In line 171, it is claimed “GCKM with Gaussian kernels can be easily generalized to multi-layer cases”. Could the authors provide a recursive or explicit formulation of deep GCKM? Is there a rule of thumb to determine the number of layers for deep GCKM? 2. In line 204, the formulation “...has tiny influences on the training...and the spectral norm of $[K_{ij}^{(L)}]_{i,j\in V}$...” is a little bit confusing. 
The matrix will become smaller if there are fewer support vectors and a smaller matrix usually has a smaller spectral norm. 3. In Eq. (18) or Eq. (16), the graph-level feature is obtained as the sum of nodes’ features. Is there any explanation for this operation? How about using the mean or min/max of nodes’ features as graph feature? 4. At the end of Section 3.4, I suggest the authors highlight the computational complexity in comparison to the implicit feature transformation method. 5. In the first column of Figure 3, why are the decision boundaries of SGC in the second and third plots are not nonlinear? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Weakness 1:** Thanks for your constructive advice. We have added a visualization experiment for GCKPCA, which is shown in the attached **PDF**. To be specific, we first map the node features to 2-D space by PCA, Graph-regularized PCA (GPCA) [1, 2] and GCKPCA, then further map them to 32-D space and leverage t-SNE to obtain the 2-D results. Figure 2 illustrates the comparison of the mapping results of the three methods. It can be observed that the figures of GCKPCA and GCKPCA + t-SNE both show the best separability between different classes, and GPCA performs slightly better than PCA. [1] Zhang, Zhenyue, and Keke Zhao. Low-rank matrix approximation with manifold regularization. TPAMI 2012. [2] Jiang, Bo, et al. Graph-Laplacian PCA: Closed-form solution and robustness. CVPR 2013. **Reply to Weakness 2:** Thanks for this helpful suggestion. We have adjusted our paper and will put these results on the additional page if accepted. **Reply to Question 1:** Thank you for raising this. To obtain the kernel matrix, we have the following recursive formulation: $$ \begin{equation} {K}^{(l+1)}\_{i,j} = \exp\left(-\frac{(\hat{\mathbf{A}}^{q})\_{i} \mathbf{K}^{(l)} (\hat{\mathbf{A}}^{q})^{\top}\_{i} - 2 (\hat{\mathbf{A}}^{q})_{i} \mathbf{K}^{(l)} (\hat{\mathbf{A}}^{q})^{\top}\_{j} + (\hat{\mathbf{A}}^{q})\_{j} \mathbf{K}^{(l)} (\hat{\mathbf{A}}^{q})^{\top}\_{j}}{2\sigma\_{l+1}^{2}}\right). \end{equation} $$ where we define $\mathbf{K}^{(0)} = \mathbf{X} \mathbf{X}^{\top}$ in particular. 
For convenience, it can also be rewritten in matrix form: $$ \begin{align} \begin{cases} \bar{\mathbf{K}}^{(l+1)} = \hat{\mathbf{A}}^{q} \mathbf{K}^{(l)} (\hat{\mathbf{A}}^{q})^{\top}, \\\\ \mathbf{K}^{(l+1)} = \exp\left(-\frac{\mathbf{d}\_{\bar{\mathbf{K}}^{(l+1)}} \mathbf{1}\_{n}^{\top} + \mathbf{1}\_{n} \mathbf{d}\_{\bar{\mathbf{K}}^{(l+1)}}^{\top} - 2 \bar{\mathbf{K}}^{(l+1)}}{2\sigma^{2}\_{l+1}}\right), \end{cases} \end{align} $$ where $\mathbf{d}_ {\bar{\mathbf{K}}^{(l+1)}} = [\bar{K}_ {11}^{(l+1)}, \bar{K}_ {22}^{(l+1)}, \ldots, \bar{K}_ {nn}^{(l+1)}]^{\top}$ is the column vector of diagonal entries of $\bar{\mathbf{K}}^{(l+1)}$. We have also provided an experiment discussing the depth of GCKM in Figure 1. It shows that a 2-6 layer GCKM performs best; a deeper model requires more hyperparameters to be tuned. Besides, GCKM is not a neural network and thus benefits less from depth, and we can also increase the power of $\hat{\mathbf{A}}^{q}$ to reach more hops of neighborhood information within a layer. **Reply to Question 2:** Sorry for the misleading formulation. We meant that for a fixed number of support vectors, the graph convolution has a small influence on the spectral norm. But in reality, the graph convolution operation reduces the number of support vectors, the matrix becomes smaller, and this leads to a smaller spectral norm. **Reply to Question 3:** Thank you for pointing this out. It is a widely adopted operator in graph-level tasks, called a ReadOut function. The min/max, mean, and sum of nodes' features are all ReadOut functions, but [1] proved that sum has more powerful expressive ability than other ReadOut functions, and we follow their setting in choosing it. In practice, we also find that sum is better than other ReadOut functions. **Reply to Question 4:** Thanks for your helpful comment; we have revised our manuscript and highlighted this. 
**Reply to Question 5:** The decision boundaries of SGC are in fact all nonlinear (because of the $\hat{\mathbf{A}}^q$) but approximately linear. The key idea of SGC is removing nonlinearities and collapsing weight matrices, so the forward computation is simply formulated as $$ \begin{equation} \mathbf{Z} = \hat{\mathbf{A}}^{q} \mathbf{X} \mathbf{W}. \end{equation} $$ With a single learnable weight matrix and without activation functions, SGC's decision boundaries are all approximately linear, and the graph convolution provides only slight nonlinearity to SGC. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and some of my concerns have been addressed. I tend to accept this paper. --- Reply to Comment 1.1.1: Title: Thanks for the feedback Comment: We thank you very much for recognizing our work.
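As an illustration of the multi-layer kernel recursion given in the Reply to Question 1 above, a minimal NumPy sketch might look as follows. All variable names and the toy inputs are illustrative, not taken from the paper's code; the only assumptions are the matrix-form recursion itself ($\mathbf{K}^{(0)} = \mathbf{X}\mathbf{X}^{\top}$, then a Gaussian layer per bandwidth $\sigma_l$):

```python
import numpy as np

def gckm_kernel(X, A_hat, sigmas, q=1):
    """Multi-layer Gaussian graph kernel sketch: K^(0) = X X^T, then per layer
    Kbar = A^q K (A^q)^T and K_ij = exp(-(Kbar_ii + Kbar_jj - 2 Kbar_ij) / (2 sigma^2))."""
    Aq = np.linalg.matrix_power(A_hat, q)
    K = X @ X.T                       # K^(0)
    for sigma in sigmas:              # one Gaussian layer per bandwidth
        Kbar = Aq @ K @ Aq.T
        d = np.diag(Kbar)             # squared norms of the aggregated feature maps
        K = np.exp(-(d[:, None] + d[None, :] - 2.0 * Kbar) / (2.0 * sigma ** 2))
    return K
```

Since $\bar{K}_{ii} + \bar{K}_{ii} - 2\bar{K}_{ii} = 0$, each layer's output has a unit diagonal and is symmetric by construction, which is a quick sanity check on any implementation of the recursion.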
Summary: This paper presents a kernel-based message-passing framework for graph convolutional networks called GCKM. The authors demonstrate that GCKM is computationally efficient with stable performance on both node classification and graph classification tasks. Theoretical analyses of GCKM are provided. Strengths: 1. GCKM modifies the general GNN by replacing the trainable parameters and non-linear activation functions with kernels (e.g., the RBF Gaussian kernel). Classification is then done using a well-established SVM. The proposed method significantly reduces the running time. 2. Theoretical analyses are provided. Weaknesses: 1. In Section 3.4, the authors claim that their method can be applied to very large graph datasets. What does “large” mean? In the experiments, only small graph datasets (Cora, Citeseer, and PubMed) were used. How about relatively large datasets like Reddit or the OGB datasets? 2. The value/insight of Theorem 1 is unclear. How can one use it to guide the design of GCKM models? How close is the theoretical bound to experimental observations? How can this address the challenges (like over-smoothing or improving model performance)? 3. Previous work [1] already conducted an extensive exploration of applying kernel methods to GNNs; the authors need to discuss or compare their method with it. 4. The settings of the models used to produce Figures 1 and 2 (e.g., the number of layers and hidden dimensions, especially for the deep GCNs) are not specified. 5. The abstract claims "GCKMs are guaranteed to obtain globally optimal solutions and have strong generalization ability and high interpretability." If true, should GCKM produce the best results on all datasets? Interpretability needs to be elaborated. [1] Chen, Dexiong, Laurent Jacob, and Julien Mairal. "Convolutional kernel networks for graph-structured data." International Conference on Machine Learning. PMLR, 2020. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. 
What is the memory requirement of GCKM compared to other methods? 2. The over-smoothing problem in GCNs and other types of GNN models is caused by stacking multiple message-passing layers, so that eventually every node shares similar embeddings. In Formula (5), a single GCKM layer is constructed by applying graph convolution several times and using a pre-defined Gaussian kernel function as the ReadOut function. How does this architecture address the over-smoothing problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Not sure if the choice/design of kernel may cause any biases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Weakness 1:** Thanks for your valuable suggestions. We added a comparison experiment on the OGB dataset Arxiv with about 160k nodes. Table 1 (in the attached **PDF**) demonstrates that GCKM is still competitive with these GNNs on large-scale datasets. Although the complexity of GCKM is quadratic, we provide an efficient variant GCKM-E in Section 3.4 that leverages the random Fourier feature to explicitly derive a low-dimensional output instead of a kernel matrix. With this output, we can apply fast linear methods (e.g. linear SVM) for downstream tasks, which is still more efficient than GNNs, which need a lengthy training process. **Reply to Weakness 2:** The theorem shows that the generalization error bound is linear in $\vert\mathcal{V}\vert$ and $\big\\|[K^{(L)}\_{ij}]\_{i,j\in\mathcal{V}}\big\\|\_{\text{spec}}$. We showed in Table 4 of the Appendix (an excerpted part is in **Table 3 of the global rebuttal PDF file**) that the graph convolution can reduce both $\vert\mathcal{V}\vert$ and $\big\\|[K^{(L)}\_{ij}]\_{i,j\in\mathcal{V}}\big\\|\_{\text{spec}}$, and thus lead to a tighter error bound. A tighter generalization error bound means that the test error is potentially smaller. In other words, graph convolution improves the model performance. **Reply to Weakness 3:** Many thanks for pointing out the reference [1] (GCKN). It is indeed an interesting paper. We have revised our manuscript and discussed this work. Here we mainly state the differences and our unique contributions: 1. Motivated by the previous studies of "graph kernel" (computing the similarity between entire graphs) and GNNs, GCKN [1] focuses on graph-level tasks and aims to connect graph kernels and GNNs. In contrast, our GCKM is motivated by the unsatisfactory performance of deep GNNs and the possibility of classical kernel machine learning. GCKM is built upon kernels rather than neural networks. 
In other words, GCKM is a general framework that can be applied to both node-level tasks and graph-level tasks. It is parallel to GNNs. 2. GCKN constructs a neural network by employing graph kernels and shows decent performance on graph-level tasks, while GCKMs are a series of graph-convolutional-kernel-based machine learning approaches and perform well on several downstream tasks. Nevertheless, due to the path kernel, GCKN can be time-consuming when the path length is long. GCKM has advantages like a globally optimal solution, faster computation, higher interpretability, and a stronger theoretical guarantee. 3. We propose an efficient variant GCKM-E to explicitly compute the node representations, which can also be easily extended. 4. We provide theoretical and empirical analyses of the graph's influence on the test error bound. 5. We now demonstrate that GCKM can perform well on the large-scale OGB dataset, can be deeper, and alleviates the so-called over-smoothing issue. [1] Convolutional kernel networks for graph-structured data. ICML 2020. **Reply to Weakness 4:** Thank you for this valuable advice. Figure 2 is the flowchart of GCKM; perhaps you mean Figure 3? For Figure 1, we set the hidden dimension to $32$ for all hidden layers of GCN, APPNP, and JKNet, while SGC only has a learnable matrix $\mathbf{W} \in \mathbb{R}^{m\times c}$ where $m$ is the input feature dimension and $c$ is the number of classes; the learning rate is selected from $\{ 1 \times 10^{-2}, 1 \times 10^{-3}, 1 \times 10^{-4}\}$ and the weight decay from $\{ 5 \times 10^{-4}, 5 \times 10^{-5}, 5 \times 10^{-6}\}$. For Figure 3, the numbers of layers are $2$ and $8$ for GCN and APPNP respectively, the hidden dimensions are $32$, and the learning rate is set to $1 \times 10^{-3}$ and the weight decay to $5 \times 10^{-5}$ for all models. **Reply to Weakness 5:** Thank you for raising these points. We made the following claims, with the corresponding reasons: 1. 
The model can achieve a **globally optimal solution** because the optimization problem is convex. 2. Theorem 1 and the experiments demonstrate good generalization ability. 3. The model has higher interpretability because the support vectors form the decision boundary. These three statements about GCKM are the potential reasons for GCKMs' good performance. They do not mean that GCKMs should produce the best results on all datasets in comparison to other methods. Here, optimality means that, in terms of optimization, GCKM can obtain its optimal solution given the current architecture and hyperparameters on a specific dataset. **Reply to Question 1:** The space complexities of GCKM and GCKM-E are $\mathcal{O}(n^2)$ and $\mathcal{O}((n+d)m)$ respectively, while a typical GNN, like GCN, requires $\mathcal{O}((n+d')m)$ if the edges are sparse, where $n$ is the number of nodes, $m$ is the input feature dimension, and $d$ and $d'$ are the hidden dimensions of GCKM-E and the GNN respectively. Although GCKM has a high memory requirement when the graph is large, we can use GCKM-E paired with a linear method (e.g. linear SVM) to reduce the complexity. **Reply to Question 2:** Thanks for pointing this out. The ReadOut function is only adopted in the graph-level variant, and the over-smoothing problem is mainly discussed in the context of node-level tasks, so we analyze this phenomenon on the node classification task. We have supplemented an experiment on this issue in **Figure 1 of the global rebuttal PDF file**, which revealed that **GCKM can be deeper and alleviates the over-smoothing issue**. Due to the character limit, **please refer to the Reply to Question 1 in response to Reviewer dP1e for detailed analyses.** **Reply to Limitation 1:** Thank you for pointing out this limitation. 
We have provided a complementary experiment on various kernel functions in **Table 2 of the global rebuttal PDF file**, including the 2nd-order polynomial kernel, sigmoid kernel, and Laplacian kernel. All the kernel functions show decent performance, and the Gaussian kernel performs the best. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing additional information, including experiments. I raised my score. --- Reply to Comment 1.1.1: Title: Many thanks Comment: We sincerely thank you for recognizing our work and increasing the score.
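The GCKM-E idea mentioned in the Reply to Weakness 1 above — replacing the implicit Gaussian kernel with an explicit low-dimensional output so that fast linear methods can be applied — rests on random Fourier features. A minimal sketch of the standard Rahimi–Recht construction follows; this is an illustration of the general technique under that assumption, not the paper's implementation:

```python
import numpy as np

def random_fourier_features(H, dim, sigma, seed=0):
    """Map rows of H to `dim`-dimensional features Z such that Z @ Z.T
    approximates the Gaussian kernel exp(-||h_i - h_j||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    n, d = H.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, dim))   # frequencies ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=dim)        # random phases
    return np.sqrt(2.0 / dim) * np.cos(H @ W + b)
```

With such an explicit feature matrix `Z`, a linear SVM on `Z` approximates a Gaussian-kernel SVM while storing $\mathcal{O}(n \cdot \text{dim})$ entries instead of the full $\mathcal{O}(n^2)$ kernel matrix, which matches the efficiency argument made in the reply.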
Summary: This paper proposes a new support vector machine approach for graph learning called graph convolutional kernel machine. GCKM combines traditional kernel functions with graph convolution, and shows good performance in both node- and graph-level tasks. A generalization bound is also provided for the approach. Strengths: 1. The proposed GCKM is simple yet effective. It is impressive that an SVM-based approach can perform on par with deep GNN models in both node- and graph-level tasks. 2. A generalization bound is provided for the proposed GCKM, which partially explains how the graph structure in the construction of the kernel function benefits generalization performance. 3. Many variants are developed for applications in different scenarios. Weaknesses: 1. Though many datasets and tasks are considered in the paper, all datasets are quite small, with the largest one consisting of only 20k nodes. While the paper claims GCKM can be "extended to large-scale data” by using low-rank approximation tricks, no experimental results on large datasets are provided. Additionally, I would also suggest the authors put some results in supplementary experiments into the main text, as cora/citeseer/pubmed are somewhat outdated and insufficient to reflect the effectiveness of the proposed approach. 2. The explanation of how the graph structure affects the generalization of GCKM is somewhat vague. Specifically, I am still confused about why “graph structure significantly improved the quality of the kernel matrix” and how it affects the generalization bound. 3. Literature coverage could be improved. Some prior works have also analyzed how the graph structure in the construction of the kernel function affects generalization in graph [1] and node [2] tasks, and some others adopted kernel methods for graph learning, e.g. [3,4], which are related to this work. 4. As the authors state in the conclusion, “we did not systematically test other kernel functions.” 
[1] Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels, NeurIPS 2019 [2] Graph Neural Networks are Inherently Good Generalizers Insights by Bridging GNNs and MLPs, ICLR 2023 [3] Convolutional Kernel Networks for Graph-Structured Data, ICML 2020 [4] KerGNNs: Interpretable Graph Neural Networks with Graph Kernels, AAAI 2022 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. How does the GCKM perform on larger datasets, e.g., the Open Graph Benchmark? Since the complexity of GCKM is quadratic, is it still more efficient than GNNs on those larger datasets? 2. Why does “graph structure significantly improved the quality of the kernel matrix” hold, and how does it affect the generalization bound? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See limitations in weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Weakness 1/Question 1:** Thanks for your constructive suggestions. We have added a comparison experiment on the OGB dataset Arxiv with about 160k nodes. Table 1 (see the attached **PDF**) demonstrates that GCKM is still competitive with these GNNs on large-scale datasets. Although the complexity of GCKM is quadratic, we provide an efficient variant GCKM-E in Section 3.4 that leverages the random Fourier feature to explicitly derive a low-dimensional output instead of a kernel matrix. With this output, we can apply fast linear methods (e.g. linear SVM) for downstream tasks, which is still more efficient than GNNs, which need a lengthy training process. **Reply to Weakness 2/Question 2:** Thank you for the valuable comment. Theorem 1 theoretically demonstrates the connection between the generalization bound and the graph structure, and the experiments further provide evidence. The results are recorded in Table 4 of the Appendix; here we excerpt part of the table in **Table 3 of the global rebuttal PDF file**. We considered the following three cases for a comprehensive comparison. 1. **non-graph-convolution** We replaced the affinity matrix $\hat{\mathbf{A}}$ in GCKM with an identity matrix $\mathbf{I}_n$, which means the graph structure is not used. 2. **strongly connected graph** $\ \hat{\mathbf{A}}$ in GCKM is replaced by $\mathbf{1}\_{n \times n}-\mathbf{I}\_{n}$, which means every node is connected with all other nodes. 3. **normal GCKM** With fixed $\lambda$, graph convolution significantly reduces the number of support vectors and the spectral norm of the kernel matrix of support vectors. According to Theorem 1, the upper bound (linear in $\vert\mathcal{V}\vert$ and $\big\\|[K^{(L)}\_{ij}]\_{i,j\in\mathcal{V}}\big\\|\_{\text{spec}}$) of the test error can be reduced compared to using a non-graph-convolution kernel. Thus, we can conclude that the graph structure significantly improves the quality of the kernel matrix. 
**Reply to Weakness 3:** Thank you for raising this problem. We have revised our manuscript, discussing the differences and citing these related works. **Reply to Weakness 4:** Thank you for pointing out this limitation. We have provided a complementary experiment on various kernel functions in **Table 2 of the global rebuttal PDF file**, including the 2nd-order polynomial kernel, sigmoid kernel, and Laplacian kernel. All the kernel functions show decent performance, and the Gaussian kernel performs the best. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Some of my concerns are addressed, and I find the extension to large graphs particularly valuable. Moreover, while it may not be a central concern, I am still confused why "graph convolution significantly reduces the number of support vectors". Is this claim provable or just an empirical observation? If this claim lacks support from a theorem, it would be beneficial to clarify this point in the main text. Overall, I appreciate the simplicity and effectiveness of the proposed approach, and will keep the score as 6. --- Reply to Comment 1.1.1: Title: Authors' feedback Comment: Thank you very much for the comment. "Graph convolution significantly reduces the number of support vectors" is an empirical observation. Since the number of support vectors is data-dependent, we cannot prove it theoretically unless we make assumptions about the data. Here we prove the claim theoretically based on the following assumption: **Assumption:** Convolution with graph $G$ increases the inner product between the kernel feature maps of samples in the same class and reduces or does not change the inner product between the kernel feature maps of samples in different classes. This is a reasonable assumption because a useful graph should make the samples from different classes more distinguishable or at least make the samples from the same class more similar. 
Let $\varphi$ and $\varphi_G$ be the kernel feature map without and with graph convolution respectively. Recall the Lagrangian dual problem: \begin{equation} \qquad\mathop{\text {max}}_ {\mathbf{c}} ~~\sum_{i=1}^n c_i-\frac{1}{2} \sum_ {i=1}^n \sum_ {j=1}^n c_ ic_ jy_ iy_ j{\varphi(\mathbf{x}_ i)}^{\top} {\varphi(\mathbf{x}_ j)}\qquad \text {s.t.} \sum_{i=1}^n c_ i y_ i=0, ~0 \leq c_ i \leq \frac{\lambda}{n}. \end{equation} For convenience, we let $\mathcal{L}(\mathbf{c}):=\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n c_ic_jq_{ij}-\sum_{i=1}^n c_i$, where $q_ {ij}=y_ iy_ j{\varphi(\mathbf{x}_ i)}^{\top} {\varphi(\mathbf{x}_ j)}$. Then the problem is equivalent to \begin{equation} \qquad\mathop{\text {min}}_ {\mathbf{c}} ~~\mathcal{L}(\mathbf{c})\qquad \text {s.t.} \sum_{i=1}^n c_ i y_ i=0, ~0 \leq c_ i \leq \frac{\lambda}{n}. \end{equation} Similarly, for the case of using the graph $G$, we let $\mathcal{L}_ {G}(\mathbf{c}):=\frac{1}{2} \sum_ {i=1}^n \sum_ {j=1}^n c_ ic_ jq_ {ij}^{G}-\sum_ {i=1}^n c_ i$, where $q_{ij}^{G}=y_iy_j{\varphi_G(\mathbf{x}_i)}^{\top} {\varphi_G(\mathbf{x}_j)}$. According to the previous assumption, we have: * if samples $i$ and $j$ are in the same class, $\varphi_G(\mathbf{x}_i)^\top\varphi_G(\mathbf{x}_j)>\varphi(\mathbf{x}_i)^\top\varphi(\mathbf{x}_j)$ and $y_iy_j=1$; * if samples $i$ and $j$ are in different classes, $\varphi_ G(\mathbf{x}_ i)^\top\varphi_ G(\mathbf{x}_ j)\leq\varphi(\mathbf{x}_ i)^\top\varphi(\mathbf{x}_ j)$ and $y_ iy_ j=-1$. Therefore, we can write: $$\qquad q_{ij}^G=q_{ij}+\epsilon_{ij}, \text{ where } \epsilon_{ij}\geq 0~\forall (i,j)\in[n]\times[n].$$ For convenience, let $\bar{\epsilon}_ i=\min_ {j}\epsilon_ {ij}$ and $\tilde{\epsilon}=\min_ {i}\bar{\epsilon}_ {i}$. 
We have \begin{equation} \begin{aligned} \mathcal{L}_ {G}(\mathbf{c}):=&\frac{1}{2} \sum_ {i=1}^n \sum_ {j=1}^n c_ ic_ j(q_ {ij}+\epsilon_ {ij})-\sum_ {i=1}^n c_ i\\\\ \geq&\frac{1}{2} \sum_ {i=1}^n \sum_ {j=1}^n c_ ic_ jq_ {ij}+\frac{1}{2} \sum_ {i=1}^n \bar{\epsilon}_ i\sum_ {j=1}^n c_ ic_ j-\sum_ {i=1}^n c_ i\\\\ =&\mathcal{L}(\mathbf{c})+\frac{1}{2} \sum_ {i=1}^n \bar{\epsilon}_ ic_ i\sum_ {j=1}^n c_ j\\\\ \geq&\mathcal{L}(\mathbf{c})+\frac{1}{2}\left(\tilde{\epsilon}\Vert \mathbf{c}\Vert_ 2^2+\sum_ {i=1}^n\bar{\epsilon}_ ic_ i\Vert\mathbf{c}_ {/i}\Vert_ 1\right), \end{aligned} \end{equation} where $\mathbf{c}_ {/i}=[c_ 1,\ldots,c_ {i-1},c_ {i+1},\ldots,c_ n]^\top$. It is known that the $\ell_1$-norm $\Vert\cdot\Vert_1$ is a convex relaxation of the $\ell_0$-norm $\Vert\cdot\Vert_0$, i.e., the number of nonzero elements in a vector. Denote $\mathcal{R}(\mathbf{c}):=\tilde{\epsilon}\Vert \mathbf{c}\Vert_ 2^2+\sum_ {i=1}^n\bar{\epsilon}_ ic_ i\Vert\mathbf{c}_ {/i}\Vert_ 1$. We see $\mathcal{R}(\mathbf{c})$ is very similar to the elastic net regularization and is able to induce sparsity. Actually, if we let $\kappa=\min_ i\Vert\mathbf{c}_ {/i}\Vert_ 1$, we have $\mathcal{R}(\mathbf{c})\geq \tilde{\epsilon}\Vert \mathbf{c}\Vert_ 2^2+\kappa\sum_ {i=1}^n\bar{\epsilon}_ ic_ i=\tilde{\epsilon}\Vert \mathbf{c}\Vert_ 2^2+\kappa\Vert\text{diag}(\bar{\boldsymbol{\epsilon}})\mathbf{c}\Vert_1$, where the second term is a weighted $\ell_1$-norm and also induces sparsity. Therefore, the graph convolution introduces an additional sparse regularization term $\mathcal{R}(\mathbf{c})$, which will make $\mathbf{c}$ sparser, or in other words, reduce the number of support vectors. We will form a proposition using the above result and add it to the paper. We hope that this analysis makes you more confident in our work, and we would appreciate it if you would consider raising the score.
Summary: This paper introduces a novel approach called the Graph Convolutional Kernel Machine (GCKM) for graph learning. Unlike other neural network-based frameworks, GCKM employs a graph kernel to replace the neighbor aggregation step, without any learnable parameters. The authors then build features based on the graph kernel and use an SVM for classification tasks. In general, I find the paper to be good and interesting, although it could benefit from additional experiments. I am somewhat surprised by the better performance of GCKSVM compared to GAT, considering that GAT defines the similarity of node features using learnable attention weights. Strengths: 1. Well written and clearly explained. 2. Good visualization (Figure 3) of the potential benefit of kernel methods: kernel methods offer greater interpretability compared to neural networks and provide stronger generalization guarantees. In contrast to SGC, which functions as a linear classifier based on the final node representation, GCKSVM serves as a nonlinear classifier. Therefore, it is anticipated to outperform SGC in cases where the data is not linearly separable. 3. Faster running time. Weaknesses: 1. The performance is relatively underwhelming. Table 2 reveals that GCKSC achieves the best results in only 3 cases and the second-best results in 2 cases, whereas S3GC achieves the best results in 3 cases and the second-best results in 3 cases. 2. The performance reported in this paper appears to differ significantly from the results presented in the APPNP paper (https://arxiv.org/pdf/1810.05997v6.pdf). In the APPNP paper, they reported 85% accuracy on Cora and 75% accuracy on Citeseer. The substantial difference in performance raises questions about potential differences in the experimental settings between the two papers. 
If there are indeed differences in the experimental settings, it would be important to assess whether the conclusions drawn in this paper still hold when utilizing the experimental settings from the APPNP paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors discuss the issue of over-smoothing, which raises the question of whether over-smoothing would occur when employing GCKSC with more layers. Specifically, is there a significant drop in performance when utilizing GCKSC with more than 2 layers? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Weakness 1:** Based on the analysis in our paper, GCKM aims to further build a simple paradigm for graph-oriented tasks. It can be viewed as a further simplified baseline, and its main competitors are GCN, GAE, VGAE, and other simplified models, but GCKMs have shown outstanding performance not only over baselines but also over some SOTA methods. We note that all the competitors are deep-learning-based methods while GCKMs are a series of kernel-based traditional methods. Thus, besides the competitive performance, our models also have the following advantages: 1. The model can achieve a **globally optimal solution** because the optimization problem is convex. 2. The computation is **faster** because the model does not involve forward and backward propagation in the training and inference stages. 3. The model has **higher interpretability** (e.g. the support vectors of GCKSVM form the decision boundary). 4. The model has a **stronger theoretical guarantee** (the generalization error bound is almost tight). In contrast, the optimization of GNNs is nonconvex and it is very difficult or even impossible to obtain globally optimal solutions. Moreover, it is well known that the generalization bounds of neural networks are usually exponentially dependent on the network depth [1]. Last but not least, the decision process of neural networks has much lower interpretability. [1] Norm-based capacity control in neural networks. COLT 2015. **Reply to Weakness 2:** Thank you for pointing this out. Although the dataset names are the same, they are actually **different** datasets. The original APPNP paper adopted Citeseer with 2,110 nodes and Cora-ML with 2,810 nodes, while we use Citeseer with 3,327 nodes and Cora with 2,708 nodes, following the settings of the vanilla GCN [1] (please refer to Table 1 in the Appendix of our paper and Table 1 in the APPNP paper), which is widely adopted in node classification methods [2, 3, 4]. 
There are also some recent works [3, 6, 7] that evaluated APPNP under this setting, and compared with their results, we believe ours are fair and reasonable. [1] Semi-supervised classification with graph convolutional networks. ICLR 2017. [2] Simplifying graph convolutional networks. ICML 2019. [3] Dissecting the diffusion process in linear graph convolutional networks. NeurIPS 2021. [4] Beyond low-frequency information in graph convolutional networks. AAAI 2021. [5] Dropmessage: Unifying random dropping for graph neural networks. AAAI 2023. [6] Node-wise Diffusion for Scalable Graph Learning. WWW 2023. [7] Elastic graph neural networks. ICML 2021. **Reply to Question 1:** Thanks for your insightful comment. We have supplemented an experiment (**Figure 1 of the global rebuttal PDF file**) on this issue, which revealed that **GCKM can be deeper and alleviates the over-smoothing issue**. To be specific, two situations are considered in this experiment: 1. GCKM with fixed 2 layers and varied hops of neighbors per aggregation ($q$ in Eq. (5)) $$ \begin{equation} \mathbf{H} = \phi_{(1)}( \hat{\mathbf{A}}^{q} \phi_{(0)}(\hat{\mathbf{A}}^{q} \mathbf{X})). \end{equation} $$ 2. GCKM with fixed 2 hops of neighbors per aggregation and varied layers $$ \begin{equation} \mathbf{H} = \phi_{(l)}( \hat{\mathbf{A}}^{2} \cdots \phi_{(0)}(\hat{\mathbf{A}}^{2} \mathbf{X})). \end{equation} $$ From **Figure 1 in the global rebuttal PDF file**, we have the following observations: 1. Deep GCKM performs more stably than deep GCN. 2. GCKM's performance first improves and then slightly decreases with increasing layers/hops. 3. In particular, GCKM with fixed layers performs better and decreases less. It is known that over-smoothing is caused by the aggregation step; that is, multiplying $\hat{\mathbf{A}}$ makes the representations of different nodes more and more indistinguishable. 
However, the analysis of over-smoothing does not consider activation functions and learnable weights, and it theoretically arises only when the power of $\hat{\mathbf{A}}$ tends to infinity [1], which does not match the fact that GCN collapses with only 8 layers. Recent studies [2, 3] have pointed out that the over-smoothing problem might be an artifact of theoretical analysis, and the failure of deep GNNs may not be caused solely by the over-smoothing issue in the aggregation/message-passing step. Although GCKM and GCN share a similar aggregation step, the main difference between them is the transformation step: GCN uses a linear layer to conduct explicit dimension reduction, while GCKM employs an implicit high-dimensional feature mapping. GCKM implicitly maps node features to a high-dimensional (even infinite-dimensional) space after aggregation, and there may exist an appropriate space where node representations can be distinguished. In contrast, recent studies [3] have found that the node representations produced by deep GCNs collapse to low rank and lose expressive power. The appropriate space can be found by tuning the hyperparameters of GCKM; however, the number of hyperparameters increases when building a deeper GCKM, making it time-consuming to search for this space. That may be the reason why GCKM's performance decreases slightly with too many layers and why fixing the number of layers improves the performance. [1] Deeper insights into graph convolutional networks for semi-supervised learning. AAAI 2018. [2] On provable benefits of depth in training graph convolutional networks. NeurIPS 2021. [3] Contranorm: A contrastive learning perspective on oversmoothing and beyond. ICLR 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response; I now raise my score to 6. --- Reply to Comment 1.1.1: Title: Thanks for the feedback Comment: We greatly appreciate your comments and recognition.
Rebuttal 1: Rebuttal: We would like to thank the Senior Area Chairs/Area Chairs and all the Reviewers for handling our paper and providing constructive comments. We have systematically and carefully replied to all the comments and revised our work based on the reviewers' comments. In addition, we have attached a PDF file including four experiments to support our response, and the source code can be provided if needed. The following is a summary of the main concerns and our responses: 1. **Deep GCKM and the over-smoothing issue.** We have conducted experiments (**Figure 1 of the PDF file**) considering two situations and showed that GCKM can be deeper and alleviate the over-smoothing issue. Besides, we analyzed the possible reasons in the response. 2. **Performance and efficiency on large-scale datasets.** We have provided results on the OGB dataset Arxiv with over 160k nodes (**Table 1 of the PDF file**), where GCKM is still competitive with other baseline and SOTA methods. 3. **Explanation of Theorem 1.** We have further explained Theorem 1 and put the corresponding experimental results in **Table 3 of the PDF file** (full table in the Appendix). Theoretical and empirical results revealed that the graph convolution in GCKM can improve the generalization bound. 4. **Experiments of GCKPCA.** The results are in Figure 2 of the attached PDF file. GCKPCA outperformed PCA and GPCA. Besides these, we also evaluated GCKM with different kernel functions (**recorded in Table 2 of the PDF file**) and further elaborated on the experimental settings, the discussion of related work, the effectiveness of GCKM, etc. Finally, many thanks for the positive assessments that encouraged us a lot: 1. R1: "Well written and clear explained; good visualization." 2. R2: "The proposed GCKM is simple yet effective; it is impressive." 3. R4: "This is a novel framework." Pdf: /pdf/d1e3a6cab998576dec359f965e0a4939fbf0120e.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Nearly Optimal VC-Dimension and Pseudo-Dimension Bounds for Deep Neural Network Derivatives
Accept (poster)
Summary: The paper proposed a method to estimate the Vapnik-Chervonenkis (VC) dimension and pseudo-dimension of deep neural network (DNN) derivatives with the ReLU activation function, which have important applications such as characterizing the generalization error of machine learning methods and establishing the optimal approximation of DNNs in Sobolev spaces. The authors provided theoretical analysis and proofs for their proposed method, which fills a gap in learning error estimations for many physics-informed machine learning models and applications, including solving partial differential equations, operator learning, network compression, and regularization. Another contribution of the paper is the demonstration of how DNNs can be used to approximate functions in Sobolev spaces using ReLU activation functions in a deep feedforward neural network architecture, with a nearly-optimal approximation rate. Overall, the study provides a framework for analyzing and optimizing DNNs for different applications while taking into account mathematical concepts such as VC-dimension and pseudo-dimension. Strengths: * The topic of the paper is highly relevant to the field of deep learning and offers an interesting approach to estimate VC-dimension and pseudo-dimension of derivatives of deep neural networks. * The paper is well-written and clearly presents the mathematical language and definitions used in the study. The proofs provided are detailed and structured in a logical manner. * The paper presents two important theorems that provide a solution to the approximation rate problem of DNNs in Sobolev spaces and the degree of generalization error in loss functions involving derivatives of DNNs. * The proposed approach has the potential to be applied in different areas of physics-informed machine learning such as solving partial differential equations, operator learning, and generative models. 
Weaknesses: - Some aspects of the paper could be clearer and more thoroughly explained. The introduction, for instance, could better demonstrate the main contribution of the paper and provide a more detailed overview of the state-of-the-art and the limitations of existing research. - The section on references could be more comprehensive, covering more related studies and presenting a more thorough overview of the existing literature. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Pseudo-dimension is a more general concept than VC-dimension, and the bounds in Theorem 1 and Theorem 2 seem similar to each other except for the two constants $\hat{C}$ and $\overline{C}$; is there any relationship between $\hat{C}$ and $\overline{C}$? 2. Do the results in this paper hold only for ReLU networks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for your thorough and diligent review, helpful feedback, positive remarks, and insightful summary. Reviewer's comment: "Some aspects of the paper could be clearer and more thoroughly explained. The introduction, for instance, could better demonstrate the main contribution of the paper and provide a more detailed overview of the state-of-the-art and the limitations of existing research." Response: Thanks for your kind suggestions. In the introduction (Page 2), we have added a more comprehensive discussion of the state-of-the-art and the limitations of existing research on the VC-dimension and pseudo-dimension of DNNs. We observe that most existing research in this area does not consider the derivatives of DNNs, which are crucial in the error analysis of Sobolev training. Furthermore, we note that a recent study by Duan et al. [2021] analyzed the VC-dimension and pseudo-dimension of DNN derivatives, but their results were suboptimal due to a lack of consideration for the relationships between the multiplied terms in a DNN derivative. As a result, their findings cannot be used to determine the optimal approximation error of DNNs in Sobolev training, and may only provide a generalization error that is much larger than the actual error that may arise from Sobolev training. Reviewer's comment: "The section on references could be more comprehensive, covering more related studies and presenting a more thorough overview of the existing literature." Response: We appreciate your feedback and have made additional revisions to the introduction and references. Specifically, we have added more references on Sobolev training and the estimation of VC-dimension and pseudo-dimension to the introduction and references sections. 
Reviewer's comment: "Pseudo-dimension is a more general concept than VC-dimension, and the bounds in Theorem 1 and Theorem 2 seem similar to each other except for the two constants $\bar{C}$ and $\hat{C}$; is there any relationship between them?" Response: Yes. As you can see in the proof of Theorem 2, we establish that $$\bar{C}(N+1)^2(L+1)^2\log_2 (L+1)\log_2 (N+1)\le 64\bar{C}N^2L^2\log_2 L\log_2 N,$$ since the pseudo-dimension of derivatives of DNNs with width $N$ and depth $L$ can be controlled by the VC-dimension of derivatives of DNNs with width $N+1$ and depth $L+1$. Therefore, we conclude that $64\bar{C}\ge\hat{C}$. We have added this discussion after the proof of Theorem 2 in the appendix. Reviewer's comment: "Do the results in this paper hold only for ReLU networks?" Response: We do not consider only ReLU networks. In Corollaries 1 and 2, we present the approximation results of DNNs with ReLU and square-of-ReLU activation functions, respectively. Unlike piecewise-polynomial activation functions such as ReLU and the square of ReLU, developing the method of this paper for DNNs with other activation functions can be challenging. In particular, difficulties arise not only from building DNNs that can approximate functions in Sobolev spaces but also from estimating the VC-dimension of these DNNs. This is an interesting question that requires further exploration, and we leave it as an area for future research. --- Rebuttal Comment 1.1: Comment: Thank you for addressing some of my comments. I am maintaining my score. --- Reply to Comment 1.1.1: Comment: Thank you again for your help and suggestions.
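The constant relationship in the reply above ($64\bar{C}\ge\hat{C}$) rests on the elementary inequality $(N+1)^2(L+1)^2\log_2(L+1)\log_2(N+1)\le 64\,N^2L^2\log_2 L\log_2 N$ for $N,L\ge 2$, which can be spot-checked numerically. A minimal sketch (the grid bounds are arbitrary illustrative choices):

```python
import math

def enlarged(N, L):
    # (N+1)^2 (L+1)^2 log2(L+1) log2(N+1): the bound for the network
    # enlarged to width N+1 and depth L+1 (constant factor omitted)
    return (N + 1) ** 2 * (L + 1) ** 2 * math.log2(L + 1) * math.log2(N + 1)

def absorbed(N, L):
    # 64 N^2 L^2 log2(L) log2(N): the same order with the enlargement
    # absorbed into the constant 64
    return 64 * N ** 2 * L ** 2 * math.log2(L) * math.log2(N)

# Spot-check the inequality on a grid of widths and depths N, L >= 2
assert all(enlarged(N, L) <= absorbed(N, L)
           for N in range(2, 100) for L in range(2, 100))
```

Note that the inequality needs $N,L\ge 2$: for $N=L=1$ the right-hand side vanishes because $\log_2 1=0$.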
Summary: This paper facilitates the understanding of Sobolev training and the performance of DNNs in Sobolev spaces by providing nearly optimal VC-dimension and pseudo-dimension bounds for DNN derivatives. Strengths: Technically, they improve the bounds on the VC- and pseudo-dimensions of DNN derivatives in reference [10]. Weaknesses: Though I think the theoretical contributions of this paper are great, in terms of readability there is some room for improvement. \\ First, this paper assumes that the readers are very familiar with the notions of Sobolev training. In my opinion, the authors should defer some technical proofs to the appendix, introduce the notion briefly in the main paper, and motivate readers as to why Sobolev training is an interesting problem to consider. \\ Second, some notations should be introduced before being used. I noticed the Sobolev space $W^{n,\infty}([0,1]^{d})$ first appeared in line 59, then was introduced formally in line 114.\\ Third, some sentences are repeated quite often. For instance, lines 125-127 are the same as lines 167-169. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. To my knowledge, VC-dimension and pseudo-dimension are essentially the same notion (Reference [4]). I am wondering why the results in Theorem 1 and Theorem 2 are surprising in the sense that they have the same bound. Is there any intuitive reason why it is non-trivial to expect they should be the same for DNN derivatives? \\ 2. What is the meaning of approximating functions in $W^{n, \infty}([0,1]^{d})$ with the Sobolev norm $W^{1,\infty}([0,1]^{d})$? Why is this interesting? \\ 3. How do we know the bound is optimal? To my knowledge, we commonly say that a bound is optimal when we have matching orders of the lower and upper bounds. But the authors only provide upper bounds in the paper. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work has no negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our gratitude to the reviewer for your thorough review and positive feedback. We would like to clarify that our contribution is not limited to improving the bounds on the VC- and pseudo-dimensions of DNN derivatives. Beyond that, our contributions also include: 1) establishing DNNs as effective approximators of functions in Sobolev spaces through the use of Sobolev norms, resulting in lower error rates compared to previous works; 2) demonstrating the optimality of our approach through the estimation of the VC-dimension; and 3) utilizing the pseudo-dimension to obtain the generalization error of Sobolev training in supervised learning of DNNs. All are novel results not found in other works. Reviewer's comment: "Though I think the theoretical contributions of this paper are great, in terms of readability there is some room for improvement. First, this paper assumes that the readers are very familiar with the notions of Sobolev training. In my opinion, the authors should defer some technical proofs to the appendix, introduce the notion briefly in the main paper, and motivate readers as to why Sobolev training is an interesting problem to consider. Second, some notations should be introduced before being used. I noticed the Sobolev space $W^{n,\infty}([0,1]^d)$ first appeared in line 59, then was introduced formally in line 114. Third, some sentences are repeated quite often. For instance, lines 125-127 are the same as lines 167-169." Response: Thank you for your valuable suggestions! Based on your input, we have moved the complete proofs of Theorems 1 and 2 to the appendix, while providing proof sketches in Section 5. Furthermore, we have enriched the introduction by including additional references and discussing the significance of Sobolev training. 
We have also added a brief review of a specific task focused on solving partial differential equations to provide a practical context for the application of Sobolev training. Secondly, we have included the definitions of Sobolev spaces in the introduction section to enhance readability. Finally, we have also removed the duplicated parts. Thanks for your help and suggestions again! Reviewer's comment: "To my knowledge, VC-dimension and pseudo-dimension are essentially the same notion (Reference [4]). I am wondering why the results in Theorem 1 and Theorem 2 are surprising in the sense that they have the same bound. Is there any intuitive reason why it is non-trivial to expect they should be the same for DNN derivatives?" Response: Please note that the claim in the arXiv version of Bartlett et al. [2019], stating that the VC-dimension and pseudo-dimension of DNNs are the same, does not always hold. The authors corrected this claim in the version published in the Journal of Machine Learning Research (see the reference to their paper in our reference list). Roughly speaking, the pseudo-dimension can be bounded by the VC-dimension of a larger set. The most challenging aspect of our work revolved around proving the VC-dimension of DNN derivatives. The introduction of the pseudo-dimension was necessary because the VC-dimension alone is insufficient for obtaining a generalization error estimate; we rely on the notion of pseudo-dimension to achieve this. Secondly, we were not surprised to find that the orders of the VC-dimension and pseudo-dimension are the same. What surprised us is that the orders of the VC-dimension of DNNs and of their derivatives are the same, considering the differences in the complexity of their respective structures. Reviewer's comment: "What is the meaning of approximating functions in $W^{n,\infty}$ with the Sobolev norm $W^{1,\infty}$? Why is this interesting? 
" Response: When utilizing ReLU-based DNNs to approximate functions in the Sobolev space $W^{n,\infty}$, the goal is to capture both the magnitude and the derivative of the functions. This is the essence of approximating functions in $W^{n,\infty}$ with the Sobolev norm $W^{1,\infty}$. Although it would be desirable to consider approximating functions in $W^{n,\infty}$ with Sobolev norms $W^{m,\infty}$ for $m\geq2$, one has to recognize that ReLU DNNs lack higher-order derivatives, making it impossible for them to approximate functions in Sobolev norms $W^{m,\infty}$ with $m\geq2$. As a result, when employing ReLU DNNs, the primary focus is on approximations measured within the $W^{1,\infty}$ framework, e.g., Gühring et al. [2020]. The consideration of approximating functions in $W^{n,\infty}$ with the Sobolev norm $W^{1,\infty}$ sufficiently explains the success of DNNs in Sobolev training, particularly when dealing with loss functions that only involve first-order derivatives of both the DNNs and the target functions, as mentioned in the introduction, such as solving second-order partial differential equations (PDEs) in a weak sense and penalizing function gradients in the loss functions to control the Lipschitz constant of DNNs. Therefore, this scenario is both valuable and interesting. Reviewer's comment: "How do we know the bound is optimal? To my knowledge, we commonly say that a bound is optimal when we have matching orders of the lower and upper bounds. But the authors only provide upper bounds in the paper." Response: The paper's Corollaries 3 and 4 provide lower bounds for the VC-dimension and pseudo-dimension, respectively. These lower bounds are proven to be on the order of $O(N^{2-\epsilon}L^{2-\epsilon})$ for any small $\epsilon>0$. The upper bounds in Theorems 1 and 2 are $O(N^2L^2\log_2 N\log_2 L)$. 
This means that the paper's results are considered nearly optimal because the order of the polynomials in the upper bounds cannot be further reduced without contradicting Corollaries 3 and 4. Note that there still exists a small gap between the upper bound and the lower bound, which is the meaning of "nearly optimal". --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: I have no further questions. I will raise the score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for your appreciation of our work.
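To make the "nearly optimal" gap in the reply above concrete: the ratio of the upper-bound order $N^2L^2\log_2 N\log_2 L$ to the lower-bound order $N^{2-\epsilon}L^{2-\epsilon}$ is $N^{\epsilon}L^{\epsilon}\log_2 N\log_2 L$, which for small $\epsilon$ is dominated by the logarithmic factors. A minimal sketch (the sample values are illustrative):

```python
import math

def upper_order(N, L):
    # O(N^2 L^2 log2 N log2 L): the upper-bound order (Theorems 1 and 2)
    return N ** 2 * L ** 2 * math.log2(N) * math.log2(L)

def lower_order(N, L, eps):
    # O(N^{2-eps} L^{2-eps}): the lower-bound order (Corollaries 3 and 4)
    return N ** (2 - eps) * L ** (2 - eps)

# The gap factor is N^eps * L^eps * log2(N) * log2(L); for N = L = 1024
# and eps = 0.01 it equals 1024**0.02 * 100, i.e., roughly 115
gap = upper_order(1024, 1024) / lower_order(1024, 1024, eps=0.01)
assert math.isclose(gap, 1024 ** 0.02 * 100)
```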
Summary: The authors provide estimates on two measures of statistical complexity, the VC-dimension and the pseudo-dimension, of derivatives of deep neural networks. The estimate of the VC-dimension is shown to be optimal up to logarithmic factors. They also propose a constructive method for approximating functions in Sobolev spaces by deep neural networks. The VC-dimension bound is used to show that the obtained approximation rate is optimal as a function of the width and depth of the network. Finally, they prove a generalization bound in terms of the Sobolev norm by leveraging the pseudo-dimension upper bound. Strengths: The paper is globally well-written and pleasant to read. It brings several new results regarding the statistical and approximation properties of deep neural networks in terms of Sobolev norms, which could be of broad use, in particular in the community of deep learning for PDEs. I like that most of the presented bounds have matching lower bounds. I have not checked the proofs in detail, so I cannot provide evidence on their soundness, but the mathematical statements presented in the main paper are easy to understand and unambiguous. Weaknesses: I do not have any strong reservations; a few questions are listed below. The only part of the paper that I found hard to follow is the proof of Theorem 1. I suggest that the authors take advantage of the additional page to expand a bit on the proof. Perhaps a drawing would help? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: + Line 77: you claim that the estimate of the pseudo-dimension is nearly optimal, but I do not see a lower bound in the paper. Could you provide a lower bound or at least an argument on how to obtain one, or otherwise change the phrasing of this sentence (and similar ones elsewhere in the paper)? + Line 101: I don’t understand the \leq sign. 
Otherwise, I feel the argument of lines 184-187 breaks down, since \sigma_2 networks include ReLU networks. + Line 129: the dependence of the width on the dimension d is exponential. Is this expected? Do you think that you could get a matching dependence in the lower bound? + Line 224 and Theorem 5: I think it would be clearer to upper bound your generalization error term by 2 sup_{\theta} |\Esp(R_S(\theta)) - R_D(\theta)|. Otherwise it is a bit confusing since, without further clarification, the expectation applies both to the estimator \theta_S and to the random function R_S. Similarly, in the proof of Lemma 12, I don’t think that the proof is correct as it is if you apply it for \theta_S, since \theta_S depends on the random sample. However, it is correct if you write it for any (deterministic) \theta, thereby getting the sup over \theta as in the LHS of Lemma 11, which you can then apply to get the same upper bound that you get with your proof. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our heartfelt appreciation to the reviewer for your comprehensive and diligent review, invaluable feedback, positive remarks, and insightful summary. Reviewer's comment: "I do not have any strong reservations; a few questions are listed below. The only part of the paper that I found hard to follow is the proof of Theorem 1. I suggest that the authors take advantage of the additional page to expand a bit on the proof. Perhaps a drawing would help?" Response: We greatly appreciate your kind suggestions and insights. We agree that the proof of Theorem 1 in our paper is lengthy. We have found that providing a sketch of the proof can effectively convey the main ideas to a wider range of readers. The most challenging and extensive aspect lies in the refinements necessary to obtain the partitions of the parameter spaces, which are crucial for our analysis. For this purpose, we have included a concise sketch of the proof in the main paper. This will allow readers to grasp the key concepts and logical flow without being overwhelmed by excessive details. We have moved the detailed proof of the refinements required to obtain the partitions to the appendix, for those who desire a more thorough understanding. By adopting this approach, we hope to enhance the overall readability of our paper. Reviewer's comment: "Line 77: you claim that the estimate of the pseudo-dimension is nearly optimal, but I do not see a lower bound in the paper. Could you provide a lower bound or at least an argument on how to obtain one, or otherwise change the phrasing of this sentence (and similar ones elsewhere in the paper)?" Response: We would like to express our sincere appreciation for your attentive review of our paper. We have included Corollary 4 for the lower bound of the pseudo-dimension, which demonstrates the near optimality of the pseudo-dimension estimate presented in Theorem 2. Reviewer's comment: "Line 101: I don’t understand the $\leq$ sign. 
Shouldn’t it be an equal sign? Otherwise, I feel the argument of lines 184-187 breaks down, since $\sigma_2$ networks include ReLU networks." Response: The sign $\leq$ is correct. We understand the concern regarding the presence of higher-order derivatives in DNNs that employ ReLU activation functions. While it is true that not all DNNs utilizing ReLU or square-of-ReLU activation functions have higher-order derivatives, it is worth noting that some such DNNs do. For instance, the expression $\sigma_2\circ\sigma_2\circ (\sigma_1(x)-\sigma_1(-x))$ has higher-order derivatives. In the proofs of Corollaries 1 and 2, we construct $\sigma_2$ networks that include both ReLU and square-of-ReLU networks. These constructed networks are designed to approximate functions measured by $W^{m,\infty}$ norms, which implies that they can effectively capture higher-order derivatives. Reviewer's comment: "Line 129: the dependence of the width on the dimension $d$ is exponential. Is this expected? Do you think that you could get a matching dependence in the lower bound?" Response: In this paper, we focus on the optimality of the approximation rate with respect to the width $N$ and depth $L$ of DNNs. The dimensionality $d$ is not the focus of our research. Regarding your question about mitigating the exponential dependence of the width on the dimensionality, we have observed that this arises from the use of methods like Taylor's expansion or averaged Taylor polynomials in our approximation techniques. It remains an open question for future research to explore alternative approaches that capture the dependence on $d$ in the lower bounds. Reviewer's comment: "Line 224 and Theorem 5: I think it would be clearer to upper bound your generalization error term by $2 \sup_{\theta} |E(R_S(\theta)) - R_D(\theta)|$. 
Otherwise it is a bit confusing since, without further clarification, the expectation applies both to the estimator $\theta_S$ and to the random function $R_S$. Similarly, in the proof of Lemma 12, I don’t think that the proof is correct as it is if you apply it for $\theta_S$, since $\theta_S$ depends on the random sample. However, it is correct if you write it for any (deterministic) $\theta$, thereby getting the sup over $\theta$ as in the LHS of Lemma 11, which you can then apply to get the same upper bound that you get with your proof. " Response: Thank you for your feedback. Following the suggestion, we have revised Theorem 5 and Lemma 12 to ensure clarity and avoid confusion. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to write the rebuttal. All my questions are addressed thoroughly. My rating is unchanged. > The sign $\leq$ is correct (...) Thank you for the clarification. Then I understand that the word “alone” in line 184 is crucial? I suggest expanding a bit the explanation in this paragraph to clarify why it is still acceptable to have ReLU activations appearing in the network. > In this paper, we focus on the optimality of approximation rate with respect to width $N$ and depth $L$ of DNNs. The dimensionality $d$ is not the focus (...) Thank you for the clarification. I suggest adding this discussion to the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable suggestions. We genuinely appreciate your input, and we will make the necessary additions to our paper as per your recommendations in the final version. Specifically, we will include an explanation at Line 184 regarding our use of a smooth partition of unity to eliminate the parts of ReLU-based DNNs that lack higher-order derivatives. Furthermore, we will incorporate a discussion on the dependency of the width on the dimension $d$ after presenting Theorem 3.
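As a concrete check of the composition example from the earlier reply: since $\sigma_1(x)-\sigma_1(-x)=x$, the expression $\sigma_2\circ\sigma_2\circ(\sigma_1(x)-\sigma_1(-x))$ reduces to $\max(x,0)^4$, whose derivatives are continuous up to order three. A minimal numerical sketch (function names are ours):

```python
import numpy as np

def sigma1(x):
    # ReLU
    return np.maximum(x, 0.0)

def sigma2(x):
    # squared ReLU
    return np.maximum(x, 0.0) ** 2

def f(x):
    # sigma2(sigma2(sigma1(x) - sigma1(-x))); since sigma1(x) - sigma1(-x)
    # equals x, this is max(x, 0)^4, which is three times continuously
    # differentiable (its fourth derivative jumps at 0)
    return sigma2(sigma2(sigma1(x) - sigma1(-x)))

x = np.linspace(-1.0, 1.0, 2001)
assert np.allclose(f(x), np.maximum(x, 0.0) ** 4)
```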
Summary: The main contributions of the paper are new VC-dimension and pseudo-dimension bounds for derivatives of functions implemented by deep neural networks. The utility of these bounds is demonstrated by proving the tightness of approximation error bounds in the Sobolev norm for networks with ReLU and squared-ReLU activations, and by giving an improved generalization bound in a similar setting. Strengths: **Contribution.** This is a good technical paper that improves the state of the art in the theoretical study of VC/pseudo-dimensions and approximation and generalization rates of DNNs. The focus of the paper is the setting where VC/pseudo-dimensions are estimated for model derivatives, and approximation/generalization is considered with respect to Sobolev norms. This setting is not as well explored as the more common setting where the fitted functions are assumed to belong to a Sobolev space, but the error of fitting does not involve the derivatives. The main results claimed in the paper are new VC/pseudo-dimension bounds. The other results are new approximation and generalization bounds. The VC/pseudo-dimension bounds naturally help to obtain the generalization bounds and show the tightness of the approximation bounds. It appears that all the results established in the paper improve previous analogous results in terms of giving more accurate/tight rates. **Quality and clarity.** The paper is fairly well written. All the results are precisely stated, sketches of proofs are provided where appropriate (full proofs provided in the appendix), connections between the results are well-explained, previous work is duly mentioned. Weaknesses: I don't see any major issues in the paper, but my overall impression is that it is fairly technical and lacks significant new insights. Virtually all results in the paper rely very heavily on previous research, and most of them look like they were assembled from ideas scattered across many previous publications. 
I find it hard to name new ideas that never appeared before. I would say that this paper is more suitable for a journal. I think that the claims of achievement in this paper are exaggerated. The main claim is connected with the new VC-dimension bound: "obtaining such bounds for DNN derivatives is much more difficult", "DNN derivatives consist of a series of interdependent parts...rendering existing methods for estimating bounds inapplicable". In fact, the bound for VC-dimension of derivatives given in Theorem 1 is very close to the bound for the original network function given in Theorem 7 of Bartlett et al (2019), and the proof of Theorem 1 is just a slight modification of the proof of Theorem 7 in Bartlett et al (2019). This is not surprising because the chain rule expression (13) for the derivatives of the network function is only slightly more complicated than the original function for the purpose of partitioning the parameter domain into piecewise polynomial components and estimating the degrees of the resulting polynomials, as required for the proof. The authors claim "we propose a method to achieve nearly optimal estimations of the VC-dimension and pseudo-dimension of DNN derivatives", but I don't see here any new method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the important non-technical takeaways from this paper? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's careful reading of our paper, as well as your positive feedback and helpful suggestions. The problem we address has remained important and open since Bartlett et al. established nearly tight VC-dimension bounds for neural networks themselves (not their derivatives). Even though our analysis tools are not fundamentally new, we have addressed an open problem with wide applications in the theoretical analysis of deep neural networks. We hope that the reviewer understands that it may not be necessary to address an open problem with completely novel analysis tools. The main contributions of our research are establishing nearly optimal DNN structures in Sobolev spaces and deriving nearly optimal bounds for the VC-dimension and pseudo-dimension. To the best of our knowledge, no existing work has achieved these results, despite the long-standing consideration of derivatives in DNN training, such as solving partial differential equations (PDEs) using DNNs. Our findings confirm the effectiveness of DNNs in Sobolev training and substantially improve the approximation and generalization errors compared to methodologies described in the existing literature, such as those proposed by Duan et al. (2021), De Ryck and Mishra (2022), and Jiao et al. (2023). We believe that our results are of great importance and interest to the NeurIPS community. Presentation of our paper at the conference will enable the community to quickly access our findings and to use our results in their further analyses of DNN algorithms, ultimately leading to a broader impact. This aligns with the purpose of a conference paper at NeurIPS. Moreover, establishing our theory required addressing several technical difficulties with new ideas. Firstly, one particular difficulty arises when constructing ReLU-based DNNs in Theorem 1. In the work of Lu et al. 
[2021], the target functions are approximated outside a trifling region, i.e., on the entire domain except for a small subset, and techniques such as integration and shifting, employing a middle-value function, are utilized to control the error in the small subset. However, when it comes to approximating functions in Sobolev spaces, these methods prove to be ineffective since the derivatives of DNNs tend to deteriorate in such a small subset. Secondly, we agree that while there are similarities between our method and Bartlett et al. [2019] in proving the upper bound of the VC-dimension for DNN derivatives, there are notable differences. Applying the chain rule necessitates considering the correlations between different parts of the DNNs, rather than treating them as independent components multiplied together. This correlation-based partitioning of parameter spaces is a specific and challenging aspect of our estimation process, setting it apart from previous approaches. The difficulties associated with this aspect also contribute to the suboptimality observed in the results of Duan et al. [2021]. Thirdly, our approach to proving the optimality of the VC-dimension differs from Bartlett et al. [2019] (Theorem 3). The proof of optimality for DNN derivatives is not easily generalizable using their approach. Instead, we establish the optimality of the VC-dimension estimation (Corollary 3) based on the DNN approximation results we derived within the Sobolev space (Theorem 3). Specifically, we demonstrate that if the degree of polynomials in the upper bounds of the VC-dimension for DNN derivatives in Theorem 1 could be reduced, it would become impossible to find DNNs that achieve the established approximation rate. This approach distinguishes it from Bartlett et al. [2019] (Theorem 3). While this paper primarily focuses on technical aspects, it also offers valuable non-technical insights. 1. 
DNNs outperform traditional methods: According to Theorem 1 in our paper, DNNs with $O(N^2L\log L(\log N)^2)$ parameters can achieve an error rate of $O(N^{\frac{-2(n-1)}{d}}L^{\frac{-2(n-1)}{d}})$ when measured by the norm in $W^{1,\infty}$, when approximating functions in the Sobolev space $W^{n,\infty}$. In comparison, traditional methods like finite elements require $O(N^2L^2)$ parameters to achieve the same approximation error. This shows that DNNs have a clear advantage over traditional methods in terms of approximation in Sobolev spaces, particularly in terms of the freedom of the depth parameter $L$. Notably, this result is not found in other papers that focus on DNN approximation in Sobolev spaces measured by Sobolev norms, such as Gühring et al. [2020] and Gühring and Raslan [2021], as their results are suboptimal. 2. Replacing ReLU with squared ReLU: Although ReLU-based DNNs cannot approximate functions measured by $W^{m,\infty}$ for $m \geq 2$ due to their lack of smoothness, our paper suggests a solution by replacing some ReLU activations with squared ReLU activations. This modification allows for the approximation of functions measured by higher-order Sobolev norms. Furthermore, the number of squared ReLU activations required can be very small, as shown in the proof of Corollary 1.2. 3. Generalization error and sample points: Based on Theorem 5, our findings indicate that learning target functions with loss functions defined by Sobolev norms does not require substantially more sample points than with loss functions defined by $L_2$-norms. The generalization error orders of these two types of loss functions are equivalent with respect to the width $N$ and depth $L$ of DNNs. 4. Implementation of PDE solvers with DNNs: Our findings serve as confirmation that DNNs are indeed capable of effectively solving partial differential equations (PDEs) within frameworks like Deep Ritz, Wasserstein GAN (WGAN), and Physics-Informed Neural Networks (PINN). 
Furthermore, our research substantially improves the approximation and generalization errors of existing methodologies in this field, such as those proposed by Duan et al. [2021]. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your replies, I find them generally reasonable. I still think, however, that it is not quite fair for you to write "*there are similarities between our method and Bartlett et al. [2019] in proving the upper bound of VC-dimension*". In fact, your proof very closely follows the structure and specific elements of the original proof. Your contribution is, indeed, in extending it to the more complex scenario involving derivatives. It might be reasonable to add a comment to the paper explaining in more detail the relation of your proof to the original proof, and the associated challenges. Anyway, I'm increasing my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your valuable suggestion. We will incorporate a comment in the final version of the paper, providing a more detailed explanation of the relationship between our proof and the original proof, as well as discussing the associated challenges.
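For readers skimming the thread, the rate comparison in item 1 of the rebuttal above can be summarized side by side (notation as in the rebuttal; this is a restatement, not an additional claim):

```latex
\begin{align*}
\text{DNN (Theorem 1):}\quad & O\!\big(N^2 L \log L\, (\log N)^2\big) \text{ parameters}
  \;\Longrightarrow\; \|f - f_{\mathrm{DNN}}\|_{W^{1,\infty}} = O\!\big(N^{-2(n-1)/d} L^{-2(n-1)/d}\big),\\
\text{Finite elements:}\quad & O\!\big(N^2 L^2\big) \text{ parameters needed for the same } W^{1,\infty} \text{ error},
\end{align*}
```

for target functions $f \in W^{n,\infty}$ on a $d$-dimensional domain. The advantage claimed for DNNs is thus the milder (near-linear, up to log factors) dependence on the depth $L$.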
Rebuttal 1: Rebuttal: We would like to express our gratitude for all the reviewers' valuable suggestions and careful reading. Based on your advice, we have made several improvements to our paper. Additionally, we have added an example for solving Partial Differential Equations by DNNs with Sobolev training in the introduction section, as well as a corollary discussing lower bounds of pseudo-dimension. We have provided further details in the attached file. Pdf: /pdf/317a6e67cb9a873a96f5166608ba728a49703298.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Structure Learning with Adaptive Random Neighborhood Informed MCMC
Accept (poster)
Summary: By adding various elaborations to the state-of-the-art Markov chain Monte Carlo inference algorithm, this paper achieves an efficient and effective Bayesian network inference algorithm. Bayesian networks are one of the main tools of machine learning with a long history. In general, their learning is known to be a computationally hard problem, but efficient inference algorithms based on Markov chain Monte Carlo have been actively studied. Section 2 describes the problem setup of a linear functional Bayesian network with a normally distributed weight matrix as a concrete example. Section 3.1 introduces the baseline algorithm as a state-of-the-art inference algorithm for Bayesian networks, applying the method of Liang et al. [2022] for Bayesian variable selection problems to Bayesian networks. In Sections 3.2 through 3.4, the authors' further elaborations are carefully described step by step. Section 3.2 shows how to effectively adjust the neighborhood of the proposals, inspired by Liang et al. [2022] for Bayesian variable selection problems. Section 3.3 introduces an effective device for (nested) sequential sampling of neighborhoods to eliminate problems that can arise in the DAG estimation problem, but not in Bayesian variable selection. Section 3.4 presents a method for properly restricting the neighborhoods, which tend to be enormous. Strengths: - This paper is a solid proposal for possible reasonable improvements in the Bayesian network inference problem, with broad coverage of the latest developments in the surrounding fields of Bayesian machine learning. - The text is very detailed so that a wide variety of readers (from beginners to experts) can follow the history and the latest developments in the field. - The code is provided in such a way that the proposed algorithm can be easily followed up by subsequent research, which is a very significant contribution to the field. 
Weaknesses: The candidate weaknesses listed below are based on questions I had during my initial peer review. As my misconceptions are resolved, they may cease to be weaknesses. - The paper is somewhat unclear in its claims about the mixing time analysis (or empirical observation) of the proposed MCMC algorithm, although there are several mentions of it (e.g., Line 13, 288, 365). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Thank you for sharing this very interesting paper. This paper has a lot of devices to help the reader understand. I really enjoyed reading many of the parts. On the other hand, for some parts, I also worry that perhaps I am underestimating the value of this paper due to my own lack of understanding. Therefore, I would like to present some arguments below to improve my own understanding. The authors do not need to respond to the inconsequential ones, but if the authors infer that my understanding is lacking on any point, it would be very helpful if you could respond. - I find the authors' argument regarding mixing times in the proposed algorithm somewhat unclear. In this paper, there are several mentions of mixing (e.g., Line 13, 288, 365). As the authors state "PARNI-DAG quickly converges to high-probability regions (Line 11),” I interpret these claims as "the proposed algorithm achieves more rapid mixing times or shorter expected arrival times to regions of high probability (note: these two claims are equivalent (Theorem 1.4 of [*]))”. [*] Yuval Peres and Perla Sousi. Mixing times are hitting times of large sets. Journal of Theoretical Probability, 28(2):488–519, 2015. However, I think that in general (especially when it depends on input data) analyzing or empirically observing MCMC mixing times is not an easy task. To my knowledge, there are not many examples of MCMC mixing times being analyzed for many problems in general. 
The exceptions are the following few problems: - Bayesian variable selection: Yun Yang, Martin J. Wainwright, and Michael I. Jordan. On the computational complexity of high- dimensional Bayesian variable selection. The Annals of Statistics, 44(6):2497 – 2532, 2016. - Bayesian community detection: Bumeng Zhuo and Chao Gao. Mixing time of Metropolis-Hastings for Bayesian community detection. Journal of Machine Learning Research, 22:10:1–10:89, 2021, - Structure learning with directed acyclic graph: [+] Quan Zhou and Hyunwoong Chang. Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes, Annals of Statistics, 2023 (or arXiv:2101.04084). Fortunately, the DAG inference problems addressed in this paper seem to be able to guarantee polynomial-time mixing times for certain algorithms (Theorem 6 of the above literature [+]). In this light, what observations can be made about the mixing time of the proposed MCMC algorithm? Can existing results for mixing times be easily applied to the proposed algorithm? Or is theoretical analysis of the proposed algorithm for mixing time a future issue? Also, as we can observe from the experimental results, can the proposed algorithm speed up the mixing time by an order of magnitude that depends on the problem size? Or is it an improvement of constant orders? Fortunately, theoretical analysis of MCMC mixing time for DAGs seems to be in progress, so we hope that the author's mention of these issues (separating what is known from what is not known) will provide important insights for the reader. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: As discussed in the above Questions, I am not sure the (theoretical) guarantee of mixing time of the proposed MCMC algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for the reviewer’s positive comments about the clarity of exposition and the developments brought about by our work. We proceed by answering the reviewer’s clarifying questions about mixing time here below. In some parts of the paper, we use the wording ‘mixing time’ to describe the empirical mixing performance observed in the output of simulated Markov chains, rather than the formal theoretical mixing time property of the MCMC algorithm. These numerical results (Section 4) suggest that the PARNI-DAG proposal is very likely to have a faster theoretical mixing time compared to the add-delete-reverse proposal, even though this has not been formally proved. Investigating the mixing time bounds of the PARNI(-DAG) proposal is a very interesting topic, but currently beyond the scope of this work. Nonetheless, to acknowledge this, we have included a brief discussion in the new version of the paper. Although the theoretical mixing time bound for random walk proposals has been largely studied, similar results for the class of locally informed proposals (including PARNI and PARNI-DAG) are relatively under-developed, due to the complexities arising in the proposal distributions. Most results on mixing time in discrete sample spaces focus on the problem of Bayesian variable selection. It has been shown in [1] that a random walk proposal (specifically the add-delete-swap proposal) can achieve polynomial-time mixing under mild conditions on the posterior distribution. Another theoretical result in [2] shows that the informed proposals can achieve dimension-free mixing time bounds under the same mild conditions. Under the conditions that posterior mass concentrates on a small set of models and the chain starts at a model close enough to the underlying “true” model, it has been shown in [2] that the mixing time of the Locally Informed and Thresholded proposal (LIT) does not depend on the number of covariates. 
Notoriously, the mixing time of MCMC samplers in Bayesian structure learning settings is harder to study, due to the higher complexity of the DAGs sample space. Recent work [3] has shed light on the analysis of mixing time bounds for Bayesian structure learning as a generalisation of the result from [1]. In [3], the authors showed that the mixing time of the Random Walk Greedy Equivalent Search (RW-GES) proposal is at most linear in the number of covariates and the number of datapoints. Moreover, they also presented the necessary conditions for posterior consistency in Bayesian structure learning. Proving that the informed proposal can achieve faster mixing time than the random walk proposal (as [2] did for the case of Bayesian variable selection) in Bayesian structure learning settings is still an ongoing area of research. We conclude by mentioning that, as part of a separate ongoing research, we are currently studying the mixing time of the PARNI proposal in comparison to other state-of-the-art schemes on various applications. From the empirical results in [4], the PARNI proposal has faster empirical mixing time compared to the LIT proposal, and it is highly likely that the PARNI proposal can also achieve dimension-free mixing like the LIT proposal on Bayesian variable selection. Considering the theoretical results in [2], it appears feasible to generalise them to the Bayesian structure learning setting, but we leave this for future research. We are happy to include in the new version of the paper a discussion about the theoretical mixing time of the PARNI-DAG proposal and how it relates to the work mentioned above. The discussion also mentions possible methods (e.g., spectral gap, canonical path analysis and drift-and-minorization methods) that can be employed to find the theoretical mixing time bound of PARNI-DAG on structure learning problems. We look forward to hearing back from you, \ The Authors [1] Yun Yang, Martin J. Wainwright, and Michael I. Jordan. 
On the computational complexity of high-dimensional Bayesian variable selection. The Annals of Statistics, 44(6):2497–2532, 2016. [2] Zhou, Q., Yang, J., Vats, D., Roberts, G.O. and Rosenthal, J.S., 2022. Dimension-free mixing for high-dimensional Bayesian variable selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(5), pp.1751-1784. [3] Quan Zhou and Hyunwoong Chang. Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes, Annals of Statistics, 2023 (or arXiv:2101.04084). [4] Liang, X., Livingstone, S. and Griffin, J., 2022. Adaptive random neighbourhood informed Markov chain Monte Carlo for high-dimensional Bayesian variable selection. Statistics and Computing, 32(5), p.84. --- Rebuttal Comment 1.1: Title: I appreciate the author's very detailed and helpful responses. Comment: I appreciate the author's very detailed and helpful responses. The detailed additional explanation for the MCMC mixing time exceeded my expectations. All my concerns have been addressed. Thank you very much. I am sure that these explanations will satisfy the more expert readers, although I believe that the original paper also entertained a diverse audience ranging from newcomers to the field to experts in Bayesian networks with a long history of research. I would keep the score unchanged from my initial positive impression. Thank you for sharing a solid and important paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer PZBP Comment: Dear Reviewer PZBP, We are very happy the response has clarified any outstanding concerns about the work. We are once again thankful for the positive comments, but mostly for the interesting discussion these have sparked about mixing time bounds of PARNI-like proposals. This will turn out to be very useful also for related ongoing (and future) research on the topic. We remain available in case any additional query arises! Kindest Regards, \ The Authors
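For readers less familiar with structure MCMC, the add-delete-reverse random-walk proposal that serves as the baseline throughout this discussion can be sketched in a few lines of Python. This is an illustrative toy under simplifying assumptions (a placeholder score function; the Hastings correction for the asymmetric move probabilities is omitted for readability), not the authors' implementation:

```python
import math
import random

def is_acyclic(n, edges):
    """DFS check that the directed graph on nodes 0..n-1 has no cycle."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
    state = {i: 0 for i in range(n)}  # 0 = unvisited, 1 = on stack, 2 = done

    def dfs(u):
        state[u] = 1
        for w in adj[u]:
            if state[w] == 1 or (state[w] == 0 and not dfs(w)):
                return False  # back edge found: cycle
        state[u] = 2
        return True

    return all(state[i] != 0 or dfs(i) for i in range(n))

def adr_step(n, edges, log_score, rng):
    """One add-delete-reverse Metropolis-Hastings step over DAGs.

    `log_score` is any DAG log-posterior; proposals that leave the DAG space
    are rejected outright. A full implementation would also include the
    proposal-ratio correction, omitted here for brevity.
    """
    proposal = set(edges)
    u, v = rng.sample(range(n), 2)
    if (u, v) in proposal:
        proposal.remove((u, v))
        if rng.random() < 0.5:
            proposal.add((v, u))  # reverse move (plain delete otherwise)
    else:
        proposal.add((u, v))      # add move
    if not is_acyclic(n, proposal):
        return edges
    if math.log(rng.random()) < log_score(proposal) - log_score(edges):
        return proposal
    return edges
```

PARNI-DAG's departure from this scheme is precisely in how the edge is chosen: instead of uniformly at random, it is drawn from an adaptive, locally informed neighbourhood distribution, which is what the discussion above credits for the faster empirical mixing.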
Summary: This paper proposes PARNI-DAG, a new MCMC method for sampling Directed Acyclic Graphs (DAGs) that can be used for the problem of structure learning under observational data. PARNI-DAG builds on top of PARNI, and similarly uses locally informed, adaptive random neighborhoods with an efficient point-wise implementation, but introduces additional improvements, including: pre-tuning sampler parameters using a skeleton graph derived from other methods, augmenting the search neighborhood with an edge-reversal move, and neighborhood thinning to improve computational efficiency. Experiments on some toy datasets demonstrate an advantage over existing baselines. Strengths: The proposed method seems to be an effective adaptation of PARNI to do Bayesian learning on DAGs, and outperforms a few (classical) baseline methods on a few toy benchmarks. Weaknesses: I am not an expert in this field, but it seems to me this paper is largely adapting the existing PARNI method for Bayesian variable selection to DAG learning, and combining various techniques that are already present in the current literature on top of PARNI. It would be helpful if the authors could make a more compelling case for the contributions of the paper over existing work. For example, in the L62-L72 contribution, it seems up until L69 it is just describing what PARNI already has. The procedure for pre-tuning sampler parameters and doing a warm start also seems like a trivial application of existing methods. In Section 3, titled "The novel PARNI-DAG proposal", the entire Section 3.1 seems to be just a recap of what PARNI already has. Section 3.2 is mostly a recap of Kuipers et al. [2022]. Section 3.3 introduces the reversal neighborhood, but this is also done in, for example, partition MCMC. Section 3.4 also seems like a simple adaptation of the thinning procedure that is already present in PARNI. 
It would be helpful if the authors could reorganize it this way and also clearly state what exactly the novel contribution of PARNI-DAG is. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: The authors did not discuss limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent reviewing our submission. In what follows, we address concerns about novelty and originality of the work. The modifications made to the original PARNI proposal, in order to adjust the sampler from a pure variable selection setting to the more complex structure learning one, are several, and non-trivial. Applying a vanilla PARNI proposal to a DAG learning problem results in sub-optimal performance and particularly poses serious concerns about the sampler’s scalability, due to the complexity of DAGs space. For this reason, the adjustments we introduce are necessary to make the PARNI sampler available to the structure learning community to use. We proceed by addressing some of the specific points raised by the reviewer: • In Section 3.2, the procedure we introduce is fundamentally different from the one described in Kuipers, J. et al. (2022). While they make use of the PC algorithm to restrict the starting search space, we instead utilise it to compute warm-start estimates (instead of the computationally intractable Rao-Blackwellised estimates used in original PARNI proposal) for the posterior edge probabilities $\pi (\gamma_{ij} = 1)$. While the former procedure introduces bias due to true positive edges deletion, ours does not, as the sampler can potentially still revert back and target the true posterior distribution. • In Section 3.3, we describe the reversal neighbourhood construction, introduced to improve the sampler’s mixing. Although the idea of a reversal neighbourhood features also in the classical structure MCMC sampler, designing the reversal neighbourhood for the PARNI-DAG proposal is a relevant modification - the way the reversal neighbourhood is constructed in a vanilla PARNI proposal is trivial and would result in extremely slow mixing. • Section 3.4 introduces the parameter adaptation scheme that controls the neighbourhood size and can thus significantly reduce the computational cost. 
The novel PARNI-DAG’s adaptation scheme comes with a completely different objective from the classic PARNI proposal, as it attempts to control the neighbourhood size directly. Without these adjustments, the vanilla PARNI proposal would not be efficaciously applicable to DAG learning problems, as it would suffer from extremely slow mixing issues. We look forward to hearing back from you,\ The Authors --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I thank the authors for the response. However, after reading the response my confusion still remains. For example, the authors mentioned that `Applying a vanilla PARNI proposal to a DAG learning problem results in sub-optimal performance and particularly poses serious concerns about the sampler’s scalability`, but it's not immediately clear to me what they mean by `sub-optimal performance` and what the `serious concerns` are. And with all the additional explanations this work still seems like a simple adaptation of existing techniques. But as I mentioned in the initial review, I am not an expert in this area. I will keep my score as is since the rebuttal does not seem convincing to me (and in many places seems to be making claims without any explanations/support). But it is possible that I do not fully grasp the challenges the authors had to overcome in the relevant adaptations, and I would be happy to hear feedback from other reviewers. --- Reply to Comment 1.1.1: Title: Response to Reviewer 1zPb Comment: Dear Reviewer 1zPb, Thank you for your reply. We would just like to clarify the specific points raised in the response that might be a source of confusion. - By ``serious concerns`` in relation to the vanilla PARNI sampler’s scalability, we mean both its **higher computational costs** and its **slower mixing**. Issues concerning the higher computational costs of vanilla PARNI are addressed via the changes introduced in Section 3.2 (warm-up PEPs estimates) and 3.4 (directly controlling neighbourhood size). 
As an example relative to the changes implemented in section 3.2 alone, vanilla PARNI, where the PEPs are computed adaptively via Rao-Blackwellised estimates, scales linearly in the number of edges and quadratically in the number of nodes. PARNI-DAG instead scales strictly less than that, with computational savings’ attributable to the use of warm-up PEPs estimates that depend on the starting search space obtained from the PC algorithm. Section 3.3 instead deals with slow mixing issues. - By ``sub-optimal performance`` we mean that vanilla PARNI yields a significantly lower accuracy (in addition to longer computational time) in DAG learning tasks compared to PARNI-DAG (and closer to ADR’s one). This is naturally attributable to its slower mixing properties mentioned in the point above. Kindest Regards, \ The Authors
Summary: This paper presents a Markov chain Monte Carlo (MCMC) sampler adapted from the previous PARNI sampler, called PARNI-DAG, which is designed for Bayesian structure learning under observational data. The authors assume causal sufficiency and target the posterior distribution on Directed Acyclic Graphs (DAGs). The proposed PARNI-DAG mainly relies on the PARNI sampler and modifies it to suit the purpose of structure learning, including 1. a warm-start of the neighbourhood sampling probabilities, 2. a reversal neighbourhood step, and 3. an adaptive neighbourhood skipping probability. The main contributions of the paper are 1. A warm-start procedure for the sampler's parameters that exploits skeleton graphs derived through constraint-based or scoring-based algorithms, ensuring better scalability and mixing properties with the number of nodes. 2. A reverse step to avoid getting trapped in a local mode. 3. An adaptive skipping probability such that not all intermediate neighbourhood sampling steps are executed. Empirically, the authors demonstrate the advantage of PARNI-DAG using real-world examples, showing advantages compared to previous MCMC methods. Strengths: ## Originality The originality of the paper lies in the development of the PARNI-DAG algorithm, which combines the Point-wise Adaptive Random Neighborhood Informed (PARNI) proposal with new features specifically designed for structure learning in the space of Directed Acyclic Graphs (DAGs). The proposed algorithm addresses the challenges of mixing and convergence in high-dimensional settings, which are not adequately addressed by existing MCMC methods for structure learning. This work does not provide a completely new sampling algorithm; instead, it modifies existing approaches to suit the purpose of structure learning. ## Clarity The paper is logically structured with a clear presentation of the modifications made to the PARNI proposal. 
The authors have made efforts to provide intuitive explanations and motivate their choices in the development of the PARNI-DAG algorithm. The paper is easy to follow, and the appendices provide additional details on the derivations and calculations. ## Significance The significance of the paper lies in its potential impact on the field of structure learning and causal discovery. The PARNI-DAG algorithm aims to address the challenges of mixing and convergence in high-dimensional settings. The algorithm's improved performance over existing MCMC methods for structure learning should make it a reasonable contribution to the field, but there are some limitations, which I will elaborate on in the following. Weaknesses: ## Empirical experiments: While the experimental results demonstrate the advantages of PARNI-DAG over existing MCMC methods, the experiments primarily focus on the comparison with MCMC-based approaches. However, in the literature review, the authors also mentioned several other structure learning approaches. Although the main claim of this paper is the improvement over existing MCMC, it is still beneficial to include a comparison to state-of-the-art Bayesian structure learning approaches like [1,2,3]. [1] Cundy, C., Grover, A., & Ermon, S. (2021). Bcd nets: Scalable variational approaches for bayesian causal discovery. Advances in Neural Information Processing Systems, 34, 7095-7110. [2] Lorch, L., Rothfuss, J., Schölkopf, B., & Krause, A. (2021). Dibs: Differentiable bayesian structure learning. Advances in Neural Information Processing Systems, 34, 24111-24123. [3] Geffner, T., Antoran, J., Foster, A., Gong, W., Ma, C., Kiciman, E., ... & Zhang, C. (2022). Deep end-to-end causal inference. arXiv preprint arXiv:2202.02195. ## Limitations of the linear model The proposed PARNI-DAG mainly targets the linear model with Gaussian assumptions because the marginal probabilities are needed. 
For general nonlinear models, such integration is not tractable, rendering the PARNI-DAG proposal inapplicable. However, nonlinearity is everywhere in real-world applications. To make the paper stronger, the authors should discuss the implications of using a linear model or demonstrate that the linearity assumption does not harm performance too much. This can be achieved by comparing against some of the previously mentioned baselines [1,2,3]. ## Computational complexity Although the proposed PARNI-DAG method uses many modifications to reduce the computational cost, this approach relies on the MH step to correct the bias. It is known that the MH step scales linearly with the number of datapoints, which can be a huge computational bottleneck for large datasets. I suggest the authors explicitly discuss the computational complexity and limitations, and potential approaches to remove this constraint. Technical Quality: 3 good Clarity: 3 good Questions for Authors: All the questions are mentioned in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not explicitly write a limitations section; I have made suggestions to include one covering, e.g., computational challenges and the limitations of using a linear model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent carefully reviewing the paper and their positive comments about its originality and clarity. We proceed by addressing the points raised in the weaknesses section. 1) Empirical Experiments We have implemented the DiBS and DiBS+ (Lorch, L. et al., 2021) and DECI (Geffner, T. et al., 2022) models on the experimental setups of Section 4.2. However, we have noticed that, perhaps not so surprisingly, these variational methods struggle to detect edges in these low-sample regimes, as they require a stronger signal (or larger samples, as featured in their experiments) to perform well. In the pdf file attached to the global response, we have added a table reporting these methods’ average SHD and number of edges detected on the first three datasets (ecoli, magic-niab, magic-irri); on the last dataset (arth150), characterised by a large number of nodes, they are significantly slower and run into numerical instability issues. 2) Limitations of the Linear Model We have added a new ‘limitations’ section in the manuscript, where we discuss both the implications of the linearity assumption and the computational complexity of PARNI-DAG. Linearity is naturally restrictive, particularly when the objective of the inference is the joint posterior $p(G, \theta | D)$ rather than the marginal posterior $p(G | D)$ that we focus on, where model misspecification is a threat only when it hampers edge identifiability - meaning that some edges could potentially still be detected in the presence of non-linear relationships. Unfortunately, the PARNI-DAG proposal is not directly extendable to Additive Noise Models (ANM) (Hoyer, P. et al., 2008), where a closed-form marginal likelihood does not exist. As part of a separate, but related, research project we have developed a new PARNI proposal coupled with an (approximate) Laplace approximation (Rossell, D. et al., 2021), suitable in contexts where the marginal likelihood is not available in closed form. 
We have shown that it works well empirically in Bayesian variable selection tasks in generalised linear models and survival models. This provides a solid base to potentially extend the PARNI-DAG sampler to ANM with an approximate marginal likelihood, although we leave this for future research. 3) Computational Complexity The other limitation of the PARNI-DAG proposal pointed out by the reviewer, and discussed in the newly added limitations section, is the computational cost associated with the likelihood evaluation, which scales at least linearly in the number of datapoints. The issue arises as the likelihood features both in the informed proposals and in the MH step. A potential solution to this problem, which we discuss in the new section, regards the possibility of coupling PARNI-DAG with sub-sampling MCMC procedures, such as the ones presented in Korattikara, A. et al. (2014) and Maclaurin, D. & Adams, R. (2015) (FireflyMC). We are actively investigating this topic as a part of a separate ongoing research project. We look forward to hearing back from you, \ The Authors [1] Hoyer, P., Janzing, D., Mooij, J.M., Peters, J. and Schölkopf, B., 2008. Nonlinear causal discovery with additive noise models. Advances in neural information processing systems, 21. [2] Rossell, D., Abril, O. and Bhattacharya, A., 2021. Approximate Laplace approximations for scalable model selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(4), pp.853-879. [3] Korattikara, A., Chen, Y. and Welling, M., 2014, January. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In International conference on machine learning (pp. 181-189). PMLR. [4] Maclaurin, D. and Adams, R., P., 2015, Firefly Monte Carlo: Exact MCMC with Subsets of Data. Proceedings of the 24th International Conference on Artificial Intelligence (pp. 4289–4295). IJCAI'15. --- Rebuttal 2: Comment: Thanks for the authors' feedback regarding the additional experiments. 
They addressed my concerns and promised to add a new limitations section, so I will keep my current evaluation. --- Rebuttal Comment 2.1: Title: Response to Reviewer xD8f Comment: Dear Reviewer xD8f, Thank you for your reply. We remain available in case any further clarification is needed. Kindest Regards, \ The Authors
null
null
Rebuttal 1: Rebuttal: We are thankful to the reviewers for the time spent going through our submission and for their insightful comments, as we believe these have significantly contributed to improving the paper. In particular, we have addressed Reviewer xD8f’s request for additional experimental comparison by implementing the variational methods DiBS, DiBS+ and DECI (Lorch, L. et al., 2021; Geffner, T. et al., 2022) on the experimental setups of Section 4.2, and showed that these are not particularly well-suited for these types of low-data regimes (a table of the results can be found in the pdf attached to this global response). We have also added a new section in the paper discussing the limitations of the linearity assumption and the computational complexity of PARNI-DAG, together with potential ways to alleviate this (sub-sampling MCMC solutions), which we are currently investigating. We have addressed Reviewer 1zPb’s concerns about the originality of the work by stressing how PARNI-DAG brings about several relevant adjustments to the vanilla PARNI proposal that make the method applicable to the higher complexity of DAG spaces. We have also stressed that vanilla PARNI would yield sub-optimal performance and would pose serious scalability concerns in structure learning settings. Finally, in response to Reviewer PZBP’s questions, we have included in the new version of the paper a brief discussion on the theoretical mixing time of the PARNI and PARNI-DAG proposals. As mentioned in the last paragraph of the individual response to Reviewer PZBP, as part of a separate ongoing project, we are currently studying mixing time bounds of the classic PARNI proposal in Bayesian variable selection settings, and plan to potentially extend this study to Bayesian structure learning settings in the future, although this is notoriously a harder problem. 
We hope we have adequately responded to all the concerns regarding the paper, and we look forward to engaging with the reviewers during the interactive discussion period, to respond to any outstanding queries. Kindest Regards, \ The Authors Pdf: /pdf/a2bacf6f875eefa205a1c82b275e66ba89c81a07.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Towards Self-Interpretable Graph-Level Anomaly Detection
Accept (poster)
Summary: Inspired by the multi-view information bottleneck and dual hypergraph transformation, this paper proposes a self-explainable anomalous graph detection method called SIGNET. The method has three key designs in the model training stage: 1) Dual hypergraph-based view construction, 2) Bottleneck subgraph extraction, and 3) Cross-view maximization. Anomalous graph detection and graph rationale extraction are achieved by maximizing the mutual information between the representations of the graph rationales extracted from the original graph view and the hypergraph view. During inference, the anomaly score of a test graph is determined by the negative mutual information between the representations of its two graph rationales from different views. Strengths: 1. The problem studied in this paper is important and interesting. It is essential to understand which subgraphs make a graph anomalous. 2. This paper provides a clear summary of the challenges of self-interpretable GLAD and why existing explainers for GNNs cannot be applied to graph-level anomaly detection. 3. This paper is well-structured. Weaknesses: 1. Despite having a clear framework, it is still hard to grasp the motivations behind employing the multi-view information bottleneck (MIB) and dual hypergraph transformation (DHT) to solve self-interpretable graph-level anomaly detection. Much like GSAT [1], the authors draw inspiration from information theory to extract subgraphs as the explanation of model predictions, but they fail to adequately discuss the underlying intuition or theoretical basis for how MIB and DHT specifically address graph-level anomaly detection. 2. The datasets used in the experiments are relatively small-scale, and a time complexity analysis is lacking. If there are difficulties in conducting experiments on large-scale graph data, a time complexity analysis is necessary. 3. More related work about anomaly detection and explainers of anomaly detection is expected. [1] Miao et al. 
Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism, ICML 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How does equation (1) improve unsupervised representations? In other words, why does the mutual information between $V_1$ and $Z_1$ need to be minimized when $V_2$ is known? And why does the mutual information between $V_2$ and $Z_1$ need to be maximized? 2. What are the key connections between unsupervised representation and graph-level anomaly detection? In this paper, graph-level anomaly detection is not a completely unsupervised problem, because labelled normal graphs serve as supervision information. (The answers to questions 1 and 2 will help me understand why MIB can be applied to graph-level anomaly detection problems.) 3. Why is the DHT-based view useful for detecting anomalous graphs? More fine-grained discussion is important. For example, for a normal graph and an anomalous graph, what do their DHT-based views look like? If this is difficult to depict, it would be beneficial to explain why, for a normal graph, the mutual information between the representations of the subgraphs extracted from the original graph view and the DHT-based view should be high, while for an anomalous graph, the mutual information between representations of the two different views should be low when the two graph views share a single extractor. (I have read the content from line 210 to line 215. Question: why can the distinct content between the dual hypergraph and the original view benefit anomaly detection? What exactly are these distinctions?) 4. Is it reasonable to use the same extractor to replace minimizing the SKL divergence? 5. Questions about the loss function (5): Minimizing (5) is equivalent to maximizing $I(\mathbf h_{G_i^{(s)}}, \mathbf h_{G_i^{\star (s)}})$. Hence, for $l(\mathbf h_{G_i^{(s)}}, \mathbf h_{G_i^{\star (s)}})$, a larger numerator and a smaller denominator are expected. 
My question is: what is the meaning of the small denominator? Why do we need to make the representation $\mathbf h_{G_i^{(s)}}$ of the graph rationale of normal graph $i$ dissimilar from the representation $\mathbf h_{G_j^{(s)}}$ of the graph rationale of normal graph $j$? Shouldn't normal graphs have similar representations (in the experiments, the authors use graphs from the same class as normal graphs)? In this way, the representations of their graph rationales should also be similar. 6. In the experiments, how are GE and PE used to explain the predictions of OCGIN, GLocalKD, and OCGTL? The loss functions of GE and PE are designed for graph classification. 7. The proposed method has connections with the InfoNCE loss and contrastive learning; a contrastive learning-based graph-level anomaly detection baseline is needed. GOOD-D [2] is a suitable choice. [2] Liu et al., GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection, WSDM 2023. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
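The reviewer's question 5 about the numerator and denominator of the InfoNCE-style objective can be made concrete with a small sketch. This is a generic InfoNCE estimator, not SIGNET's exact loss from Eq. (5); the embeddings, dimensions, and temperature `tau` are illustrative:

```python
import numpy as np

def info_nce(h_graph, h_rationale, tau=0.5):
    """Generic InfoNCE-style lower bound on mutual information.

    h_graph, h_rationale: (batch, dim) arrays; row i of each is the
    embedding of graph i and of its own extracted rationale.  The
    numerator rewards similarity of matched pairs (i, i); the
    denominator contrasts against the rationales of *other* graphs
    in the batch, preventing the trivial collapse in which every
    pair is scored as similar.
    """
    # cosine similarities between all graph/rationale pairs
    a = h_graph / np.linalg.norm(h_graph, axis=1, keepdims=True)
    b = h_rationale / np.linalg.norm(h_rationale, axis=1, keepdims=True)
    sim = a @ b.T / tau                       # (batch, batch)
    # row-wise log-softmax: matched pair vs. in-batch negatives
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))        # loss to minimise

rng = np.random.default_rng(0)
h = rng.standard_normal((8, 16))
aligned = info_nce(h, h + 0.01 * rng.standard_normal((8, 16)))
shuffled = info_nce(h, rng.standard_normal((8, 16)))
assert aligned < shuffled  # matched pairs yield a lower loss
```

In SIGNET's setting, the in-batch negatives are other normal graphs' rationales; per the authors' rebuttal below, the denominator's role is exactly to rule out the degenerate solution where every graph-rationale pair looks similar.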
Rebuttal 1: Rebuttal: **Reply to Reviewer QzdR** We are grateful to Reviewer QzdR for providing insightful feedback. Due to space limitations, we only reply to some core questions below; we will discuss Questions 1/4/6 from the review later. **Q1: Motivation behind using MIB and DHT** **A1:** We appreciate the reviewer for the insightful comment. We discuss the motivations as follows: **MIB**: We want to mention that in (unsupervised) graph-level anomaly detection (GLAD), the training phase only contains normal graphs, and the goal is to detect abnormal graphs during the test phase. Since we do not have any knowledge of labeled abnormal graphs during training, it is hard to learn a precise decision boundary between normal and anomalous graphs. Due to the lack of ground-truth knowledge of abnormal graphs during training, we leverage self-supervised (unsupervised) learning, in particular MIB, to build the GLAD model, which characterizes the distribution of normal graphs and provides a good indicator for measuring the abnormality of test graphs. Also, by extracting the bottleneck subgraphs from our MSIB framework, we are able to provide self-interpretability when detecting anomalies. **DHT**: Our goal is to learn the normal data distribution through self-supervised learning; however, simply augmenting the input graph with stochastic perturbations may introduce anomalous graphs, leading to unexpected performance degradation. In contrast, the DHT-based view construction avoids perturbing the original graph's semantic knowledge and provides a new view of the input graph, enabling us to capture consistent matching patterns between the two views. Additionally, the dual hypergraph approach prioritizes edge-level information, encouraging the model to recognize both node-level and edge-level anomaly patterns. Consequently, the node-edge matching pattern becomes a focal point for model learning and serves as an indicator for normal/abnormal graphs. 
For instance, if a normal pattern is a star-shaped motif, the node patterns (e.g., high-degree center node) and edge patterns (e.g., connections between center/tail nodes) of this motif can be effectively captured by maximizing the agreement between the original view and the DHT view. In cases where an anomaly with a similar but distinct motif (e.g., clique) is introduced, it would exhibit a lower mutual information (MI) value due to the disruption of node-edge matching patterns, which can help us to better detect the abnormal graphs during the test phase. **Q2: Experiments on large-scale datasets and complexity analysis** **A2:** Thank you for your valuable feedback regarding the scalability and time complexity analysis. In response to your suggestions, we conducted additional experiments on two large-scale datasets with more samples, BM-MT-20k and MNIST-0-6k. The content of datasets is similar to BM-MT and MNIST-0. The results below demonstrate that our method can handle datasets with a large number of graphs, verifying the scalability of our method. Note that SIGNET can also handle datasets with large graphs (e.g., REDDIT-B with 429.6 avg. #node), as shown in our main paper. Furthermore, we have included a comprehensive time complexity analysis of our method, which is attached in P1 of general response. |Dataset|AD-AUC|NX-AUC|EX-AUC|Runtime| |-|-|-|-|-| |BM-MT-20k|95.75%|81.85%|80.59%|2.5561s| |MNIST-0-6k|82.25%|74.08%|76.35%|0.6247s| **Q3: Connections between unsupervised representation and graph-level anomaly detection** **A3:** We appreciate the reviewer for raising this concern. Following the previous literature on GLAD, we want to mention that the training graphs only contain normal ones, and the goal is to detect abnormal graphs during the test phase. Since we do not have any knowledge of labeled abnormal graphs during training, it is hard to learn precise decision boundaries between normal and anomalous graphs. 
Therefore, we try to leverage unsupervised/self-supervised learning to build the graph-level anomaly detection model, which characterizes the distribution of normal graphs and provides an indicator (i.e., cross-view MI) for measuring the abnormality of test graphs. **Q4: Why minimize the MI between graph rationale and other normal graphs** **A4:** We appreciate your insightful question. Firstly, the Info-NCE mutual information estimator theoretically requires negative samples in its denominator to prevent collapse, i.e., all input pairs are estimated to have large MI. To meet this requirement efficiently, we opted to use other samples in the same batch as negative samples, which is an efficient strategy without extra cost. During the model design phase, we explored stochastic graph perturbation to generate synthetic negative samples. However, this approach proved less effective, unstable, and time-consuming. Empirically, using other normal samples as negative samples has shown effectiveness. Secondly, despite normal samples belonging to the same class or having similar properties, their graph rationales can differ from each other. For instance, in the BM-MS dataset, the rationales may include rings of different sizes; in the MUTAG dataset, they can be -NH2 or -NO2. Hence, maximizing the similarity between a graph and the rationales of other graphs may lead to mismatching and hurt stability and performance. Instead, maximizing the similarity of each graph to its own rationale proves to be a reliable strategy. Therefore, we opted to use Info-NCE to ensure that each graph has a larger similarity to its own rationale rather than to others' rationales. We hope this explanation addresses your query and provides better insight into our approach. **Q5: New baseline** **A5:** Thank you for providing a strong anomaly detection baseline GOOD-D that makes our experiments more convincing. 
We will include a comparison with GOOD-D (see the attached PDF), where SIGNET outperforms GOOD-D by a significant margin of 7.40%. --- Rebuttal Comment 1.1: Title: Discussion for more questions Comment: We extend our gratitude to Reviewer QzdR for their insightful feedback. In this section, we offer responses to the remaining questions posed by the reviewers. If you have any further points for discussion, please feel free to share them here. **Q6: Regarding the explanation of the MIB principle (Eq. (1))** **A6:** We appreciate the reviewer for the thoughtful question. Our explanation of Eq. (1) is mainly based on the original paper of the multi-view information bottleneck (MIB), where this equation is derived [*1]. According to the paper, minimizing the mutual information between $V_1$ and $Z_1$ conditioned on $V_2$ can improve unsupervised representations by discarding irrelevant information that is unique to $V_1$ and not predictable by observing $V_2$. This is because, in the multi-view unsupervised setting, we assume mutual redundancy between the two views, meaning that they share some information. Therefore, any information that is unique to one view and not shared by the other is considered superfluous and can be safely discarded without losing any label information. On the other hand, maximizing the mutual information between $V_2$ and $Z_1$ is necessary to ensure that the representation is sufficient for the potential label $Y$. This is because the mutual information between $V_2$ and $Z_1$ represents how much label information is accessible from the representation. Therefore, by maximizing this term, the authors are able to ensure that the representation contains enough information to predict the label accurately. 
Overall, by combining these two requirements using a relaxed Lagrangian objective, we are able to obtain a minimal sufficient representation that discards as much superfluous information as possible without losing any label information, resulting in a more robust and informative representation. In the revised paper, we will discuss more about the inner mechanism of MIB. [*1] Federici et al. Learning robust representations via multi-view information bottleneck. In ICLR, 2020. **Q7: Is it reasonable to use the same extractor to replace minimizing the SKL divergence?** **A7:** Thank you for your insightful question, and our answer is indeed positive. Theoretically, the purpose of the SKL divergence term is to align two distributions, specifically $p(G^{1(s)}|G^{1})$ and $p(G^{2(s)}|G^{2})$. When these distributions are perfectly matched, the SKL value diminishes to $0$. It's important to note that the transformations from $G$ to both $G^{1}$ and $G^{2}$ are deterministic. If we employ the same extractor to generate $G^{1(s)}$ and $G^{2(s)}$ from $G=G^{1}$, this implies an inherent alignment of the two distributions, resulting in a $D_{SKL}$ of $0$, which aligns with our intention. Hence, we can omit the SKL term from the objective function. Furthermore, the empirical comparison in Section 4.4 underscores the effectiveness of this design. To conclude, based on both theoretical consideration and experimental evidence, we assert that this design choice is reasonable. **Q8: Details of “detector+explainer” baselines** **A8:** We appreciate your insightful question. Firstly, the Info-NCE mutual information estimator theoretically requires negative samples in its denominator to prevent collapse [*2,*3], i.e., all input pairs are estimated to have large MI. To meet this requirement efficiently, we opted to use other samples in the same batch as negative samples, which is an efficient strategy without extra cost. 
During the model design phase, we explored stochastic graph perturbation to generate synthetic negative samples. However, this approach proved less effective, unstable, and time-consuming. Empirically, using other normal samples as negative samples has shown effectiveness. Secondly, despite normal samples belonging to the same class or having similar properties, their graph rationales can differ from each other. For instance, in the BM-MS dataset, the rationales may include rings of different sizes; in the MUTAG dataset, they can be -NH2 or -NO2. Hence, maximizing the similarity between a graph and the rationales of other graphs may lead to mismatching and hurt stability and performance. Instead, maximizing the similarity of each graph to its own rationale proves to be a reliable strategy. Therefore, we opted to use Info-NCE to ensure that each graph has a larger similarity to its own rationale rather than to others' rationales. We hope this explanation addresses your query and provides better insight into our approach. [*2] Tschannen et al. "On Mutual Information Maximization for Representation Learning." In ICLR, 2020. [*3] Chen et al. "A simple framework for contrastive learning of visual representations." In ICML, 2020. **Q9: More related works** **A9:** We appreciate the reviewer for the suggestion of our literature review. In the revised version of the paper, we will conduct a thorough review and include more references on both anomaly detection and explainable anomaly detection.
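As a reading aid for the A6 discussion above, the MIB objective it paraphrases can be written out explicitly. This is the relaxed Lagrangian form from Federici et al. (2020), with $\beta$ trading off the two requirements; the notation follows the rebuttal and is not necessarily the paper's Eq. (1) verbatim:

```latex
% Relaxed MIB objective for the representation Z_1 of view V_1
% (Federici et al., 2020): keep what the other view predicts,
% discard what it cannot.
\mathcal{L}_1 \;=\; -\, I(Z_1; V_2) \;+\; \beta \, I(Z_1; V_1 \mid V_2)
```

Minimizing $I(Z_1; V_1 \mid V_2)$ discards information private to $V_1$, which is superfluous under the mutual-redundancy assumption, while maximizing $I(Z_1; V_2)$ retains the shared, label-relevant information: exactly the two requirements described in A6.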
Summary: The paper tackles the task of graph anomaly detection. It proposes a multi-view subgraph information bottleneck framework that is used to introduce SIGNET, a self-interpretable graph anomaly detection method. The method leverages the dual hypergraph transformation to obtain two views of the same graph, which are subsequently used for a mutual information maximization objective. The resulting model can then be employed to detect anomalous samples in the dataset and provides an implicit explainability mechanism that can highlight salient regions of the graph. Strengths: - The paper is well-written, and the ideas presented are exposed in an easy-to-follow manner. - The authors present compelling arguments for their design choices, an ablation study, and the derivation of the loss function in the Supplementary materials. - The Supplementary materials contain the code for the proposed approach and various details about the experimental setup, such as the hyperparameter pool used during hyperparameter selection. - The quantitative results are compelling, and the proposed method obtains the best results in most cases. Adequate baselines were selected for the experiments, containing both neural approaches and more classical AD methods, such as the OC-SVM with a WL kernel. Weaknesses: - The paper focuses on presenting an explainability/interpretability mechanism instead of an anomaly detection pipeline with an explainability mechanism built in. The usage of the dual hypergraph view in the context of training an anomaly detector is novel in its own right. The main results for anomaly detection in Table 2 are also good. An explainability mechanism is a very important addition, but I would prefer it not to be the paper's primary focus. The strong emphasis on explainability made me think that the authors were not confident in the anomaly detection model, which in my eyes, should be pitched as the main strength of the paper. 
- The qualitative analysis of the explanations is somewhat lacking. A more detailed caption for Fig. 3, which also explains what the highlighted nodes and edges represent, would improve readability. It looks to me that the model highlights the motifs in both the normal and anomaly scenarios, but I would have expected it to highlight the motif in the anomalous graphs more. I would also like more qualitative results, especially on real-world datasets. Figure 3 is overall somewhat confusing and makes me feel like the examples might be cherry-picked. - As far as I'm aware, the NX-AUC and EX-AUC metrics are not very common. Please consider expanding their meaning in the main paper or the Supplementary materials. - I don't particularly like the "cartoon"-style fonts used in Fig. 1 and Fig. 2., please consider some alternatives. However, this is a subjective stylistic opinion and won't affect my final score for the paper. - I have not seen any mention of hyperparameter selection for the baselines. Some classical methods (such as the OC-SVM) are particularly sensitive to hyperparameters. Please consider adding a discussion regarding this. - It would be insightful if the paper would further discuss potential limitations and negative results. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Do the datasets used contain anomalies in the training set? I would love to see an analysis of the performance with respect to the anomaly contamination percentage in the training data. - The overall objective would also be interesting for general unsupervised graph representation learning. Have the authors tried using the graph representations for downstream tasks? I'm not requesting this experiment to be done since it's outside the scope of the paper; it's just something I have thought about while reading it. I imagine the results would not be great since the two views are somewhat similar. - Did the authors search for any hyperparameters for the baselines? 
- Did the authors examine any other qualitative results? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The authors have addressed their limitations regarding the inability to use ground-truth labels when training their model, making it entirely unsupervised. Expanding the discussion of the limitations, potentially with negative qualitative results, would be beneficial. Discussing FP/FN samples obtained by picking some anomaly threshold would also be interesting since the model could provide explanations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Reviewer vtQs** We appreciate Reviewer vtQs for the valuable feedback and acknowledge our technical contributions and the effectiveness of the proposed method. We address the concerns raised by the reviewer as follows. **Q1: The main focus of this paper** **A1:** Thank you for your valuable feedback and perspective on our paper. We want to clarify that the primary focus of our paper is to develop a self-interpretable graph-level anomaly detection method with robust anomaly detection capability and promising explainability for its predictions. We are confident in both of these capabilities, and they have been substantiated by the results of our experiments in Sections 4.2 and 4.3, respectively. However, it should be noted that self-interpretability is a novel property for graph-level detection methods, representing a "zero-to-one" innovation. As a result, we have placed additional emphasis on introducing and explaining this innovative aspect, which may inadvertently overshadow the strength of our anomaly detection model. We appreciate your valuable insight, and based on your suggestion, we will revise our paper to ensure a more balanced presentation of both the detection capability and the novel self-interpretability feature. **Q2: Insufficient qualitative analysis** **A2:** Thanks for the valuable suggestion! We have prepared more visualization results, which are included in the attached PDF and will be incorporated into the revised version of the paper. These additional visualizations cover not only three synthetic datasets but also a real-world dataset, namely MUTAG. To address your concern about the confusion in Figure 3, we will provide a more detailed caption that explains what the highlighted nodes and edges represent. Also, we will discuss the new results in more depth. Notably, the visualization examples are randomly selected rather than cherry-picked. 
As a result, you can witness some cases where SIGNET does not work perfectly in our new results. **Q3: The meaning of evaluation metrics** **A3:** We understand the importance of clarity and ensuring that the metrics used in our study are well-defined for readers. In the revised version of the paper, we will provide a more detailed explanation (P3 in the general response) of the NX-AUC and EX-AUC metrics. **Q4: Hyper-parameters of baselines** **A4:** We apologize for the oversight in not mentioning the hyperparameter selection process for the baselines. Allow us to provide more clarity on this matter. To ensure robust and reliable results, we conducted a comprehensive grid search to obtain the best hyperparameter configurations for the baselines. Specifically, for deep GLAD methods, we performed grid searches on key hyperparameters (e.g., layer number and hidden dimensions). For post-hoc explainers, we conducted grid searches on their post-hoc training iterations and learning rates. As for shallow GLAD methods, we focused on searching for key hyperparameters such as the training iterations of detectors and kernel-specific parameters. We are committed to adding a more specific discussion regarding this in the revised version of the paper. **Q5: More discussion about limitations** **A5:** We appreciate your suggestion to discuss potential limitations and negative results in our paper. In the revised version of our paper, we will include a dedicated section to address the potential limitations of our proposed method, SIGNET, and also discuss any negative results that we encountered during our research. Regarding the method: while SIGNET performs well across various datasets, there might be specific datasets where its performance is not optimal. We will discuss the dataset characteristics that could impact the method's generalization and suggest ways to address this issue. 
For the negative results, we will transparently share any experiments or scenarios where SIGNET did not perform as expected or where its interpretability might be limited. **Q6: Anomalies in training set** **A6:** Thanks for the thoughtful comment! In this study, we adhere to the common practice of unsupervised graph-level anomaly detection, using a training set comprising only normal samples without any anomalies. Theoretically, adding anomalies to the training set may decrease the performance of all methods, including our proposed approach. However, it is important to note that some baseline methods are not specifically designed to handle scenarios with training data containing anomalies. Consequently, including anomalies in the training set could lead to unfair comparisons, as these baseline methods might not perform optimally in such settings. Although we recognize the significance of analyzing performance with varying levels of anomaly contamination in the training data, we have decided to leave this investigation for future work. **Q7: Style of figures** **A7:** Thanks for the valuable suggestion. In our figures, the cartoon-style fonts were chosen to provide a high-level explanation of the research question and the proposed method. The intention was to present a clear and visually engaging representation of the high-level concepts in another style, without delving into the specifics of the learning process. However, we also recognize the importance of maintaining a balance between clarity and aesthetics. In the revised version, we will explore alternative font styles that can still convey the high-level explanation effectively while aligning with academic standards. **Q8: Potential applications on unsupervised graph representation learning** **A8:** We agree that exploring the use of our framework for unsupervised graph representation learning could be promising. 
However, in this paper, our main focus is on the specific problem of unsupervised graph-level anomaly detection. In future works, we will consider exploring the use of our framework for unsupervised graph representation learning and its impact on various tasks. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications to me and the other reviewers. I will keep my score of 6 and raise my confidence to 5. --- Reply to Comment 1.1.1: Title: Thanks for the reply and comments Comment: We sincerely appreciate the reviewer once again for recognizing our contribution and for providing valuable comments! Your insights are truly invaluable to us.
Summary: To overcome the disadvantage of existing anomaly detection methods that fail to provide meaningful explanations for the predictions, this paper proposes SIGNET to (1) detect anomalous graphs and (2) generate informative explanations. To achieve this, the paper devises a multi-view subgraph information bottleneck framework to extract the informative subgraphs as explanations. Empirical results on 16 datasets verify the effectiveness of the proposed method. Strengths: - The investigated problem is novel and important, as robustness and interpretability are the key sub-areas of trustworthy graph learning. - The paper is easy to follow, with generally clear writing and illustration. - The proposed SIGNET is technically solid, with good and competitive empirical results. - Some preliminary analyses in terms of information theory are conducted. Weaknesses: - The technical contributions are neutral. The proposed MSIB framework seems to be a combination of information bottleneck and graph multi-view learning, while the novelty and difficulty are not clear. - The answers to "RQ1: Can SIGNET provide informative explanations for the detection results" are not convincing enough, which should be the key contribution of the paper. The reasons are as follows. - The two compared GNN explainers, GNNExplainer and PGExplainer, are not up-to-date. The latest and state-of-the-art explainers, e.g., GSAT (ICML'22) [22], should also be considered and discussed. - Note that GSAT is also derived from the information bottleneck, which shares a similar design as the proposed SIGNET. I would suggest the paper discuss the connections and differences in more depth. - Besides, directly combining GNN explainers and anomaly detection methods, e.g., OCGIN-GE, can be sub-optimal, as shown in Table 1. The paper should explain more about the baseline settings, that is, why such a combination is reasonable, given that the results show it does not work well in most cases. 
- The few cases shown in Figure 3 are insufficient and not convincing enough. It seems that SIGNET learns to capture the same (similar) functional sub-graph (or motif) for both normal and anomaly samples. I would suggest the paper show more cases and provide an in-depth analysis, which will add more value to the main contribution, i.e., interpretability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For the anomaly detection performance, Table 2 shows that SIGNET is outperformed by the three baselines in some cases. What are the reasons here? Is there a natural tradeoff between accuracy and interpretability? - The MI estimation appears many times. How does the paper conduct the MI estimation in a tractable and differentiable way? Equation (3) seems confusing, as it includes both the MI and the SKL divergence. - Besides, how does Equation (5) relate to Equation (3)? Is Equation (5) also an MI estimator? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the above Weaknesses and Questions. I would consider raising my score if the above questions are well answered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Reviewer yuBa** We appreciate Reviewer yuBa for recognizing our contributions and thank the reviewer for the insightful feedback. We provide our responses as follows. **Q1: Technical contribution of this paper** **A1:** Thanks for the valuable feedback. Our work embarks on an important and challenging research direction, representing the first step towards integrating self-interpretability into graph-level anomaly detection (GLAD), thereby serving as an inspirational catalyst for future advancements in this domain. It's essential to note that our proposed method transcends a mere combination of existing techniques. The Multi-View Subgraph Information Bottleneck (MSIB) framework is thoughtfully crafted to address the demands of unsupervised self-interpretable GLAD with sound motivation and solid deduction. Building upon this framework, we introduce several innovative designs to effectively tackle the intricacies of the research problem. We greatly value the reviewer's feedback and wish to emphasize that our paper introduces a pioneering GLAD method with inherent self-interpretability, carving a unique path distinct from existing approaches, and our work holds the potential to inspire future research in the evolving field of GLAD. **Q2: Comparison with state-of-the-art explainers** **A2:** Thanks for the kind suggestion. We understand the importance of considering other state-of-the-art explainers for comparison. However, regarding GSAT, we acknowledge that it is a self-interpretable GNN that requires labels for training. Unfortunately, in our GLAD setting, we do not have access to labels. As a result, we cannot directly compare our approach with GSAT. To address the reviewer's concern and ensure comprehensive evaluation, we have introduced one of the latest post-hoc explainers, RC-Explainer (TPAMI 2022 [1]), for comparison. The comparison (w.r.t. 
EX-AUC) below shows that SIGNET continues to outperform RC-Explainer on three datasets, reaffirming that the combination of trained detectors and post-hoc explainers typically provides sub-optimal explainability. |Methods|BM-MN|MNIST-0|MNIST-1| |-|-|-|-| |GLocalKD-RC|71.87|63.56|61.67| |OCGTL-RC|68.50|66.83|63.22| |SIGNET|83.45|72.78|74.83| [1] Wang et al. "Reinforced causal explainer for graph neural networks." IEEE TPAMI (2022). **Q3: Connection and differences with GSAT** **A3:** Thanks for the valuable comment. Following the reviewer's suggestion, we will include the connections and differences between our method (SIGNET) and GSAT in the revised paper, see P4 in general response. **Q4: Discussion about “detector+explainer” baselines** **A4:** Thanks for your valuable feedback! Regarding the implementation details, we will add them to the revised paper and we also attach them to P2 in general response. We claim that the combination is reasonable because the mechanism of post-hoc explainers is to parameterize the input graph and find the input that can generate the ideal output under several conditions, serving as the explanation. Since the baseline GLAD methods are deep learning models with graph-level input and scalar output, it is feasible to use a post-hoc explainer to explain their predictions, similar to the explanation process for classification models. However, such a combination leads to sub-optimal performance since the GLAD model and explainer are trained independently, which might not ensure optimal alignment between the two components during training. Consequently, this leads to a mismatch between the generated explanations and the true decision boundaries learned by the GLAD model. In contrast, SIGNET avoids this potential mismatch by incorporating interpretability into the detection model, allowing for joint learning of prediction and interpretation. 
By optimizing a unified objective for detection and explanation, SIGNET can align its learned decision boundaries with the generated explanations effectively. **Q5: More case studies** **A5:** Thanks for the valuable suggestion! In response to your feedback, we have prepared more visualization results (including the real-world dataset MUTAG), which are included in the attached PDF. With the new results, we will provide a detailed analysis in the revised paper. **Q6: Discussion about anomaly detection (AD) performance** **A6:** Thanks for the kind suggestion. While SIGNET has demonstrated strong AD performance on most benchmark datasets, we acknowledge that there are cases where it is outperformed by some of the baselines. A possible reason for this is dataset characteristics. Different datasets may have varying levels of complexity and distributions of anomalies, which can impact the performance of AD methods. Some baselines might be more suitable for certain datasets due to their specific design and assumptions, leading to their better performance. **Q7: Details of MI estimation in SIGNET** **A7:** Thanks for the thoughtful comment. MI plays a critical role in our proposed MSIB framework, specifically in the first term of Eq.(3). However, estimating MI can be challenging, especially when only a limited number of samples, rather than the underlying distributions, are available. Fortunately, [2] introduced parametrized MI estimators that offer a tractable and differentiable way to estimate MI using neural networks. In SIGNET, we adopt Info-NCE (Eq.(5)) for MI estimation due to its generalization ability (see the 2nd paragraph of Sec. 3.4). Experiments in Sec. 4.4 show the superiority of Info-NCE over other estimators. To correlate Eq.(3) with Eq.(5), instead of minimizing the SKL between two extractors, we utilize a unified subgraph extractor to estimate the subgraph for both views (see the 2nd paragraph of Sec. 3.3). 
This naturally aligns the two distributions without the need to minimize their SKL. Consequently, we can omit the second term in Eq.(3), leading to Eq.(5). [2] Tschannen et al. "On Mutual Information Maximization for Representation Learning." ICLR (2020). --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and extra results. Most of my concerns are alleviated. Nonetheless, the original novelty and technical contribution of the proposed method are neutral to me. Therefore, I will keep my score as 5 and raise my confidence to 4. --- Reply to Comment 1.1.1: Title: Appreciation for your response and inquiry regarding additional questions Comment: We appreciate Reviewer yuBa for your valuable comments! We would like to inquire whether you have any remaining concerns at this point. If there are any previous questions that haven't been thoroughly addressed due to the limitations of the rebuttal space, please feel free to highlight them. Your feedback is instrumental in refining our work, and we are willing to engage in further discussion to provide clarity on any unclear points or concerns you might have.
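For readers unfamiliar with the Info-NCE estimator referenced in A7 (Eq.(5)), here is a minimal, self-contained sketch of the standard Info-NCE contrastive objective. This is not the authors' implementation: the cosine-similarity scoring, the temperature `tau`, and all names are our assumptions, and real systems would compute this over learned GNN/HGNN embeddings with gradient-based optimization.

```python
import math

def info_nce(z1, z2, tau=0.2):
    """Symmetric Info-NCE loss, whose negation lower-bounds the MI between
    two views. z1, z2: lists of embedding vectors paired by index, i.e.,
    (z1[i], z2[i]) is a positive pair and all other pairings are negatives."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    n = len(z1)
    loss = 0.0
    for i in range(n):
        # Similarities of view-1 sample i against every view-2 sample.
        sims = [math.exp(cos(z1[i], z2[j]) / tau) for j in range(n)]
        # Cross-entropy of picking the true partner among all candidates.
        loss += -math.log(sims[i] / sum(sims))
    return loss / n
```

Intuitively, the loss is small when each graph's two views are closer to each other than to any other graph's views; minimizing it therefore maximizes the MI lower bound across views.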
Summary: This paper studies graph-level anomaly detection (GLAD), which aims to find anomalous graphs. To construct an explainable GLAD model in an unsupervised manner, the authors first propose a multi-view subgraph information bottleneck (MSIB) framework and then introduce a dual hypergraph as a supplemental view of the original graph. The core contribution of the paper is the design of a self-explainable GLAD model. Strengths: 1. Explainable AI is important for many real-world applications that highlight interpretability and security. The proposed framework is explainable by the model itself (rather than post-hoc explainers). 1. Using two different and distinguishable views to train the MSIB framework for graph-level anomaly detection is reasonable and sound. The authors also used cross-view MI maximization for estimating MI between two compact subgraph representations. Weaknesses: 1. The authors stated "this is the first attempt to study the explainability problem for anomaly detection on graph-structured data". However, there are works on node-level graph anomaly detection, both supervised and unsupervised and self-supervised. It would be better to use "graph-level anomaly detection" here. 1. The proposed model is a little heavy and introduces non-trivial computational overhead. The trade-off between scalability and interpretability (there are also other efficient methods for explainable graph-level representation learning) should be considered. Complexity and run-time analysis should be reflected in the paper -- given the overhead of computing on multiple subgraphs. 1. Statistics of all datasets should be provided, e.g., avg. number of nodes, density of the graph. 1. All methods on GLAD seem to be unstable and less robust on the detection performance (with extremely high std). The evaluation of the effectiveness of the proposed method (RQ2 and RQ3) seems to be trivial and "boring". I suggest the authors emphasize RQ1 more. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Suggestion: the authors should discuss more and provide more examples about the explainability of the model in RQ1 and in Figure 3. There are only visualizations for synthetic datasets. I believe readers would be more interested in these (rather than performance). The advantages of the proposed explainable GLAD model over existing explainable models should be clearly presented. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors slightly discussed the model limitations in the Conclusion section. More discussions w.r.t. complexity and dataset-type applicability are needed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply to Reviewer 6qZe** We appreciate Reviewer 6qZe for the positive review and constructive comments. We provide our responses as follows. **Q1: Complexity and run-time analysis of the proposed method** **A1:** We appreciate your concern regarding the computational overhead and the trade-off between scalability and interpretability in our proposed model. In response to these concerns, we conducted a comprehensive time complexity analysis, which is attached to the general response and will be presented in the revised paper. The analysis shows that the time complexity of SIGNET is $\mathcal{O}(NLd^2(m+n) + Nnd(d_f+d_{f*}) + NBd)$, which is comparable to mainstream GNN-based models, including our baselines. From the analysis, we can also find that although the self-interpretable block (i.e., subgraph extractor) may introduce some additional time complexity, it is designed as a lightweight module compared to the entire anomaly detection framework and does not increase the overall scale of time complexity. Additionally, we provide a comparison between our method and baseline methods (including GOOD-D [1], the strong baseline pointed out by Reviewer QzdR) in terms of running time per epoch, as shown in the table below. This comparison illustrates that while our method provides self-interpretation capabilities, it still maintains competitive running efficiency. Specifically, on the dataset with larger graphs (MNIST-0), the runtime per epoch is very close to that of the most efficient baseline OCGIN, and is about 6.5x faster than the strong baseline OCGTL. Therefore, the running efficiency of SIGNET should not be a major concern. |Dataset|OCGIN|GLocalKD|OCGTL|GOOD-D|SIGNET(ours)| |-|-|-|-|-|-| |BM-MT|0.0457s|0.0535s|0.2624s|0.3036s|0.0720s| |MNIST-0|0.1213s|0.3498s|0.8019s|0.5937s|0.1273s| [1] Yixin Liu, Kaize Ding, Huan Liu, and Shirui Pan. Good-d: On unsupervised graph out-of-distribution detection. 
In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pages 339–347, 2023. **Q2: More discussion for explainability (RQ1)** **A2:** We appreciate your interest in the explainability of our proposed GLAD method and the importance of providing more examples and discussions. In the revised version of our paper, we will dedicate more attention to the model's explainability, discussing it in greater detail and providing additional examples. We acknowledge the significance of visualizations, as they offer a clear and intuitive understanding of the model's explanations. Following your suggestion, we have prepared more visualization results, which are included in the attached PDF and will be incorporated into the revised paper. These visualizations include results on the real-world MUTAG dataset. More visualization examples will be displayed when we open-source our code. We hope the extra results can address your concerns about the explainability of our method. **Q3: Detailed statistics of datasets** **A3:** Thanks for the kind suggestion! Due to space limitations, we will move the dataset statistics to Appendix F in the supplementary material. We understand the importance of easy access, so we will clearly highlight the specific section in the revised version. Additionally, we will include more details, such as the density of graphs, as suggested by the reviewers. **Q4: Statement about “first attempt” contribution** **A4:** We appreciate your suggestion. While there have been previous works on node-level graph anomaly detection, our paper specifically focuses on the explainability problem for graph-level anomaly detection. To clarify this, we will use the term “graph-level anomaly detection” for our first-attempt contribution in the revised version. Thank you for helping us improve the accuracy of our paper. --- Rebuttal Comment 1.1: Title: Additional Questions Comment: Thank you for the clarifications provided. 
Upon reviewing other questions, I have encountered further inquiries and hope that the authors could address my concerns. Before presenting my questions, I'd like to clarify that I possess a strong familiarity with hypergraphs, IB techniques, and self-supervised learning, as well as a reasonable understanding of (/un/semi)supervised time series anomaly detection. However, I must admit that I am not familiar with GLAD, including challenges, baselines, datasets, metrics, etc. - In light of a recent survey paper titled "A Survey on Explainable Anomaly Detection," it appears that the authors may not be the pioneers in considering explainability/interpretability in graph anomaly detection. Consequently, the assertions of being the "first attempt" or achieving a "zero-to-one" accomplishment might not be tenable. I agree with the first suggestion made by Reviewer vtQs in the weaknesses section. If the authors insist that the principal contribution of their work is the introduction of explainability/interpretability into GLAD, then the originality and significance of this work could be significantly weakened. My perspective is that the authors essentially utilize an existing IB technique to endow the model with a degree of interpretability, which, I must say, is a relatively limited advance. It's worth noting that IB has already found success in various similar tasks, such as graph/node/link classifications, to enhance explainability. I believe these tasks bear a strong connection to graph anomaly detection, especially considering that some of the datasets employed in this work were initially utilized in graph/node/link classification tasks. - I noticed the authors' description that they evaluated the anomaly detection performance of SIGNET using 10 TU datasets following the setting in [3]. 
After reading (or 'glancing through') [3], I have observed a discrepancy between the datasets utilized in this study (10 datasets) and those specified in [3] (16 datasets). Given that [3] is also compared as a strong baseline, it raises the question of why the experiments weren't conducted using precisely the same settings. Additionally, I am curious about the rationale behind using AD-AUC rather than the commonly employed AUC metric for evaluating model performance. Could the authors elaborate on the distinctions between these two metrics? While I understand that NX-AUC and EX-AUC metrics are chosen to assess explainability, I believe it is equally important for the authors to furnish AUC results using the same settings as the baselines. - As recommended by Reviewer QzdR, I echo the suggestion of providing a more comprehensive coverage of related works, particularly in the domain of explainable anomaly detection. Could the authors offer a brief explanation here to illustrate the main difference of this work compared to existing explainable anomaly detection? - Following the suggestion of Reviewer yuBa to include discussions on recent and state-of-the-art explainers, I would like to expand upon this suggestion: It would be better to compare the proposed explainable model with other explainable techniques, such as influence functions, Shapley value -based EAI, and post-hoc concept bottleneck models, etc. I understand that conducting these experiments during the rebuttal is impractical, but I hope the authors will consider this in future research. This is especially crucial and necessary when someone claims that he/she has provided an explainable model in any particular area or sub-area. - One of my prior questions appears to have received limited attention. It revolves around the observed instability and lack of robustness in the performance of all GLAD methods, characterized by extremely high standard deviations (Table 1). 
Furthermore, Table 2 indicates that OCGTL exhibits significantly greater stability compared to SIGNET. Could the authors delve into a more in-depth analysis to shed light on the underlying causes for these phenomena? - One minor suggestion: GLAD is used to represent "graph-level anomaly detection". However, previous works often use GAD instead. Better to stay consistent with previous works. --- Reply to Comment 1.1.1: Title: Response (1/2) to further questions raised by Reviewer 6qZe Comment: We are grateful to Reviewer 6qZe for the valuable feedback. Below, you'll find our response addressing the raised questions. We hope that our response addresses your concerns. **Q5: Statement about “first attempt” contribution** **A5:** We deeply appreciate your valuable feedback and for sharing the recent survey paper in this research field. From the survey paper, we indeed identify two explainable papers related to graph-structured data: [214] for edge-level anomaly detection on dynamic graphs and [132] for node-level anomaly detection on network traffic data. To clarify this, we will use the term “graph-level anomaly detection” for our first-attempt contribution in the revised version, which will make our claim more accurate. It is still notable that we mainly focus on the “graph-level anomaly detection (GLAD)” problem, which is highly distinct from the scenarios in the above papers. The core challenges in explainable GLAD problems, e.g., the lack of graph-level labels and the pattern diversity of subgraphs, are unique and tricky. Since GLAD is a practical research problem, our contribution towards “self-interpretable GLAD” remains noteworthy and consequential. Regarding the applications of the IB principle, the majority of studies focus on supervised graph/node/link classification tasks where labels are available. In contrast, we apply this principle to an unsupervised anomaly detection scenario without the requirement of ground-truth labels. 
We believe that it is not a simple borrow-and-application task but requires well-crafted designs, such as multi-view MI maximization. To sum up, we will carefully modify our statement about the “first attempt” contribution to make the paper more precise. We appreciate the insightful comments once again. **Q6: Benchmark for comparison and metric** **A6:** We appreciate your concern regarding the benchmark and metric. For the benchmark for comparison, we found that GLocalKD [3] establishes the comparison on 16 datasets, including 10 datasets from TU datasets and 6 self-collected datasets (HSE, MMP, p53, etc.). Meanwhile, other representative studies (OCGIN [7] and OCGTL [8]) conduct experiments only on TU datasets. This conflict makes it difficult to reproduce OCGIN and OCGTL on the 6 new datasets, which would require heavy grid search to ensure a fair comparison. In this case, we conduct our experiments on 10 commonly used TU datasets and 6 explainable datasets. We believe that our comparison is fair and reasonable. For the evaluation metric “AD-AUC”, we apologize for the unclear expression in the paper. Actually, “AD-AUC” is entirely equivalent to “AUC” in previous papers [3,7,8]. Here we denote it as “AD-AUC” because we want to distinguish it from our explainability metrics (NX-AUC and EX-AUC), which are also called “AUC”. We will add more explanation for the metric in the revised paper and apologize for the confusion. **Q7: Main difference compared to existing explainable anomaly detection** **A7:** We appreciate your feedback, and we will certainly update our related works section accordingly, referring to the recent survey papers and technical studies. As for the difference between our method and existing explainable anomaly detection (EAD), we would like to highlight the following two points: * Target task and explanation format. Existing EAD methods mainly focus on explaining anomaly detection results of image/tabular/time series data. 
To this end, they usually aim to learn explanations in the corresponding formats, such as pixels, feature values, and series segments. For the few EAD works on node/edge-level anomaly detection, their explanations mainly lie in node/edge features (mostly feature values). In contrast, we focus on the explainable GLAD problem, and our model aims to generate explanations in a subgraph format, including a group of nodes and corresponding edges. This is a more difficult task due to the discrete property of subgraphs and the complexity of graph-level patterns. * Technical solution. Our technical solution, i.e., multi-view subgraph information bottleneck, is novel and rarely seen in previous EAD methods. Note that existing EAD methods are mainly based on model gradients, approximation, reconstruction, etc. To the best of our knowledge, ours is the pioneering study that applies multi-view learning and the information bottleneck principle to EAD tasks. Although some components in our method are well crafted for graph-level EAD tasks, it is also promising to apply this learning framework to more EAD scenarios.
Rebuttal 1: Rebuttal: **General Response** We sincerely thank all the reviewers for their valuable and insightful comments. We are glad that the reviewers find that the studied problem is novel and significant (Reviewer 6qZe, yuBa, and QzdR), the proposed method is novel and well-motivated (Reviewer 6qZe, yuBa, and vtQs), the theoretical analysis is sound (Reviewer yuBa), the empirical studies are adequate and reasonable (Reviewer vtQs), and the writing is smooth and has a good storyline (Reviewer yuBa, vtQs, and QzdR). To the best of our efforts, we provided detailed responses to address the concerns raised by each reviewer in the following. Meanwhile, we carefully revised the paper according to the reviewers’ comments. We will incorporate all the feedback in the final version. Specifically, the main revisions we made are as follows. * We have added extra experiments to discuss the scalability on large-scale datasets and running efficiency of the proposed method (please see the Reply to Reviewers 6qZe and QzdR for details). * We have analyzed the time complexity of the proposed method (see the attached paragraph P1 below). The discussion shows that the complexity of the proposed method is comparable to mainstream GNN-based models. * We have added more qualitative experiments, i.e., more visualization of explanation results, including results on a real-world dataset (see the attached PDF). * We have illustrated the implementation details of our baselines, i.e., the GLAD methods with post-hoc explainers (see the attached paragraph P2 below). * We have added detailed explanations for our evaluation metrics, i.e., NX-AUC and EX-AUC (see the attached paragraph P3 below). * We have highlighted the comparisons between our method (SIGNET) and a representative self-interpretable GNN, GSAT (see the attached paragraph P4 below). 
**P1: Complexity Analysis of SIGNET.** Within this paragraph, we denote the average numbers of nodes and edges as $n$ and $m$ respectively, and denote the number of graphs and batch size as $N$ and $B$ respectively. At each training iteration, we first conduct DHT to obtain the dual hypergraph, which requires $\mathcal{O}(N(m+n))$. Then, the GNN-based extractor that calculates the sampling probabilities requires $\mathcal{O}(NL_1md_1+NL_1nd_1^2 + Nnd_1d_f)$ complexity, where $L_1$ and $d_1$ are the layer number and latent dimension of the extractor, respectively. The bottleneck subgraph extraction for two views requires $\mathcal{O}(N(m+n))$ in total. For the GNN and HGNN encoders, their time complexities are $\mathcal{O}(NL_2md_2+NL_2nd_2^2 + Nnd_2d_f)$ and $\mathcal{O}(NL_2nd_2+NL_2md_2^2 + Nnd_2d_{f*})$ respectively, where $L_2$ and $d_2$ denote their layer number and latent dimension. Finally, the Info-NCE loss requires $\mathcal{O}(NBd_2)$ complexity. To simplify the overall complexity, we denote the larger of $L_1$ and $L_2$ as $L$, and the larger of $d_1$ and $d_2$ as $d$. After ignoring the smaller terms, the overall complexity of SIGNET is $\mathcal{O}(NLd^2(m+n) + Nnd(d_f+d_{f*}) + NBd)$. **P2: Implementation of GLAD methods with post-hoc explainers.** Given a GLAD model and post-hoc explainer, at first, we train the GLAD model independently on the training set. After sufficient training, the GLAD model is able to map each input graph into a scalar, i.e., its anomaly score. To address the uncertainty of the anomaly score boundaries, we apply a linear scaling function to map the scores into the [0,1] range and then use a sigmoid function to convert each score into a probability for binary classification. Subsequently, we integrate the post-hoc explainer with the probability output of the GLAD model and optimize the explainer accordingly. 
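As an illustration of the score-to-probability mapping described in P2, here is a minimal sketch. The min-max form of the linear scaling, the sharpness parameter `k`, and the function name are our assumptions; the rebuttal only specifies a linear scaling to [0,1] followed by a sigmoid.

```python
import math

def scores_to_probs(scores, k=8.0):
    """Map raw anomaly scores to pseudo-probabilities so a post-hoc explainer
    can treat the GLAD model like a binary classifier.
    Step 1: linear (min-max) scaling into [0, 1].
    Step 2: sigmoid centered at 0.5; `k` controls sharpness (assumed)."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # guard against all-equal scores
    scaled = [(s - lo) / span for s in scores]
    return [1.0 / (1.0 + math.exp(-k * (s - 0.5))) for s in scaled]
```

Because the sigmoid is monotonic, this mapping preserves the ranking of anomaly scores while giving the explainer a bounded, classification-style output to attribute.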
**P3: Evaluation metrics.** In this paper, we use “explanation Area Under the Curve (AUC)” to evaluate the explanation performance, following previous works [19,20]. We employ both node-level and edge-level explanation AUCs for comparison (NX-AUC and EX-AUC for short, respectively). Specifically, we tackle the explanation problem by framing it as a binary classification task for nodes and edges. We designate nodes and edges inside the explanation subgraph as positive instances and the rest as negative. The importance weights generated by the explanation methods serve as prediction scores. An effective explanation method should assign higher weights to nodes and edges within the ground truth subgraphs compared to those outside. To quantitatively evaluate the performance, we use the AUC as the metric for this binary classification problem. A higher AUC indicates better performance in providing meaningful explanations. **P4: Comparison between GSAT and SIGNET** Connections: Both GSAT and SIGNET use the information bottleneck principle as the theoretical foundation of the explanation objective that extracts the explanation subgraph. Both of them adopt neural networks to parameterize the input graph and make the explanation differentiable, which is a common design among explainable GNNs. Differences: * Different targeted tasks: GSAT focuses on graph classification, where labels are available to train the interpretation module. Differently, SIGNET targets unsupervised GLAD, a more challenging task with no labels available during training. * Different theoretical framework: GSAT is designed based on the original information bottleneck framework, tailored to its targeted supervised setting. In contrast, SIGNET is based on the multi-view subgraph information bottleneck (MSIB) framework derived in this paper, specifically designed for unsupervised GLAD. * Different learning objectives: GSAT is trained using cross-entropy loss, a commonly used classification loss. 
In contrast, SIGNET is optimized using an Info-NCE loss, aiming to maximize the MI between each graph and its rationale subgraph. * Different graph views for graph learning: GSAT only considers the original view, while SIGNET considers both the original and DHT views.
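As a concrete illustration of the explanation-AUC computation described in P3: membership in the ground-truth subgraph gives the binary labels, the explainer's importance weights give the scores, and the AUC can be computed with the Mann-Whitney rank statistic. The function below is our own minimal sketch, not the papers' evaluation code.

```python
def explanation_auc(labels, weights):
    """AUC of the node/edge binary classification problem from P3.
    labels: 1 if the node/edge is in the ground-truth subgraph, else 0.
    weights: importance weights produced by the explanation method."""
    pos = [w for y, w in zip(labels, weights) if y == 1]
    neg = [w for y, w in zip(labels, weights) if y == 0]
    # fraction of (positive, negative) pairs ranked correctly (ties count 0.5)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A good explainer puts higher weights inside the ground-truth subgraph.
labels  = [1, 1, 0, 0, 0]
weights = [0.9, 0.8, 0.7, 0.2, 0.1]
assert explanation_auc(labels, weights) == 1.0
```

The same function serves for NX-AUC (nodes) and EX-AUC (edges); only the instances fed in differ.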
NeurIPS_2023_submissions_huggingface
2023
Category-Extensible Out-of-Distribution Detection via Hierarchical Context Descriptions
Accept (poster)
Summary: The paper presents a new technique to improve OOD, as well as IID, prediction for a pre-trained Language Vision Model (LVM). The heart of the proposed method is to learn both an ID context (i.e. perceptual context) and an OOD context (i.e. spurious context) to improve the classification of both ID classes and OOD samples. The paper proposes a new loss function to combine the ID and OOD losses together and a sampling strategy to produce spurious samples. Strengths: The strengths of the paper are its empirical validation and its originality in combining insights from different works. The proposed technique shows strong empirical performance, including ablation testing, on the main OOD tasks and measures. The quality of this validation supports well the claims made in the article about the need to consider both ID and some kind of spurious context when doing open-world detection. The paper also combines elements from previous work, like the idea of having perturbed examples from VOS and NPOS and the idea of learning prompts in the text vector space from Learning to Prompt, into one framework. Weaknesses: Despite its strong empirical validation, the paper does have some weaknesses in its clarity and novelty. Beginning with novelty, the proposed technique of CATEX seems to be only an incremental improvement on NPOS (e.g., creating perturbed examples as part of training for OOD) and directly uses the technique from Learning to Prompt with only a change in the loss function. In essence, the paper doesn't present any insight beyond what the NPOS/VOS papers already presented, namely that including perturbed samples in learning for an LVM can help with OOD performance. If the paper were more explicit, especially in the methods and discussion sections, about how the proposed method differs from previous ones, it would help establish the novelty.
For example, I believe both VOS and NPOS train the underlying CLIP model, while the proposed technique of CATEX uses the Learning to Prompt technique of training a lightweight layer on top of the CLIP model. Such a change seems to strike a balance between being good at ID tasks and not distorting the feature space. In terms of clarity, there is not enough detail in the methods section both to deal with the aforementioned novelty issues and to fully understand the training process and the perturbation guidance. For the perturbation guidance, lines 174-176 make it sound like it is changing the actual words or tokens (as is done in the Kwon et al. article with masked language modeling), rather than the embeddings of the text, as is done in the Learning to Prompt article. If the proposed technique is actually masking the tokens, rather than changing the context vectors in the embedding space, then how is the training done to optimize the text, given that the method in Learning to Prompt only deals with a vector space? Also, does perturbation by masking tokens fully make sense? For example, is the perturbation of “a photo of a dog” to “a [MASK] of a dog” or “a photo of a [MASK]” really a meaningful perturbation for the spurious context? I wish there were something like a walkthrough example of the perturbation as well as some more explanation of the training method and the intuition behind the perturbation, to better understand the contribution of the work. Finally, there are a couple of areas where the writing could be improved. Some sentences throughout need to be proofread for grammar. For example, the last sentence of the abstract is a run-on sentence, and the sentence on lines 48-50 is unclear in what it is trying to say. -------- Following author's responses -------------- I believe the authors have significantly addressed my concerns about the perturbation guidance.
They have both added additional explanations and done some additional experimentation around ideas like how many tokens to mask. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How does the proposed method find optimal perturbations of the textual input? How does the method decide which tokens or words to mask? 2. How could the proposed method be used when you have no labeled data (i.e. a true zero-shot setting) to improve performance? Or can it? For example, can perturbations be included at inference time – combined with the OOD scoring function – to do zero-shot labeling? 3. What is the performance of the proposed method versus CoCoOp? While the paper does investigate the performance of its proposed method versus CoOp, and rightly concludes it does better with OOD, it does not evaluate the proposed method against the newer version of the CoOp method (i.e. CoCoOp), which was explicitly designed to deal with the OOD issues of CoOp. -------- Following author's responses -------------- I believe the authors have answered all three of my questions. In particular, I found their answer to question 2, on how to use the proposed method to improve the zero-shot performance of LVMs, to be very interesting and notable. I also find it quite interesting that there was such a performance gap between the proposed method and CoCoOp. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed nearly all of their limitations, including those dealing with societal impact.
The only limitation they have not addressed is that the proposed method still requires labeled (or captioned) data in order to work and cannot work in a zero-shot setting as was the promise of CLIP. I welcome the author’s reply on this, as I am not sure if the proposed method couldn’t be used without labeled data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are pleased that the reviewer recognizes our work's originality and empirical validity. We appreciate your experienced comments and valuable suggestions, which are addressed below in detail: > Q1: No insights beyond NPOS/VOS. Clarifying the method difference helps establish novelty. Thanks for your kind comment. Below, we provide at least three new insights. 1. **A large-scale pretrained VLM itself provides significant advances in OOD detection**. Huge corpora bring powerful feature representation capability, which is the key to detecting diverse unknown OOD samples beyond the limited ID datasets. Furthermore, massive paired image-text data builds comprehensive multi-modal prior knowledge, offering valuable extra information for OOD detection. 2. **Previous methods have not fully utilized the VLM's advances**. NPOS/VOS optimize a more compact ID feature space via **random** OOD syntheses, but the generalized feature space is distorted and the image-text prior knowledge is not utilized. Besides, vanilla CoOp lacks explicit descriptions for OOD samples, which hinders further OOD detection. 3. **Our method leverages the VLM's advances via hierarchical contexts**. Freezing CLIP's encoders to maintain the generalized feature space, we learn two contexts (i.e., perceptual and spurious) to hierarchically describe each ID category's boundary. Moreover, to exploit the VLM's prior knowledge, a novel textual-perturbation-guided approach is developed to generate OOD samples for learning the contexts. Hence, this paper is specifically designed for OOD detection with VLMs, rather than being an incremental improvement on NPOS. > Q2: About the training process and perturbation guidance. How is the text optimized? Does the perturbation of “a [MASK] of a dog” or “a photo of a [MASK]” make sense? Thanks for your comment. First, in the training procedure, the perturbation itself only serves to generate OOD samples and no gradient is involved.
Detailed diagrams (**Figure A1-A2**) are provided in the uploaded PDF file to illustrate the whole training procedure. Second, rather than being applied to the initial prompt template "a photo of a", the perturbation is applied to the tuned contexts [V1][V2]...[Vm]+[CLS], where [CLS] is the fixed class name (e.g., dog or goldfish). As each word [Vi] has a certain semantic meaning $^{[1]}$, let us view the learned contexts as if they were (even though they are not) "yellow fish long flowing tail" for [CLS] as [*goldfish*]. Thus, the perturbations "[MASK] fish long flowing tail" and "yellow fish long flowing [MASK]" both make sense: each eliminates some kind of visual feature of [*goldfish*], and generally describes another, spurious OOD category against the certain ID category. [1] Kaiyang Zhou. Learning to Prompt for Vision-Language Models, IJCV 2022. > Q3: The writing can be improved. Lines 48-50 are unclear. Prompted by this, we have done a thorough proofreading pass to improve readability. In particular, the core idea conveyed by lines 48-50 is that even though fine-tuning CLIP's encoder may boost the performance (e.g., classification accuracy on ImageNet), CLIP's generalized feature space is destroyed, so test data shifts (e.g., ImageNet-R) will cause severe performance degradation. > Q4: How to find optimal perturbations of the textual input? A thoughtful comment! Indeed, deciding the optimal word to perturb is challenging, especially when the learned word embeddings do not correspond to actual words in natural language. Nevertheless, we have taken a simple step toward finding the "optimal" word/token in the embedding space. After computing the prototype vector by averaging the 16 learned words, we take the most distant (denoted as *MaxDist*) or closest (denoted as *MinDist*) word for perturbation guidance. However, as shown below, the results even get worse.
|Method|FPR95↓|AUROC↑| |:----:|:----:|:----:| |MaxDist|10.42|97.75| |MinDist|10.86|97.73| |Random|**10.31**|**97.82**| So, we argue that randomly choosing the words to perturb at each iteration is still an effective way. After several iterations, we can statistically perturb every word to guide the spurious sample generation, covering the optimal situation. > Q5: How to deploy without labeled data? Can perturbations be used to do zero-shot labeling? An insightful suggestion! Our method actually CAN be used for zero-shot labeling. For ImageNet-1K, since CLIP's default prompt "a photo of a [CLS]" contains no visual info, we adopt the visual description from LLM $^{[2]}$ as the initial prompt input (denoted as VisDesc). Then, we randomly mask the visual description to simulate spurious contexts, and adopt Eq.(5) in our manuscript to regularize image-text similarities for classification. The results shown below indicate our method boosts zero-shot classification without any training cost. |Method|Prompt|ACC-Top1|ACC-Top5| |:----:|:------:|:-----:|:----:| |CLIP|A photo of a goldfish.|63.50|88.99| |VisDesc|A yellow fish with a long flowing tail is goldfish.|65.47|90.14| |Ours|(plus) A [MASK] fish with a long [MASK] [MASK] is goldfish.|**65.83**|**90.36**| It provides another new insight that in the category-extensible setting or a true zero-shot classification scenario, explicitly constructing spurious contexts can perform the one-class OOD detection task, as other categories can also be viewed as OOD for a certain ID category. [2] Sachit Menon, Visual Classification via Description from Large Language Models. ICLR 2023. > Q6: Comparison to CoCoOP? On ImageNet-100, CoCoOp is surprisingly much worse than our method. The reason may be that during training, CoCoOp only takes ID images as context conditions, while neither OOD samples nor OOD contexts are involved. 
Thus, when employed in the open world and asked to reject OOD samples not belonging to any ID category, CoCoOp failed. The image conditions even aggravate the overconfidence in OOD samples. |Method|ACC↑|FPR95↓|AUROC↑| |:----:|:--:|:---:|:----:| |CoCoOp|92.90|39.22|92.69| |Ours|**94.12**|**10.31**|**97.82**| --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: I really appreciate the authors' addressing of my questions and attempting to better elucidate how the perturbation is done. In particular, I am rather impressed by the performance of the proposed method against CoCoOp (which is supposed to handle OOD better) and how the method can be used to improve zero-shot image classification. This latter answer, in particular, raised my estimation of this paper rather significantly. I do still wish there were some kind of easier-to-follow flow chart or even pseudo-code on how to implement the perturbation of CATEX for use in training and zero-shot classification, though. Given the authors' responses, I have raised my rating and do believe the paper should be given serious consideration for acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for evaluating our work and carefully reading our response. Your constructive comments and insightful suggestions do make our work much stronger, and we are really encouraged that you raised your rating of our paper. On the other hand, we are sorry that in the discussion stage it is not allowed to provide new charts or figures to illustrate our method. Therefore, to answer your follow-up questions on how our perturbation is implemented in (1) training our CATEX and (2) performing zero-shot classification, we provide the pseudo-code below: > 0. How is the perturbation implemented? The perturbation itself is simple. For a context $\mathbf{v}$, the perturbation can be expressed as: 1. Take the $m$ (e.g., 16) learned/pre-defined tokens/words $\mathbf{v} = [v_1;v_2;\cdots;v_m]$ 2.
Generate masking/noise perturbations $u$ 3. Randomly perturb one (or more) token/word $\mathring{\mathbf{v}} = [v_1;u;\cdots;v_m]$ The above process is formulated as $\mathring{\mathbf{v}} = \mathcal{P}(\mathbf{v})$. With the text-encoder $\mathcal{T}$ and class name $[\texttt{CLS}]$, the perturbed text embedding is encoded as $\mathring{\mathbf{w}} = \mathcal{T}(\mathring{\mathbf{v}}, [\texttt{CLS}])$. > 1. How is the perturbation implemented in the vanilla training procedure? Our method trains a pair of perceptual context $\mathbf{v}^p$ and spurious context $\mathbf{v}^s$ for each category, with real in-distribution samples $\lbrace\mathbf{x_i}\rbrace$ and generated OOD samples (guided by the perturbation) $\lbrace\tilde{\mathbf{z}}_j\rbrace$. The training procedure can be expressed as: 1. Perturb the perceptual context $\mathring{\mathbf{v}}^p = \mathcal{P}(\mathbf{v}^p)$ 2. Generate random OOD samples $\lbrace\tilde{\mathbf{z}}_j^\prime\rbrace = \mathcal{G}(\lbrace\mathbf{x_i}\rbrace)$ 3. Use perturbation to select OOD samples $\lbrace\tilde{\mathbf{z}}_j\rbrace = \mathcal{F}(\lbrace\tilde{\mathbf{z}}_j^\prime\rbrace, \mathring{\mathbf{v}}^p)$ via Eq.(4) 4. Encode text embeddings $\mathbf{w}^p = \mathcal{T}(\mathbf{v}^p)$, $\mathbf{w}^s = \mathcal{T}(\mathbf{v}^s)$ 5. Train with loss functions $\mathcal{L}(\mathbf{w}^p, \mathbf{w}^s, \lbrace\mathbf{x_i}\rbrace, \lbrace\tilde{\mathbf{z}}_j\rbrace)$ as Eq.(1) and Eq.(3) The random OOD sample generator $\mathcal{G}$ can be distance-based $^{[1]}$, density-based $^{[2]}$, etc. Kindly note that the perturbation guidance itself does not involve gradient back-propagation, and we only optimize perceptual contexts $\mathbf{v}^p$ and spurious contexts $\mathbf{v}^s$. > 2. How is the perturbation implemented to help zero-shot classification? 
For zero-shot classification, we apply the perturbation to pre-defined category descriptions (viewed as the perceptual context $\mathbf{v}^p$) to simulate the spurious context $\hat{\mathbf{v}}^s$. Given an input image $x$, the classification process can be expressed as: 1. Perturb the perceptual context $\hat{\mathbf{v}}^s = \mathcal{P}(\mathbf{v}^p)$ 2. Encode text embeddings $\mathbf{w}^p = \mathcal{T}(\mathbf{v}^p)$, $\hat{\mathbf{w}}^s = \mathcal{T}(\hat{\mathbf{v}}^s)$ 3. Get the initial image-text similarity $s = \langle \mathbf{w}^p, x \rangle$ 4. Compute the regularization term $\gamma = \mathcal{R}(\mathbf{w}^p, \hat{\mathbf{w}}^s, x)$ via Eq.(5) 5. Compute the regularized similarity $r = s \times \gamma$ 6. Determine the category $k = \arg\max_{k} \lbrace r_k \rbrace$ We will add more detailed pseudo-code and flow charts in the revised paper. For reproducibility, the source code will be released upon acceptance. And we are also happy to answer any remaining or follow-up questions to clarify our method. [1] Leitian Tao. Non-Parametric Outlier Synthesis, ICLR 2023. [2] Xuefeng Du. VOS: Learning What You Don't Know by Virtual Outlier Synthesis, ICLR 2022. --- Best regards, Authors
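The six zero-shot steps above could be sketched as toy code. Everything here is a stand-in we invented for illustration: the encoder, the similarity, and especially the regularizer (Eq.(5) is not reproduced in this thread), so this only shows the control flow, not the paper's scoring rule.

```python
import random

def perturb(context, mask_token=0.0):
    """Step 1: randomly replace one learned token with a [MASK] stand-in."""
    i = random.randrange(len(context))
    return context[:i] + [mask_token] + context[i + 1:]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(x, perceptual_contexts, encode, regularize):
    """Steps 2-6 for one image x over all categories."""
    scores = []
    for v_p in perceptual_contexts:
        v_s = perturb(v_p)                      # step 1: simulated spurious context
        w_p, w_s = encode(v_p), encode(v_s)     # step 2: text embeddings
        s = dot(w_p, x)                         # step 3: raw image-text similarity
        gamma = regularize(w_p, w_s, x)         # step 4: stand-in for Eq.(5)
        scores.append(s * gamma)                # step 5: regularized similarity
    return max(range(len(scores)), key=scores.__getitem__)  # step 6: argmax
```

With an identity "encoder" and a no-op regularizer, `classify` reduces to plain nearest-prompt classification; the interesting behavior comes entirely from the real Eq.(5) regularizer.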
Summary: The paper proposes a method to incorporate perceptual context and spurious context to handle the OOD detection problem. The experimental results seem quite promising. Strengths: 1. The results of the proposed method are obviously better than those of previous OOD detection methods; 2. The proposed method is quite interesting, and I think it can be extended to other areas involving large-scale vision-and-language models. Weaknesses: 1. The presentation of this paper is not very clear. For example, the authors mention "label bias" in the contributions part, but there is no explicit explanation of this so-called label bias, which I believe needs more clarification; 2. The motivation for the proposed hierarchical context mechanism is not very clear. Why can a single context not produce a precise classification boundary? Moreover, the spurious samples w.r.t. a specific ID category can have large variance, so can a single spurious context encode such large intra-class variance? 3. It would be better if the authors could provide some visualizations of the samples generated using the perturbation guidance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Kindly refer to the weaknesses part Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors could test to what extent fine-tuning the encoders from CLIP can improve performance for OOD detection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted that the reviewer finds our work interesting and transferable to other areas involving large-scale VLMs. We hope the detailed responses below can address your concerns. > Q1: The presentation of this paper is not very clear. For example, the mentioned "label bias" in the contributions part needs more clarification. We are sorry for that. The term "label bias" basically means the overconfidence in predicting unknown samples into known categories $^{[1][2]}$. To avoid ambiguity and reduce the burden of understanding, we will change the term "mitigate label bias" to the more direct "detect OOD samples" in our contribution claim. In addition, we will do a thorough proofreading pass to improve readability and rigor. [1] Anh Nguyen. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. CVPR 2015. [2] Boxi Cao. Knowledgeable or educated guess? revisiting language models as knowledge bases. ACL 2021. > Q2: The motivation for the proposed hierarchical context mechanism is not very clear. Why can a single context not produce a precise classification boundary? Moreover, the spurious samples w.r.t. a specific ID category can have large variance, so can a single spurious context encode such large intra-class variance? Actually, a single perceptual context CAN produce a precise classification boundary in the closed set, but it CANNOT produce the precise category boundary in the open set. As a result, the spurious OOD samples (similar to a certain in-distribution category) will lead to overconfident predictions by the single perceptual context. Therefore, we propose to learn the spurious context to help the perceptual context establish the precise category boundary. Besides, we recognize that for each ID category (e.g., cat), the spurious OOD samples are diverse (e.g., panthers, lions, etc).
Thus, we aim to describe only the spurious samples surrounding the ID category, rather than the whole OOD space (which is too complicated). Intuitively, using more spurious contexts ($\mathbf{w}_k^s$) should describe the category boundary better. To verify this, we have tested the number of spurious contexts for each ID category (taking ImageNet-100 as the ID dataset), and the results are shown below: | Spurious Context Number | FPR95↓ | AUROC↑ | |:-----------------------:|:------:|:------:| | 1 | 10.31 | 97.82 | | 2 | **10.21** | 97.86 | | 4 | 10.27 | **97.88** | | 8 | 10.25 | 97.86 | This implies that using more spurious contexts only leads to a 0.1% gain in performance. The reason may be that the 2 or more learned spurious contexts are too redundant without any constraints. To alleviate this problem, we simply add an orthogonal constraint (making the similarities between every two spurious contexts close to zero, denoted as *+orth*), and the OOD detection performance is significantly boosted: | Spurious Context Number | FPR95↓ | AUROC↑ | |:-----------------------:|:------:|:------:| | 1 | 10.31 | 97.82 | | 2 + orth | 10.17 | 97.86 | | 4 + orth | 9.89 | **97.89** | | 8 + orth | **9.76** | 97.84 | Therefore, how to effectively and efficiently leverage more spurious contexts to better describe the category boundary deserves further exploration, and we view it as future work. We will update the experiments and discussions. > Q3: It would be better if the authors could provide some visualizations of the samples generated using the perturbation guidance. Thanks for such a constructive suggestion. We have provided the visualizations in **Figure A1** in the uploaded PDF file. We hope it can better illustrate our approach. > Q4: The authors could test to what extent fine-tuning the encoders from CLIP can improve performance for OOD detection.
We have tested finetuning CLIP's encoders in the same category-extensible setting as Table 3 in our manuscript (separately training on two ImageNet-100 subsets (IN100-I, IN100-II), while directly testing on the merged ImageNet-200 (IN200)). Following NPOS $^{[3]}$, we only train the encoder's last two blocks, and the results are shown below. | | IN100-I | | | IN100-II | | | IN200 | | | |:--------:|:-------:|:------:|:------:|:--------:|:------:|:------:|:-----:|:------:|:------:| | FineTune | ACC↑ | FPR95↓ | AUROC↑ | ACC↑ | FPR95↓ | AUROC↑ | ACC↑ | FPR95↓ | AUROC↑ | | No | 94.12 | 10.31 | 97.82 | 94.42 | 7.91 | 98.31 | 89.61 | 13.13 | 97.19 | | Yes | 95.24 | 10.55 | 97.77 | 95.16 | 9.56 | 97.98 | 89.29 | 14.71 | 96.89 | Specifically, encoder-finetuning can indeed improve the in-distribution classification accuracy by 1% on each subset, but the OOD detection performance (i.e., FPR95 and AUROC) decreases. Moreover, when testing on the merged union set, the model obtains lower ID classification accuracy with worse OOD detection performance. This is consistent with our motivation that freezing the encoders of large-scale vision-language models is necessary to describe precise category boundaries. [3] Leitian Tao. Non-Parametric Outlier Synthesis, ICLR 2023. --- Rebuttal Comment 1.1: Comment: The authors' response fully addresses my concerns, and the added experiments further demonstrate the value of this work. So I have decided to raise my rating to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for your immediate feedback Comment: We would like to express our sincere gratitude for your immediate feedback. And we really appreciate that you raised your rating of our paper. Your valuable comments do make our work much stronger. Best regards, Authors
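The orthogonal constraint from the earlier reply (pairwise similarities between a category's spurious contexts pushed toward zero, the "*+orth*" variant) might look like the following penalty. This is our own sketch; the actual loss weighting and similarity used in the rebuttal experiments are not specified.

```python
def orthogonality_penalty(contexts):
    """Sum of squared pairwise cosine similarities between K spurious
    contexts; zero exactly when all contexts are mutually orthogonal.
    Added to the training loss, it discourages redundant contexts."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)
    k = len(contexts)
    return sum(cos(contexts[i], contexts[j]) ** 2
               for i in range(k) for j in range(i + 1, k))

assert orthogonality_penalty([[1.0, 0.0], [0.0, 1.0]]) == 0.0  # orthogonal pair
assert orthogonality_penalty([[1.0, 0.0], [1.0, 0.0]]) == 1.0  # fully redundant pair
```

Squaring the cosine makes the penalty indifferent to sign, so anti-parallel contexts are also treated as redundant.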
Summary: This paper contributes a new method for OOD detection by learning precise category boundaries. Specifically, the category boundaries are defined by one perceptual context and one spurious context; these two contexts are learned text embeddings for a frozen CLIP model, and the spurious context is learned by perturbing the perceptual context. Experiments on large-scale OOD detection benchmarks show the effectiveness of the proposed method. Strengths: 1. The experiments are comprehensive. 2. Because the proposed method does not change the parameters of the frozen CLIP model, it is shown to be more generalizable under multiple category-extended scenarios. 3. The writing of the paper is clear. Weaknesses: 1. The spurious context is learned by perturbing the perceptual context, and the perturbation is done by changing one word embedding; this could also be done by changing multiple word embeddings, but there are no ablations for this. 2. There is one paper [R1] that explores a similar idea of spurious context (termed reciprocal points in [R1]), which I think should be cited and discussed. [R1] Learning Open Set Network with Discriminative Reciprocal Points, ECCV 2020. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. I don't quite understand why the perceptual context is called hierarchical. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are really encouraged that the reviewer thinks our method makes sense and the experiments are comprehensive. We appreciate your spot-on summary and constructive comments and suggestions, which we address below: > Q1: When learning spurious contexts, the perturbation could also change multiple word embeddings. Thanks for the suggestion. In fact, in Section 3.2 of the supplementary materials we have demonstrated that perturbing one word/token in the perceptual context (masking being one perturbation approach) is **effective** in guiding the OOD synthesis. Moreover, we have conducted a series of ablations to study how the masking ratio influences the final OOD detection performance. Taking ImageNet-100 as the ID dataset, we learn 16 words/tokens for each category, and randomly perturb 1/16 to 16/16 words (the *classname* is always preserved) to guide the OOD synthesis for training. The results are shown below: | Mask Ratio | FPR95↓ | AUROC↑ | |:----------:|:------:|:------:| | 1/16 | 10.31 | **97.82** | | 2/16 | **10.27** | 97.81 | | 4/16 | 10.47 | 97.78 | | 8/16 | 11.02 | 97.73 | | 16/16 | 11.70 | 97.62 | This indicates that masking 1~2 words/tokens is **effective enough** for perturbation guidance. Masking 4 or more words even leads to performance degradation, which means severely perturbed contexts may choose noisy OOD candidates (e.g., random noise), making the learned spurious context unable to capture the true spurious OOD samples and further describe the category boundary. We will add the ablation studies and discussions to the manuscript. > Q2: Reciprocal Points Learning (RPL)$^{[1]}$ explores a similar idea of spurious context, which should be cited and discussed. Thanks for your constructive suggestion. We will add the discussion and cite it in the manuscript.
The similarity between our method and Reciprocal Points Learning (RPL)$^{[1]}$ and its sequel ARPL $^{[2]}$ is that both explicitly learn spurious contexts / reciprocal points beyond the ID category to deal with unknown samples in the open world. Below, we briefly discuss the differences from two main aspects: 1. The **motivations** of our method and RPL/ARPL are different. RPL/ARPL aim at modeling the whole complementary feature space against a certain ID category for a visual classifier. In contrast, our paper focuses on leveraging pretrained vision-language models to capture the spurious OOD samples surrounding a certain ID category, and hierarchically describe the category boundary. 2. The **approaches** of our method and RPL/ARPL are different. RPL/ARPL essentially learn a more compact in-distribution feature space via the constraints from reciprocal points, which is more similar to NPOS $^{[3]}$ and VOS $^{[4]}$. In contrast, our method freezes the model's parameters to maintain the generalized feature space, and learns the spurious contexts in a specially-designed prompt-tuning way. [1] Guangyao Chen. Learning Open Set Network with Discriminative Reciprocal Points, ECCV 2020. [2] Guangyao Chen. Adversarial Reciprocal Points Learning for Open Set Recognition, TPAMI 2021. [3] Leitian Tao. Non-Parametric Outlier Synthesis, ICLR 2023. [4] Xuefeng Du. VOS: Learning What You Don't Know by Virtual Outlier Synthesis, ICLR 2022. > Q3: I don't quite understand why the perceptual context is called hierarchical. We are sorry for that. Actually, the perceptual context together with the spurious contexts are called hierarchical contexts. In our paper, the term "hierarchical" does NOT mean a taxonomic hierarchy, e.g., a fish node/category with two leaf nodes/subcategories (goldfish and lionfish) in the ImageNet hierarchy.
Instead, our "hierarchical" term refers to a mathematical/logical hierarchy, that is, first using the perceptual context to classify the samples into different categories (different colors in Fig.1 (a)), then using the spurious context for each category to further determine whether the sample truly belongs to this category or just comes from out-of-distribution (dark regions in Fig.1 (b)). In the manuscript, we will make a more rigorous definition of the term "hierarchical" with more appropriate descriptions. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I would like to thank the author for their rebuttal. My concerns are largely resolved. I would strongly recommend paraphrasing the usage of the term hierarchical, as I see multiple reviewers raising this question. --- Reply to Comment 1.1.1: Title: Thanks for the constructive feedback Comment: We sincerely thank the reviewer again for evaluating our work and providing constructive feedback. We did neglect rigorous definitions of the term "hierarchical" in the initial manuscript. In the revised version, we will paraphrase the usage of "hierarchical" in the introduction/abstract section. --- Best regards, Authors
Summary: This paper proposes a framework for detecting out-of-distribution (OOD) samples using hierarchical descriptions: perceptual and spurious contexts. The authors consider a category-extensible setup where categories can merge hierarchically. The proposed approach is evaluated by considering ImageNet as the in-distribution data and iNaturalist, SUN, Places, and Texture as OOD datasets. Strengths: 1. Considering image-text models to learn precise class boundaries is interesting and effective. 2. The authors performed extensive experiments on relevant benchmarks. 3. Results are state of the art. Weaknesses: 1. It is not clear from the introduction what the hierarchical contexts are and how they affect OOD detection in a category-extensible way. Precisely, how do these contexts cause Fig. 1(a) to get updated to Fig. 1(b)? 2. The authors talk about learning precise class boundaries. This is perhaps more suitable for novel class detection, i.e., open-set learning, than OOD detection. For example, the same class 'Car' can be in-distribution in a sunny environment and OOD in a rainy environment. How does the proposed approach address this scenario? 3. Lines 150-152: how is the combination of the contexts implemented? It is not clear from the description. 4. Section 3.2 and Fig. 3: how are the spurious samples generated and perturbed? Are the perturbations always random or informed by text cues? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please address the questions in the 'Weaknesses' section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad that the reviewer considers our method to be interesting and effective. However, we are sorry that our imperfect presentation caused confusion about the mechanism and procedure of our method. With the illustration figures in our uploaded PDF file, we hope the following responses can successfully address your concerns. > Q1: It is not clear from the introduction what the hierarchical contexts are and how they affect OOD detection in a category-extensible way. Precisely, how do these contexts influence Fig. 1(a) to get updated to Fig. 1(b)? In our paper, the term "hierarchical" does NOT mean a taxonomic hierarchy, e.g., a fish node/category with two leaf nodes/subcategories (goldfish and lionfish) in the ImageNet hierarchy. Instead, our "hierarchical" term refers to a mathematical/logical hierarchy, that is, first using the perceptual context to classify the samples into different categories (different colors in Fig.1 (a)), then using the spurious context for each category to further determine whether the sample truly belongs to this category or just comes from out-of-distribution (dark regions in Fig.1 (b)). As illustrated in Fig.4 (c-d), the learned hierarchical contexts (i.e., perceptual and spurious contexts) are essential for distinguishing base/novel classes and ID/OOD samples in a category-extensible way. Besides, we have revised the figure and caption in the uploaded PDF file (**Figure A3**), which may be clearer and more comprehensible. > Q2: Authors talk about learning precise class boundaries. This is perhaps more suitable for novel class detection i.e., open-set learning than OOD detection. For example, the same class 'Car' can be in-distribution in a sunny environment and OOD in a rainy environment. How does the proposed approach address this scenario? Thanks for pointing out this problem. 
Indeed, in the early stage, the task definitions were confusing, but nowadays out-of-distribution detection (OOD) and open-set recognition (OSR) tend to be a unified scenario $^{[1][2][3]}$. Here is a brief summary of the two tasks. Traditionally, the OSR task aims at classifying samples from predefined classes and identifying the remaining samples as unknown, while the initial OOD task only focuses on detecting and rejecting the unknown samples. However, modern OOD methods simultaneously take in-distribution classification and out-of-distribution detection into account. Hence, in today's community, OOD and OSR are essentially performing the same task, and the only difference lies in the evaluation settings $^{[3]}$. So one may simply view OOD and OSR as the same scenario, in which precise class boundaries play the key role. Besides, in a general OOD detection setting, when defining 'car' as an in-distribution category, cars in sunny or rainy environments will exhibit a large intra-class variance. This is indeed a challenging problem, and our solution is to freeze the encoders of CLIP to maintain their generalized representation capability and overcome this variance. If we misunderstood or have not addressed your concern, please feel free to inform us anytime. [1] Yang Jingkang. Generalized Out-of-Distribution Detection: A Survey. arXiv, 2021. [2] Yang Jingkang. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection. NeurIPS 2022. [3] Jun Cen. The Devil is in the Wrongly-classified Samples. ICLR 2023. > Q3: Line 150-152: how is the combination of the contexts implemented? As for the combination of the perceptual and spurious contexts, the methodological mechanism is as we discussed in Q1; for the mathematical formulation, please refer to Eq. (1-3) (for training) and Eq. (5) (for inference). For example, in Eq. (5), we use the perceptual context $w_k^p$ to compute the initial image-text similarity $s_k$ with the $k$-th ID category. 
Then the spurious context $w_k^s$ is adopted to calculate the regularization term $\gamma_k$, which represents the probability that this input image truly belongs to the $k$-th ID category or comes from out-of-distribution. Finally, we combine the two contexts by multiplying the initial similarity $s_k$ by the regularization term $\gamma_k$, deriving the final score $r_k=s_k \times \gamma_k$ for ID classification and OOD detection. Besides, we have illustrated the whole procedure in **Figure A2** in the uploaded PDF file. If there is still something unclear, please let us know and we will be happy to answer your further questions to the best of our ability. > Q4: Section 3.2 and Fig. 3: how are the spurious samples generated and perturbed? Are the perturbations always random or informed by text cues? We first randomly generate a set of candidate samples in the image feature space, and then use the perturbed text features to select valid samples. The selected samples are the so-called spurious samples. Indeed, the perturbations are always random, because during the training iterations, random perturbation is effective enough to guide the OOD sample generation, as illustrated in our supplementary material. For a clear illustration, we provide the **visualizations** of the initial OOD **sample generation**, our **perturbation guidance**, and further **training goals** in **Figure A1** in the uploaded PDF file. We hope it can help in understanding our method.
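[Editorial sketch] The score combination described above can be illustrated in a few lines of NumPy. This is our own minimal sketch, not the authors' released code: the exact form of $\gamma_k$ in their Eq. (5) is not reproduced here, so we assume a simple pairwise softmax between the perceptual and spurious similarities; all shapes and the temperature `tau` are likewise assumptions.

```python
import numpy as np

def ood_scores(img_feat, perceptual, spurious, tau=0.01):
    """Illustrative scoring: combine per-category image-text similarity s_k
    with a spurious-context regularizer gamma_k, giving r_k = s_k * gamma_k.

    img_feat:   (d,)   L2-normalized image feature
    perceptual: (K, d) L2-normalized perceptual context embeddings w_k^p
    spurious:   (K, d) L2-normalized spurious context embeddings w_k^s
    """
    s = perceptual @ img_feat   # s_k: similarity to each ID category
    sp = spurious @ img_feat    # similarity to each spurious context
    # gamma_k (ASSUMED form): probability the sample is truly ID for
    # category k rather than a nearby spurious OOD sample
    gamma = np.exp(s / tau) / (np.exp(s / tau) + np.exp(sp / tau))
    return s * gamma            # r_k = s_k * gamma_k
```

Since $\gamma_k \in (0, 1)$, a sample lying closer to a category's spurious context than to its perceptual context gets its score suppressed, which matches the described ID/OOD behavior.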
Rebuttal 1: Rebuttal: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are encouraged to see **ALL** reviewers find our method **interesting** and **effective** (ufZG, rNwN, oU43, R97w, 194m), **comprehensive** experiments and **promising** results (ufZG, rNwN, oU43, R97w, 194m), and **transferable** to other areas involving large-scale vision-language models (R97w). However, reviewers still have some extra concerns, which mainly focus on the following aspects: * insufficient presentation on the term definition, method operation, and novelty clarification; * lack of additional experiments and ablation studies; * extension and combination with zero-shot applications, future explorations, etc. We respond to each reviewer's comments in detail below, and here is a brief summary of the main discussions and experiments: * more precise explanations of our proposed perceptual context, spurious context, hierarchical mechanism, etc; * walkthrough examples to illustrate the perturbation process, visualize the generated samples, and demonstrate the full training procedure (in the **uploaded PDF file**); * extensive ablation studies and discussions on the mask ratio for Perturbation Guidance, the number of learnable spurious contexts, as well as comparison and discussion with relevant works like RPL and CoCoOp; * new insights and validations for zero-shot applications, including the combination with large language models (like GPT). We thank the reviewers' valuable suggestions again, and we believe those make our paper much stronger. Pdf: /pdf/ae099f038974f6cd7226904df7ab8c3795bd1e74.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper presents a solution for the task of out-of-distribution detection with a hierarchical context. Specifically, it introduces the concept of a spurious context as negative descriptions to learn the distribution of unseen categories implicitly. When equipped with perceptual context, it can effectively detect OOD samples. Experimental results verify its robustness and effectiveness on several benchmarks. Strengths: - The proposed method is simple yet effective. Instead of finding a negative context for all categories, it proposes to adopt one for each category. Besides, the approach proposed to find negative samples is novel and intuitive. - The writing structure is clear and easy to follow. However, I find it hard to have an initial guess for the meanings of "perceptual context" and "spurious context" in the abstract section. - The experimental results are strong and solid. Besides, the ablation study in Tab.4 verifies the effectiveness of both proposed modules. Weaknesses: - The masking ratio in Section 3.2, as a very important factor, hasn't been studied. A too-small ratio may lead to some false negative candidates while too-large ratios can neglect false positive candidates. From this perspective, this hyper-parameter can be very sensitive, thus influencing performance vastly. - I doubt whether a single vector w^s_k for a spurious context can model the complex category boundary. Modeling the interior region of a category is intuitive as in the ideal case it can model a quasi-hypersphere. But for OOD samples surrounding a category, the manifold can be more complex. - Fig.4 (a-b), as the main visualization, is a bit confusing. It's hard to tell what it aims to show. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is the masking ratio in Perturbation Guidance? And is the performance sensitive to it? Where will the spurious context be located? How about plotting them in Fig 4(a-b)? 
Although there has been an illustrative plotting in Fig.1, what would it be like in TSNE visualization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are really encouraged that the reviewer recognizes our method to be simple, novel, intuitive, and effective. We thank the reviewer for the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns. > Q0: However, I find it hard to have an initial guess for the meanings of "perceptual context" and "spurious context" in the abstract section. Thanks for pointing this out. We did miss a concise and precise description in the abstract section. Specifically, we are planning to revise the corresponding sentences as follows: "Perceptual contexts perceive the inter-category difference (e.g., cats vs apples) for current classification tasks, while spurious contexts further identify spurious (similar but actually not) OOD samples for each single category (e.g., cats vs panthers, apples vs peaches)." We hope such a revision can help readers get into our paper faster. > Q1: The masking ratio in section 3.2 hasn't been studied. Thanks for the essential suggestion! In fact, in the supplementary materials we have demonstrated that perturbing (masking is one of the perturbing approaches) one word/token in the perceptual context is **effective** in guiding the OOD synthesis in Section 3.2. Moreover, we have conducted a series of ablations to study how the masking ratio influences the final OOD detection performance. Taking ImageNet-100 as the ID dataset, we learn 16 words/tokens for each category, and randomly perturb 1/16 to 16/16 words (the *classname* is always preserved) to guide the OOD synthesis for training. The results are shown below: | Mask Ratio | FPR95↓ | AUROC↑ | |:----------:|:------:|:------:| | 1/16 | 10.31 | **97.82** | | 2/16 | **10.27** | 97.81 | | 4/16 | 10.47 | 97.78 | | 8/16 | 11.02 | 97.73 | | 16/16 | 11.70 | 97.62 | It indicates that masking 1~2 words/tokens is **effective enough** for perturbation guidance. 
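[Editorial sketch] The random masking varied in this ablation could look roughly like the following. This is our own illustration, not the authors' implementation: the embedding shapes and the zero-out masking scheme are assumptions, and in the actual method the classname token is kept outside the maskable set.

```python
import numpy as np

def perturb_context(context_tokens, n_mask=1, rng=None):
    """Randomly mask n_mask of the learnable word embeddings of a
    perceptual context (the classname embedding is assumed excluded
    from context_tokens and therefore always preserved).

    context_tokens: (L, d) array of learnable word embeddings
    Returns the perturbed copy and the masked indices.
    """
    rng = rng or np.random.default_rng()
    L = context_tokens.shape[0]
    idx = rng.choice(L, size=n_mask, replace=False)  # which tokens to mask
    perturbed = context_tokens.copy()
    perturbed[idx] = 0.0  # zero-out is one possible masking scheme
    return perturbed, idx
```

With `n_mask=1` or `2` out of 16 tokens this corresponds to the best-performing rows of the ablation table above.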
Masking 4 or more words even leads to performance degradation, which means severely perturbed contexts may select noisy OOD candidates (e.g., random noise), making the learned spurious context unable to capture the true spurious OOD samples and thus describe the category boundary. We will add the ablation studies and discussions to the manuscript. > Q2: Whether a single vector $w_k^s$ for a spurious context can model the complex category boundary. We really appreciate the comment! We recognize that for each ID category (e.g., cat), the spurious OOD samples are diverse (e.g., panthers, lions, etc). Thus, we aim to only describe the spurious samples surrounding the ID category, rather than the whole OOD space (which is too complicated). According to common sense, it is intuitive to use more spurious contexts ($w_k^s$) to describe better category boundaries. To verify it, we have tested the number of spurious contexts for each ID category (taking ImageNet-100 as the ID dataset), and the results are shown below: | Spurious Context Number | FPR95↓ | AUROC↑ | |:-----------------------:|:------:|:------:| | 1 | 10.31 | 97.82 | | 2 | **10.21** | 97.86 | | 4 | 10.27 | **97.88** | | 8 | 10.25 | 97.86 | It implies that using more spurious contexts only leads to a 0.1% gain in performance. The reason may be that the 2 or more learned spurious contexts are too redundant without any constraints. To alleviate this problem, we simply add an orthogonal constraint (pushing the similarity between every pair of spurious contexts close to zero), and the OOD detection performance is significantly boosted: | Spurious Context Number | FPR95↓ | AUROC↑ | |:-----------------------:|:------:|:------:| | 1 | 10.31 | 97.82 | | 2 + orth | 10.17 | 97.86 | | 4 + orth | 9.89 | **97.89** | | 8 + orth | **9.76** | 97.84 | Therefore, how to effectively and efficiently leverage more spurious contexts to better describe the category boundary deserves further exploration, and we view it as our future work. 
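[Editorial sketch] The orthogonal constraint mentioned in this ablation can be written as a simple penalty on the pairwise cosine similarities between one category's spurious contexts. A minimal NumPy version, written by us under the assumption of a squared off-diagonal Gram penalty (the authors' exact loss is not shown):

```python
import numpy as np

def orthogonal_penalty(spurious):
    """Penalty that is zero when the M spurious contexts of one ID
    category are mutually orthogonal (pairwise cosine similarity 0).

    spurious: (M, d) spurious context embeddings for one category
    """
    W = spurious / np.linalg.norm(spurious, axis=1, keepdims=True)
    gram = W @ W.T                    # pairwise cosine similarities
    off_diag = gram - np.eye(len(W))  # drop the self-similarity terms
    return np.mean(off_diag ** 2)     # -> 0 as contexts become orthogonal
```

Added to the training objective with some weight, this would discourage the redundancy among multiple spurious contexts discussed above.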
We will update the experiments and discussions. > Q3: Fig.4 (a-b) as the main visualization, is a bit confusing. It's hard to tell what it aims to show. Where will the spurious context be located? How about plotting them in Fig 4(a-b)? Although there has been an illustrative plotting in Fig.1, what would it be like in t-SNE visualization? Fig.4 highlights the advantages of our method against competitors, including: 1. Fig.4 (a) indicates that finetuning the model's encoder will damage the generalized feature space, making the unseen OOD images indistinguishable from seen ID images. On the contrary, Fig.4 (b) shows our method maintains the feature-level separability by freezing the whole encoders. 2. Fig.4 (c-d) implies our learned spurious context (blue star in (d)) assists the perceptual context (yellow star) in better distinguishing the ID and OOD samples. We have revised the figure and caption in the uploaded PDF file (**Figure A3**), which may be clearer and more comprehensible. Besides, visualizing the spurious contexts in Fig.4 (a-b) would make the pattern too complicated to highlight the necessity of freezing the generalized feature space. Thus, we instead visualize the spurious context in Fig.4 (d) for clarity.
null
null
null
null
null
null
$p$-Poisson surface reconstruction in curl-free flow from point clouds
Accept (poster)
Summary: This content seems interesting. I like that the authors gave considerable thought to improving surface reconstruction from the perspective of vector field processing. The paper targets two challenging problems in surface reconstruction: (1) removing the requirement for surface normal, and (2) improving the over-smoothness. However, there are technical details that need to be addressed before I can accept it. Strengths: (1) The problems are challenging. (2) Methodology is novel. (3) Improvements are observed. Weaknesses: In a p-Poisson equation, you have the p-Laplacian instead of the Laplacian. This is equivalent to giving weights to the gradient of an implicit function. And this weight is based on the magnitude of the gradient itself. The geometric intuition behind this weight is not clearly stated. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: major: (1) Add a comparison for the learned vector field. From equation (8), your G plays the role of the vector field derived from the oriented point cloud in PoissonRecon. It might be beneficial to compare G with the vector field formulated by surface normal. (2) In section 3.2, you mentioned the goal is to progressively enlarge lambda to infinity. If so, have you tried getting rid of the first term of equation (8)? I don’t think the method will fall apart. You would only be downgrading from ScreenedPoissonRecon to PoissonRecon. (3) The result in Figure 3 is a bit confusing. What’s your sampling strategy? Are you sampling all the vertices from the mesh? Because from the Screwstar, it seems you are not sampling uniformly. Could that partially be the reason the SIREN has a broken surface? Can you try to uniformly sample the star and see if there’s any improvement in the completeness? (4) The result in Figure 7 (a) had some clear artifacts. Why did that happen? When p = 2, your energy looks almost the same as ScreenedPoissonRecon. 
However, when I ran ScreenedPoissonRecon on this exact model, I did not see this artifact. minor: (1) The statement “irrotational flow” only appeared once in the title. If you want to use the statement, you need to be consistent in the writing. At least mention and explain it in the introduction. (2) In Chapter 4.3, you used the subtitle “Effect of curl-free constraint” twice. I believe you meant to say “Effect of minimal surface area constraint” for the second one. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive comments. Below, we carefully address the reviewer's comments: **Q1. In a p-Poisson equation, you have the p-Laplacian instead of the Laplacian..** **Reply**: In the $p$-Poisson equation $-\triangle_p u = 1$, by letting $p$ in the weight $\parallel\nabla u \parallel^{p-2}$ grow to infinity, we can obtain the SDF without normal supervision. More precisely, the $p$-Laplacian $\triangle_p$ can be decomposed as follows (Kawohl, 2016): $$\frac{1}{p}\triangle_p u= \frac{1}{p}\parallel\nabla u \parallel^{p-1}\triangle_1 u+ \frac{p-1}{p}\parallel\nabla u \parallel^{p-2}\triangle_\infty u,$$ where $\triangle_1 u = \nabla \cdot \left( \frac{\nabla u}{\parallel\nabla u\parallel} \right)$ geometrically represents the mean curvature of the isosurfaces of $u$ and $\triangle_\infty u = \nabla u^T H\nabla u$ stands for the second derivative in the steepest ascending direction. Here, $H$ denotes the Hessian matrix of $u$. The parameter $p$ directly represents the weights between these two terms. As $p$ becomes larger, the weight of the second part grows, and the operator eventually converges to the infinity Laplacian $\triangle_\infty u$. On the other hand, if we set $p=2$ (the Laplacian), we cannot obtain the SDF. **Q2. Add a comparison for the learned vector field..** **Reply**: We agree that $G$ plays the role of the normal field in PoissonRecon because we find a scalar function $u$ whose gradient best fits the vector field $G$ by minimizing $\int_\Omega \left\Vert \nabla u - G\right\Vert^2$. Following the reviewer's recommendation, we measure the difference between the given surface normal $n$ and the learned $G$ on the SRB dataset. Given an oriented point cloud together with surface normals $\lbrace x_i, n_i \rbrace$, $i=1,\cdots,N$, we measure the cosine similarity $G^Tn\coloneqq\frac{1}{N}\sum_{i=1}^N | G\left(x_i\right)^T n_i |$. 
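[Editorial sketch] The reported $G^Tn$ metric, the mean absolute cosine similarity between the learned vector field evaluated at the points and the given surface normals, can be computed as below. This is our own NumPy illustration (inputs assumed to be `(N, 3)` arrays), not the authors' code:

```python
import numpy as np

def mean_abs_cosine(G_vals, normals):
    """Mean absolute cosine similarity between two (N, 3) vector sets:
    (1/N) * sum_i | G(x_i)^T n_i | after unit-normalizing each row."""
    G_unit = G_vals / np.linalg.norm(G_vals, axis=1, keepdims=True)
    n_unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return np.mean(np.abs(np.sum(G_unit * n_unit, axis=1)))
```

The absolute value makes the metric insensitive to the sign of the normal orientation, so a perfectly aligned but flipped field still scores 1.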
In addition, we also report the cosine similarity of the gradient field of the learned SDF $u$ and the surface normals. Results are reported in the table below. The results confirm that $G$ is accurately trained. |Model|Anchor|Daratech|DC|Gargoyle|Lord Quas| |:---:|:---:|:---:|:---:|:---:|:---:| |$G^Tn$|0.9868|0.9544|0.9941|0.9924|0.9965| |$\nabla u^Tn$|0.9870|0.9579|0.9946|0.9929|0.9966| **Q3. In section 3.2, you mentioned the goal is to progressively enlarge lambda to infinity..** **Reply**: In the standard penalty method, using a progressively larger value of the penalty parameter is theoretically necessary to obtain a solution of the constrained optimization. In the numerical simulation, we could sequentially increase $\lambda_1$ in (8). However, in practice, we cannot make the value of $\lambda_1$ infinitely large (line 147 of the paper). As $\lambda_1 \rightarrow \infty$, the balance between $\parallel \nabla u - G\parallel$ and $\parallel \nabla \times \tilde{G}\parallel$ breaks down, and it becomes difficult to enforce the curl-free constraint, which is one of the crucial parts of the proposed method; please check Q3 of the author response to the reviewer rqMu for the necessity of the curl-free term. Moreover, without the first term, the solution is determined only up to a constant, and an extra step would be needed to find a unique solution. Since the first term is a fidelity term acting like a Dirichlet boundary condition, it enforces that the implicit function be zero at the point cloud. **Q4. The result in Figure 3 is a bit confusing..** **Reply**: When we trained SIREN, points on $\Gamma$ were uniformly randomly sampled from the original point cloud and we set collocation points of $\Omega$ as the uniform grid. The reason SIREN reconstructs a broken surface seems to be the choice of activation function and network initialization. 
We can see from Figure 3 that IGR has smooth surfaces, but the difference between SIREN and IGR is the choice of activation function and network initialization. IGR uses the softplus activation function and initializes the network to be the SDF of the sphere. SIREN, on the other hand, adopts the sine function as the activation and an initialization that preserves the distribution of activations through its layers. **Q5. The result in Figure 7 (a) had some clear artifacts..** **Reply**: In the proposed model, there are a few differences from ScreenedPoissonRecon (SPR). Firstly, the corresponding Euler-Lagrange (EL) equation of the proposed objective does not reduce to the screened Poisson equation because the first loss term considers $\left\vert u\right\vert$ rather than $\left\vert u\right\vert^2$ and the integration is computed over $\Gamma$ rather than the whole computational domain $\Omega$. The first term in (8) corresponds to the boundary condition. Secondly, $G$ is not the vector field obtained from the oriented point cloud, but a learnable function that is simultaneously trained with $u$. Moreover, since we impose the $p$-Poisson equation on $G$ as a hard constraint (7), we obtain an SDF rather than an indicator function like SPR. When we use a small $p$, from the viewpoint of finding the SDF, the constructed $G$ leads to degraded results; see Figure 7. When $p=2$, the optimal solution $u$ satisfies the following PDE: $$\triangle u = \nabla\cdot G= -1$$ with Dirichlet boundary condition $u=0$ on $\Gamma$, which is far from the SDF. **Q6. The statement “irrotational flow” only appeared once in the title..** **Reply**: We completely agree with the reviewer. We will revise the manuscript as recommended. **Q7. In Chapter 4.3, you used the subtitle “Effect of curl-free constraint” twice..** **Reply**: Thank you for pointing this out. We will revise the manuscript as recommended. **Reference** B. Kawohl et al. On the geometry of the $p$-Laplacian operator. 
arXiv preprint arXiv:1604.07675, 2016. --- Rebuttal Comment 1.1: Comment: The rebuttal has been thorough. I recommend the authors add additional explanation and try to be as clear as possible to convey the motivation/intuition of adopting the p-Poisson equation in the final manuscript. Regardless, this work has enough novelty. I change my score to 7, and recommend acceptance of this paper.
Summary: The paper introduces an intriguing approach to surface reconstruction, but further enhancements in terms of completeness and evaluations would strengthen its contribution to the field. Also, the gradient of the SDF acts as an auxiliary network output, and the Poisson equation is incorporated as a hard constraint. The proposed method is also used to obtain a more accurate representation. They perform some experiments on standard benchmark datasets to demonstrate superior and robust reconstruction. In my view, the proposed method cannot achieve the best one numerically on average (Table 1). Strengths: This paper proposes a novel surface reconstruction method using the p-Poisson equation and a curl-free constraint, which is highly interesting. It demonstrates superior performance compared to previous works. Weaknesses: The authors have submitted a revision of the full paper in the supplementary material, which may be considered a violation of the rules. It may be appropriate to consider resubmitting the paper to another venue due to the violation. There are several incomplete pieces of information. It would be beneficial to include evaluations to measure surface quality, such as normal consistency, as well as provide details on training and inference times. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: 1. Your method appears to be computationally intensive. Can you provide information on the training and inference times compared to other methods? 2. While your results look promising, including quantitative results on surface quality would be valuable. 3. It would be unfair to directly compare the proposed method with approaches that utilize normal information. However, conducting experiments and comparisons with methods such as "shape as points" from the URL [https://pengsongyou.github.io/sap] would provide valuable insights and contribute to a more comprehensive evaluation of the proposed method. 4. What are lines 457-460? 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 1 poor Contribution: 3 good Limitations: The limitation has been roughly discussed in the paper. Poisson-based methods, including the proposed approach, are unable to handle open surfaces. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful comments. Below, we carefully address the reviewer's comments: **Q1. The authors have submitted a revision of the full paper in the supplementary material..** **Reply**: The reviewer's point would indeed be correct if the main argument, main idea, or computational results of the proposed algorithm had been changed between the original paper and the paper with the supplementary material. The paper with the supplementary material has precisely the same arguments and ideas as the original paper. The only changed parts are i) the results of SIREN, ii) a typographical error in Figure 7, and iii) some incomplete information in the references. The superiority of the results remains almost the same, so none of the main arguments in the paper are affected. We would like to note that we revised the manuscript to provide accurate values for the reviewer's better judgment, not because we wanted to change or strengthen the main argument. **Q2. It would be beneficial to include evaluations to measure surface quality,..** **Reply**: Thanks for the comments on further evaluations. We provide the related answers regarding surface quality in Q4; see Tables 2 and 3. Training/inference times are reported in the answer to Q3; see Table 1. **Q3. Your method appears to be computationally intensive..** **Reply**: We investigate the training/inference times of the proposed model compared to other models. In Table 1 below, we report the average training time per iteration on the SRB dataset and inference time at a resolution of $32^3$ voxels. As the reviewer mentioned, the proposed model requires more computational cost than baseline models because of the curl computation via automatic differentiation. 
[Table 1] Training/Inference times |Time|IGR|SIREN|DiGS|PINC (ours)| |:---|:---:|:---:|:---:|:---:| |Training time (ms/iteration)|48.34| 13.11 | 52.34 | 295.0| |Inference time (ms)| 6.86 | 3.51 | 4.39 | 6.93| **Q4. While your results look promising, including quantitative results on surface quality would be valuable.** **Reply**: Thank you for the suggestion. As normal consistency (L. Mescheder, 2019) is recommended in Q2, it is tested and the results are reported in Tables 2 and 3 below. Overall, compared to other models, the proposed model achieves a better normal consistency. [Table 2] Normal Consistency on SRB |Model|Anchor|Daratech|DC|Gargoyle|Lord Quas| |:---:|:---:|:---:|:---:|:---:|:---:| |IGR|0.9706|0.8526|0.9800|0.9765|0.9901| |SIREN|0.9438|**0.9682**|0.9735|0.9392|0.9762| |DiGS|**0.9767**|0.9680|0.9826|0.9788|0.9907| |SAP|0.9750|0.9414|0.9636|0.9731|0.9838| |PINC (ours)|0.9754|0.9311|**0.9828**|**0.9803**|**0.9915**| [Table 3] Normal Consistency on Thingi10K |Model|Squirrel|Pumpkin|Frogrock|Screwstar|Buser head| |:---:|:---:|:---:|:---:|:---:|:---:| |IGR|**0.9820**|0.9565|0.9509|0.9709|0.9249| |SIREN|0.9529|0.8996|0.9035|0.9142|0.8860| |DiGS|0.9557|0.9353|0.9468|0.9386|0.9171| |SAP|0.9791|**0.9520**|0.9319|0.9767|0.9004| |PINC (ours)|0.9816|**0.9583**|**0.9545**|**0.9805**|**0.9376**| **Q5. It would be unfair to directly compare the proposed method with approaches that utilize normal information..** **Reply**: Thank you for the comment. Following the reviewer's suggestion, we include the comparison with Shape As Points (SAP) on both the SRB and Thingi10K datasets using three metrics for quantitative evaluation: Chamfer distance (CD) and Hausdorff distance (HD) are summarized in Tables A5 and A6 in the attachment, and the evaluation of normal consistency (NC) is reported in Tables 2 and 3 in the response to Q4. The results show that the CD and HD of the proposed model are similar to SAP, despite not utilizing the given surface normals. 
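[Editorial sketch] The normal consistency metric reported in Tables 2 and 3 is commonly computed by matching each point to its nearest neighbor on the other surface and averaging the absolute cosine of the paired normals. A minimal NumPy sketch written by us (point sampling from the meshes is omitted, and a brute-force nearest-neighbor search stands in for a KD-tree):

```python
import numpy as np

def normal_consistency(pts_a, nrm_a, pts_b, nrm_b):
    """Normal consistency in the spirit of Mescheder et al. (2019):
    for each point, the absolute cosine between its normal and the
    normal at its nearest neighbor on the other shape, symmetrized."""
    def unit(v):
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def one_way(p_src, n_src, p_dst, n_dst):
        # brute-force nearest neighbor of each source point
        d2 = ((p_src[:, None, :] - p_dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        return np.mean(np.abs(np.sum(n_src * n_dst[idx], axis=1)))

    na, nb = unit(nrm_a), unit(nrm_b)
    return 0.5 * (one_way(pts_a, na, pts_b, nb)
                  + one_way(pts_b, nb, pts_a, na))
```

A value of 1 means the reconstructed normals agree (up to sign) with the ground truth everywhere; the tables above report this quantity per shape.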
Furthermore, the proposed model, which learns the gradient field of the $p$-Poisson equation instead of using the given surface normals, achieves a better overall NC. SAP is based on Poisson Surface Reconstruction (PSR; Kazhdan, 2006). The proposed model may be interpreted as PSR because of (8). However, the vector field $G$ in the proposed model is not obtained from the oriented point cloud, but is a learnable function trained with $u$ at the same time. Moreover, since we bake the $p$-Poisson equation into $G$ as a hard constraint in (7), we obtain a continuous SDF rather than an indicator function like PSR and SAP. The results confirm that simultaneous training of the gradient field and the SDF, that is, the variable splitting method, achieves similar or even better surface restoration than SAP, even without using the given surface normals. **Q6. What are lines 457-460?** **Reply**: Thank you for pointing out the mistake. We will make sure that it is deleted in the revised version. **References** L. Mescheder et al. Occupancy networks: Learning 3d reconstruction in function space. IEEE/CVF, 2019. M. Kazhdan et al. Poisson surface reconstruction. In Proceedings of the fourth Eurographics symposium on Geometry processing, 2006. --- Rebuttal Comment 1.1: Title: Reply Comment: After reading the other reviews and your responses, I think some of the concerns are addressed well. Here, more comprehensive numerical evaluations are provided to demonstrate the quantitative performance of the proposed methods. I suggest the authors add these experiments to the revised paper, including the numerical evaluations, training/inference time, and discussion with SAP. All of the responses have addressed my major concerns, except for the unfair submission. After that, I am positive about the submission and will change my score to accept if I ignore the unfair submission. 
I have raised the issues to the AC and SAC, and I have no other comments if this is not critical for the submission.
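For context, the evaluation metrics discussed in this thread (Chamfer and Hausdorff distances) are both built from nearest-neighbour distances between two point clouds. A minimal sketch, not the paper's exact evaluation code; note that conventions vary across papers (some sum rather than average the two one-sided terms, or use squared distances):

```python
import numpy as np

def chamfer_hausdorff(P, Q):
    """One common convention for the Chamfer and Hausdorff distances between
    point clouds P (N, 3) and Q (M, 3)."""
    # dense pairwise distance matrix -- fine for small clouds; for large ones
    # a k-d tree (e.g. scipy.spatial.cKDTree) would be used instead
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    d_pq = D.min(axis=1)  # each point of P to its nearest neighbour in Q
    d_qp = D.min(axis=0)  # each point of Q to its nearest neighbour in P
    chamfer = 0.5 * (d_pq.mean() + d_qp.mean())
    hausdorff = max(d_pq.max(), d_qp.max())
    return chamfer, hausdorff

# shifting a well-separated cloud by 0.1 along x gives CD = HD = 0.1
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_hausdorff(P, P + np.array([0.1, 0.0, 0.0])))
```

Normal consistency, by contrast, compares surface normals at matched points rather than positions, which is why the rebuttal reports it separately.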
Summary: This paper considers the problem of reconstructing a smooth surface from an unorganised point cloud. The proposed approach is based on a neural implicit function but requires no normal information. The main contribution of this work is demonstrating that proper supervision via a partial differential equation and fundamental properties of differential vector fields is enough to reconstruct high-quality surfaces. A novel part is the variable splitting structure, which introduces the gradient of the SDF as an auxiliary variable together with a curl-free constraint on that variable. The experimental results demonstrate the effectiveness to some extent.

Strengths: I like the idea of introducing an auxiliary variable to solve the optimization problem under the framework of neural implicit functions. Both settings ultimately solve an optimization problem, so optimization strategies from numerical algorithms can be adopted for neural-implicit-function-based representations. This paper gives a good example in this respect and might inspire interesting work along this direction.

Weaknesses: The results shown in Tab. 1 and Tab. 2 are not good enough. Can you explain why the performance is not good enough across the different datasets and metrics?

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors: With the additional auxiliary variable, does the optimization take a long time to converge? If yes, please list the computation time in more detail.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good

Presentation: 3 good

Contribution: 3 good

Limitations: As listed above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments. Below, we carefully address them:

**Q1. The results shown in Tab. 1 and Tab. 2 are not good enough..**

**Reply**: Each shape has a different level of difficulty due to its own challenging characteristics, such as complex topology, missing data, sharp corners, a high level of detail, and so on. Therefore, it is natural for a model to perform differently across these data. The proposed model has the advantage of learning an SDF based on the $p$-Poisson equation, which implicitly represents the surface and allows accurate and smooth reconstruction of closed surfaces. It is therefore difficult to expect the proposed model to outperform other models on data that cannot highlight these advantages; we confirmed that it achieves results comparable to leading INR models across the various data.

Moreover, different metrics quantify different features. The proposed model tends to show better results in Chamfer distance than in Hausdorff distance, but these two metrics do not reflect the complete quality of the reconstructed surface. We evaluated normal consistency (L. Mescheder, 2019) as recommended by reviewer WkvV (please see Tables 2 and 3 below the answer to Q4). We would like to note that the proposed model achieves better normal consistency on the tested examples.

**Q2. With the additional auxiliary variable, does the optimization take a long time to converge? If yes, ..**

**Reply**: We agree with the reviewer that, in general, the computational costs with and without auxiliary variables are not the same. However, we face technical difficulties in making a fair comparison on this issue. For the auxiliary variable $\tilde{G}$, the results show a significant difference in performance with and without it; see Figure 5 for details.
This means that the convergence point may not be the same with and without $\tilde{G}$, so it is difficult to compare convergence speeds under equivalent conditions when the exact solution is not known. The auxiliary variable $G$, in turn, cannot be removed at all due to the construction of the proposed model. Nevertheless, analyzing the convergence speed and computational cost with and without auxiliary variables is indeed a crucial topic, and it should be studied rigorously and mathematically with a very simple and meaningful loss function. We thank the reviewer for pointing out such a worthwhile future research topic, and we will add it as future work in Section 5.
Summary: The paper presents a surface reconstruction method that uses only raw point clouds. It enforces the Poisson surface equation implicitly over the SDF representation of the surface. As a consequence, it obtains smooth surfaces with preserved details without any 3D supervision or a priori knowledge of normals. The experiments show that the performance is comparable to SOTA methods that require data beyond raw point clouds.

Strengths: The use of the $p$-Poisson equation to describe the SDF is well-motivated. The use of auxiliary variables relating to the gradient and curl of the SDF is interesting and convincingly reduces the computational complexity. The ablation study is a plus. The performance is comparable to methods that use either 3D supervision or oriented normals. The performance on noisy data is generally better than SOTA; I think this is because the method uses only raw point clouds, which serves as a benefit here, since normal computation on noisy point clouds can be disproportionately erroneous.

Weaknesses: There is no theoretical motivation/argument provided for choosing the $p$-Poisson equation over the eikonal equation to describe SDFs. IGR [23] can perform surface reconstruction without normals as well. It is not clear whether the authors used the normals when evaluating IGR. A comparison with both IGR with and without normals should have been considered.

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors: see weakness section

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 3 good

Contribution: 3 good

Limitations: to some extent, the limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We carefully address your questions as follows:

**Q1. There is no theoretical motivation/argument provided to choose $p$-Poisson over the eikonal equation to describe SDFs.**

**Reply**: For a detailed response to the reviewer's question about the theoretical motivation for adopting the $p$-Poisson equation instead of the eikonal equation to describe SDFs, please see the response to the common question above.

**Q2. IGR [23] can perform surface reconstruction without normals as well..**

**Reply**: We compared the performance with IGR without normal vectors $n$. As the reviewer recommended, we present a comparison of IGR with and without $n$ and the proposed model in Table A5 of the attachment.

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It addresses most of my concerns. I am going to maintain my rating.
Rebuttal 1: Rebuttal: **General Response to All Reviewers**

We sincerely thank all the reviewers for their valuable comments, recommendations, and suggestions. The reviewers' opinions have been carefully considered, and answering their questions has improved the paper. We first address a common question raised by reviewers rqMu and ymPA; then, we respond to each reviewer individually below. We also attach a supplementary file for the tables. We hope the replies address all the questions.

**Common Question:** Theoretical motivation/intuition for choosing the $p$-Poisson equation instead of the eikonal equation to describe SDFs.

**Reply**: The main advantage of using the $p$-Poisson equation is that $u_p$ (the solution of (1)) is unique in $W^{1,p}$ (Lindqvist, 2017), which avoids the non-unique weak solutions of the eikonal equation $\left\Vert\nabla u\right\Vert=1$. A numerical challenge is dealing with $p \rightarrow \infty$ in order to obtain a good approximation of the viscosity solution. When the variational formulation (2) is used, the difficulty of using a large $p$ persists numerically. However, it is resolved by using (7), which is one of the main advantages of the proposed algorithm.

**Reference**

P. Lindqvist. Notes on the $p$-Laplace equation. No. 161. University of Jyväskylä, 2017.

**Attachment $\downarrow$** Pdf: /pdf/ccd4ad37efc078c7a3c48c0ffd076179e6eeec8a.pdf
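For readers without the paper at hand, the standard setting behind this reply can be sketched as follows (a sketch of the textbook result from Lindqvist's notes; the paper's equations (1)-(2) may differ in normalization and domain):

```latex
% p-Poisson boundary-value problem: find u_p with
\begin{aligned}
  -\nabla \cdot \big( \left\Vert \nabla u_p \right\Vert^{p-2} \, \nabla u_p \big) &= 1
    && \text{in } \Omega \setminus \Gamma, \\
  u_p &= 0 && \text{on } \Gamma .
\end{aligned}
% The solution u_p is unique in W^{1,p}, and as p -> infinity it approximates
% the distance function d(x, Gamma), i.e. the viscosity solution of the eikonal
% equation ||grad u|| = 1 -- an equation which by itself admits many weak solutions.
```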
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper considers the task of surface reconstruction from point clouds without normals with INRs. They consider solving for the SDF via the $p$-Poisson equation (with a manifold constraint) as $p \to \infty$. However, as the obvious loss-function form of this is difficult to optimise, they adopt a variable splitting strategy, also ensuring that their gradient solution is conservative. Finally, they add a minimal-surface-area regularisation term.

Strengths:

- Variable splitting is a nice approach to the issues with neural networks and automatic differentiation
- Model is backed strongly by theoretical intuition
- Auxiliary variables sharing the network structure is nice
- Decent results

Weaknesses:

- It would be nice to have some intuition as to why you propose to use the $p$-Poisson equation rather than other PDEs like the eikonal equation. At first glance it seems that the reason is that it can be described as a variational problem as shown in equation 2, but that doesn't get used by your method. Another argument you posit is that without the vanishing viscosity method a normal eikonal-PDE-based solution may produce a non-unique weak solution; why is it clear that your method does not produce a non-unique weak solution? Is it because of the curl-free constraint being enforced?
- How important is the enforcement of the loose eikonal constraint within the curl-free constraint? It somewhat diminishes the story of trying to solve the SDF problem using a different PDE than the eikonal equation.
- It is not clear why the curl-free constraint is needed. $\nabla u$ is curl-free by design, so isn't minimising $||\nabla u - G||$ in (8) sufficient? Why is a separate auxiliary variable necessary, apart from enforcing a loose eikonal constraint by construction? Doesn't the argument about needing to set $\lambda_1$ infinitely large to enforce the constraint apply to $\lambda_3$ as well?
- The qualitative diagrams (Figures 5-7) for the ablation study are great for intuition and understanding; however, you should have quantitative results on what happens when you remove each of those components (especially for curl-free)

I like this type of approach and am willing to increase the score if clarity on the necessity of both the curl-free constraint and the eikonal construction constraint, as well as quantitative ablations, are given.

Technical Quality: 3 good

Clarity: 2 fair

Questions for Authors:

- What is $F$? Is it kept to the example given, $1/3x$?
- I am confused about why the curl-free constraint is needed. By equation 10, the new curl-free constraint ensures that $G=\nabla v$ for some $v$, but why allow it to be some $v$ that is not the $u$ being output as the INR value, and/or instead constrain it to be similar to the current INR value $u(x)$?
- I would like more intuition on the role of the curl-free constraint. Figure 5 seems to show that it forces the model to pay more attention to detail, but I don't see why this is the case theoretically. Is it because the eikonal term is loosely being enforced within the curl-free objective by construction?
- The good results of IGR on Thingi10K seem unlikely given its bad performance on SRB; it is almost as good as your model. Is there a property of Thingi10K that causes this? Is there a reason your model and IGR would be so similar on Thingi10K?
- The performance of your model on Daratech in SRB seems a bit confusing: in the results it does really badly on $d_C$, yet it looks fairly good in Figure 6?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good

Presentation: 2 fair

Contribution: 3 good

Limitations: Some discussion given.
No clear potential negative societal impact or broader societal impacts to discuss Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. We carefully address your questions as follows:

**Q1. It would be nice to have some intuition as to why..**

**Reply**: Please see the answer to the common question above.

**Q2. How important is the enforcement of the loose eikonal constraint..**

**Reply**: Imposing the curl-free condition helps to learn $G$ and $u$ accurately. The necessity of the curl-free constraint is explained in detail in the answer to Q3.

**Q3. It is not clear why the curl-free constraint is needed..**

**Reply**: Minimizing $L_2=\int_\Omega \parallel \nabla u_n - G_n\parallel^2 dx$ in (8) does not enforce the constraint in a pointwise manner. There is a sequence $\{u_n,G_n\}$ such that $L_2 \rightarrow 0$ but $G_n$ does not converge to a curl-free field. For every $u_n$ defined on $\Omega=[0,1]^3$, set $G_n(x,y,z)=\nabla u_n(x,y,z) + (0,\frac{1}{n}\sin(2\pi nx),0)$. Then $L_2\rightarrow 0$ but $\nabla\times G_n=(0,0,2\pi\cos(2\pi nx))\nrightarrow 0$. Note that $\int_\Omega\parallel\nabla\times G_n\parallel^2=2\pi^2$ is constant. This shows that the pathological example above can be prevented by adding the curl-free loss term; the term is therefore necessary to learn $G$ accurately.

If we imposed the curl-free loss directly on $G$ without using $\tilde{G}$, we would have to take the curl of $G$, which is itself constructed by computing the curl of $\Psi$ in (7). However, applying automatic differentiation (AD) consecutively leads to excessive memory consumption and computational inefficiency. In addition, an objective with high-order AD derivatives has a challenging loss landscape that is difficult to optimize (Wang, 2021). We introduced the additional auxiliary variable $\tilde{G}$ to avoid these problems. We conducted an additional experiment imposing the curl-free loss directly on $G$ without $\tilde{G}$; the results, reported in Table A1 of the attachment, show the necessity of introducing $\tilde{G}$.
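This construction can be checked numerically. A minimal sketch, taking $u_n \equiv 0$ for simplicity, so the gap is $(0, \frac{1}{n}\sin(2\pi n x), 0)$; its $y$-derivative in $x$ carries a factor $2\pi$, so the squared $L^2$ norm of the curl is the constant $2\pi^2$ while the $L^2$ mismatch decays like $1/(2n^2)$:

```python
import numpy as np

def l2_mismatch(n, num=200_000):
    """||G_n - grad u_n||_{L^2}^2 with gap (0, sin(2*pi*n*x)/n, 0); -> 1/(2 n^2)."""
    x = (np.arange(num) + 0.5) / num  # midpoint rule on [0, 1]
    gap = np.sin(2 * np.pi * n * x) / n
    return np.mean(gap**2)

def curl_energy(n, num=200_000):
    """||curl G_n||_{L^2}^2 with curl (0, 0, 2*pi*cos(2*pi*n*x)); constant 2*pi^2."""
    x = (np.arange(num) + 0.5) / num
    curl_z = 2 * np.pi * np.cos(2 * np.pi * n * x)
    return np.mean(curl_z**2)

# the mismatch vanishes with n while the curl energy stays bounded away from zero
for n in (1, 10, 100):
    print(n, l2_mismatch(n), curl_energy(n))
```

This is exactly the failure mode the curl-free loss term rules out: the $L_2$ objective alone cannot see the oscillatory, non-conservative component.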
The requirement $\lambda_1 \rightarrow \infty$ is theoretically important for penalizing the constraint in the penalty method. We could gradually increase $\lambda_1$ during training; however, as $\lambda_1$ becomes larger, the loss terms become imbalanced and the other terms can be ignored. This means that the condition $u=0$ on $\Gamma$ is not properly enforced, and it also becomes difficult to satisfy the curl-free constraint.

**Q4. The qualitative diagrams..**

**Reply**: We summarize the quantitative metrics (Chamfer and Hausdorff distances) in Tables A1, A2, and A3 in the attachment. To further investigate the effect of the curl-free term on the learning of $G$, we measure the difference between the given surface normals $n$ and the learned $G$. Given a point cloud with normals $\{x_i,n_i\}$, we estimate the cosine similarity (CS) $G^Tn:=\frac{1}{N}\sum_{i=1}^N |G(x_i)^T n_i |$ and report it in Table A4 in the attachment. The results show that the angles between $G$ and $n$ differ by an average of almost 1.50 (Anchor) and 3.57 (Gargoyle) degrees; $\nabla u$ shows similar differences from $n$. This numerically validates that the curl-free term leads to more accurate learning of $G$ and $u$ for the given test cases. We would like to emphasize that CS does not reflect errors that occur when the 0-level set of $u$ is far from the given point cloud, as it is computed only at points where $n$ is defined. It therefore does not fully describe the quality of the trained surface and gradient fields, but we evaluate it to show the effect of the curl-free term on learning accurate gradient fields where $n$ is defined.

**Q5. What is $F$?..**

**Reply**: Yes, $F=1/3x$ was used in all experiments. We will make this clear in the revised manuscript.

**Q6. I am confused why the curl-free constraint is needed..**

**Reply**: The necessity of the curl-free constraint is explained in the answer to Q3.
In other respects, for a given $G$ in (10), we agree with the reviewer that a $u$ satisfying $G=\nabla u$ is not unique. However, the first term in (8) yields the uniqueness of the $u$ such that $G=\nabla u$.

**Q8. The good results of IGR on Thingi10K..**

**Reply**: The similar tendency between IGR and the proposed model seems to be due to the initialization and activation function of the network. Both IGR and our model use the softplus activation and initialize the network to be approximately the SDF of a sphere. On the other hand, SIREN and DiGS use a sine activation and initialize the network in different ways. These are almost the only differences between IGR and SIREN, yet the results show that IGR restores smoother surfaces. Therefore, IGR and our model seem to share a tendency to restore smooth surfaces due to these design choices.

**Q9. The performance of your model on Daratech..**

**Reply**: The given point cloud of Daratech has an empty part at the back. The proposed model restores the surface by filling in this part, which is why the metric can be high. After the submission of the paper, we found that $\beta=0.1$ restores this part better. We report the additional metric values in Table A2 in the attachment.

**Q10. No clear potential negative societal impact..**

**Reply**: We will specify societal impacts in the revised manuscript as follows: "The proposed PINC allows high-quality representation of 3D shapes from only a raw, unoriented 3D point cloud. It has many potential downstream applications, including product design, medical imaging, and the film industry. We are aware that accurate 3D surface reconstruction could be used maliciously, for example for unauthorized reproduction of machines without consent or digital impersonation. However, this work does not aim to develop a technique for abuse, and we hope and encourage users of the proposed model to concentrate on the positive impact of this work."

**Reference**

S. Wang et al.
Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 2021.

---

Rebuttal Comment 1.1: Title: Thanks for addressing my concerns, final comments and score change

Comment: The new explanation for the theoretical motivation for $p$-Poisson makes a lot more sense; please make that clearer in the paper, as it is the most important part of your paper. It would be cool if the uniqueness could be shown for your method with a toy problem, e.g. for a very simple set of points in 2D, show that under different initialisations an eikonal-loss-guided network converges to drastically different solutions, while with the same initialisations your loss leads to a single (or much less varied) solution.

Same thing with the reason for the curl-free constraint: please improve the explanation in the paper, and it would be great if you could show a toy example demonstrating that minimisation of $L_2$ often doesn't converge to a curl-free field in practice (while theoretical counter-examples are great, since everything happens with neural networks, which are biased towards very smooth approximations by gradient descent, it would be great to show it is a practical consideration as well). Though this is not completely necessary, as your new Table A1 indicates it too.

Thanks for providing Tables A1-3; they provide a lot of context about your method. I recommend having A1 in the main paper alongside the visualisation (maybe A3 as well, and A2 can go to the supplementary).

As the authors have sufficiently addressed my concerns, I am increasing my score from 4 to 6. I hope they consider my comments for improving the paper (whichever are reasonable for them to do).
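The toy problem the reviewer suggests can in fact be sketched without training any network: already in 1D, the eikonal equation $|u'|=1$ with zero boundary data admits many distinct weak solutions, which is the non-uniqueness the common response refers to. A minimal illustrative sketch (not the paper's experiment):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)

# the viscosity solution: distance to the boundary set {0, 1}
u1 = np.minimum(x, 1.0 - x)

# a "sawtooth" weak solution: distance to {0, 1/2, 1}, also zero at the boundary
u2 = np.minimum(np.minimum(x, np.abs(x - 0.5)), 1.0 - x)

for u in (u1, u2):
    slope = np.gradient(u, x)
    # |u'| = 1 at almost every sample; it fails only at the few kink points
    assert np.mean(np.abs(np.abs(slope) - 1.0) < 1e-6) > 0.99

# yet the two weak solutions are far apart: u1(1/2) = 1/2 while u2(1/2) = 0
assert np.max(np.abs(u1 - u2)) > 0.4
```

Adding more "teeth" yields arbitrarily many such weak solutions, all satisfying the eikonal residual almost everywhere, which is why a residual loss alone cannot single out the distance function.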
Learning Nonparametric Latent Causal Graphs with Unknown Interventions
Accept (poster)
Summary: The paper studies the problem of recovering causal relationships under the measurement model, where there are latents but no direct causal edges between observed covariates. The authors introduce two graphical concepts -- imaginary subsets and isolated edges -- and show how they relate to sufficient conditions for recovery (under some additional assumptions). The assumptions are discussed at length in the appendix, and a two-phase recovery algorithm is proposed.

Strengths: The paper is well-written and easy to follow in general. I also appreciate the in-depth discussion of the assumptions.

Weaknesses: Assumption 1(d) seems redundant; I think it is implied by assumption 1(c), and rewriting it as a lemma or consequence of assumption 1(c) would strengthen the paper and reduce the number of assumptions required. Consider the following argument: Fix any two latents $H_i$ and $H_j$. By assumption 1(c), $H_i$ has a child $X_i$ that is not a child of $H_j$, and $H_j$ has a child $X_j$ which is not a child of $H_i$. We now consider the contrapositive of 1(d). Suppose $X_i$ and $X_j$ are d-connected. In the measurement model, this means that there is a path $X_i \gets H_i - \ldots - H_j \to X_j$ which has no colliders. Then, this same path is a witness to $H_i$ and $H_j$ being d-connected.

I am unsure how interesting Definition 5.2 on the isolated equivalence class (IEC) is. I do not think it is fair to compare its significance to Chickering's "covered edge reversal" characterization, which I believe is significantly more subtle and interesting. For instance, while there exists a sequence of covered edge reversals (say edge $e_1$, then $e_2$, then $e_3$, ..., then $e_r$) between any two DAGs in the same Markov equivalence class (MEC), the edges may actually NOT be covered edges midway through the transformation, and one cannot arbitrarily reverse the set of edges $\{e_1, \ldots, e_r\}$ in any ordering whilst ensuring that we always get a DAG from the same MEC.
Chickering further gives a constructive algorithm which tells us how to find this sequence $(e_1, \ldots, e_r)$. In contrast, the IEC seems trivial since it involves a union of disjoint edges, where the size of the IEC is always 2^(number of isolated edges) and every edge can be reversed at any point in time.

Experimental details are lacking: Section 6 is short and there is nothing in the appendix about the experiments. It is hard to judge or appreciate any empirical contribution. I feel that the authors should have just focused on presenting this work as a theoretical contribution (which I think is already sufficient on its own, modulo the questions below).

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors:

Caption for Figure 1: From my understanding of Definition 3.2, imaginary subsets have to be maximally valid. Why is $\{X_5, X_6\}$ maximal?

Suggestion for Figure 1: In the additional page that comes with paper acceptance (if this gets accepted), it would be nice to include a picture of D(P), mention that $\Omega_P = \\{ \\{1,2,5,6\\}, \\{3,4,5,6\\} \\}$, and then refer to Fig 1 on Lines 146 and 200.

Missing assumption: This work implicitly assumes infinite samples / the population regime / access to a d-separation oracle, right? Please state this explicitly.

Assumption 2: This assumption feels very strong... For example, such an assumption trivially solves the causal graph discovery problem in the causally sufficient setting (without latents, but where observed covariates have edges amongst themselves) if we have access to all $n$ interventional essential graphs. I understand that the model studied here has latents, but then the measurement model seems to also simplify things a lot. Why does Assumption 2 not immediately trivialize the entire (or part of the) recovery objective? For example, if there are no repeated interventional distributions (i.e. $|\mathcal{I}| = m+1$), then isn't Line 223 trivial?
I understand the discussion of this assumption in the appendix (Line 651 should be emphasized in the main text), but it seems that, in the worst case, we just "lump" all downstream latents together. What am I missing? What is the subtlety that I am not getting? Line 184, footnote 1: Do you mean "In other words, removing any more..." instead of "In other words, adding any more..."? We can always add redundant edges while maintaining the underlying model, right? e.g. a clique can encode any arbitrary distribution, including the product distribution. Theorem 3.4: Is there a "necessary" counterpart to this result? For instance, I was under the impression that existence of imaginary subsets makes G unidentifiable. If that is the case, you should perhaps write something like "G is identifiable if and only if no imaginary subsets" for (a). Characterizations like these would greatly strengthen the paper's contribution. Line 250, and also appendix E; Misconception about maximality: I think I have a misconception about maximality, which is affecting my understanding of the paper's correctness. Why is $\{X_1, X_2\}$ a maximal valid subset if $\{X_1, X_2, X_5\}$ is one? Doesn't the fact that the former subset being a proper subset of the latter make it *not* maximal? Could you kindly resolve my misconception? Thanks! (I will revise the soundness score and overall rating accordingly.) Line 377: I am unsure what the last sentence is trying to imply. Could you clarify? I have the following guesses (all of which could be wrong): - Are the theoretical assumptions unnecessary? - Did you just "get lucky" with the experiments? - Are you suggesting that the assumptions are not required for the class of models which you have ran experiments on? Table 1: How are the errors split across $G_B$ and $G_H$? From my understanding, the algorithm to recover $G_H$ crucially depends on $G_B$ being correctly recovered, right? 
It is unclear to me what we can conclude if $G_B$ was recovered with errors and then subsequently used to recover $G_H$ --- how do the errors propagate? Can you say something about it in theory? Figure 2 and 3: Isn't Figure 3 just Figure 2? Example 12: Why does Figure 1 satisfy assumption 3? $H_1$ and $H_2$ have no pure child. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Nil. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the questions and suggestions!

**Assumption 1(d)**

1(d) is not implied by 1(c). In your example, $X_i$ and $X_j$ could be connected via other hidden variables. Even when $H_i$ and $H_j$ are disconnected, $X_i$ and $X_j$ might still be connected. We give such an example in Appendix C.3. In particular, consider Fig 5: $X_1$ is not a child of $H_2$ and $X_3$ is not a child of $H_1$. However, under the intervention target $H_2$, $X_1$ and $X_3$ are still d-connected via the parent $H_3$.

**IEC**

Apologies for any confusion. There seems to be a potential misunderstanding regarding the concept of isolated edges. Despite what the name might suggest, an isolated edge X->Y does not mean that X and Y are disconnected from all other nodes. In fact, X and Y can still have outgoing edges (Definition 3.3), and X->Y is not just an isolated connected component. So the IEC does not just involve a union of disjoint edges. For example, consider a graph of three variables A, B, C with two edges A->B, A->C. These two edges are both isolated edges but not disjoint. In fact, one can reverse edge A->B to B->A; in that case, A->C is no longer an isolated edge. Therefore, just as in the MEC, where the edges may NOT be covered edges midway through the transformation, isolated edges can have the same problem. We agree that the characterization of isolated edges is simpler than that of covered edges, because isolated edges are special cases of covered edges. On the other hand, the IEC is introduced merely as a tool to characterize the unorientability of isolated edges (Theorems 5.3 and G.7) and is not intended to suggest that this is deeper or more fundamental than the MEC.

**“Experimental details”**

We will provide more details on the experiments in the updated version. Some additional details: the weights are generated by uniformly sampling from $[-2, -0.5] \cup [0.5, 2]$, and the variances are set to 1.
More details can also be found in the codebase we included in the supplementary files. **“Misconception about maximality”** Again, we apologize for any confusion: The definition of a maximal valid subset clearly states that X’ and X’’ must be contained in the *same* clique in the UDG. A maximal valid subset X’ is maximal in the sense that for any clique containing X’ there does not exist X’’ that’s also in the same clique and is a superset of X’. In other words, X’ is maximal if its existence cannot be completely explained by another subset. We apologize for the confusion, however, upon inspecting our definitions, everything is correct as stated. We will add a clarification of this point in the camera ready. See below as well for a correction to Figure 1. **Why is X_5, X_6 maximal?** Thanks for pointing this out! In fact, as reported in Figure 1, {$X_5, X_6$} is not maximal. Thanks to your careful attention, we realized there is a mistake in Figure 1 that needs to be corrected. The original Figure 1 was simplified for the purpose of providing a clear and easily understandable demonstration, but in the process of simplifying the DAG, the maximal valid subsets became slightly different than what the caption states. In the attached pdf file, we present a modification to Fig 1 that corrects the issue and preserves all the statements about maximal valid subsets and imaginary subsets. We have also taken your advice to include UDGs under each intervention. To demonstrate why both {$X_1, X_2$} and {$X_1, X_2, X_5$} can be maximal valid subsets, let’s refer to the updated figures in the pdf file. Figure 2(c) in the pdf file shows that under intervention target $H_4$, there are three maximal cliques: {$X_1, X_2, X_4$}, {$X_1, X_2, X_5$}, {$X_3, X_5, X_6$}. Because {$X_1, X_2, X_4$} contains {$X_1, X_2$} but not {$X_1, X_2, X_5$}, these two sets can both be maximal without contradicting the definition. 
**Missing assumption**

This is stated at L49: “Given a set of interventional distributions”, as opposed to samples. Of course, the population regime is equivalent to assuming distributions as input, which is standard in the literature when discussing identifiability. Since this paper revolves around identifiability, we leave estimation as an intriguing future direction. We will make these points clear in the updated version.

**Assumption 2**

This is a common assumption and, in fact, it is an open question whether or not it can be relaxed [a-b]. [a] also shows that this assumption is necessary (Section 3.3 of [a]). Moreover, compared to classical work on interventions in graphical models, the current literature on causal representation learning (including our submission as well as [a,b]) considers a strictly harder setting, since the interventions are both latent and unknown. Because the intervention target is latent, we only have access to partial information (the observed variables) regarding the effect of interventions. And because the intervention target is unknown, we additionally face, compared to the known-target case, the unorientability problem of isolated edges, which is discussed in Section 5.2 (L357).

[a] Seigal, Squires, Uhler. "Linear causal disentanglement via interventions." arXiv preprint arXiv:2211.16467 (2022).

[b] Varici, Burak, et al. "Score-based causal representation learning with interventions." arXiv preprint arXiv:2301.08230 (2023).

**Line 377**

Please see the global review.

**Line 184, footnote 1**

Sorry for the confusion. What we meant is that we add edges until we cannot add more without changing (i.e. adding _or_ removing) the conditional independence statements. This notion of maximality is the same as in maximal ancestral graphs (MAGs), as mentioned on L180. See [c] for more details on MAGs, which are standard graphical models for incorporating latent variables.
We have double-checked our definition and can confirm it is correct as stated. [c] Richardson and Spirtes. "Ancestral graph Markov models." The Annals of Statistics (2002). --- Rebuttal Comment 1.1: Comment: Thank you for your patience and your efforts in clearing up my doubts and misunderstandings. Also, thanks for sharing that the "pure child assumption" is similar to the notion of a "separability assumption" in NLP (I don't work on NLP problems and this is the first time I have learnt about this!) **Assumption 1(d)** You are right. Thank you for clarifying my misunderstanding. **IEC** Thank you for clarifying my misunderstanding. **Maximal clique** Thank you for explaining why {X1, X2} and {X1, X2, X5} can both be maximal. My confusion was not that it must be the *same* clique, but that it must be *any clique*, under *any intervention*. The pictures in the attached PDF were very helpful for me in clearing up this misconception. Now that I understand what you mean by a maximal clique, do you mean to have $X' \subseteq X''$ instead of $X' \subsetneq X''$ in Definition 4.2 of your submission? Otherwise, you are saying {X1, X2} $\subsetneq$ {X1, X2, X5}, which confuses me again... **Missing assumption** Thanks. Please make it explicit and clear to other readers. It is indeed an intriguing future direction. **Assumption 2** Thank you for sharing the references. These two works indeed use a similar assumption. It is also interesting that [a] has a worst-case necessity result for that assumption, though they study a slightly different setting from what you study (maybe you can give some explanation of why their setting is a special case of yours?). Please include some discussion about this in the paragraph above Section 3.2, where you discuss the other assumptions. I think it will benefit the other readers. Thanks! **Line 184, footnote 1** Sorry, I still don't get it... I know a bit about ancestral graphs, though I'm not an expert on them.
My understanding of causal graphs is that the *absence* of edges encodes assumptions about independencies in the model. For example, [c] states that "a graph is maximal if every missing edge corresponds to at least one independence in the corresponding independence model". The *presence* of an edge itself doesn't say much: a fully connected clique is always a valid consistent causal graph, but it yields no useful information. We are talking about the same thing, right...? **References** [a] Squires, Seigal, Bhate, Uhler. "Linear causal disentanglement via interventions." ICML (2023). (I was checking your references in the rebuttal and noticed that the author list is slightly different. Also, I think you should cite the conference version instead of the arXiv one; see https://openreview.net/pdf?id=1VDuHddxtA) [c] Richardson and Spirtes. "Ancestral graph Markov models." The Annals of Statistics (2002). --- Reply to Comment 1.1.1: Comment: Thanks for the quick reply! **Maximal clique** Sorry about the confusion. The notation $\subsetneq$ means “proper subset”, i.e., if $A \subsetneq B$, then $A$ is a subset of $B$ but not equal to $B$. This is not to be confused with $\not\subset$. Since there is room for confusion here, we will clarify this in the final version. **Assumption 2** You’re right that [a] studies a slightly different setting, although the high-level goal of identifying latents under unknown interventions in a measurement model is the same. To clarify, [a] shows the necessity of Assumption 2 under their assumptions, which are slightly different from our setting. We have independently shown that this assumption is needed in our setting with Example 6 in Appendix C.4.
There are three main differences with [a]: (1) They study linear functions while we focus on nonparametric identification; (2) They allow the bipartite graph between latents and observed variables to be fully connected while we have graphical constraints (Assumptions 1(c) and (d)); (3) They consider noiseless transformations between latents and observed variables while we allow noisy transformations. Though we briefly touch on similar assumptions at L747 in Appendix C.4, we’ll clarify this further in the paper. Thanks for the suggestion! **Line 184, footnote 1** Your intuition is right; however, the situation is more nuanced with latent variables. In Example 7 in Appendix D, we construct two DAGs $G_{(a)}$ and $G_{(b)}$, and two models $P_{(a)}$ and $P_{(b)}$, that generate identical d-separation and CI relations over X. But they differ over (X,H): $P_{(a)}$ satisfies $H_1 \perp H_3 \mid \{H_2, H_4\}$, whereas $P_{(b)}$ does not. (This is clear from the extra edge $H_1\to H_3$ in $G_{(b)}$.) [We realize now that this point was never made explicit, and we will definitely revise this example and the discussion of maximality to reflect this discussion. We’d like to thank you for surfacing this confusion so it can be properly addressed in the final version.] So, since these models cannot be distinguished on the basis of the observed data P(X), what should we do? We argue that we should only remove an edge if its removal can be justified on the basis of what we actually observe, i.e., the data X. Although $P_{(a)}$ and $P_{(b)}$ can (in principle) be distinguished, we would need to observe H to do so, which we cannot do in practice. More generally, here is what is happening: There are, of course, multiple DAGs that are Markov to a given distribution, and the question is how we decide on the correct “minimal” representation. Without latents, there is no ambiguity: We can always test all possible CI relations and obtain a complete picture, yielding a minimal I-map.
With latents, we must be careful: - Of course, if we can check CI relations over all of (X,H), then the usual notion of a minimal I-map prevails. But in practice, we cannot access P(X,H) since H is unobserved. - Thus, in practice, we should restrict our attention to information about P(X) only. In this case, we argue that we should only remove an edge if its removal can be justified on the basis of information about P(X) _only_. This is the essence of maximality: We only remove an edge if its removal follows from the observed data X. Otherwise, we remain agnostic: We do not want to remove an edge that may in fact reflect a “real” dependence over H. The intuition is the same as for maximal ancestral graphs, and as a result, our characterization of the maximal measurement model aligns with the spirit of the maximal ancestral graph. Measurement models have latent variables, and we only have access to partial information (i.e., the observed variables). Since two measurement models can encode the same set of conditional independencies over X and, as you have pointed out, the absence of edges encodes nontrivial information, the removal of an edge should be justified carefully on the data we have available. **References** Thanks for pointing that out! We didn’t realize the author list had changed since we drafted the paper. We will update it accordingly.
Summary: - The paper studies causal representation learning, or more precisely the identification of the causal graph between observed and latent variables, from interventional data with unknown intervention targets. - Its main contribution is an identifiability result for the causal graph. This theorem makes no assumptions on the functional form of the causal model or the mixing function, but is based on several graphical requirements. In various ways, the paper requires that different latents affect different sets of observed variables. - The authors spend a large part of the paper discussing these assumptions and providing sufficient conditions for them. - In the end, they also briefly demonstrate their algorithm on toy data. - Unlike most of the CRL literature, the paper does not study the identification of the latent *variables*. The authors delegate this task to "existing work, since one can [...] use deep latent-variable-models to infer the latent distributions from the latent structure". I have read the author's rebuttal. They have addressed my questions clearly. Strengths: - It is great that the authors can prove identifiability from observational, unlabelled data, and without functional assumptions. This makes the results potentially quite practical, barring limitations from the graphical assumptions (see below). - While I have not been able to check the proof in detail and I do have some questions (see below), I overall believe that the key results are correct. - The paper is very thorough, with precise statements, extensive discussion, useful examples, and thorough appendices. There is a lot in here that may be useful beyond the concrete identifiability result. - It is also extremely well-written, really a joy to read. I appreciate the frequent signposting. Great job! 
Weaknesses: - Different from virtually all other CRL works, the authors choose to focus *only* on the identifiability of the causal structure and entirely disregard the identification of the causal variables. - Arguably, in most applications of CRL, the latent variables are at least as important as a result. - The strategy of solving the structure learning problem first and delegating the variable identification to a latent-variable model makes sense, but deserves a discussion that goes beyond the two lines that the authors have reserved for it. - Could you perhaps quote or sketch what kind of guarantees on the identification of the latent variables one can expect when following such a two-step procedure? - As with most of the CRL literature, a key question is whether the assumptions are too unrealistic or too difficult to verify to make the results useful beyond pure academic curiosity. I am particularly worried that assumptions 1(c), 1(d), and the lack of imaginary subsets do not apply to typical CRL settings. - For instance, I find it difficult to imagine these assumptions applying to any of the systems sketched in lines 18-20 in the introduction. Could the authors discuss this and perhaps provide some semi-realistic examples of systems that satisfy them? - Negative results are equally valuable though, and I appreciate the counter-examples that show the lack of identifiability when these assumptions are violated. - I am also concerned about the restriction to maximal measurement models. - This seems to be a bit at odds with Occam's razor: if multiple models explain the data, why should we focus on the most complex model that explains the data? Should we not identify the family of all models, or the simplest model? - It would be great if the authors could comment on how strong this assumption is and in what kind of systems they expect it to be satisfied. - The experiments are very limited and really just a minimal proof of concept.
- It would make the paper stronger if the authors would design experiments that test whether the approach scales to interesting problems and that test how robust the approach is to violations of the assumptions. - In addition, a comparison to other methods (for instance on problems that satisfy functional assumptions made by other papers) would be interesting. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In line 150, the authors stress that they consider the *set*, not the tuple, of interventional distributions. What motivates this choice? - Is Assumption 1(b) the same as the common assumption of faithfulness? - In Theorem 3.4, what does "using CI information only" mean exactly? Could we get stronger results when using the full distribution(s), not just the conditional independence patterns? - In Theorem 3.4, what does "G is identifiable" mean exactly? Identification up to a graph isomorphism (so for instance allowing for a permutation of the latent variables)? - In line 227, what is meant by "sequel"? - If we had a dataset with multi-target interventions in addition to the single-target interventions, would that allow for stronger statements? Perhaps we could relax the requirement of not having imaginary subsets? - I'm a bit confused why we can have identifiability of the latent graph except for the direction of isolated edges $a \to b$. It seems that any other isolated chain, like $a \to b \to c$ disconnected from all other latents, would be similar. Could the authors provide some intuition for why this latter graph can be identified, while the isolated edge cannot? - Is the interventional MEC a subset of the IEC? - In line 372, what are m and n? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: - The authors are very clear about the assumptions of their theoretical results. - Nevertheless, I believe it deserves more discussion whether these assumptions fit realistic problems (see above). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments! We're glad you find the paper a joy to read. **“Focus only on the causal structure”** We completely agree that learning latent distributions is important! At the same time, without causal structure, latent distributions may not be interpreted causally, and therefore we suggest that gaining an understanding of latent causal structures is of equal importance. In some sense, learning the structure is necessary: To understand causal relations, one needs to know what happens when we intervene on the learned features. Therefore, this paper studies the equally important problem of structure learning, and we believe that it lays the foundation for future research on CRL. In this sense, since most related work focuses on the latent distribution, we are studying a complementary but equally important aspect: What are the minimal conditions needed to recover the latent causal structure? We believe that understanding these two problems (structure vs. representations) separately is crucial to understanding what assumptions matter, why they matter, and for which aspects of the problem they are needed. **“The assumptions are too unrealistic or too difficult to verify”** Please see the global response. **“maximal measurement models”** Maximality helps uniquely identify latent causal graphs: in many cases, there could be multiple measurement models that explain the observed interventional distributions, but only one maximal measurement model (Appendix D). This construction is similar to the definition of maximal ancestral graphs (MAGs, see L180-181). See [c] for more details on MAGs, which are standard graphical models for incorporating latent variables. Intuitively, a conditional independence statement could be explained by multiple combinations of missing edges. We choose the simplest explanation, with the fewest missing edges. [c] Richardson and Spirtes. "Ancestral graph Markov models."
The Annals of Statistics (2002). **“The experiments are very limited”** Please see the global response. **“set vs tuple”** This is a good observation! Because we allow interventional distributions to coincide under different interventions and the interventional targets are unknown, samples from identical interventional distributions cannot be told apart. Thus, we consider sets instead of tuples, since this allows for a more realistic setting where different interventions can be indistinguishable. Moreover, learning from a tuple is easier than learning from a set: one can reduce the tuple problem to the set problem by removing duplicates from the tuple. Thus, there is no loss of generality in our setting. **“Assumption 1(b) and faithfulness?”** Assumption 1(b) is substantially weaker than the usual faithfulness assumption (L166-168 and the detailed discussion in Appendix C.1). Here, we only consider pairwise, marginal independencies between observed variables. Ordinary faithfulness includes full conditional independencies of arbitrary order, including latent variables. **“In Theorem 3.4, what does "using CI information only" mean exactly?”** Yes, one could use more information about the distributions to potentially orient more isolated edges, and this is a natural direction for future work. Our results imply that this will require additional assumptions. One example is direct I-faithfulness [d], which uses distributional information (L368). [d] Squires, Chandler, Yuhao Wang, and Caroline Uhler. "Permutation-based causal structure learning with unknown intervention targets." Conference on Uncertainty in Artificial Intelligence. PMLR, 2020. **Confusion about isolated edges a->b** Your intuition is right: a->b->c cannot be oriented either. Note that a->b is an isolated edge, because the only parent of b is a and c is a child of b (Definition 3.3). Thus one can reverse it so that we have a<-b->c.
Now, b->c is an isolated edge, and one can reverse it again to get a<-b<-c. Thus a->b->c and a<-b<-c are in the same IEC, because there exists a sequence of isolated-edge reversals between them. Theorem 5.3 shows that these two graphs are indistinguishable using CI information only. We apologize for the confusion and feel that there is a potential misunderstanding due to the naming of “isolated edges”. An isolated edge X->Y does not mean that X and Y are disconnected from all other nodes. In fact, X and Y can still have outgoing edges, and X->Y is not just an isolated connected component. For instance, in your example, a->b is an isolated edge but is connected to other nodes as well. **“In Theorem 3.4, what does "G is identifiable" mean exactly? Identification up to a graph isomorphism (so for instance allowing for a permutation of the latent variable)?”** This is correct: up to a different labeling of the latent variables (graph isomorphism). But such a reordering is trivial because of Assumption 1(c): different latent variables will have different sets of observed children, so one can use the children set to uniquely identify each latent variable. **“If we had a dataset with multi-target interventions in addition to the single-target interventions, would that allow for stronger statements? Perhaps we could relax the requirement of not having imaginary subsets?”** This is a good question! Extending our theory to multi-target interventions is an exciting direction. Having access to multi-target interventions would definitely help relax Assumption 2 and maybe allow for stronger results. **“Is the interventional MEC a subset of the IEC?”** The interventional MEC is a bit different from the IEC: the IEC is for unknown interventions, while the interventional MEC is for known interventions. **“In line 372, what are m and n?”** Sorry for the confusion. We define m and n as the number of latents and observed variables, respectively, on line 103. We will clarify this again in the experiment section.
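The reversal sequence described in this exchange can be sketched in a few lines of Python (our illustration; `parents` and `reverse_edge` are hypothetical helpers that only manipulate edge sets and do not implement Definition 3.3 itself):

```python
def parents(dag, v):
    """Parents of v in a DAG represented as a set of directed edges."""
    return {u for (u, w) in dag if w == v}

def reverse_edge(dag, edge):
    """Return a copy of the DAG with one edge reversed."""
    u, v = edge
    assert edge in dag
    return (dag - {edge}) | {(v, u)}

g1 = {("a", "b"), ("b", "c")}      # a -> b -> c
assert parents(g1, "b") == {"a"}   # a is the only parent of b, as noted above

g2 = reverse_edge(g1, ("a", "b"))  # a <- b -> c
g3 = reverse_edge(g2, ("b", "c"))  # a <- b <- c
assert g3 == {("b", "a"), ("c", "b")}
# Per Theorem 5.3 as quoted in the rebuttal, g1 and g3 lie in the same IEC:
# they are connected by a sequence of isolated-edge reversals.
```

This is only a bookkeeping sketch of the two reversals; deciding *which* edges count as isolated in a general graph requires the paper's Definition 3.3.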
--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. You have answered my questions thoroughly and clearly. At the moment, I have no follow-up questions, though I will think about your work more in the next days. --- Reply to Comment 1.1.1: Comment: We're pleased to learn that your concerns have been resolved. Feel free to reach out with any future questions you may have.
Summary: this paper aims to learn the causal structure in the latent space by using interventional data, where the intervention targets are unknown but satisfy certain restrictions. The paper's focus is to provide theoretical analysis showing under what assumptions we can recover (and up to what level) the causal structure (including the bipartite DAG between observed variables and latent variables, and the DAG within the latent variables). Strengths: 1 - the paper deals with a very challenging, or even ill-posed, problem: recovering the causal structure in latent space. 2 - the theoretical part seems to be sound. I have checked the illustration and the proofs in Appendix C; the idea of the necessity of the assumptions in (1) is well presented. 3 - recovering latent causal structure has great potential for the AI or AGI area. Weaknesses: My main concern is that what this paper presents lacks real-world relevance. This makes me doubt whether this work can be helpful for practical usage. It seems to be more of an analytical deduction: given certain assumptions, certain things can be achieved. But whether these assumptions are relevant in the real world is unclear. Some details 1) Can this work be evaluated from a "causal representation learning" perspective? For example, image pixels are generated from some latent concepts/entities, which seems to be quite aligned with the motivation of this work; can this work be experimented on any such case to justify its real usefulness? 2) justification of the required assumptions: many assumptions used in this work are untestable, although the authors discussed some of them and pointed out that they are not to be ignored, which is fine. But without real-world relevance, how can I know in what situation I should apply the proposed algorithm? For example, a complete family of targets seems to be too strong from the perspective of "causal discovery from interventional data".
3) Better restructure the paper's writing: overall, the paper is very dense and lacks intuition. Regarding identifying the latent variables (or at least detecting the number of latent variables), one sentence in line 245 is very interesting: "Proposition 4.1 suggests that we assign a latent variable to each maximal valid subset". I think this has a potentially nice intuition about identifying latent variables by using dependencies among the observed variables. I suggest the authors provide an overview, with intuitions about the key idea, rather than deferring them to Sections 4 and 5. 4) The claim of Example 1 is improper: even with an unknown interventional target, and by intervening once, we can still recover the orientation between X1 and X2. You can further check whether there is marginal independence between X1 and "whether the intervention is performed", and between X2 and "whether the intervention is performed"; this information can further help you identify the causal direction. You can check more details in [1]. [1] Mooij, J. M., Magliacane, S., & Claassen, T. (2020). Joint causal inference from multiple contexts. The Journal of Machine Learning Research, 21(1), 3919-4026. My second concern is the weakness of the experiments. As the authors pointed out, "Compared to these existing works, our focus is on nonparametric models with unknown, hard, single-node interventions on the latent variables". I think this is a clear configuration, and we can certainly conduct comparisons with other SOTAs (such as those using parametric approaches). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) can we give a definition of clique in the main body? 2) in lines 224-226, the paper says "In other words, G can be maximally identified in the sense that any edge in the latent space that isn’t oriented cannot be oriented from the given list of interventions using CI information only: Additional assumptions are needed (e.g. conditional invariances and direct I-faithfulness)."
I want to know: if the additional assumptions are included, how much more can we achieve? From my perspective, conditional invariance and direct I-faithfulness are fair assumptions; they basically say that "when you perform an intervention, the data distribution should change in a rational way; otherwise, the intervention cannot introduce detectable changes and you can exploit nothing from the interventional data". Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and suggestions! **“My main concern is that what this paper presents lacks real-world relevance. This makes me doubt whether this work can be helpful for practical usage.”** For example, one practical application is topic models. In topic models, the pure child assumption is fairly common [a], and it is strictly stronger than Assumptions 1(c) and 1(d) (Remark 4.2). We show in the paper (Section 4.3) how to identify causal graphs under the pure child assumption. Applying our results to image data is a very interesting future direction. [a] Arora, Sanjeev, et al. "A practical algorithm for topic modeling with provable guarantees." International Conference on Machine Learning. PMLR, 2013. **“justification of the required assumptions: many assumptions used in this work are untestable.”** This paper studies the theoretical limit of nonparametric identifiability of latent causal graphs, and in Appendix C we show that our assumptions are tight. We feel that understanding the capabilities and limits of these assumptions can build the foundation for future work on CRL. On the other hand, our empirical results (Section 6) show that even when our assumptions are not enforced, we can still get approximate recovery with a low error rate as long as graphs are generated with sparsity. Finally, the complete family of targets is needed to get exact graph recovery (Section C.4). One can easily relax this assumption to get partial identification (i.e., lumping some latent variables together). **Better re-structure the paper writing** Thanks for the suggestion. You’re right that the intuition behind our proof to identify latent variables is to use dependencies among the observed variables. By examining what’s invariant and what’s changing, one can recover the latent graph. **Claim of Example 1 is improper.
even with an unknown interventional target, and by intervening once, we can still recover the orientation between X1 and X2. You can further check whether there is marginal independence between X1 and "whether the intervention is performed", and between X2 and "whether the intervention is performed"; this information can further help you identify the causal direction. You can check more details in [1].** Typically, one needs additional information to identify the interventional target. For instance, [b] uses direct I-faithfulness. Once the interventional target is known, one can solve the isolated-edge orientation problem. For JCI, could you clarify what you mean by “whether the intervention is performed”? If you meant the observed context variables, then one still needs observed context variables that satisfy assumptions on how system variables and context variables interact. [b] Squires, Chandler, Yuhao Wang, and Caroline Uhler. "Permutation-based causal structure learning with unknown intervention targets." Conference on Uncertainty in Artificial Intelligence. PMLR, 2020. **“Comparison with other SOTAs”** Our paper studies a general setting that allows arbitrary nonlinear transformations between latents and observed variables and among the latents. It is likely that other methods utilizing functional assumptions like linearity can perform better when the functional form is known. Since our paper is primarily theoretical, our experiments are just a proof of concept to demonstrate that even for nonlinear transformations one can still recover the measurement model with low error. **“Can we give a definition of clique in main body?”** Thanks for the suggestion. We’ll add the definition to the main body. A clique is a subgraph in which every pair of distinct vertices is adjacent. **“I want to know, if the additional assumptions are included, how much further we can achieve?”** With an additional assumption like direct I-faithfulness [b], one can identify isolated edges.
In this paper, we want to study nonparametric identifiability with minimal assumptions. Even without assumptions like direct I-faithfulness, interventions **can** still introduce detectable changes. This is why all the non-isolated edges can be oriented (Theorem G.7). Our results demonstrate how to orient non-isolated edges without making additional assumptions. Thus, additional assumptions are not strictly necessary. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Since the reviewer has not replied, I have been asked to respond. It seems to me that most of the reviewer's concerns are addressed, and I will take this into account unless they reply here further. Best, the AC --- Reply to Comment 1.1.1: Comment: Thanks for your reply! We look forward to additional feedback from the reviewer, and we really appreciate your commitment to the quality of the review.
Summary: The paper discusses the identification and reconstruction of latent causal graphs from unknown interventions in the latent space. The main focus is on uncovering the latent structure in a measurement model, where the dependence between observed variables is less significant than the dependence between latent representations, without parametric assumptions. The paper presents a characterization of the limits of edge orientations within the class of Directed Acyclic Graphs (DAGs) induced by unknown interventions. The paper concludes with an experimental evaluation that shows the recovery of the causal graph using structural Hamming distance as the error metric between the true and learned causal structure. ********** POST REBUTTAL ******* Thank you to the authors for their responses. I’m satisfied with the clarifications and have increased my score. Strengths: Creating models that identify the causal structure in scenarios where parametric assumptions are not applicable is an important problem in causal inference. The proposed approach uses interesting and clever concepts to provide a novel perspective on latent causal graphs. The proposed approach provides an intriguing perspective on the DAGs’ equivalence class associated with unknown interventions. Weaknesses: Some details of the paper could be further clarified, such as why established concepts of Markov equivalence classes are not used, or why non-parametric learning of causal structure such as in Gao, et al. (2020) or Azadkia et al. (2021) is not considered. Another aspect to consider is the fact that the evaluation is done with two settings and 100 runs for only 4 combinations of M, N. In general, the paper's main contributions seem interesting from a theoretical perspective, and because of that a more thorough discussion could have improved the paper to compensate for the limited evaluation.
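For reference, the structural Hamming distance mentioned in this review can be sketched in a few lines (our illustration, not the paper's code; it uses the common convention that a reversed edge counts as a single error, which may differ from the paper's exact implementation):

```python
def shd(true_edges, learned_edges):
    """Structural Hamming distance between two directed graphs given as
    sets of (parent, child) edges; a reversed edge counts once."""
    true_edges, learned_edges = set(true_edges), set(learned_edges)
    missing = true_edges - learned_edges
    extra = learned_edges - true_edges
    # Edges present in both graphs but with opposite orientation.
    reversed_pairs = {(u, v) for (u, v) in missing if (v, u) in extra}
    return len(missing) + len(extra) - len(reversed_pairs)

# A reversed, missing, or extra edge each contributes 1 to the error.
assert shd({("a", "b")}, {("b", "a")}) == 1
assert shd({("a", "b"), ("b", "c")}, {("a", "b")}) == 1
assert shd({("a", "b")}, {("a", "b"), ("a", "c")}) == 1
```

An SHD of 0 means the learned structure matches the true graph exactly, which is why it is a natural error metric for the paper's graph-recovery experiments.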
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Could you elaborate on why it is necessary to rely on the concept of imaginary subsets? What was the main technical challenge in defining the additional concepts of imaginary subsets and isolated edges? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The paper does not describe potential societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your questions! We also agree that this problem is important! **“Some details of the paper could be further clarified, such as why it does not use established concepts such as Markov equivalence classes, or why it does not consider non-parametric learning of causal structure such as that in Gao et al. (2020) or Azadkia et al. (2021)”** Thanks for the question! These papers do not consider latent variables or the measurement model and hence are not applicable to our setting. Markov equivalence applies to observational data without interventions; thus, with interventions, standard graphical concepts such as Markov equivalence are not as useful. (Note that the MEC is of course still valid; it just does not account for interventions.) **“Another aspect to consider is the fact that the evaluation is done with two settings and 100 runs for only 4 combinations of M, N. In general, the paper's main contributions seem interesting from a theoretical perspective, and because of that a more thorough discussion could have improved the paper to compensate for the limited evaluation.”** Thanks for acknowledging our theoretical contributions! We have a detailed and thorough discussion of the assumptions and limitations of our theory in the appendix. For instance, we discuss why our assumptions are necessary in Appendices C and D. We also discuss in detail the difficulty with identifying bipartite graphs and why dealing with imaginary subsets is nontrivial, with many examples, in Appendix E. Currently, because our paper is primarily theoretical, the experiments are shown as a proof of concept. It would be an exciting future direction to extend our results to real-world datasets. Other methods that rely on functional assumptions could potentially surpass our approach when supplemented with extra functional information (such as linearity). Nonetheless, our theory holds greater generality as it pertains to a wide range of nonlinearities. 
**“Could you elaborate on why it is needed to rely on the concept of imaginary subsets?”** Imaginary subsets arise because we allow arbitrary connections between the latent variables. This is explained very briefly at L196-200, and we are happy to include more detail on this as follows: It is possible that two observed nodes can stay d-connected under any interventions even when they do not share the same parent. The densely connected latent graph makes them appear as if they share the same parent. The existence of imaginary subsets complicates the identification of the bipartite graph (Examples 8 and 9). In this paper, we show additional assumptions one can make to rule out imaginary subsets (Section 4.2), including one testable assumption (no fractured subset, Corollary 4.7). We also show how to identify the bipartite graph even in the presence of imaginary subsets under pure child assumptions (Section 4.3). These are explained in detail in both Section 4 and Appendix E. **“What was the main technical challenge in defining the additional concepts of imaginary subsets and isolated edges?”** The main technical challenge with defining imaginary subsets is finding a concise way to encapsulate the difficulties with learning bipartite graphs and defining them in a way that’s useful for guaranteeing identifiability. For instance, in Section 4.2, we define fractured subsets, which are a testable necessary condition for imaginary subsets. This is made possible by our precise notion of imaginary subsets. The main technical challenge with isolated edges is to identify whether they are the only non-orientable edges under unknown interventions, which we show by introducing isolated equivalence classes (Section 5). --- Rebuttal Comment 1.1: Title: Comment Comment: Thank you for your answers, which clarify my questions and greatly improve the content of your paper. 
Please: - include in the main body of your paper the assumptions detailed in appendices C and D, - add a small discussion that details your answer (about how your model deals with latent variables, etc.), and - add the description of the main technical challenge to the problem description. Other than that, I don't have further suggestions beyond what the other reviewers pointed out. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our response, and we will definitely include these changes in our updated version. Your suggestions have helped us clarify our main ideas better. Do you have any further questions or clarifications? If you are satisfied with the response, we hope you will consider increasing the score.
Rebuttal 1: Rebuttal: We want to thank all the reviewers for their thoughtful comments and suggestions! We also want to address some common questions. **Assumptions** Two reviewers have questions about the real-world relevance of our assumptions (bZuh, tfcU). First of all, we completely agree with Reviewer tfcU that negative results are equally valuable. One of the main objectives of this paper is to study the theoretical limits of nonparametric identifiability of latent causal graphs. The fact that our assumptions are tight (L164-176 and Appendix C) is crucial: Any hope of relaxing these assumptions will require making alternative assumptions. We show what is possible and impossible under this setting and why additional assumptions are needed. In particular, we show in the paper why Assumptions 1(c) and 1(d) are necessary for our problem (Appendix C.2 and C.3). A practical example where such assumptions might be satisfied is topic modeling. In topic modeling, one often assumes the existence of pure children (or anchor words) [a]. But our 1(c) and 1(d) assumptions are strictly weaker than the pure child assumption (Remark 4.2), and we show how to identify the graph under the pure child assumption even with imaginary subsets (Section 4.3, Theorem 4.8). [a] Arora, Sanjeev, et al. "A practical algorithm for topic modeling with provable guarantees." International conference on machine learning. PMLR, 2013. **Experiments** Three reviewers have questions about the limitations of the experiments (3v5g, bZuh, tfcU), although Reviewer 3v5g does agree that the paper is “interesting from a theoretical perspective”. We also appreciate Reviewer Yp1n for acknowledging that the theoretical contribution of this paper is “already sufficient on its own”. Since our paper is primarily theoretical, the purpose of our experiments is simply to verify the theory, illustrate that it is easy to implement, and serve as a proof of concept, as Reviewer tfcU has suggested. 
It is likely that other methods which use functional assumptions might outperform our method if the additional functional information (i.e., linearity) is available. Nevertheless, our theory is more general and applies to arbitrary nonlinearities. We also want to apologize to Reviewer Yp1n for the confusion at L377. To clarify, we simply mean that we did not enforce Assumption 1 and maximality strictly in the experiments, and in spite of this, the method still performs well. We will modify this sentence to say: “The empirical results show that our method is robust at recovering DAGs with low errors when the graphs are generated with sparsity.” We suspect this is a combination of two factors: 1) The simulated DAGs are sparse, which intuitively suggests they are likely (but not guaranteed) to satisfy Assumption 1, and 2) There is some mild robustness to misspecification. Since we know the assumptions cannot be relaxed, we know that violations of the assumptions must lead to nonidentifiability; however, the _severity_ of nonidentifiability could be mild (we stress that we do not have any justification of this beyond the encouraging results of the simulations). This is an interesting observation we look forward to investigating more deeply in future work. Pdf: /pdf/28ea7823c66e1b68948690b4faaa6a18c1734dc6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the problem of nonparametrically constructing latent causal graphs given unknown _interventions_ in a measurement model. In particular, the paper gives a constructive proof establishing sufficient conditions under which one can find the latent causal structure between observed and hidden variables. The authors achieve this by first defining a notion of *imaginary subsets*, which are subsets of observed variables that are not children of a single latent variable. The authors show that if a graph does not have any imaginary subsets, then, up to some assumptions on the graph, one can identify a causal structure. The authors further give sufficient conditions that guarantee there don't exist imaginary subsets in the graph. **Update**: After reading other reviews and rebuttals, I have updated the score from 5 to 6. Strengths: - The authors identify sufficient conditions for when causal structure identification is possible and construct a causal graph under this setting. - The paper is thorough and identifies the limitations of the proposed construction. Weaknesses: - Overall, though, the paper is difficult to follow for non-experts. It heavily relies on jargon that is used commonly in causal inference and thus proves to be a difficult read. For instance, the authors use unknown *intervention* starting in the abstract, which is not defined until much later in the paper. - The paper would really benefit from a running example that could guide the reader through all the definitions, because in its current state it is difficult to follow. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In Theorem 3.5, one needs to know $\{P^{(I)}\}_{I\in \mathcal{I}}$. Isn’t this a very strong assumption, as one needs to exactly know the intervention distributions for all hidden variables since $\mathcal{I} = \{0, \{H_1\}, \dots, \{H_m\}\}$? Confidence: 1: Your assessment is an educated guess. 
The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Authors have addressed limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! **“Overall, though, the paper is difficult to follow for non-experts. The paper would really benefit from a running example that could guide the reader through all the definitions, because in its current state it is difficult to follow.”** Thanks for the suggestion! “Intervention” is a standard term in the causal inference community. We actually use Figure 1 as our running example, which covers most of our key graphical concepts. We will be happy to add more detail about this example in the camera-ready version. **“Theorem 3.5”** This is a common assumption, and in fact it is an open question whether or not it can be relaxed [a-b]. [a] also shows that this assumption is necessary (Section 3.3 of [a]). Since our focus is on the nonparametric side, we have not yet looked into optimizing the number of interventions. The complete family of intervention targets is needed to guarantee exact latent graph recovery (Appendix C.4), but one might be able to relax these assumptions to obtain partial recovery. On the other hand, knowing all the interventional distributions does not mean we know the interventional targets. In fact, we argue in the paper that two interventional distributions could be identical, and thus we might not even know the number of latents. Moreover, compared to classical work on interventions in graphical models, the current literature on causal representation learning (including our submission as well as [a,b]) considers a strictly harder setting, since the interventions are both latent and unknown. Since the intervention target is latent, we only have access to partial information (the observed variables) regarding the effect of interventions. And since the intervention target is unknown, compared to the known-target setting, we additionally face the unorientability problem of isolated edges, which is discussed in Section 5.2 (L357). [a] Seigal, Squires, Uhler. "Linear causal disentanglement via interventions." 
arXiv preprint arXiv:2211.16467 (2022). [b] Varici, Burak, et al. "Score-based causal representation learning with interventions." arXiv preprint arXiv:2301.08230 (2023). --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: Thank you for the explanations. After carefully reading other reviews and corresponding rebuttals, I have updated my score on the submission. --- Reply to Comment 1.1.1: Comment: Thank you for your support and for taking the time to thoroughly evaluate our submission!
Posterior Sampling for Competitive RL: Function Approximation and Partial Observation
Accept (poster)
Summary: The paper considers a zero-sum Markov game with unknown dynamics in the case of full and partial observations. The authors propose algorithms for finding a Nash equilibrium in the games, in which, at each iteration, virtual games with dynamics sampled from certain distributions are solved. The paper's main result is theoretical and consists of estimates for the rate of the algorithms' convergence. Strengths: The paper is aimed at solving an important problem. It is well structured and written in clear mathematical language. Weaknesses: Unfortunately, as a non-specialist, it is rather difficult for me to assess the significance of the obtained theoretical results. It is not entirely clear what useful conclusion the reader can draw from them concerning practical methods for solving zero-sum Markov games. The proposed algorithms seem very abstract since it is difficult to calculate the indicated distributions in experimental tasks. If I'm wrong and this is feasible, an example confirming this would significantly strengthen the paper. One more thing I doubt is the fact that in the algorithms, the authors apparently assume that a Nash equilibrium in Markov games exists. This is true if we consider games with an infinite horizon, but I'm not sure if this is true for games with a fixed number of steps $H$. Usually, in such games, it is assumed that the policy depends not only on the state but also on the step number. Otherwise, the existence of a Nash equilibrium is not obvious to me, and I would welcome a reference to this fact. The paper also contains typos: 178 - $V^*$ instead of $V_1^*$. 189 - $P^{\pi,\nu}_f$ instead of $P^{\pi,\nu}_h$ 246 - “begin” instead of “bening” Technical Quality: 3 good Clarity: 3 good Questions for Authors: On line 151, the authors introduce the reward function $r_h(o,a,b)$. It seems a bit exotic that it depends on an observation $o$ rather than on a state $s$. Is it important for the obtained results? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not have potentially negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer qywY for the valuable advice and questions. We will address your concerns below. **1. (1) The significance of the theoretical results? (2) Feasibility of the proposed algorithm.** **(1)** Our work studies this problem in alignment with the recent progress on developing reinforcement learning algorithms with **provable statistical efficiency**. In particular, we mainly focus on **a)** how to design posterior sampling algorithms, an important research direction in RL, with statistically low sample complexity, and **b)** how to incorporate more general function classes that cover a wide range of function approximators. Both questions have attracted a lot of attention in RL theory works (e.g., [1,2,3]; kindly find more details in Related Work). Concentrating on competitive RL with full and even the harder partial observability, we successfully identify such function classes, named Self-Play and Adversarial GEC, and design the first posterior sampling algorithms based on them, which fills a gap in competitive RL theory. More importantly, our algorithm enjoys provable sample efficiency guarantees. The dependence on $O(\sqrt{T})$ in our theorems indicates that our regrets are near-optimal. **(2)** Our algorithm remains feasible when instantiated for a concrete function class. To exemplify, consider a linear-mixture Markov game (MG) setting, which is subsumed by our framework. The transition model can be represented linearly as $\theta_h^\top \phi(s,a,b,s')$ where $\theta_h$ is the model parameter, assumed to lie in a finite set. Then, in the posterior sampling step, e.g., Line 3 of Algorithm 1, the function class $\mathcal{F}$ is finite, and we can calculate $p^t(f)$ for each $f\in \mathcal{F}$. The optimism term $V^*_f$ in the distribution can be estimated by many existing iterative methods given any model $f$. This optimism term is unavoidable even in a single-agent setting due to the feel-good Thompson sampling framework [2,3]. 
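To make the finite-class sampling step concrete, here is a hedged toy sketch of a feel-good-style posterior over a finite model class: each model is weighted by its data fit plus an optimism bonus $V^*_f$. The weight form, the temperatures `eta`/`gamma`, and all numbers are our own illustrative assumptions, not the paper's exact distribution $p^t(f)$.

```python
# Toy sketch (illustrative assumptions): sampling a model f from a finite class F
# with weight p(f) proportional to exp(gamma * optimism[f] - eta * data_loss[f]),
# i.e. favoring models that both fit the data and promise high value V*_f.
import math
import random

def posterior_weights(models, data_loss, optimism, eta=1.0, gamma=1.0):
    """Return normalized weights p(f) over the finite class `models`."""
    logits = [gamma * optimism[f] - eta * data_loss[f] for f in models]
    m = max(logits)                                # subtract max for numerical stability
    w = [math.exp(x - m) for x in logits]
    z = sum(w)
    return [x / z for x in w]

models = ["f1", "f2", "f3"]
data_loss = {"f1": 0.2, "f2": 1.5, "f3": 0.3}      # in-sample loss (lower = better fit)
optimism = {"f1": 0.9, "f2": 0.8, "f3": 0.1}       # estimated optimistic value V*_f
p = posterior_weights(models, data_loss, optimism)
f_sampled = random.choices(models, weights=p)[0]   # posterior sampling step
```

With these toy numbers, the well-fitting and optimistic model `f1` receives the largest weight, matching the intuition that the sampler trades off data fit against optimism.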
In addition, [2] mentioned that posterior sampling is relatively amenable to tractable implementation via ensemble approximations or stochastic gradient Langevin dynamics. In fact, in contrast to sample efficiency, how to design provably computation-efficient RL methods with general function approximation [1] and under partial observability [4,8] remains not well explored in existing works so far. Further designing such algorithms is orthogonal to our contributions. We deeply appreciate the reviewer making efforts to review our work and raise insightful questions. We humbly hope our work can also be assessed on the basis of its statistical results and theoretical contributions. **2. Existence of Nash equilibrium (NE)?** Many existing works, e.g., [5,6,7], show that NE exists for the finite-horizon fully observable MGs. These works show the existence using the finite-horizon property, the notion of best response, the Bellman equation, and the backward induction from the last step $H$ to the initial step $1$. For example, to find such a NE, for any state $s$ at step $H$, there always exists a NE $\big(\pi_H^*(\cdot|s), \nu_H^*(\cdot|s)\big)$ for a matrix game with the payoff matrix $[r\_H(s,a,b)]\_{a\in\mathcal{A},b\in\mathcal{B}}$. Then, for step $H-1$, given any state $s$, by the Bellman equation, we have a new payoff matrix as $[r\_{H-1}(s,a,b)+\mathbb{E}\_{s'\sim \mathbb{P}\_H(s'|s,a,b),a'\sim \pi\_H^*(\cdot|s'),b'\sim\nu\_H^*(\cdot|s')}(Q_H(s',a',b'))]\_{a\in\mathcal{A},b\in\mathcal{B}}$ where $Q_H(s,a,b) = r_H(s,a,b)$. Thus, we compute a NE for the above new payoff matrix as $\big(\pi_{H-1}^*(\cdot|s), \nu_{H-1}^*(\cdot|s)\big)$ for any $s$. Iterating from step $H$ to $1$, we obtain the NE for finite-horizon MGs as ${(\pi^*\_h,\nu^*\_h)}\_{h=1}^H$. It indicates that we can find the NE of a finite-horizon MG by decomposition into multiple matrix games whose NE always exists. 
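The backward-induction argument above can be illustrated in code. This is a hedged sketch of the standard construction, not the paper's algorithm: the tiny game instance is made up, and each stage matrix game's equilibrium is approximated by fictitious play rather than solved exactly.

```python
# Hedged illustration: approximate NE of a tiny finite-horizon zero-sum Markov game
# via backward induction, solving one matrix game per (step, state).
# Fictitious play is our own illustrative choice of matrix-game solver.

def fictitious_play(payoff, iters=5000):
    """Approximate an NE of a zero-sum matrix game (row player maximizes)."""
    n_a, n_b = len(payoff), len(payoff[0])
    count_a, count_b = [0] * n_a, [0] * n_b
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical action frequencies.
        a_star = max(range(n_a),
                     key=lambda a: sum(payoff[a][b] * count_b[b] for b in range(n_b)))
        b_star = min(range(n_b),
                     key=lambda b: sum(payoff[a][b] * count_a[a] for a in range(n_a)))
        count_a[a_star] += 1
        count_b[b_star] += 1
    pi = [c / iters for c in count_a]
    nu = [c / iters for c in count_b]
    val = sum(pi[a] * nu[b] * payoff[a][b] for a in range(n_a) for b in range(n_b))
    return pi, nu, val

def backward_induction(states, n_a, n_b, H, r, P):
    """r[h][s][a][b]: stage reward; P[h][s][a][b][s']: transition probability."""
    V = {s: 0.0 for s in states}              # value-to-go after the final step
    policy = [None] * H
    for h in reversed(range(H)):              # Bellman backup from step H down to 1
        policy[h], V_new = {}, {}
        for s in states:
            Q = [[r[h][s][a][b] + sum(P[h][s][a][b][sp] * V[sp] for sp in states)
                  for b in range(n_b)] for a in range(n_a)]
            pi, nu, val = fictitious_play(Q)  # NE of the stage matrix game
            policy[h][s] = (pi, nu)
            V_new[s] = val
        V = V_new
    return policy, V                          # V is now the step-1 value function
```

For instance, on a single-state matching-pennies game repeated over two steps, the recovered stage policies are close to uniform and the game value is close to zero, as expected for that toy instance.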
The work [8] studies the finite-horizon tabular partially observable MG, where a similar idea of backward induction can be used to show its existence (consider the history-dependent policies and regard the history as the state in fully observable MGs). **3. Reward function depends on $o$ instead of $s$.** Our reward function is modeled following the recent works on the finite-horizon partially observable RL. Many existing works on partially observable MDPs (e.g., [4]) and MGs (e.g., [8]) defined the reward dependent on $o$. Such a definition is also natural since only the observation $o$ can be accessed by learners. Moreover, our definition does not conflict with a state-dependent reward function, denoted as $R_h(o,a,b)$. At state $s$, we view $o$ as a random variable sampled from $\mathbb{O}_h(\cdot|s)$. We always have $$ R\_h(s,a,b) = \sum\_{o\in \mathcal{O}} \mathbb{O}(o|s) r\_h(o,a,b). $$ That is, $R_h(s,a,b)$ is intuitively an expectation of $r_h(o,a,b)$ based on the emission kernel. In this sense, whether the reward function depends on $s$ or $o$ does not introduce extra learning difficulty. **Reference** [1] Simon Du, et al. Bilinear classes: A structural framework for provable generalization in rl. ICML 2021. [2] Alekh Agarwal, and Tong Zhang. Model-based rl with optimistic posterior sampling: Structural conditions and sample complexity. NeurIPS 2022. [3] Tong Zhang. Feel-good thompson sampling for contextual bandits and reinforcement learning. SIAM Journal on Mathematics of Data Science 2022. [4] Chi Jin, et al. Sample-efficient reinforcement learning of undercomplete pomdps. NeurIPS 2020. [5] Yu Bai, Chi Jin. Provable Self-Play Algorithms for Competitive Reinforcement Learning. ICML 2020. [6] Qiwen Cui, and Simon S. Du. When are Offline Two-Player Zero-Sum Markov Games Solvable? NeurIPS 2022. [7] Qiaomin Xie, et al. Learning zero-sum simultaneous-move markov games using function approximation and correlated equilibrium. COLT 2020. [8] Qinghua Liu, et al. 
Sample-efficient reinforcement learning of partially observable markov games. NeurIPS, 2022. --- Rebuttal Comment 1.1: Title: Comments on the response Comment: Thanks to the authors for answering my questions. Considering them and the opinions of other reviewers, I am ready to raise my rating. --- Reply to Comment 1.1.1: Title: Thank you for raising the rating Comment: Thank you for raising the rating! We greatly appreciate you taking the time to read our rebuttal and reconsider our work. We are happy to answer any further questions to address the remaining concerns regarding our submission.
Summary: This paper investigates posterior sampling algorithms for competitive reinforcement learning (RL) with general function approximations in zero-sum Markov games (MGs). It introduces complexity measures for function approximation and proposes model-based self-play and adversarial posterior sampling methods to learn Nash equilibrium under partially observable states. The algorithms provide low regret bounds and can be applied to various tractable zero-sum MG classes in both fully observable and partially observable settings. Strengths: I think this work really pushes the Multiagent+PORL community's research efforts further by answering: > can we design a generic posterior sampling algorithm for MGRL in the context of function approximation? The main contribution of a model-based posterior sampling algorithm equipped with rigorous analyses is worthy of publication at NeurIPS. Weaknesses: I have only minor weaknesses for this work as follows: 1. For self-play, Algorithms 1 and 2 seem repetitive. One can essentially combine them for presentation. 2. More discussion needs to be added comparing the self-play and adversarial results (Thm 1 and 2). For example, the self-play considered here is zero-sum, hence Player 2 is the worst possible adversary. With this notion, comparing these two results will be helpful for future directions. It would be interesting to see if the main player in self-play (Thm 1 result) can handle the adversary in Alg 3. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: na Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer DqQm for recognizing the contribution of our submission. We will address your concerns below. **1. Combine the presentation of Algorithm 1 and Algorithm 2.** We will carefully polish our paper and revise the presentation of the algorithms in our next version. **2. More discussions on algorithms and the theoretical results for self-play and adversarial settings.** We politely point out that although Algorithm 1 can also be viewed as a learning algorithm for Player 1 with a worst possible adversary Player 2, it cannot be used to handle the adversarial setting in Algorithm 3 directly. The main reason is that the execution of Algorithm 1 requires both players to be controlled by the learner due to the setting of self-play, while Algorithm 3 cannot control Player 2 and views Player 2 as an arbitrary player. In Line 5 and Line 6 of Algorithm 1, we need to compute a policy $\underline{\nu}^t$ for Player 2, and then the exploration policy $\sigma^t$ is set to be $(\pi^t, \underline{\nu^t})$ for data collection as shown in Line 239 of the main text. Thus, Algorithm 1 forces Player 2 to take a specific policy $\underline{\nu^t}$ instead of an arbitrary uncontrollable one $\nu^t$ as in Algorithm 3. We will further highlight this difference in the revision. Regarding the theoretical result, the two different settings result in distinct regret metrics. We note that the result in Theorem 1 is inferred from Proposition 1 and Proposition 2, where Proposition 1 shows the regret for Algorithm 1 and Proposition 2 shows the regret for Algorithm 2 under the self-play setting. Then, to compare the results for Algorithm 1 and Algorithm 3, we focus on Proposition 1 and Theorem 2. 
The regret metric in Proposition 1 is defined as $\mathrm{Reg}^{\mathrm{sp}}\_1(T):=\sum\_{t=1}^T(V\_{f^\*}^\* - V\_{f^\*}^{\pi^t,\*})$, and the regret metric in Theorem 2 is defined as $\mathrm{Reg}^{\mathrm{adv}}(T):=\sum\_{t=1}^T(V\_{f^\*}^\* - V\_{f^\*}^{\pi^t,\nu^t})$. Since $V\_{f^\*}^{\pi^t,\nu^t}\geq \min\_\nu V\_{f^\*}^{\pi^t,\nu}= V\_{f^\*}^{\pi^t,\*}$ always holds, we have $\mathrm{Reg}^{\mathrm{adv}}(T)\leq \mathrm{Reg}^{\mathrm{sp}}\_1(T)$, which indicates that $\mathrm{Reg}^{\mathrm{sp}}\_1(T)$ is a tighter regret metric. Moreover, Proposition 1 and Theorem 2 have comparable upper bounds of the same order (ignoring numerical factors) but under different regret metrics. This indicates that the self-play algorithm for Player 1, i.e., Algorithm 1, can induce a tighter theoretical result, which reflects the power of self-play. We will add this discussion in our revision. --- Rebuttal Comment 1.1: Comment: The rebuttal addressed my concerns. My rating is unchanged considering the rebuttal and other reviewers’ concerns. --- Reply to Comment 1.1.1: Comment: We greatly appreciate you taking the time to read our rebuttal. We are happy to answer any further questions to address the remaining concerns regarding our submission.
Summary: This paper focuses on posterior sampling for competitive reinforcement learning, aiming to propose a model-based self-play posterior sampling method to approximate Nash equilibrium in both the self-play and adversarial learning settings. The theoretical analysis indicates that the proposed method achieves a low regret bound that scales sublinearly. Strengths: 1. This paper is well-organized and gives a thorough survey of the related work. 2. This paper seems to be a solid theoretical work, though I'm not entirely sure of that. Weaknesses: 1. The supplementary material is too long, so the reviewer may have missed some details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: line 56: "... partial observations into the posterior sampling framework under a MARL ...", so the question is: what is the difficulty of using posterior sampling in POMDPs? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer pkbQ for the valuable advice and questions. We will address your concerns below. **1. The supplementary material is long, so the reviewer may miss some details.** In our revised version, we will carefully revise the whole paper and move additional important details from the supplementary material to the main text to highlight our technical contribution. **2. What is the difficulty of using posterior sampling in partially observable RL?** Our work studies a more challenging partially observable Markov game (POMG) problem than the single-agent POMDP. When it comes to the multi-agent setting, i.e., POMGs, to the best of our knowledge, there are no prior works studying posterior sampling methods, particularly when incorporating function approximation. The difficulties of using posterior sampling in POMGs lie in the following aspects: **(1)** First, as a different learning framework, posterior sampling diverges fundamentally from existing POMG methods in its techniques. In posterior sampling methods, we need to propose a proper model sampling distribution incorporating a well-designed loss formulation fitting the POMG models, which does not exist in non-posterior-sampling POMG methods. Thus, this leads to different proof methods. Before our work, it was unknown how to design a statistically efficient posterior sampling algorithm for POMGs with a provable guarantee. **(2)** Second, the posterior sampling POMG method is not a direct extension of existing single-agent POMDP methods. We now have two coupled agents with different policies competing with each other in a min-max way instead of one agent pursuing a single maximization objective. In the POMG setting, we need to consider a more challenging minimax optimization problem. Obtaining such a competitive learning algorithm requires a novel algorithmic design methodology. 
**(3)** Third, compared to the fully observable Markov games (FOMGs), the intrinsic model structure differences between FOMGs and POMGs bring the challenges of extra emission kernels, unknown underlying states, and history-dependent policies. Thus, POMGs cannot be solved by the existing FOMG posterior sampling approaches. **(4)** Finally, when considering a more general function approximation setting in POMGs, novel multi-agent general function approximation conditions should be proposed to cover a wide range of known function approximation approaches. The conditions were unclear before our work. We tackle those challenges by successfully proposing unified statistically efficient posterior sampling algorithms for POMGs and FOMGs incorporating general function approximations. More importantly, our work covers both self-play and adversarial setups. We remark that the adversarial setting is not even studied in the existing posterior sampling method under full observability. --- Rebuttal Comment 1.1: Comment: Dear Reviewer pkbQ, We sincerely appreciate you taking the time to thoroughly evaluate our paper and provide insightful questions. In our rebuttal, we aimed to carefully and comprehensively answer your questions. We sincerely hope our responses and clarifications can adequately alleviate your initial concerns about our work. For your concerns about the long supplementary material, since our work presents a novel RL algorithm with theoretical guarantees, we include rigorous proofs and analysis in the supplementary material to support our results. The detail in the supplementary material is necessary for a technically sound paper. We truly value the discussion period and hope to address any concerns to the best of our ability on the last day of this period. Please do not hesitate to let us know if there are any lingering concerns or unclearness in our rebuttal. We would be more than happy to address them.
Summary: This paper investigates posterior sampling algorithms for competitive RL in the context of general function approximation. The authors propose the self-play and adversarial generalized eluder coefficient (GEC) as complexity measures for function approximation, capturing the exploration-exploitation trade-off in MGs. They further provide low regret bounds for the proposed algorithms that scale sublinearly with the proposed GEC and the number of episodes T. Strengths: 1. Two new generalized eluder coefficients are proposed as complexity measures for competitive RL with function approximation. 2. The authors also propose a novel model-based posterior sampling algorithm with self-play to learn the Nash equilibrium with provable regret bounds. 3. The technical contribution of the paper looks solid. Weaknesses: Currently, the technical results in the main paper are hard to follow. It would be better if the authors could include some explanatory passages that convey the technical details intuitively, so as to highlight their technical contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Gu3i for the valuable advice and questions. We will address your concerns below. **1. Explain the technical results intuitively and the associated contributions.** We provide an intuitive explanation by elucidating the connections between the following four aspects: **(1) Complexity conditions:** Our paper defines two general complexity conditions, the Self-Play GEC and the Adversarial GEC, for competitive reinforcement learning (RL). Since we consider a Markov game setting, in contrast to the single-agent setting, our definitions of the complexity conditions in Definitions 1 and 3 reflect the competitive nature of the two players via the defined sequence of policies, and fit the two important Markov game scenarios, i.e., the self-play and adversarial settings. These conditions were not proposed in prior works and cannot be directly generalized from the single-agent setting. The inequalities in these definitions have a particular meaning: the left-hand side is the prediction error, defined based on the value difference, while the right-hand side is the training error, defined on a problem-specific loss function, with a multiplicative factor $d_\mathrm{GEC}$, plus a small burn-in error. We use these conditions to characterize the exploration hardness of online competitive RL. Let $\mathcal{F}$ represent such model classes. Intuitively, a sequence of approximation functions in $\mathcal{F}$ satisfies the following: under the self-play or adversarial scenarios, if the functions induce a small in-sample training error on a well-explored collected dataset, then the out-of-sample prediction error on the next generated trajectory is also small. 
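Schematically, the condition described above can be written as follows (illustrative notation of ours, not the paper's; the precise statements are Definitions 1 and 3):

```latex
% Schematic GEC-type condition: out-of-sample prediction error (left)
% is controlled by the in-sample training error scaled by d_GEC (right),
% plus a small burn-in term. Symbols here are illustrative.
\underbrace{\sum_{t=1}^{T} \mathbb{E}\big[\,\text{value difference at round } t\,\big]}_{\text{prediction error}}
\;\lesssim\;
\bigg[\, d_{\mathrm{GEC}} \cdot \underbrace{\sum_{t=1}^{T} L^{t}(f^{t})}_{\text{training error}} \bigg]^{1/2}
\;+\; \underbrace{\varepsilon_{\mathrm{burn\text{-}in}}}_{\text{small}}
```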
Interestingly, as we show in Section 5, we prove in detail that many well-known function classes for function approximation in both fully observable Markov games (FOMGs) and partially observable Markov games (POMGs) can be subsumed in our defined Self-Play GEC and Adversarial GEC function classes, with different dimensional factors $d_\mathrm{GEC}$. We even propose a new function class, named Decodable POMG, that is covered by our framework. **(2) Algorithms:** The definition of the Self-Play/Adversarial GEC motivates us to design algorithms based on posterior sampling. In our proposed posterior sampling algorithms, we assign a larger probability to a model $f\in \mathcal{F}$ if its in-sample training error is small. Moreover, according to the characteristics of the self-play and adversarial scenarios in a competitive setting, we design algorithms from different perspectives. In particular, in Algorithm 1 and Algorithm 2 for the self-play setting, where we can coordinate both players for learning, we have a two-step posterior sampling strategy, where the first sampling step is for learning the Nash equilibrium policy and the second sampling step is for constructing the opponent's best response, in which the opponent assists learning by exploiting the main player's weakness. On the other hand, in Algorithm 3 for the adversarial setting, since the opponent's policy cannot be controlled by the learner, we do not have the second sampling step, which thus leads to a different algorithm design. We note that our self-play and adversarial learning algorithms are the first unified methods considering both full and partial observability. Moreover, in the construction of the posterior sampling distributions in all the algorithms, we add and customize the optimism terms according to the different learning scenarios, i.e., $V_f^*$, $V_f^{\pi^t,*}$, $V_f^{*,\nu^t}$, for learning efficiency in our proof. 
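As a toy illustration of the sampling rule above — assigning larger probability to models with small in-sample training error, plus an optimism bonus — here is a minimal sketch over a finite model class. This is entirely our own illustrative code, not the paper's algorithm: the function names, `eta`, `gamma`, and the example inputs are all hypothetical.

```python
import math
import random

def posterior_weights(prior, losses, optimism, eta=1.0, gamma=0.5):
    """Gibbs-style posterior over a finite model class:
    weight(f) ~ prior(f) * exp(-eta * training_loss(f) + gamma * optimism(f)).
    Models with small in-sample loss (and a large optimistic value) get
    more posterior mass."""
    logits = [math.log(p) - eta * l + gamma * v
              for p, l, v in zip(prior, losses, optimism)]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample_model(weights, rng=random):
    """Draw one model index from the posterior."""
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]
```

For example, with equal priors, equal optimism, and training losses 0.1 versus 5.0, nearly all posterior mass goes to the first model.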
Moreover, proving the regret bound itself requires a careful analysis integrating the Self-Play/Adversarial GEC, the loss functions for FOMGs and POMGs, optimistic posterior sampling, and the self-play and adversarial algorithm frameworks from a unified perspective. **(3) Main regret results:** For the self-play and adversarial settings, under different regret metrics, we prove upper bounds for the proposed posterior sampling algorithms by incorporating the newly proposed Self-Play and Adversarial GEC function classes and the associated complexity conditions. Intuitively, in both Theorem 1 and Theorem 2, we achieve near-optimal $O(\sqrt{T})$ bounds in terms of the number of episodes $T$, which justifies the statistical efficiency of our algorithms. Moreover, the results depend on two further factors, $d_\mathrm{GEC}$ and $\omega$. The factor $d_\mathrm{GEC}$, as defined in Definitions 1 and 3, represents the complexity of the Self-Play and Adversarial GEC function classes, whose value can be instantiated in concrete example subclasses as elaborated below. The quantity $\omega$ measures how well the initial prior distribution covers the optimal model $f^*$, which further reveals the size of the function space $\mathcal{F}$. In fact, we prove in our supplement that when $\mathcal{F}$ is finite, it equals $\log |\mathcal{F}|$, and when $\mathcal{F}$ is infinite, it is the log covering number of the space. **(4) Examples:** In Section 5, we present a number of examples of approximation function classes in both FOMGs and POMGs. In the supplement, we provide detailed and rigorous proofs showing that all these classes are subsumed by our proposed Self-Play and Adversarial GEC classes, indicating the generality of the proposed GEC classes. We additionally provide a new function class named Decodable POMG and show that it is also covered by our proposed function classes. 
We eventually calculate the concrete relation between $d_{\mathrm{GEC}}$ and the specific complexity measure in each of these classes. Thus, plugging the calculated value $d_{\mathrm{GEC}}$ into our main theorems leads to the theoretical guarantee for the instantiation of our proposed algorithms on each function class. --- Rebuttal Comment 1.1: Comment: Thank you very much for your response and clarification. --- Reply to Comment 1.1.1: Comment: We greatly appreciate you taking the time to read our rebuttal. We are happy to answer any further questions to address the remaining concerns regarding our submission.
NeurIPS_2023_submissions_huggingface
2023
A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning
Accept (poster)
Summary: The paper introduces A3FL, a backdoor attack tailored for FL. Unlike traditional approaches, A3FL adapts backdoor triggers in an adversarial manner considering global training dynamics, thereby enhancing its robustness and effectiveness. By accounting for discrepancies between global and local models in FL, A3FL ensures that the trigger remains potent even if the global model attempts to negate it. The paper demonstrates through extensive experiments that A3FL significantly surpasses existing backdoor attacks in terms of efficiency, even when faced with established defenses. Strengths: The paper is easy to follow. It addresses a crucial problem in federated learning. The authors have made code available, which makes it easy for others to replicate the study. Weaknesses: First of all, the paper needs editorial revision as it discusses the same thing multiple times. By making it shorter and to the point, there is more room to include extra results in the main content. The technical limitations of the paper are discussed as follows: Line 36-37: "the selected trigger is usually sub-optimal, which makes the attack less effective and stealthy as shown in experiments" - I would like to request some clarification and elaboration. Firstly, it would be beneficial for the readers if the authors could provide references or citations that substantiate the claim regarding the sub-optimality of the triggers in existing backdoor attacks. Additionally, it is crucial to note that in FL, semantic backdoor triggers are considered to be particularly stealthy. The stealthiness of semantic triggers arises because they leverage the inherent properties of images rather than altering pixels, making them less detectable. In this regard, it would be insightful if the authors could elaborate on what they imply by 'stealthy' in the context of semantic triggers. 
Line 45-46: "they only leverage local models of compromised clients to optimize the backdoor trigger, which ignores the global training dynamics" - However, it is noteworthy that some existing works, such as Neurotoxin[1], have been known to embed backdoors by considering parameters that remain robust to the training dynamics in FL systems. Please clarify the claims made in this regard and support them with relevant references or citations, particularly contrasting with what Neurotoxin has achieved. Also, differentiate the contribution of A3FL in this regard. Line 47-50: "they strictly regulate the difference between the local and global model weights to bypass defenses, which in turn limits the backdoor effectiveness" - While regulating the difference between local and global model weights can be a strategy to bypass defenses, it has been demonstrated in the literature that this approach can also lead to high backdoor efficiency[2]. Please provide a more comprehensive analysis of this aspect and substantiate the claim regarding the limitation of backdoor effectiveness in such approaches with references or citations that clearly illustrate this limitation. Line 170: "injected backdoor is rapidly eliminated since the global model is continuously updated by the server" - This statement is misleading. In reality, the persistence of backdoors in FL systems can be influenced by the timing of their injection. Existing research indicates that if a backdoor is introduced after the model has reached a stable state, it tends to persist longer before being eliminated. Line 174-175: "the global model is trained to directly unlearn the backdoor" - I would like to request the authors to provide references or citations that substantiate the statement. It is not clear how the backdoor trigger can be eliminated from the global model without knowing the specific patterns that activate it. 
In the A3FL workflow, the first step involves optimizing the trigger pattern based on the model parameters. Subsequently, the adversarial global model is optimized in each FL training round. This approach raises a question regarding the trigger pattern during evaluation. Unlike in training rounds where the trigger patterns vary due to FL dynamics, it is unclear what trigger will be used during the evaluation. Specifically, it is important to describe whether the adversary requires access to the global model to generate triggers during testing. If the adversary does require such access, it would be considered an adversarial attack rather than a backdoor attack. Conversely, if the adversary does not need access to the global model, the trigger for misclassification remains uncertain. A3FL uses an adversarial example generation strategy for the incorporation of backdoor triggers. In contrast, an approach called PerDoor[3] has been recently introduced, utilizing adversarial examples to achieve a similar purpose. It is essential for the authors to clearly explain the contributions and differentiating factors of A3FL in relation to PerDoor. The efficiency of A3FL must be evaluated in the presence of FLAME[4], a state-of-the-art defense mechanism against backdoor attacks in FL. Furthermore, A3FL needs to be compared against recent state-of-the-art attacks, like 3DFed[5]. A3FL should be evaluated using more practical benchmark federated learning datasets in the LEAF project[6]. The authors have only used the ResNet18 model as the underlying structure for all their experiments. For a more comprehensive evaluation, A3FL should be evaluated across a variety of models. [1] Z. Zhang et al., "Neurotoxin: Durable Backdoors in Federated Learning", ICML 2022. [2] H. Wang et al., "Attack of the Tails: Yes, You Really Can Backdoor Federated Learning", NeurIPS 2020. [3] M. 
Alam et al., "PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations", arXiv 2022. [4] T. Nguyen et al., "FLAME: Taming Backdoors in Federated Learning", Usenix Security 2022. [5] H. Li et al., "3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning", IEEE S&P 2023. [6] https://leaf.cmu.edu/ Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please address the issues discussed in Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper has several limitations, as discussed in the review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## reviewer k1k8 We thank the reviewer for the detailed suggestions to improve the clarity and enhance the evaluation of our work. > Q1: Line 36-37 Thanks for pointing out a place that could cause misunderstanding. In the context of this sentence (Line 34-37), we were discussing fixed-trigger backdoor attacks. The sub-optimality of such fixed-trigger attacks has been shown by previous works [1-3] and in Section 4.2 of our paper. We also thank the reviewer for pointing out the possible confusion w.r.t. the definition of stealthiness. In our case, we consider stealthiness in FL as the capability of bypassing defenses without harming the utility of the global model. We do not consider stealthiness in images, as local data is not directly seen by the server. Following this definition, semantic backdoor triggers are not stealthy since they inherit the sub-optimality of fixed-trigger backdoors. We will clarify the meaning of stealthiness and discuss semantic backdoors in the revision. > Q2: Line 45-46 Thanks for pointing out a place that could cause misunderstanding. In the context of this sentence (lines 44-45), we were discussing the limitations of trigger-optimization backdoor attacks[4][5]. Neurotoxin is a fixed-trigger attack and thus does not fall under the discussion in Line 45-46. As discussed in Line 34-37 and 95-96, the differences between A3FL and Neurotoxin are twofold. First, A3FL considers a worst-case scenario in which the global model is trained to directly unlearn the backdoor, while Neurotoxin does not have such an adversarial component. A3FL further optimizes the backdoor trigger to survive this worst-case scenario, thereby improving the backdoor's persistence. In comparison, Neurotoxin passively avoids embedding the backdoor in frequently changing model weights. As shown in Figure 17 of [6], Neurotoxin achieves only a minor improvement in the image domain, which is also supported by our experiments in Figure 2. 
> Q3: Line 47-50 We would like to use the ablation study in F3BA[4] to demonstrate our claim. In Appendix D.6 of [4], the authors wrote "Meanwhile, too high candidate parameters proportion for fully-connected layers can cause an obvious loss of ASR." The authors also conducted a grid search to find an optimal regularization proportion. This indicates that regularization can potentially harm backdoor efficiency. We conducted an experiment to further support our claim. We use L2-norm regularization to regulate the backdoor loss and vary the strength of regularization by adjusting the value of the balancing coefficient $\beta$. As observed in Table 7 in the attached PDF, the ASR drops as the strength of regularization increases. We will include this in the revised paper. > Q4: Line 170 Thanks for suggesting another factor (the timing of injection) that influences the persistence of the attack. We were focusing on the attack design perspective: when the injection timing is controlled to be the same, other baseline backdoors are eliminated in a shorter time than A3FL. We will modify this sentence to "injected backdoor could be eliminated since the global model is continuously updated by the server, especially when the global model has not reached a stable state". > Q5: Line 174-175 The FL server cannot access the specific backdoor trigger in practice. However, the compromised client has access to the trigger as an attacker, and can simulate a worst-case scenario where the global model is trained to directly unlearn the backdoor trigger, as discussed in lines 173-174. > Q6: The trigger pattern during the evaluation and the attacker's access to the global model. As discussed in Line 248-252, during the attack window (i.e., while compromised clients are selected to participate), the attacker can access the global model through compromised clients to optimize the trigger. After the end of the attack window, the attacker-compromised clients leave the FL procedure. 
The attack performance is evaluated using the trigger obtained at the end of the attack window. Following [6], the trigger pattern is no longer optimized after the attack window, so the attacker does not need to access the global model anymore. > Q7: Compare A3FL to PerDoor We will discuss the difference between PerDoor[15] and A3FL in the revision. PerDoor adopts an adversarial attack to generate the backdoor trigger. However, the trigger in A3FL is not an adversarial example. As discussed in Section 3, we optimize the backdoor trigger to survive an adversarially crafted global model. This pipeline can be seen as adversarial training in reverse, where the crafted global model corresponds to the inner maximization problem and the trigger optimization corresponds to the outer minimization problem. > Q8: More baselines: FLAME and 3DFed. We will discuss FLAME[13] in the revised paper. As for evaluation, we found that the code of FLAME is not publicly available and have not received a response from the authors after requesting the code. We will update the experimental results on FLAME once the code is received. We will discuss 3DFed[14] in the revised paper. However, we note that 3DFed was first published at S&P 2023, held in May 2023, and the paper was added to IEEE on 21 July 2023. To the best of our knowledge, 3DFed was not previously posted on arXiv. Therefore, we were not able to compare A3FL against 3DFed. Moreover, according to the concurrent-work policy of NeurIPS 2023, we are not required to consider concurrent work published within two months of the submission deadline. We also found that the code of 3DFed is not publicly available and will update the experimental results once the code is received. > Q9: A3FL should be evaluated using more practical benchmarks from the LEAF project. Thanks for suggesting more benchmarks. In the attached PDF, we further report the performance of A3FL on FEMNIST from the LEAF project in Table 3, as well as A3FL on VGG16 in Table 2. 
Observe that A3FL can still achieve high ASRs. --- Rebuttal Comment 1.1: Comment: >Supplementary results responding to **Q8: More baselines: FLAME and 3DFed**. We would like to follow up on this question and welcome further discussion from the reviewer. Since there is no official implementation of FLAME [13] available yet, we tried our best to reproduce FLAME following the paper and evaluated A3FL against it. Following the settings for image classification in Appendix B.3 of [13], we set $\epsilon = 3705$ and $\delta = 0.001$. We adopt `sklearn.cluster.HDBSCAN` as the implementation of the clustering algorithm adopted by FLAME. We set `min_cluster_size = N/2+1` and `min_samples = 1` following Appendix E of [13], where N is the number of sampled clients in each round. The experimental results are shown below (recall that P is the number of compromised clients):

| P | 1 | 2 | 5 | 10 | 20 |
|------------|-------|-------|-------|-------|-------|
| Neurotoxin | 9.74 | 10.34 | 37.24 | 92.41 | 97.7 |
| DBA | 9.46 | 11.22 | 13.33 | 55.96 | 90.61 |
| CerP | 18.55 | 22.41 | 88.75 | 99.76 | 99.85 |
| F3BA | 10.76 | 11.55 | 63.63 | 99.99 | 99.98 |
| A3FL | **61.75** | **91.97** | **100** | **100** | **100** |

Observe that A3FL still achieves the highest ASR against FLAME. We will include this evaluation in the revised paper. --- Rebuttal 2: Comment: I want to thank the authors for their efforts in responding to these queries. I respect the hard work put into this paper and trust that these suggestions will only enhance its quality. I am satisfied with most of the responses and am increasing my score. However, I have concerns over a couple of points. (1) I am not satisfied with the discussion that differentiates Neurotoxin from A3FL, considering the worst-case assumption. (2) The discussion on trigger unlearning is still not convincing, considering the threat model. 
--- Rebuttal Comment 2.1: Comment: We thank the reviewer for the response, and we are glad that most of the concerns are addressed. We hope that the following clarification can further alleviate your remaining concerns. > (1) I am not satisfied with the discussion that differentiates Neurotoxin from A3FL, considering the worst-case assumption. We will avoid the imprecise word "worst-case" (as can be seen in the discussion with the AC), yet we believe our distinction from Neurotoxin is clear and is not really relevant to this issue. We will try our best to reiterate the differences as follows: Neurotoxin uses a **fixed trigger** and focuses on identifying parameters that are less important to modify, thus improving the durability of the injected backdoor. In comparison, A3FL is a **trigger-optimization backdoor attack** focusing on how to optimize a persistent backdoor trigger that withstands potential defenses. This is achieved by simulating the defender's goal (i.e., mitigating the impact of the backdoor trigger) and optimizing the trigger accordingly to survive possible defenses. While we agree that both Neurotoxin and A3FL share the goal of improving the durability of the trigger, the intuition as well as the technical details differ significantly. We will discuss and clarify this in the revised paper. > (2) The discussion on trigger unlearning is still not convincing, considering the threat model. We appreciate the reviewer's comment, but it is not entirely clear which part is unconvincing to the reviewer. We will try our best to answer this based on our understanding. We would like to emphasize the threat model: the real server does not have access to the backdoor trigger or any private data held by clients. Meanwhile, the malicious client does not know the server-side defense strategy or movements. Thus, from the malicious client's perspective, it is hard to guess what the server might do. 
Instead, the malicious client can only simulate the defender's goal: mitigating the impact of backdoors (i.e., through trigger unlearning on the client side). The simulation is feasible because: 1) the malicious client has the actual trigger; 2) the malicious client has the received global model from the server. Again, this is a simulation from the client side (i.e., the malicious client uses the server model copy as well as its own data/trigger to mimic what could happen after the defense). The real server does not have access to the trigger, and trigger unlearning is done by the malicious clients. We hope this clarifies your concern.
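To make the client-side simulation concrete, here is a minimal toy sketch of the bi-level idea on a linear model. This is entirely our own illustrative construction: `a3fl_toy`, the linear model, and all hyperparameters are hypothetical stand-ins, not the paper's implementation. The inner loop mimics a defender unlearning the trigger from a copy of the global model; the outer loop updates the trigger so the backdoor survives on both the current and the unlearned model, with the adversarial term weighted by a similarity-scaled lambda (cf. the lambda = lambda_0 * sim(theta', theta) heuristic from the rebuttal).

```python
import numpy as np

def bd_loss(w, x0, delta, y_target):
    """Backdoor loss: squared error between the triggered input's
    prediction and the attacker's target label."""
    return (w @ (x0 + delta) - y_target) ** 2

def a3fl_toy(w, x0, y_target, y_true, lam0=1.0, inner=5, outer=50,
             lr_w=0.05, lr_d=0.05):
    delta = np.zeros_like(x0)
    for _ in range(outer):
        # Inner loop: simulate an "adversarial global model" w_adv that is
        # trained to unlearn the current trigger (map it back to y_true).
        w_adv = w.copy()
        for _ in range(inner):
            err = w_adv @ (x0 + delta) - y_true
            w_adv -= lr_w * 2 * err * (x0 + delta)
        # Weight the adversarial term by how similar w_adv stays to w.
        sim = w @ w_adv / (np.linalg.norm(w) * np.linalg.norm(w_adv) + 1e-12)
        lam = lam0 * sim
        # Outer step: update the trigger to keep the backdoor effective on
        # both the current model and the adversarially unlearned one.
        g = 2 * (w @ (x0 + delta) - y_target) * w \
            + lam * 2 * (w_adv @ (x0 + delta) - y_target) * w_adv
        delta -= lr_d * g
    return delta
```

On a toy instance (`w = [1, -1]`, `x0 = [0.5, 0.5]`), the optimized trigger drives the backdoor loss well below its value at `delta = 0`.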
Summary: The authors introduce A3FL, a backdoor attack that strategically adjusts the backdoor trigger to decrease the chances of its removal by the global training dynamics. The fundamental idea behind this approach lies in the disparity between the global model and the local model in Federated Learning (FL), which diminishes the effectiveness of locally optimized triggers when transferred to the global model. To address this issue, the authors tackle the optimization of the trigger in a manner that ensures its survival even in the worst-case scenario, wherein the global model is specifically trained to eliminate the trigger. Through extensive experiments conducted on benchmark datasets, they comprehensively evaluate the efficacy of A3FL against twelve existing defense mechanisms. Strengths: 1. The authors consider a bi-level objective (i.e., the worst-case adaptation of global dynamics) to optimize the trigger pattern during the FL process without knowing the defense mechanism. 2. Experimental results are good. 3. The proposed method is compared with extensive baselines in different settings. Weaknesses: 1. No theoretical analysis or guarantee on attack performance and convergence/sample complexity for Algorithm 1. 2. It seems that solving a bi-level optimization in each FL round is computationally inefficient. Either empirical or theoretical justifications compared with baseline methods are needed to prove the efficiency. 3. Incorporating global dynamics (i.e., long-term goals, benign clients' behaviors, defense mechanisms, etc.) is useful for increasing backdoor durability; however, it has already been proposed in previous works [1] [2]. [1] Wen, Yuxin, et al. "Thinking two moves ahead: Anticipating other users improves backdoor attacks in federated learning." arXiv preprint arXiv:2210.09305 (2022). [2] Li, Henger, et al. "Learning to backdoor federated learning." arXiv preprint arXiv:2303.03320 (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
The calculation of $\theta'_t$ solely based on the local training data of malicious clients appears to be an approximation of the true worst-case scenario, as it neglects the inclusion of benign clients' data. I am interested in understanding the implications of an inaccurate $\theta'_t$ on the performance of the attack, particularly in scenarios where there is a high level of heterogeneity. It would be intriguing to compare the attack performance when the attacker possesses knowledge of the benign workers' data, considering that previous studies have demonstrated that attackers can learn and reconstruct benign workers' data through inference attacks [1] [2]. 2. How does one choose/tune a good or even an optimal (does it exist?) $\lambda$? It seems to me that in each FL round, the optimal $\lambda$ should be different (based on the current model, the number of attackers chosen, etc.). 3. To ensure a fair comparison, while A3FL tunes parameters like $\lambda$ and the poison ratio, how are the hyperparameters chosen in the other baseline attack methods? [1] Geiping, Jonas, et al. "Inverting gradients-how easy is it to break privacy in federated learning?." Advances in Neural Information Processing Systems 33 (2020): 16937-16947. [2] Li, Henger, Xiaolin Sun, and Zizhan Zheng. "Learning to attack federated learning: A model-based reinforcement learning attack framework." Advances in Neural Information Processing Systems 35 (2022): 35007-35020. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: 1. It is unclear to me what exactly is being optimized in equations (2) and (3). I assume that the optimization pertains to the parameters associated with each pixel within the predetermined square pattern. 
In the conducted DBA experiments, the trigger(s) take the form of fixed-size square(s) located at specific position(s) and may involve a certain number of squares. These characteristics represent potential parameters that can be incorporated into the optimization problem. However, it is worth noting that for achieving the most versatile trigger, it is worthwhile to consider every pixel in the image. In recent works [1] [2], researchers have explored generating backdoor triggers that encompass the entire image. 2. Post-training stage defenses play a vital role in countering backdoor attacks. Even within the context of FL, certain techniques such as Neuron Clipping [3] and Pruning [4] have demonstrated their effectiveness in detecting and mitigating the impact of backdoor attacks. Consequently, I am curious to know how the proposed A3FL performs when subjected to these post-training stage defenses. [1] Salem, Ahmed, et al. "Dynamic backdoor attacks against machine learning models." 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). IEEE, 2022. [2] Doan, Khoa D., Yingjie Lao, and Ping Li. "Marksman backdoor: Backdoor attacks with arbitrary target class." Advances in Neural Information Processing Systems 35 (2022): 38260-38273. [3] Wang, Hang, et al. "Universal post-training backdoor detection." arXiv preprint arXiv:2205.06900 (2022). [4] Wu, Chen, et al. "Mitigating backdoor attacks in federated learning." arXiv preprint arXiv:2011.01767 (2020). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## reviewer BFfe We thank the reviewer for the constructive comments to strengthen our work. > Q1: No theoretical analysis for Algorithm 1. While theoretical analysis is important and interesting, it is very challenging and an open problem to theoretically analyze the performance of FL backdoor attacks, especially when A3FL is implemented through bi-level optimization. To the best of our knowledge, most existing works proposing FL backdoor attacks did not provide theoretical analysis of attack performance or convergence. We will leave this as future work. > Q2: Solving a bi-level optimization in each FL round is computationally inefficient. We thank the reviewer for the constructive comment. We empirically compare the efficiency of A3FL to other baseline attacks. We record the average time taken by each attack in each round in Table 4 in the attached PDF. Observe that A3FL has efficiency comparable to CerP and F3BA. > Q3: Incorporating global dynamics has already been proposed in previous works. Thank you for pointing out a place that could cause misunderstanding. We will discuss these works in the revision. Instead of incorporating global training dynamics, A3FL follows a different principle by anticipating a worst-case scenario in which the injected backdoor is directly unlearned by the global model. A3FL optimizes the backdoor trigger to make it robust to this worst-case scenario and therefore persistent against other perturbations introduced by global training dynamics. Our empirical experiments in Section 4.3 and Appendix B.8 demonstrate this intuition. As observed in Figure 4, while all backdoor attacks can achieve nearly 100% ASR on the local model, only A3FL achieves a similarly high ASR when transferred to the global model. This observation demonstrates that A3FL is more persistent under the perturbations incurred when transferred to the global model. 
> Q4: Attack performance when the attacker possesses knowledge of the benign workers' data. Thank you for the interesting perspective. We discussed the impact of data heterogeneity in Appendix B.6. As shown in Figure 12, large data heterogeneity does not significantly impact the attack performance of A3FL. We further consider a scenario where the attacker can access part of the private datasets of benign clients. Specifically, we merge the private training datasets of 5 randomly selected benign clients as $\mathcal{D}_p$. We assume that the attacker can access $\mathcal{D}_p$ to better approximate the worst-case scenario. As observed in Table 5 in the attached PDF, A3FL with access to benign workers' data indeed performs slightly better, but the improvement is marginal. We will discuss these works [7][8] and the potential extension in the revision. > Q5: How to tune $\lambda$ As discussed in Lines 261-269, we introduce a balancing coefficient $\lambda_0$ to control the strength of adversarial training, and $\lambda=\lambda_0 \cdot \mathrm{sim}(\theta'_t, \theta_t)$. When the adversarial global model $\theta'_t$ differs a lot from the current global model $\theta_t$, we pay less attention to the adversarial training loss. The motivation is that when the adversarial global model is too different, the backdoor trigger optimized on the current model is harder to adapt to the adversarial model and thus can be more easily unlearned. In practice, we find this helpful in balancing the bi-level optimization. As shown in Figure 6 (Section 4.3), A3FL is not obviously sensitive to different values of $\lambda_0$. We simply grid-search $\lambda_0$ on FedAvg and apply the same value of $\lambda_0$ to other cases. Since we observe that this already achieves satisfying results, we do not adopt more complicated tuning methods to find a better $\lambda_0$. > Q6: Hyperparameter settings of other attacks. Thank you for pointing out a place that could cause confusion.
For all evaluated attack methods, we tune hyperparameters on FedAvg via grid search. It is reasonable to search for optimal hyperparameters on FedAvg and use the same setting for other experiments, since the attacker does not know the server-side defense beforehand. We have discussed hyperparameter settings in Appendix A.2 and B.9. For CerP, we set the two balancing coefficients $\alpha = 0.005$ and $\beta = 0.005$. For F3BA, we set the proportion of candidate weights for convolutional layers to 0.02, and the proportion for fully connected layers to 0.001. For Neurotoxin, we only update the bottom 95% of model weights by importance. For DBA, we set the trigger shift to 0, the trigger gap to 6, and the trigger size to 4. > Q7: What exactly is being optimized in equations (2) and (3). We thank the reviewer for suggesting an interesting direction to combine progress from both the optimization perspective (as our work does) and the trigger pattern design perspective. To clarify, our work mainly focuses on the optimization perspective and thus uses the commonly adopted square trigger shape: A3FL only optimizes $\delta$ for the masked-out trigger region. This can be viewed as applying $\delta = \delta \cdot m$ after each optimization step, where $m$ is a binary mask with 1 on the pixels within the predefined region. We adopted the same trigger shape for all evaluated attacks to ensure a fair comparison. We will explain this in the revised paper to eliminate potential confusion. We will also discuss these papers [9-12] in the revised paper, and extend A3FL to backdoor triggers spanning the entire image in our future work. > Q8: Post-training stage defenses should be considered. Thank you for your constructive comment. Following the suggestion, we evaluate A3FL against Neuron Clipping and include the experimental results in Table 6 in the attached PDF. We observe that Neuron Clipping cannot reduce the ASR of A3FL.
A3FL still significantly outperforms the other baselines against this defense. We will include this defense in the revised paper. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and informative clarification. After reading the rebuttal and other reviewers' comments, most of my concerns have been addressed. I will change my rating. Thank you. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback. Your constructive comments and suggestions are exceedingly helpful in improving our paper. Please let us know if you have any further suggestions.
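The trigger masking described in the answer to Q7 above, i.e. applying $\delta = \delta \cdot m$ for a binary mask $m$ over a predefined square region, can be sketched as follows. The image shape and trigger position are illustrative choices; the 4x4 size matches the DBA trigger size quoted in the rebuttal:

```python
import numpy as np

def mask_trigger(delta, size=4, top=0, left=0):
    # Zero out the perturbation outside a predefined square region,
    # equivalent to delta = delta * m for a binary mask m that is 1
    # inside the size-by-size square and 0 elsewhere.
    m = np.zeros_like(delta)
    m[..., top:top + size, left:left + size] = 1.0
    return delta * m
```

Applied after each optimization step, this keeps only the in-region pixels of the perturbation, so only the trigger patch itself is ever optimized.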
Summary: This work proposes a better backdoor attack on Federated Learning. Existing FL backdoor attacks do not consider the global training dynamics, resulting in limited backdoor attack performance. As the true global training dynamics are impossible to know, this work proposes a method to regularize backdoor training using the worst-case global training dynamics as guidance. The worst-case scenario represents a strong protector who tries to unlearn the exact backdoor pattern, and the proposed backdoor attack greatly benefits from adversarially adapting to this worst case. The paper demonstrates its better performance against 12 existing defense approaches and consistently outperforms state-of-the-art (SOTA) backdoor attacks by a large margin (10x), especially when there is only a small number of attackers. Strengths: 1. The method successfully unlearns the backdoor and learns a strong backdoor trigger to adapt (adversarially) to such global learning objectives. 2. The design is smart in automatically tuning the lambda, which uses similarity to adjust. This design considers potential defenses and explains why the proposed method works well against them. Additionally, the tuning of the base lambda is not sensitive, as shown in Fig 6. 3. The proposed method has been shown to be effective against 12 existing defense approaches and consistently outperforms state-of-the-art works by a large margin (10x). 4. There is no need for strict regularization, and it is harmless to utility. 5. It requires a smaller attack budget in terms of the number of attackers and the number of rounds in poisoning. 6. A comprehensive evaluation is done on utility, ASR, and lifespan. 7. The achieved better transferability of the backdoor to the global model is well analyzed. Weaknesses: 1. Different from previous state-of-the-art works, this work does not directly consider potential defenses (i.e., detection-based methods), while still demonstrating better performance against them. 
The reason for this is not well justified. 2. The impact of the number of available training data on the attack performance is not clear. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Can you further address the weaknesses and limitations? 2. Can you provide some directions regarding potential defense strategies? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: 1. The trigger pattern is limited to a rectangular shape. It would be interesting to explore the adoption of irregular shapes in the proposed attack. 2. More datasets and architectures can be considered. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## reviewer i2mP We appreciate the positive comments on our paper and the reviewer's insightful suggestions for further improvements. > Q1: Different from previous state-of-the-art works, this work does not directly consider potential defenses (i.e., detection-based methods), while still demonstrating better performance against them. The reason for this is not well justified. We are sorry for the confusion. We provide two reasons, one from method design and one from empirical observation. In the method design, while we do not explicitly consider any specific defense, we design the adversarial adaptation loss in Eq. (3) to enable the attacker to foresee and survive even the worst-case scenario where the global model is trained to directly unlearn the backdoor (this scenario is imaginary, since in practice the server does not know the backdoor trigger, and it is thus considered stronger than any existing defense). Therefore, if the attacker can survive this worst-case scenario, it is unsurprising that it can survive other defenses. On the empirical side, in Appendix B.8 we conducted a case study on Krum and looked into the reason why A3FL outperforms the baselines. We recorded the ASRs over the training rounds, and highlighted the rounds in which an attacker-compromised client is selected by the server. We make the following observations from Figure 14: * A3FL-compromised clients are not more frequently selected by the server. * Once selected, A3FL achieves a higher and more persistent ASR than any other baseline attack, since A3FL maintains a higher ASR when transferred to the global model. Both the method design and the empirical observations explain why A3FL can still achieve high attack performance against detection-based methods. > Q2: The impact of the number of available training data on the attack performance is not clear. We thank the reviewer for the insightful comment.
To study the impact, we introduce a **data resizing factor $\gamma$**, and assume that each client has only $1/\gamma$ of the private dataset compared to the default setting. We vary the value of $\gamma$ and record ASRs on FedAvg for different numbers of compromised clients (P). As observed in **Table 1 in the attached PDF**, the ASR is significantly impacted only when there is merely 1 compromised client and $\gamma$ is larger than 8. In the other cases we do not observe an obvious drop in ASR. Therefore we can conclude that the attack performance is not sensitive to the amount of available training data. > Q3: Can you provide some directions regarding potential defense strategies? We thank the reviewer for the interesting question. A3FL is based on a typical FL setting where there is only one global model, and we have not yet found a perfect defense strategy. However, in some non-typical FL settings (e.g., if the server can maintain multiple global models trained by sampled clients and aggregate these models for evaluation), the chance for the attacker to successfully inject the trigger can be lower, especially when the number of compromised clients is limited. However, this potential defense strategy could be much more computationally expensive compared to typical FL algorithms. > Q4: The trigger pattern is limited to a rectangular shape. It would be interesting to explore the adoption of irregular shapes in the proposed attack. Thank you for your insightful suggestions. In this paper, we adopt the same trigger shape for each evaluated attack to ensure a fair comparison, which is also well acknowledged in previous works [4][5]. The idea of A3FL does not rely on the trigger shape, and thus should be directly applicable to other trigger designs. We will consider backdoor triggers spanning the entire image in our future work. > Q5: More datasets and architectures can be considered. We further evaluate A3FL on the FEMNIST dataset and the VGG16 model.
We record our results in Tables 2 and 3 in the attached PDF, and observe that A3FL can still achieve a high ASR. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response. Overall I appreciate the comments. Most of my concerns are well addressed. After carefully reading other reviewers' comments and the authors' responses to them, I believe in the correctness of my evaluation despite the significant divergence with Reviewer k1k8. (Reviewer k1k8 made really valuable comments on related works; however, Reviewer k1k8's questions 5 and 6, which are more critical to the paper's contribution, are well addressed by the authors.) --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the valuable feedback. Your constructive comments have improved the quality of our work. We are also delighted to learn that most of your concerns have been addressed to your satisfaction. Once again, we appreciate your time and effort in reviewing our paper. Please let us know if you have any further suggestions or concerns.
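The data resizing factor $\gamma$ from the answer to Q2 above can be sketched as follows. The uniform random subsampling is our assumption; the rebuttal only defines the fraction $1/\gamma$, not how the subset is drawn:

```python
import numpy as np

def resize_client_data(X, gamma, seed=0):
    # Keep a 1/gamma subset of a client's private data. Uniform random
    # sampling without replacement is an assumed scheme for illustration.
    rng = np.random.default_rng(seed)
    k = max(1, len(X) // gamma)
    idx = rng.choice(len(X), size=k, replace=False)
    return X[idx]
```

Sweeping `gamma` over, e.g., {1, 2, 4, 8, 16} and re-running the attack reproduces the style of ablation reported in Table 1 of the attached PDF.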
Summary: This paper presents a new backdoor attack method termed A3FL to address some limitations of existing predetermined and fixed backdoor attack methods. The proposed method can adversarially adapt to the dynamic global model so that the locally optimized trigger is affected very little when transferred to the global model. The method is benchmarked against and performs on par with or better than a number of state-of-the-art methods. Strengths: - The paper presents an adversarially adaptive backdoor attack on Federated Learning. - A3FL can alleviate the problem of suboptimal attack performance caused by existing work ignoring the global training dynamics. - The method is benchmarked against relevant recent work. Weaknesses: - In the abstract, it would be better to give the full name of A3FL. - It is recommended to discuss the limitations of the proposed method in order to help other scholars improve it. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the "weakness" part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No apparent limitations were found. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## reviewer iLct We thank the reviewer for the positive feedback and constructive comments on our work. > Q1: In the abstract, it would be better to give the full name of A3FL. Following your suggestion, to improve the clarity of our paper, we will include the full name of A3FL in the abstract in the updated version. > Q2: It is recommended to discuss the limitations of the proposed method in order to help other scholars improve it. We thank the reviewer for the insightful comment. Currently A3FL is evaluated on image tasks, and it would be interesting to study the robustness of FL in other scenarios, such as large language models. Moreover, it would be interesting to explore the application of A3FL in other FL settings, such as vertical FL. We will discuss these limitations in the future work section. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. Although I’m not an expert in this field, after reading the comments of other reviewers and the author’s replies, I think this paper remains a positive contribution to the community. Therefore, I tend to maintain my original score. --- Reply to Comment 1.1.1: Comment: We appreciate your feedback and comments in improving our paper. Please feel free to share any additional suggestions you may have.
Rebuttal 1: Rebuttal: ## General Response to All Reviewers We thank all reviewers for your constructive suggestions and insightful questions! We have responded to them in our separate responses. We have provided supplementary experimental results on the impact of the amount of available training data, and on the attack performance of A3FL on other model architectures and more datasets. We also discuss the computational overhead of A3FL in comparison to other attacks. We evaluate A3FL against post-training stage defenses as well. Please feel free to raise follow-up questions to further improve our work! The citation list in our defense is as follows: [1] Nguyen, A., & Tran, A. (2021). WaNet--imperceptible warping-based backdoor attack. arXiv preprint arXiv:2102.10369. [2] Doan, K., Lao, Y., Zhao, W., & Li, P. (2021). LIRA: Learnable, imperceptible and robust backdoor attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 11966-11976). [3] Doan, K., Lao, Y., & Li, P. (2021). Backdoor attack with imperceptible input and latent modification. Advances in Neural Information Processing Systems, 34, 18944-18957. [4] Fang, P., & Chen, J. (2023). On the Vulnerability of Backdoor Defenses for Federated Learning. arXiv preprint arXiv:2301.08170. [5] Lyu, X., Han, Y., Wang, W., Liu, J., Wang, B., Liu, J., & Zhang, X. (2023, February). Poisoning with Cerberus: Stealthy and colluded backdoor attack against federated learning. In Thirty-Seventh AAAI Conference on Artificial Intelligence. [6] Zhang, Z., Panda, A., Song, L., Yang, Y., Mahoney, M., Mittal, P., ... & Gonzalez, J. (2022, June). Neurotoxin: Durable backdoors in federated learning. In International Conference on Machine Learning (pp. 26429-26446). PMLR. [7] Geiping, Jonas, et al. "Inverting gradients--how easy is it to break privacy in federated learning?" Advances in Neural Information Processing Systems 33 (2020): 16937-16947. [8] Li, Henger, Xiaolin Sun, and Zizhan Zheng.
"Learning to attack federated learning: A model-based reinforcement learning attack framework." Advances in Neural Information Processing Systems 35 (2022): 35007-35020. [9] Salem, Ahmed, et al. "Dynamic backdoor attacks against machine learning models." 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). IEEE, 2022. [10] Doan, Khoa D., Yingjie Lao, and Ping Li. "Marksman backdoor: Backdoor attacks with arbitrary target class." Advances in Neural Information Processing Systems 35 (2022): 38260-38273. [11] Wang, Hang, et al. "Universal post-training backdoor detection." arXiv preprint arXiv:2205.06900 (2022). [12] Wu, Chen, et al. "Mitigating backdoor attacks in federated learning." arXiv preprint arXiv:2011.01767 (2020). [13] Nguyen, T. D., Rieger, P., De Viti, R., Chen, H., Brandenburg, B. B., Yalame, H., ... & Schneider, T. (2022). FLAME: Taming backdoors in federated learning. In 31st USENIX Security Symposium (USENIX Security 22) (pp. 1415-1432). [14] Li, H., Ye, Q., Hu, H., Li, J., Wang, L., Fang, C., & Shi, J. (2023, May). 3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning. In 2023 IEEE Symposium on Security and Privacy (SP) (pp. 1893-1907). IEEE. [15] Alam, M., Sarkar, E., & Maniatakos, M. (2022). PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations. arXiv preprint arXiv:2205.13523. Pdf: /pdf/4a8f866ff8ef3d43c7aab4d9cadd2b957cc6857d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
Accept (oral)
Summary: This paper takes a closer look at the mechanistic explanation of neural networks learning to perform modular addition expanding on recent work that argued that such networks discover a simple "clock" algorithm. The authors demonstrate that changes in initialization and hyperparameters can lead to the discovery of qualitatively different algorithms - most notably what is referred to here as the "pizza" algorithm. This provides evidence that even the simple learning problem of modular addition leads to the discovery of diverse solutions in neural networks and mechanistic explanation requires a more complex analysis. Strengths: The paper is well written and succeeds in clearly communicating the findings on an active topic in the field of mechanistic interpretability. The analysis is carefully conducted, empirical results are mostly convincing and the conclusion is of importance for the broader field. Weaknesses: One of the main claims of the paper is that "some networks very similar to the ones trained by [1] preferentially implement a qualitatively different approach" but most of the evidence presented for such different solutions only apply to models that (transiently) remove the attention mechanism. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Focusing on one of the main claims that "some networks very similar to the ones trained by [1] preferentially implement a qualitatively different approach": What is the fraction of neural networks with attention rate 1 that according to your metrics implement the "pizza" algorithm? Judging from Figure 6 it seems to me that this claim might be too strong if the originally investigated model with full attention almost always discovers "clock" solutions. 2. You state that you also find "non-circular algorithms" in your trained networks. What fraction of trained models is non-circular and removed from the main analysis? 
Again this would be especially insightful to understand in dependence of the attention rate. 3. How are Distance Irrelevance and Gradient Symmetricity related to each other? From my understanding they both intend to measure the same property (pizza vs clock). A scatter plot showing one vs. the other might give some insight on their relationship. 4. In cases where the metrics contradict each other (judging from Figure 6 this sometimes happens), can you still make a confident statement on what algorithm such solutions implement? 6. Could you elaborate how you come to the conclusion that some solutions "implement multiple, imperfect copies of either the Clock or Pizza algorithm in parallel."? How do such solutions work? 7. Could you elaborate why accompanied pizzas achieve almost perfect accuracy (footnote of page 6) despite the failure mode of antipodal pairs? 8. In section 3.4 you conjecture that "accompanying pizzas" are primarily used early in training. Would it be possible to test this hypothesis by comparing the accuracy of "accompanied pizzas" early in training with and without the hypothesised "accompanying pizzas"? 9. Minor point: The word "symmetricity" is unfamiliar to me. Is there a reason to deviate from the more common term symmetry, i.e. calling your metric "Gradient Symmetry"? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations have been addressed appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your helpful and constructive questions/suggestions! Below is our reply to your questions: > Q1: The argument “some networks very similar to the ones trained by [1] preferentially implement a qualitatively different approach" seems too strong. What is the fraction of neural networks with attention rate 1 that according to your metrics implement the "pizza" algorithm? A1: By “similar” we mean structurally similar: our setup is almost identical to [1] except for the introduction of the attention rate. From the data we have, it seems quite unlikely for a network with attention rate near 1 to implement the Pizza algorithm, although we did observe many non-circular (and thus unlikely to be Clock) ones (Fig 6, Appendix C Fig 9). We agree the claim is somewhat misleading and we will tone it down in the updated version. > Q2: What fraction of trained models is non-circular? A2: For our trained 1-, 2-, 3-, and 4-layer 128-width models, the fractions of circular models (circularity >= 99.5%) are 34.31%, 9.95%, 11.55% and 6.08%, respectively. We also attach the circular rate at each attention rate decile and at each width range (figures b & c in the attached PDF). > Q3: What is the relation between distance irrelevance and gradient symmetricity? A3: We consider distance irrelevance the deciding factor for Pizza, as there seem to be few other reasons for the output logits to depend on the distance. Gradient symmetricity is mostly used to rule out the Clock algorithm, as the Clock algorithm requires multiplying (transformed) inputs, which results in asymmetric gradients. Following your suggestion, we compiled a scatterplot of distance irrelevance vs. gradient symmetricity over all the standard-structure experiments we have run, and we can indeed see that at low distance irrelevance (suggesting Pizza) the gradient symmetricity is always close to 1 (suggesting non-Clock), except for a few outliers (figure a in the attached PDF).
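Why distance irrelevance separates the two algorithms can be seen in a single-frequency caricature. The modulus, the frequency, and the dot-product readout below are simplifications we chose for illustration, not the trained networks' exact computation:

```python
import numpy as np

P = 59                      # modulus (illustrative choice)
w = 2 * np.pi * 5 / P       # one Fourier frequency, k = 5 (illustrative)

def clock_logit(a, b, c):
    # Clock: the two rotations are composed multiplicatively, so the
    # logit depends only on a + b - c, never on the distance a - b.
    return np.cos(w * (a + b - c))

def pizza_logit(a, b, c):
    # Pizza: the two embeddings are averaged first. The average sits at
    # angle w*(a+b)/2 with magnitude |cos(w*(a-b)/2)|, so logits shrink
    # as a and b move apart on the circle.
    x = (np.cos(w * a) + np.cos(w * b)) / 2
    y = (np.sin(w * a) + np.sin(w * b)) / 2
    return x * np.cos(w * c / 2) + y * np.sin(w * c / 2)
```

For input pairs with the same sum, e.g. (0, 20) and (10, 10) with c = 20, the Clock logit is identical, while the Pizza logit decays with |a - b|; that decay is exactly what the distance-irrelevance metric measures.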
> Q4: When the two metrics contradict, how can a confident statement be made? A4: Following the answer to Q3, we consider distance irrelevance the defining signature of the Pizza algorithm, while gradient symmetricity serves as additional evidence against Clock. > Q5: What is the evidence that some solutions "implement multiple, imperfect copies of either the Clock or Pizza algorithm in parallel."? A5: We agree that this is a bit confusing, but here we refer to the algorithms operating on a single circle as the Clock or Pizza algorithm, and they are imperfect (the Pizza algorithm suffers from antipodal pairs; keeping only the first circle in Model A gives only 32.8% accuracy (L135)). We will clarify this in the revision. > Q6: Why do accompanied pizzas achieve almost perfect accuracy despite the failure mode of antipodal pairs? A6: Mechanically, the numbers are arranged differently in each circle, so the circles have different antipodal pairs. In circle #2 of Fig 4, 0 and 10 are roughly antipodal, so circle #2 alone might not be able to get the input (0,10) correct, but we can see that they are relatively close in circle #1, and circle #1 is likely to provide the correct answer. In other words, the multiple copies of the Pizza algorithm error-correct each other. The accompanying pizzas can also be helpful (Appendix D), although the three circles alone are enough to get close to 100% accuracy. > Q7: Test the hypothesis that “accompanying pizzas are primarily used early in training”. A7: We observed the early emergence of a pattern similar to an accompanying pizza in training runs (figure f in the attached PDF), and removing that circle brings accuracy down from 99.7% to 97.9%. They are less helpful later in training (removing accompanying pizzas in the trained Model A only brings accuracy down to 99.7%). > Q8: Why use the terminology “symmetricity” rather than “symmetry”?
A8: We think “symmetry” is mostly used as a binary notion (something either possesses symmetry or not), so we used the word “symmetricity” to emphasize the continuous aspect of our metric. --- Rebuttal Comment 1.1: Comment: Thank you for your comprehensive clarifications and the additional experiments/figures to support them. I am happy to increase my score accordingly.
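The point made in A3, that a multiplicative (Clock-style) combination yields asymmetric gradients while an additive (Pizza-style) one yields symmetric gradients, can be checked by hand on 2-D unit-circle embeddings. This is our toy reduction of the two algorithms, not the trained models' actual circuits:

```python
import numpy as np

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def clock_grads(ea, eb):
    # Clock combines embeddings multiplicatively: out = Re(z_a * z_b)
    # for z = x + i*y. Each gradient carries the *other* embedding,
    # so the two gradients generally differ.
    ga = np.array([eb[0], -eb[1]])  # d(out) / d(ea)
    gb = np.array([ea[0], -ea[1]])  # d(out) / d(eb)
    return ga, gb

def pizza_grads(ea, eb):
    # Pizza combines embeddings additively: out = ((ea + eb) / 2)[0],
    # so both gradients are identical.
    g = np.array([0.5, 0.0])
    return g, g

# an example pair of unit-circle embeddings at angles 0.3 and 1.1 rad
ea = np.array([np.cos(0.3), np.sin(0.3)])
eb = np.array([np.cos(1.1), np.sin(1.1)])
```

Here `cos_sim(*clock_grads(ea, eb))` works out to cos(1.1 - 0.3), well below 1, while the Pizza gradients have cosine similarity exactly 1; averaging such similarities over many input pairs gives a statistic in the spirit of the paper's gradient symmetricity.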
Summary: In this paper the authors study neural networks doing modular addition of integers, i.e. output $= mod_P( a + b)$ for fixed integer P and input integers a and b. Previous work on these networks has found that small transformers implement a simple _clock_ algorithm. This work verifies this, but shows that if you simplify the transformer’s attention and make your network more like a simple feedforward ReLU network, then you find the networks implement a completely different algorithm they call the _pizza_ algorithm. Their evidence for this includes (i) strange patterns in the correct logit outputs, (ii) gradients that do not fit the clock model but can be understood in their pizza model, (iii) patterns in the logits when the inputs are restricted to particular 2D planes, and (iv) the need for error correcting that leads to particular patterns in a pizza embedding. They then study an algorithmic phase change from pizza to clock as you vary how much attention is included. Strengths: I thought the large-scale motivation for this work was justified, previous people have claimed that this specific task leads many networks to solve it in the same way. Turns out that isn’t true. That seems important. I thought the main discovery of this work was very interesting. I thought the evidence to back up the claim was convincing. I thought the experiments performed were pretty thorough. For the most part the claims were not overblown (for example I really appreciated the discussion of non-circular algorithms for solving this task) Finally the appendix was a bit of a treasure trove. Weaknesses: I think the paper’s clarity could definitely improve. At times, mainly in the appendix where a lot of the explanation gets relegated, the paper felt rushed and the explanations were terse, expecting a lot from the reader. I found this especially true of the equations in the appendix, for example: 1. 
In appendix A you introduce $s$ on the first line, which is the same as $E_{ab}$ I think. Why do you introduce a new symbol? Why do you not make it clear it is the same as $E_{ab}$ in the main paper? 2. You then do the same, again in Appendix A, when introducing the symbol $P_c$, which is I believe $Q_{abc}$ from the main paper? You also say thus, which in maths makes me expect to understand that you have derived a result, instead at the moment it reads like a definition of $P_c$ so the use of thus is confusing. 3. Line 392, in step 1 you talk about an accompanied pizza in a section that readers are expected to reach long before they read about accompanied pizzas where that adds no value as far as I can tell, but just makes the whole thing confusing. 4. Appendix G was a minefield of strange notational choices in my mind. The layer index was initially denoted with i, but then changes to t, and i is reused as the token index. I found this needlessly confusing. 5. You define $x^j$ as the value of the residual stream after j layers, but then talk about $x_{i}^j$ for a couple of lines of the algorithm, before dropping the lower index. I think this is because there’s only one output logit so after the attention you can drop which token the input is coming from, but I still found it confusing at first (because you never say what I just wrote). It felt important to specify that the lower index of x is tokens for understanding the sum over k in the constant attention equation. 6. For the next description you switch notation but still use x, but now without the top index rather than the bottom one, and use y and z. Could you highlight what is changing? And beyond the equations I thought that occasionally the appendix was a tough read. For example in appendix H you talked about adding an equal sign. 
I eventually looked at the caption for figure 19 and understood what that meant, but I thought I’d missed some previously discussed equals sign (I now realise this is a hangover from the original clocks paper). Making sure the writing doesn’t have these kinds of moments, when the reader hasn’t been told about something and isn’t completely sure where to find out about it, seems good. Perhaps you could edit the writing and captions when you try and re-read the paper with fresh eyes to see what is confusing - or get some fresh eyes on it. A few small things I am confused by: 1. I think the sine and cosine addition formulae you are using in step 2 of the algorithm in appendix A, during the development of alpha and beta, are missing factors of 2. [since you use them so often maybe it would be good to state the sine and cosine addition formulae somewhere] 2. In section 3.4 it says the condition for there to be no antipodal pairs is for p to be prime, isn’t it for p to be odd? What am I missing? 3. The plots in figure 6 (and all figures like it in the appendix) are titled wrong. 4. In the formula for distance irrelevance i on the top row should i be a member of $\mathbb{Z}_P$ not $\mathbb{Z}_P^2$? It seems there is a class of pizza-like algorithms (e.g. the two in appendix A), and the evidence listed does not distinguish them. Is this true? Do you know which is happening? If not, perhaps figure 1 is misleading. Instead your claim is that step 3 is one example of a class of pizza-type algorithms that are being implemented, and perhaps figure 1 could say that? Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. Why do you compute the gradients after having projected to the first 6 principal components of the embedding space? What happens if you don’t do this? 2. Further, why on earth did you think to make this gradient symmetry plot?! Did you already have the pizza algorithm in mind and knew this would distinguish them??? 3.
In table 1 it says non-circular algorithms show gradient symmetry, but in general (figure 9) it appears like they don’t. Why does it say this? 4. The phase transition appears to happen at slightly different points when measured by distance irrelevance or by gradient symmetry, is this true? This definitely seems true in the 2D phase plots. How do you interpret this? 5. This study restricts itself to circular algorithms, what proportion of models were circular? 6. The phase change is not discrete, i.e. it does not just jump from the most pizza-ey to the most clock-ey. This could be outside the scope, but what do you think happens in between? An algorithm that is a mixture of the two somehow? Or an output that is a mixture of both algorithms running concurrently? Any evidence in any direction? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The authors did a good job discussing limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your helpful and constructive questions/suggestions! Below is our reply to your questions: > Q1: Are $E_{ab}$ and $s$ the same? > Q2: Are $P_{c}$ and $Q_{abc}$ the same? A1/A2: Thanks for pointing them out! Yes to both - Appendix A was following an older set of notations. We will make sure to correct these mistakes in the final version. > Q3: Line 392, an accompanied pizza is mentioned but is only defined long after. A3: Agreed. We will remove the first sentence in the final version. > Q4: The notations in Appendix G are confusing. A4: Thanks for the feedback. We have changed all i’s in the first paragraph to t’s. > Q5: Are $x^j$ and $x^j_{i}$ the same? A5: Here $x^j$ stands for the whole residual stream vector and the subscript $_{i}$ stands for taking its i’th element / dimension. When we drop the lower index, we are performing vector operations on the whole vector $x^j$. We agree it is very confusing and we will make sure to explain the notation choice. > Q6: Why the change from x to y, z? A6: In our particular case there is only one layer, so we felt using $x^0$ and $x^1$ would be more confusing, and we switched to x and z instead. Except for the notation choice and the expanded loop, nothing is changed. > Q7: The sine and cosine formulae are missing a factor of 2. Also, specify the formulae somewhere. A7: Great catch! We have added the missing factor and added the two relevant trigonometric formulas at the beginning of Appendix A. > Q8: Having no antipodal pairs only requires p to be odd, not necessarily prime. A8: Our intention was to stress the case where p is an odd prime, which is the most typical setup, but it is indeed confusing. We will change it to p odd. > Q9: Plots in Figure 6 (and similar figures in the appendix) are titled wrong. A9: If we understood your concerns correctly, the plots are not titled (only labeled), and the text on top consists of descriptions for the color bar.
We agree it is a bit confusing and we will left-align the text to make it clearer. We will update it in the next version. > Q10: $\mathbb{Z}_p$ or $\mathbb{Z}_p^2$? A10: It should be $\mathbb{Z}_p$, thanks for pointing that out! We will surely correct it. > Q11: It seems there is a class of pizza-like algorithms (e.g. the two in appendix A), and the evidence listed does not distinguish them. A11: Indeed there exists a class of pizza algorithms. The pizza algorithms can differ in how the terms $\cos(w_k(a+b))$ and $\sin(w_k(a+b))$ are approximated by ReLU neurons (Figure 7 and Step 2 in Appendix A). More active neurons could lead to better approximation, but different random seeds and/or hyperparameters may lead to different numbers of active neurons. We will follow your suggestion to update Figure 1 such that it can encompass the whole pizza family and use the current algorithm as a possible special case. > Q12: Why compute gradients after projecting onto the first six principal components? A12: The gradient *a*symmetricity is more prominent for the first principal components, as these are more important for the function, and being symmetric is likely easier for the network; we chose six to be consistent with the later discussion on the three circles. In fact, in the later calculation of gradient symmetricity (Def 4.1) no translation to the principal component space is performed. We’ve attached the same figure with more principal components and without the projection (figure e in the attached pdf). We can see that the gradients are most symmetric for the later principal components as they are not very useful for the algorithm, and without the projection step, the gradient asymmetricity is, in fact, more pronounced for Model B, as the asymmetric gradients on the few principal components are now pronounced across multiple dimensions. > Q13: What's the rationale behind the gradient symmetry plot?
A13: Past work has shown that neural networks (especially without attention) struggle to learn how to multiply inputs. In this respect, the Clock algorithm felt *unnatural* and we suspected there might be an alternative Pizza-like solution based on linear combination instead. > Q14: In table 1 it says non-circular algorithms show gradient symmetry, but in general (figure 9) it appears like they don’t. A14: Thanks for pointing that out! There indeed exist both gradient-symmetric and gradient-asymmetric non-circular algorithms. We will modify Table 1. > Q15: The phase transition points seem different when measured by distance irrelevance or gradient symmetricity. A15: Yes. In short, we believe this is caused by algorithms that are neither clock nor pizza. We consider distance irrelevance to be the *defining* feature of Pizza. Gradient symmetricity is mainly presented as supplementary evidence against the Clock algorithm, which requires multiplying (transformed) inputs, which will result in asymmetric gradients. From figure a in the attached pdf we can see that having low distance irrelevance is indeed a stronger condition than having high gradient symmetricity, suggesting the existence of algorithms that are not clock (high gradient symmetricity) and not pizza (high distance irrelevance). > Q16: What proportion of models are non-circular? A16: For our trained 1,2,3,4-layer 128-width models, the circular (circularity >= 99.5%) ones are 34.31%, 9.95%, 11.55% and 6.08%, respectively. We also attached the circular rate at each attention rate decile and at each width range (figure b & c in the attached pdf). > Q17: The phase change is continuous. What happens in between? A17: We conjecture that it is a hybrid of clock and pizza: the algorithm in Appendix A takes the dot product of $(\alpha,\beta)$ with $(\cos(w_k c),\sin(w_k c))$ - same as in the clock algorithm.
Therefore, it is possible to have some PCA circles operating as the clock algorithm and some operating as the pizza algorithm, and their results are added together before the final dot product with $(\cos(w_k c),\sin(w_k c))$. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: I thank the authors for answering many of my concerns. I had not thought of combining different circles each doing a different algorithm, that is an interesting possibility, and the information on proportion of networks that are circular is interesting. I realise I was being an idiot about figure 6 since, as you say, I indeed thought the title of the colourbar was the title of the plot. So you can ignore that! I think my scoring of the work still stands, and hope to see a cleaned up version of the paper accepted.
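As a concrete illustration of the Clock mechanism discussed in this exchange, the following sketch (ours, not the paper's implementation; the single frequency w and the complex-number framing are illustrative simplifications) computes modular addition by multiplying unit-circle embeddings, and also checks the factor-2 sum-to-product identity that Q7 above is about:

```python
import math

p = 59                  # the modulus used in the paper's experiments
w = 2 * math.pi / p     # a single frequency (illustrative; trained models use several)

def clock(a, b):
    # Embed tokens as unit-circle points and multiply them as complex numbers,
    # so the angles ADD; then score each candidate c by cos(w * (a + b - c)).
    z = (complex(math.cos(w * a), math.sin(w * a))
         * complex(math.cos(w * b), math.sin(w * b)))
    return max(range(p),
               key=lambda c: (z * complex(math.cos(w * c), -math.sin(w * c))).real)

assert all(clock(a, b) == (a + b) % p for a in range(p) for b in range(p))

# The sum-to-product identity behind the Pizza analysis (note the factor 2,
# the one discussed in Q7): cos x + cos y = 2 cos((x+y)/2) cos((x-y)/2).
x, y = 0.7, 2.1
assert abs(math.cos(x) + math.cos(y)
           - 2 * math.cos((x + y) / 2) * math.cos((x - y) / 2)) < 1e-12
```

The Pizza-style alternative averages the two embeddings first, which is exactly where this identity (and its factor of 2) enters.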
Summary: *Background*: Previous work in mechanistic interpretability has identified a particular algorithm that NNs implement (the clock algorithm) to solve modular addition. However, under certain architecture changes, the authors notice that NNs implement a different algorithm. The main goal of this paper is to present inconsistencies in the clock algorithm for neural networks without attention (i.e., without the inductive bias that allows the model to implement multiplication) and motivate a different algorithm (the pizza) through which such networks learn modular addition. The observations are backed by experiments on linearly interpolating between a NN with attention and an NN without it. The experiments also demonstrate that a single algorithm doesn’t always win: different models can and do ensemble multiple copies of both algorithms in parallel. Strengths: The paper is very well written and the proofs and arguments presented seem airtight. Most of the questions I’ve had while reading the paper are either addressed in subsequent sections or in the appendix. The experiments are simple yet, I believe, comprehensive in evaluating the arguments presented. This is also an exciting emerging area of research and should lead to interesting discussions in the mechanistic interpretability community. Weaknesses: This work raises a lot of interesting questions, but I really can’t find any egregious logical inconsistencies with this work. meta-(non)concern: This work largely relies on a problem from a paper that hasn’t been peer-reviewed. However, I do not think this is a reason to reject this work. Overall, I recommend _acceptance_. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * L78: Why the choice of 6 vectors specifically? For exposition? Will choosing less significant components introduce unnecessary noise? * L113: What would happen with an odd number of ReLU units?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have addressed limitations in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your helpful and constructive questions/suggestions! Below is our reply to your questions: > Q1: L78: Why the choice of 6 vectors specifically? For exposition? Will choosing less significant components introduce unnecessary noise? A1: The main motivation is that the gradient *a*symmetricity is more prominent for the first principal components, as these are more important for the function, and being symmetric is likely easier for the network. The choice also helps to be more consistent with the later discussions on 3 circles (Fig 4, Fig 5), which correspond to the first 6 principal components. In fact, in the later calculation of gradient symmetricity (Def 4.1) no translation to the principal component space is performed. We’ve attached the same figure with more (20) principal components and without the principal component projection (figure e in attached pdf). > Q2: L113: What would happen with an odd number of ReLU units? A2: We can implement absolute value $|x|$ by $\text{ReLU}(x)+\text{ReLU}(-x)$. If there is an odd number of ReLU units, some could be dead neurons (in the sense that the activation is near-zero for all inputs). There are also multiple possible variants of the pizza algorithm (Appendix A). --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! I'm keeping my score where it is.
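The absolute-value construction from a pair of ReLU units can be checked in a few lines. Note the sign: $\text{ReLU}(x) - \text{ReLU}(-x)$ reproduces $x$ itself, while the sum gives $|x|$:

```python
def relu(x):
    return max(x, 0.0)

def abs_from_relus(x):
    # A pair of ReLU units implements absolute value: |x| = ReLU(x) + ReLU(-x).
    # (ReLU(x) - ReLU(-x) would just reproduce x.) With an odd number of units,
    # a leftover unit can simply be a dead neuron contributing zero.
    return relu(x) + relu(-x)

assert all(abs_from_relus(x) == abs(x) for x in (-3.5, -1.0, 0.0, 2.25))
```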
Summary: The authors present a novel algorithm as a mechanistic explanation of neural networks for modular addition. It is noted that the model without attention fails to implement the ‘Clock’ algorithm. This assertion is substantiated with evidence related to gradient symmetricity and logit patterns. The authors then propose an alternative solution, named the ‘Pizza’ algorithm, supported by evidence concerning logit patterns via circle isolation and accompanying ‘pizza’. Ultimately, they demonstrate the presence of an algorithmic phase transition along the attention rate and model width, employing metrics that indicate gradient symmetricity and distance irrelevance. Strengths: The paper is well-structured and supports its arguments with solid experiments. The authors demonstrate that a neural network is capable of learning diverse algorithms for the same task. They introduce an impressive procedure for interpreting the neural network via embedding vectors. This methodology has the potential for extension to more complex models and tasks. Weaknesses: The authors employ the term logit $Q_{abc}$ as well as the term output logit, which refers to the un-normalized log probability. The choice of terminology, however, proves to be confusing. Given that $Q_{abc}$ is not used in the model and is a concept introduced by the authors themselves, it would be beneficial to rename $Q_{abc}$ to a more intuitive term like "value" or "rank". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What prevents us from directly ascertaining the algorithm employed by the model? Couldn't it be possible to determine the intermediate vector $E_{ab}$ to gain insights into the algorithmic process? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: As the authors have noted, their focus lies on a simple learning problem. Significant further work is required to adapt their techniques for use with the more complex models typically employed in real-world tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your helpful and constructive questions/suggestions! Below is our reply to your questions: > Q1: Calling $Q_{abc}$ output logit is confusing. A1: This terminology has been used in many previous interpretability studies [Nanda2023] [Wang2023], so we are using standard nomenclature in this research area. > Q2: Can we determine the intermediate vector $E_{ab}$ to gain insights into the algorithmic process? A2: It is certainly possible, and we can prove that mechanistically in constant-attention Transformers, the computation starts by adding two embeddings (Appendix G). **Reference** [Nanda2023] “Progress measures for grokking via mechanistic interpretability”, Nanda et al. [Wang2023] “Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small”, Wang et al. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thank you for your clarification! I'll maintain my rating.
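The additive structure mentioned in A2 is also what makes the gradient-symmetry test meaningful: any function of the sum $E_a + E_b$ has identical gradients with respect to the two embeddings, while a multiplicative (Clock-style) combination generally does not. A toy numerical check (the function g below is an illustrative stand-in, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
Ea, Eb, v = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)

def f(ea, eb):
    # Additive (Pizza-style) combination: f = v . tanh(E_a + E_b).
    return v @ np.tanh(ea + eb)

# Analytic gradient wrt E_a -- identical to the gradient wrt E_b.
grad = v * (1 - np.tanh(Ea + Eb) ** 2)

eps = 1e-6
num_grad_a = np.array([(f(Ea + eps * e, Eb) - f(Ea - eps * e, Eb)) / (2 * eps)
                       for e in np.eye(4)])
num_grad_b = np.array([(f(Ea, Eb + eps * e) - f(Ea, Eb - eps * e)) / (2 * eps)
                       for e in np.eye(4)])
assert np.allclose(num_grad_a, grad, atol=1e-5)
assert np.allclose(num_grad_b, grad, atol=1e-5)   # symmetric gradients

# Multiplicative (Clock-style) combination f = v . (E_a * E_b) has gradients
# v * E_b and v * E_a -- asymmetric for generic embeddings.
assert not np.allclose(v * Eb, v * Ea)
```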
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their helpful and constructive suggestions, which will greatly improve the final version of the paper. Besides individual responses, we want to summarize our responses/updates to reviewers’ common questions here. Reviews prompted us to try several additional experiments, which have led to fruitful discoveries: ### Distance irrelevance vs gradient symmetricity Distance irrelevance is a rather surprising and defining feature of the pizza algorithm, while gradient symmetricity is mainly presented as supplementary evidence used to rule out the Clock algorithm, which requires multiplying (transformed) inputs and hence has asymmetric gradients. We plotted the relationship between gradient symmetricity and distance irrelevance for all the 1- to 4-layer 128-width models we trained, and we confirmed that low distance irrelevance (suggesting Pizza) almost always implies close-to-1 gradient symmetricity (suggesting non-Clock) (figure a in the attached pdf). ### Accompanying pizzas are employed early in training We observed the early emergence of accompanying pizzas in training runs (figure f in the attached pdf; irrelevant principal components not displayed for space concerns). The model had been trained for 600 epochs at the time and reached 99.7% accuracy on the validation set (for reference, all the models we reported are trained for 20000 epochs). From the logit pattern, the first two principal components of the input embedding resemble the pizza algorithm, and the 13th and 14th principal components resemble the accompanying pizza. It is surprising that although the two components do not exactly resemble a circle due to the lack of training, the logit pattern is still clear and corresponds to the first circle. Removing this “accompanying pizza” brings the accuracy down to 97.9%.
### Projection for gradient symmetricity We projected the gradients of the models onto the principal components so as to match our description of algorithms on principal components. The less significant principal components contribute less to the correct functioning of the model, and we observed their corresponding gradients concentrating near 0 (fig e left in the attached pdf). If we consider the raw unprojected gradients, the asymmetricity of a few of Model B’s principal components’ gradients is more pronounced as it now affects multiple raw gradient dimensions (fig e right in the attached pdf). ### Circularity with respect to layer, attention rate, and width We computed the circular rate (circularity >= 99.5%) of models with respect to the number of layers, attention rate, and width (fig b and fig c in the attached pdf). We found that the circular rate is higher for 1-layer models than for multiple-layer models, and among 1-layer models the circular rate is higher when the attention rate is closer to 0 or 1. Our explanation is that Pizza and Clock are two circular phases that are easiest to obtain at 1 layer and attention rate 0 or 1, respectively, so setups closer to these two phases are more likely to be drawn to them, resulting in similarly circular states. ### Sparsity and norm distribution We plotted the relationship between attention rate, distance irrelevance, gradient symmetricity, and parameter L2 norm (figure g in the attached pdf). Here the parameters are all the trainable coefficients in the trained model. Besides clear concentration, we can see a slight increase in the mean L2 norm as attention rate and distance irrelevance increase. We also observed a slight increase in L2 norm from Model A (22.9) to Model B (24.8). Their parameter distributions are also different (figure d in the attached pdf). We believe this is a result of different attention rate setups.
Namely, for Model A with constant attention (attention rate 0), the query and key matrices are ignored, so they are optimized to near-0 values. Besides empirical experiments, we have also incorporated other suggestions from reviewers, including clarifying the family of pizza algorithms (Reviewer 7M7r), related work discussions (Reviewer wANB) and writing (Reviewer 7M7r, ZmSF, DtSC). Pdf: /pdf/de33e26dfbad28910df06ac482bea468780f1a1c.pdf
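For readers wanting to experiment with the circularity threshold discussed in these responses, here is one illustrative proxy (our sketch; not necessarily the paper's exact definition): the fraction of an embedding's Fourier power concentrated in its single dominant frequency, which is 1 for a perfect circle and much lower for noise.

```python
import numpy as np

def circularity(E):
    # Illustrative proxy (not necessarily the paper's definition): fraction of
    # the mean-removed embedding's Fourier power in its dominant frequency.
    F = np.fft.rfft(E - E.mean(axis=0), axis=0)   # E has shape (p, d)
    power = (np.abs(F) ** 2).sum(axis=1)[1:]      # drop the DC component
    return power.max() / power.sum()

p = 59
a = np.arange(p)
w = 2 * np.pi * 7 / p                             # an arbitrary circle frequency
circle = np.stack([np.cos(w * a), np.sin(w * a)], axis=1)
noise = np.random.default_rng(0).normal(size=(p, 2))

assert circularity(circle) > 0.99                 # a perfect circle
assert circularity(noise) < circularity(circle)   # noise is far less concentrated
```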
NeurIPS_2023_submissions_huggingface
2023
Summary: In this work, the authors focus on the problem of learning modular addition in NNs. Using the clock and pizza algorithms, they show that the model exhibits sharp algorithmic transitions, which are affected by layer width and attention strength, often resulting in the parallel occurrence of these phases. A series of experiments are performed on single-layer networks to support the hypothesis proposed in this work. Strengths: 1. Well-written paper. 2. A useful contribution in terms of interpretability of NNs. 3. Novel contribution in terms of analyzing the algorithmic transitions. Weaknesses: 1. Ablation study is missing. 2. Did the authors perform a grid search to select the best hyper-parameters? If so, that should be mentioned with ranges in the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In terms of interpretability there are models inspired by formal language theory that insert rules [1,3] and extract rules [2-7], termed interpretable by extraction, and such models have even been tested on a mathematical reasoning [4] task and are known to be Turing complete even with finite precision and time [8]. The authors should discuss this line of work, as it is relevant. How should the threshold for circularity be determined? Does a value >= 99.5% work for all models/architectures in terms of thresholding for circularity? It would be ideal if the authors could provide an empirical bound for this and describe how to determine such a threshold. Can this framework be extended to convolutions with tensor weights or stateful models such as RNNs? The authors do point out that deeper models lead to non-circular algorithms, but what is the bound for that? After how many layers is non-circular behavior shown by various models? At minimum, providing empirical results will further strengthen this work. I would also like to see some empirical bounds on attention rate: what qualifies as a high attention rate and what as a low one?
Can the authors provide an ablation study on various values of the attention rate and also the width of layers? A few additional comments on points that are not clear from the manuscript: How does the model effectively interpolate between the memorizing and generalizing solutions? Does this also work with sinusoidal embeddings or masked embeddings? Does the choice of embedding pose an issue for generalization? The authors do mention pruning the weights, so what effect does sparsity have on model performance? Can the authors comment on this, e.g., how do the two phases switch? Finally, I would like to see the total computational time required by the model, including FLOPs, and also standard errors across trials for the proposed experiments. Minor comments: Figure 4 should be improved; it is difficult to read the values on the y-axis, and values overlap in the circular diagram. The same goes for the other figures. 1. Omlin, C.W. and Giles, C.L., 1996. Rule revision with recurrent neural networks. IEEE Transactions on Knowledge and Data Engineering, 8(1), pp.183-188. 2. Tiňo, P. and Šajda, J., 1995. Learning and extracting initial Mealy automata with a modular neural network model. Neural Computation, 7(4), pp.822-844. 3. Mali, A.A., Ororbia II, A.G. and Giles, C.L., 2020. A neural state pushdown automata. IEEE Transactions on Artificial Intelligence, 1(3), pp.193-205. 4. Mali, A., Ororbia, A.G., Kifer, D. and Giles, C.L., 2021, May. Recognizing and verifying mathematical equations using multiplicative differential neural units. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5006-5015). 5. Weiss, G., Goldberg, Y. and Yahav, E., 2018, July. Extracting automata from recurrent neural networks using queries and counterexamples. In International Conference on Machine Learning (pp. 5247-5256). PMLR. 6. Wang, C., Lawrence, C. and Niepert, M., 2022. State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6), pp.7739-7750. 7. Okudono, T., Waga, M., Sekiyama, T. and Hasuo, I., 2020, April. Weighted automata extraction from recurrent neural networks via regression on state spaces. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 04, pp. 5306-5314). 8. Stogin, J., Mali, A. and Giles, C.L., 2020. A provably stable neural network Turing Machine. arXiv preprint arXiv:2006.03651. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As highlighted above, the main point is an ablation study to support the hypothesis and computational overhead. ********** Score increased after Author rebuttal Responses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your helpful and constructive questions/suggestions! Below is our reply to your questions: > Q1: Ablation study is missing. Can authors provide ablation studies on various values of attention rate and also the width of layers? A1: Thank you for the suggestion! If we understand correctly, the ablation study you are requesting is already provided in Figure 6 of the submission, which shows behavior at various values of attention rate and model width. > Q2: Did authors perform a grid search to select the best hyper-parameters? Then that should be mentioned with ranges in the appendix. A2: No. We largely followed [Nanda2023] for hyperparameter setups and we chose p=59 following [Liu2023] to simplify the investigation. > Q3: Can you discuss the literature and cite relevant papers? A3: Thanks for pointing us to these references, which are indeed relevant to our work. We will include the citations in the next updated version. > Q4: How to determine the threshold for circularity? A4: This is necessarily a bit subjective. Circularity is fundamentally a qualitative phenomenon, and this threshold indicates how tolerant we are in terms of considering some shapes as being approximately circular (which the authors all agreed was true of shapes with circularity >= 99.5%). > Q5: Can this analysis apply to convolutional neural networks or recurrent neural networks? A5: Yes, our analysis can apply to other architectures, e.g. CNNs and RNNs. This analysis does not require inspecting latent representations (which are specific to architectures), only involving output logits and gradients with respect to input embeddings (which are universal to all architectures). > Q6: For the transition from circular to non-circular algorithms, what is the bound (phase transition point) along depth, attention rate, width?
A6: There is no clear phase transition from circular to non-circular algorithms against attention rate and width, but depth-1 networks are clearly more likely to be circular solutions than deeper (2-4 layers) networks. See figure b and c in the attached pdf. > Q7: How does the model effectively interpolate between the memorizing and generalization solutions? A7: Our work focuses mostly on analyzing the final (generalization) solution. The training dynamics (how the model interpolates from a memorizing to a generalizing solution) is interesting but might be out of the scope of this work. > Q8: Does this also work with sinusoidal embeddings, masked embedding? Does the choice of embedding have an issue in generalization? A8: Yes, we believe other types of embeddings can also lead to generalization. We’re currently using a learnable positional embedding, but our proposed pizza algorithms do not depend on it, so we believe the pizza algorithms also exist under sinusoidal positional embeddings. As for masked embeddings, in the 1-layer case masked embedding is equivalent to bidirectional embedding (since attention from token #2 to token #1 doesn’t count) so our conclusions should remain valid. > Q9: How does pruning weights (hence sparsity) affect performance? A9: It is unclear to us whether our analysis has implications for sparsity. Norm-wise, strong concentration is observed, and its relationship with attention rate and distance irrelevance is observed but weak (figure g in the attached pdf). We observed some difference in parameter distribution for Model A and Model B (figure d in the attached pdf) and we believe it is primarily a result of different model configurations (for example, query and key matrices are ignored by constant-attention Transformers). > Q10: Can you provide computation time (including FLOPs) and error bars? A10: We spent roughly 226 GPU days on a V100 cluster with ~30% utilization rate, so the total computation is around 4e19 FLOPs.
It is hard to provide an error estimation since we are sampling with respect to multiple parameters, but we have made the full distribution available. > Q11: Figure 4 (and other figures) should be improved to maximize readability. A11: Thanks for the suggestion! We will increase font sizes and make other optimizations as needed. **References** [Nanda2023] “Progress measures for grokking via mechanistic interpretability”, Nanda et al. [Liu2023] “Towards Understanding Grokking: An Effective Theory of Representation Learning”, Liu et al. --- Rebuttal 2: Title: Rebuttal Response Comment: I thank the authors for their detailed responses. I have also read other reviews and responses; thus, I am increasing my score and moving toward acceptance. I believe the updated paper will have all the components promised by the authors in their rebuttal. Overall, good work.
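The FLOPs figure in A10 can be reproduced with back-of-envelope arithmetic. The ~7 TFLOPS sustained V100 throughput below is our assumption for the sketch, not a number stated in the rebuttal:

```python
# Back-of-envelope check of the ~4e19 FLOPs estimate in A10.
gpu_days = 226
utilization = 0.30
sustained_flops = 7e12          # assumed effective V100 throughput (our assumption)

total = gpu_days * 86400 * utilization * sustained_flops
assert 3e19 < total < 5e19      # ~4.1e19, consistent with "around 4e19"
```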
Efficient Equivariant Transfer Learning from Pretrained Models
Accept (poster)
Summary: The paper introduces $\lambda$-equitune, an innovative method that refines existing strategies for achieving equivariant outputs from non-equivariant neural networks. $\lambda$-equitune employs importance weights for feature averaging, which outperforms the group-averaging method equitune. The authors also present equizero, another approach enhancing zero-shot and fine-tuned performance. The effectiveness of these methods is validated across diverse applications and models, including image classification, deep Q-learning, and natural language generation. Strengths: **Originality**: The paper introduces a novel approach to addressing the limitations of existing equivariance methods by incorporating importance weights for feature averaging. This creative combination of ideas contributes to the originality of the work. **Quality**: The research is of high quality, backed by rigorous theoretical justifications and empirical evaluations. The authors provide compelling evidence to support the claims made, ensuring the reliability and robustness of their proposed method. **Clarity**: The paper is exceptionally well-written, presenting complex concepts in a clear and concise manner. The organization of the paper facilitates understanding, and the inclusion of theoretical proofs enhances its clarity. **Significance**: The significance of the paper lies in its broad applicability across multiple domains and models, including image classification, deep Q-learning, and natural language generation. The improved zero-shot and fine-tuned results achieved by $\lambda$-equitune and equizero highlight their potential to advance the field of transfer learning. Weaknesses: One weakness of the paper is the limited exploration of continuous groups, as the focus is primarily on finite groups. The authors acknowledge the need for further work to extend their methods to continuous groups but do not provide concrete solutions or insights in this regard.
This limitation restricts the proposed algorithms' generalizability and applicability to real-world applications involving continuous transformations. Addressing this weakness by offering a more detailed discussion on approaches for handling continuous groups would enhance the paper's relevance and broaden its potential impact. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) This paper mentioned that equizero performs well when good agent loss functions are available for downstream tasks. Can the authors elaborate on the process of selecting or designing these proxy loss functions? How did they ensure that the selected loss functions accurately capture the desired goals of each task? (2) Could the authors discuss the potential implications of their research in real-world applications and any limitations or considerations when deploying the proposed methods in practical scenarios? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
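The weighted feature-averaging scheme summarized in this review can be sketched in a few lines. This is an illustrative toy (the stand-in network M, the weight function lam, and the C4 rotation group are our choices, not the paper's setup); the point is that any positive weight computed from the transformed input $gx$ preserves equivariance:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))

def M(x):
    # Arbitrary NON-equivariant stand-in for a pretrained feature extractor.
    return np.tanh(W * x + 0.1 * x ** 2)

def lam(x):
    # Arbitrary positive importance weight of the transformed input.
    return 1.0 + np.abs(x).sum()

def rot(x, k):
    # Action of the cyclic rotation group C4 on 2D feature maps.
    return np.rot90(x, k)

def lambda_equitune(x):
    # Weighted group averaging: sum_g lam(gx) g^{-1} M(gx) / sum_g lam(gx).
    num = sum(lam(rot(x, k)) * rot(M(rot(x, k)), -k) for k in range(4))
    den = sum(lam(rot(x, k)) for k in range(4))
    return num / den

x = rng.normal(size=(8, 8))
# Equivariance: rotating the input rotates the output the same way,
# even though M itself is not equivariant and lam is non-uniform.
assert np.allclose(lambda_equitune(rot(x, 1)), rot(lambda_equitune(x), 1))
```

With lam held constant, this reduces to uniform group averaging (plain equitune).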
Rebuttal 1: Rebuttal: We appreciate that the reviewer finds our approach novel, innovative, and applicable in multiple domains and models. We address the weaknesses and questions raised by the reviewer below: Weaknesses: Reviewer: "...limited exploration of continuous groups..." **Response:** We thank the reviewer for pointing out the significance of exploring continuous groups in our setting. We considered only discrete groups in this work since we were mainly motivated by applications that require only discrete groups. To this end, we first point out that the work of Kim et al. (which appeared on arXiv after the NeurIPS submission deadline), as pointed out by Reviewer DNsq, generalizes our framework to continuous groups using symmetric probabilistic averaging, leading to equivariance in expectation for continuous groups. We would also like to note that extending our framework to continuous groups is simple, and there are many ways to do so. As such, we show that it can also be done by combining the setting of Kaba et al. with our $\lambda$-equitune setup. **We have provided the extension with a proof of equivariance in the response to Reviewer DNsq, and the experimental results are in the PDF corresponding to the global response.** It shows that $\lambda$-equitune can be used to weigh the features corresponding to different transformations of the inputs obtained from canonicalization differently, leading to improved performance. We hope this convinces the reviewer that our work can easily be extended beyond discrete groups. Questions: Reviewer: "This paper mentioned that equizero performs well when good agent loss functions are available for downstream tasks. Can the authors elaborate on the process of selecting or designing these proxy loss functions? How did they ensure that the selected loss functions accurately capture the desired goals of each task?" 
**Response:** We thank the reviewer for raising this important question. We found that several machine learning tasks come with naturally available loss functions that can act as a proxy for their performance, e.g., CLIP has similarity scores and Q-learning has Q-values. Maximizing these scores naturally leads to better performance, as doing so is part of the original optimization task. The task of fairness in GPT2 is similar to the RL task, in that we assign values describing the performance of the models; for GPT2, the scores are directly obtained from the regard scores of Sheng et al. For compositional generalization in languages, the task is similar to classification, and we chose the maximum of the probability as the score since it reflects the confidence of the model in making the prediction (alternatively, one could use the entropy of the probability vector, which is often used as a confidence score for predictions). We found this score to work well for compositional generalization. The only surprising case that we found was that of classification using CNNs, where the simple averaging of equitune seems to outperform loss/score functions such as entropy or the maximum of the probability, which shows that there are cases where finding good loss functions might be non-trivial. This observation, together with the fact that the loss function can be replaced with learnable weights acting as a score function, leads to our general method, lambda-equitune. Hence, we conclude that several machine learning tasks have naturally available loss/score functions that can be exploited for equizero; however, lambda-equitune can always be employed to obtain good equivariant finetuning results irrespective of the domain of application. Reviewer: "Could the authors discuss the potential implications of their research in real-world applications and any limitations or considerations when deploying the proposed methods in practical scenarios?" 
**Response:** We believe equizero and lambda-equitune, when used with strong pretrained models, would make them more robust, e.g., for use in robotics or object recognition. Regarding the fairness studies in our work, the current version focuses on debiasing using the metric and setup of Sheng et al. and the human-made equality and neutral word sets from Basu et al. We believe that deploying this application requires further testing and improvement of both the regard scores used and the construction of the equality and neutral sets. In particular, both the regard scores and the equality and neutral word sets need to be constructed such that they satisfy the requirements of the application where they are deployed. Basu et al., Group Equivariant Fine-Tuning of Pretrained Models (2023) Kim et al., Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance (2023) Kaba et al., Equivariance with Learned Canonicalization Functions (2022) Sheng et al., The Woman Worked as a Babysitter: On Biases in Language Generation (EMNLP-IJCNLP), 2019. --- Rebuttal Comment 1.1: Comment: I am highly satisfied with the authors' responses; their answers have completely addressed my concerns. I recommend accepting this paper in light of the other reviewers' comments.
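The proxy-loss selection that equizero relies on (discussed in the rebuttal above) can be sketched in a few lines. This is a minimal numpy sketch under stated assumptions: the linear "classifier", its weights, the image size, and the C4 (rot90) group are illustrative stand-ins, not the paper's CLIP/CNN setups or its actual score functions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def model(img, W):
    # Stand-in "pretrained classifier": a fixed random linear map to 10 logits.
    return W @ img.ravel()

def equizero_predict(img, W):
    # Evaluate the model on every element of the C4 (rot90) orbit, score each
    # output with a proxy loss (here: softmax confidence, as the rebuttal
    # suggests for classification-like tasks), and keep the output of the
    # highest-scoring transform. Since the orbit of img and of any rotated
    # copy of img is the same set, the prediction is invariant.
    candidates = [np.rot90(img, k) for k in range(4)]
    outputs = [model(c, W) for c in candidates]
    scores = [softmax(o).max() for o in outputs]
    return outputs[int(np.argmax(scores))]

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 8 * 8))
x = rng.standard_normal((8, 8))
# Invariance check: the same prediction for a rotated copy of the input.
assert np.allclose(equizero_predict(x, W), equizero_predict(np.rot90(x), W))
```

Note that no extra data or parameters are needed here, which is the point of equizero: the proxy score alone selects the transform.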
Summary: This paper proposes an equivariant few-shot learning method from pretrained models, namely λ-equitune, which averages features using importance weights, i.e., λs. These weights are learned directly from the data using a small neural network, leading to excellent zero-shot and finetuned results that outperform equitune. This work further proves that λ-equitune is equivariant and a universal approximator of equivariant functions, and shows that equitune and equizero (the method of Kaba et al. (2022) used with appropriate loss functions) are special cases of λ-equitune. The authors conduct a series of analyses and experiments, validating the simplicity and generality of the proposed method on a wide range of diverse applications and models. Strengths: (1). The idea of this work is novel for equivariant few-shot learning from pretrained models. (2). A wide range of diverse applications and experiments validate the claims of the work. Weaknesses: (1). I think Sub-Section '3.2 Properties' is somewhat confusing and should be analyzed in detail to ensure logical consistency. For instance, what is the connection between Theorem 1 and Definition 1, Theorem 2? Is Theorem 1 the foundation of Definition 1 and Theorem 2? Mathematical tools such as definitions, theorems, lemmas, etc. are used to summarize and abstract the theory of the entire method, requiring detailed descriptions to construct their connections. (2). The use of importance weights in the feature averaging of λ-equitune needs to be discussed, including its benefits over existing equivariant finetuning methods. Could learnable λ weights adapt the feature outputs from pretrained models? Is this a key to λ-equitune obtaining excellent results in few-shot learning? Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Equation (3), maybe M_G^λ (g,x) should be M_G^λ (gx)? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are provided in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our work novel and appreciating the diversity of our experiments. We address the weaknesses and the questions raised by the reviewer below: Reviewer: "I think Sub-Section '3.2 Properties' is somewhat confusing and should be analyzed in detail to ensure logical consistency. For instance, what is the connection between Theorem 1 and Definition 1, Theorem 2? Is Theorem 1 the foundation of Definition 1 and Theorem 2? Mathematical tools such as definitions, theorems, lemmas, etc. are used to summarize and abstract the theory of the entire method, requiring detailed descriptions to construct their connections." **Response:** We thank the reviewer for pointing out the confusion in the explanation of the theoretical results in Sec. 3.2. We clarify the theoretical contributions below and will add this clarification to the revised version. Theorem 1 is a standalone theorem showing that the proposed method, lambda-equitune, is provably equivariant to the considered group G. Definition 1 provides a definition of universality popularly used in several group-equivariant neural network papers, e.g., Yarotsky (2022) and Ravanbakhsh (2020). Theorem 2 uses the definition of universality from Definition 1 in its proof that lambda-equitune is a universal approximator of equivariant functions. The proof of Theorem 2 is provided in the appendix. We hope this improves the exposition of the results in Sec. 3.2. Reviewer: "The use of importance weights in the feature averaging of λ-equitune needs to be discussed, including its benefits over existing equivariant finetuning methods. Could learnable λ weights adapt the feature outputs from pretrained models? Is this a key to λ-equitune obtaining excellent results in few-shot learning?" **Response:** Indeed, the idea behind lambda-equitune is that the learnable lambda weights adapt such that they are high for the "good" features and low for the "bad" features. 
Here, "good" features are those that contribute towards better performance, and, similarly, "bad" features are those that do not. Existing equivariant finetuning methods such as equitune simply average all the features obtained from the different transformed inputs, which is deleterious because not all features contribute equally to the performance of the model. The reason why certain transformations of the input yield better results can be explained through the example of using CLIP on transformed ImageNet and CIFAR100 in Fig. 4a and 7, respectively. Note in both figures that CLIP shows better results when the input images are provided in upright form, whereas, when the input images are rotated or flipped, performance drops significantly, showing that the features obtained from non-upright images are not as useful as the ones obtained from upright images. Now, when a dataset contains images with random rotations and flips, the lambda weights can automatically find out which transformed images contribute the most to the performance (in this case, the upright images) and weight them the most, leading to better performance than the equitune of Basu et al. We hope this explains the importance of the lambda weights better. We will add this to the revised version of the paper to help improve the clarity. Reviewer: "In Equation (3), maybe M_G^λ (g,x) should be M_G^λ (gx)?" **Response:** We thank the reviewer for pointing out this typo. We will update this in the revised version of the paper. Ravanbakhsh, "Universal Equivariant Multilayer Perceptrons", ICML 2020. Yarotsky, "Universal Approximations of Invariant Maps by Neural Networks", Constructive Approximation (2022) --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response from the authors, which has sufficiently addressed my concerns. I am happy to accept this paper.
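The weighting intuition described in this response can be made concrete. Below is a minimal numpy sketch with stand-in model and λ functions (fixed random linear maps, assumptions for illustration rather than the paper's CLIP/CNN architectures); it shows that a λ-weighted average over the rot90 orbit is exactly invariant no matter what λ learns, because rotating the input only permutes the orbit.

```python
import numpy as np

rng = np.random.default_rng(1)
W_model = rng.standard_normal((10, 8 * 8))  # stand-in pretrained feature/logit map
w_lambda = rng.standard_normal(8 * 8)       # stand-in weights of the small lambda-network

def model(img):
    return W_model @ img.ravel()

def lam(img):
    # Positive importance weight for one transformed input.
    return np.exp(0.1 * (w_lambda @ img.ravel()))

def lambda_equitune(img):
    # Weighted average of model outputs over the C4 (rot90) orbit.
    # Training lambda merely shifts weight toward the "good" (e.g. upright)
    # orientations; invariance holds for any lambda.
    orbit = [np.rot90(img, k) for k in range(4)]
    weights = np.array([lam(o) for o in orbit])
    feats = np.stack([model(o) for o in orbit])
    return (weights[:, None] * feats).sum(axis=0) / weights.sum()

x = rng.standard_normal((8, 8))
# Invariance check under a 90-degree rotation of the input.
assert np.allclose(lambda_equitune(x), lambda_equitune(np.rot90(x)))
```

Uniform weights recover equitune's plain average as a special case, which is why λ-equitune can only match or improve on it given a well-trained λ.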
Summary: This paper proposes an extension of the symmetrization approach (Yarotsky, 2018; Puny et al., 2021; Kaba et al., 2022; Basu et al., 2023) for achieving invariance and equivariance to (small and finite) symmetry groups, with a focus on empirical demonstration of zero- and few-shot transfer learning from non-equivariant pretrained architectures for a range of applications involving different group symmetries. The technical contribution that allows zero- and few-shot transfer is the introduction of a score (rank) function denoted lambda that weights all possible transformed inputs, and using it to turn standard group averaging (Yarotsky, 2018) into weighted averaging in a way that the equivariance (and universality) of symmetrization is still guaranteed. The key idea is that an appropriate choice of the score function allows symmetrization to work favorably for the underlying pretrained model (by assigning higher weights to "important" group transformations), which can allow few-shot transfer learning. The authors name this approach lambda-equitune. In particular, choosing the score function as an indicator on the argmin of some loss function allows for a canonicalizing symmetrization (Kaba et al., 2022) that empirically allows zero-shot transfer, which the authors name equizero. The authors experimentally demonstrate the proposed algorithm in a range of applications including reinforcement learning, fairness in language models, compositional generalization, and image recognition under 90-degree rotations and flips. Yarotsky, Universal Approximations of Invariant Maps by Neural Networks (2018) Puny et al., Frame Averaging for Invariant and Equivariant Network Design (2021) Kaba et al., Equivariance with Learned Canonicalization Functions (2022) Basu et al., Group Equivariant Fine-Tuning of Pretrained Models (2023) Strengths: S1. 
The paper aims to address an important and original problem: few-shot or zero-shot transfer of non-equivariant pretrained deep neural networks to solve equivariant problems. While equivariant transfer learning from non-equivariant models has been investigated by some prior and concurrent work (Basu et al., 2023; Kim et al., 2023), few-shot or zero-shot transfer has not been demonstrated in the literature as far as I know. The methodology is clearly motivated and explained, and I think this offers a nice way to steer the behavior of pretrained models towards equivariance with additional controllability offered by the score function or loss function, as demonstrated in the fairness experiment, where fairness jointly with a high regard score is achieved. S2. The score function lambda for lambda-equitune and the loss function for equizero do not seem to have to respect group symmetry, which is an advantage as it allows for a wider range of choices. While this has been theoretically shown in Kaba et al., 2022, as far as I know this is the first work to empirically utilize the property, since Kaba et al., 2022 proposed but did not experiment with the optimization approach. S3. The applications demonstrated with experiments are quite comprehensive, ranging from reinforcement learning to language generation and visual recognition. Also, the experimental results overall seem to support the main claims of the paper. Basu et al., Group Equivariant Fine-Tuning of Pretrained Models (2023) Kim et al., Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance (2023) Kaba et al., Equivariance with Learned Canonicalization Functions (2022) Weaknesses: W1. One major ambiguity I find in the paper is that, while the title of the paper and some parts of the main text mention few-shot learning, it seems the experiments concern zero-shot learning or fine-tuning with a fair amount of data. This was confusing to me given that fine-tuning is not equivalent to few-shot learning. 
Am I missing something? W2. The approach is only applicable to small, finite groups, due to the requirement of evaluating the score function (lambda) for all possible transformed inputs. This is in contrast to some concurrent work (Kim et al., 2023) that extends to combinatorial or continuous groups, and can be considered a limitation of the proposed algorithm at the current state. W3. For the fairness in language generation, I think there is a limitation in the proposed algorithm that the considered words (upon which group transformations are defined) are implicitly assumed to be not separated by the tokenizer of the language model. This might not be generally true for modern language models, as a tokenizer can choose to split an equality word into non-equality substrings (a potential example is waitress -> wait + ress). In this case, fairness would not be achieved as expected using vocabulary permutation transformations. Kim et al., Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance (2023) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. For the experiments mentioned as few-shot (e.g., Figure 10), how many shots are used? Q2. In case of zero-shot learning without parameter updates, what kind of practical advantage could we expect from the universality result (Theorem 2)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed potential limitations and negative societal impact in Section 6. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
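The equivariance guarantee of weighted symmetrization summarized in this review admits a short one-step derivation. The sketch below uses generic symmetrization notation (a λ-weighted group average), which may differ in detail from the paper's Eq. (3):

```latex
% Write lambda-equitune as a weighted group average
%   M_G^\lambda(x) = \frac{\sum_{g \in G} \lambda(gx)\, g^{-1} M(gx)}
%                         {\sum_{g \in G} \lambda(gx)}.
% Substituting x \mapsto g'x and reindexing with u = g g' (so g^{-1} = g' u^{-1}):
\begin{align*}
M_G^\lambda(g'x)
  &= \frac{\sum_{g \in G} \lambda(g g' x)\, g^{-1} M(g g' x)}
          {\sum_{g \in G} \lambda(g g' x)}
   = \frac{\sum_{u \in G} \lambda(u x)\, g'\, u^{-1} M(u x)}
          {\sum_{u \in G} \lambda(u x)}
   = g'\, M_G^\lambda(x).
\end{align*}
```

The reindexing step only uses that $g \mapsto g g'$ is a bijection on $G$, so no symmetry condition on $\lambda$ or $M$ is needed, consistent with the review's observation S2.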
Rebuttal 1: Rebuttal: We appreciate that the reviewer finds our problem original and our contributions advantageous. We first provide some clarifications, then address the weaknesses. Clarifications: Reviewer: "S2: The score ... choices. While ... shown in Kaba et al., 2022, ... experiment with optimization approach." **Response:** To the best of our knowledge, Kaba et al. only show that the loss function need not be equivariant in the case of the optimization approach, where there is an arg min (cf. eqn. 4 and eqn. 6 in Kaba et al.). Instead, our setup simply performs weighted averaging. Of course, the setting of Kaba et al. can be generalized to ours; however, this generalization is not shown in Kaba et al. Weaknesses: Reviewer: "One major ambiguity ... title of the paper .... Am I missing something?" **Response:** We thank the reviewer for pointing out the confusion caused by the title. We agree that the focus of the paper is better described as a combination of zero-shot learning and finetuning rather than few-shot learning. We admit that our training regime, even though it uses far fewer samples than pretraining, should technically be called finetuning rather than few-shot learning. We used the umbrella term "few-shot learning" to describe the range of applications across zero-shot learning and finetuning. We wanted to emphasize that while equituning takes several iterations on the data to obtain good results, equizero and lambda-equitune can perform much better than equitune with only a few iterations or none (equizero). We are happy to change the title to "Efficient Equivariant Transfer Learning from Pretrained Models", a better description of our contributions. We apologize for any confusion caused. Reviewer: "... applicable to small, finite groups... concurrent work (Kim et al., 2023)...." **Response:** Our framework can easily be extended to continuous groups by using it with the canonicalization of Kaba et. 
al., which is an alternative to the method of Kim et. al. (*please note this work appeared only after the Neurips deadline*). **Main idea:** combine canonicalization from Kaba et. al. with $\lambda$-equitune leading to expressive equivariant network with weighted averaging over features with different group actions applied to them. **Def**: Given a (continuous) group $G$, a non-equivariant function $M:X\mapsto Y$, and equivariant auxiliary function (from the setting of Kaba et. al.) $h: X \mapsto G$, lambda functions $\lambda: X \mapsto R^+$, and a set of group elements $\Theta$ = {$\theta_1, \ldots, \theta_k$}, i.e. $\theta_i \in G$, we define the canonical-$\lambda$-equitune operators as $M^{ \lambda }_{G, equi}(x) = $ $(\sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1}x) h(x) M( \theta h(x)^{-1}x) )/( \sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1} x) )$ $M^{ \lambda }_{G, inv}(x) = $ $(\sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1}x) M( \theta h(x)^{-1}x) )/( \sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1} x) )$ **Thm**: $M^{\lambda}_{G, equi}(x)$ is equivariant to $G$. **Proof**: First note $h(gx) = g h(x)$. Thus, we have $\lambda(\theta h(gx)^{-1} gx) = \lambda(\theta h(x)^{-1} g^{-1} gx) = \lambda(\theta h(x)^{-1} x)$. Hence, $\lambda(\theta h(gx)^{-1} gx)$ is invariant to actions of $G$. Finally, $M^{ \lambda }_{G, equi}(g x)$ $=(\sum_{ \theta \in \Theta } \lambda (\theta h(gx)^{-1}gx) h(gx) M( \theta h(gx)^{-1}gx) )/( \sum_{ \theta \in \Theta } \lambda (\theta h(gx)^{-1} gx) )$ $=(\sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1}g^{-1}gx) g h(x) M( \theta h(x)^{-1}g^{-1}gx) )/( \sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1} g^{-1}gx) )$ $=g (\sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1}x) h(x) M( \theta h(x)^{-1}x) )/( \sum_{ \theta \in \Theta } \lambda (\theta h(x)^{-1} x) )$ $=g M^{\lambda}_{G, equi}(x)$. The proof for invariance of $M^{\lambda}_{G, inv}(x)$ follows similarly. **Exp. results are provided in the Tab. 
in the global response PDF.** For the experiment, we use $G =$ SO(2) and the invariant regression function from Finzi et al. as our task. We define $M$ as an MLP with 5 layers. $h$ is constructed using a fixed function that sums $x_1, x_2$ and computes the corresponding SO(2) rotation matrix from it. $\lambda$ is a small MLP with 3 layers but a much smaller number of neurons in each. We adjust the number of parameters in the models so that, both with and without $\lambda$, we have a similar number of parameters. We use train and test sizes of 10000 each, batch size 500, learning rate $5 \times 10^{-3}$, 100 epochs, and 5 seeds. Finzi et al., "A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups", ICML 2021. Reviewer: "For the fairness... limitation...tokenizer of the language model." **Response:** We agree with the reviewer that the tokens might not be words. As discussed in the limitations in lines 318-319, we think there is scope for optimizing these equality and neutral sets instead of using human-made sets of words. In the future, one could directly learn sets of tokens that maximize the regard scores of LLMs, e.g., using RLHF. Note that our current formulation still gives empirically good fairness results for GPT2 (cf. Fig. 3 and 9) using BPE tokenizers. Questions: Reviewer: "..., how many shots are used?" **Response:** Please note that, as discussed above, our motivation in this figure is to show the efficiency of finetuning (using few iterations). We are happy to update the title and figure labels. We apologize for the confusion. Reviewer: "In case of zero-shot learning without parameter updates, what kind of practical advantage could we expect from the universality result (Theorem 2)?" **Response:** It says that if the non-equivariant model is well-trained and is a good approximator of a certain function, then so are the obtained equivariant zero-shot results. 
This result ensures that obtained equivariant model is still an expressive equivariant model and not overconstraining the pretrained model. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the comprehensive response. Overall, I find that the rebuttal clearly addresses most of my concerns. I especially appreciate pointing out that score based canonicalization was only considering argmin while this paper extends to weighted average, and also the added extension to large groups, as well as added explanation on usefulness of universality on zero-shot learning. I also agree that revising the title to "Efficient Equivariant Transfer Learning from Pretrained Models" would resolve my concern on few-shot learning. For the review on continuous groups, I was not intending to make explicit comparison to Kim et al., (2023) -- as the authors pointed out, it has been on arXiv after NeurIPS deadline -- but the review was to point out the limitation of the proposed method on large groups. Now I can see that the issue has been resolved with the extended algorithm of weighted averaging combined with canonicalization. Furthermore, it has been also demonstrated empirically with a compelling performance. Overall, I am happy to raise my score from 5 to 7.
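The canonical-$\lambda$-equitune construction given in the rebuttal above can be checked numerically. Below is a minimal numpy sketch under stated assumptions: a toy linear regressor standing in for $M$, a toy exponential weight standing in for the $\lambda$-network, and a small fixed frame $\Theta \subset$ SO(2); the rebuttal's actual MLP experiment is not reproduced. The key property $h(gx) = g\,h(x)$ is realized by canonicalizing a 2D point onto the positive x-axis.

```python
import numpy as np

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def canonical(x):
    # h(x)^{-1} x: rotate x onto the positive x-axis. h satisfies
    # h(gx) = g h(x) because rotating x shifts its polar angle by the same
    # amount, so canonical(gx) == canonical(x) for any g in SO(2).
    return rot(-np.arctan2(x[1], x[0])) @ x

rng = np.random.default_rng(2)
w_m = rng.standard_normal(2)  # stand-in non-equivariant regressor M (linear, for illustration)
w_l = rng.standard_normal(2)  # stand-in lambda-network
thetas = [rot(a) for a in (0.0, 0.5, 1.0, 1.5)]  # fixed frame Theta of group elements

def canon_lambda_equitune_inv(x):
    # Weighted average of M over {theta @ canonical(x)}: exactly SO(2)-invariant
    # for any M and lambda, since canonical(x) is itself invariant.
    pts = [t @ canonical(x) for t in thetas]
    weights = np.array([np.exp(w_l @ p) for p in pts])
    vals = np.array([w_m @ p for p in pts])
    return (weights * vals).sum() / weights.sum()

x = rng.standard_normal(2)
# Invariance check under an arbitrary continuous rotation.
assert np.isclose(canon_lambda_equitune_inv(x), canon_lambda_equitune_inv(rot(0.7) @ x))
```

This matches the rebuttal's definition with the frame of size one of Kaba et al. enlarged to $|\Theta| > 1$ and the elements weighted by λ.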
Summary: This paper proposes lambda-equitune, which builds on previous work by averaging features using importance weights learned from the data by a small neural network. The paper provides a detailed theoretical analysis proving that the proposed lambda-equitune is equivariant and a universal approximator of equivariant functions. Diverse experiments are conducted to show the generality of the method. Strengths: 1. Efficient transfer learning from foundation models to downstream tasks is an important task. This work makes interesting improvements on previous work by introducing importance weights learned from data by a small neural network. 2. Diverse experimental results are provided, including image classification, deep Q-learning, and natural language generation. The proposed lambda-equitune method shows good results on the majority of the tasks. 3. A detailed theoretical analysis for equitune is provided. The writing and illustrations are clear. Weaknesses: 1. The novelty relative to the previous work of Basu et al. (2023): since extra parameters and a finetuning process are introduced, the contribution of this work could be further explained. 2. According to Figure 1, both the original data and transformed data are used for inference in the proposed method; it is very natural to conduct embedding from features or from results. How do these embedding methods compare with the proposed method? 3. The setting of the image classification experiments is a bit naive. Since the paper focuses on efficient transfer learning, there are many more meaningful and realistic transfer learning tasks in the computer vision domain than flips or rotations of 90 degrees. 4. Minor: the format of the reference section should be adjusted on Page 12. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for finding our contribution interesting. We first clarify a few points from our paper that the reviewer might have misunderstood, then we address the weaknesses pointed out by the reviewer. Reviewer: "This paper proposes lambda-equitune, which builds on previous work by averaging features using importance weights learned from the data by a small neural network." **Response:** Please note that we not only provide the general framework of $\lambda$-equitune but also provide an important special case called equizero that does not require any additional neural network or even any additional data. We would like to point out that using equizero with appropriate loss functions (often naturally available, e.g., image-text similarity scores in CLIP), we outperform equitune in several experiments, such as a) improving the robustness of CLIP, b) deep Q-learning, c) debiasing and detoxifying LLMs (improving their regard scores), using regard scores as the loss/score function, and d) improving the compositional generalization capabilities of RNNs, GRUs, and LSTMs. We use $\lambda$-equitune in cases where searching for a loss function is non-trivial. We illustrate one such case in the paper: classification with CNNs. Now, we address the weaknesses pointed out by the reviewer. Reviewer: "The novelty relative to the previous work of Basu et al. (2023): since extra parameters and a finetuning process are introduced, the contribution of this work could be further explained." **Response:** There are two main novelties of this work compared to Basu et al.: a) equizero, equivariant zero-shot learning using no additional data or small neural networks, which outperforms equitune for zero-shot learning on several diverse downstream tasks; and b) $\lambda$-equitune, where we use a small neural network to weigh the features obtained from the pretrained model corresponding to different transformed inputs. From Tab. 
R1, note that the number of added trainable parameters is negligible. It shows that using a tiny fraction of extra parameters for performing the weighted averaging can be highly beneficial for extracting equivariant features from pretrained models.

Table R1. Number of additional trainable parameters

| Exp. name | added trainable params | pretrained params | frac of added params |
| ----------- | ----------- | ----------- | ----------- |
| CLIP (RN50) | 112.5k | 25.6M | 0.0043 |
| CLIP (RN101) | 61.3k | 44.5M | 0.0013 |
| CLIP (ViT-B/32 and ViT-B/16) | 61.3k | 86M | 0.0007 |
| Resnet | 66.6k | 11.6M | 0.005 |
| Alexnet | 934k | 61.1M | 0.0150 |

Reviewer: "According to figure 1, both original data and transformed data are used for inference of the proposed method, it is very natural to conduct embedding from features or from results, how about these embedding methods compared with the proposed method?" **Response:** Unfortunately, since the reviewer has not provided any reference for the embedding methods they are referring to, we are unable to provide a detailed comparison with these methods. We urge the reviewer to kindly share some references so that we can try to provide a comparison. Please note that several of our experiments use embeddings from the CLIP model and use image-text similarity scores to obtain the classification scores. In Fig. 4a and 7, we find that existing CLIP embedding-based classifiers that do not use group equivariance are not robust to transformations such as flips or rotations of 90 degrees. Then, in Fig. 4b, 4c, and 8, we show how equizero leverages group equivariance to provide robustness to such transformations; moreover, it outperforms other equivariant methods such as equitune without using any additional data or learnable parameters. Experiments on lambda-equitune that use CLIP-based embedding techniques are provided in Fig. 5 and 11. 
Moreover, since our formulations of both lambda-equitune and equizero are model-agnostic, it is easy to extend them to other embedding-based methods not considered in this work, such as [1]. In [1], we can simply make both the CLIP and cache models equivariant to guarantee equivariance of this embedding-based technique. We believe this convinces the reviewer that a) we have already provided comparisons with CLIP-based embedding methods, and b) the generality of our method allows incorporating it into any other embedding-based method. [1] Zhang et al., "Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification", arXiv:2207.09519v1 Reviewer: "The setting of the image classification experiments is a bit naive. Since the paper focuses on efficient transfer learning, there are many more meaningful and realistic transfer learning tasks in the computer vision domain than flips or rotations of 90 degrees." **Response:** Note that our method is completely general and provably equivariant for any group. Our experiments focusing on robustness to flips/rotations for pretrained CLIP/CNN models can easily be generalized to any other discrete group, when applicable. We emphasize that our work is not restricted to computer vision: we also show efficient transfer learning in several other domains, such as fairness in NLG (e.g., debiasing GPT2), deep Q-learning, and compositional generalization in languages. Moreover, recent work [2] shows that equivariance to seemingly naive groups such as rot90/flip can improve performance even where the actual transformations in the data are much more complicated and not explicitly known. [2] Wang et al., "The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry", ICLR 2023 Reviewer: "Minors: the format of reference part should be adjusted in Page 12." **Response:** We thank the reviewer for pointing this out. We will make sure to fix this in the updated version. 
--- Rebuttal Comment 1.1: Comment: Most of my concerns are addressed by author's response. I tend to accept this paper.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments and suggestions regarding our paper titled “Equivariant Few-Shot Learning from Pretrained Models”. Our paper is motivated by the need to efficiently utilize pretrained models for a variety of downstream applications that benefit from equivariance. To that end, we propose equizero, an equivariant zero-shot method with much better performance than equituning. This is because equizero chooses the best features from pretrained models using a proxy loss function, unlike equitune, which simply performs averaging over features. We also show important theoretical properties of this equivariant model, such as universality, which says that our method still preserves the ability to approximate any equivariant function. We further demonstrate diverse downstream applications of equizero over widely varying tasks such as image classification, reinforcement learning, compositional generalization, and equivariant CLIP. We also propose $\lambda$-equitune, which is a relaxed generalization of the equizero algorithm and does not require any proxy loss whatsoever. Experiments validate the efficiency of $\lambda$-equitune and equizero over equitune. We next address a few key concerns raised by the reviewers. **Key Concerns** **Approach limited to finite groups:** We address the concerns raised by reviewers DNsq and 8HWc regarding the limitation of our approach to finite groups. Reviewer DNsq also refers to [3], which extends our framework to continuous groups by performing symmetric probabilistic averaging. First, we gently point out that [3] **appeared online only after the NeurIPS deadline**. Further, we would like to show that there are alternative simple methods to extend $\lambda$-equitune to continuous groups. We describe one such method here.
We simply use the canonicalization method of [1] in conjunction with $\lambda$-equitune to obtain $\textit{canonical}$-$\lambda$-$\textit{equitune}$, which is a) equivariant to continuous groups and b) weighs different features using importance weights to obtain an expressive equivariant network. This idea is based on the construction of equivariant frames in [2]. [1] uses a frame of size exactly one, whereas we use a frame of larger size and weigh the features corresponding to different frame elements based on their importance. A formal definition of our method and its proof of equivariance are provided in the response to reviewer DNsq. Further, we conduct an experiment on the SO(2)-invariant regression task of [4, Sec. 7.1]. The results of this experiment are provided in the attached table. It shows that canon-$\lambda$-equitune clearly outperforms the non-equivariant model as well as the canonicalization method of [1]. Thus, a simple extension of $\lambda$-equitune leads to equivariance to continuous groups. [1]: Kaba et al., “Equivariance with Learned Canonicalization Functions”, ICML 2023 [2]: Puny et al., “Frame Averaging for Invariant and Equivariant Network Design”, ICLR 2022 [3]: Kim et al., “Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance”, arXiv 2023 [4]: Finzi et al., "A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups", ICML 2021 **Ambiguity about title** We agree that the focus of the paper is better described as a combination of zero-shot learning and finetuning rather than few-shot learning. It should technically be called finetuning rather than few-shot learning. We used the umbrella term "few-shot learning" to cover the plethora of applications across zero-shot learning and finetuning. However, we admit that changing the title to a better name would be appropriate.
As such, we are happy to change the title of our paper to "Efficient Equivariant Transfer Learning from Pretrained Models", which better describes the combination of zero-shot learning and finetuning. We apologize for any confusion caused and thank the reviewer for pointing this out. We believe this will help readers understand our work better. **Image classification experiment for $\lambda$-equitune is a bit naive** First, we would again emphasize that our method is completely general and provably equivariant for any transformation for which the group actions on the input/output are well-defined. Thus, our experiments focusing on robustness to flips/rotations for pretrained CLIP/CNN models can be easily generalized to any other discrete group. Secondly, we emphasize that our work is not restricted to computer vision. Apart from experiments in computer vision (CLIP- and CNN-based equivariant/robust classification), our work also shows efficient transfer learning in several other domains such as fairness in natural language generation (e.g., debiasing and detoxifying GPT-2), equivariant deep Q-learning, and compositional generalization in language. Moreover, recent work [5] shows that equivariance to seemingly naive transformations such as rot90/flip can provide robustness/improvements in performance even where the actual transformations in the data are much more complicated and not explicitly known. This shows that our experiments with equivariance to seemingly simple transformations can be useful in much more complicated scenarios and could motivate further investigation in this direction in future work. [5]: D. Wang et al., "The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry", ICLR 2023 Pdf: /pdf/beb4372bbc90e4e562d29263accd9ad413ea22e6.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning
Accept (poster)
Summary: This paper explores the benefits of distributional RL and provides a theoretical understanding of its advantages through the lens of small-loss bounds. The authors propose distributional algorithms for contextual bandits, online RL, and offline RL, and prove that these algorithms achieve small-loss regret bounds. Strengths: I appreciate the attempt to understand this practically powerful algorithm and also the provided empirical simulations, which should be a promising direction in RL theory. The obtained bound scales with the minimum loss, so it can outperform standard results when that loss is small. This seems to be the first result of its kind in the RL literature. The paper is also very well-written and easy to follow. Weaknesses: 1 The algorithm of the RL setting still follows 1) reduce the out-of-sample target (immediate regret) to the in-sample loss via a complexity measure (SEC); 2) maintain a version space by effective estimation of the in-sample loss and choose the optimistic estimator to handle the difference between V^* and V_f (min_a f_1 in this case). The technical novelty is rather limited since similar treatments have been developed in the literature. Also, as far as I know, there are some proofs based on the Banach fixed point theorem for distributional RL, which typically assume that we can always solve the sub-optimization problems. These results should have supported the effectiveness of distributional RL to some degree. 2 The proposed algorithm relies on the (distributional) Bellman completeness assumption, which is hard to satisfy because it is non-monotone (adding a new function can violate this assumption). While I understand it is common in the literature to handle the double-sampling issue, it limits the contribution of this work if the goal is to provide a theoretical understanding of DRL algorithms. I am a little bit confused indeed because, in my mind, the MLE analysis does not require a completeness assumption.
Does the distributional setting lead to unique technical challenges here so you have to make such an assumption? In particular, I am wondering whether it is possible to replace it with the trajectory average technique (at a cost of worse regret bound) as in [2,3]. 3 The definition of linear SEC. I am wondering whether there are more non-trivial examples captured by it. For linear structure, my understanding is that the square comes from Cauchy-Schwarz to relate $|\phi(z_t)^T \theta_f|$ to $||\phi(z_t)||_{\Sigma_t^{-1}}$ and $ ||\theta_f||_{\Sigma_t} $. The square also exists in the definition of the standard eluder dimension. One exception is the $\ell_1$ eluder dimension developed in [1] (see the end of section 5) where both the out-of-sample prediction error and the in-sample training error are linear. I am wondering whether there is any direct application or corollary between them. Another minor comment is that you may also mention the decoupling/eluder coefficient in [4, 5] where such a reduction treatment is first proposed and generalized by them and you can easily bound the complexity measures via SEC. **The generality of the framework is indeed my main concern**. Actually, I personally have obtained a very similar first-order result with OMLE. I think we have a similar technical issue that we need to have a result as in line 778. My choice is to define the eluder dimension as in [7] (so, we can call it linear eluder dimension just like linear SEC) but I could not find more interesting instances covered by it. 4 An interesting point is that you choose to decouple the original target with two types of complexity measures. In contextual bandit, the immediate regret is decoupled to another out-of-sample target plus some decoupling cost (that is why we need an online oracle). It turns out that this type of complexity measure can be sub-optimal (especially in model-free cases, see [6] for examples). 
I am wondering whether this choice is only for a convention or for technical requirements because otherwise, we can definitely apply the eluder techniques to reduce the immediate regret to the in-sample error as in the RL case (that is why we only need an offline oracle that looks back at the k-1 samples collected so far). [1] When Is Partially Observable Reinforcement Learning Not Scary? [2] Contextual decision processes with low bellman rank are pac-learnable [3] Bilinear classes: A structural framework for provable generalization in rl [4] A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning [5] GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [6] A note on model-free reinforcement learning with the decision-estimation coefficient [7] Eluder-based Regret for Stochastic Contextual MDP Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: see weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your constructive feedback, and please find our responses below. 1. **a)** You are absolutely right that our RL algorithms use the global optimism & pessimism ideas from GOLF (Jin et al., 2021a) & Bellman-consistent pessimism (Xie et al., 2021), and that is actually exactly our point: by simply changing squared loss regression to distributional RL via MLE (just like how C51 simply changed DQN's squared loss regression to MLE) we can obtain much faster finite-sample rates in both online and offline RL. Notably, by only changing the loss from squared loss to distributional loss, our theoretical _ablation_ proves that distributional RL is indeed responsible for the faster small-loss bounds. Additionally, we highlight that most of our RL results are the first small-loss bounds in these settings (e.g. offline RL and online non-tabular RL), which we believe to be a significant novel contribution in its own right. \ **b)** Regarding prior distributional RL (DRL) results based on the Banach fixed point theorem, these convergence results are *asymptotic* to the best of our knowledge, and hence do not show any *benefits* of DRL compared to vanilla non-distributional RL, i.e. DRL's bounds are not any better than those from vanilla RL. Our key novelty is that our small-loss bounds have a *faster convergence rate* ($\widetilde{\mathcal{O}}(1/N)$ in the small-loss regime) than the typical $\widetilde{\Omega}(1/\sqrt{N})$ rates attainable with non-distributional methods; in this sense, our work is the first to illustrate concrete theoretical *benefits* of DRL compared to vanilla non-distributional approaches. 2. **a)** You are correct that MLE by itself does not require BC (see e.g. Theorem E.4), but since we perform MLE recursively in a TD fashion (i.e. the target of the MLE depends on the learned distribution from the last step) we need BC to guarantee all the MLEs will succeed (see e.g. Theorem F.2).
In other words, the reason BC appears here is exactly why it also appears in the non-distributional setting: squared loss regression itself does not require BC, but TD-style methods like GOLF and FQI do require BC to guarantee squared loss regression succeeds each time. \ **b)** Please also see our discussion on BC in the global response. We can prove distributional BC for tabular, linear, low-rank, and LQR MDPs. We will add this to the paper to address your important point. 3. Thank you so much for pointing us in the direction of the $\ell\_1$-eluder dim [1], which inspired a new result we can now prove. The LSEC is indeed bounded by the $\ell_1$ *distributional* eluder dimension in a manner similar to how SEC is bounded by $\ell_2$ distributional eluder dimension, cf. Proposition 7 of the SEC paper. We can also show that $\ell_1$ distributional eluder is bounded by $\ell_2$ distributional eluder, cf. Proposition 19 of [1]. Altogether, we've shown the following chain: $\text{SEC}(\{f^2:f\in\Psi\},\mathcal{D},K)\leq \text{LSEC}(\Psi,\mathcal{D},K)\leq \text{dim}\_{\ell_1}(\Psi,\mathcal{D},1/K)\leq \text{dim}\_{\ell_2}(\Psi,\mathcal{D},1/K),$ where $\text{dim}\_{\ell\_p}$ is the $\ell_p$ distributional eluder dimension. It turns out for low-rank MDPs, we can show $\text{dim}_{\ell_2}(\Psi,\mathcal{D},\epsilon)=\mathcal{O}(d\log(1/\epsilon))$. The intuition is to leverage the linear transitions by using the one-step back trick (Lemma 12 of Uehara et al., 2021 "Representation learning for…") and then apply the elliptical potential lemma (Lemmas 19 and 20 of Uehara et al., 2021). Therefore, we've shown that the V-type LSEC (for any function class) is bounded by $\mathcal{O}(d\log(d/\epsilon))$ in low-rank MDPs. Therefore, our framework can prove the first small-loss PAC bound for low-rank MDPs (more generally, any model with low Bellman eluder dimension!), which is significantly more general than latent variable models. 
We'd like to thank you again for your insightful suggestions. We hope this, along with the global response on BC, clears up any concerns about generality. 4. **a)** As you pointed out in [6], model-free DEC can be sub-optimal in RL. However, in CBs, the DEC-like approach we took is actually rate-optimal, as shown by Section 3 of Foster and Rakhlin, 2020 “Beyond UCB: …”. \ **b)** As you mentioned, another route is to use optimism with Eluder techniques, which allows for offline oracles. However, this typically has two drawbacks. First, it is more restrictive since the examples covered by it are typically linear. Second, for general function classes, the optimism step is typically computationally hard, while our CB alg is easily implementable (hence, we could provide experimental results on complex, real-world CBs). \ **c)** Finally, we want to highlight that this Eluder route is captured as a special case of our online RL algorithm when $H=1$. In this one-step case, the BC assumption is not needed since there is no TD. Using our new result in (3), we can obtain a regret bound with the Eluder dimension. In sum, our algorithms in this paper can capture both the DEC and the Eluder route for CBs. Thank you again for all your constructive feedback. We will incorporate all these comments in the camera-ready, and please let us know if you have any additional questions. --- Rebuttal Comment 1.1: Title: thanks for the response Comment: Congratulations on your nice work. My major concerns have been addressed. I raise my score to 7 to support this paper for acceptance. Another minor comment is that you may check the techniques in [1] to see whether we can find more interesting examples as they also consider the linear eluder dimension of Hellinger/TV. [1] On the Statistical Efficiency of Mean Field Reinforcement Learning with General Function Approximation
Summary: This paper establishes the regret bounds of the proposed distributional methods from the perspective of the small-loss bound, where the regret upper bound depends not only on the number of iterations but also on the optimal value. Based on this, they can show that regret could be lower in those environments whose optimal policy corresponds to a higher value function. This paper provides three new distributional RL approaches for three different RL settings. The first one combines ReIGW and MLE to solve contextual bandits. The second and third ones use a similar concept, applying confidence intervals to the likelihood estimate, to solve online RL and offline RL, respectively. They show that the proposed methods achieve competitive regret bounds theoretically. The numerical results also demonstrate the empirical performance of the distributional approach for contextual bandits. Strengths: - The concept of small-loss bound is very interesting and appears novel in the context of distributional RL. This new perspective indeed offers a promising way to theoretically understand why distributional methods could empirically achieve better performance in the existing distributional RL literature. - This paper is thorough in that it considers three important RL settings: (i) contextual bandits, (ii) online RL, and (iii) offline RL. This paper rigorously shows that the distributional MLE methods have provable benefits in terms of regret for a wide variety of RL problems. - The idea of combining MLE and confidence sets is interesting and different from the traditional idea of considering the confidence interval of the mean return. Weaknesses: - Page 7 seems to be missing. This makes the upper half of Page 8 somewhat difficult to read. - The regret bounds depend on factors related to the size of the distribution class. For example, in contextual bandits, the regret bound depends on Regret_{log}(K).
Similarly, in online and offline RL, the regret bounds depend on $|\mathcal{F}|$. The finiteness of distribution classes appears to be a fairly strong assumption in RL. - Accordingly, another concern is the assumption that there exist good distribution classes that satisfy Bellman completeness. While it is mentioned in Section 5 that this assumption could hold under some special tabular MDPs, it is unclear how far this argument could go in more general RL settings. - The numerical simulations only discuss the result of contextual bandits. While typically I would not complain about the simulations in a theory paper, I do think that experiments on both online RL and offline RL could be very helpful in understanding the connection between small-loss bounds and the empirical regrets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I agree with the authors that there is very limited theoretical understanding of distributional RL, and therefore overall I can appreciate the contribution of this paper, which offers a new viewpoint for understanding distributional methods in RL. That said, the algorithms proposed and analyzed in this paper are all of MLE style, and somehow they are quite distant from the mainstream distributional RL methods (e.g., C51, QR-DQN, IQN, etc.). While the results in this paper are nice to have, I am not sure by how much these analytical insights could benefit the understanding of these popular distributional RL methods. In other words, one of my concerns is that the usefulness of small-loss bounds is tied to the specific algorithms presented in this paper. It would be very helpful if the authors could comment on the connection between the small-loss bounds and other common distributional methods. Some detailed questions: - Line 55: The paper claims that triangular discrimination is a novel approach for decomposing the regret, but this idea has been adapted from [Foster and Krishnamurthy, 2021].
- Line 131: The cost distribution should be some distribution on $[0, H-h]$? A similar issue occurs for the distribution of the loss-to-go in Line 146. - Lines 127 & 148: The notations $\bar{C}$ and $\bar{Z}$ are not defined. - Line 152: How is the sum of two distributions (or two random variables) defined? Do we assume independence here? - In Algorithm, line 4 and Algorithm 3, line 3: What is $\bar{f}_1$ in line 3? - I am not sure how to compute the inner max of the confidence set. It seems that if the policy is not given, $\mathcal{F}$, which is a function of $\pi$, is hard to compute. - There are many hyperlinks pointing to the wrong positions. - Line 626: the notation in this inequality is misleading; does $f_1$ depend only on $x$, or on $(x,y)$? I am starting with 5 and would be willing to raise the score if the authors could address these questions and the issues mentioned above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not describe any specific limitation of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your constructive feedback. Please find our responses below. ### For the Weaknesses section: 1. We used the allowed one-page PDF to upload the contents of Pg. 7, which got accidentally cut. We sincerely apologize for the inconvenience. Luckily, no crucial material was accidentally omitted. 2. **a)** Depending on $\text{Regret}_{\log}(K)$ for CB regret is fairly standard, e.g. see Theorem 1 in Foster and Rakhlin, 2020 “Beyond UCB: …” and Foster and Krishnamurthy, 2021 “Efficient first-order…”. Essentially, Theorem 4.1 translates the decision-making regret to the regret of online learning of the distribution, which can be bounded for particular classes and learning algorithms. The crucial point is the dependence on $C^\star$. \ **b)** We consider infinite distribution classes in Appendix F. We mention this at the bottom of page 6 but can certainly advertise this extension much better in the main text. We focused on finite classes in the main text to keep it brief and since finite classes are commonly considered in RL theory. Nonetheless, we did extend to infinite classes, as we agree it is much more realistic. The complexity measure we use for infinite classes is the bracketing entropy, as it is the standard complexity measure for MLE, cf. Van de Geer, 2000 "Empirical Processes in M-estimation". For example, if $\mathcal{F}$ is a linear function class with features of dimension $d$, then its bracketing entropy is $\mathcal{O}(d)$. 3. Please see our discussion on BC in the global response. In short, we can prove distributional BC for tabular, linear, low-rank, and LQR MDPs. We will add this to the paper to address your important point. Thanks for urging us in this direction. 4. As you write, our primary contributions are theoretical. Running additional experiments for the RL case may be difficult, as our DistRL algorithms are based on GOLF and Bellman-consistent pessimism (BCP), which are version space methods that are NP-hard to run.
With that said, we'd like to highlight two possible directions for practical versions of the algorithm. First, our confidence set construction is for deep exploration; if the problem only needs shallow exploration, we can adopt a cheaper exploration strategy such as $\epsilon$-greedy, cf. Dann et al., 2022 "Guarantees for $\epsilon$-greedy…" Second, a follow-up to BCP successfully implemented its main algorithmic idea and showed state-of-the-art results in offline RL benchmarks (Cheng et al., 2022 "Adversarially trained…"). Since our offline RL alg shares similarities with BCP, we believe our work provides strong support for empirically investigating a distributional version of Cheng et al., 2022. We leave the implementation and benchmarking as promising future work. ### For the Questions section: **Mainstream DistRL algorithms:** 1. C51 is actually very similar to MLE, modulo a projection step that is needed due to discretizing the values. If $Z_{\tilde{\theta}}$ is the learned distribution from the last step, C51's update aims to learn $Z_{\theta}$ that minimizes $KL(\Phi \mathcal{T} Z_{\tilde{\theta}}|| Z_\theta)$, where $\Phi\mathcal{T}$ is the projected distributional Bellman operator. Since minimizing KL is equivalent to MLE, C51 is essentially doing MLE with projection, so our insights may well apply to C51-style methods. 2. Quantile-regression (QR) methods such as QR-DQN & IQN minimize the pinball loss rather than maximizing log-likelihood. While we use guarantees in the squared Hellinger distance from MLE, QR gives guarantees in the Wasserstein distance. It is interesting future work to explore the theoretical benefits of QR for decision making. **Detailed Questions:** * Line 55: Foster and Krishnamurthy, 2021 study solely CBs, and their analyses do not immediately lead to RL bounds. In contrast, our novel techniques (e.g. the self-bounding Lemma G.4) enable us to go beyond CBs and prove the first small-loss bounds in offline and online RL (in non-tabular settings).
(Our CB results are not significant in view of Foster and Krishnamurthy and are only meant as an expositional warm-up for our RL results, which _are_ novel and significant.) * Line 131: The costs-to-go are in [0,1] rather than [0,H-h] since we work under the normalized cumulative costs setup, i.e., costs and cumulative costs are normalized in [0,1] as in Jiang and Agarwal, 2018 “Open Problem: The Dependence…”. This setup allows for sparse costs and is more general than assuming costs to be normalized in [0,1] up to rescaling by H, i.e. in our setup, 100% of the total cost can be obtained at a single step, while in the traditional setup, only a 1/H-fraction of the total cost can be obtained each step. * Lines 127 & 148: The \bar notation, which we defined in Line 128, denotes averaging over a distribution. We will remind the reader in a few spots. * Line 152: Yes; by + we meant convolution here, that is, the distribution of the sum of independent draws from each distribution. We will simply avoid this notation and describe the distribution explicitly. Note this is the standard distributional Bellman operator, cf. Definition 4.8 from Bellemare et al., 2023 "Distributional Reinforcement Learning". * Our online RL confidence set does not depend on policies (it is defined wrt the Bellman optimality operator). Our offline RL confidence set *does* depend on policies (it is defined wrt the policy's Bellman operator). We discuss their computational complexity in the uploaded PDF (Pg 7). * We will ensure hyperlinks are fixed for the final version. (Splitting main text and supplement broke all links.) * Line 626: Thanks for catching this. Where we write "$f_1(x,y)$" on the RHS, we meant the density of the distribution $f_1(x)$ evaluated at $y$. We will add the assumption that the density exists and give it a notation. Thanks again for your constructive feedback. We will incorporate all these comments in the camera-ready, and please let us know if you have any additional questions.
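To illustrate the "C51 is MLE with a projection" point in our response above, here is a minimal numpy sketch of the projected categorical target (the $\Phi$ operator) and the cross-entropy loss; minimizing the cross-entropy against the projected target is MLE, i.e. minimizing $KL(\Phi\mathcal{T}Z_{\tilde\theta} \| Z_\theta)$ up to a constant. The grid parameters and function names are ours, purely illustrative, not the exact C51 implementation:

```python
import numpy as np

V_MIN, V_MAX, N_ATOMS = 0.0, 1.0, 51
atoms = np.linspace(V_MIN, V_MAX, N_ATOMS)
dz = (V_MAX - V_MIN) / (N_ATOMS - 1)

def project(target_atoms, target_probs):
    """The Phi operator: project a categorical distribution supported on
    target_atoms (e.g. Bellman targets r + gamma * z) back onto the fixed grid."""
    proj = np.zeros(N_ATOMS)
    for z, p in zip(target_atoms, target_probs):
        b = (np.clip(z, V_MIN, V_MAX) - V_MIN) / dz    # fractional grid index
        l, u = int(np.floor(b)), min(int(np.ceil(b)), N_ATOMS - 1)
        if l == u:
            proj[l] += p
        else:                                           # split mass between neighbors
            proj[l] += p * (u - b)
            proj[u] += p * (b - l)
    return proj

def mle_loss(pred_probs, proj_target_probs):
    """Cross-entropy against the projected target; minimizing this over the
    predicted distribution is MLE with respect to the projected target."""
    return -np.sum(proj_target_probs * np.log(pred_probs + 1e-12))
```

Replacing the log-likelihood here by the pinball loss would instead give QR-style methods, which come with Wasserstein rather than Hellinger guarantees.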
--- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank the authors for the detailed response. My main concerns about the BC condition, the distribution class, and the connection between the mainstream distributional RL and this paper have been addressed. With that said, I raise the score to 7 and vote for acceptance.
Summary: This paper explores the benefits of distributional reinforcement learning (RL) and provides a mathematical basis for its advantages. Traditional RL approaches focus on learning the mean loss-to-go, but recent developments have shown that learning the entire loss distribution can lead to improved performance in various tasks. However, the theoretical understanding of why and when distributional RL works well has been limited. The paper introduces the concept of small-loss bounds, which are instance-dependent bounds based on the minimum achievable cost in the problem. By optimizing over distributional confidence sets constructed through distributional Bellman equations, the proposed algorithms achieve small-loss regret bounds in tabular Markov decision processes (MDPs) and small-loss PAC bounds in latent variable models. The paper also presents a distributional contextual bandit algorithm and an offline RL algorithm with a novel robustness property. Empirical results demonstrate the effectiveness of the distributional RL algorithms in challenging benchmark tasks. Strengths: This paper investigates an important problem in theoretical RL and deepens our understanding of its benefits based on rigorous mathematical analysis. It proposes new distributional algorithms for contextual bandits, online and offline RL settings and provides corresponding small-loss bounds. The work presented in this paper is novel and original, to the best of my knowledge. The paper is well-structured, building from the simple setting of contextual bandits and then extending the analysis to more complex RL setups. The paper clearly states the assumptions, theorems and the proof sketches in each section and provides formal algorithms wherever required. Weaknesses: The readability of the paper is poor owing to the large amount of text/math. It seems the authors have modified the spacing between lines and headings in some parts of the paper in order to adhere to the page limit. 
Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: None. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: The paper includes a brief discussion of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments! Distributional RL is indeed quite notation heavy, and since we show its benefits in all three settings of CBs, online RL, and offline RL, we necessarily need to use notations from all three settings. We hope that Appendix A's table of notations can serve as a convenient index for searching notations, and we'll be sure to add more text descriptions to improve readability.
Summary: The paper's main concern is to theoretically understand why distributional RL achieves good performance. They consider three different settings: contextual bandit, online RL with an optimistic algorithm, and offline RL with a pessimistic algorithm. In each case, they use MLE to learn a distribution over the unknown cost (bandit case) and loss-to-go (RL case). The key technique that enables the proof is relating a new notion of distributional regret to the regular regret; this is done by manipulating distributional divergences. It appears that the key across settings is that distributional divergence gives more fine-grained control over value/cost differences compared to only looking at means. This enables them to provide small-loss bounds in each case. They validate their findings empirically for the contextual-bandit case. Strengths: Significance and Originality: 1. The problem considered is important and relevant to practitioners. The authors do a good job in the introduction of motivating the theoretical conundrum. 2. The tools developed by the authors around distributional divergence are novel to the best of my knowledge. Quality 1. The results appear correct. And the settings considered are comprehensive across contextual bandits, online RL, and offline RL. 2. The empirical results display improved performance over well-chosen benchmarks. Weaknesses: Clarity: 1. The regret decomposition using triangular discrimination, while novel, is intricate and difficult to interpret intuitively. If further intuition were provided, this would help inspire future theoretical work. 2. The writing is notation-heavy and takes a while to parse through and keep track of notation. 3. Relating to point 1, it would be nice if there were further connections to the practical applications around risk-sensitive RL, which are the primary motivating examples for distributional RL. This would help bridge the gap to practitioners. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The triangular discrimination technique seems central to converting distributional divergence into bounds on value difference. Intuitively, what are the main factors that enable the small-loss term to show up when distributional approaches are used? Are there ways to make this term appear without distributional RL? If not, is there reasoning for why it is not possible? 2. How restrictive are the realizability and Bellman completeness assumptions made in the analyses? Do you have a sense of how the techniques could extend to violated assumptions? 3. Would it be easy to generalize the results to linear MDPs? What obstacles may arise? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive and constructive comments! Please find our responses below. **Triangular discrimination:** Intuitively, triangular discrimination bounds give finer control over estimation error than traditional $L_2$ bounds from non-distributional methods. For example, for some target conditional distribution $g(y\mid x)$, imagine we want to estimate its conditional mean $\bar g(\cdot\mid x)$. On one hand, square loss regression would learn a function $\hat f_{sq}:\mathcal{X}\to\mathbb{R}$ such that $|\mathbb{E}_x[\hat f_{sq}(x) - \bar g(\cdot\mid x)]|\leq \|\hat f_{sq}(x) - \bar g(\cdot\mid x)\|_{L_2} = \mathcal{O}(1/\sqrt{N})$. On the other hand, suppose we learn a conditional distribution $\hat f_{dist}:\mathcal{X}\to\Delta(\mathbb{R})$ with MLE. Using triangular discrimination, we can obtain a *self-bounding* inequality $|\mathbb{E}_x[\bar{\hat{f}}_{dist}(\cdot\mid x) - \bar g(\cdot\mid x)]|\leq \sqrt{(\mathbb{E}_x\bar g(\cdot\mid x) + D_\triangle(\hat f_{dist}(\cdot\mid x)||g(\cdot\mid x))) \cdot \mathbb{E}_x D_\triangle(\hat f_{dist}(\cdot\mid x)||g(\cdot\mid x)) }$. This is in fact the *implicit inequality* that can be derived from Eq. $\Delta_1$ on Page 5. By standard MLE generalization results, we expect $\mathbb{E}_x D_\triangle(\hat f_{dist}(\cdot\mid x)||g(\cdot\mid x))=\mathcal{O}(1/N)$. Thus, if $\mathbb{E}_x\bar g(\cdot\mid x)\approx 0$, we expect the bound to converge as $\mathcal{O}(1/N)$, which is faster than squared loss's $\mathcal{O}(1/\sqrt{N})$ rate. This separation between squared loss regression and MLE is actually fundamental, and there already exists a lower bound in the CB setting, see Theorem 2 of Foster et al., 2021 "Efficient first-order contextual bandits..." To summarize, we can only obtain this key self-bounding inequality with MLE, and not squared loss, and this is the key intuition for how we obtain small-loss bounds. 
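Restating the rebuttal's argument compactly (no new claims; this simply plugs the MLE rate into the self-bounding inequality above):

$$
\Big|\mathbb{E}_x\big[\bar{\hat f}_{dist}(\cdot\mid x) - \bar g(\cdot\mid x)\big]\Big| \le \sqrt{\Big(\mathbb{E}_x\bar g(\cdot\mid x) + \mathcal{O}(1/N)\Big)\cdot \mathcal{O}(1/N)} = \begin{cases} \mathcal{O}(1/N) & \text{if } \mathbb{E}_x\bar g(\cdot\mid x) \approx 0,\\ \mathcal{O}(1/\sqrt{N}) & \text{otherwise,} \end{cases}
$$

so MLE never does worse than squared-loss regression's $\mathcal{O}(1/\sqrt{N})$ rate and improves to $\mathcal{O}(1/N)$ on small-loss instances.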
**Realizability and Bellman completeness:** Since BC is stronger than realizability, we focus our discussion on BC, which we posted in the global response. In short, we can prove distributional BC for tabular, linear, low-rank, and LQR MDPs. We will add this to the paper to address your important point. **Generalization to linear MDPs:** As remarked in the global response, DistBC indeed covers linear MDPs. Moreover, we proved another new result inspired by Reviewer 8JhK's comments, which shows the LSEC (an analysis tool used in Appendix G.2) is bounded by the Bellman eluder dimension. Since low-rank MDPs have Bellman eluder dimension $\widetilde{\mathcal{O}}(d)$, these two new results imply that our small-loss bounds also hold for low-rank MDPs (and thus also linear MDPs), further generalizing our results! **Risk-sensitive RL:** Risk-sensitive RL is indeed well-motivated for distributional RL, so there is no conundrum there. The long-standing conundrum about distributional RL is regarding the risk-neutral setup: when optimizing expected returns, why can learning the distribution then computing its mean perform better than learning the mean directly? (By Bellman equations, all we need are the expected returns, so why should we learn the distribution?) We answer these questions with small-loss bounds, which *converge faster* than bounds from non-distributional methods; to the best of our knowledge, our work shows the theoretical benefits of distributional RL for the first time. With that said, we believe the techniques developed in our paper could also be useful for deriving small-loss bounds in risk-sensitive settings, and leave that as future work.
Rebuttal 1: Rebuttal: We are grateful for all the encouraging and constructive reviews, which have been helpful in improving and polishing our work this week. Amongst all four reviewers, three of them (9rQf, nAEh, 8JhK) inquired about the necessity and generality of distributional Bellman completeness (DistBC), so we'd like to address this in the global response. **Necessity of BC for TD:** As remarked in Lines 224-230, BC is necessary for TD-style algorithms to succeed. Without it, TD can diverge or converge to bad fixed points, e.g. Tsitsiklis and Van Roy, 1996 showed such a counterexample. Since our algorithms are distributional versions of GOLF and Bellman-consistent pessimism, which are TD-style algorithms that already require BC, it is quite natural for our results to rely on analogous assumptions to these prior works. One reviewer (8JhK) brought up the issue of non-monotonicity of BC (adding a new function can violate BC): we want to point out our theorems also hold under "generalized BC," a weaker and monotone assumption that there exist function classes $\mathcal{G}_h$ such that $\mathcal{T}_h\mathcal{F}_{h+1}\subseteq \mathcal{G}_h$ for all $h$ (cf. Assumption 14 of Jin et al., 2021a). If $\mathcal{G}=\mathcal{F}$, this recovers the typical BC assumption. We'll add this as a remark. In fact, Foster et al., 2022 "Offline RL: Fundamental Barriers..." showed a lower bound that $Q^\star$-realizability and all-policy concentrability (a stronger coverage condition than the single-policy one we use!) are not sufficient conditions for sample efficient offline RL. This suggests that removing BC is challenging and would require some other assumptions in its place. Seeking alternative conditions to BC is not our goal here. 
Instead, our contribution is that distributional versions of prior algorithms can yield small-loss bounds, which provides the first theoretical answer for the long-standing conundrum in the RL community: when optimizing expected returns, why can learning the return distribution and only then computing its mean perform better than learning the mean directly, which is all we need for Bellman's equation? **(New result) DistBC is satisfied by linear and low-rank MDPs:** Urged by the excellent feedback, we've proven a new result: in linear MDPs, the following linear function class automatically satisfies DistBC, $\mathcal{F}=\{f(z\mid x,a) = \phi^\star(x,a)^\top w(z): w: [0,1]\to B^d(r)\}$, where $\phi^\star(x,a)$ are the linear MDP's features and $B^d(r)$ is the radius-$r$ $\ell_2$-ball in $\mathbb{R}^d$ (with $r$ chosen appropriately for normalization purposes). This result is analogous to the well-known fact that linear MDPs automatically satisfy (vanilla) BC with a similar linear function class, and the proofs are similar. Moreover, this result easily extends to low-rank MDPs (where $\phi^\star$ is unknown) if we let $f(z\mid x,a)=\phi(x,a)^\top w(z)$ with $\phi$ varying in $\Phi$, assuming it is realizable, $\phi^\star\in\Phi$. We want to point out that DistBC also captures the Linear-Quadratic Regulator (LQR), as shown in Section B.2 of Wu et al., 2023 "Distributional Offline Policy Evaluation...". In sum, DistBC captures linear MDPs, low-rank MDPs, and LQRs, so DistBC essentially captures the same interesting models captured by vanilla BC. **Conclusion about BC:** To summarize the above, (1) BC is necessary for TD and assumed by GOLF and Bellman-consistent pessimism (the non-distributional analogs of our algs), and (2) DistBC captures all the interesting models captured by (vanilla) BC. We hope these two points clarify our rationale for assuming DistBC and that it is not strong at all (relative to prior works). 
Additionally, we have attached a 1 page PDF containing a proof sketch of Theorem 5.2, the definition of low-rank MDP and a discussion on computational complexity, which we hope clears up any missing notations. Pdf: /pdf/a7f8d0a52b01e86c138947c7a9e30b0fc7b41a1a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Self-Refine: Iterative Refinement with Self-Feedback
Accept (poster)
Summary: This paper proposes Self-Refine, which uses the same LLM to provide feedback for its output and refine itself to improve output quality. Self-Refine is training-free, using a single off-the-shelf LLM as the generator, refiner, and feedback provider. The authors evaluate Self-Refine across 7 tasks, and outputs generated with Self-Refine are preferred by humans and automatic metrics over results of one-step generation. The success of Self-Refine demonstrates that state-of-the-art LLMs can be further improved at test-time. Strengths: 1. The proposed Self-Refine framework is conceptually neat and can be easily combined with different LLMs. Though it still can be viewed as a prompting method, it decouples the refinement of model output into iterative steps instead of focusing on engineering a single prompt. 2. Self-Refine is fairly effective according to the experimental results in Table 1, especially on difficult and uncommon tasks. Weaknesses: 1. The Self-Refine framework relies on prompting for feedback generation and refining outputs. According to examples in Appendix S, the prompt is non-trivial and requires considerable design effort. However, how to design the prompt is not elaborated. Also, Appendix S seems to be unfinished as I see "TODO: Add relevant information for the remaining task" in Line 863. 2. As discussed in Section 2, as an iterative framework, Self-Refine requires a stopping condition. How you design the stopping condition is not clearly stated in the main paper. Also, as there are different design choices, e.g., using a pre-defined number of iterations, using a scalar stop score, comparing refined results and results in the previous iteration, etc., control experiments are necessary. 3. At a high level, I think the contribution of Self-Refine lies in providing a good prompting way to fully tap the LLM's potential instead of really improving the LLM's ability as the authors claim in Line 14-15. 
For example, Table 1 shows that Self-Refine cannot improve the results in Math Reasoning. I think it may be because the performance of this task is bounded by the LLM's reasoning ability and Self-Refine cannot improve such an ability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I think "Sentiment Transfer" in Line 175 refers to "Sentiment Reversal" in Table 2 (I would suggest using the same term for clarity). I'm puzzled why its result in "No feedback" setting is 0. According to Section 3.2, the metric for this task is human preference in A/B evaluation. Does that mean multiple rounds of generation make the generation result collapse? Do you have any explanation for this? **Updated on 8/11/2023:** The authors' response clarifies my question and some points regarding the weaknesses. Thus, I update my score to 6. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have already discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our three main points: (1) Self-Refine is training-free, (2) Self-Refine benefits a variety of tasks, and (3) Self-Refine can even improve state-of-the-art LLMs at inference time. We think that all your questions are addressable within this discussion period. Please see our response below. We would love to address additional questions during the discussion period if anything is unclear. --- **How do you design the prompts?** The prompts we used are very simple, and our preliminary experiments showed that any prompt that follows the feedback-and-refinement steps provides benefits. That is, the contribution of having these general steps is much more significant than the specific design of any particular prompt. Further, as requested by other reviewers, we conducted additional experiments with instructions-only (“zero-shot”), which further shows that our general idea is beneficial regardless of the exact prompts. See our prompts in Figures 16-35 in the Supplementary Material, and in our [anonymous repo](https://anonymous.4open.science/r/selfrefineanon-EFEA/data/prompt/) . --- **Design of code readability/missing details in Appendix S** Thanks for pointing this out. The line is supposed to include a link to Section L of the Appendix, which expands on the task design for code readability (Fig 22, 23). We will fix this in the next version. --- **How did you stop iterating?** We generally employ a fixed stopping criterion of four iterations due to budget constraints. However, certain tasks can provide feedback that allows us to stop the iterative process early. For instance, in tasks like the Constrained Generation (CommonGen), iterations cease once all concepts are covered [please see line 62](https://anonymous.4open.science/r/selfrefineanon-EFEA/src/commongen/run.py). For GSM-8k, we use the feedback 'it is correct' as a stopping indicator. 
Overall across all tasks, we observe diminishing returns (see L181, Figure 4), as the first 3 iterations provide the most significant improvement. So, in the case of a limited-compute budget, even using a constant number of iterations provides significant benefits. We will make this clear in the paper. --- **The contribution of Self-Refine lies in providing a good prompting way to fully tap LLM's potential, instead of really improving LLM's ability** We agree. Self-Refine does not improve the base model, since it doesn't involve training. However, Self-Refine improves generation ability by changing the algorithm that we use to generate an output, which is an important part of using LLMs. We will clarify this distinction in our revised version. --- **Why does Sentiment Reversal with "No feedback" achieve "zero improvement" in Table 2?** In the "No feedback" setting, the model was not given clear instructions on how to change the output. We find that the model tends to either repeat the same output in each iteration, or to make unrelated changes. Since the scores in this task are the relative improvement in human preference (see also Appendix C), a score of "zero" means that "No feedback" did not improve over the base model outputs in any case. As the main components of Self-Refine are "Feedback", "Refine", and repeating them iteratively, these results show that the Feedback step is crucial. We used "sentiment reversal" and "sentiment transfer" interchangeably but agree this is confusing. We've decided to stick to the term "sentiment reversal" here in the rebuttal and will make this change in the paper – thanks for pointing it out. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply! It'll be beneficial to clarify those points mentioned in your response in the revised version. I slightly increased my score to "6: Weak Accept". --- Reply to Comment 1.1.1: Title: Thank you for increasing your score! Comment: Thank you for increasing your score! 
Please let us know if there are additional questions before the discussion period ends.
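The generate-feedback-refine loop with the stopping criteria described in this rebuttal (a fixed iteration budget plus task-specific early stopping) could be sketched roughly as follows; the prompt strings and the `llm` callable are hypothetical placeholders, not the authors' actual prompts:

```python
def self_refine(llm, task_input, max_iters=4, stop_phrase="it is correct"):
    """Minimal sketch of a Self-Refine-style loop: a single model generates
    an answer, critiques it, and refines it until a stop phrase appears in
    its own feedback or the iteration budget runs out."""
    output = llm(f"Task: {task_input}\nAnswer:")  # initial generation
    for _ in range(max_iters):
        feedback = llm(f"Give feedback on this answer:\n{output}")
        if stop_phrase in feedback.lower():  # early-stopping criterion
            break
        output = llm(
            f"Task: {task_input}\nPrevious answer: {output}\n"
            f"Feedback: {feedback}\nImproved answer:"
        )
    return output
```

In practice the stopping test is task-specific (e.g. "all concepts covered" for CommonGen), and Figure 4 of the paper suggests most of the gains arrive within the first three iterations.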
Summary: This paper proposes Self-refine, which prompts LLMs to generate feedback on the initial generation and revise the initial response based on the generated feedback iteratively. On 7 tasks, Self-refine leads to performance improvement without any training. Strengths: - The method is straightforward. - The writing is clear. - The paper provides an in-depth analysis of the proposed method. Weaknesses: - It is stated that out of 7 tasks, 4 tasks are evaluated by setting GPT-4 as the evaluator for the result of Table 1. Although it is known that GPT-4 possesses order bias, the paper does not handle this bias. Bidirectional evaluation should be conducted to reduce the bias (switching the order between response A and B). - For the result of GPT-4 of Table 1, because the response is refined into GPT-4 preferable response, it is highly likely that GPT-4 would favor the refined response (bias exists). To mitigate this bias, using other LLMs that are not provided by OpenAI such as Claude for either the target model (the model that is being evaluated) or evaluator might be a better solution. - Reliability of the human evaluation setting is questionable. The inter-labeler agreement is missing. Also, author-based human evaluation potentially leads to biased results. - Although the paper states that the authors have experimented with Codex, the result is missing in Appendix F. The main concern of this paper is on the evaluation setting, especially on the results of Table 1. Showing the effectiveness of Self-Refine on additional reasoning tasks based on automatic metrics (such as MMLU, BBH tasks) would enhance the reliability of the evaluation setting. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - Are there any reason for not conducting a human evaluation for Code Readability (Table 6)? - Any ablation on iterative self-refine? What is the effect of retaining the history of previous feedback? - Any human evaluation on Constrained Generation? 
It seems that only coverage is measured while other metrics such as coherence might be important. (The generated sentence might not be coherent but the coverage might be high.) - What does stopping indicator refer to in line 86? Any examples of a stopping indicator? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points, that Self-Refine leads to performance improvements without any training across 7 tasks, and that the paper provides an in-depth analysis of the proposed method. We think that all your questions are addressable within this discussion period. Please see our response below. We would love to address additional questions during the discussion period if anything is unclear. --- **Conduct Bidirectional evaluation to overcome GPT-4 order bias** Thank you for bringing up this important point. To mitigate order bias in our tasks, we indeed randomly flip the labels before evaluation (see Line 31 in our [anonymous repo](https://anonymous.4open.science/r/selfrefineanon-EFEA/src/sentiment_reversal/gpt4_eval.py) for an example). We will make this clear in the paper. --- **GPT-4 may prefer its own responses … consider using Claude to evaluate GPT-4** Thanks for the great suggestion! We agree that despite the measures we took to prevent any inherent biases, GPT-4 might inherently favor self-refined outputs. To remedy this, we’ve added an analysis where we do the evaluation of GPT-4 as the base LLM using [Claude 2](https://www.anthropic.com/index/claude-2) (newly available, as of July 11th) as the evaluator. These results (table below) from GPT-4 as the base LLM with Claude-2 as the evaluator show the same strong preferences for Self-Refine over the base model. We will add and clarify them in our revised version. 
New Results for GPT-4 as the base LLM with Claude-2 as the evaluator:

|Task|% Claude-2 preferred base|% Claude-2 preferred Self-Refine|
|---|---|---|
|Dialog Response Generation|30.6|**64.7** ($\uparrow$34.1)|
|Sentiment Reversal|10.6|**69.2** ($\uparrow$58.6)|
|Acronym Generation|32.0|**49.2** ($\uparrow$17.2)|
|Code Readability|37|**60** ($\uparrow$23)|

--- **Author-based human evaluation potentially leads to biased results** We employed a fully blind protocol to ensure unbiased evaluation. Annotators were unaware of which outputs came from which method, and tasks were allocated so that the author responsible for a task didn't annotate it. More details are in Appendix C. While crowdsourcing can introduce noise, our author-based blind setup is deemed more reliable. With the reviewer's recommendation, we've added evaluations using Claude-2; now, evaluations from humans, GPT-4, and Claude-2 consistently demonstrate improvements. --- **What is the inter-labeler agreement?** While multiple annotators participated in each task, we only collected a single annotation per instance, in order to scale the number of datapoints we could annotate. However, following your suggestion, we conducted an additional evaluation. For all of the following datasets, two annotations were obtained for 50 samples. All human evaluations were conducted in a double-blind manner: the responses were randomly flipped, ensuring that annotators were unaware of which output was from the base model and which was from the Self-Refined model. For each task, we measured inter-labeler agreement using Cohen's kappa: Code Readability and Acronym Generation both scored a substantial 0.75, Sentiment Transfer was also substantial at 0.61, while Response Generation was moderate with a score of 0.53. --- **Codex results are missing in Appendix F** Yes, we did experiment with Codex; apologies for missing the results in Appendix F. 
Here are the results:

|Metric|Task|Base Rate (%)|Self-Refine Rate (%)|
|---|---|---|---|
|Solve Rate (Oracle feedback)|Math Reasoning|71.3|**76.2**|
|%Programs Optimized|Code Optimization|9.7|**15.6**|
|%Readable Variables|Code Readability|37.4|**51.3**|

--- **Q1: Code Readability Human Eval.** To provide an evaluation of Self-Refine over the base LLM for code readability (which we agree is beneficial), we conducted an additional human evaluation for code readability, which we will add to Table 6. 50 pairs were rated by at least 2 annotators each (Cohen's kappa 0.75), and the results are as follows:

|Task|Self-Refine (%)|Direct (%)|Either (%)|
|---|---|---|---|
|Code Readability|**50.00**|3.00|47.00|

As the results indicate, the Self-Refined responses are significantly preferred by the annotators. --- **Q2.1: Ablation on iterative self-refine?** We've conducted ablation studies detailed in:
- Page 6, Section 4, Table 2 covers generic feedback vs. no feedback.
- Page 6, Section 4, Figure 4 shows results on varying self-feedback iterations.

We would be happy to add any additional suggested ablations to the final version of the paper. --- **Q2.2 What is the effect of retaining the history of previous feedback?** In our initial experiments, we found that retaining the history of feedback prevents the model from generating the same (suboptimal) response twice. However, as the model always receives the latest output to refine, the need for history might vary by task. --- **Q3: Human eval on Constrained Generation?** We conducted a human evaluation of the outputs generated by GPT-4. Our findings revealed that **all sentences** were well-formed and free from grammatical or fluency errors. We analyzed 50 outputs, and noticed that while some outputs incorporated highly imaginative scenarios, such as a "violinist playing a lullaby by the river," none of the sentences evaluated were implausible or nonsensical. 
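Cohen's kappa, used in this rebuttal to quantify inter-labeler agreement, corrects raw agreement for agreement expected by chance. A minimal reference implementation for two annotators (a generic sketch, not the authors' evaluation code):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement: fraction of items where the two annotators match
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each annotator's marginal label frequencies
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

By the conventional reading, 0.41-0.60 is "moderate" and 0.61-0.80 is "substantial" agreement, matching the interpretation in the rebuttal.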
--- **Q4: Stopping indicator** A stopping criterion is a condition at which we stop the Self-Refinement process and return the last generated answer. For instance, in the Constrained Generation (CommonGen) task, Self-Refine iterations halt as soon as all concepts are covered; please see [line 62-anonymous code](https://anonymous.4open.science/r/selfrefineanon-EFEA/src/commongen/run.py). In the GSM-8k task, we used the feedback `it is correct` as the stopping criterion, which could cause early termination of Self-Refine before the hard constraint of four iterations. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Many of my concerns have been resolved. I have one question that has not been addressed yet. Do you have any preliminary results on the MMLU or BBH benchmarks? (Evaluation on a subset or a few tasks would be fine if inference cost is the issue.) --- Reply to Comment 1.1.1: Title: Results on BBH Comment: Thanks for your response. Following your suggestion, we ran additional experiments and we now have preliminary results on additional Big Bench Hard tasks:

| Task | Base model | +Self-Refine | Gain |
|-----|-----|-----|-----|
| Date Understanding | 62.0 | **66.8** | $\uparrow$4.8 |
| Geometric Shapes | 17.6 | **20.0** | $\uparrow$2.4 |
| Logical Deduction (seven objects) | 43.2 | **45.2** | $\uparrow$2.0 |
| Multi-Step Arithmetic [Two] | 61.6 | **64.0** | $\uparrow$2.4 |
| Tracking Shuffled Objects (seven objects) | 31.6 | **36.0** | $\uparrow$4.4 |

All of these experiments were conducted with `gpt-3.5-turbo-0316` (ChatGPT), with a temperature of 0.0, and without any task-specific prompts (all tasks used the same instruction prompts). We believe that task-specific prompts would further increase the gain of Self-Refine compared to the base model. We remain open to further feedback and would greatly appreciate any additional insights and questions you might have.
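The label-flipping that the authors describe earlier in this rebuttal for mitigating a judge model's order bias could be sketched as follows; the `judge` callable and its "first"/"second" return convention are hypothetical, not the authors' evaluation interface:

```python
import random

def ab_preference(judge, response_a, response_b, rng=random):
    """Query a judge on an A/B pair, randomly swapping presentation order
    to mitigate position bias, then map the verdict back to A/B."""
    swapped = rng.random() < 0.5
    first, second = (response_b, response_a) if swapped else (response_a, response_b)
    verdict = judge(first, second)  # judge returns "first" or "second"
    if swapped:  # undo the swap so the verdict refers to the original order
        verdict = "second" if verdict == "first" else "first"
    return "A" if verdict == "first" else "B"
```

Averaging over many randomly flipped queries cancels any systematic preference the judge has for the first (or second) position.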
Summary: This paper proposes a Self-Refine framework for improving initial outputs from LLMs. Given an input, Self-Refine generates feedback and refines its outputs iteratively. Without additional training cost or human effort, Self-Refine outperforms baselines with the same large language model that generated output draft previously. Specifically, Self-Refine can be divided into the following three steps: initial generation, feedback, and refinement. These three steps are accomplished through task-specific in-context learning (i.e., few-shot prompting). The authors evaluate Self-Refine on seven diverse tasks (including generation and reasoning tasks). The experimental results demonstrate the superiority of the proposed method, which significantly outperforms baselines on automated metrics or GPT-4-pref (high correlation with human preference on a subset). Subsequent analysis shows that LLMs across different model sizes consistently benefit from Self-Refine. Compared with generic feedback, specific and constructive feedback performs better. Moreover, other analytical experiments have proposed valuable conclusions from different perspectives. Strengths: 1. The impressive experimental results show superior performance to the baseline. 2. The authors have done a wealth of analytical experiments, including the impact of the feedback quality and iterations, effectiveness across different scales of models, etc., which are very meaningful. 3. This well-written paper demonstrates Self-Refine implemented by the authors to generate better outputs with the same LLM. Weaknesses: 1. Implementing iterative few-shot prompting-based Self-Refine may require a high computational cost. However, this is a minor issue since the extent of performance improvement indicates that the cost is worthwhile. 2. Self-Refine doesn't work with weaker models, as it requires the models to have good few-shot modeling or instruction-following abilities. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The GPT series models used in this paper have good instruction-following abilities. I am interested in how this method performs with zero-shot prompting (only instructions), which would facilitate the community in making trade-offs between performance and cost. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: There is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Self-Refine works without additional training cost or human effort, that Self-Refine benefits diverse tasks including generation and reasoning, that Self-Refine can even improve state-of-the-art LLMs at test time, and that the experimental results are impressive. We think that all your questions are addressable within this discussion period. Please see our response below. We would love to address additional questions during the discussion period if anything is unclear. --- **Implementing iterative few-shot prompting-based Self-Refine may require a high computational cost. However, this is a minor issue since the extent of performance improvement indicates that the cost is worthwhile.** We agree that the extent of performance improvement indicates that the cost is worthwhile. Besides the performance improvement, we would like to highlight that: 1. The first few iterations provide most of the gains, so even 2 or 3 iterations can provide large gains. Figure 4 further analyzes this tradeoff. 2. Our approach is applied only at inference, and inference is getting cheaper over time. In addition, various approaches such as [Chain-of-Thought (Wei et al., NeurIPS’2022)](https://arxiv.org/pdf/2201.11903), [“Least-to-most” (Zhou et al., ICLR’2023)](https://openreview.net/pdf?id=WZH7099tgfM) and [Self-Consistency (ICLR’2023)](https://openreview.net/pdf?id=WZH7099tgfM) provide similar trade-offs: *unlocking emerging capabilities of LLMs, at a slightly higher inference cost*. Finally, we note that Self-Refine presents a trade-off between cost and performance. That is, even two iterations of Self-Refine offer better results compared to the initial responses from these models (see Page 6, Fig. 4). 
--- **Self-Refine doesn't work with weaker models, as it requires the models to have good few-shot modeling or instruction-following abilities.** As reported in other papers such as [Emergent Abilities (Wei et al, 2022)](https://arxiv.org/pdf/2206.07682.pdf), there are abilities that only sufficiently strong/large models exhibit. We believe that Self-Refine is a kind of emergent ability, and in these sufficiently strong models, Self-Refine can significantly further improve their outputs. Moreover, we anticipate that such strong models will be widely available soon. Please see the results on the open-access LLAMA-2 model in the global response. --- **How does Self-Refine perform with zero-shot prompting (only instructions)?** Few-shot prompting eases the parsing of responses, as it shapes the generated content into an easy-to-extract format. Following the reviewer's suggestion, we conducted instruction-only experiments where, instead of giving few-shot examples, we used instructions at each stage of Self-Refine. The results are shown below for gpt-3.5-turbo-0613, and in the global response for LLaMA-2 70B, and show that Self-Refine continues to be effective in the instruction-only setup.

| Experiment | Base | Self-Refine (zero-shot) | Equally good |
|------------------------|-------|-------------|--------------|
| Acronym Generation | 16.66% | **44.8**% | 38.5% |
| Constrained generation | 41.5% | **46**% | 12% |
| Sentiment Reversal | 4.4% | **71.4**% | 16.2% |
| Math Reasoning (GSM8k) | 22.06% | **59**% | - |
| Dialogue Response Generation | 23% | **48.8**% | 22.8% |

--- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply! --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your note, and for considering our response. We hope to have addressed your concerns and questions with the newly reported results. Please let us know if you have any more questions before the end of the discussion period.
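The generate → feedback → refine loop described in these rebuttals can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's released code: `llm` stands in for any chat-model call (few-shot or instruction-only), and the prompt wording and stop phrase are invented for the example.

```python
def self_refine(task_prompt, llm, max_iterations=3, stop_phrase="LOOKS GOOD"):
    """Iteratively refine an initial output using the model's own feedback."""
    # Initial generation.
    output = llm(f"Task: {task_prompt}\nAnswer:")
    for _ in range(max_iterations):
        # Ask the same model for feedback on its own output.
        feedback = llm(f"Task: {task_prompt}\nAnswer: {output}\n"
                       f"Give actionable feedback, or say {stop_phrase}.")
        if stop_phrase in feedback:
            break  # the first few iterations provide most of the gains
        # Refine the output conditioned on the feedback.
        output = llm(f"Task: {task_prompt}\nAnswer: {output}\n"
                     f"Feedback: {feedback}\nImproved answer:")
    return output
```

In the instruction-only setting discussed above, the prompts would contain only instructions rather than few-shot exemplars; the loop itself is unchanged.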
Summary: The paper introduces Self-Refine, an approach for improving Large language models (LLMs) through self-feedback and refinement. The method does not require further training, and uses a single LLM as the generator, refiner, and feedback provider. The authors evaluate the method on seven NLG tasks, including dialog response generation and mathematical reasoning, using multiple GPT models, i.e., GPT-3.5, ChatGPT, and GPT-4. Experimental results show that Self-Refine greatly improves model performance in terms of both human and automatic evaluation. **I have read the authors' rebuttal as well as other reviewers' comments. I have increased my score from 6 to 7.** Strengths: 1. This paper is well motivated and well written. 2. The proposed method is simple and free of training yet shows great improvement over tasks. Weaknesses: 1. The major concern is that the proposed method can be very costly, in particular when the iteration requires multiple steps. 2. The current paradigm seems to rely heavily on the great ability of GPTs, and cannot work well for other smaller models, such as Vicuna as discussed in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We were happy to read that you appreciated our main points: that Self-Refine does not require further training, that Self-Refine benefits a variety of tasks, that Self-Refine can even improve state-of-the-art LLMs at test time, and that Self-Refine greatly improves model performance. We think that all your questions are addressable within this discussion period. Please see our response below. We would love to address additional questions during the discussion period if anything is unclear. --- **The major concern is that the proposed method can be very costly, in particular when the iteration requires multiple steps.** While Self-Refine does employ multiple inference iterations, characterizing it as “*very* costly” may not fully capture its nuances. Self-Refine applies exclusively at inference, a domain where costs are progressively declining. Various prompting approaches such as [Chain-of-Thought (Wei et al., NeurIPS’2022)](https://arxiv.org/pdf/2201.11903), [“Least-to-most” (Zhou et al., ICLR’2023)](https://openreview.net/pdf?id=WZH7099tgfM) and [Self-Consistency (ICLR’2023)](https://openreview.net/pdf?id=WZH7099tgfM) provide a similar tradeoff: unlocking a model’s full potential and getting better performance, at the cost of longer outputs or querying the model multiple times. Our approach requires querying the model multiple times during inference, but we believe that in most cases, this is a reasonable price to pay for improving the outputs of the newest and largest models. Notably, Self-Refine substantially improves the quality of responses in LLMs, as noted by [Reviewer mJq7](https://openreview.net/forum?id=S37hOerQLB&noteId=vqWj3bEgMe) and shown in our Table 1. Further, the tradeoff between the number of iterations (which is proportional to cost) and performance that we analyze in Figure 4 shows that the first few iterations provide most of the gains.
So, if one has a limited compute budget, even using a small constant number of iterations provides significant benefits. --- **The current paradigm seems to highly rely on the great ability of GPTs, and cannot work well for other smaller models, such as Vicuna, as discussed in the paper.** Recent papers, such as [Emergent Abilities (Wei et al, 2022)](https://arxiv.org/pdf/2206.07682.pdf), show that there are abilities that only sufficiently strong/large models exhibit. We believe that Self-Refine is a kind of emergent ability, and in these sufficiently strong models, Self-Refine can significantly further improve their outputs. Our experiments with LLAMA2 (please see the global response) demonstrate that even publicly available models now possess these capabilities. As we anticipate the release of even stronger models to the public in the future, the potential and applicability of Self-Refine are set to expand. --- Rebuttal Comment 1.1: Comment: Thank the authors for their reply. I have increased my score from 6 to 7. --- Reply to Comment 1.1.1: Title: Thank you for increasing your score! Comment: Thank you for increasing your score! Please let us know if you have any more questions before the end of the discussion period.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their valuable feedback. We are encouraged that they find our approach well-motivated (Reviewers [artr](https://openreview.net/forum?id=S37hOerQLB&noteId=QYxVBerAcu), [YNkB](https://openreview.net/forum?id=S37hOerQLB&noteId=lFzaxuj8QC)), efficient in improving LLMs without extra training (Reviewers [artr](https://openreview.net/forum?id=S37hOerQLB&noteId=QYxVBerAcu), [mJq7](https://openreview.net/forum?id=S37hOerQLB&noteId=vqWj3bEgMe), [UJCc](https://openreview.net/forum?id=S37hOerQLB&noteId=9XPd5By9bN)), with comprehensive results surpassing baselines (Reviewers [mJq7](https://openreview.net/forum?id=S37hOerQLB&noteId=vqWj3bEgMe), [YNkB](https://openreview.net/forum?id=S37hOerQLB&noteId=lFzaxuj8QC)), and straightforward and “conceptually neat” (Reviewers [UJCc](https://openreview.net/forum?id=S37hOerQLB&noteId=9XPd5By9bN), [YNkB](https://openreview.net/forum?id=S37hOerQLB&noteId=lFzaxuj8QC)). We have addressed reviewers’ comments individually and look forward to a fruitful interaction during the author response period. --- **New Results Reported** We provide all the new results that were requested: - [Self-Refine in Zero-shot/Instruction-only mode](https://openreview.net/forum?id=S37hOerQLB&noteId=ClZMzlW7nC) (finding: self-refine works well regardless of the prompting approach). - Results on open-access model LLAMA-2 (finding: Self-Refine provides promising gains even with open-access models) - [Additional evaluation using Claude-2](https://openreview.net/forum?id=S37hOerQLB&noteId=gk0JpkcMOR) (finding: that self-refine performs well across all metrics). We also address the other concerns including cost of multiple iterations by discussing the performance cost tradeoff, ensuring blindness in the human evaluation protocol, and address the other clarifications. We will add these clarifications and details in the next version. 
--- * Following questions by reviewers `mJq7` and `artr`, we benchmarked Self-Refine on the open-access model [LLAMA-2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-model) in an **instruction-only** setting (no few-shot prompts, only instructions). The results show that Self-Refine continues to be effective on an open-access model and without any few-shot examples. Given these performance metrics, alongside anticipated advancements in hardware, we anticipate the broad and cost-effective applicability of Self-Refine. Instruction-only (zero-shot) results with LLaMA-2 as the base model:

| Task | Base | Self-Refine | Equally Good |
|---------------------|--------------|--------------|--------------|
| **Acronyms** | 22.30 | **53.08** | 22.30 |
| **Yelp** | 13.2 | **60.8** | 26 |
| **Response Generation** | 11.2 | **20.4** | 54.6 |
| **GSM-8k** | 37.6 | **37.8 (41 with Oracle)** | N/A |
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Transportability for Bandits with Data from Different Environments
Accept (poster)
Summary: The paper analyses how bandits can exploit causal similarities (need not be fully known functionally) across different environments to improve their regret bounds. In particular, previously collected data from related environments can improve learning. Strengths: Strengths 1. Use of domain discrepancy and selection diagrams to formally treat the different environments. 2. The proposed tTS conforms nicely with how a bandit can use priors and recently tried arms to inform its posteriors. Weaknesses: None Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive assessment of our work. We would be happy to provide any clarification that could help further with the evaluation. --- Rebuttal Comment 1.1: Title: Thank you Comment: I have looked at the other reviews and responses. It would be great if you could add comments addressing the weaknesses identified by the other reviews, in addition to the answers to their questions. Thanks again for many of the clarifications. --- Reply to Comment 1.1.1: Title: Further comments Comment: Thank you for engaging with us. We can certainly comment on the mentioned weaknesses; in the following, we consider each one in order. 1. "***This paper is rather notation-heavy, and it is a bit hard for readers not familiar with the language used therein.***" The notation we introduce follows the existing literature in causal inference [3, 38] for the transportability formalism and the existing literature on bandits [19, 29] for the Bayesian regret guarantees. We believe that careful distinctions between data-generating mechanisms and distributions over variables in different environments (including their parameterization) are necessary to accurately describe the transfer learning problem. In applications, we expect that there could be a range of possible settings that must be accounted for, e.g. different graphs and multiple environments, each with different sets of pairwise invariances and discrepancies. Our methods and theory aim to consistently describe all of these variations and to be generally applicable in a way that is agnostic to the specific data and assumptions provided. As a result, we do acknowledge that the notation is correspondingly heavier than that found in (causal) bandit papers that do not consider learning from multiple environments. In our view, unfortunately, the introduced notation cannot be made lighter without loss of generality. 2.
"***The proposed method introduces an additional computational burden.***" Learning from additional data sources involves inference of parameters given prior data and therefore results, for an equal number of online runs, in an increased computational cost compared to "online only" algorithms. An important motivation for the use of prior source data in the first place, however, is the expectation that it could improve the efficiency of online experimentation. In particular, we show that if source environments are sufficiently related to the target environment, many fewer online runs are typically necessary to attain a given level of performance; see e.g. Experiment 1 (Sec. 5). This could effectively lower the overall computational cost of the method, especially if online runs are considered more expensive than offline runs, which might reasonably be true in applications. In any case, the proposed Thompson sampling remains a fast algorithm that can be generally applied in small to moderately-sized environments. 3. "***This work relies on having quite detailed information from the related environments– the SCM and the selection diagrams for each environment. Having this type of information is somewhat unrealistic. Nevertheless, the contribution of the paper is conceptual/theoretical and may pave the way for future work that requires less detailed information.***" For prior data to consistently inform reward distributions in the target environment, some knowledge of commonalities and differences across environments is necessary. Selection diagrams (and their encoding of strict invariances of causal mechanisms) are one type of domain knowledge that our theory leverages and under which consistent improvements can be guaranteed. This form of domain knowledge is often available in practice, especially in the medical domain or in advertising, where practitioners often have an understanding of the underlying biology and user characteristics, respectively.
Understandably, this form of domain knowledge may be less realistic in other applications. There are two observations that could be made, however. First, some degree of mis-specification can be allowed (under which improvements could still be guaranteed), as described in Appendix B.1. Second, some relaxations of the transportability paradigm, i.e. one that specifies strict equalities or inequalities between causal mechanisms, can be naturally handled by our current framework: in particular, prior-knowledge intervals for probability values in the target environment, as exemplified in Appendix B.1. We believe these settings do cover the kind of domain knowledge that could be available in several relevant applications and for which prior data could then be used consistently to improve inference. Relaxing the graphical assumptions, e.g. by instead leveraging sets of potential causal graphs that could be learned with a causal discovery step, could be an exciting future research direction. 4. "***Furthermore, it is also somewhat unrealistic that any of the relationships between the variables in the related environment are exactly the same as the relationships in the target environment.***" This point was answered explicitly in the response to Reviewer KpW6. Let us know if we can provide further details. 5. "***Experiments seem to be conducted on toy data only. How useful is the method in practice, on real data?***" This point was answered explicitly in the response to Reviewer QDCf. Let us know if we can provide further details. --- Rebuttal 2: Title: Follow-up on exchange Comment: Dear Reviewer eikn, We appreciate you taking the time to engage with us. We were wondering whether our follow-up on the rebuttal sufficiently addressed the points you wished to have discussed in more depth. If not, we would be happy to expand on any remaining concern. Thanks again, Authors of #13757
Summary: A framework is presented so that bandits can use data from different environments, by exploiting the causal relationship between those environments. Strengths: The contribution and problem statement are clear. Weaknesses: Experiments seem to be conducted on toy data only. How useful is the method in practice, on real data? Technical Quality: 3 good Clarity: 3 good Questions for Authors: cf above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address your question below. We are happy to engage further to address any remaining concerns on the practical usefulness of our method. 1. "***Experiments seem to be conducted on toy data only. How useful is the method in practice, on real data?***" To our knowledge, it is typical in the literature to work with a synthetic setup, as bandits require active experimentation. We believe that the environments discussed in the experiments and their motivation, e.g. inspired by the literature on clinical trials and advertising, are realistic, and we therefore expect that a similar analysis could be conducted with real data. In particular, if the selection diagram is well-specified (or under mild mis-specification, Appendix B.1), we expect the use of offline data to enable the proposed approach to outperform bandit algorithms such as Thompson sampling (Sec. 3) in every application. --- Rebuttal 2: Title: Follow-up on rebuttal Comment: Dear Reviewer QDCf, We are reaching the end of the discussion period. We were hoping to understand whether our rebuttal clarified your concerns or whether we could give any additional details. We would be happy to expand on our response if needed. We appreciate your time and attention. Thanks! Authors of #13757
Summary: This paper considers the problem of leveraging data from many different environments to warm-start a bandit algorithm. The paper assumes that the environments share the same variables, and the relationships between variables in related environments can be captured by structural causal models (SCMs). Taking a Bayesian perspective, the paper defines a probability distribution over unknown quantities in the target SCM and constrains the probability distribution using the information obtained from the related environments. This probability distribution is then used to warm-start a Thompson sampling algorithm. Strengths: 1. This paper has interesting theoretical contributions. The authors provide an algorithm that leverages data from related environments and are able to prove a sub-linear regret bound; the regret bound depends on a term that captures how informative the related environments are for the target environment. 2. Compelling experimental results. 3. The paper is overall clear and well-written. Weaknesses: 1. This work relies on having quite detailed information from the related environments– the SCM and the selection diagrams for each environment. Having this type of information is somewhat unrealistic. Nevertheless, the contribution of the paper is conceptual/theoretical and may pave the way for future work that requires less detailed information. 2. Furthermore, it is also somewhat unrealistic that any of the relationships between the variables in the related environment are exactly the same as the relationships in the target environment. Would it be possible to place looser restrictions from the prior data than the equality constraints in Eq 2? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
In the motivating example in lines 42-60, it would be helpful if the authors could add a comment that this problem arises due to the fact that the clinical trial population and historical data population differ across observables (or unobservables). 2. [Nit] $S_{Z}$ is not defined in the text until Section 2 but appears in Figure 1, which is referenced in Section 1. 3. Would it be possible for the authors to contextualize their perspective on distribution shift within the broader literature on generalizability/transportability (e.g., Stuart, et. al. 2011, Tipton, et. al, 2013, Tipton et. al., 2014)? For example, can the different environments vary across unobservable attributes (confounders), or can they only differ in observable attributes? References: Stuart, Elizabeth A., et al. "The use of propensity scores to assess the generalizability of results from randomized trials." Journal of the Royal Statistical Society: Series A (Statistics in Society) 174.2 (2011): 369-386. Tipton, Elizabeth. "Improving generalizations from experiments using propensity score subclassification: Assumptions, properties, and contexts." Journal of Educational and Behavioral Statistics 38.3 (2013): 239-266. Tipton, Elizabeth. "How generalizable is your experiment? An index for comparing experimental samples and populations." Journal of Educational and Behavioral Statistics 39.6 (2014): 478-501. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive comments. In the following, we address each point separately and hope to clarify all concerns that were raised. Please let us know if any issues remain. 1. ”***It is also somewhat unrealistic that any of the relationships between the variables in the related environment are exactly the same as the relationships in the target environment. Would it be possible to place looser restrictions from the prior data than the equality constraints in Eq 2?***” Relaxations to the formalism in Eq. (2) are possible. One relevant example are settings in which instead it is plausible to assume that some of the causal mechanisms or probabilities are known to lie in some non-trivial interval. As each probability relates directly to some combination of model parameters, this constraint could be incorporated in posterior approximations. We discuss such a scenario in more details on lines 558-570 of the Appendix. There, specifically, instead of an equality across causal mechanisms that implies that $P^*(z) = P^a(z)$ we pose a looser restriction, e.g., $P^*(z) \in I = [P^a(z) - 0.1, P^a(z) + 0.1]$. That is, $P^*(z) = \sum_{u_z}\mathbf 1(\xi_Z^{(u_z)}=z)\theta_{u_z} \in I$, where $\mathbf 1(\cdot)$ denotes the indicator function, which defines a constraint on possible parameter values and is implemented with a rejection step while sampling from the posterior. 2. ”***In the motivating example in lines 42-60, it would be helpful if the authors could add a comment that this problem arises due to the fact that the clinical trial population and historical data population differ across observables (or unobservables).***” This observation is stated in the sentence starting line 50 and will be emphasized. 3. ”***Would it be possible for the authors to contextualize their perspective on distribution shift within the broader literature on generalizability/transportability (e.g., Stuart, et. al. 2011, Tipton, et. 
al, 2013, Tipton et. al., 2014)? For example, can the different environments vary across unobservable attributes (confounders), or can they only differ in observable attributes?***” Thank you for sharing these references, which we read with interest. Based on our reading, under two assumptions on the dependence of potential outcomes on treatment and domain indicators, propensity scores can be used to correct for distribution shift between target and source populations. Within the perspective of transportability (that uses selection diagrams to encode assumptions on differences and similarities between populations) this could be seen as a special case for which selection diagrams imply the independence assumptions. In general, selection diagrams may not imply this set of independencies and different weights may be applicable to correctly adjust for distribution shift. Further, we consider a generalization of this setting, so called partial transportability. In this generalization, the correct adjustment for distribution shift might not be uniquely identifiable and the goal, instead, is to infer a non-trivial interval for outcome distributions under intervention that could nevertheless provide some information to improve inference (Sec. 2.1). Selection diagrams may be used to encode differences in observable and unobservable attributes. --- Rebuttal 2: Title: Follow-up on rebuttal Comment: Dear Reviewer KpW6, With the discussion period coming to its end, we were wondering whether you had a chance to check our rebuttal. We hope to have answered all concerns to your satisfaction. If not, please don't hesitate to get in touch if there is any concern we could still help to clarify. Thank you again for your time and attention. Authors of #13757 --- Rebuttal Comment 2.1: Comment: Thank you for your response! I am still reviewing this paper and the rebuttal and will provide a full response in the next day.
Summary: This paper considers the online bandit problem with additional batch/observational data, where the additional data could be from different (but related) environments. The authors present a representation of the interventional distribution, based on which one can sample from the posterior distribution of the SCMs and select an action based on the realized model. It is shown that the resulting regret is sublinear, and the improvement upon an algorithm without using prior data is explicitly dependent on the "relevance" of the other environments. Strengths: 1. This paper considers an interesting problem: how to leverage observational data to improve the online bandit algorithm. 2. The proposed method makes use of prior data in a clean way, and the theoretical result shows an explicit dependence on the relevance of the environments from which the prior datasets are generated, while the algorithm itself is agnostic to this knowledge. Weaknesses: 1. This paper is rather notation-heavy, and it is a bit hard for readers not familiar with the language used therein. 2. The proposed method introduces an additional computational burden. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I was wondering how the computation time scales with the cardinality of different variables. Appendix B.2 has briefly mentioned this --- I am curious if it would be possible to have an analytical result. 2. It might also be helpful to compare the computational time of different methods in the simulations (in addition to the ones presented in Appendix B.2). Minor: Is there a typo in Algorithm 1? "$\mathcal{G} ** a$" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. We hope to have clarified your concerns in the following response, please let us know if you would like us to expand our discussion on any of it. Thanks for pointing out a typo in Algorithm 1! 1. ”***I was wondering how the computation time scales with the cardinality of different variables. Appendix B.2 has briefly mentioned this --- I am curious if it would be possible to have an analytical result.***” In a given iteration of the Gibbs sampler, posterior updates are done for each parameter separately so that computational time is proportional to the parameter count, approximately, which in turn is determined by the cardinality of variables as well as the structure of the graph. For a fixed graph, assuming that each update requires a small constant amount of time to compute, we could therefore establish analytically how computational time scales with the cardinality of variables. For arbitrary graphs an analytical result is in general more involved as the parameter count increases differently depending on the local structure of each variable. As an example for illustration, consider the graph $\mathcal G = (X\rightarrow Z \rightarrow Y, X\leftrightarrow Y)$ where $X$ is an action variable, $Z$ is a contextual variable, and $Y$ is a reward variable. Following the parameterization in Cor. 1, the cardinality of parameters is defined as follows: $|\boldsymbol{\theta}_u|= |\Omega_X|\cdot|\Omega_Z|\cdot|\Omega_Y|$, $|\boldsymbol{\xi}_X| = |\Omega_X|\cdot|\Omega_Z|\cdot|\Omega_Y|$, $|\boldsymbol{\xi}_Z| = |\Omega_X|$, $|\boldsymbol{\xi}_Y| = |\Omega_X|\cdot|\Omega_Z|\cdot|\Omega_Y|\cdot|\Omega_Z|$. We would expect run time to increase linearly with the cardinality of variables $X,Y$ and to increase “slower than quadratically” with the cardinality of $Z$. We will update the manuscript with a discussion. 2. 
"***It might also be helpful to compare the computational time of different methods in the simulations (in addition to the ones presented in Appendix B.2).***" We appreciate the suggestion. Over a single run of $10,000$ experimentation rounds, empirically, the run times for experiment 2 are: TS$=1.2$ seconds, UCB$= 1.7$ seconds, Random$= 0.1$ seconds, tTS$= 5.4$ seconds; and for experiment 3: TS$=1.3$ seconds, UCB$= 1.8$ seconds, Random$= 0.1$ seconds, tTS$= 5.8$ seconds. Run times of tTS include the prior step of 1,000 rounds of sampling from the posterior distribution of parameters given 1,000 samples of prior data. We will add these analyses in the updated document. --- Rebuttal 2: Title: Follow-up on rebuttal Comment: Dear Reviewer DogX, The discussion period is reaching its end. We hope you have had the chance to check our rebuttal and wonder whether it has answered your questions. If not, we would be happy to expand on any remaining concerns. We appreciate your time and attention. Thank you! Authors of #13757 --- Rebuttal Comment 2.1: Title: response to the authors Comment: I would like to thank the authors for the clarification and additional results. My concerns are addressed and I would like to maintain my score.
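The parameter cardinalities quoted in this rebuttal for the example graph $\mathcal G = (X\rightarrow Z \rightarrow Y, X\leftrightarrow Y)$ can be reproduced with a small helper that illustrates the claimed scaling (linear in $|\Omega_X|$, slower than quadratic in $|\Omega_Z|$). This is a toy reproduction of the stated formulas, not the authors' code.

```python
def parameter_count(card_x, card_z, card_y):
    """Total parameter count for the example graph, per the formulas in the rebuttal."""
    theta_u = card_x * card_z * card_y        # |theta_u| = |Omega_X|*|Omega_Z|*|Omega_Y|
    xi_x = card_x * card_z * card_y           # |xi_X|    = |Omega_X|*|Omega_Z|*|Omega_Y|
    xi_z = card_x                             # |xi_Z|    = |Omega_X|
    xi_y = card_x * card_z * card_y * card_z  # |xi_Y|    = |Omega_X|*|Omega_Z|*|Omega_Y|*|Omega_Z|
    return theta_u + xi_x + xi_z + xi_y
```

Since Gibbs-sampler updates are done per parameter, run time is roughly proportional to this count: doubling the cardinality of X doubles it, while doubling the cardinality of Z grows it by less than a factor of four.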
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
ClusterFomer: Clustering As A Universal Visual Learner
Accept (poster)
Summary: This paper proposes recurrent cross-attention clustering (RCA), which groups patch features via the traditional EM algorithm with soft assignment, and feature dispatching, which aggregates spatial information using the cluster centers obtained from RCA. A novel backbone model using RCA and feature dispatching, called ClusterFormer, is proposed. ClusterFormer shows better accuracy than baseline architectures on several datasets and tasks, including image classification, object detection, and segmentation. Strengths: - The proposed method is evaluated on major tasks in the computer vision field. - The accuracy improvement is significant. - It is an interesting idea to aggregate spatial information based on the proposed feature dispatching. Weaknesses: 1. The technical novelty of the clustering module is limited. Hierarchical clustering with neural networks was proposed in [1] and [2], and [2] also claims explainability. The proposed recurrent cross-attention clustering is almost the same as the superpixel sampling network [3]. The paper needs to discuss the relationship to these methods. 2. The experiments are conducted only with relatively small backbones. 3. The authors claim the explainability of the recurrent cross-attention clustering. Still, I do not see how useful it is, because I think it does not contribute to the model's explainability, i.e., we cannot know how the model arrives at its classification results. 4. I think the paper does not provide sufficient information to implement ClusterFormer and reproduce the results. At least, the detailed architectures of ClusterFormer-tiny and -small, and the overall pipeline for detection and segmentation, should be described. [1] J. Xu et al. “GroupViT: Semantic Segmentation Emerges from Text Supervision.”, 2022 [2] T. Suzuki “Clustering as Attention: Unified Image Segmentation with Hierarchical Clustering.”, 2022 [3] V. Jampani et al. 
“Superpixel Sampling Networks.”, 2018 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Since the proposed method recurrently computes the cross-attention, I think the FLOPs and latency will be higher than those of the baseline methods. I would like to know the FLOPs and latency of the proposed method. Also, how large a memory budget does ClusterFormer require? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The proposed method would require additional computational cost, although I do not know for sure because there is no analysis of the FLOPs and the memory budget. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. The technical novelty of the clustering module** **A1:** Thank you for your suggestion; we will discuss the relationship between these methods. Our model improves the cross-attention mechanism from an Expectation-Maximization clustering perspective to unify the encoding process. This modification provides a unique approach to the task and contributes to our model's performance. To compare with the references mentioned: [ref1] jointly trains the model and a text encoder in a paired image-text training fashion and then transfers the trained model to the task of zero-shot semantic segmentation. Our method, however, does not rely on text encoders or paired image-text training and instead focuses directly on the vision task. In [ref2], a DCN-based kernel is used to generate the attention map for clustering. While our model also uses an attention mechanism, it does so in a fundamentally different manner by modifying the cross-attention mechanism from an EM clustering perspective. Finally, [ref3] employs the superpixel sampling network, which follows the SLIC scheme. This is conceptually different from our method, where the recurrent cross-attention mechanism is used for cluster formation. We will discuss the relationship between these methods in a dedicated section of our revised manuscript, highlighting both the shared insights and the distinct differences. Thank you again for the insightful suggestion. [ref1] J. Xu et al. GroupViT: Semantic Segmentation Emerges from Text Supervision, 2022 [ref2] T. Suzuki Clustering as Attention: Unified Image Segmentation with Hierarchical Clustering, 2022 [ref3] V. Jampani et al. Superpixel Sampling Networks, 2018 #### **Q2. Larger backbones** **A2:** Thank you for your suggestion. Due to limited computational resources, we provide a Base-sized model for image classification as follows. 
Based on our experience, it is generally true that increasing the size of the model -- by adding more parameters -- can lead to better performance. We conducted additional experiments with a Base-sized backbone and will incorporate the results below in the revision. | Method | Parameters | FLOPs | top-1 accuracy | top-5 accuracy | | :-: | :-: | :-: | :-: | :-: | | ResNet-152 | 60.19M | 11.58G | 78.61 | 94.15 | | Swin-Base | 87.77M | 15.19G | 83.36 | 96.44 | | ClusterFormer-Base | 81.95M | 14.27G | 83.62 | 97.36 | #### **Q3. Explainability** **A3:** Sorry for the confusion regarding explainability. Our claim of explainability hinges on the unique role that cluster centers play in our recurrent cross-attention mechanism. The cluster centers, derived through our clustering process, act as 'prototypes' for the features they cluster. These 'prototypes' serve as representative samples for each cluster, reflecting the most salient or characteristic features of the data points within that cluster. The benefit of this is twofold. Firstly, it provides an avenue for interpreting the clustering mechanism itself, offering a glimpse into how the data is being partitioned and which features are considered most pertinent within each cluster. Secondly, and perhaps more significantly, these prototypes can be associated with the final classification results, providing some degree of interpretability. Specifically, by examining which cluster a particular instance is associated with, and looking at the prototype of that cluster, we can gain some insight into what features the model deemed most relevant when classifying that instance. Again, we appreciate your insight and will aim to make the level and limitations of our model's explainability more explicit in our revised version. #### **Q4. Implementation** **A4:** We appreciate your emphasis on implementation details. 
In response to your comment, we would like to point out that in Section 3.3 of our paper, we have endeavored to provide comprehensive information on implementation details and adaptation to different tasks. Moreover, to enhance reproducibility, we have provided both pseudo-code and the actual code via an anonymous link in our supplemental material. Concerning the specific architectures of ClusterFormer-tiny and -small, these are variations of our model following the same configuration (e.g., heads, embedding dimension, windows, and layers) as the Swin Transformer. We appreciate your constructive feedback and will enhance the description of the detailed architectures. --- Rebuttal Comment 1.1: Title: Additional Questions Comment: I read the authors' rebuttal, and I still have some questions. **1. Difference between the SLIC scheme and the recurrent cross-attention mechanism** I'm not sure about the difference between the SLIC scheme and the recurrent cross-attention mechanism, because both are EM-based clustering. In my understanding, the difference is only the design of the similarity function, and I think ClusterFormer is an architecture incorporating SSN into every downsampling layer (and I think that is an important contribution). Could you clarify the conceptual difference that the authors mentioned? **2. Computational costs** I wonder why the FLOPs, GPU memory, and latency of ClusterFormer are smaller than those of Swin. I thought they would be larger due to the recurrent computation of the clustering. If so, I think ClusterFormer with larger backbones could be trained with the sixteen A100 GPUs which the authors used in the experiments. **3. Explainability** I am not sure what insight Figure 3 provides. Using Fig. 3 as an example, could you explain what findings we can make? And I recommend that the authors provide other examples to clarify how useful ClusterFormer is in terms of explainability. 
--- Reply to Comment 1.1.1: Title: Response to Reviewer QRN3 Comment: Thank you for the follow-up questions! We answer them as follows: #### **Q1: Difference between the SLIC scheme and the recurrent cross-attention mechanism** **A1:** Thank you for the insightful question. We try to provide some clarification (from our perspective). ClusterFormer employs the recurrent cross-attention mechanism for clustering and center updates, while SLIC determines centers by relying on distance similarity. The cross-attention methodology offers a more dynamic consideration of interactions and relationships among all features in comparison to a distance-based formulation. We are genuinely appreciative of the insightful perspective provided, and we concur with the reviewer's observation. Indeed, if we regard cross-attention as a form of similarity function (based on attention scores), then ClusterFormer can be viewed as an architecture that integrates SSN into each downsampling layer. #### **Q2: Computational costs** **A2:** The FLOPs of the cross-attention mechanism within a single iteration are significantly lower than in the Swin Transformer. However, with more recursions the FLOPs increase, reaching an on-par computation cost at three iterations. To further illustrate the computational costs, we report complete results under different numbers of iterations (from 1 to 4) below. | Number of Iterations | Parameters | FLOPs | top-1 accuracy | top-5 accuracy | | :-: | :-: | :-: | :-: | :-: | | 1 | 27.85M | 2.50G | 81.06 | 96.23 | | 2 | 27.85M | 3.15G | 81.22 | 96.29 | | 3 | 27.85M | 3.89G | 81.31 | 96.32 | | 4 | 27.85M | 4.41G | 81.33 | 96.33 | Regarding training ClusterFormer with larger backbones (e.g., ClusterFormer_large) using sixteen A100 GPUs, we genuinely appreciate your input. 
While this suggestion holds considerable value, due to our constrained computing resources, committing sixteen GPUs for an extensive period (over one month) for training larger backbones and fine-tuning on large datasets presents a substantial computational expense. We intend to explore more along the efficiency direction in the future to overcome this limitation. #### **Q3: Explainability** **A3:** Sorry for the confusion. Our explainability approach emphasizes ad hoc analysis. Generally speaking, dense feature vectors after a self-attention operation are highly entangled, and these vectors lack clear interpretation. In contrast, our method, viewed from a clustering perspective, directly provides features from their corresponding cluster centers, aiming to enhance systemic transparency spontaneously. This signifies our intent to provide immediate and intuitive comprehension of how the model processes classification and clusters information. In the context of Figure 3, this is evident through the cluster-anchored results. For instance, in the first image, the red cluster highlights the head of the dog, the green cluster indicates the body, and the yellow cluster pinpoints the dog's paws/legs. Such learned cluster centers allow for direct and comprehensible insight into how the model interprets and categorizes different semantics of the image. To further illustrate the explainability via visualization, we present the attention maps for several images in the classification tasks as suggested. These attention maps elucidate the correlation between the learned feature clusters and the image labels. We have shared the attention map results with the AC, as we're mindful of the constraints against uploading PDFs or providing external links during this rebuttal phase. Please don't hesitate to contact the AC for access to the results. Thank you again for the feedback, and we are glad to have further discussions.
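On our reading of this thread, one recursion of cross-attention clustering amounts to an E-step (soft assignment of features to centers via attention scores) followed by an M-step (centers updated as assignment-weighted means). A minimal NumPy sketch under that assumption -- the function name is ours, and projections, FFN, and feature dispatching are omitted, so this is an illustration rather than the authors' implementation:

```python
import numpy as np


def cross_attention_clustering(features, centers, n_iters=3):
    """features: (N, D) patch features; centers: (K, D) initial centers
    (e.g. Forgy initialization: K randomly chosen features).
    E-step: softmax over the K centers gives each feature a soft assignment;
    M-step: each center becomes the assignment-weighted mean of the features."""
    d = features.shape[1]
    for _ in range(n_iters):
        # E-step: scaled dot-product attention scores, shape (K, N)
        logits = centers @ features.T / np.sqrt(d)
        logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
        assign = np.exp(logits)
        assign = assign / assign.sum(axis=0, keepdims=True)  # softmax over centers
        # M-step: normalized weighted mean of assigned features, shape (K, D)
        centers = (assign @ features) / assign.sum(axis=1, keepdims=True)
    return centers, assign
```

On toy data with two well-separated groups, the returned centers settle at the group means -- the 'prototype' behavior that the explainability argument above appeals to.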
Summary: This paper proposes a vision model based on the clustering paradigm with Transformer, named ClusterFormer. It contains two modules, `recurrent cross-attention clustering` and `feature dispatching`. This paper explains the `cross attention` mechanism from the perspective of the `E-M` process. It cleverly combines clustering and attention mechanisms, making feature fusion more accurate. The `feature dispatching` module updates the patch embeddings. The extensive experiments on classification, object detection, and image segmentation demonstrate that ClusterFormer has superior accuracy. Strengths: 1) Originality. This paper explains the `cross attention` mechanism from the perspective of the `E-M` process, which is novel. It cleverly combines clustering and attention mechanisms, making feature fusion more accurate. 2) Quality. The vision model is carefully designed, the experiments show promising performance, and the ablation studies present interesting results. It presents promising results on varying levels of clustering granularity (i.e., image-, box-, and pixel-level). 3) Clarity. The paper is well-written and well-organized. The citations in the paper are comprehensive. 4) Significance. The paper may have high significance. It proposes a new backbone paradigm integrating clustering. It alleviates the problem of unrelated patches being associated with each other in the global attention form of the ViT series models, and it may become a universal visual learner. Weaknesses: Because of the recurrent step, the model may have high FLOPs and slow inference speed under the same parameter size. This paper does not report the inference speed, nor does it compare accuracy under the same inference-speed conditions. This may result in unfair experimental comparisons. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The reviewer wants to see the speed of this model on different tasks, e.g., using throughput (images/s). 2. 
The reviewer wants to know whether further optimization can be made in the initialization of the cluster centers. 3. What is the value of `k`? Is `k` related to the specific task? If `k` is different in different layers, would the results be better? 4. Can a larger model achieve better results at the same inference time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The speed of this model may be slow. Reviewers may question the practicality of the proposed model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Computation cost** **A1:** The computation cost and inference speed are reported as follows. | Method | Parameters | FLOPs | inference latency | GPU memory | top-1 accuracy| | :-: | :-: | :-: | :-: | :-: | :-: | | DeiT-Tiny | 5.72 M | 1.26 G | 0.35 ms | 1884 MB | 74.50 | | ResNet-50 | 25.56 M | 4.12 G | 0.96 ms | 7658MB | 76.55 | | Swin-Tiny | 28.29 M | 4.36 G | 1.35 ms | 7990 MB | 81.18 | | ClusterFormer-Tiny | 27.85 M | 4.19 G | 1.31 ms | 7786 MB | 81.31 | | Method | Parameters | FLOPs | inference latency | GPU memory | top-1 accuracy| | :-: | :-: | :-: | :-: | :-: | :-: | | DeiT-Small | 22.05 M | 4.24 G | 1.04 ms | 5251MB | 80.69 | | ResNet-101 | 44.55 M | 7.85 G | 1.68 ms | 9682MB | 77.97| | Swin-Small | 49.61 M | 8.52 G | 2.41 ms | 13976 MB | 83.02 | | ClusterFormer-Small | 48.71 M | 8.24 G | 2.24 ms | 13215 MB | 83.41 | #### **Q2. Initialization of the cluster center** **A2:** Thank you for your insightful question regarding the potential for further optimization in the initialization of the cluster centers. In our study, we employ the Forgy method for initialization, which involves randomly choosing K data samples as the initial centers. This approach has been selected because of its simplicity and the practicality it provides, allowing us to handle large datasets and complex structures efficiently. While it is random and thus prone to variability, it does, in many cases, provide a reasonable starting point for our EM clustering. Nonetheless, we recognize that this may not always guarantee the optimal solution due to the stochastic nature of the algorithm. As such, we are considering investigating the use of more advanced methods for the initialization of the cluster centers in our future work. #### **Q3. Value of K** **A3:** Sorry for the confusion. In our study, we set the value of k to 100 in the context of image classification. The selection of k is indeed related to the specific task. 
In a broader sense, the choice of k in our model is a balance between model complexity, performance, and computational efficiency. While it is generally true that increasing the value of k can potentially improve model performance by allowing it to capture more intricate patterns within the data, this comes with a trade-off. A larger k leads to a higher number of parameters, which in turn increases the computational demand and potentially the risk of overfitting. Therefore, the selection of k should be guided by the specific requirements and constraints of the task. Regarding different k in different layers: we utilize a progressive pipeline that passes the centers directly between layers, so performance may differ if we change the value of k across layers. #### **Q4. Better performance with the same inference speed** **A4:** This is a great question. As shown in the table above, our model achieves on-par inference speed compared with the Swin Transformer while achieving better performance. A larger model might have higher inference latency. --- Rebuttal Comment 1.1: Title: Additional Questions Comment: Thanks for your response. My confusion was partially resolved after seeing the table. I appreciate the idea of clustering with EM-like optimization, and that's why I gave a high score. However, 1) I still think the initialization of k needs to be studied. 2) I have the same question as Reviewer bUzy14: why does ClusterFormer have a similar number of parameters and FLOPs compared to the Swin Transformer? ClusterFormer uses recursion in the network, which may lower the number of parameters and increase FLOPs. The answer to Reviewer bUzy14 is still confusing to me. Please explain it in more detail. --- Reply to Comment 1.1.1: Title: Response to the additional questions Comment: Thank you for the additional questions! We answer them as follows: #### **Q1: Study of k** **A1:** Thank you for your valuable suggestion. 
We fully agree with the reviewer on the importance of exploring the impact of different values of k. In fact, we have conducted experiments in this regard, and selected 'k = 100' as it achieved a favorable balance between performance and efficiency. Below, we provide the experimental results for different 'k' values. | Value of K | Parameters | FLOPs | inference latency | GPU memory | top-1 accuracy| | :-: | :-: | :-: | :-: | :-: | :-: | | K = 144 | 30.46 M | 5.60 G | 1.65 ms | 8.51 G | 81.33 | | K = 100 | 27.85 M | 4.19 G | 1.31 ms | 7.79 G | 81.31 | | K = 49 | 23.13 M | 2.47 G | 0.87 ms | 7.17 G | 80.93 | | K = 25 | 20.25 M | 1.35 G | 0.52 ms | 6.79 G | 79.59 | These results illustrate that while a larger 'k' typically yields improved performance, it also comes with a higher computational burden. When 'k' increases to 144, the model's performance plateaus, but at the expense of significantly increased computational resources. This key observation is the primary rationale behind our choice of 'k = 100' for presenting our main results. In addition, we conduct an additional experiment of choosing different k in different layers as suggested, i.e., (100, 64, 36, 25) and (25, 36, 64, 100), and compare with (100, 100, 100, 100). The results are shown below. | K in different layers | top-1 accuracy| | :-: | :-: | | (100, 100, 100, 100) | 81.31 | | (100, 64, 36, 25) | 80.52 | | (25, 36, 64, 100) | 80.26 | The results suggest that maintaining the same 'k' value across different layers yields the highest level of performance. As we explained earlier, this occurs because our model efficiently transfers the centers directly between consecutive layers. Consequently, employing different numbers of centers in different layers could potentially introduce additional optimization challenges, such as the need to learn an optimal projection network to transition from the 100 centers to the desired 64 centers. 
We will incorporate these supplementary results and discussion in the revised version. Once again, we sincerely appreciate your constructive suggestion! #### **Q2: Parameters and FLOPs** **A2:** Thank you for your question. We try to provide more comprehensive details here. First, our ClusterFormer configuration closely aligns with that of the Swin Transformer, such as having an identical number of blocks in each stage and a similar network size, dimension, and depth. Consequently, the difference in the number of parameters between ClusterFormer and the Swin Transformer is relatively small. Second, a notable divergence emerges when we consider FLOPs, due to the cross-attention clustering mechanism, which differs from the self-attention mechanism in the Swin Transformer. Thus, the training FLOPs of ClusterFormer **in a single iteration** are significantly smaller compared with the Swin Transformer. The recursion does not increase the number of parameters but introduces extra FLOPs, since the parameters are updated in each iteration. As a result, the cumulative FLOP count for ClusterFormer increases and eventually reaches a level close to that of the Swin Transformer within three iterations or recursions. We hope this explanation clarifies your question. Thank you for your valuable feedback!
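The FLOPs argument in A2 can be illustrated with a back-of-the-envelope count. This is our rough accounting of the attention matmuls only, not the paper's FLOP counter -- FFN and projection costs (which dominate the gap closed after a few iterations) are deliberately ignored:

```python
def self_attention_flops(n_tokens, dim):
    """~2*N*N*D multiply-adds each for the score matmul (QK^T)
    and the value matmul (AV) in full self-attention."""
    return 2 * (2 * n_tokens * n_tokens * dim)


def cross_attention_flops(n_tokens, n_centers, dim, n_iters=1):
    """Cross-attention against K << N centers costs ~2*N*K*D per matmul;
    recursion repeats the cost each iteration without adding parameters."""
    return n_iters * 2 * (2 * n_tokens * n_centers * dim)
```

With, e.g., N = 3136 tokens (a 56x56 feature map), K = 100 centers, and D = 96, a single cross-attention iteration is far cheaper than self-attention, and recursion scales the attention cost linearly in the iteration count; the parity at roughly three iterations reported above arises once the rest of the network's cost is included.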
Summary: The paper proposes a ClusterFormer approach for visual recognition. ClusterFormer has a recurrent cross-attention clustering stage, which aggregates patch-level image features via cross-attention to form so-called "cluster centers" that contain global context information, and a feature dispatching stage, which adds global context information from the cluster centers to the local patch-level features. Results show that the proposed approach obtains better results than previous approaches on multiple visual recognition tasks, including image classification, object detection, instance segmentation, semantic segmentation, and panoptic segmentation. Strengths: + The proposed approach is interesting. + Most parts of this paper are easy to understand. + Promising results are obtained by the proposed approach. Weaknesses: - Over-claimed. The paper claims a "universal vision model". However, adaptations are needed to make the proposed approach work on different tasks, and different fine-tuned models are needed for different tasks. This is not new and exciting for a visual backbone - it's common in visual backbone work that, after some adaptation, an ImageNet pre-trained model can be adapted to different tasks using different fine-tuned models, from CNN-based models (e.g., ResNet) to transformer-based models (e.g., Swin). Therefore, calling the proposed model a "universal vision model" is over-claiming. The author(s) should remove this claim in the next version of the paper. - Qualitative results of cluster centers. It would be better to show some qualitative results of cluster centers to show that the learnt cluster centers are really "cluster centers" rather than just some feature vectors with global context information. - Others. a) Definitions of abbreviations should be given, e.g., RCA in Eq. (4). b) The term "recurrent" is not accurate. It's more like a recursive process than a recurrent one. c) The recursive process may take a lot of time and computation. 
The author(s) should provide FLOPs, inference latency, and training/testing GPU memory comparisons with other approaches as well, instead of just the number of parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the FLOPs, inference latency, and training/testing GPU memory costs of the proposed approach compared to other approaches? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations in their supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. claims a "universal vision model"** **A1:** We appreciate your invaluable feedback. As you correctly point out, each task requires specific adaptations (a small head) and fine-tuning to maximize performance, which is a common practice in the field. Our intent behind the terminology was to highlight the unique aspect of our framework. Our proposed approach utilizes a straightforward clustering paradigm and this paradigm allows the model to simultaneously tackle heterogeneous tasks, which demonstrates a generic learning capacity --- we, therefore, called it a "universal visual learner". We agree with your comments that using this term might lead to misunderstandings and may appear as an overclaim. We will revise our terminology to more precisely reflect the model's capability and will stress the necessity of task-specific adaptations in further versions. Again, we greatly value your feedback as it helps improve the clarity and accuracy of our work. #### **Q2. Qualitative results of cluster centers** **A2:** We appreciate your careful review and the suggestion to provide qualitative results of cluster centers. In response to your feedback, we would like to draw your attention to Figure 3 in our paper. This figure provides a visualization of the center-feature assignment at the final stage of our recurrent cross-attention clustering process. Each color in the map corresponds to a distinct cluster, representing a grouping of features with similar representations. These visualizations are meant to demonstrate that our cluster centers are representative and serve as meaningful points of aggregation for related feature vectors. The cluster centers act as representative vectors around which similar features coalesce, which is a defining characteristic of "cluster centers" in many clustering algorithms. #### **Q3. Definition of abbreviations** **A3:** Sorry for the confusion. RCA stands for the recurrent cross-attention layer. 
In light of this, we will revise our paper to include more definitions of abbreviations. #### **Q4. 'Recurrent' is not accurate** **A4:** Thank you for your insightful feedback. We will use "recursive process" in the revised version. #### **Q5. Computation cost** **A5:** The computation cost and inference speed are reported as follows. | Method | Parameters | FLOPs | inference latency | GPU memory | top-1 accuracy| | :-: | :-: | :-: | :-: | :-: | :-: | | DeiT-Tiny | 5.72 M | 1.26 G | 0.35 ms | 1884 MB | 74.50 | | ResNet-50 | 25.56 M | 4.12 G | 0.96 ms | 7658 MB | 76.55 | | Swin-Tiny | 28.29 M | 4.36 G | 1.35 ms | 7990 MB | 81.18 | | ClusterFormer-Tiny | 27.85 M | 4.19 G | 1.31 ms | 7786 MB | 81.31 | | Method | Parameters | FLOPs | inference latency | GPU memory | top-1 accuracy| | :-: | :-: | :-: | :-: | :-: | :-: | | DeiT-Small | 22.05 M | 4.24 G | 1.04 ms | 5251 MB | 80.69 | | ResNet-101 | 44.55 M | 7.85 G | 1.68 ms | 9682 MB | 77.97 | | Swin-Small | 49.61 M | 8.52 G | 2.41 ms | 13976 MB | 83.02 | | ClusterFormer-Small | 48.71 M | 8.24 G | 2.24 ms | 13215 MB | 83.41 |
Summary: In this paper, ClusterFormer, a network for various vision tasks, is proposed. The proposed algorithm consists of two major components: recurrent cross-attention clustering and feature dispatching. The recurrent cross-attention clustering module groups similar patches by combining the EM algorithm with the attention technique. The feature dispatching module then updates the features based on the clustering results. These two modules are alternately employed. To evaluate the performance of the proposed algorithm, results on various vision tasks, including image classification, object detection, and several segmentation tasks, are provided. In most tests, the proposed algorithm outperforms the conventional techniques. Strengths: 1. The proposed algorithm is simple but technically sound. The design of each core module may not be very new to the vision community. However, I think it is still meaningful when considering the experimental results. 2. It seems reproducible, since the authors provide the demo code as well. 3. Extensive experimental results are provided. The proposed algorithm achieves the best scores in most tests. Also, detailed ablation studies are included. Weaknesses: 1. In Section 4, a more detailed discussion of the experimental results is needed. In 4.1.1-4.1.5, only the performance gains, which readers can easily recognize, are described repeatedly. It would be useful to discuss the major differences between the proposed algorithm and the conventional algorithms that cause the performance gaps. 2. The paper omits the explanation of some parts. For example, in Eq. (5) on page 4, there is no definition of FFN; it is only described later, at L288 on page 7. Even though FFN is widely used these days, it would be better to explain it briefly at least. Similarly, there is no explanation of adaptive sampling or adaptive pooling in L158. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I think that the proposed algorithm has more strengths than weaknesses overall, even though each piece of the proposed algorithm may seem not very new to the vision community. This is mainly because the proposed algorithm achieves improved scores in various tests. Also, it is because the proposed algorithm is simple in some ways, but sound to me. However, I'm still willing to see other reviewers' comments and the author responses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations and broader impacts in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Detailed discussion** **A1:** We thank the reviewer for the feedback. The performance gains we observed are predominantly due to our core design of the recurrent cross-attention and feature dispatching mechanisms. This strategy enables an implicit generation of semantically rich cluster centers and subsequently distributes this information individually to each output token. Specifically, the cross-attention clustering mechanism allows our model to identify cluster centers and then leverage feature dispatching to update the corresponding feature representations. This process inherently elevates our model's performance compared to traditional methodologies, as observed in our experimental results. To better address your concern, we will ensure that these key differences and their impact on the performance gaps are discussed in detail in our revised manuscript. Again, we greatly appreciate your constructive feedback. #### **Q2. Explanation of FFN and adaptive pooling** **A2:** To clarify, FFN stands for position-wise feed-forward network, which is an integral part of the Transformer architecture. It comprises two fully connected layers with an activation function in the hidden layer. In addition, adaptive pooling is a type of pooling that adjusts to the size of the input feature map. This differs from max pooling and average pooling, which require a fixed size of the pooling window. Adaptive pooling calculates an appropriate window size to achieve a desired output size, offering more flexibility and precision compared to traditional pooling methods. We will ensure that proper definitions of these terminologies are provided in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. As other reviewers pointed out, the novelty of the proposed algorithm may be limited depending on the point of view. 
However, I still think that this paper is interesting to the community, because the proposed algorithm achieves high scores on various major benchmarks -- even though it consists of familiar modules and concepts. Also, the authors provide demo code, which will be helpful for reproducibility. Therefore, I decide to keep my original rating. --- Reply to Comment 1.1.1: Title: Thanks for your valuable feedback and support Comment: Dear Reviewer, We deeply appreciate your insightful feedback and your acknowledgment of the contribution of our paper. We will make sure to incorporate all the suggestions in the revised manuscript. Your support and recognition are valuable to us. Thank you once again. Best, Authors
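For reference, the two definitions given in A2 above (position-wise FFN and adaptive pooling) can be sketched in a few lines of NumPy. The function names and shapes below are our own illustration, not the paper's code:

```python
import numpy as np

def ffn(x, w1, b1, w2, b2):
    # Position-wise feed-forward network: two fully connected layers with
    # an activation (here ReLU) in the hidden layer, applied per token.
    hidden = np.maximum(x @ w1 + b1, 0.0)
    return hidden @ w2 + b2

def adaptive_avg_pool_1d(x, out_size):
    # Adaptive average pooling: window boundaries are derived from the
    # desired output size, so any input length maps to exactly `out_size` bins.
    n = x.shape[-1]
    idx = np.arange(out_size)
    starts = (idx * n) // out_size                      # floor(i * n / out)
    ends = ((idx + 1) * n + out_size - 1) // out_size   # ceil((i+1) * n / out)
    return np.stack([x[..., s:e].mean(axis=-1) for s, e in zip(starts, ends)], axis=-1)

tokens = np.ones((5, 8))                                # 5 tokens, dim 8
out = ffn(tokens, np.ones((8, 16)), 0.0, np.ones((16, 8)), 0.0)
pooled = adaptive_avg_pool_1d(np.arange(8.0), 4)        # -> [0.5, 2.5, 4.5, 6.5]
```

Note how `adaptive_avg_pool_1d` computes its window per output bin from the input length, unlike fixed-window average pooling.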
Rebuttal 1: Rebuttal: #### **To all reviewers:** Thank you very much for your valuable time and constructive comments. We will revise our paper according to your comments. The major changes are as follows: 1. We will offer a more detailed discussion regarding our novelty compared to existing methods and highlight the architectural distinction from original vision transformers, as suggested by Reviewers KGpU and QRN3. 2. We will provide more insights on the explainability of our design from the perspective of clustering, as suggested by Reviewer KGpU. 3. We will further clarify the design of the recursive cross-attention mechanism compared with traditional self-attention in any pyramid-based transformer, as suggested by Reviewer bUzy. 4. We will add more descriptions and clarify some misleading terms, as suggested by Reviewers RD2x and R2Rz. 5. We will include more experimental results of the Base-sized model in the revision, according to the suggestion by Reviewer QRN3. 6. We will supplement additional implementation details together with the computational cost to ensure a more complete comparison, according to the comments from the reviewers. We have strived to address each of your concerns comprehensively and welcome further discussions and insights. Sincerely yours, Authors
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a new architecture, ClusterFormer. Instead of a transformer block, ClusterFormer uses recursive clustering implemented by cross-attention layers and MLP layers dispatching cluster information to tokens. ClusterFormer outperforms other architectures with the same number of parameters. Strengths: - ClusterFormer uses cross-attention similar to k-means clustering, significantly different from a self-attention layer of ViT. It enhances the originality of the research and makes ClusterFormer interesting. Weaknesses: - Computation (FLOPs) and throughput of ClusterFormer are not reported. A comparison of computation costs is necessary for an architecture paper. - The paper lacks essential details regarding network architectures, such as network depth, channels, stage configuration, and number of iterations for clustering. I don't think the paper includes enough information to reproduce the results. - Writing should be improved. Because the paper only focuses on detailed modules, there is no overall architecture description. It is really hard to figure out ClusterFormer as an architecture. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - In recent architecture research, the computation cost of an architecture is the most important part. How much computation does ClusterFormer require? - In line 142, the paper claims that ClusterFormer is an efficient architecture because $TK \ll HW$. However, in the experiments, $K=100, 150$ are used. The original ViT uses $HW=196$. Although $T$ is not given in the paper, I think it is not enough to argue $TK \ll HW$. Please explain this mismatch between the method and the experiments. - In the first plot of Figure 1, the x-axis scale looks wrong. The improvement of ClusterFormer is just +0.4, but it looks like +4.0 in the plot. Please correct this. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. Computation Cost** **A1:** Thank you for the suggestion. The computation cost and inference speed are reported as follows. We will incorporate them in the appendix.

| Method | Parameters | FLOPs | Inference latency | GPU memory | Top-1 accuracy |
| :-: | :-: | :-: | :-: | :-: | :-: |
| DeiT-Tiny | 5.72 M | 1.26 G | 0.35 ms | 1884 MB | 74.50 |
| ResNet-50 | 25.56 M | 4.12 G | 0.96 ms | 7658 MB | 76.55 |
| Swin-Tiny | 28.29 M | 4.36 G | 1.35 ms | 7990 MB | 81.18 |
| ClusterFormer-Tiny | 27.85 M | 4.19 G | 1.31 ms | 7786 MB | 81.31 |

| Method | Parameters | FLOPs | Inference latency | GPU memory | Top-1 accuracy |
| :-: | :-: | :-: | :-: | :-: | :-: |
| DeiT-Small | 22.05 M | 4.24 G | 1.04 ms | 5251 MB | 80.69 |
| ResNet-101 | 44.55 M | 7.85 G | 1.68 ms | 9682 MB | 77.97 |
| Swin-Small | 49.61 M | 8.52 G | 2.41 ms | 13976 MB | 83.02 |
| ClusterFormer-Small | 48.71 M | 8.24 G | 2.24 ms | 13215 MB | 83.41 |

#### **Q2. Lack of essential details** **A2:** Thank you for the feedback. In Section 3.3 of the paper, we try to comprehensively outline the implementation details. Furthermore, Fig. 2 of the paper provides a visual depiction of the overall network architecture, illustrating its structure in a more intuitive way. The channel configuration, which you pointed out as a missing detail, is described in our ablation study. The study, presented in Table 6, explores the head dimension and its impact on the results of our experiments. We also studied the number of iterations required for the clustering process and provided these details in the same table. In an effort to provide additional clarity and aid in reproducing our results, we have included both pseudo-code and the actual code with anonymous links in the supplemental material. #### **Q3. TK<<HW** **A3:** Thank you for the feedback, and we acknowledge your concern about the perceived mismatch between the theoretical claim TK << HW and the experimental settings.
Instead of using ViT, we follow the most recent pyramid architectures (e.g., Swin Transformer or Pyramid Vision Transformer) for building our model. Given the nature of the pyramid architecture during the encoding process, the effective HW varies across different stages, with values of 12544, 3136, 784, and 196. The value of HW=196 applies to the final stage, while for earlier stages, HW can be significantly larger. Considering this, the efficiency of our model should be understood in the context of these pyramid architectures, where TK can indeed be much smaller than HW, especially in the earlier stages. #### **Q4. Figure 1** **A4:** Thank you for the feedback. We will update the figure in the revised version to make it clearer. --- Rebuttal Comment 1.1: Comment: Hi, thank you for your response. I read your responses, and here are additional questions. **Q1. Computation cost** The report on computation costs will significantly improve the contribution of your paper. I will adjust my rating after the discussion. I have one more question on the computation cost. As I understand it, ClusterFormer uses recursion in the network, which may lower the number of parameters and increase FLOPs. However, ClusterFormer has a similar number of parameters and FLOPs compared to the Swin Transformer. Can you explain this to me? Does ClusterFormer use more blocks than Swin on stage 4? **Q2. Lack of essential details** I missed the numbers in the ablation study. It would be better to mention the number of heads and recursions in the main experiments of ClusterFormer. Still, I can't find the depth and number of blocks in each stage of ClusterFormer. It is infeasible to reproduce ClusterFormer without knowledge of the network depths. I recommend the authors explain the depth of ClusterFormer as in Fig. 3 of the Swin Transformer paper. (e.g. Blocks x2 on stage 1, Blocks x2 on stage 2, Blocks x6 on stage 3, Blocks x2 on stage 4) **Q3. TK<<HW** To my knowledge, the Swin Transformer uses 224 x 224 images, and HxW on each stage is `56 x 56`, `28 x 28`, `14 x 14`, `7 x 7`. Thus, HW is `3136`, `784`, `196`, `49`. Considering that `T=3` in Table 5 (b), `TK=300 or 450` is larger than HW in stages 3 and 4, and it is not significantly smaller than HW in stage 2. Thus, I still think the efficiency of ClusterFormer is only applicable to stage 1. Is there an additional explanation for it? --- Reply to Comment 1.1.1: Title: Response to Reviewer bUzy Comment: Thanks for your further feedback! We are glad that our rebuttal addressed some of your concerns. We answer your additional questions as follows: #### **Q1: Computation cost** **A1:** This is a great question! The number of parameters of ClusterFormer closely aligns with that of the Swin Transformer, but with significantly lower training FLOPs within a single iteration. The recursive mechanism within ClusterFormer indeed maintains a consistent parameter count in the network while increasing FLOPs, thereby leading to a total FLOP count akin to that of the Swin Transformer. We hope this clarifies your question. We will include the computation cost results along with the elucidation provided above in the revision. We deeply appreciate your insightful suggestion. #### **Q2: Regarding the details** **A2:** We followed the architecture and configuration of the Swin Transformer. For example, for the tiny model, we use {2, 2, 6, 2} blocks and {3, 6, 12, 24} heads with a head dimension of 32 by default for the different stages, respectively. We will mention these details in the main experiments as suggested. #### **Q3: TK<<HW** **A3:** Thank you for your feedback. As you correctly pointed out, the average value of HW (over the four stages) in the Swin Transformer is 1041, which is approximately 3.5 times larger than TK (300). Moreover, when considering downstream tasks like object detection or segmentation, there's a common tendency to employ higher-resolution images.
In such cases, the disparity between HW and TK becomes even more pronounced. Furthermore, we also investigate the tiny model with smaller numbers of clusters K (better FLOPs at the cost of a certain performance drop).

| Method | Parameters | FLOPs | Inference latency | GPU memory | Top-1 accuracy |
| :-: | :-: | :-: | :-: | :-: | :-: |
| Swin-Tiny | 28.29 M | 4.36 G | 1.35 ms | 7990 MB | 81.18 |
| ClusterFormer-K-100 | 27.85 M | 4.19 G | 1.31 ms | 7786 MB | 81.31 |
| ClusterFormer-K-49 | 23.13 M | 2.97 G | 0.87 ms | 7172 MB | 80.93 |
| ClusterFormer-K-25 | 20.25 M | 2.35 G | 0.52 ms | 6793 MB | 79.59 |

Thanks again for your thoughtful comments! We are happy to discuss more if you have any other questions.
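The stage-resolution arithmetic debated in this thread is easy to reproduce. The following small illustration (our own, assuming a 224x224 input, T=3 iterations, and K=100 clusters as in the discussion) reconciles the two sets of HW values quoted above:

```python
def stage_hw(input_side, first_stride, num_stages=4):
    # Token-grid sizes H*W per pyramid stage: the first stage downsamples by
    # `first_stride`, and each subsequent stage halves the side length.
    side = input_side // first_stride
    hws = []
    for _ in range(num_stages):
        hws.append(side * side)
        side //= 2
    return hws

authors_hw = stage_hw(224, 2)   # stride-2 stem: [12544, 3136, 784, 196]
swin_hw = stage_hw(224, 4)      # stride-4 patch embed: [3136, 784, 196, 49]
tk = 3 * 100                    # T = 3 iterations, K = 100 clusters
avg_swin_hw = sum(swin_hw) / len(swin_hw)   # 1041.25, roughly 3.5x larger than TK
```

This matches both sides of the exchange: the authors' figures correspond to a stride-2 stem, the reviewer's to Swin's stride-4 patch embedding, and the average HW of 1041 cited in the final answer follows from the latter.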
Summary: This paper presents ClusterFormer, a new module as a replacement for the self-attention module in vision transformers. The method involves performing (iterative) clustering between the input tokens and finally summarizes them into a few clusters for feature computation (using a method named feature dispatching). ClusterFormer was validated as effective in a wide range of vision tasks including image classification, object detection, semantic/instance/panoptic segmentation, etc. Strengths: + The proposed ClusterFormer has been validated on a wide range of vision tasks. + The paper is well-written and organized. Weaknesses: - The proposed ClusterFormer is yet another form of self-attention in which a few clustering tokens are constructed to collect information from visual tokens and then propagate information to them. The novelty is limited given the following works published previously (and it is possible to find more). [A] Zheng et al., End-to-End Object Detection with Adaptive Clustering Transformer, BMVC 2021. [B] Fang et al., MSG-Transformer: Exchanging Local Spatial Information by Manipulating Messenger Tokens, CVPR 2022. [C] Liang et al., Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning, NeurIPS 2022. - The ablative study part mostly studies the design choices (e.g. how to perform feature dispatching), but it misses a study on how the method improves the baseline approaches (e.g. compared against Swin). I am not sure why the proposed method is better than the original vision transformers (the current explanations, including visualization, are insufficient to claim the advantages). - I think the paper somewhat overclaims the advantages, such as explainability. I am a bit conservative in saying that the proposed method is explainable because the essence is still self-attention. - I hope to make sure that all methods are compared fairly.
For example, in Table 1, the ClusterFormer entries were trained with a batch size of 1024 on 16 A100 cards, what about others (e.g. the closest competitor, Swin)? I know that the batch size can largely impact the final results. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please address the concerns raised in the weakness part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Overall, this is an incremental improvement over the original vision transformers. There are two main limitations, including the existence of similar prior methods and the lack of ability to deal with smaller units (e.g. if a token occupies two semantic regions, it is not possible to split the token -- it is also a weakness of prior methods). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: #### **Q1. ClusterFormer is yet another form of self-attention. The novelty is limited.** **A1:** We appreciate the reviewer's insightful feedback, and we will add a discussion of these methods in our revision. In the meantime, it is important to distinguish our method from the referenced works. Our ClusterFormer adopts the recurrent **cross-attention** mechanism from the perspective of EM clustering to unify the encoding process. Therefore, though our objective of clustering might be similar to [ref1], [ref2], and [ref3], the way in which we accomplish this is conceptually and operationally different. The novelty of our approach lies in the use of this advanced architecture to serve as a universal visual learner. Specifically, the method in [ref1] uses an adaptive clustering transformer only after the implementation of CNN backbones for further decoding, a different approach from ours, which integrates the clustering mechanism throughout the attention mechanism. In [ref2], their methodology involves using multi-head self-attention, a shuffle module, and an MLP to generate messengers in every region. While this approach shares a similarity in the usage of self-attention, it does not align with our novel use of expectation-maximization clustering in our cross-attention mechanism. [ref3] constructs their clustering layer by following the improved SLIC scheme, a distinctly different approach from ours. Our methodology not only incorporates a different process of implementation but also provides a significant leap forward in the way we leverage the attention mechanism, fundamentally changing the way we gather, process, and disseminate feature representations within our model. We hope that this explanation provides more insight into the innovative nature of our approach. Thanks. [ref1] Zheng et al., End-to-End Object Detection with Adaptive Clustering Transformer, BMVC 2021.
[ref2] Fang et al., Msg-transformer: Exchanging local spatial information by manipulating messenger tokens, CVPR 2022. [ref3] Liang et al., Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning, NeurIPS 2022. #### **Q2. Difference from original vision transformers.** **A2:** We appreciate the reviewer's feedback. Our work significantly diverges from traditional vision transformer architectures such as Swin, ViT, and PVT. These architectures rely heavily on a self-attention mechanism, whereas our ClusterFormer employs recurrent cross-attention clustering. Traditional self-attention mechanisms may struggle with the encoding process, which tends to be highly distributed and entangled, making it difficult to disentangle what aspects of the input contribute to the output. In contrast, by cross-attention clustering, our model offers an implicit way to generate cluster centers with high semantics and distribute them to each output token individually. Specifically, this unique process allows us to acquire cluster centers and then use feature dispatching to update the feature representations from the corresponding cluster centers. We believe that our distinct methodological differences inherently lead to improvements over these methods. The recurrent cross-attention clustering offers a more dynamic and flexible means of processing visual knowledge, allowing for more robust and accurate representations. Thank you for your great suggestion. We will provide more discussion together with an ablation result to better explain and visualize the advantage of our model over traditional vision transformers. #### **Q3. Explainability.** **A3:** The explainability is rooted in the nature of the centers for each cluster of feature representations (as shown in Fig. 3). The key point is the centrality of the 'centers' for each cluster of feature representations in our model.
The main idea is that these cluster centers, which are determined through our recurrent cross-attention clustering process, represent a 'prototype' of the features they cluster. This property allows us to identify what representation is most salient within each cluster, providing interpretability not typically associated with traditional self-attention mechanisms. #### **Q4. Fair Comparison.** **A4:** Thank you for the great question. We used the same batch size and endeavored to maintain a consistent and fair environment for all models to be tested. All experiments follow the same training schedules in mmclassification to ensure a fair comparison. We understand that this point might not have been clearly communicated in our paper. We will make sure to provide more explicit details about the setup of all methods in our experiments during revision. --- Rebuttal Comment 1.1: Title: I am still negative on this paper Comment: I read the authors' rebuttal and other reviewers' comments. Overall, I am still negative on this paper. First, I would like to say that I do not totally agree with Reviewer x7UH about the novelty of this work. Using clustering or similar methods in vision transformers is not a new idea. Besides what I mentioned in the review, there are also other methods for improving the speed of vision transformers, such as [D][E]. Note that these methods did not use clustering, but they also used the relationship between tokens to eliminate less important ones. On the other side, I do not agree with Reviewer bUzy, who criticized the paper too harshly for its writing issues. - [D] Rao et al., DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification, NeurIPS 2021. - [E] Bolya et al., Token Merging: Your ViT But Faster, ICLR 2023. To me, this paper is a borderline case. I shall say that the paper suffers a bit (making me negative) because the authors always tried to overclaim the contributions or results.
Many reviewers mentioned the potential overclaiming in the paper. Even in the rebuttal, I am still seeing many words like "a significant leap forward", which somewhat make other statements less convincing. Regarding the rebuttal (to my part) itself, I appreciate the efforts that the authors made to address my concerns, especially on the relationship to previous methods. After reading it, I am even more confident about my original comments: this is a borderline paper which made marginal contributions. BTW, there are too many "we will"s in the overall rebuttal but few of them were really provided. I choose to keep my original rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer KGpU Comment: Thank you for taking the time to articulate your thoughts on our paper. We truly value your feedback and sincerely seek your approval. In terms of our contribution, we value the impartial and forthright evaluation provided by the reviewer. However, we still wish to underscore the significance of our work. While the foundational concept of utilizing clustering within vision transformers might not be new, our innovation lies in refining the recurrent cross-attention clustering with EM-like optimization. This novel approach provides a fresh outlook on feature representation learning, particularly adept at addressing a wide spectrum of visual tasks characterized by varying clustering complexities. Our endeavor represents one of the initial strides toward formulating a universal visual learner, and we hope to inspire future exploration in this direction. As for the use of "we will" in our general response, we have tried our best to address the concerns of all reviewers, and all points are incorporated into each individual response to the reviewers. Unfortunately, the guidelines of the rebuttal process have constrained us from directly implementing changes in the manuscript. We commit to fulfilling all the adjustments promised during the revision.
Thanks again for your thoughtful feedback! We are happy to discuss more if you have any other questions.
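The recurrent cross-attention clustering with feature dispatching described throughout these rebuttals can be pictured as an EM-style soft k-means phrased in attention form. The following is our own schematic NumPy sketch of that idea, not the authors' implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cluster_former_block(x, centers, n_iters=3):
    # Recurrent cross-attention clustering, EM style:
    # E-step: centers cross-attend to tokens -> soft assignments (rows sum to 1);
    # M-step: centers are updated as assignment-weighted means of the tokens.
    for _ in range(n_iters):
        assign = softmax(centers @ x.T, axis=-1)   # (K, N) soft assignments
        centers = assign @ x                        # (K, d) weighted token means
    # Feature dispatching: each token is updated from its cluster centers.
    dispatch = softmax(x @ centers.T, axis=-1)      # (N, K)
    return x + dispatch @ centers, centers

rng = np.random.default_rng(0)
tokens = rng.normal(size=(49, 16))       # N = 49 tokens, d = 16
init_centers = rng.normal(size=(4, 16))  # K = 4 clusters
out_tokens, out_centers = cluster_former_block(tokens, init_centers)
```

With K and T fixed, the cross-attention cost scales as O(N * K * T) rather than the O(N^2) of token-to-token self-attention, which is the efficiency argument made in the TK vs. HW exchange above.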
Dense-Exponential Random Features: Sharp Positive Estimators of the Gaussian Kernel
Accept (poster)
Summary: This paper studies the problem of computing the matrix "KC". Here, K is the kernel matrix (L-by-L, with L very large; it could be a reproducing kernel, but that is fine otherwise), and C is a known constant matrix. The kernel is restricted to be a "scaled softmax kernel": exp(x-dot-y)F(x)F(y) (for some function F that is not difficult to handle). The contribution of this paper is the design of a class of random features whose correlation is exactly K, and, to some extent, the variance of replacing the mean (for the correlation) by the sample mean can be minimized in closed form. This research, to the best of my knowledge, is innovative, inspiring, and interesting. Strengths: In this paper, the idea, and the derived formulation of optimizing the sample variance for improved kernel estimation, is innovative. Besides the notation problem raised below, the paper has a clear and friendly structure that is easy to follow. The main results are presented clearly. These suggest excellent writing quality and clarity. We do have concerns on significance, but we would like to raise such concerns below, instead of here. Weaknesses: 1. Concerns about the writing. I feel that the notations are unnecessarily heavy. For example, I cannot see any reason that blocks one from rewriting: f^(k) --> f^k (or just f and g), B^(k) --> B_k, C^(k) --> C_k or C^k, x^(i) --> x_i, y^(i) --> y_i, M^(k), Q^(k), Lambda^(k), .... By this rewriting, the number of notations is reduced by 50%! Some notations only appear a few times and can just be replaced by English, e.g., O_d (the set of orthogonal matrices). 2. The significance of the paper relies on the significance of the exponential kernel exp(x-dot-y). If computing the transformer attention is the only application, the vanilla computational cost should be compared.
Otherwise, I am not certain whether this is a widely used kernel family, and (e.g., for the Gaussian kernel) in the application scenarios (e.g., regularized least squares, or support vector machines), whether computing KC is really the bottleneck. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Lines 76--77. Could you please provide more details on why the computational complexity of hat-K-times-C is O(LMn)? 2. In Eq. (8), the variable "theta" does not show up on the right-hand side, and is not defined elsewhere. It would be better if this variable were defined. 3. It remains very unclear, even after a careful study of Section 3, what the "homogeneity heuristic" and "a certain optimization problem" mentioned in Lines 124 and 125 are. It would be better if these were explicitly pointed out. 4. Consider the mission of computing the matrix KC, as elaborated in Section 2.1: would adopting DERFs reduce the computational complexity? To be specific, does the amount of variance reduced justify the computational cost incurred by optimizing the random features via the approaches introduced in Section 4? Can one achieve the same accuracy and computational complexity simply by using a larger M? For example, Theorem 4.2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: foundational research, limitations well managed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for all the comments. We address the weaknesses and questions below. > Concerns about the writing. We will address this concern by simplifying and de-densifying the notation in the revision of the paper. > Otherwise, I feel not certain if this is a widely used kernel family, and … whether computing KC is really the bottleneck. Note that our derivations, as mentioned in the title and in e.g. lines 57-58, apply to both softmax and Gaussian kernels. The Gaussian kernel is a very important object of research in the kernel methods literature, and a lot of effort has been made to speed it up in situations where the kernel matrix is big (e.g. in SVM or kernel regression where the number of data points is above a few thousand) – see [38,39,40] and thousands of follow-ups of these papers. > Lines 76--77. Could you please provide more details on why the computational complexity of hat-K-times-C is O(LMn)? As mentioned in the text, hat{K} = P S^T. Hence, hat{K} C = P S^T C = P (S^T C). The complexity of computing U = S^T C is O(LMn) since S^T, C are of shapes MxL and Lxn. The complexity of computing P U is also O(LMn) since P is of shape LxM and U is of shape Mxn. > In Eq. (8), the variable "theta" does not show up in the right-hand side, and is not elsewhere defined. Better if this variable can be defined. Theta is defined in line 126. We will expand its definition to make it clearer in the final revision. > It remains very unclear even after a careful study of Section 3, what are "homogeneity heuristic" and "a certain optimization problem" mentioned in Lines 124 and 125. The "homogeneity heuristic" is defined in line 116, and the optimization problem is the minimization of (8), as mentioned in line 129. We will clarify these notions in the final version. > Can one achieve the same accuracy and computation complexity simply by using a larger M?
As we demonstrate in our experimental results (Figure 1), our RF variants result in up to e^10 variance improvements over the previous best variant. Achieving that by using a larger M would mean taking e^10 * M random features (since the variance of a Monte Carlo estimator is proportional to M^{-1}), which would be prohibitively expensive. --- Rebuttal Comment 1.1: Title: Thank you for the feedback Comment: All of my concerns are resolved. I would like to raise the rating after the next phase of discussion with other reviewers.
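The bracketing argument behind the O(LMn) complexity discussed in this thread can be checked numerically. Below is a small sketch with random factor matrices (the shapes are illustrative; P and S stand in for the actual random-feature matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
L, M, n = 500, 16, 8            # sequence length, number of features, columns of C
P = rng.normal(size=(L, M))     # row ("query") feature matrix
S = rng.normal(size=(L, M))     # column ("key") feature matrix
C = rng.normal(size=(L, n))

# Naive order: materialize hat{K} = P S^T (L x L), then multiply
# -> O(L^2 n) time and O(L^2) memory.
naive = (P @ S.T) @ C
# Bracketed order: P (S^T C) -> O(LMn) time, never forming the L x L matrix.
fast = P @ (S.T @ C)
```

Matrix multiplication is associative, so both orders give the same result; only the cost differs, which is the entire point of the low-rank decomposition.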
Summary: In this paper, the GERFs are generalized to dense exponential random features (DERFs). The paper shows that GERFs and PosRFs are special cases of DERFs, from which it follows that, with a suitable parameter estimator, DERFs can achieve better performance. Strengths: As far as I know, the DERFs are novel and are a good extension of GERFs. There are sufficient discussions and experiments from many aspects, all showing the good performance of the proposed method. The paper is well organized and clearly presented. Weaknesses: I did not find a significant weakness. One possible weakness is that, because this paper covers many aspects, it may make the reader lose focus. It would be better to clearly express the recommendation: in which cases DERFs are recommended. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the different experiments, several random features are compared when the number of random features is the same. How about the real calculation time? There could be differences in computing the different random features at inference. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for a high score and kind words in “Strengths”! We address weaknesses and questions below. > It is better to clearly express the suggestion: in which case DERFs is recommended. We recommend using them whenever the sequence length is too prohibitive for using exact kernel matrix/self-attention. > How about the real calculation time? Since there could be difference on calculating different random features in the inference. The computational complexity for all RF methods scales linearly with the sequence length, hence, for a large sequence length, we consider all these methods as efficient and only compare the downstream metric performance. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am also glad to see some other reviewers' concerns have been solved. Overall, I keep my positive score.
Summary: The authors propose new random features for Gaussian and softmax kernels, and apply the approximation to learning scalable Transformer networks. Strengths: 1. The proposed dense exponential random features (DERFs) generalize the current positive random features (PosRFs) and generalized exponential random features (GERFs). 2. The authors show stronger theoretical results for the proposed DERFs in scalable Transformer networks where the self-attention matrix is approximated as a low-rank matrix when the sequence is long. 3. The utility of the method is demonstrated on a variety of datasets with competitive results. Weaknesses: 1. While the authors argue that significant variance reduction can be achieved, it is not clear whether the proposed approximation is unbiased. 2. The method could be computationally expensive---the term D involves computing the determinant. 3. Theorem 4.1 makes a few assumptions, restrictions of these conditions are not fully discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How feasible are the conditions in Theorem 4.1? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! We address weaknesses and questions below. > While the authors argue that significant variance reduction can be achieved, it is not clear whether the proposed approximation is unbiased. The proposed approximation is unbiased, as established by Theorem 4.1. > The method could be computationally expensive---the term D involves computing the determinant. Note that A is a d x d diagonal matrix in all our DERF variants, hence computing its determinant takes O(d) time. > Theorem 4.1 makes a few assumptions, restrictions of these conditions are not fully discussed. All our closed-form optimal solutions (Theorems 4.2, 4.3) satisfy the constraints from Theorem 4.1, as mentioned in lines 176-177 for Theorem 4.2 and 198-200 for Theorem 4.3. > How feasible are the conditions in Theorem 4.1? As mentioned above, these conditions are satisfied automatically by our optimal solutions. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thank you for addressing my comments. I've read all the reviews and my score remains unchanged.
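To illustrate the O(d) point made in the rebuttal (a trivial sketch, not code from the paper): for a diagonal matrix, the determinant reduces to the product of the diagonal entries, so no O(d^3) factorization is ever needed.

```python
import numpy as np

def det_diagonal(diag):
    """Determinant of a diagonal matrix is the product of its diagonal
    entries -- an O(d) computation, no matrix factorization required."""
    return float(np.prod(diag))

diag = np.array([2.0, 0.5, 3.0, 1.0])

# The O(d) product matches the general-purpose determinant routine.
assert np.isclose(det_diagonal(diag), np.linalg.det(np.diag(diag)))
print(det_diagonal(diag))  # 3.0
```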
Summary: This paper is the next instalment in a series of works focusing on low-rank transformers, e.g. [15, 30]. They propose Dense Exponential Random Features (DERF) for unbiased Monte Carlo approximation of Gaussian or softmax kernels. This class of features generalizes GERFs from previous work, the main difference being that the features are parametrized by matrix-valued parameters rather than scalars. This flexibility allows capturing a larger class of features for kernel approximation. The parameters themselves can be analytically optimized in several useful special cases: ADERF, SDERF and SADERF. The parameter optimization is carried out with respect to a so-called shifted log-variance objective averaged over the dataset, and it is shown that optimizing this objective formalizes a "homogeneity heuristic" from previous work, which is a nice observation. In particular, this allows selecting the same set of parameters across all data points. The experiments are essentially the same as in previous works: variance analysis on synthetic datasets + cifar + mnist, a set of nonparametric kernel regression benchmarks, speech modelling and low-rank uptraining on NLP tasks. Improvements over previous work on scalable transformers are achieved. Strengths: The paper is overall well written, and it does really well at communicating the main issues and ideas on which it is based. There are both theoretical and experimental components, both of which have some interesting results. The main idea is novel, and the proposed approach for generalizing previous random feature methods is non-trivial; I was impressed that the analytical calculations could also be carried out in the matrix-valued case. I believe it is of interest to both the kernel and efficient transformer communities to have some scalable and expressive random features, and the paper definitely takes another step in this direction.
Weaknesses: Overall there are not that many weaknesses in my opinion. The impact is slightly less significant than that of its predecessor [30], where the main novelty was allowing for low-rank uptraining of pre-trained Transformers, which was already achieved by FAVOR++ (an efficient Transformer based on GERFs). Nevertheless, the improved experimental performance should be of interest. I was missing some more comparisons between the variations: for one, SADERFs were not included in any of them, and only used for the last NLP task. Computationally, this variant seems to be the most favourable since it does not require the eigendecomposition of a $d \times d$ matrix, hence foregoing an $O(d^3)$ computational cost, which is associated with ADERF and SDERF. It would be interesting to see this variation included in the comparisons to get an idea of the trade-off between performance and this computational saving. I was also missing some intuition or hypothesis regarding why SDERF seems to perform best. Is it because it allows for a non-isotropic matrix $A$, hence allowing the variance of $\omega$ to adapt on a coordinate-by-coordinate basis in the quadratic form containing $\omega$, if this makes sense? It would be interesting to know which additional parameters result in the biggest improvement compared to GERF, so that focus could be placed on optimizing the scalability-performance trade-off. (See questions for more) Another question that seems unaddressed to me is how much information is lost compared to a full Transformer on truly long-range tasks. At the moment, we do not know the "price" we pay for the subquadratic scalability in sequence length, and it would also be interesting to include some more long-range tasks comparing full Transformers with RF methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Some questions and remarks: - Is $\sigma$ analogous to the bandwidth parameter of the kernel in Figure 1 (right) and Figure 2? 
This is not stated explicitly. - In lines 138-139, it is stated that [30] achieved good results by optimizing the shifted log-variance. Given that this objective is only introduced in this work, maybe I would phrase this differently, since in previous work it was only incorporated as a heuristic for homogenizing the solution, rather than an optimization problem. - How significant is the $O(d^3)$ computation cost associated with SDERF and ADERF? I guess the previous layer could easily have an overall dimension in the 1000s, where this could become a bottleneck on GPUs in terms of memory? - In line 214, the authors make the remark that assuming $L \geq d$, the $O(Ld^2)$ cost dominates the $O(d^3)$ cost. Is this not already assumed in Theorem 4.2 (and hence, I assume, in Theorem 4.3 from the way it is phrased), since this is necessary for the nonsingularity of $M^{(1)}$ and $M^{(2)}$? - If the main improvement of SDERF compared to ADERF comes from allowing a non-isotropic $A$, would it be beneficial to define extensions of GERF or SADERF, which add this flexibility to the parametrization? What might be the computational cost of such an RF construction? This is just a hunch, but if this can be solved for without matrix decompositions, then we might have the best of both worlds? No problem if working this out is difficult, just curious. - What is the range of sequence lengths in the speech modelling benchmark? Does this count as a long-range task? - In the NLP task, what is the number of random features (number of MC samples) used? Is there any intuition or investigation that the authors could report about how to choose this hyperparameter? - One more question; in the Appendix it is stated that replacing the standard normal distribution on $\omega$ with an orthogonal one works better in practice. Is this achieved by QR decomposing a set of Gaussian vectors? Is there any intuition about why this improves the performance? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper is mainly theoretical, but it has direct implications regarding Transformers, which carry with themselves a variety of societal and environmental impacts, but this seems to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for all the comments and a favorable score. We address weaknesses and questions below. > I was missing some more comparisons between the variations: for one, SADERFs were not included in any of them, and only used for the last NLP task. Note that we report ADERFs in Figure 2, which can be thought of as a stronger version of SADERFs. It performs no worse than the previous best variant (GERFs); SDERF, however, is even better. For that reason, we evaluated SDERFs in the speech modelling experiment; however, we weren’t able to use it in the NLP setup because we experienced unhealthy behaviour when using SVD and eigendecomposition in Jax. That’s why we chose SADERFs in this setup, and in general we recommend using them when matrix decomposition is infeasible. > Another question that seems unaddressed to me is how much information is lost compared to a full Transformer on truly long-range tasks. We refer to [14], which is the original paper proposing RFs in the context of Transformers and demonstrating superior performance compared to full Transformers. We note that the RFs proposed in our paper are a stronger variant of FAVOR+ from [14]. > Is sigma analogous to the bandwidth parameter of the kernel in Figure 1 (right) and Figure 2? We don’t parametrize our Gaussian kernel definition (line 58), but sigma in Figures 1 and 2 is equivalent to the inverse bandwidth parameter in the standard Gaussian kernel exp(-||x - y||^2 / (2 bandwidth^2)), since the arguments to (our) kernel are sigma x, sigma y. > Given that this objective is only introduced in this work, maybe I would phrase this differently… Thanks for the suggestion; we will incorporate it in the final version to mitigate potential confusion. What we meant is that [30] inherently optimize Eq. (8) without knowing it and get good results, which suggests that (8) is a good variance proxy. 
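The sigma-vs-bandwidth equivalence stated in the rebuttal can be checked numerically. This is an illustrative sketch with made-up values, not code from the paper: applying the unparametrized kernel exp(-||u - v||^2 / 2) to inputs scaled by sigma equals the standard Gaussian kernel with bandwidth 1/sigma.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
sigma = 2.5

# Unparametrized Gaussian kernel applied to sigma-scaled inputs...
k_scaled = np.exp(-np.sum((sigma * x - sigma * y) ** 2) / 2)

# ...equals the standard Gaussian kernel with bandwidth = 1/sigma,
# since sigma^2 ||x - y||^2 / 2 == ||x - y||^2 / (2 * (1/sigma)^2).
bandwidth = 1.0 / sigma
k_band = np.exp(-np.sum((x - y) ** 2) / (2 * bandwidth ** 2))

assert np.isclose(k_scaled, k_band)
```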
> I guess the previous layer could easily have an overall dimension in the 1000s, where this could become a bottleneck on GPUs in terms of memory? Note that d is not the dimension of the previous layer but the dimension of the attention head, which is much smaller and is typically 64. > Is this not already assumed in Theorem 4.2 (and hence, I assume in Theorem 4.3 from the way it is phrased) … ? That’s correct, thank you for your observation! We will add this clarification to the theorem statements. > If the main improvement of SDERF compared to ADERF comes from allowing a non-isotropic A, would it be beneficial to define extensions of GERF or SADERF, which add this flexibility to the parametrization? Thank you for suggesting this; it could be a nice extension of our methods, and we leave it to future work. > What is the range of sequence lengths in the speech modelling benchmark? For speech modeling, the max sequence length was ~900. > In the NLP task, what is the number of random features (number of MC samples) used? Is there any intuition, investigation that the authors could report about how to choose this hyperparameter? The number of RFs is M=128. A bigger M can only decrease the variance of the approximation, hence we recommend setting M as high as possible within the given computational limitations. In our NLP experiments we tried 64, 128 and 256 and found that 128 and 256 perform similarly to each other while being a bit better than M=64. Moreover, in Figure 3 we show plots of how accuracy changes across different datasets as we vary M. > One more question; in the Appendix it is stated that replacing the standard normal distribution on omega with an orthogonal one works better in practice. Is this achieved by QR decomposing a set of Gaussian vectors? Is there any intuition about why this improves the performance? 
There is a thread of work about orthogonal random features and why they reduce variance for kernel estimation: see “Orthogonal Random Features”, Yu et al. 2016, [13, 14, 15] and their references/citations.
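As an illustration of the construction the reviewer asks about (a sketch of the standard orthogonal random features recipe from Yu et al., 2016, not the paper's own code): QR-decompose a Gaussian matrix to get exactly orthogonal directions, then rescale each row so its norm matches that of an i.i.d. Gaussian vector.

```python
import numpy as np

def orthogonal_gaussian(M, d, rng):
    """Sample an M x d frequency matrix whose rows are exactly orthogonal
    within each block of size d, while row norms keep the chi distribution
    of i.i.d. Gaussian rows (the orthogonal random features construction)."""
    blocks, remaining = [], M
    while remaining > 0:
        k = min(remaining, d)
        G = rng.normal(size=(d, d))
        Q, _ = np.linalg.qr(G)  # rows of the square Q are orthonormal
        # Restore Gaussian-like row norms (QR leaves unit-norm rows).
        norms = np.linalg.norm(rng.normal(size=(k, d)), axis=1)
        blocks.append(Q[:k] * norms[:, None])
        remaining -= k
    return np.vstack(blocks)

rng = np.random.default_rng(0)
W = orthogonal_gaussian(3, 8, rng)
G = W @ W.T
# Rows within the block are mutually orthogonal (off-diagonal ~ 0).
assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-10)
```

The intuition discussed in that line of work is that orthogonal directions introduce negative correlations between the Monte Carlo samples, which provably reduces the variance of the kernel estimate relative to i.i.d. frequencies.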
NeurIPS_2023_submissions_huggingface
2023
Summary: This work focuses on positive linear features for softmax kernels, which are relevant for accelerating kernel methods and transformers. The paper observes that the functional form for random Fourier features (GERFs) proposed by prior work in [30] can be seen as optimizing the shifted log-variance objective. Building on GERFs, a more expressive and parameterized functional form is proposed, and the corresponding shifted log-variance is derived. After that, several simplifications are considered for improved optimization of the shifted log-variance of the random features, which leads to different forms of RFs that are easily computable while keeping subquadratic complexity in the sequence length. Towards the end, a few empirical results are presented in support of their contributions. Strengths: * This work studies an important class of problems that has broader applicability. The paper builds on observations/heuristics from prior work and attempts to justify them using theoretical arguments. In doing so, the proposed methodologies are mathematically elegant and present several exciting results on RFs that are relevant to both kernel methods and transformers. * While I have not examined every mathematical proof, the derivation appears generally sound, insightful, and praiseworthy. Weaknesses: * To establish that GERF minimizes the objective, the supporting argument is that the solution of Eq. (8) matches the heuristic proposed by [30]. If I have understood it correctly, more is needed, as for an objective to be reasonable, its behavior beyond the maximizer/minimizer needs to be understood. Also, is there any relation between this objective and the actual variance of the GERF estimate? Can Jensen's inequality be used? This is one of the main results of this paper, and it needs to be better motivated. * Notation could be improved. E.g., use a consistent format for scalars and matrices. L is a scalar. 
Avoid unnecessary precision; e.g., the upper subscript in lines 157 and 158 could have been avoided and stated as part of the text. Note that this work is built on math and therefore requires extra effort on styling to make it accessible to a larger audience. * Pages 5-6-7 are not used carefully and do not present the best aspects of this work. You are considering simple formulations and repeating more or less the same procedure for obtaining closed-form solutions of the objective, which again needs to be justified better in the first place. Also, before going into these details, give an overview of the possible choices in the parameter space and their justification. Summarizing these results as corollaries instead of Theorems 4.3 and 4.4 might be helpful. * In line 217: “operations for which implementation has not yet matured in popular deep learning libraries with GPU and TPU support.” What is the evidence for this claim? Are you suggesting that eigendecomposition can’t be computed efficiently on GPUs using torch and TensorFlow? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Is it possible to repeat the proof of Theorem 4.2 for a setting where the log of the features comes from a non-linear function, potentially a neural network? The main idea is to extend from the quadratic functions of w and x, use a more general parametric form, and optimize a relaxation of the variance. Note that unbiasedness of the kernel estimation can be enforced using constraints similar to those in the proposed work. How would you compare and contrast this with the current approach? If it is not possible, why not? * Why are time versus accuracy results not reported anywhere, given that scalability is one of the main motivations? When will these methods accelerate, and when will they not? Theoretical complexity may not account for constants that stem from implementation. 
* What aspects of the implementation may prohibit practitioners interested in fast self-attention methods from considering these proposed methods? * Respond to weakness 1. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please take a look at the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for all the comments and kind words in strengths! We address weaknesses and questions below. > If I have understood it correctly, more is needed, as for an objective to be reasonable, its behavior beyond the maximizer/minimizer needs to be understood. An intuitive understanding of this objective is that, if we assume that the variances over different pairs of points are tightly concentrated around one point, then the objective is the logarithm of that point. Jensen’s inequality can be used to lower-bound the variance, but not to upper-bound it, since log is a concave function. Note that our experimental evidence (Figures 1 and 2) suggests that the minimization of (8) indeed leads to variance minimization, where we get up to e^10 times variance reduction. > Notations could be improved. > Pages 5-6-7 are not used carefully and do not present the best aspects of this work. We will incorporate the notation and style suggestions by the Reviewer in the final revision. Thank you! > Are you suggesting that eigen decomposition can’t be computed efficiently on GPUs using torch and TensorFlow? We used the Jax codebase, and we experienced errors and unhealthy behavior when using SVD and eigendecomposition in Jax. > Is it possible to repeat the proof of Theorem 4.2 for a setting where the log of features comes from a non-linear function, potentially a neural network? The quadratic nature of the problem is essential for our closed-form solutions, meaning that we don’t see how our proofs can be extended to arbitrary nonlinear functions. Perhaps approximate solutions are feasible; we leave that to future work since it is outside the scope of this paper. > Why are time versus accuracy results not reported anywhere, assuming scalability is one of the main motivations? When will these methods accelerate, and when will they not? Our methods’ complexity grows linearly with the sequence length, as opposed to the quadratic growth of standard self-attention. 
Hence, we are guaranteed to get improvements for long sequences (of order 1000 and above). We emphasize that, apart from its empirical contributions, our paper is mainly theoretical and provides a significant extension of random features for the Gaussian kernel with nontrivial closed-form solutions (Theorems 4.2 and 4.3). We ask the Reviewer to take that into account. > What aspects of the implementation may prohibit practitioners interested in the fast self-attention method from considering these proposed methods? For smaller sequence lengths of order d (d is usually 64 and is the dimension of the attention head), our methods won’t give efficiency improvements over standard self-attention, since O(LMd) will be close to O(L^2 d). --- Rebuttal Comment 1.1: Title: Thanks for your response. Comment: Thank you for your detailed response. In light of other reviews and rebuttal responses, I would like to reiterate that this work makes exciting contributions to theory and practice. However, I still believe that the theoretical claims lack rigorous justification, and the empirical evidence does not support the practice-relevant results, despite the impressive asymptotic guarantees. It is interesting to note that the rationale for the appropriateness of the objective comes from experiments, while the overall validity of the proposed work relies on theoretical guarantees. One possible way to improve the results would be to conduct simple experiments on transformer inference and demonstrate that the results are crucial and could help practitioners. Thanks.
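The O(LMd) vs O(L^2 d) distinction discussed in this exchange can be made concrete with a generic random-feature attention sketch (FAVOR+-style positive features as a stand-in for the paper's DERF construction; all names and values below are illustrative assumptions, not the paper's code): once queries and keys pass through a feature map phi, the matrix product can be associated the cheap way, so the L x L attention matrix is never formed.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Exact softmax attention: forms the L x L matrix, O(L^2 d) cost."""
    A = np.exp(Q @ K.T)
    return (A / A.sum(axis=1, keepdims=True)) @ V

def rf_attention(Q, K, V, phi):
    """Random-feature attention: O(L M d), linear in sequence length L.
    The L x L matrix is never materialized."""
    Qp, Kp = phi(Q), phi(K)      # L x M feature maps
    num = Qp @ (Kp.T @ V)        # associate the product via an M x d block
    den = Qp @ Kp.sum(axis=0)    # per-query normalizer
    return num / den[:, None]

# Toy positive features: E[exp(w.q - |q|^2/2) * exp(w.k - |k|^2/2)] = exp(q.k)
# for w ~ N(0, I), so this is an unbiased Monte Carlo estimate of softmax.
rng = np.random.default_rng(0)
d, M, L = 8, 256, 16
W = rng.normal(size=(d, M))
phi = lambda X: np.exp(X @ W - np.sum(X**2, axis=1, keepdims=True) / 2) / np.sqrt(M)

Q, K, V = (0.2 * rng.normal(size=(L, d)) for _ in range(3))
exact = softmax_attention(Q, K, V)
approx = rf_attention(Q, K, V, phi)
print(np.max(np.abs(exact - approx)))  # small approximation error
```

For L on the order of the rebuttal's "1000 and above" with d = 64 and M = 128, the O(LMd) path wins; for L close to d, as the rebuttal notes, there is no gain.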